Journal articles on the topic "Constraints annotation"

To see the other types of publications on this topic, follow the link: Constraints annotation.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Constraints annotation".

Next to every work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when these details are present in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Sampson, Geoffrey, and Anna Babarczy. "Definitional and human constraints on structural annotation of English." Natural Language Engineering 14, no. 4 (October 2008): 471–94. http://dx.doi.org/10.1017/s1351324908004695.

Abstract:
The limits on predictability and refinement of English structural annotation are examined by comparing independent annotations, by experienced analysts using the same detailed published guidelines, of a common sample of written texts. Three conclusions emerge. First, while it is not easy to define watertight boundaries between the categories of a comprehensive structural annotation scheme, limits on inter-annotator agreement are in practice set more by the difficulty of conforming to a well-defined scheme than by the difficulty of making a scheme well defined. Secondly, although usage is often structurally ambiguous, commonly the alternative analyses are logical distinctions without a practical difference – which raises questions about the role of grammar in human linguistic behaviour. Finally, one specific area of annotation is strikingly more problematic than any other area examined, though this area (classifying the functions of clause-constituents) seems a particularly significant one for human language use. These findings should be of interest both to computational linguists and to students of language as an aspect of human cognition.
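Inter-annotator agreement of the kind examined in this study is commonly quantified with chance-corrected statistics such as Cohen's kappa. A minimal sketch; the annotation labels below are invented for illustration, not taken from the paper:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    # Expected agreement if both annotators labelled at random,
    # each with their own marginal category frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two annotators tagging six tokens with phrase categories
a = ["NP", "NP", "VP", "PP", "NP", "VP"]
b = ["NP", "VP", "VP", "PP", "NP", "VP"]
print(round(cohens_kappa(a, b), 3))  # → 0.739
```

Kappa near 1 indicates agreement well above chance; the middling value here reflects the single disagreement out of six tokens.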
2

Anderson, Matthew, Salman Sadiq, Muzammil Nahaboo Solim, Hannah Barker, David H. Steel, Maged Habib, and Boguslaw Obara. "Biomedical Data Annotation: An OCT Imaging Case Study." Journal of Ophthalmology 2023 (August 22, 2023): 1–9. http://dx.doi.org/10.1155/2023/5747010.

Abstract:
In ophthalmology, optical coherence tomography (OCT) is a widely used imaging modality, allowing visualisation of the structures of the eye with objective and quantitative cross-sectional three-dimensional (3D) volumetric scans. Due to the quantity of data generated from OCT scans and the time taken for an ophthalmologist to inspect for various disease pathology features, automated image analysis in the form of deep neural networks has seen success for the classification and segmentation of OCT layers and quantification of features. However, existing high-performance deep learning approaches rely on huge training datasets with high-quality annotations, which are challenging to obtain in many clinical applications. The collection of annotations from less experienced clinicians has the potential to alleviate time constraints from more senior clinicians, allowing faster data collection of medical image annotations; however, with less experience, there is the possibility of reduced annotation quality. In this study, we evaluate the quality of diabetic macular edema (DME) intraretinal fluid (IRF) biomarker image annotations on OCT B-scans from five clinicians with a range of experience. We also assess the effectiveness of annotating across multiple sessions following a training session led by an expert clinician. Our investigation shows a notable variance in annotation performance, with a correlation that depends on the clinician’s experience with OCT image interpretation of DME, and that having multiple annotation sessions has a limited effect on the annotation quality.
3

Lin, Jia-Wen, Feng Lu, Tai-Chen Lai, Jing Zou, Lin-Ling Guo, Zhi-Ming Lin, and Li Li. "Meibomian glands segmentation in infrared images with limited annotation." International Journal of Ophthalmology 17, no. 3 (March 18, 2024): 401–7. http://dx.doi.org/10.18240/ijo.2024.03.01.

Abstract:
AIM: To investigate a pioneering framework for the segmentation of meibomian glands (MGs), using limited annotations to reduce the workload on ophthalmologists and enhance the efficiency of clinical diagnosis. METHODS: In total, 203 infrared meibomian images from 138 patients with dry eye disease, accompanied by corresponding annotations, were gathered for the study. A rectified scribble-supervised gland segmentation (RSSGS) model, incorporating temporal ensemble prediction, uncertainty estimation, and a transformation equivariance constraint, was introduced to address constraints imposed by limited supervision information inherent in scribble annotations. The viability and efficacy of the proposed model were assessed based on accuracy, intersection over union (IoU), and dice coefficient. RESULTS: Using manual labels as the gold standard, RSSGS demonstrated outcomes with an accuracy of 93.54%, a dice coefficient of 78.02%, and an IoU of 64.18%. Notably, these performance metrics exceed the current weakly supervised state-of-the-art methods by 0.76%, 2.06%, and 2.69%, respectively. Furthermore, despite achieving a substantial 80% reduction in annotation costs, it lags behind fully annotated methods by only 0.72%, 1.51%, and 2.04%. CONCLUSION: An innovative automatic segmentation model is developed for MGs in infrared eyelid images, using scribble annotation for training. This model maintains an exceptionally high level of segmentation accuracy while substantially reducing training costs. It holds substantial utility for calculating clinical parameters, thereby greatly enhancing the diagnostic efficiency of ophthalmologists in evaluating meibomian gland dysfunction.
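The three metrics reported here (accuracy, Dice coefficient, IoU) can all be computed directly from binary segmentation masks. A minimal NumPy sketch with toy masks, not data from the paper:

```python
import numpy as np

def segmentation_metrics(pred, gold):
    """Accuracy, IoU and Dice for binary masks (0/1 numpy arrays)."""
    pred, gold = pred.astype(bool), gold.astype(bool)
    tp = np.logical_and(pred, gold).sum()       # pixels both masks mark
    union = np.logical_or(pred, gold).sum()     # pixels either mask marks
    acc = (pred == gold).mean()
    iou = tp / union if union else 1.0
    total = pred.sum() + gold.sum()
    dice = 2 * tp / total if total else 1.0
    return acc, iou, dice

pred = np.array([[1, 1, 0], [0, 1, 0]])
gold = np.array([[1, 0, 0], [0, 1, 1]])
acc, iou, dice = segmentation_metrics(pred, gold)
print(round(acc, 3), round(iou, 3), round(dice, 3))  # → 0.667 0.5 0.667
```

Note that Dice is always at least IoU, which is why the paper's Dice figure (78.02%) exceeds its IoU (64.18%) on the same predictions.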
4

Grác, Marek, Markéta Masopustová, and Marie Valíčková. "Affordable Annotation of the Mobile App Reviews." Journal of Linguistics/Jazykovedný casopis 70, no. 2 (December 1, 2019): 491–97. http://dx.doi.org/10.2478/jazcas-2019-0077.

Abstract:
This paper focuses on the use-case study of the annotation of the mobile app reviews from Google Play and Apple Store. These annotations of sentiment polarity were created for later use in the automatic processing based on machine learning. This should solve some of the problems encountered in the previous analyses of the Czech language where data assumptions play a greater role than annotation itself (due to the financial constraints). Our proposal shows that some of the assumptions used for English do not apply to Czech and that it is possible to annotate such data without extensive financing.
5

Olivier, Brett G., and Frank T. Bergmann. "The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints." Journal of Integrative Bioinformatics 12, no. 2 (June 1, 2015): 660–90. http://dx.doi.org/10.1515/jib-2015-269.

Abstract:
Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e.g. by extending it to cover dynamic FBC models).
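The FBA ingredients that FBC encodes (a stoichiometric matrix, flux bounds, and a linear objective) reduce to a linear program. A toy sketch with SciPy, using an invented three-reaction network rather than anything from the FBC specification itself:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical network: A_ext -> A -> B -> biomass, fluxes v1..v3.
# Steady state requires S @ v = 0 for internal metabolites A and B.
S = np.array([
    [1, -1, 0],   # metabolite A: produced by v1, consumed by v2
    [0, 1, -1],   # metabolite B: produced by v2, consumed by v3
])
bounds = [(0, 10), (0, 5), (0, 10)]  # per-reaction flux bounds
c = np.array([0, 0, -1])             # maximise v3 (linprog minimises)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal flux distribution, capped by v2's upper bound
```

The optimum is pinned by the tightest bound along the pathway (v2 ≤ 5), which is exactly the kind of constraint information FBC makes exchangeable between tools.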
6

Luo, Yuan, and Peter Szolovits. "Efficient Queries of Stand-off Annotations for Natural Language Processing on Electronic Medical Records." Biomedical Informatics Insights 8 (January 2016): BII.S38916. http://dx.doi.org/10.4137/bii.s38916.

Abstract:
In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing on narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as the interval query problem, for which optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen's interval algebra. We then study two closely related state-of-the-art interval query algorithms and propose query reformulations and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen's relations in logarithmic time, attaining the theoretic lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions.
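A stand-off annotation stabbing query, retrieving every annotation whose interval covers a given text position, can be stated in a few lines. This brute-force O(n) baseline is what the paper's augmented interval trees accelerate to logarithmic time; the clinical snippet and labels are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    start: int   # character offset where the annotation begins
    end: int     # character offset one past the last character
    label: str   # annotation content, stored separately from the text

def stabbing_query(annotations, pos):
    """All annotations whose interval covers text position pos (O(n) scan)."""
    return [a for a in annotations if a.start <= pos < a.end]

text = "Patient denies chest pain."
anns = [
    Annotation(0, 7, "PERSON"),
    Annotation(8, 14, "NEGATION"),
    Annotation(15, 25, "SYMPTOM"),
]
print([a.label for a in stabbing_query(anns, 16)])  # → ['SYMPTOM']
```

Each of Allen's 13 interval relations (before, overlaps, during, and so on) can be phrased as a similar filter on start/end offsets; the paper's contribution is answering them without scanning every annotation.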
7

Ismail, Mohamed Maher Ben, and Ouiem Bchir. "Automatic Image Annotation Based on Semi-Supervised Clustering and Membership-Based Cross Media Relevance Model." International Journal of Pattern Recognition and Artificial Intelligence 26, no. 06 (September 2012): 1255009. http://dx.doi.org/10.1142/s0218001412550099.

Abstract:
In this paper, we propose a system for automatic image annotation that has two main components. The first component consists of a novel semi-supervised possibilistic clustering and feature weighting algorithm based on robust modeling of the generalized Dirichlet (GD) finite mixture. This algorithm is used to group image regions into prototypical region clusters that summarize the training data and can be used as the basis of annotating new test images. The constraints consist of pairs of image regions that should not be included in the same cluster. These constraints are deduced from the irrelevance of all concepts annotating the training images to help in guiding the clustering process. The second component of our system consists of a probabilistic model that relies on the possibilistic membership degrees, generated by the clustering algorithm, to annotate unlabeled images. The proposed system was implemented and tested on a data set that includes thousands of images using four-fold cross validation.
8

Babarczy, Anna, John Carroll, and Geoffrey Sampson. "Definitional, personal, and mechanical constraints on part of speech annotation performance." Natural Language Engineering 12, no. 1 (December 6, 2005): 77–90. http://dx.doi.org/10.1017/s1351324905003803.

Abstract:
For one aspect of grammatical annotation, part-of-speech tagging, we investigate experimentally whether the ceiling on accuracy stems from limits to the precision of tag definition or limits to analysts' ability to apply precise definitions, and we examine how analysts' performance is affected by alternative types of semi-automatic support. We find that, even for analysts very well-versed in a part-of-speech tagging scheme, human ability to conform to the scheme is a more serious constraint than precision of scheme definition. We also find that although semi-automatic techniques can greatly increase speed relative to manual tagging, they have little effect on accuracy, either positively (by suggesting valid candidate tags) or negatively (by lending an appearance of authority to incorrect tag assignments). On the other hand, it emerges that there are large differences between individual analysts with respect to usability of particular types of semi-automatic support.
9

Ge, Hongwei, Zehang Yan, Jing Dou, Zhen Wang, and ZhiQiang Wang. "A Semisupervised Framework for Automatic Image Annotation Based on Graph Embedding and Multiview Nonnegative Matrix Factorization." Mathematical Problems in Engineering 2018 (June 27, 2018): 1–11. http://dx.doi.org/10.1155/2018/5987906.

Abstract:
Automatic image annotation assigns labels to images to support more accurate image retrieval and classification. This paper proposes a semisupervised framework based on graph embedding and multiview nonnegative matrix factorization (GENMF) for automatic image annotation with multilabel images. First, we construct a graph embedding term in the multiview NMF based on the association diagrams between labels for semantic constraints. Then, the multiview features are fused and dimensions are reduced based on the multiview NMF algorithm. Finally, image annotation is achieved by using the new features through a KNN-based approach. Experiments validate that the proposed algorithm achieves competitive performance in terms of accuracy and efficiency.
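The multiview NMF at the core of GENMF builds on the standard factorization V ≈ WH with nonnegative factors. A single-view sketch using the classic Lee–Seung multiplicative updates, not the authors' GENMF algorithm, which adds graph-embedding and multiview fusion terms on top:

```python
import numpy as np

def nmf(V, rank, iters=1000, eps=1e-9):
    """Basic NMF via Lee–Seung multiplicative updates: V ≈ W @ H."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        # Each update keeps factors nonnegative and never increases
        # the Frobenius reconstruction error.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy nonnegative matrix with an exact rank-2 factorization
V = np.array([[1.0, 0.0, 2.0],
              [2.0, 0.0, 4.0],
              [0.0, 3.0, 0.0]])
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H)
print(round(err, 4))
```

In the annotation setting, rows of V would be image feature vectors and the low-rank factors serve as the reduced representation fed to the KNN labeling step.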
10

Nursimulu, Nirvana, Alan M. Moses, and John Parkinson. "Architect: A tool for aiding the reconstruction of high-quality metabolic models through improved enzyme annotation." PLOS Computational Biology 18, no. 9 (September 8, 2022): e1010452. http://dx.doi.org/10.1371/journal.pcbi.1010452.

Abstract:
Constraint-based modeling is a powerful framework for studying cellular metabolism, with applications ranging from predicting growth rates and optimizing production of high value metabolites to identifying enzymes in pathogens that may be targeted for therapeutic interventions. Results from modeling experiments can be affected at least in part by the quality of the metabolic models used. Reconstructing a metabolic network manually can produce a high-quality metabolic model but is a time-consuming task. At the same time, current methods for automating the process typically transfer metabolic function based on sequence similarity, a process known to produce many false positives. We created Architect, a pipeline for automatic metabolic model reconstruction from protein sequences. First, it performs enzyme annotation through an ensemble approach, whereby a likelihood score is computed for an EC prediction based on predictions from existing tools; for this step, our method shows both increased precision and recall compared to individual tools. Next, Architect uses these annotations to construct a high-quality metabolic network which is then gap-filled based on likelihood scores from the ensemble approach. The resulting metabolic model is output in SBML format, suitable for constraint-based analyses. Through comparisons of enzyme annotations and curated metabolic models, we demonstrate improved performance of Architect over other state-of-the-art tools, notably with higher precision and recall on the eukaryote C. elegans and when compared to UniProt annotations in two bacterial species. Code for Architect is available at https://github.com/ParkinsonLab/Architect. For ease-of-use, Architect can be readily set up and utilized using its Docker image, maintained on Docker Hub.
11

Hua, Liujie, Yueyi Luo, Qianqian Qi, and Jun Long. "MedicalCLIP: Anomaly-Detection Domain Generalization with Asymmetric Constraints." Biomolecules 14, no. 5 (May 16, 2024): 590. http://dx.doi.org/10.3390/biom14050590.

Abstract:
Medical data have unique specificity and professionalism, requiring substantial domain expertise for their annotation. Precise data annotation is essential for anomaly-detection tasks, making the training process complex. Domain generalization (DG) is an important approach to enhancing medical image anomaly detection (AD). This paper introduces a novel multimodal anomaly-detection framework called MedicalCLIP. MedicalCLIP utilizes multimodal data in anomaly-detection tasks and establishes irregular constraints within modalities for images and text. The key to MedicalCLIP lies in learning intramodal detailed representations, which are combined with text semantic-guided cross-modal contrastive learning, allowing the model to focus on semantic information while capturing more detailed information, thus achieving more fine-grained anomaly detection. MedicalCLIP relies on GPT prompts to generate text, reducing the demand for professional descriptions of medical data. Text construction for medical data helps to improve the generalization ability of multimodal models for anomaly-detection tasks. Additionally, during the text–image contrast-enhancement process, the model’s ability to select and extract information from image data is improved. Through hierarchical contrastive loss, fine-grained representations are achieved in the image-representation process. MedicalCLIP has been validated on various medical datasets, showing commendable domain generalization performance in medical-data anomaly detection. Improvements were observed in both anomaly classification and segmentation metrics. In the anomaly classification (AC) task involving brain data, the method demonstrated a 2.81 enhancement in performance over the best existing approach.
12

Shepley, Andrew, Greg Falzon, Christopher Lawson, Paul Meek, and Paul Kwan. "U-Infuse: Democratization of Customizable Deep Learning for Object Detection." Sensors 21, no. 8 (April 8, 2021): 2611. http://dx.doi.org/10.3390/s21082611.

Abstract:
Image data is one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is time and resource expensive, particularly in the context of camera trapping. Deep learning models have been used to achieve this task but are often not suited to specific applications due to their inability to generalise to new environments and inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to achieve this are a key barrier to the accessibility of this technology to ecologists. Thus, there is a strong need to democratize access to deep learning technologies by providing an easy-to-use software application allowing non-technical users to train custom object detectors. U-Infuse addresses this issue by providing ecologists with the ability to train customised models using publicly available images and/or their own images without specific technical expertise. Auto-annotation and annotation editing functionalities minimize the constraints of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multiclass and single class training and object detection, allowing ecologists to access deep learning technologies usually only available to computer scientists, on their own device, customised for their application, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily achieve object detection within a user-friendly GUI, generating a species distribution report, and other useful statistics, (ii) custom train deep learning models using publicly available and custom training data, (iii) achieve supervised auto-annotation of images for further training, with the benefit of editing annotations to ensure quality datasets. 
Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and use of transfer learning means domain-specific models can be trained rapidly, and frequently updated without the need for computer science expertise, or data sharing, protecting intellectual property and privacy.
13

Pliakos, Konstantinos, and Constantine Kotropoulos. "Building an Image Annotation and Tourism Recommender System." International Journal on Artificial Intelligence Tools 24, no. 05 (October 2015): 1540021. http://dx.doi.org/10.1142/s0218213015400217.

Abstract:
The interest in image annotation and recommendation has increased due to the ever rising amount of data uploaded to the web. Despite the many efforts undertaken so far, accuracy and efficiency still remain open problems. Here, a complete image annotation and tourism recommender system is proposed. It is based on probabilistic latent semantic analysis (PLSA) and hypergraph ranking, exploiting the visual attributes of the images and the semantic information found in image tags and geo-tags. In particular, semantic image annotation resorts to the PLSA, exploiting the textual information in image tags. It is further complemented by visual annotation based on visual image content classification. Tourist destinations, strongly related to a query image, are recommended using hypergraph ranking enhanced by enforcing group sparsity constraints. Experiments were conducted on a large image dataset of Greek sites collected from Flickr. The experimental results demonstrate the merits of the proposed model. Semantic image annotation by means of the PLSA has achieved an average precision of 92% at 10% recall. The accuracy of content-based image classification is 82.6%. An average precision of 92% is measured at 1% recall for tourism recommendation.
14

Li, Jinyang, Yuval Moskovitch, Julia Stoyanovich, and H. V. Jagadish. "Query Refinement for Diversity Constraint Satisfaction." Proceedings of the VLDB Endowment 17, no. 2 (October 2023): 106–18. http://dx.doi.org/10.14778/3626292.3626295.

Abstract:
Diversity, group representation, and similar needs often apply to query results, which in turn require constraints on the sizes of various subgroups in the result set. Traditional relational queries only specify conditions as part of the query predicate(s), and do not support such restrictions on the output. In this paper, we study the problem of modifying queries to have the result satisfy constraints on the sizes of multiple subgroups in it. This problem, in the worst case, cannot be solved in polynomial time. Yet, with the help of provenance annotation, we are able to develop a query refinement method that works quite efficiently, as we demonstrate through extensive experiments.
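Checking whether a result set satisfies subgroup cardinality constraints of the kind studied here takes only a few lines; the hard part, which the paper addresses via provenance-assisted query refinement, is modifying the query so the check passes. A small sketch with invented data:

```python
def satisfies_constraints(rows, constraints):
    """Check lower-bound cardinality constraints on result subgroups.
    constraints: list of (predicate, minimum_count) pairs."""
    return all(
        sum(1 for r in rows if pred(r)) >= minimum
        for pred, minimum in constraints
    )

# Hypothetical query result and diversity constraints
result = [{"gender": "F", "score": 9},
          {"gender": "M", "score": 8},
          {"gender": "F", "score": 7}]
constraints = [
    (lambda r: r["gender"] == "F", 2),  # at least 2 rows with gender F
    (lambda r: r["gender"] == "M", 1),  # at least 1 row with gender M
]
print(satisfies_constraints(result, constraints))  # → True
```

When the check fails, a refinement method searches for minimally relaxed query predicates whose result passes it, which is where the worst-case intractability arises.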
15

Lister, Allyson L., Matthew Pocock, and Anil Wipat. "Integration of constraints documented in SBML, SBO, and the SBML Manual facilitates validation of biological models." Journal of Integrative Bioinformatics 4, no. 3 (December 1, 2007): 252–63. http://dx.doi.org/10.1515/jib-2007-80.

Abstract:
The creation of quantitative, simulatable, Systems Biology Markup Language (SBML) models that accurately simulate the system under study is a time-intensive manual process that requires careful checking. Currently, the rules and constraints of model creation, curation, and annotation are distributed over at least three separate documents: the SBML schema document (XSD), the Systems Biology Ontology (SBO), and the “Structures and Facilities for Model Definition” document. The latter document contains the richest set of constraints on models, and yet it is not amenable to computational processing. We have developed a Web Ontology Language (OWL) knowledge base that integrates these three structure documents, and that contains a representative sample of the information contained within them. This Model Format OWL (MFO) performs both structural and constraint integration and can be reasoned over and validated. SBML Models are represented as individuals of OWL classes, resulting in a single computationally amenable resource for model checking. Knowledge that was only accessible to humans is now explicitly and directly available for computational approaches. The integration of all structural knowledge for SBML models into a single resource creates a new style of model development and checking.
16

Paulino, Hervé, Ana Almeida Matos, Jan Cederquist, Marco Giunti, João Matos, and António Ravara. "AtomiS: Data-Centric Synchronization Made Practical." Proceedings of the ACM on Programming Languages 7, OOPSLA2 (October 16, 2023): 116–45. http://dx.doi.org/10.1145/3622801.

Abstract:
Data-Centric Synchronization (DCS) shifts the reasoning about concurrency restrictions from control structures to data declaration. It is a high-level declarative approach that abstracts away from the actual concurrency control mechanism(s) in use. Despite its advantages, the practical use of DCS is hindered by the fact that it may require many annotations and/or multiple implementations of the same method to cope with differently qualified parameters. To overcome these limitations, in this paper we present AtomiS, a new DCS approach that requires only qualifying types of parameters and return values in interface definitions, and of fields in class definitions. The latter may also be abstracted away in type parameters, rendering class implementations virtually annotation-free. From this high level specification, a static analysis infers the atomicity constraints that are local to each method, considering valid only the method variants that are consistent with the specification, and performs code generation for all valid variants of each method. The generated code is then the target for automatic injection of concurrency control primitives that are responsible for ensuring the absence of data-races, atomicity-violations and deadlocks. We provide a Java implementation and showcase the applicability of AtomiS in real-life code. For the benchmarks analysed, AtomiS requires fewer annotations than the original number of regions requiring locks, as well as fewer annotations than Atomic Sets (a reference DCS proposal).
17

Vilaplana, Jordi, Francesc Solsona, Ivan Teixido, Anabel Usié, Hiren Karathia, Rui Alves, and Jordi Mateo. "Database Constraints Applied to Metabolic Pathway Reconstruction Tools." Scientific World Journal 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/967294.

Abstract:
Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.
18

Catalano, Chiara E., Franca Giannini, Marina Monti, and Giuliana Ucelli. "A framework for the automatic annotation of car aesthetics." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 21, no. 1 (January 2007): 73–90. http://dx.doi.org/10.1017/s0890060407070151.

Abstract:
The design of a new car is guided by a set of directives indicating the target market, specific engineering, and aesthetic constraints, which may also include the preservation of the company brand identity or the restyling of products already on the market. When creating a new product, designers usually evaluate other existing products to find sources of inspiration or to possibly reuse successful solutions. In the perspective of an optimized styling workflow, great benefit could be derived from the possibility of easily retrieving the related documentation and existing digital models both from internal and external repositories. In fact, the rapid growth of resources on the Web and the widespread adoption of computer-assisted design tools have made available huge amounts of data, the utilization of which could be improved by using more selective retrieval methods. In particular, the retrieval of aesthetic elements may help designers to create digital models conforming to specific styling properties more efficiently. The aim of our research is the definition of a framework that supports (semi)automatic extraction of semantic data from three-dimensional models and other multimedia data to allow car designers to reuse knowledge and design solutions within the styling department. The first objective is then to capture and structure the explicit and implicit elements contributing to the definition of car aesthetics, which can be realistically tackled through computational models and methods. The second step is the definition of a system architecture that is able to transfer such semantic evaluation through the automatic annotation of car models.
19

Aasman, Susan, Liliana Melgar Estrada, Tom Slootweg, and Rob Wegter. "Tales of a Tool Encounter." Audiovisual Data in Digital Humanities 7, no. 14 (December 31, 2018): 73. http://dx.doi.org/10.18146/2213-0969.2018.jethc154.

Abstract:
This article explores the affordances and functionalities of the Dutch CLARIAH research infrastructure – and the integrated video annotation tool – for doing media historical research with digitised audiovisual sources from television archives. The growing importance of digital research infrastructures, archives and tools, has enticed media historians to rethink their research practices more and more in terms of methodological transparency, tool criticism and reflection. Moreover, also questions related to the heuristics and hermeneutics of our scholarly work need to be reconsidered. The article hence sketches the role of digital research infrastructures for the humanities (in the Netherlands), and the use of video annotation in media studies and other research domains. By doing so, the authors reflect on their own specific engagements with the CLARIAH infrastructure and its tools, both as media historians and co-developers. This dual position greatly determines the possibilities and constraints for the various modes of digital scholarship relevant to media history. To exemplify this, two short case studies – based on a pilot project ‘Me and Myself. Tracing First Person in Documentary History in AV-Collections’ – show how the authors deployed video annotation to segment interpretative units of interest, rather than opting for units of analysis common in statistical analysis. The deliberate choice to abandon formal modes of moving image annotation and analysis ensued from a delicate interplay between the desired interpretative research goals, and the integration of tool criticism and reflection in the research design. The authors found that due to the formal and stylistic complexity of documentaries, also alternative, hermeneutic research strategies ought to be supported by digital infrastructures and its tools.
20

Triana-Martinez, Jenniffer Carolina, Julian Gil-González, Jose A. Fernandez-Gallego, Andrés Marino Álvarez-Meza, and Cesar German Castellanos-Dominguez. "Chained Deep Learning Using Generalized Cross-Entropy for Multiple Annotators Classification." Sensors 23, no. 7 (March 28, 2023): 3518. http://dx.doi.org/10.3390/s23073518.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Supervised learning requires the accurate labeling of instances, usually provided by an expert. Crowdsourcing platforms offer a practical and cost-effective alternative for large datasets when individual annotation is impractical. In addition, these platforms gather labels from multiple labelers. Still, traditional multiple-annotator methods must account for the varying levels of expertise and the noise introduced by unreliable outputs, resulting in decreased performance. In addition, they assume a homogeneous behavior of the labelers across the input feature space, and independence constraints are imposed on outputs. We propose a Generalized Cross-Entropy-based framework using Chained Deep Learning (GCECDL) to code each annotator’s non-stationary patterns regarding the input space while preserving the inter-dependencies among experts through a chained deep learning approach. Experimental results devoted to multiple-annotator classification tasks on several well-known datasets demonstrate that our GCECDL can achieve robust predictive properties, outperforming state-of-the-art algorithms by combining the power of deep learning with a noise-robust loss function to deal with noisy labels. Moreover, network self-regularization is achieved by estimating each labeler’s reliability within the chained approach. Lastly, visual inspection and relevance analysis experiments are conducted to reveal the non-stationary coding of our method. In a nutshell, GCECDL weights reliable labelers as a function of each input sample and achieves suitable discrimination performance with preserved interpretability regarding each annotator’s trustworthiness estimation.
21

Happa, Jassim, and Michael Goldsmith. "On properties of cyberattacks and their nuances." PSU Research Review 1, no. 2 (August 14, 2017): 76–90. http://dx.doi.org/10.1108/prr-04-2017-0024.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Purpose Several attack models attempt to describe behaviours of attacks with the intent to understand and combat them better. However, all models are to some degree incomplete. They may lack insight into minor variations in attacks that are observed in the real world (but are not described in the model). This may lead to similar attacks being classified as the same type of attack, or in some cases the same instance of attack. The appropriate solution would be to modify the model or replace it entirely. However, doing so may be undesirable as the model may work well for most cases or time and resource constraints may factor in as well. This paper aims to explore the potential value of adding information about attacks and attackers to existing models. Design/methodology/approach This paper investigates use cases of minor variations in attacks and how it may and may not be appropriate to communicate subtle differences in existing attack models through the use of annotations. In particular, the authors investigate commonalities across a range of existing models and identify where and how annotations may be helpful. Findings The authors propose that nuances (of attack properties) can be appended as annotations to existing attack models. Using annotations appropriately should enable analysts and researchers to express subtle but important variations in attacks that may not fit the model currently being used. Research limitations/implications This work only demonstrated a few simple, generic examples. In the future, the authors intend to investigate how this annotation approach can be extended further. Particularly, they intend to explore how annotations can be created computationally; the authors wish to obtain feedback from security analysts through interviews, identify where potential biases may arise and identify other real-world applications.
Originality/value The value of this paper is that the authors demonstrate how annotations may help analysts communicate and ask better questions during identification of unknown aspects of attacks faster, e.g. as a means of storing mental notes in a structured manner, especially while facing zero-day attacks when information is incomplete.
22

Seeker, Wolfgang, and Jonas Kuhn. "Morphological and Syntactic Case in Statistical Dependency Parsing." Computational Linguistics 39, no. 1 (March 2013): 23–55. http://dx.doi.org/10.1162/coli_a_00134.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Most morphologically rich languages with free word order use case systems to mark the grammatical function of nominal elements, especially for the core argument functions of a verb. The standard pipeline approach in syntactic dependency parsing assumes a complete disambiguation of morphological (case) information prior to automatic syntactic analysis. Parsing experiments on Czech, German, and Hungarian show that this approach is susceptible to propagating morphological annotation errors when parsing languages displaying syncretism in their morphological case paradigms. We develop a different architecture where we use case as a possibly underspecified filtering device restricting the options for syntactic analysis. Carefully designed morpho-syntactic constraints can delimit the search space of a statistical dependency parser and exclude solutions that would violate the restrictions overtly marked in the morphology of the words in a given sentence. The constrained system outperforms a state-of-the-art data-driven pipeline architecture, as we show experimentally, and, in addition, the parser output comes with guarantees about local and global morpho-syntactic wellformedness, which can be useful for downstream applications.
23

Huang, Huimin, Yawen Huang, Shiao Xie, Lanfen Lin, Ruofeng Tong, Yen-Wei Chen, Yuexiang Li, and Yefeng Zheng. "Combinatorial CNN-Transformer Learning with Manifold Constraints for Semi-supervised Medical Image Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2330–38. http://dx.doi.org/10.1609/aaai.v38i3.28007.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Semi-supervised learning (SSL), as one of the dominant methods, aims at leveraging unlabeled data to deal with the annotation dilemma of supervised learning, which has attracted much attention in medical image segmentation. Most of the existing approaches leverage a unitary network by convolutional neural networks (CNNs) with compulsory consistency of the predictions through small perturbations applied to inputs or models. The penalties of such a learning paradigm are that (1) CNN-based models place severe limitations on global learning; (2) rich and diverse class-level distributions are inhibited. In this paper, we present a novel CNN-Transformer learning framework in the manifold space for semi-supervised medical image segmentation. First, at the intra-student level, we propose a novel class-wise consistency loss to facilitate the learning of both discriminative and compact target feature representations. Then, at the inter-student level, we align the CNN and Transformer features using a prototype-based optimal transport method. Extensive experiments show that our method outperforms previous state-of-the-art methods on three public medical image segmentation benchmarks.
24

Abrahams, Liam, and Laurence D. Hurst. "A Depletion of Stop Codons in lincRNA is Owing to Transfer of Selective Constraint from Coding Sequences." Molecular Biology and Evolution 37, no. 4 (December 16, 2019): 1148–64. http://dx.doi.org/10.1093/molbev/msz299.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Although the constraints on a gene’s sequence are often assumed to reflect the functioning of that gene, here we propose transfer selection, a constraint operating on one class of genes transferred to another, mediated by shared binding factors. We show that such transfer can explain an otherwise paradoxical depletion of stop codons in long intergenic noncoding RNAs (lincRNAs). Serine/arginine-rich proteins direct the splicing machinery by binding exonic splice enhancers (ESEs) in immature mRNA. As coding exons cannot contain stop codons in one reading frame, stop codons should be rare within ESEs. We confirm that the stop codon density (SCD) in ESE motifs is low, even accounting for nucleotide biases. Given that serine/arginine-rich proteins binding ESEs also facilitate lincRNA splicing, a low SCD could transfer to lincRNAs. As predicted, multiexon lincRNA exons are depleted in stop codons, a result not explained by open reading frame (ORF) contamination. Consistent with transfer selection, stop codon depletion in lincRNAs is most acute in exonic regions with the highest ESE density, disappears when ESEs are masked, is consistent with stop codon usage skews in ESEs, and is diminished in both single-exon lincRNAs and introns. Owing to low SCD, the maximum lengths of pseudo-ORFs frequently exceed null expectations. This has implications for ORF annotation and the evolution of de novo protein-coding genes from lincRNAs. We conclude that not all constraints operating on genes need be explained by the functioning of the gene but may instead be transferred owing to shared binding factors.
25

Zheng, Minghang, Sizhe Li, Qingchao Chen, Yuxin Peng, and Yang Liu. "Phrase-Level Temporal Relationship Mining for Temporal Sentence Localization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3669–77. http://dx.doi.org/10.1609/aaai.v37i3.25478.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this paper, we address the problem of video temporal sentence localization, which aims to localize a target moment from videos according to a given language query. We observe that existing models suffer from a sheer performance drop when dealing with simple phrases contained in the sentence. It reveals the limitation that existing models only capture the annotation bias of the datasets but lack sufficient understanding of the semantic phrases in the query. To address this problem, we propose a phrase-level Temporal Relationship Mining (TRM) framework employing the temporal relationship relevant to the phrase and the whole sentence to have a better understanding of each semantic entity in the sentence. Specifically, we use phrase-level predictions to refine the sentence-level prediction, and use Multiple Instance Learning to improve the quality of phrase-level predictions. We also exploit the consistency and exclusiveness constraints of phrase-level and sentence-level predictions to regularize the training process, thus alleviating the ambiguity of each phrase prediction. The proposed approach sheds light on how machines can understand detailed phrases in a sentence and their compositions in their generality rather than learning the annotation biases. Experiments on the ActivityNet Captions and Charades-STA datasets show the effectiveness of our method on both phrase and sentence temporal localization and enable better model interpretability and generalization when dealing with unseen compositions of seen concepts. Code can be found at https://github.com/minghangz/TRM.
26

Hay, Johnny, Eilidh Troup, Ivan Clark, Julian Pietsch, Tomasz Zieliński, and Andrew Millar. "PyOmeroUpload: A Python toolkit for uploading images and metadata to OMERO." Wellcome Open Research 5 (May 18, 2020): 96. http://dx.doi.org/10.12688/wellcomeopenres.15853.1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Tools and software that automate repetitive tasks, such as metadata extraction and deposition to data repositories, are essential for researchers to share Open Data routinely. For research that generates microscopy image data, OMERO is an ideal platform for storage, annotation and publication according to open research principles. We present PyOmeroUpload, a Python toolkit for automatically extracting metadata from experiment logs and text files, processing images and uploading these payloads to OMERO servers to create fully annotated, multidimensional datasets. The toolkit comes packaged in portable, platform-independent Docker images that enable users to deploy and run the utilities easily, regardless of Operating System constraints. A selection of use cases is provided, illustrating the primary capabilities and flexibility offered with the toolkit, along with a discussion of limitations and potential future extensions. PyOmeroUpload is available from: https://github.com/SynthSys/pyOmeroUpload.
27

Hay, Johnny, Eilidh Troup, Ivan Clark, Julian Pietsch, Tomasz Zieliński, and Andrew Millar. "PyOmeroUpload: A Python toolkit for uploading images and metadata to OMERO." Wellcome Open Research 5 (August 26, 2020): 96. http://dx.doi.org/10.12688/wellcomeopenres.15853.2.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Tools and software that automate repetitive tasks, such as metadata extraction and deposition to data repositories, are essential for researchers to share Open Data routinely. For research that generates microscopy image data, OMERO is an ideal platform for storage, annotation and publication according to open research principles. We present PyOmeroUpload, a Python toolkit for automatically extracting metadata from experiment logs and text files, processing images and uploading these payloads to OMERO servers to create fully annotated, multidimensional datasets. The toolkit comes packaged in portable, platform-independent Docker images that enable users to deploy and run the utilities easily, regardless of Operating System constraints. A selection of use cases is provided, illustrating the primary capabilities and flexibility offered with the toolkit, along with a discussion of limitations and potential future extensions. PyOmeroUpload is available from: https://github.com/SynthSys/pyOmeroUpload.
28

Lee, Young-Seol, and Sung-Bae Cho. "A Mobile Picture Tagging System Using Tree-Structured Layered Bayesian Networks." Mobile Information Systems 9, no. 3 (2013): 209–24. http://dx.doi.org/10.1155/2013/794726.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Advances in digital media technology have led to a rapid increase in multimedia content. Tagging is one of the most effective methods to manage a great volume of multimedia content. However, manual tagging has limitations such as human fatigue and subjective and ambiguous keywords. In this paper, we present an automatic tagging method to generate semantic annotation on a mobile phone. In order to overcome the constraints of the mobile environment, the method uses two layered Bayesian networks. In contrast to existing techniques, this approach attempts to design probabilistic models with fixed tree structures and intermediate nodes. To evaluate the performance of this method, an experiment is conducted with data collected over a month. The result shows the efficiency and effectiveness of our proposed method. Furthermore, a simple graphic user interface is developed to visualize and evaluate recognized activities and probabilities.
29

KIM, SUN, JEONG-HYEON CHOI, AMIT SAPLE, and JIONG YANG. "A HYBRID GENE TEAM MODEL AND ITS APPLICATION TO GENOME ANALYSIS." Journal of Bioinformatics and Computational Biology 04, no. 02 (April 2006): 171–96. http://dx.doi.org/10.1142/s0219720006001850.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
It is well-known that functionally related genes occur in a physically clustered form, especially operons in bacteria. By leveraging this fact, there has recently been an interesting problem formulation known as the gene team model, which searches for a set of genes that co-occur in a pair of closely related genomes. However, many gene teams, even experimentally verified operons, frequently scatter within other genomes. Thus, the gene team model should be refined to reflect this observation. In this paper, we generalized the gene team model, which looks for gene clusters in a physically clustered form, to multiple genome cases with relaxed constraints. We propose a novel hybrid pattern model that combines the set and the sequential pattern models. Our model searches for gene clusters with and/or without physical proximity constraint. This model is implemented and tested with 97 genomes (120 replicons). The result was analyzed to show the usefulness of our model. We also compared the result from our hybrid model to those from the traditional gene team model. We also show that predicted gene teams can be used for various genome analyses: operon prediction, phylogenetic analysis of organisms, contextual sequence analysis and genome annotation. Our program is fast enough to provide a service on the web at . Users can select any combination of 97 genomes to predict gene teams.
30

Koh, HyunSeung, and Susan C. Herring. "Historical insights for ebook design." Library Hi Tech 34, no. 4 (November 21, 2016): 764–86. http://dx.doi.org/10.1108/lht-06-2016-0075.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Purpose The purpose of this paper is to provide ebook designers and researchers with design insights by promoting historical knowledge about books and reading as sources of ideas to implement in current and future ebooks. Design/methodology/approach The authors review historical features of books and practices of reading that have been implemented, weakened, or lost over time, referring to historical texts and resources, and relate them to ebook viewers (software) and readers (hardware) that are currently on the market. In particular, the review focuses on the physical form of the book and the practices of reading, annotation, and bookshelving. Findings While some older forms and reading practices have been implemented in ebook devices, others have been forgotten over time, due in part to physical constraints that are no longer relevant. The authors suggest that features that constrained print books and print reading in the past might actually improve the design of ebooks and e-reading in the present. Research limitations/implications This review is necessarily based on a limited set of existing historical sources. Practical implications Translating insights into novel tangible designs is always a challenging task. Ebook designers can gain insights from this paper that can be applied in a variety of design contexts. Originality/value No previous work on ebook design has foregrounded historical aspects of books and reading as viable sources of ideas to implement in ebooks.
31

Ma, Zhonggang, Siteng Zhang, He Jia, Kuan Liu, Xiaofei Xie, and Yuanchuang Qu. "A Knowledge Graph-Based Approach to Recommending Low-Carbon Construction Schemes of Bridges." Buildings 13, no. 5 (May 22, 2023): 1352. http://dx.doi.org/10.3390/buildings13051352.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the development of the engineering construction industry, knowledge has become an important strategic resource for construction enterprises, and knowledge graphs are an effective method for knowledge management. In the context of peak carbon dioxide emissions and carbon neutrality, low carbon emission has become one of the important indicators for the selection of construction schemes, and knowledge management research related to low carbon construction must be performed. This study investigated a method of incorporating low-carbon construction knowledge into the bridge construction scheme knowledge graph construction process and proposed a bridge construction scheme recommendation method that considers carbon emission constraints based on the knowledge graph and similarity calculation. First, to solve the problem of poor fitting of model parameters caused by the limited annotated corpus in the bridge construction field, an improved entity recognition model was proposed for low-resource conditions with limited data. A knowledge graph of low carbon construction schemes for bridges was constructed using a small sample dataset. Then, based on the construction of this knowledge graph, the entities and relationships related to construction schemes were obtained, and the comprehensive similarity of bridge construction schemes was calculated by combining the similarity calculation principle to realize the recommendation of bridge construction schemes under different constraints. Experiments on the constructed bridge low carbon construction scheme dataset showed that the proposed model achieved good accuracy on named entity recognition tasks. The comparative analysis with the construction scheme of the project verified the validity of the proposed construction scheme considering carbon emission constraints, which can provide support for decisions on low-carbon bridge construction schemes.
32

Lara, Juan De, Esther Guerra, and Jörg Kienzle. "Facet-oriented Modelling." ACM Transactions on Software Engineering and Methodology 30, no. 3 (May 2021): 1–59. http://dx.doi.org/10.1145/3428076.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Models are the central assets in model-driven engineering (MDE), as they are actively used in all phases of software development. Models are built using metamodel-based languages, and so objects in models are typed by a metamodel class. This typing is static, established at creation time, and cannot be changed later. Therefore, objects in MDE are closed and fixed with respect to the class they conform to, the fields they have, and the well-formedness constraints they must comply with. This hampers many MDE activities, like the reuse of model-related artefacts such as transformations, the opportunistic or dynamic combination of metamodels, or the dynamic reconfiguration of models. To alleviate this rigidity, we propose making model objects open so that they can acquire or drop so-called facets . These contribute with a type, fields and constraints to the objects holding them. Facets are defined by regular metamodels, hence being a lightweight extension of standard metamodelling. Facet metamodels may declare usage interfaces , as well as laws that govern the assignment of facets to objects (or classes). This article describes our proposal, reporting on a theory, analysis techniques, and an implementation. The benefits of the approach are validated on the basis of five case studies dealing with annotation models, transformation reuse, multi-view modelling, multi-level modelling, and language product lines.
33

Li, Dapeng, and Emmanuel Gaquerel. "Next-Generation Mass Spectrometry Metabolomics Revives the Functional Analysis of Plant Metabolic Diversity." Annual Review of Plant Biology 72, no. 1 (June 17, 2021): 867–91. http://dx.doi.org/10.1146/annurev-arplant-071720-114836.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The remarkable diversity of specialized metabolites produced by plants has inspired several decades of research and nucleated a long list of theories to guide empirical ecological studies. However, analytical constraints and the lack of untargeted processing workflows have long precluded comprehensive metabolite profiling and, consequently, the collection of the critical currencies to test theory predictions for the ecological functions of plant metabolic diversity. Developments in mass spectrometry (MS) metabolomics have revolutionized the large-scale inventory and annotation of chemicals from biospecimens. Hence, the next generation of MS metabolomics propelled by new bioinformatics developments provides a long-awaited framework to revisit metabolism-centered ecological questions, much like the advances in next-generation sequencing of the last two decades impacted all research horizons in genomics. Here, we review advances in plant (computational) metabolomics to foster hypothesis formulation from complex metabolome data. Additionally, we reflect on how next-generation metabolomics could reinvigorate the testing of long-standing theories on plant metabolic diversity.
34

Lupo, Cosimo, Natanael Spisak, Aleksandra M. Walczak, and Thierry Mora. "Learning the statistics and landscape of somatic mutation-induced insertions and deletions in antibodies." PLOS Computational Biology 18, no. 6 (June 2, 2022): e1010167. http://dx.doi.org/10.1371/journal.pcbi.1010167.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Affinity maturation is crucial for improving the binding affinity of antibodies to antigens. This process is mainly driven by point substitutions caused by somatic hypermutations of the immunoglobulin gene. It also includes deletions and insertions of genomic material known as indels. While the landscape of point substitutions has been extensively studied, a detailed statistical description of indels is still lacking. Here we present a probabilistic inference tool to learn the statistics of indels from repertoire sequencing data, which overcomes the pitfalls and biases of standard annotation methods. The model includes antibody-specific maturation ages to account for variable mutational loads in the repertoire. After validation on synthetic data, we applied our tool to a large dataset of human immunoglobulin heavy chains. The inferred model allows us to identify universal statistical features of indels in heavy chains. We report distinct insertion and deletion hotspots, and show that the distribution of lengths of indels follows a geometric distribution, which puts constraints on future mechanistic models of the hypermutation process.
35

Marasigan, Rufo, Jr, Mon Arjay Malbog, Enrique Festijo, and Drandreb Earl Juanico. "CocoSense: Coconut Tree Detection and Localization using YOLOv7." E3S Web of Conferences 488 (2024): 03015. http://dx.doi.org/10.1051/e3sconf/202448803015.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Coconut farming in the Philippines faces persistent challenges in efficient tree monitoring, which directly affects its productivity and sustainability. Traditional methodologies such as field surveys, although prevalent, are labor-intensive and prone to data inaccuracy. This study sought to leverage the capabilities of the YOLOv7 object detection algorithm to enhance coconut tree monitoring. Our objectives centered on (1) precise detection of coconut trees using orthophotos, (2) their enumeration, and (3) generating accurate coordinates for each tree. The DJI Phantom 4 RTK unmanned aerial vehicle (UAV) was used to capture high-resolution images of the study area in Tiaong, Quezon. Post-acquisition, these images underwent processing and annotation to generate datasets for training the YOLOv7 model. The algorithm's output shows a remarkable 98% accuracy rate in tree detection, with an average localization accuracy of 86.30%. The results demonstrate the potential of YOLOv7 in accurately detecting and localizing coconut trees under diverse environmental conditions.
36

Ribone, Andrés I., Mónica Fass, Sergio Gonzalez, Veronica Lia, Norma Paniego, and Máximo Rivarola. "Co-Expression Networks in Sunflower: Harnessing the Power of Multi-Study Transcriptomic Public Data to Identify and Categorize Candidate Genes for Fungal Resistance." Plants 12, no. 15 (July 25, 2023): 2767. http://dx.doi.org/10.3390/plants12152767.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Fungal plant diseases are a major threat to food security worldwide. Current efforts to identify and list loci involved in different biological processes are more complicated than originally thought, even when complete genome assemblies are available. Despite numerous experimental and computational efforts to characterize gene functions in plants, about 40% of protein-coding genes in the model plant Arabidopsis thaliana L. are still not categorized in the Gene Ontology (GO) Biological Process (BP) annotation. In non-model organisms, such as sunflower (Helianthus annuus L.), the number of BP term annotations is far lower, at ~22%. In the current study, we performed gene co-expression network analysis using eight terabytes of public transcriptome datasets and expression-based functional prediction to categorize and identify loci involved in the response to fungal pathogens. We were able to construct a reference gene network of healthy green tissue (GreenGCN) and a gene network of healthy and stressed root tissues (RootGCN). Both networks achieved robust, high-quality scores on the metrics of guilt-by-association and selective constraints versus gene connectivity. We were able to identify eight modules enriched in defense functions, of which two out of the three modules in the RootGCN were also conserved in the GreenGCN, suggesting similar defense-related expression patterns. We identified 16 WRKY genes involved in defense-related functions and 65 previously uncharacterized loci now linked to defense response. In addition, we identified and classified 122 loci previously identified within QTLs or near candidate loci reported in GWAS studies of disease resistance in sunflower linked to defense response. All in all, we have implemented a valuable strategy to better describe genes within specific biological processes.
37

Bondorf, Anders, and Jesper Jørgensen. "Efficient analyses for realistic off-line partial evaluation." Journal of Functional Programming 3, no. 3 (July 1993): 315–46. http://dx.doi.org/10.1017/s0956796800000769.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Based on Henglein's efficient binding-time analysis for the lambda calculus (with constants and ‘fix’) (Henglein, 1991), we develop three efficient analyses for use in the preprocessing phase of Similix, a self-applicable partial evaluator for a higher-order subset of Scheme. The analyses developed in this paper are almost-linear in the size of the analysed program. (1) A flow analysis determines possible value flow between lambda-abstractions and function applications and between constructor applications and selector/predicate applications. The flow analysis is not particularly biased towards partial evaluation; the analysis corresponds to the closure analysis of Bondorf (1991b). (2) A (monovariant) binding-time analysis distinguishes static from dynamic values; the analysis treats both higher-order functions and partially static data structures. (3) A new is-used analysis, not present in Bondorf (1991b), finds a non-minimal binding-time annotation which is ‘safe’ in a certain way: a first-order value may only become static if its result is ‘needed’ during specialization; this ‘poor man's generalization’ (Holst, 1988) increases termination of specialization. The three analyses are performed sequentially in the above mentioned order since each depends on results from the previous analyses. The inputs to all three analyses are constraint sets generated from the program being analysed. The constraints are solved efficiently by a normalizing union/find-based algorithm in almost-linear time. Whenever possible, the constraint sets are partitioned into subsets which are solved in a specific order; this simplifies constraint normalization. The framework elegantly allows expressing both forwards and backwards components of analyses. In particular, the new is-used analysis is of backwards nature. The three constraint normalization algorithms are proved correct (soundness, completeness, termination, existence of a best solution).
The analyses have been implemented and integrated in the Similix system. The new analyses are indeed much more efficient than those of Bondorf (1991b); the almost-linear complexity of the new analyses is confirmed by the implementation.
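The union/find-based constraint solving described above can be illustrated with a toy binding-time solver. The constraint forms below ("eq" linking two binding times, "dyn" forcing one to be dynamic) and the flat static/dynamic classification are simplifying assumptions for illustration, not Similix's actual representation (Similix itself is written in Scheme).

```python
# Toy binding-time constraint solver using union-find with path compression.
# Hypothetical constraint forms: ("eq", a, b) equates two binding times;
# ("dyn", a) seeds a as dynamic. Dynamism propagates through merged classes.

class BindingTimes:
    def __init__(self):
        self.parent = {}
        self.dynamic = set()

    def find(self, x):
        self.parent.setdefault(x, x)
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:          # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb
            if ra in self.dynamic:             # dynamism survives merging
                self.dynamic.add(rb)

    def solve(self, constraints):
        for c in constraints:
            if c[0] == "eq":
                self.union(c[1], c[2])
            elif c[0] == "dyn":
                self.dynamic.add(self.find(c[1]))
        return {v: ("D" if self.find(v) in self.dynamic else "S")
                for v in list(self.parent)}
```

Each constraint is processed in near-constant amortized time, which is where the almost-linear overall bound of such analyses comes from.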
38

Fóthi, Áron, Adrián Szlatincsán, and Ellák Somfai. "Cluster2Former: Semisupervised Clustering Transformers for Video Instance Segmentation." Sensors 24, no. 3 (February 3, 2024): 997. http://dx.doi.org/10.3390/s24030997.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A novel approach for video instance segmentation is presented using semisupervised learning. Our Cluster2Former model leverages scribble-based annotations for training, significantly reducing the need for comprehensive pixel-level masks. We augment a video instance segmenter, for example, the Mask2Former architecture, with a similarity-based constraint loss to handle partial annotations efficiently. We demonstrate that despite using lightweight annotations (using only 0.5% of the annotated pixels), Cluster2Former achieves competitive performance on standard benchmarks. The approach offers a cost-effective and computationally efficient solution for video instance segmentation, especially in scenarios with limited annotation resources.
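A similarity-based constraint loss over scribble-annotated pixels can be sketched as a pairwise contrastive term: embeddings of pixels from the same instance are pulled together, those from different instances pushed apart. The margin formulation and plain-Python representation below are illustrative assumptions, not the paper's exact loss.

```python
import math

def pairwise_constraint_loss(embeddings, labels, margin=1.0):
    """Contrastive-style loss computed only over annotated (scribbled) pixels.

    embeddings: list of feature vectors, one per annotated pixel
    labels: instance id per annotated pixel
    Same-instance pairs contribute squared distance; different-instance
    pairs are penalized when closer than `margin`.
    """
    total, pairs = 0.0, 0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            d = math.dist(embeddings[i], embeddings[j])
            if labels[i] == labels[j]:
                total += d * d
            else:
                total += max(0.0, margin - d) ** 2
            pairs += 1
    return total / max(pairs, 1)
```

Because only annotated pixels enter the double loop, the cost scales with the scribble size rather than the full mask, which is the appeal of partial annotation.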
39

Papp, Adam, Julian Pegoraro, Daniel Bauer, Philip Taupe, Christoph Wiesmeyr, and Andreas Kriechbaum-Zabini. "Automatic Annotation of Hyperspectral Images and Spectral Signal Classification of People and Vehicles in Areas of Dense Vegetation with Deep Learning." Remote Sensing 12, no. 13 (July 1, 2020): 2111. http://dx.doi.org/10.3390/rs12132111.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Despite recent advances in image and video processing, the detection of people or cars in areas of dense vegetation is still challenging due to landscape, illumination changes, and strong occlusion. In this paper, we address this problem with the use of a hyperspectral camera—installed on the ground or possibly on a drone—and detection based on spectral signatures. We introduce a novel automatic method for annotating spectral signatures based on a combination of state-of-the-art deep learning methods. After collecting millions of samples with our method, we used a deep learning approach to train a classifier to detect people and cars. Our results show that, based only on spectral signature classification, we can achieve a Matthews Correlation Coefficient of 0.83. We evaluate our classification method in areas with varying vegetation and discuss the limitations and constraints of current hyperspectral imaging technology. We conclude that spectral signature classification is possible with high accuracy in uncontrolled outdoor environments. Nevertheless, even with state-of-the-art compact passive hyperspectral imaging technology, the high dynamic range of illumination and relatively low image resolution continue to pose major challenges when developing object detection algorithms for areas of dense vegetation.
40

Yang, Xiaoping, Zhongxia Zhang, Zhongqiu Zhang, Yuting Mo, Lianbei Li, Li Yu, and Peican Zhu. "Automatic Construction and Global Optimization of a Multisentiment Lexicon." Computational Intelligence and Neuroscience 2016 (2016): 1–8. http://dx.doi.org/10.1155/2016/2093406.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Manual annotation of sentiment lexicons demands substantial labor and time, and it is also difficult to obtain an accurate quantification of emotional intensity. Besides, the excessive emphasis on one specific field has greatly limited the applicability of domain sentiment lexicons (Wang et al., 2010). This paper implements statistical training on a large-scale Chinese corpus through a neural network language model and proposes an automatic method of constructing a multidimensional sentiment lexicon based on constraints of coordinate offset. In order to distinguish the sentiment polarities of words which may express either positive or negative meanings in different contexts, we further present a sentiment disambiguation algorithm to increase the flexibility of our lexicon. Lastly, we present a global optimization framework that provides a unified way to combine several human-annotated resources for learning our 10-dimensional sentiment lexicon SentiRuc. Experiments show the superior performance of the SentiRuc lexicon in category labeling, intensity labeling, and sentiment classification tasks. It is worth mentioning that, in the intensity labeling test, SentiRuc outperforms the second-best lexicon by 21 percent.
41

Li, Jian, Lili Niu, Shuba krishna, Fnu Kinshuk, Martin Jones, Carlos Hernandez, Michael Clark, and Sandra Balladares. "Abstract 2351: Genomics annotation and interpretation in somatic oncology using structured data from a clinical knowledgebase." Cancer Research 84, no. 6_Supplement (March 22, 2024): 2351. http://dx.doi.org/10.1158/1538-7445.am2024-2351.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Accurate and comprehensive interpretation of genomic variants has become a bottleneck in clinical sequencing applications due to accelerating precision oncology and the biomedical information explosion. This motivated us to build Ephesus, a framework enabling curation of predictive, prognostic, and diagnostic evidence for clinical biomarkers in cancers. Currently, Ephesus is the primary content source for the Roche navify Mutation Profiler (nMP). The Ephesus data model ensures adherence to best practices in clinical genomic content curation. Variant classification follows the AMP guidelines for somatic variant interpretation. Variant interpretations are derived from international regulatory approvals and clinical practice guidelines (FDA, EMA, TGA, eVIQ, etc.) and recommendations (NCCN, ESMO). The data model is mapped to a set of relational tables. To enforce data integrity and validity, strategies such as data normalization based on standard nomenclatural ontologies, a submit-review-approval workflow, and biological constraints are adopted. Ephesus is deployed as a web application used by curators inside and outside Roche. The UI allows users to conduct edits, filtering, sorting, and bulk operations on entities by attributes. In order to minimize manual effort and maximize content coverage, inference rules are applied before ingestion into nMP. Currently, Ephesus is developed for data collection, browsing, and summarizing of primary entities such as biomarkers, evidence items, genes, variants, variant groups, and drugs. The content from the 2023-Aug snapshot contains 11,596 directly curated biomarker profiles (including biomarker combinations) and 6M+ expanded profiles for 41 major cancer types. To evaluate the clinical reporting value of the curated knowledge, ~160k real cancer patient samples from the AACR GENIE project were queried against Ephesus. Compared with three other major knowledgebases, Ephesus/nMP showed the highest performance (Table 1).
Table 1. Percentage of patients or biomarkers with at least one interpretation from each knowledgebase (AACR GENIE v14.0 totals: 160,965 patients; 923,093 biomarkers):
- Patient % with any interpretation: CGI (2022-10-17) 25% (40,545); CIVIC (2023-09-01) 46% (73,835); ClinVar (2023-09-09) 81% (130,882); NMP (2023-08-11) 89% (142,905).
- Biomarker % (including combos) with any interpretation: CGI 0.03% (256); CIVIC 0.06% (526); ClinVar 13.26% (122,424); NMP 73.79% (681,128).

Citation Format: Jian Li, Lili Niu, Shuba krishna, Fnu Kinshuk, Martin Jones, Carlos Hernandez, Michael Clark, Sandra Balladares. Genomics annotation and interpretation in somatic oncology using structured data from a clinical knowledgebase [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2024; Part 1 (Regular Abstracts); 2024 Apr 5-10; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2024;84(6_Suppl):Abstract nr 2351.
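The table's coverage metric (the fraction of patients with at least one interpreted biomarker) can be sketched in a few lines. The sample data below is invented for illustration; it is not AACR GENIE or Ephesus content.

```python
def interpretation_coverage(patient_biomarkers, knowledgebase):
    """Fraction of patients with >= 1 biomarker interpreted by the knowledgebase.

    patient_biomarkers: dict patient_id -> set of biomarker keys
    knowledgebase: set of biomarker keys carrying an interpretation
    """
    covered = sum(1 for markers in patient_biomarkers.values()
                  if markers & knowledgebase)
    return covered / len(patient_biomarkers)

# Illustrative (made-up) data:
patients = {"p1": {"BRAF:V600E"},
            "p2": {"KRAS:G12C", "TP53:R175H"},
            "p3": {"VUS:1"}}
kb = {"BRAF:V600E", "KRAS:G12C"}
```

With this toy data, two of three patients carry an interpreted biomarker, mirroring how the per-knowledgebase percentages in Table 1 would be computed.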
42

Meyer, Dominique E., Eric Lo, Jonathan Klingspon, Anton Netchaev, Charles Ellison, and Falko Kuester. "TunnelCAM- A HDR Spherical Camera Array for Structural Integrity Assessments of Dam Interiors." Electronic Imaging 2020, no. 7 (January 26, 2020): 227–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.7.iss-227.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The United States of America has an estimated 84,000 dams, of which approximately 15,500 were rated as high-risk as of 2016. Recurrent geological and structural health changes require dam assets to be subject to continuous structural monitoring, assessment, and restoration. The developed system is aimed at evaluating the feasibility of standardized remote, digital inspections of the outflow works of such assets to replace human visual inspections. This work proposes both a mobile inspection platform and an image processing pipeline to reconstruct 3D models of the outflow tunnel and gates of dams for structural defect identification. We begin by presenting the imaging system with consideration of lighting conditions and acquisition strategies. We then propose and formulate global optimization constraints that optimize system poses and geometric estimates of the environment. Following that, we present a RANSAC framework that fits geometric cylinder primitives for texture projection and geometric deviation, as well as an interactive annotation framework for 3D anomaly marking. Results of the system and processing are demonstrated at the Blue Mountain Dam, Arkansas, and the F.E. Walter Dam, Pennsylvania.
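RANSAC fitting of a geometric primitive can be sketched by reducing the cylinder to its 2D circular cross-section: hypothesize a circle from three random points, count inliers within a tolerance, and keep the best hypothesis. This is a generic illustration of the technique, not the paper's actual pipeline.

```python
import random

def circle_from_3(p1, p2, p3):
    # Circumcenter via perpendicular-bisector intersection; None if collinear.
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = ((ax - ux)**2 + (ay - uy)**2) ** 0.5
    return (ux, uy, r)

def ransac_circle(points, iters=200, tol=0.05, seed=0):
    """Best-fit circle (cx, cy, r) by hypothesize-and-verify over point triples."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        model = circle_from_3(*rng.sample(points, 3))
        if model is None:
            continue
        ux, uy, r = model
        inliers = sum(1 for (x, y) in points
                      if abs(((x - ux)**2 + (y - uy)**2) ** 0.5 - r) < tol)
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best
```

In the full 3D setting the hypothesis would be a cylinder axis plus radius, but the inlier-counting loop keeps the same shape.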
43

Wang, Liwei, Henning Koehler, Ke Deng, Xiaofang Zhou, and Shazia Sadiq. "Flexible Provenance Tracing." International Journal of Systems and Service-Oriented Engineering 2, no. 2 (April 2011): 1–20. http://dx.doi.org/10.4018/jssoe.2011040101.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The description of the origins of a piece of data and the transformations by which it arrived in a database is termed the data provenance. The importance of data provenance has already been widely recognized in the database community. The two major approaches to representing provenance information use annotations and inversion. While an annotation is metadata pre-computed to include the derivation history of a data product, the inversion method finds the source data by exploiting the fact that some derivation processes can be inverted. Annotations can flexibly represent diverse provenance metadata, but the complete provenance data may outgrow the data itself. The inversion method is concise, using a single inverse query or function, but the provenance needs to be computed on the fly. This paper proposes a new provenance representation that is a hybrid of the annotation and inversion methods, combining their advantages. The representation adapts to the storage constraint and the response-time requirement of on-the-fly provenance inversion.
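The annotation-versus-inversion trade-off can be made concrete with a toy store. The transformation here (scaling a value by 10) and the class interface are invented for illustration; the point is only that annotation trades storage for O(1) tracing while inversion recomputes sources on demand.

```python
class ProvenanceDB:
    """Toy contrast of annotation-based vs inversion-based provenance tracing."""

    def __init__(self, annotate=True):
        self.annotate = annotate      # True: precompute annotations (more storage)
        self.sources = {}             # False: invert the derivation on the fly
        self.annotations = {}

    def load(self, source_id, value):
        self.sources[source_id] = value

    def derive(self, value):
        derived = value * 10          # hypothetical invertible transformation
        if self.annotate:             # eagerly record which sources contributed
            self.annotations[derived] = {s for s, v in self.sources.items()
                                         if v == value}
        return derived

    def trace(self, derived):
        if self.annotate:
            return self.annotations[derived]          # precomputed metadata lookup
        inverted = derived / 10                       # apply the inverse transform
        return {s for s, v in self.sources.items() if v == inverted}
```

A hybrid scheme, as the paper proposes, would choose per-derivation which branch to use based on storage and response-time budgets.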
44

Giacopuzzi, Edoardo, Niko Popitsch, and Jenny C. Taylor. "GREEN-DB: a framework for the annotation and prioritization of non-coding regulatory variants from whole-genome sequencing data." Nucleic Acids Research 50, no. 5 (March 2, 2022): 2522–35. http://dx.doi.org/10.1093/nar/gkac130.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Non-coding variants have long been recognized as important contributors to common disease risks, but with the expansion of clinical whole genome sequencing, examples of rare, high-impact non-coding variants are also accumulating. Despite recent advances in the study of regulatory elements and the availability of specialized data collections, the systematic annotation of non-coding variants from genome sequencing remains challenging. Here, we propose a new framework for the prioritization of non-coding regulatory variants that integrates information about regulatory regions with prediction scores and HPO-based prioritization. Firstly, we created a comprehensive collection of annotations for regulatory regions, including a database of 2.4 million regulatory elements (GREEN-DB) annotated with controlled gene(s), tissue(s), and associated phenotype(s) where available. Secondly, we calculated a variation constraint metric and showed that constrained regulatory regions associate with disease-associated genes and essential genes from mouse knock-outs. Thirdly, we compared 19 non-coding impact prediction scores, providing suggestions for variant prioritization. Finally, we developed a VCF annotation tool (GREEN-VARAN) that can integrate all these elements to annotate variants for their potential regulatory impact. In our evaluation, we show that GREEN-DB can capture previously published disease-associated non-coding variants as well as identify additional candidate disease genes in trio analyses.
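Variation constraint metrics for genomic regions are commonly expressed as an observed/expected ratio: a region carrying far fewer variants than a background mutation model predicts is considered constrained. The sketch below is that generic obs/exp pattern, not GREEN-DB's exact formula, which is not reproduced here.

```python
def constraint_ratio(observed_variants, region_length, genome_rate):
    """Generic observed/expected variation ratio for a regulatory region.

    genome_rate: expected variants per base under a background model.
    Ratios well below 1 suggest depletion of variation, i.e. the region
    is under constraint.
    """
    expected = region_length * genome_rate
    return observed_variants / expected if expected else float("nan")
```

For example, 5 observed variants in a 1 kb region with an expected rate of 0.01 variants/base gives a ratio of 0.5, i.e. half the expected variation.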
45

Li, Yu, Kun Qian, Weiyan Shi, and Zhou Yu. "End-to-End Trainable Non-Collaborative Dialog System." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8293–302. http://dx.doi.org/10.1609/aaai.v34i05.6345.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
End-to-end task-oriented dialog models have achieved promising performance on collaborative tasks, where users willingly coordinate with the system to complete a given task. In non-collaborative settings, however, such as negotiation and persuasion, users and systems do not share a common goal. As a result, compared to collaborative tasks, people use social content to build rapport and trust in these non-collaborative settings in order to advance their goals. To handle social content, we introduce a hierarchical intent annotation scheme, which can be generalized to different non-collaborative dialog tasks. Building upon TransferTransfo (Wolf et al. 2019), we propose an end-to-end neural network model to generate diverse coherent responses. Our model utilizes intent and semantic slots as the intermediate sentence representation to guide the generation process. In addition, we design a filter to select appropriate responses based on whether these intermediate representations fit the designed task and conversation constraints. Our non-collaborative dialog model guides users to complete the task while simultaneously keeping them engaged. We test our approach on our newly proposed AntiScam dataset and an existing PersuasionForGood dataset. Both automatic and human evaluations suggest that our model outperforms multiple baselines in these two non-collaborative tasks.
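A filter over intermediate intent/slot representations can be sketched as a simple rule check on generated candidates. The intent names, slot names, and constraint sets below are hypothetical, invented to illustrate the mechanism rather than the paper's actual annotation scheme.

```python
def filter_responses(candidates, task_intents, banned_slots):
    """Keep candidates whose predicted intent fits the task and whose slots
    violate no conversation constraint (e.g. an anti-scam bot must never
    fill a sensitive slot such as 'ssn').

    candidates: list of (text, intent, slots) triples from the generator.
    """
    kept = []
    for text, intent, slots in candidates:
        if intent not in task_intents:          # off-task intent: reject
            continue
        if any(slot in banned_slots for slot in slots):
            continue                            # violates conversation constraint
        kept.append(text)
    return kept
```

In a full system this rule check would sit between the generator's n-best outputs and the final response selection.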
46

Xiao, Aoran, Jiaxing Huang, Dayan Guan, Fangneng Zhan, and Shijian Lu. "Transfer Learning from Synthetic to Real LiDAR Point Cloud for Semantic Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2795–803. http://dx.doi.org/10.1609/aaai.v36i3.20183.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Knowledge transfer from synthetic to real data has been widely studied to mitigate data annotation constraints in various computer vision tasks such as semantic segmentation. However, such studies have focused on 2D images, and their counterpart in 3D point cloud segmentation lags far behind due to the lack of large-scale synthetic datasets and effective transfer methods. We address this issue by collecting SynLiDAR, a large-scale synthetic LiDAR dataset that contains point-wise annotated point clouds with accurate geometric shapes and comprehensive semantic classes. SynLiDAR was collected from multiple virtual environments with rich scenes and layouts and consists of over 19 billion points across 32 semantic classes. In addition, we design PCT, a novel point cloud translator that effectively mitigates the gap between synthetic and real point clouds. Specifically, we decompose the synthetic-to-real gap into an appearance component and a sparsity component and handle them separately, which improves the point cloud translation greatly. We conducted extensive experiments over three transfer learning setups: data augmentation, semi-supervised domain adaptation, and unsupervised domain adaptation. The experiments show that SynLiDAR provides a high-quality data source for studying 3D transfer and that the proposed PCT achieves superior point cloud translation consistently across the three setups. The dataset is available at https://github.com/xiaoaoran/SynLiDAR.
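The sparsity side of a synthetic-to-real gap can be illustrated in its crudest form: subsampling dense synthetic points toward a real sensor's point density. This uniform-dropping sketch is an assumption for illustration; PCT itself learns the sparsity translation rather than dropping points uniformly.

```python
import random

def match_sparsity(synthetic_points, target_density, seed=0):
    """Subsample dense synthetic LiDAR points toward a real sensor's density.

    target_density: fraction of points to keep in (0, 1], e.g. estimated
    from real scans of the same scene type.
    """
    rng = random.Random(seed)
    keep = max(1, round(len(synthetic_points) * target_density))
    return rng.sample(synthetic_points, keep)
```

A learned translator would additionally condition the dropping pattern on range and incidence angle, since real LiDAR sparsity is far from uniform.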
47

Intrigila, Benedetto, Giuseppe Della Penna, and Andrea D’Ambrogio. "A Lightweight BPMN Extension for Business Process-Oriented Requirements Engineering." Computers 10, no. 12 (December 16, 2021): 171. http://dx.doi.org/10.3390/computers10120171.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Process-oriented requirements engineering approaches are often required to deal with the effective adaptation of existing processes in order to easily introduce new or updated requirements. Such approaches are based on the adoption of widely used notations, such as the one introduced by the Business Process Model and Notation (BPMN) standard. However, BPMN models do not convey enough information about the involved entities and how they interact with process activities, thus leading to ambiguities as well as to incomplete and inconsistent requirements definitions. This paper proposes an approach that allows stakeholders and software analysts to easily merge and integrate behavioral and data properties in a BPMN model, so as to fully exploit the potential of BPMN without incurring the aforementioned limitations. The proposed approach introduces a lightweight BPMN extension that specifically addresses the annotation of data properties in terms of constraints, i.e., pre- and post-conditions that the different process activities must satisfy. The visual representation of the annotated model conveys all the information required both by stakeholders, to understand and validate requirements, and by software analysts and developers, to easily map these updates to the corresponding software implementation. The presented approach is illustrated using two running examples, which have also been used to carry out a preliminary validation activity.
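Annotating an activity with pre- and post-conditions over data properties can be sketched as predicates checked around the activity's effect. The activity name, data properties, and dict-based encoding below are hypothetical illustrations, not the paper's BPMN extension syntax.

```python
# Sketch: a BPMN-style activity annotated with data pre/post-conditions,
# checked against a dict of data properties.

def run_activity(state, activity):
    for cond in activity["pre"]:
        if not cond(state):
            raise ValueError(f"precondition failed for {activity['name']}")
    activity["effect"](state)
    for cond in activity["post"]:
        if not cond(state):
            raise ValueError(f"postcondition failed for {activity['name']}")
    return state

approve_order = {
    "name": "ApproveOrder",
    "pre":  [lambda s: s["total"] > 0, lambda s: s["status"] == "submitted"],
    "effect": lambda s: s.update(status="approved"),
    "post": [lambda s: s["status"] == "approved"],
}
```

The same checks that a developer would map to assertions in code are what the visual annotation conveys to stakeholders in the model.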
48

Wang, Qi, and Xiyou Su. "Research on Named Entity Recognition Methods in Chinese Forest Disease Texts." Applied Sciences 12, no. 8 (April 12, 2022): 3885. http://dx.doi.org/10.3390/app12083885.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Named entity recognition of forest diseases plays a key role in knowledge extraction in the field of forestry. The aim of this paper is to propose a named entity recognition method based on multi-feature embedding, a transformer encoder, a bi-gated recurrent unit (BiGRU), and conditional random fields (CRF). According to the characteristics of the forest disease corpus, several features are introduced here to improve the method’s accuracy. In this paper, we analyze the characteristics of forest disease texts; carry out pre-processing, labeling, and extraction of multiple features; and construct a corpus of forest disease texts. In the input representation layer, the method integrates multiple features, such as characters, radicals, word boundaries, and parts of speech. Then, implicit features (e.g., sentence context features) are captured through the transformer’s encoding layer. The obtained features are transmitted to the BiGRU layer for further deep feature extraction. Finally, the CRF model is used to learn constraints and output the optimal annotation of disease names, damage sites, and drug entities in the forest disease texts. The experimental results on the self-built data set of forest disease texts show that the precision of the proposed method for entity recognition reached more than 93%, indicating that it can effectively solve the task of named entity recognition in forest disease texts.
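The constraints a CRF layer learns over label sequences are essentially restrictions on tag transitions, e.g. in BIO tagging an I- tag may only continue an entity of the same type. The greedy decoder and tag names below are a simplified illustration (a real CRF uses learned transition scores and Viterbi decoding), not the paper's implementation.

```python
# BIO transition constraints of the kind a CRF layer enforces for entity
# labels (disease names, damage sites, drugs). Tag names are illustrative.

def valid_transition(prev, curr):
    # An I- tag must continue an entity of the same type.
    if curr.startswith("I-"):
        return prev in (f"B-{curr[2:]}", curr)
    return True

def constrained_decode(score_rows, tags):
    """Greedy decode that only follows valid BIO transitions.

    score_rows: per-token dict tag -> score (stand-in for emission scores).
    """
    path, prev = [], "O"
    for scores in score_rows:
        allowed = [t for t in tags if valid_transition(prev, t)]
        best = max(allowed, key=lambda t: scores[t])
        path.append(best)
        prev = best
    return path
```

Note how the constraint overrides the raw scores: even if "I-DIS" scores highest on the first token, it cannot start a sequence.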
49

Sawsaa, Ahlam F., and Joan Lu. "Modeling Domain Ontology for Occupational Therapy Resources Using Natural Language Programming (NLP) Technology to Model Domain Ontology of Occupational Therapy Resources." International Journal of Information Retrieval Research 3, no. 4 (October 2013): 104–19. http://dx.doi.org/10.4018/ijirr.2013100106.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Creation and development of a formal domain ontology of Occupational Therapy (OT) requires the prescription and formal evaluation of the results through specific criteria. The Methontology approach to ontology development was followed to create the OTO ontology, which was implemented using Protégé-OWL. The accuracy of the OTO ontology was assessed using a set of ontology design criteria. This paper describes a software engineering approach to model a domain ontology for occupational therapy (OT) resources using Natural Language Programming (NLP) technology. Rules were written to annotate the domain concepts using Java Annotation Patterns Engine (JAPE) grammar, which supports regular expression matching and thus annotates OT concepts using the GATE developer tool. This speeds up the time-consuming development of the ontology, which is important for experts in the domain facing time constraints and high workloads. The rules provide significant results: the pattern matching of OT concepts based on the lookup list produced 403 correct concepts, and the accuracy was generally high. Using NLP techniques is a good approach to reducing the domain expert's work, and the results can be evaluated. This study contributes to the understanding of ontology development and evaluation methods to address the knowledge gap of using ontology in the decision support system component of occupational therapy.
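Gazetteer-based concept annotation of the kind JAPE rules perform over a lookup list can be approximated with ordinary regular expressions. The sketch below uses Python's `re` module rather than actual JAPE grammar, and the lookup entries are invented examples, not the study's OT term list.

```python
import re

# Illustrative lookup list (gazetteer); the real study used a curated
# list of occupational therapy concepts.
LOOKUP = ["occupational therapy", "assistive device", "rehabilitation"]

def annotate(text):
    """Return (start, end, concept) spans for case-insensitive gazetteer matches."""
    spans = []
    for concept in LOOKUP:
        pattern = r"\b" + re.escape(concept) + r"\b"
        for m in re.finditer(pattern, text, re.IGNORECASE):
            spans.append((m.start(), m.end(), concept))
    return sorted(spans)
```

In GATE, the analogous JAPE rule would attach an annotation type and features to each matched span instead of returning plain tuples.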
50

Almeida, Sílvia, Marko Radeta, Tomoya Kataoka, João Canning-Clode, Miguel Pessanha Pais, Rúben Freitas, and João Gama Monteiro. "Designing Unmanned Aerial Survey Monitoring Program to Assess Floating Litter Contamination." Remote Sensing 15, no. 1 (December 23, 2022): 84. http://dx.doi.org/10.3390/rs15010084.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Monitoring marine contamination by floating litter can be particularly challenging since debris is continuously moving over a large spatial extent, pushed by currents, waves, and winds. Assessments of floating litter contamination have mostly relied on opportunistic surveys from vessels, modeling and, more recently, remote sensing with spectral analysis. This study explores how a low-cost commercial unmanned aircraft system equipped with a high-resolution RGB camera can be used as an alternative to conduct floating litter surveys in coastal waters or from vessels. The study compares different processing and analytical strategies and discusses operational constraints. Collected UAS images were analyzed using three different approaches: (i) manual counting (MC), using visual inspection and image annotation, with object counts as a baseline; (ii) pixel-based detection, an automated color analysis process to assess overall contamination; and (iii) machine learning (ML), automated object detection and identification using state-of-the-art convolutional neural networks (CNNs). Our findings illustrate that MC remains the most precise method for classifying different floating objects. ML still shows heterogeneous performance in correctly identifying different classes of floating litter; however, it demonstrates promising results in detecting floating items, which can be leveraged to scale up monitoring efforts and be used in automated analysis of large sets of imagery to assess relative floating litter contamination.
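Pixel-based contamination assessment by automated color analysis can be sketched as a per-pixel RGB rule: flag pixels that look non-water (bright, or not blue-dominated) and report the flagged fraction. The thresholds and the rule itself are invented for illustration; the study's calibrated color analysis is not reproduced here.

```python
def litter_pixel_fraction(pixels, blue_dominance=30, brightness=200):
    """Fraction of pixels flagged as non-water candidates.

    pixels: iterable of (r, g, b) tuples from an RGB survey image.
    A pixel is flagged if it is bright (likely foam/plastic highlights)
    or not clearly blue-dominated (unlike open water).
    """
    def is_candidate(r, g, b):
        bright = (r + g + b) / 3 >= brightness
        not_bluish = b - max(r, g) < blue_dominance
        return bright or not_bluish

    pixels = list(pixels)
    flagged = sum(1 for p in pixels if is_candidate(*p))
    return flagged / len(pixels)
```

Such a fraction gives a relative contamination index per image; it cannot distinguish litter classes, which is where the manual-counting and CNN approaches come in.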

Back to the bibliography