A selection of scholarly literature on the topic "State Token Mechanism"

Format the source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "State Token Mechanism".

Next to each work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, where these are available in the metadata.

Journal articles on the topic "State Token Mechanism"

1

Hu, Anwen, Zhicheng Dou, Jian-Yun Nie, and Ji-Rong Wen. "Leveraging Multi-Token Entities in Document-Level Named Entity Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 7961–68. http://dx.doi.org/10.1609/aaai.v34i05.6304.

Abstract:
Most state-of-the-art named entity recognition systems are designed to process each sentence within a document independently. Such systems easily confuse entity types when the context information within a single sentence is insufficient. To utilize the context of the whole document, most document-level work lets neural networks learn cross-sentence relations on their own, which is not intuitive for us humans. In this paper, we divide entities into multi-token entities, which contain multiple tokens, and single-token entities, which are composed of a single token. We
2

Yang, Yixiao, Xiang Chen, and Jiaguang Sun. "Improve Language Modeling for Code Completion Through Learning General Token Repetition of Source Code with Optimized Memory." International Journal of Software Engineering and Knowledge Engineering 29, no. 11n12 (2019): 1801–18. http://dx.doi.org/10.1142/s0218194019400229.

Abstract:
In the last few years, applying language models to source code has been the state-of-the-art approach to code completion. However, compared with natural language, code exhibits more pronounced repetition. For example, a variable can be used many times in the code that follows, so variables in source code have a high chance of being repetitive. Cloned code and templates also have the property of token repetition. Capturing the token repetition of source code is therefore important. In different projects, variables or types are usually named differently. This means that a model trained in a fin
3

Kim, Jinsu, Eunsun Choi, Byung-Gyu Kim, and Namje Park. "Proposal of a Token-Based Node Selection Mechanism for Node Distribution of Mobility IoT Blockchain Nodes." Sensors 23, no. 19 (2023): 8259. http://dx.doi.org/10.3390/s23198259.

Abstract:
Various elements, such as evolutions in IoT services resulting from sensoring by vehicle parts and advances in small communication technology devices, have significantly impacted the mass spread of mobility services that are provided to users in need of limited resources. In particular, business models are progressing away from one-off costs towards longer-term costs, as represented by shared services utilizing kick-boards or bicycles and subscription services for vehicle software. Advances in shared mobility services, as described, are calling for solutions that can enhance the reliability of
4

Bai, He, Peng Shi, Jimmy Lin, et al. "Segatron: Segment-Aware Transformer for Language Modeling and Understanding." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 14 (2021): 12526–34. http://dx.doi.org/10.1609/aaai.v35i14.17485.

Abstract:
Transformers are powerful for sequence modeling. Nearly all state-of-the-art language models and pre-trained language models are based on the Transformer architecture. However, it distinguishes sequential tokens only with the token position index. We hypothesize that better contextual representations can be generated from the Transformer with richer positional information. To verify this, we propose a segment-aware Transformer (Segatron), by replacing the original token position encoding with a combined position encoding of paragraph, sentence, and token. We first introduce the segment-aware m
5

Sitnik, A. A. "NFT as an Object of Legal Regulation." Actual Problems of Russian Law 17, no. 12 (2022): 84–93. http://dx.doi.org/10.17803/1994-1471.2022.145.12.084-093.

Abstract:
The paper is devoted to the study of the legal nature of a non-fungible token — NFT. The paper discusses the concept and types of tokens. The author defines a token as a unit of accounting in a distributed ledger that digitally represents financial instruments or other assets that expresses the economic value of the objects being represented and allows the rights associated with them to be exercised. According to a common point of view, NFT serves as a means of digital expression of a particular object, it has characteristics (signs) inherent exclusively to it, by virtue of which it cannot be
6

Huang, Lingbo, Yushi Chen, and Xin He. "Spectral-Spatial Mamba for Hyperspectral Image Classification." Remote Sensing 16, no. 13 (2024): 2449. http://dx.doi.org/10.3390/rs16132449.

Abstract:
Recently, transformer has gradually attracted interest for its excellence in modeling the long-range dependencies of spatial-spectral features in HSI. However, transformer has the problem of the quadratic computational complexity due to the self-attention mechanism, which is heavier than other models and thus has limited adoption in HSI processing. Fortunately, the recently emerging state space model-based Mamba shows great computational efficiency while achieving the modeling power of transformers. Therefore, in this paper, we first proposed spectral-spatial Mamba (SS-Mamba) for HSI classific
7

Liu, Huey-Ing, and Wei-Lin Chen. "X-Transformer: A Machine Translation Model Enhanced by the Self-Attention Mechanism." Applied Sciences 12, no. 9 (2022): 4502. http://dx.doi.org/10.3390/app12094502.

Abstract:
Machine translation has received significant attention in the field of natural language processing not only because of its challenges but also due to the translation needs that arise in the daily life of modern people. In this study, we design a new machine translation model named X-Transformer, which refines the original Transformer model regarding three aspects. First, the model parameter of the encoder is compressed. Second, the encoder structure is modified by adopting two layers of the self-attention mechanism consecutively and reducing the point-wise feed forward layer to help the model
8

Guo, Chaopeng, Pengyi Zhang, Bangyao Lin, and Jie Song. "A Dual Incentive Value-Based Paradigm for Improving the Business Market Profitability in Blockchain Token Economy." Mathematics 10, no. 3 (2022): 439. http://dx.doi.org/10.3390/math10030439.

Abstract:
Blockchain solves the problem of mutual trust and consensus in the business market of the token economy. In the existing paradigm of blockchain token economy, there are disadvantages of lacking the incentive mechanism, business applications and virtual token value. These shortcomings reduce consumers’ willingness to consume and the profits of the merchants. In this paper, we propose a novel “Dual incentive value-based” paradigm to improve the business market profitability in blockchain token economy. To evaluate our proposed paradigm, we propose a business study case for improving merchants’ e
9

Khoo, Ling Min Serena, Hai Leong Chieu, Zhong Qian, and Jing Jiang. "Interpretable Rumor Detection in Microblogs by Attending to User Interactions." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8783–90. http://dx.doi.org/10.1609/aaai.v34i05.6405.

Abstract:
We address rumor detection by learning to differentiate between the community's response to real and fake claims in microblogs. Existing state-of-the-art models are based on tree models that model conversational trees. However, in social media, a user posting a reply might be replying to the entire thread rather than to a specific user. We propose a post-level attention model (PLAN) to model long distance interactions between tweets with the multi-head attention mechanism in a transformer network. We investigated variants of this model: (1) a structure aware self-attention model (StA-PLAN) tha
10

Keerthana, R. L., Awadhesh Kumar Singh, Poonam Saini, and Diksha Malhotra. "Explaining Sarcasm of Tweets using Attention Mechanism." Scalable Computing: Practice and Experience 24, no. 4 (2023): 787–96. http://dx.doi.org/10.12694/scpe.v24i4.2166.

Abstract:
Emotion identification from text can help boost the effectiveness of sentiment analysis models. Sarcasm is one of the more difficult emotions to detect, particularly in textual data. Even though several models for detecting sarcasm have been presented, their performance falls way short of that of other emotion detection models. As a result, few strategies have been introduced in the paper that helped to enhance the performance of sarcasm detection models. To compare performance, the model was tested using the TweetEval benchmark dataset. On the TweetEval benchmark, the technique proposed in th

Dissertations on the topic "State Token Mechanism"

1

Kady, Charbel. "Managing Business Process Continuity and Integrity Using Pattern-Based Corrections." Electronic Thesis or Diss., IMT Mines Alès, 2024. http://www.theses.fr/2024EMAL0014.

Abstract:
This thesis presents an approach to managing deviations in workflows that use Business Process Model and Notation (BPMN). The research addresses the need for effective deviation management by integrating a comprehensive framework that combines pattern-based deviation correction with an enriched State Token mechanism. The approach is tested through a case study in the beekeeping domain, demonstrating the practical applicability and effectiveness of the proposed method. Key contributions include the development of a pattern library and the characterization of

Book chapters on the topic "State Token Mechanism"

1

Ante, Lennart. "Blockchain-Based Tokens as Financing Instruments." In Fostering Innovation and Competitiveness With FinTech, RegTech, and SupTech. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4390-0.ch007.

Abstract:
Blockchain technology represents a technological basis with which existing corporate financing processes can be supplemented. The issuance of digital tokens offers several potential advantages such as tradability, efficiency, automation, and cost benefits compared to traditional financial products. This transformation of financing processes and capital markets can allow small and medium-sized enterprises (SMEs) to access capital markets and at the same time close existing retail investment gaps. In this chapter, the challenges of SME financing are described and blockchain-based financing (initial coin offerings [ICOs] and security token offerings [STOs]) is introduced. The blockchain-based financing mechanisms are compared with conventional forms of financing and potentials and challenges are discussed. In conclusion, it is stated that potential clearly outweighs risk and that the majority of all existing challenges can be tackled through sensible and coordinated regulation.
2

Lipton, Alexander. "Toward a Stable Tokenized Medium of Exchange." In Cryptoassets. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190077310.003.0005.

Abstract:
This chapter discusses the current state of the crypto land and argues that stable crypto tokens, which can be viewed as an electronic analogue of cash, can help augment the existing TCP/IP (Transmission Control Protocol/Internet Protocol) with a much-needed mechanism in order to bring existing banking and payment systems into the twenty-first century. It describes three existing approaches to designing such tokens—fiat collateralization, cryptocurrency collateralization, and dynamic stabilization—and concludes that only regulatorily compliant fiat-backed tokens are viable in the long run. It also discusses asset-backed cryptocurrencies and argues that in some instances they can provide a much-needed counterpoint for today's fiat currencies, and pave a way forward toward ensuring world-wide financial stability and inclusion.
3

Włosik, Katarzyna. "Initial coin offering jako nowa forma finansowania i inwestycji." In Innowacje finansowe w gospodarce 4.0. Wydawnictwo Uniwersytetu Ekonomicznego w Poznaniu, 2021. http://dx.doi.org/10.18559/978-83-8211-083-8/4.

Abstract:
This part of the monograph is related to initial coin offering – a mechanism that allows blockchain-based companies or projects to obtain financing. In return for financial support, ICO participants are offered different types of digital tokens – payment, utility or investment tokens. The chapter contains the systematization of issues related to ICO and tokens as well as a description of stages of initial coin offering. The SWOT analysis of ICO highlights the strengths and opportunities related to ICO – inter alia the possibility of portfolio diversification and the limited access for individual investors to early-stage investments (apart from ICO). Also the weaknesses of initial coin offering (e.g. the need to prepare a due diligence by an investor) and associated risks (e.g. regulatory uncertainty) are considered. Moreover, the author identifies research areas related to ICO. They include, among others, the identification of ICO success factors and the identification of factors affecting the rates of return on tokens.
4

Sai Swaroop, Akella, and S. Rama Sree. "Network Mechanism Establishment and Authentication Using Digital Certificate Management." In Advances in Transdisciplinary Engineering. IOS Press, 2023. http://dx.doi.org/10.3233/atde221270.

Abstract:
Authentication is the fundamental idea behind all kinds of information systems; however, single-sided validation is considered weak and vulnerable, carrying the security risk of single-point failure or breakdown caused by external attacks or internal fraud. In this paper, we propose a blockchain-based decentralized authentication scheme (named BlockAuth) for edge and IoT environments that provides a safer, more dependable, and fault-tolerant arrangement, in which each edge device is treated as a node used to build a blockchain network. We designed a secure registration and verification process, with the decentralized blockchain authentication protocol supported by blockchain consensus and smart contracts. We also implemented a complete blockchain-based authentication platform for the evaluation of practicality, security, and performance. With a considerable level of protection configured for administrators, the evaluation and analysis indicate that the proposed BlockAuth scheme offers safer, more dependable, and fault-tolerant decentralized authentication. The proposed BlockAuth scheme is suitable for token-based password and certificate verification, and is intended for extremely high security requirements in edge and IoT environments.
5

Li, Wenda, Kaixuan Chen, Shunyu Liu, Tongya Zheng, Wenjie Huang, and Mingli Song. "Learning a Mini-Batch Graph Transformer via Two-Stage Interaction Augmentation." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240842.

Abstract:
Mini-batch Graph Transformer (MGT), as an emerging graph learning model, has demonstrated significant advantages in semi-supervised node prediction tasks with improved computational efficiency and enhanced model robustness. However, existing methods for processing local information either rely on sampling or simple aggregation, which respectively result in the loss and squashing of critical neighbor information. Moreover, the limited number of nodes in each mini-batch restricts the model’s capacity to capture the global characteristic of the graph. In this paper, we propose LGMformer, a novel MGT model that employs a two-stage augmented interaction strategy, transitioning from local to global perspectives, to address the aforementioned bottlenecks. The local interaction augmentation (LIA) presents a neighbor-target interaction Transformer (NTIformer) to acquire an insightful understanding of the co-interaction patterns between neighbors and the target node, resulting in a locally effective token list that serves as input for the MGT. In contrast, global interaction augmentation (GIA) adopts a cross-attention mechanism to incorporate entire graph prototypes into the target node representation, thereby compensating for the global graph information to ensure a more comprehensive perception. To this end, LGMformer achieves the enhancement of node representations under the MGT paradigm. Experimental results related to node classification on the ten benchmark datasets demonstrate the effectiveness of the proposed method. Our code is available at https://github.com/l-wd/LGMformer.
6

"Introduction to Financial Digital Assets." In Financial Digital Assets and the Financial Risk Modeling of Portfolio Investments. IGI Global, 2025. https://doi.org/10.4018/979-8-3693-8120-5.ch001.

Abstract:
The chapter provides a comprehensive overview of the evolving landscape of digital assets, setting the stage for a deeper exploration of their impact on the financial industry. This chapter begins by defining digital assets, encompassing cryptocurrencies like Bitcoin and Ethereum, tokenized securities, and non-fungible tokens (NFTs). It delves into the foundational technologies underpinning these assets, such as blockchain and distributed ledger technology, highlighting their role in ensuring transparency, security, and decentralization. The chapter also examines the historical context and evolution of digital assets, tracing their journey from niche innovations to mainstream financial instruments. Key concepts such as decentralization, tokenization, and smart contracts are introduced, providing readers with a solid understanding of the mechanisms driving the digital asset ecosystem.
7

Wu, Junjie, Mingjie Sun, Chen Gong, Nan Yu, and Guohong Fu. "PromptCD: Coupled and Decoupled Prompt Learning for Vision-Language Models." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240504.

Abstract:
Large-scale pre-trained vision-language models (VLMs), like CLIP, have presented striking generalizability for adapting to image classification in a few shot setting. Most existing methods explore a set of learnable tokens, such as prompt learning, on data-efficient utilization for task adaptation. However, they focus on either the coupled-modality property by prompt projection or decoupled-modality characteristic by prompt consistency, which ignores effective interaction between prompts. To model the deep yet sufficient cross-modal interaction and enhance the generalization between both seen and unseen tasks, in this paper, we propose a novel coupled and decoupled prompt learning framework, dubbed PromptCD, for vision-language models. Specifically, we introduce a bi-directional coupled-modality mechanism to intensify the interaction between both vision and language branches. Additionally, we propose mixture consistency to further improve the generalization and discrimination of the models on unseen tasks. The integration of such a mechanism and consistency facilitates the proposed framework adaptation for various downstream tasks. We conduct extensive experiments on 11 image classification datasets under a range of evaluation protocols, including base-to-novel and domain generalization, and cross-dataset recognition. Experimental results demonstrate that our proposed PromptCD overall outperforms state-of-the-art methods.
8

Fagan, Melinda Bonnie. "Stem cells." In Routledge Encyclopedia of Philosophy. Routledge, 2023. http://dx.doi.org/10.4324/9780415249126-q152-1.

Abstract:
What is a stem cell? The term is a combination of ‘cell’ and ‘stem’. A cell is a major category of living thing, while a stem is a site of growth and support for something else. In science today, a stem cell is defined as a cell derived from a multicellular organism, which is able to both self-renew (produce more stem cells of the same kind) and differentiate (produce cells corresponding to later developmental stages of the source organism). So the concept of a stem cell is somewhat complex, bearing on questions of biological individuality, relations between cells and organisms, and our understanding of development. Stem cell phenomena range from everyday to extraordinary laboratory products. On the everyday side: hair, skin, and blood cells are shed and replaced by ongoing stem cell activities. Stem cells help maintain organs and tissues in mature multicellular organisms. Regeneration in wound-healing is also, often, stem-cell mediated. The hydra’s mythic regeneration potential is due to its plentiful stem cells; similarly for plants. Looking to earlier developmental stages, embryonic cells also exhibit stem cell capacities. If such cells are removed from an early embryo and grown in artificial cell culture, this produces an embryonic stem cell line – an indefinitely renewable source of cells that can, under appropriate conditions, develop to produce many (even all) cell types found in a mature organism. Other experimental products of stem cells include embryoid bodies, organoids, and embryo-like structures. Stem cells are thus found in living organisms (in vivo) and grown artificially (in vitro). Stem cells raise several important metaphysical questions for philosophers of biology. One concerns biological individuality. Multicellular organisms are paradigmatic biological individuals. There are strong reasons to think cells are individuals. 
Stem cells are cells that divide and develop into other kinds of cell, tissues, organs, and even analogues of whole organisms. Are stem cells individuals? One way to answer this question is in terms of cell lineages. Complicating matters, stem cells mediate between cell and organismal levels of biological organisation. This raises questions about individuality and development for organisms and constituent cell lineages. Metaphysical theories about the nature of stem cells – natural kinds, causal mechanisms, processes – are also unsettled, as is the science. Different metaphysical theories about the nature of stem cells present a problem of theory choice. Alternatives include: stem cells as entities, stemness as a state, disposition to develop, and cell-environment systems. Our knowledge about stem cells is incomplete, based on many different kinds of experiment. The main ways of identifying stem cells are to find, grow, or make them: cell-sorting, in vitro culture, and reprogramming, respectively. The basic design is to remove cells from an organismal source and place them in an environment where they can self-renew. After measuring cell traits in this environment, some cells are moved to a new environment to encourage differentiation. Cell traits in the new environment are then measured. The results correlate traits of an organismal source, candidate stem cells, and differentiated cells. Collectively, these experiments yield many different varieties of stem cell. Characterisation of these varieties is closely tied to technologies and experimental methods for culturing, visualising, and manipulating cells. Uncertainty is a constant, however. It’s impossible to experimentally show that a single cell is a stem cell; all methods of identifying stem cells require populations of homogeneous stem cells. But homogeneity for cells that by definition transform into other things is a fragile assumption. 
Consequently, stem cells are identified relative to particular experimental methods. Our knowledge of stem cells accumulates by multiplying experimental contexts and relating their outcomes to one another. In practice, knowledge about stem cells has the form of a proliferating network of models. In vitro stem cells are a prominent example: concrete approximations of early developmental stages of a multicellular organism of a particular species. Other important stem cell-based models are organoids and human-animal chimeras. Different stem cell models complement one another, highlighting different aspects of development. More generally, stem cell biology is replete with abstract and concrete models. Social organisation of experiments and resultant models is important for understanding the epistemology of stem cell research. Abstract models play a less prominent role in stem cell research, although lineage tree models are important representations of stem cells and their potential. Classifying stem cells is an unsettled and messy affair, with many different cross-cutting or overlapping distinctions used in practice. There are many varieties of stem cell, but no single agreed-upon system for classifying them. Lineage tree models offer one prospect for such a system. In popular culture, stem cells are associated with medical promise on the one hand, and embryo destruction on the other. Stem cells are tokens of medical promise and hope; the idea being to use their potential to cure a wide range of injuries and diseases. This promise motivates stem cell ‘clinics’ alongside scientific research. The former peddle cures for many ailments unencumbered by scientific evidence or regulatory approval. The latter challenged by ethical questions about human embryo research. Tension between medical hopes and objections to human embryo research has produced a large bioethics literature. 
Key ethical debates are about research using human embryos, creating human–animal chimeras, and how to balance hope and hype in regulating and funding stem cell research. Broad anti-science cultural movements encourage proliferation of stem cell ‘clinics’ that market alleged cures directly to consumers, bypassing scientific and medical standards.

Conference papers on the topic "State Token Mechanism"

1

Zhou, Yan, Longtao Huang, Tao Guo, Jizhong Han, and Songlin Hu. "A Span-based Joint Model for Opinion Target Extraction and Target Sentiment Classification." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/762.

Abstract:
Target-Based Sentiment Analysis aims at extracting opinion targets and classifying the sentiment polarities expressed on each target. Recently, token based sequence tagging methods have been successfully applied to jointly solve the two tasks, which aims to predict a tag for each token. Since they do not treat a target containing several words as a whole, it might be difficult to make use of the global information to identify that opinion target, leading to incorrect extraction. Independently predicting the sentiment for each token may also lead to sentiment inconsistency for different words i
2

Yin, Shi, Shijie Huang, Shangfei Wang, et al. "1DFormer: A Transformer Architecture Learning 1D Landmark Representations for Facial Landmark Tracking." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/176.

Abstract:
Recently, heatmap regression methods based on 1D landmark representations have shown prominent performance on locating facial landmarks. However, previous methods ignored to make deep explorations on the good potentials of 1D landmark representations for sequential and structural modeling of multiple landmarks to track facial landmarks. To address this limitation, we propose a Transformer architecture, namely 1DFormer, which learns informative 1D landmark representations by capturing the dynamic and the geometric patterns of landmarks via token communications in both temporal and spatial dimen
3

Liu, Jie, Shaowei Chen, Bingquan Wang, Jiaxin Zhang, Na Li, and Tong Xu. "Attention as Relation: Learning Supervised Multi-head Self-Attention for Relation Extraction." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/524.

Abstract:
Joint entity and relation extraction is critical for many natural language processing (NLP) tasks, which has attracted increasing research interest. However, it is still faced with the challenges of identifying the overlapping relation triplets along with the entire entity boundary and detecting the multi-type relations. In this paper, we propose an attention-based joint model, which mainly contains an entity extraction module and a relation detection module, to address the challenges. The key of our model is devising a supervised multi-head self-attention mechanism as the relation detection m
4

Theil, Christoph Kilian, Samuel Broscheit, and Heiner Stuckenschmidt. "PRoFET: Predicting the Risk of Firms from Event Transcripts." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/724.

Abstract:
Financial risk, defined as the chance to deviate from return expectations, is most commonly measured with volatility. Due to its value for investment decision making, volatility prediction is probably among the most important tasks in finance and risk management. Although evidence exists that enriching purely financial models with natural language information can improve predictions of volatility, this task is still comparably underexplored. We introduce PRoFET, the first neural model for volatility prediction jointly exploiting both semantic language representations and a comprehensive set of
5

Zhou, Chengjie, Chao Che, Pengfei Wang, and Qiang Zhang. "SCAT: A Time Series Forecasting with Spectral Central Alternating Transformers." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/622.

Full text of the source
Abstract:
Time series forecasting has essential applications across various domains. For instance, forecasting power time series can optimize energy usage and bolster grid stability and reliability. Existing Transformer-based models are limited to classical designs, ignoring the impact of spatial information and noise on model architecture design. We therefore propose an atypical design of Transformer-based models for multivariate time series forecasting. This design consists of two critical components: (i) the spectral clustering center of the time series, employed as the focal point for attention…
6

Shen, Tao, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, and Chengqi Zhang. "Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/604.

Full text of the source
Abstract:
Many natural language processing tasks rely solely on sparse dependencies between a few tokens in a sentence. Soft attention mechanisms show promising performance in modeling local/global dependencies via soft probabilities between every pair of tokens, but they are neither effective nor efficient when applied to long sentences. By contrast, hard attention mechanisms directly select a subset of tokens but are difficult and inefficient to train due to their combinatorial nature. In this paper, we integrate both soft and hard attention into one context-fusion model, "reinforced self-attention (ReSA)", for…
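The hard-then-soft combination this abstract describes can be sketched in a few lines: a hard step keeps only the k most salient tokens, and a soft step runs full attention among the survivors. In ReSA the hard selector is trained with reinforcement learning; the random salience scores below are a stand-in, and all names and shapes are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hard_then_soft_attention(X, k, rng):
    """Hard step: keep the k tokens with the highest salience (here a
    random proxy for a learned selector). Soft step: self-attention
    restricted to the selected subset, so cost scales with k, not
    sequence length."""
    seq_len, d = X.shape
    salience = rng.standard_normal(seq_len)
    keep = np.sort(np.argsort(salience)[-k:])     # indices of the k survivors
    Xs = X[keep]
    attn = softmax(Xs @ Xs.T / np.sqrt(d))        # soft attention among survivors
    return keep, attn @ Xs                        # fused representations

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 8))                  # 10 tokens, dim 8
keep, fused = hard_then_soft_attention(X, k=4, rng=rng)
assert keep.shape == (4,) and fused.shape == (4, 8)
```

The design point is the split of labor: the combinatorial selection is made once (hard), and the differentiable mixing happens only on the small subset (soft).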
7

Zhang, Wenchang, Hua Wang, and Fan Zhang. "Skip-Timeformer: Skip-Time Interaction Transformer for Long Sequence Time-Series Forecasting." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/608.

Full text of the source
Abstract:
Recent studies have raised questions about the suitability of the Transformer architecture for long-sequence time-series forecasting. These forecasting models leverage Transformers to capture dependencies between multiple time steps in a time series, with embedding tokens composed of data from individual time steps. However, challenges arise when applying Transformers to predict long sequences with strong periodicity, leading to performance degradation and increased computational burden. Furthermore, embedding tokens formed one time step at a time may struggle to reveal meaningful information…
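The "one token per time step" limitation flagged in the abstract is easiest to see next to the common alternative of embedding a whole sub-series per token. The sketch below shows only that generic segment-based tokenization (segment length, projection, and all names are illustrative choices), not Skip-Timeformer's specific skip-time interaction.

```python
import numpy as np

# A length-96 univariate series becomes 8 tokens of 12 steps each,
# so each token carries a full local window instead of a single scalar.
rng = np.random.default_rng(3)
series = rng.standard_normal(96)
seg = 12
segments = series.reshape(-1, seg)                 # (8, 12): one token per segment
W = rng.standard_normal((seg, 16)) / np.sqrt(seg)  # toy linear embedding
tokens = segments @ W                               # (8, 16) token embeddings

assert segments.shape == (8, 12)
assert tokens.shape == (8, 16)
```

With 8 tokens instead of 96, the quadratic attention cost over tokens also drops sharply, which is one motivation for moving away from step-wise tokens on long periodic sequences.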
8

Kahatapitiya, Kumara, and Michael S. Ryoo. "SWAT: Spatial Structure Within and Among Tokens." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/106.

Full text of the source
Abstract:
Modeling visual data as tokens (i.e., image patches) using attention mechanisms, feed-forward networks, or convolutions has been highly effective in recent years. Such methods usually share a common pipeline: a tokenization method, followed by a set of layers/blocks for information mixing, both within and among tokens. When image patches are converted into tokens, they are often flattened, discarding the spatial structure within each patch. As a result, any processing that follows (e.g., multi-head self-attention) may fail to recover and/or benefit from such information. In this paper, we argue that…
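The flattening step the abstract criticizes is concrete enough to show directly: a patch keeps its 2D layout until tokenization, after which "the pixel below" is no longer an adjacent index. The image size and patch size below are illustrative.

```python
import numpy as np

# An 8x8 "image" cut into 4x4 patches, ViT-style.
image = np.arange(8 * 8).reshape(8, 8)
p = 4
# (row-block, col-block, row-in-patch, col-in-patch)
patches = image.reshape(8 // p, p, 8 // p, p).transpose(0, 2, 1, 3)  # (2, 2, 4, 4)
tokens = patches.reshape(-1, p * p)                                   # (4, 16) flattened

assert patches.shape == (2, 2, 4, 4)
assert tokens.shape == (4, 16)
# After flattening, vertical neighbours inside a patch sit p indices apart,
# so the 2D adjacency is only implicit in the index arithmetic:
assert patches[0, 0, 1, 0] == tokens[0, p]
```

Any token mixer that treats `tokens` as unordered 16-vectors has no built-in notion that positions 0 and 4 were vertically adjacent pixels, which is the information loss the paper targets.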
9

Liu, Zicheng, Li Wang, Siyuan Li, Zedong Wang, Haitao Lin, and Stan Z. Li. "LongVQ: Long Sequence Modeling with Vector Quantization on Structured Memory." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/510.

Full text of the source
Abstract:
Transformer models have been successful in various sequence processing tasks, but the self-attention mechanism's computational cost limits their practicality for long sequences. Although existing attention variants improve computational efficiency, they have a limited ability to abstract global information effectively given their hand-crafted mixing strategies. On the other hand, state-space models (SSMs) are tailored for long sequences but cannot capture complicated local information. Combining the two as a unified token mixer is therefore a trend in recent long-sequence…
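The vector-quantization building block in the title can be sketched as plain nearest-codebook assignment, as used in VQ-VAE-style models; LongVQ's structured-memory variant adds machinery on top of this. Codebook size, dimensions, and names below are illustrative assumptions.

```python
import numpy as np

def vector_quantize(X, codebook):
    """Map each input vector to its nearest codebook entry (L2 distance).

    Returns the code indices and the quantized vectors; the discrete
    indices are what give a fixed-size summary of arbitrarily long input."""
    # (n, 1, d) - (1, K, d) -> (n, K) squared distances
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]

rng = np.random.default_rng(2)
codebook = rng.standard_normal((8, 4))       # K = 8 codes of dimension 4
X = codebook[[3, 5]] + 0.01                   # points perturbed off codes 3 and 5
idx, Xq = vector_quantize(X, codebook)
assert list(idx) == [3, 5]                    # each point snaps to its code
assert Xq.shape == (2, 4)
```

Because every input collapses onto one of K codes, downstream mixing can operate on a bounded "memory" regardless of sequence length, which is the efficiency argument for using VQ inside a token mixer.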
10

Yan, Fan, and Ming Li. "Towards Generating Summaries for Lexically Confusing Code through Code Erosion." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/512.

Full text of the source
Abstract:
Code summarization aims to summarize code functionality as high-level natural language descriptions to assist in code comprehension. Recent approaches in this field mainly focus on generating summaries for code with precise identifier names, in which meaningful words indicating code functionality can be found. When faced with lexically confusing code, current approaches are likely to fail, since the correlation between code lexical tokens and summaries is scarce. To tackle this problem, we propose a novel summarization framework named VECOS. VECOS introduces an erosion mechanism to conquer the m…