Selection of scholarly literature on the topic "Slot value inference"

Cite a source in APA, MLA, Chicago, Harvard, or any other citation style

Select a type of source:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Slot value inference".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Slot value inference"

1

Shi, Dan, Chaobin You, Jiantao Huang, Taihao Li, and Deyi Xiong. "CORECODE: A Common Sense Annotated Dialogue Dataset with Benchmark Tasks for Chinese Large Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18952–60. http://dx.doi.org/10.1609/aaai.v38i17.29861.

Annotation:
As an indispensable ingredient of intelligence, commonsense reasoning is crucial for large language models (LLMs) in real-world scenarios. In this paper, we propose CORECODE, a dataset that contains abundant commonsense knowledge manually annotated on dyadic dialogues, to evaluate the commonsense reasoning and commonsense conflict detection capabilities of Chinese LLMs. We categorize commonsense knowledge in everyday conversations into three dimensions: entity, event, and social interaction. For easy and consistent annotation, we standardize the form of commonsense knowledge annotation in open-domain dialogues as "domain: slot = value". A total of 9 domains and 37 slots are defined to capture diverse commonsense knowledge. With these pre-defined domains and slots, we collect 76,787 commonsense knowledge annotations from 19,700 dialogues through crowdsourcing. To evaluate and enhance the commonsense reasoning capability for LLMs on the curated dataset, we establish a series of dialogue-level reasoning and detection tasks, including commonsense knowledge filling, commonsense knowledge generation, commonsense conflict phrase detection, domain identification, slot identification, and event causal inference. A wide variety of existing open-source Chinese LLMs are evaluated with these tasks on our dataset. Experimental results demonstrate that these models are not competent to predict CORECODE's plentiful reasoning content, and even ChatGPT could only achieve 0.275 and 0.084 accuracy on the domain identification and slot identification tasks under the zero-shot setting. We release the data and codes of CORECODE at https://github.com/danshi777/CORECODE to promote commonsense reasoning evaluation and study of LLMs in the context of daily conversations.
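As a rough illustration of the "domain: slot = value" annotation form described above, the following Python sketch shows how one such commonsense annotation might be represented and rendered; the domain and slot names used here are invented for illustration and are not taken from CORECODE's actual ontology of 9 domains and 37 slots.

```python
from dataclasses import dataclass

@dataclass
class CommonsenseAnnotation:
    """One 'domain: slot = value' annotation attached to a dialogue."""
    domain: str   # e.g. an entity, event, or social-interaction domain
    slot: str     # one of the pre-defined slots within that domain
    value: str    # free-text value filled in by the crowd annotator

    def __str__(self) -> str:
        # Render in the "domain: slot = value" form used by the dataset.
        return f"{self.domain}: {self.slot} = {self.value}"

# Hypothetical example (domain and slot names are illustrative only).
dialogue_turn = "I left my umbrella at the office, so I got soaked."
annotations = [
    CommonsenseAnnotation("event", "cause", "it was raining"),
    CommonsenseAnnotation("entity", "function", "an umbrella keeps rain off"),
]

print(f"Turn: {dialogue_turn}")
for ann in annotations:
    print(ann)   # e.g. "event: cause = it was raining"
```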
2

Ismail, Nanang, Iim Nursalim, Hendri Maja Saputra, and Teddy Surya Gunawan. "Implementation of Fuzzy Logic Control System on Rotary Car Parking System Prototype." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 2 (November 1, 2018): 706–15. http://dx.doi.org/10.11591/ijeecs.v12.i2.pp706-715.

Annotation:
Rotary car parking systems (RCPS) are an effective parking model for metropolitan areas because the mechanical parking structure is arranged vertically to conserve land. This paper discusses the implementation of fuzzy logic with the Sugeno inference model on a miniature RCPS control system. The research started with a kinematics analysis, and a mathematical model was derived to determine the slot position and the optimal power requirement for each condition. The Sugeno fuzzy inference model takes two variables into account, distance and angle, which were selected because the designed miniature RCPS changes its direction of rotation and rotates in turns. The distance variable was divided into four clusters (Zero, Near, Medium, and Far), and the angle variable likewise into four clusters (Zero, Small, Medium, and Big). Test results on a miniature RCPS consisting of six parking slots showed that the fuzzy-based control performed better than the conventional system. The step response of the control system without fuzzy control showed a rise time of 0.58 seconds, a peak time of 0.85 seconds, a settling time of 0.89 seconds, an overshoot of 0.20%, and a steady-state error of 4.14%, whereas the fuzzy control system gave a rise time of 0.54 seconds, a settling time of 0.83 seconds, and a steady-state error of 2.32%, with no overshoot.
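The abstract does not give the membership functions or the rule base, so the snippet below is only a minimal zero-order Sugeno sketch in the spirit of the controller described above: triangular memberships for the four distance and four angle clusters, a handful of rules, and weighted-average defuzzification. All numeric parameters and rule consequents are assumptions, not the authors' values.

```python
def tri(x, a, b, c):
    """Triangular membership with peak at b and support [a, c]."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

# Assumed membership parameters: distance in parking-slot units, angle in degrees.
DIST = {"Zero": (0, 0, 1), "Near": (0, 1, 3), "Medium": (1, 3, 5), "Far": (3, 6, 6)}
ANGLE = {"Zero": (0, 0, 30), "Small": (0, 60, 120), "Medium": (60, 180, 300), "Big": (180, 360, 360)}

# Zero-order Sugeno rule base: each (distance, angle) premise maps to a crisp
# motor-power consequent in [0, 1] (illustrative values only).
RULES = {("Zero", "Zero"): 0.0, ("Near", "Small"): 0.3,
         ("Medium", "Medium"): 0.6, ("Far", "Big"): 1.0}

def sugeno_power(distance, angle):
    num = den = 0.0
    for (d_label, a_label), z in RULES.items():
        w = min(tri(distance, *DIST[d_label]), tri(angle, *ANGLE[a_label]))  # firing strength
        num += w * z
        den += w
    return num / den if den > 0 else 0.0   # weighted average of rule outputs

print(sugeno_power(distance=2.0, angle=90.0))   # ~0.4 with these made-up parameters
```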
3

Heck, Michael, Nurul Lubis, Carel van Niekerk, Shutong Feng, Christian Geishauser, Hsien-Chin Lin, and Milica Gašić. "Robust Dialogue State Tracking with Weak Supervision and Sparse Data." Transactions of the Association for Computational Linguistics 10 (2022): 1175–92. http://dx.doi.org/10.1162/tacl_a_00513.

Annotation:
Generalizing dialogue state tracking (DST) to new data is especially challenging due to the strong reliance on abundant and fine-grained supervision during training. Sample sparsity, distributional shift, and the occurrence of new concepts and topics frequently lead to severe performance degradation during inference. In this paper we propose a training strategy to build extractive DST models without the need for fine-grained manual span labels. Two novel input-level dropout methods mitigate the negative impact of sample sparsity. We propose a new model architecture with a unified encoder that supports value as well as slot independence by leveraging the attention mechanism. We combine the strengths of triple copy strategy DST and value matching to benefit from complementary predictions without violating the principle of ontology independence. Our experiments demonstrate that an extractive DST model can be trained without manual span labels. Our architecture and training strategies improve robustness towards sample sparsity, new concepts, and topics, leading to state-of-the-art performance on a range of benchmarks. We further highlight our model’s ability to effectively learn from non-dialogue data.
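The paper's unified encoder and triple-copy mechanism are not reproduced here; purely as a sketch of one ingredient named above, input-level dropout for sparse dialogue data, the snippet below randomly masks input tokens before encoding so the model cannot rely on always seeing a slot value's full surface form. The mask token, drop rate, and tokenization are assumptions rather than the authors' implementation.

```python
import random

def token_dropout(tokens, p_drop=0.1, mask="[UNK]", seed=None):
    """Randomly replace input tokens with a mask symbol as a data augmentation."""
    rng = random.Random(seed)
    return [mask if rng.random() < p_drop else tok for tok in tokens]

turn = "i need a cheap hotel in the north part of town".split()
print(token_dropout(turn, p_drop=0.2, seed=0))
```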
4

Park, Se Jin, Minsu Kim, Joanna Hong, Jeongsoo Choi, and Yong Man Ro. "SyncTalkFace: Talking Face Generation with Precise Lip-Syncing via Audio-Lip Memory." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2062–70. http://dx.doi.org/10.1609/aaai.v36i2.20102.

Annotation:
The challenge of talking face generation from speech lies in aligning two different modal information, audio and video, such that the mouth region corresponds to input audio. Previous methods either exploit audio-visual representation learning or leverage intermediate structural information such as landmarks and 3D models. However, they struggle to synthesize fine details of the lips varying at the phoneme level as they do not sufficiently provide visual information of the lips at the video synthesis step. To overcome this limitation, our work proposes Audio-Lip Memory that brings in visual information of the mouth region corresponding to input audio and enforces fine-grained audio-visual coherence. It stores lip motion features from sequential ground truth images in the value memory and aligns them with corresponding audio features so that they can be retrieved using audio input at inference time. Therefore, using the retrieved lip motion features as visual hints, it can easily correlate audio with visual dynamics in the synthesis step. By analyzing the memory, we demonstrate that unique lip features are stored in each memory slot at the phoneme level, capturing subtle lip motion based on memory addressing. In addition, we introduce visual-visual synchronization loss which can enhance lip-syncing performance when used along with audio-visual synchronization loss in our model. Extensive experiments are performed to verify that our method generates high-quality video with mouth shapes that best align with the input audio, outperforming previous state-of-the-art methods.
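As a generic sketch of the key-value memory retrieval described above, with audio features acting as queries over stored lip-motion values, the snippet below uses plain softmax addressing over memory slots. The dimensions and random features are placeholders and this is not the SyncTalkFace architecture itself.

```python
import numpy as np

def retrieve_lip_features(audio_query, keys, values, temperature=1.0):
    """Soft memory addressing: attend over audio-aligned keys and return a
    weighted sum of the stored lip-motion value features."""
    scores = keys @ audio_query / temperature        # one score per memory slot
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax addressing weights
    return weights @ values, weights                 # retrieved lip feature, per-slot weights

rng = np.random.default_rng(0)
num_slots, d_audio, d_lip = 8, 16, 32                # assumed sizes
keys = rng.normal(size=(num_slots, d_audio))         # keys aligned with audio features
values = rng.normal(size=(num_slots, d_lip))         # stored lip-motion features
query = rng.normal(size=d_audio)                     # audio feature at inference time

lip_hint, addressing = retrieve_lip_features(query, keys, values)
print(lip_hint.shape, addressing.round(2))
```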
5

Collette, Sven, Wolfgang M. Pauli, Peter Bossaerts, and John O'Doherty. "Neural computations underlying inverse reinforcement learning in the human brain." eLife 6 (October 30, 2017). http://dx.doi.org/10.7554/elife.29718.

Annotation:
In inverse reinforcement learning an observer infers the reward distribution available for actions in the environment solely through observing the actions implemented by another agent. To address whether this computational process is implemented in the human brain, participants underwent fMRI while learning about slot machines yielding hidden preferred and non-preferred food outcomes with varying probabilities, through observing the repeated slot choices of agents with similar and dissimilar food preferences. Using formal model comparison, we found that participants implemented inverse RL as opposed to a simple imitation strategy, in which the actions of the other agent are copied instead of inferring the underlying reward structure of the decision problem. Our computational fMRI analysis revealed that anterior dorsomedial prefrontal cortex encoded inferences about action-values within the value space of the agent as opposed to that of the observer, demonstrating that inverse RL is an abstract cognitive process divorceable from the values and concerns of the observer him/herself.
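To make the contrast between imitation and inverse RL concrete, here is a toy Bayesian sketch (not the authors' computational model): the observer maintains a posterior over which slot machine the watched agent prefers, assuming softmax choice, whereas an imitation strategy would simply copy the agent's choice frequencies. The inverse temperature and utility coding are assumptions.

```python
import numpy as np

def posterior_over_preference(choices, n_machines=2, beta=3.0):
    """Infer which slot machine the observed agent prefers.
    Hypothesis k: machine k is the preferred one (utility 1, others 0).
    Likelihood: softmax choice with inverse temperature beta; uniform prior."""
    log_post = np.zeros(n_machines)
    for c in choices:
        for k in range(n_machines):
            utilities = np.zeros(n_machines)
            utilities[k] = 1.0
            p_choice = np.exp(beta * utilities)
            p_choice /= p_choice.sum()
            log_post[k] += np.log(p_choice[c])
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

observed = [0, 0, 1, 0, 0]                                 # agent mostly picks machine 0
print(posterior_over_preference(observed))                 # inverse RL: belief about the agent's values
print(np.bincount(observed, minlength=2) / len(observed))  # imitation: just copy the frequencies
```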

Book chapters on the topic "Slot value inference"

1

Laurel, Jacob, and Sasa Misailovic. "Continualization of Probabilistic Programs With Correction." In Programming Languages and Systems, 366–93. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44914-8_14.

Annotation:
Probabilistic programming offers a concise way to represent stochastic models and perform automated statistical inference. However, many real-world models have discrete or hybrid discrete-continuous distributions, for which existing tools may suffer non-trivial limitations. Inference and parameter estimation can be exceedingly slow for these models because many inference algorithms compute results faster (or exclusively) when the distributions being inferred are continuous. To address this discrepancy, this paper presents Leios. Leios is the first approach for systematically approximating arbitrary probabilistic programs that have discrete, or hybrid discrete-continuous random variables. The approximate programs have all their variables fully continualized. We show that once we have the fully continuous approximate program, we can perform inference and parameter estimation faster by exploiting the existing support that many languages offer for continuous distributions. Furthermore, we show that the estimates obtained when performing inference and parameter estimation on the continuous approximation are still comparably close to both the true parameter values and the estimates obtained when performing inference on the original model.
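Leios itself is not sketched here; as a generic illustration of the underlying idea, replacing a discrete random variable with a continuous surrogate so that density-based inference machinery applies, the snippet below smooths a Bernoulli into a two-component Gaussian mixture. The smoothing width is an arbitrary assumption and there is no correction step as in the paper.

```python
import math

def bernoulli_mass(x, p):
    """Exact discrete Bernoulli: point masses at 0 and 1."""
    return p if x == 1 else (1 - p) if x == 0 else 0.0

def continualized_bernoulli_pdf(x, p, sigma=0.1):
    """Continuous surrogate: Gaussian mixture with means 0 and 1.
    As sigma -> 0 this converges weakly to the discrete distribution."""
    def normal_pdf(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return (1 - p) * normal_pdf(x, 0.0, sigma) + p * normal_pdf(x, 1.0, sigma)

for x in (-0.5, 0.0, 0.5, 1.0, 1.5):
    print(x, round(continualized_bernoulli_pdf(x, p=0.3), 4))
```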
2

Smith, Elizabeth L., and David Walshaw. "Modelling Bivariate Extremes in a Region." In Bayesian Statistics 7, 681–90. Oxford: Oxford University Press, 2003. http://dx.doi.org/10.1093/oso/9780198526155.003.0048.

Annotation:
Practitioners of extreme value methodology have been slow to accept the Bayesian paradigm, and the initial work that has been carried out in recent years has reflected the history of the classical approach, in concentrating solely on univariate problems. In this paper we take a first step towards balancing the substantial frequentist literature on multivariate extreme value inference by considering problems of bivariate inference from a Bayesian point of view. We relate the bivariate case to inference problems for extremes of environmental variables recorded at a number of locations in a spatial region. We show how inference for bivariate extreme value models can be implemented using an MCMC scheme, and compare two popular model families. We then select one of these families for use in a practical example involving rainfall data. We employ prior information on marginal behavior of extremes constructed from carefully elicited expert beliefs, while prior beliefs about the dependence parameter relate the strength of dependence inversely to the distance between locations, thus exploiting the spatial aspect inherent in the inference problem. We briefly discuss how our ongoing work in this area will lead to a spatial model which enables inference at a particular location of interest to be improved through the model for bivariate extremal dependencies with other locations. We conclude with a pointer to inference for max-stable process models, which are being developed by the authors to address the problems involved in truly multivariate inference.
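The chapter's models and elicited priors are not reproduced; purely as a sketch of what "MCMC for a bivariate extreme value model" can look like, the code below runs a random-walk Metropolis sampler for the dependence parameter of the bivariate logistic model with unit Fréchet margins under a flat prior. The simulated data, proposal scale, and prior bounds are assumptions.

```python
import numpy as np

def loglik_logistic(alpha, x, y):
    """Log-likelihood of the bivariate logistic extreme value model with
    unit Frechet margins and dependence parameter alpha in (0, 1]."""
    s = x ** (-1.0 / alpha) + y ** (-1.0 / alpha)
    return np.sum(-s ** alpha
                  - (1.0 / alpha + 1.0) * (np.log(x) + np.log(y))
                  + (alpha - 2.0) * np.log(s)
                  + np.log(s ** alpha + 1.0 / alpha - 1.0))

def metropolis_alpha(x, y, n_iter=2000, step=0.05, seed=1):
    rng = np.random.default_rng(seed)
    alpha, samples = 0.5, []
    ll = loglik_logistic(alpha, x, y)
    for _ in range(n_iter):
        prop = alpha + step * rng.normal()
        if 0.1 < prop <= 1.0:                      # flat prior, bounded away from 0 for stability
            ll_prop = loglik_logistic(prop, x, y)
            if np.log(rng.uniform()) < ll_prop - ll:
                alpha, ll = prop, ll_prop
        samples.append(alpha)
    return np.array(samples)

# Toy data on the unit Frechet scale (independent here, so the posterior for
# alpha should concentrate near 1); this is not the rainfall data of the chapter.
rng = np.random.default_rng(0)
x = 1.0 / -np.log(rng.uniform(size=200))
y = 1.0 / -np.log(rng.uniform(size=200))
print(metropolis_alpha(x, y)[-500:].mean())
```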
3

Mareschal, Denis, and Sam Blakeman. "Fast and Slow Learning in Human-Like Intelligence." In Human-Like Machine Intelligence, 316–37. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198862536.003.0016.

Annotation:
In this chapter we review the extent to which rapid one-shot learning, or fast mapping, exists in human learning. We find that it exists in both children and adults, but that it is almost always accompanied by slow, consolidated learning in which new knowledge is integrated with existing knowledge bases. Rapid learning is also present in a broad range of non-human species, particularly in the context of high reward values. We argue that reward prediction errors guide the extent to which fast or slow learning dominates, and present a Complementary Learning Systems neural network model (CTDL) of cortical/hippocampal learning that uses reward prediction errors to adjudicate between learning in the two systems. Developing human-like artificial intelligence will require implementing multiple learning and inference systems governed by a flexible control system with a capacity equal to that of human control systems.
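CTDL itself is not reproduced here; the toy sketch below only illustrates the arbitration idea summarized above: a slow incremental value learner runs alongside a fast one-shot store, and a large unsigned reward prediction error is what triggers one-shot storage. The learning rate, threshold, and tabular setting are invented for illustration.

```python
import numpy as np

class FastSlowLearner:
    """Toy arbitration between slow incremental learning and fast one-shot
    storage, gated by the magnitude of the reward prediction error (RPE)."""

    def __init__(self, n_states, lr=0.1, rpe_threshold=0.5):
        self.slow_values = np.zeros(n_states)   # slowly consolidated estimates
        self.fast_memory = {}                   # one-shot store: state -> reward
        self.lr = lr
        self.rpe_threshold = rpe_threshold

    def value(self, state):
        # The fast memory, when it holds the state, overrides the slow estimate.
        return self.fast_memory.get(state, self.slow_values[state])

    def update(self, state, reward):
        rpe = reward - self.slow_values[state]
        self.slow_values[state] += self.lr * rpe    # slow, incremental update
        if abs(rpe) > self.rpe_threshold:           # surprising outcome:
            self.fast_memory[state] = reward        # remember it in one shot
        return rpe

agent = FastSlowLearner(n_states=3)
agent.update(state=0, reward=1.0)       # large RPE -> stored in the fast system
agent.update(state=1, reward=0.2)       # small RPE -> slow learning only
print(agent.value(0), agent.value(1))   # 1.0 (fast memory) vs 0.02 (slow system)
```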
