Journal articles on the topic 'Race Model Architecture'




Consult the top 50 journal articles for your research on the topic 'Race Model Architecture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Townsend, James T., and Georgie Nozawa. "Serial exhaustive models can violate the race model inequality: Implications for architecture and capacity." Psychological Review 104, no. 3 (1997): 595–602. http://dx.doi.org/10.1037/0033-295x.104.3.595.

2

Samuel, Arthur G. "Merge: Contorted architecture, distorted facts, and purported autonomy." Behavioral and Brain Sciences 23, no. 3 (June 2000): 345–46. http://dx.doi.org/10.1017/s0140525x00443244.

Abstract:
Norris, McQueen & Cutler claim that Merge is an autonomous model, superior to the interactive TRACE model and the autonomous Race model. Merge is actually an interactive model, despite claims to the contrary. The presentation of the literature seriously distorts many findings, in order to advocate autonomy. It is Merge's interactivity that allows it to simulate findings in the literature.
3

Pugin, Konstantin V., Kirill A. Mamrosenko, and Alexander M. Giatsintov. "Software architecture for display controller and operating system interaction." Radioelectronics. Nanosystems. Information Technologies. 13, no. 1 (March 27, 2021): 87–94. http://dx.doi.org/10.17725/rensit.2021.13.087.

Abstract:
The article describes solutions for developing programs that provide interaction between the Linux operating system and multiple display controller hardware blocks (outputs) that share one clock-generation IP block with a phase-locked loop (PLL). Linux has no API for such devices, so a new software model was developed. This model is based on the official Linux GPU developer driver model but was modified to cover the case described above. The article reviews three models for display controller driver development – monolithic, component, and semi-monolithic. These models cannot cover this case because they assume that one clock generator is attached to one output. A new model was therefore developed that is based on the component model but adds mechanics to prevent the race condition that can occur when one clock generator drives multiple outputs. The article also presents a modified model for bootloader graphics drivers. This model is simplified relative to the Linux model but retains a component structure (with fewer components) and race-prevention mechanics (with weaker conditions). Hardware-interaction driver components developed using the provided software models are interchangeable between Linux and the bootloader.
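The race condition described above, in which several display outputs contend for a single PLL clock generator, can be illustrated with a minimal sketch. The following Python snippet is only an illustration of the general idea of serializing access to the shared clock block behind a lock; the names SharedPLL, acquire_rate, and release are hypothetical and are not taken from the article or from the Linux DRM API.

```python
import threading

class SharedPLL:
    """One clock-generation block shared by several display outputs (hypothetical model)."""

    def __init__(self):
        self._lock = threading.Lock()  # serializes every reconfiguration of the shared PLL
        self.rate_hz = None            # currently programmed rate
        self.users = set()             # outputs currently relying on that rate

    def acquire_rate(self, output_id, rate_hz):
        # Without the lock, two outputs could interleave their check-then-program
        # sequences and leave the PLL tuned for only one of them (the race condition).
        with self._lock:
            if self.users and rate_hz != self.rate_hz:
                raise RuntimeError(f"PLL busy at {self.rate_hz} Hz; cannot retune for {output_id}")
            self.rate_hz = rate_hz
            self.users.add(output_id)

    def release(self, output_id):
        with self._lock:
            self.users.discard(output_id)
            if not self.users:
                self.rate_hz = None

pll = SharedPLL()
pll.acquire_rate("HDMI-0", 148_500_000)  # first output tunes the shared PLL
pll.acquire_rate("LVDS-1", 148_500_000)  # second output may only join at the same rate
```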
4

Otto, Thomas U., and Pascal Mamassian. "Multisensory Decisions: the Test of a Race Model, Its Logic, and Power." Multisensory Research 30, no. 1 (2017): 1–24. http://dx.doi.org/10.1163/22134808-00002541.

Abstract:
The use of separate multisensory signals is often beneficial. A prominent example is the speed-up of responses to two redundant signals relative to the components, which is known as the redundant signals effect (RSE). A convenient explanation for the effect is statistical facilitation, which is inherent in the basic architecture of race models (Raab, 1962, Trans. N. Y. Acad. Sci. 24, 574–590). However, this class of models has been largely rejected in multisensory research, which we think results from an ambiguity in definitions and misinterpretations of the influential race model test (Miller, 1982, Cogn. Psychol. 14, 247–279). To resolve these issues, we here discuss four main items. First, we clarify definitions and ask how successful models of perceptual decision making can be extended from uni- to multisensory decisions. Second, we review the race model test and emphasize elements leading to confusion with its interpretation. Third, we introduce a new approach to study the RSE. As a major change of direction, our working hypothesis is that the basic race model architecture is correct even if the race model test seems to suggest otherwise. Based on this approach, we argue that understanding the variability of responses is the key to understanding the RSE. Finally, we highlight the critical role of model testability to advance research on multisensory decisions. Despite being largely rejected, it should be recognized that race models, as part of a broader class of parallel decision models, demonstrate, in fact, a convincing explanatory power in a range of experimental paradigms. To improve research consistency in the future, we conclude with a short checklist for RSE studies.
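Because the race model test discussed in this abstract reduces to a distributional bound (Miller's inequality states that, under a race architecture, the CDF of redundant-signal response times can never exceed the sum of the two unisensory CDFs, F_AV(t) <= F_A(t) + F_B(t)), the mechanics of the test are easy to sketch. The following Python/NumPy snippet is a generic illustration of the usual quantile-based check with invented reaction-time samples; it is not the authors' analysis code.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of a sample of reaction times, evaluated at times t."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_violation(rt_audio, rt_visual, rt_redundant, probs=np.arange(0.05, 1.0, 0.05)):
    """Compare the redundant-condition CDF with Miller's bound at a grid of quantiles.

    Positive values indicate a violation of the race model inequality
    F_AV(t) <= F_A(t) + F_B(t) at the corresponding quantile of the redundant condition.
    """
    t = np.quantile(rt_redundant, probs)                              # evaluation points
    bound = np.minimum(ecdf(rt_audio, t) + ecdf(rt_visual, t), 1.0)   # Miller's bound, capped at 1
    return ecdf(rt_redundant, t) - bound

# Hypothetical reaction-time samples (seconds)
rng = np.random.default_rng(0)
rt_a = rng.normal(0.45, 0.05, 200)
rt_v = rng.normal(0.48, 0.05, 200)
rt_av = np.minimum(rng.normal(0.45, 0.05, 200), rng.normal(0.48, 0.05, 200))  # a pure race
print(race_model_violation(rt_a, rt_v, rt_av))  # mostly <= 0 for a pure race (sampling noise aside)
```

Positive differences at any quantile are what the test treats as evidence against the plain race architecture; for the simulated pure race above they should stay at or below zero up to sampling noise.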
5

Maull, Thomas, and Adriano Schommer. "Optimizing Torque Delivery for an Energy-Limited Electric Race Car Using Model Predictive Control." World Electric Vehicle Journal 13, no. 12 (November 24, 2022): 224. http://dx.doi.org/10.3390/wevj13120224.

Abstract:
This paper presents a torque controller for the energy optimization of the powertrain of an electric Formula Student race car. Limited battery capacity within electric race car designs requires energy management solutions to minimize lap time while simultaneously controlling and managing the overall energy consumption to finish the race. The energy-managing torque control algorithm developed in this work optimizes the finite onboard energy from the battery pack to reduce lap time and energy consumption when energy deficits occur. The longitudinal dynamics of the vehicle were represented by a linearized first-principles model and validated against a parameterized electric Formula Student race car model in commercial lap time simulation software. A Simulink-based model predictive controller (MPC) architecture was created to balance energy use requirements with optimum lap time. This controller was tested against a hardware-limited and torque-limited system in a constant torque request and a varying torque request scenario. The controller decreased the elapsed time to complete a 150 m straight-line acceleration by 11.4% over the torque-limited solution and 13.5% in a 150 m Formula Student manoeuvre.
6

Marewski, Julian N., and Katja Mehlhorn. "Using the ACT-R architecture to specify 39 quantitative process models of decision making." Judgment and Decision Making 6, no. 6 (August 2011): 439–519. http://dx.doi.org/10.1017/s1930297500002473.

Abstract:
Hypotheses about decision processes are often formulated qualitatively and remain silent about the interplay of decision, memorial, and other cognitive processes. At the same time, existing decision models are specified at varying levels of detail, making it difficult to compare them. We provide a methodological primer on how detailed cognitive architectures such as ACT-R allow remedying these problems. To make our point, we address a controversy, namely, whether noncompensatory or compensatory processes better describe how people make decisions from the accessibility of memories. We specify 39 models of accessibility-based decision processes in ACT-R, including the noncompensatory recognition heuristic and various other popular noncompensatory and compensatory decision models. Additionally, to illustrate how such models can be tested, we conduct a model comparison, fitting the models to one experiment and letting them generalize to another. Behavioral data are best accounted for by race models. These race models embody the noncompensatory recognition heuristic and compensatory models as a race between competing processes, dissolving the dichotomy between existing decision models.
7

Venkataratamani, Prasanna Venkhatesh, and Aditya Murthy. "Distinct mechanisms explain the control of reach speed planning: evidence from a race model framework." Journal of Neurophysiology 120, no. 3 (September 1, 2018): 1293–306. http://dx.doi.org/10.1152/jn.00707.2017.

Abstract:
Previous studies have investigated the computational architecture underlying the voluntary control of reach movements that demands a change in position or direction of movement planning. Here we used a novel task in which subjects had to either increase or decrease the movement speed according to a change in target color that occurred randomly during a trial. The applicability of different race models to such a speed redirect task was assessed. We found that the predictions of an independent race model that instantiated an abort-and-replan strategy were consistent with all aspects of performance in the fast-to-slow speed condition. The results from modeling indicated a peculiar asymmetry, in that although the fast-to-slow speed change required inhibition, none of the standard race models was able to explain how movements changed from slow to fast speeds. Interestingly, a weighted averaging model that simulated the gradual merging of two kinematic plans explained behavior in the slow-to-fast speed task. In summary, our work shows how a race model framework can provide an understanding of how the brain controls different aspects of reach movement planning and help distinguish between an abort-and-replan strategy and merging of plans. NEW & NOTEWORTHY For the first time, a race model framework was used to understand how reach speeds are modified. We provide evidence that a fast-to-slow speed change required aborting the current plan and a complete respecification of a new plan, while none of the race models was able to explain an instructed increase of hand movement speed, which was instead accomplished by a merging of a new kinematic plan with the existing kinematic plan.
8

Spataro, Davide, Donato D’Ambrosio, Giuseppe Filippone, Rocco Rongo, William Spataro, and Davide Marocco. "The new SCIARA-fv3 numerical model and acceleration by GPGPU strategies." International Journal of High Performance Computing Applications 31, no. 2 (July 27, 2016): 163–76. http://dx.doi.org/10.1177/1094342015584520.

Abstract:
This paper presents the parallel implementation, using the Compute Unified Device Architecture (CUDA), of the SCIARA-fv3 Complex Cellular Automata model for simulating lava flows. The computational model is based on a Bingham-like rheology, and both flow velocity and the physical time corresponding to a computational step have been made explicit. The parallelization design has involved, among other issues, the application of strategies that avoid incorrect computation results due to race conditions and achieve the best performance and occupancy of the underlying available hardware. Two hardware types were adopted for testing different versions of the CUDA implementations of the SCIARA-fv3 model, namely the GTX 580 and GTX 680 graphics processors. Despite the model's computational complexity, the experiments carried out on the parallelized model have shown significant performance improvements, confirming that graphics hardware can represent a valid solution for the implementation of Cellular Automata models.
9

Pinho, João, Gabriel Costa, Pedro U. Lima, and Miguel Ayala Botto. "Learning-Based Model Predictive Control for Autonomous Racing." World Electric Vehicle Journal 14, no. 7 (June 21, 2023): 163. http://dx.doi.org/10.3390/wevj14070163.

Abstract:
In this paper, we present the adaptation of the terminal component learning-based model predictive control (TC-LMPC) architecture for autonomous racing to the Formula Student Driverless (FSD) context. We test the TC-LMPC architecture, a reference-free controller that is able to learn from previous iterations by building an appropriate terminal safe set and terminal cost from collected trajectories and input sequences, in a vehicle simulator dedicated to the FSD competition. One major problem in autonomous racing is the difficulty in obtaining accurate highly nonlinear vehicle models that cover the entire performance envelope. This is more severe as the controller pushes for incrementally more aggressive behavior. To address this problem, we use offline and online measurements and machine learning (ML) techniques for the online adaptation of the vehicle model. We test two sparse Gaussian process regression (GPR) approximations for model learning. The novelty in the model learning segment is the use of a selection method for the initial training dataset that maximizes the information gain criterion. The TC-LMPC with model learning achieves a 5.9 s reduction (3%) in the total 10-lap FSD race time.
10

Corneil, Brian D., and James K. Elsley. "Countermanding Eye-Head Gaze Shifts in Humans: Marching Orders Are Delivered to the Head First." Journal of Neurophysiology 94, no. 1 (July 2005): 883–95. http://dx.doi.org/10.1152/jn.01171.2004.

Abstract:
The countermanding task requires subjects to cancel a planned movement on appearance of a stop signal, providing insights into response generation and suppression. Here, we studied human eye-head gaze shifts in a countermanding task with targets located beyond the horizontal oculomotor range. Consistent with head-restrained saccadic countermanding studies, the proportion of gaze shifts on stop trials increased the longer the stop signal was delayed after target presentation, and gaze shift stop-signal reaction times (SSRTs: a derived statistic measuring how long it takes to cancel a movement) averaged ∼120 ms across seven subjects. We also observed a marked proportion of trials (13% of all stop trials) during which gaze remained stable but the head moved toward the target. Such head movements were more common at intermediate stop signal delays. We never observed the converse sequence wherein gaze moved while the head remained stable. SSRTs for head movements averaged ∼190 ms or ∼70–75 ms longer than gaze SSRTs. Although our findings are inconsistent with a single race to threshold as proposed for controlling saccadic eye movements, movement parameters on stop trials attested to interactions consistent with a race model architecture. To explain our data, we tested two extensions to the saccadic race model. The first assumed that gaze shifts and head movements are controlled by parallel but independent races. The second model assumed that gaze shifts and head movements are controlled by a single race, preceded by terminal ballistic intervals not under inhibitory control, and that the head-movement branch is activated at a lower threshold. Although simulations of both models produced acceptable fits to the empirical data, we favor the second alternative as it is more parsimonious with recent findings in the oculomotor system. Using the second model, estimates for gaze and head ballistic intervals were ∼25 and 90 ms, respectively, consistent with the known physiology of the final motor paths. Further, the threshold of the head movement branch was estimated to be 85% of that required to activate gaze shifts. From these results, we conclude that a commitment to a head movement is made in advance of gaze shifts and that the comparative SSRT differences result primarily from biomechanical differences inherent to eye and head motion.
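The SSRT mentioned here is a derived statistic rather than a direct observation; under the independent race model it is commonly estimated with the integration method, which reads the go reaction-time quantile corresponding to the observed probability of responding on stop trials and then subtracts the stop-signal delay. The sketch below is a generic Python illustration of that estimator with invented numbers; it is not the authors' analysis pipeline.

```python
import numpy as np

def ssrt_integration(go_rts, stop_signal_delays, p_respond_given_stop):
    """Estimate SSRT with the integration method of the independent race model.

    go_rts: reaction times from go trials (no stop signal), in ms
    stop_signal_delays: SSDs used on stop trials, in ms
    p_respond_given_stop: observed probability of (erroneously) responding on stop trials
    """
    go_rts = np.sort(np.asarray(go_rts, dtype=float))
    # The finishing time of the stop process is the go-RT quantile at p(respond | stop) ...
    finish_time = np.quantile(go_rts, p_respond_given_stop)
    # ... measured from stop-signal onset, i.e., minus the mean stop-signal delay.
    return finish_time - np.mean(stop_signal_delays)

# Hypothetical example: mean go RT ~450 ms, SSDs of 150-250 ms, 40% stop failures
rng = np.random.default_rng(1)
go = rng.normal(450, 60, 500)
ssds = np.array([150, 200, 250])
print(round(ssrt_integration(go, ssds, 0.40), 1))  # prints the SSRT estimate in milliseconds
```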
11

Gopal, Atul, and Aditya Murthy. "Eye-hand coordination during a double-step task: evidence for a common stochastic accumulator." Journal of Neurophysiology 114, no. 3 (September 2015): 1438–54. http://dx.doi.org/10.1152/jn.00276.2015.

Abstract:
Many studies of reaching and pointing have shown significant spatial and temporal correlations between eye and hand movements. Nevertheless, it remains unclear whether these correlations are incidental, arising from common inputs (independent model); whether these correlations represent an interaction between otherwise independent eye and hand systems (interactive model); or whether these correlations arise from a single dedicated eye-hand system (common command model). Subjects were instructed to redirect gaze and pointing movements in a double-step task in an attempt to decouple eye-hand movements and causally distinguish between the three architectures. We used a drift-diffusion framework in the context of a race model, which has been previously used to explain redirect behavior for eye and hand movements separately, to predict the pattern of eye-hand decoupling. We found that the common command architecture could best explain the observed frequency of different eye and hand response patterns to the target step. A common stochastic accumulator for eye-hand coordination also predicts comparable variances, despite significant difference in the means of the eye and hand reaction time (RT) distributions, which we tested. Consistent with this prediction, we observed that the variances of the eye and hand RTs were similar, despite much larger hand RTs (∼90 ms). Moreover, changes in mean eye RTs, which also increased eye RT variance, produced a similar increase in mean and variance of the associated hand RT. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning.
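The key prediction described in this abstract, namely that a single shared accumulator produces eye and hand reaction times with different means but comparable variances, can be made concrete with a toy simulation. The Python sketch below is purely illustrative; the drift, noise, and efferent-delay values are invented and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def accumulation_time(drift=0.02, noise=0.3, threshold=10.0, dt=1.0, max_t=2000):
    """Time (ms) for a single noisy accumulator to reach threshold (Euler simulation)."""
    x, t = 0.0, 0.0
    while x < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

# Common-command model: one shared accumulator per trial; eye and hand RTs differ only
# by fixed efferent delays, so the RT means differ while the RT variances stay comparable.
decision_times = np.array([accumulation_time() for _ in range(500)])
eye_rt = decision_times + 60    # hypothetical eye efferent delay (ms)
hand_rt = decision_times + 150  # hypothetical hand efferent delay (ms)

print(eye_rt.mean(), hand_rt.mean())  # means differ by ~90 ms
print(eye_rt.std(), hand_rt.std())    # standard deviations are identical by construction
```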
12

Litschko, Christof, Stefan Brühmann, Agnes Csiszár, Till Stephan, Vanessa Dimchev, Julia Damiano-Guercio, Alexander Junemann, et al. "Functional integrity of the contractile actin cortex is safeguarded by multiple Diaphanous-related formins." Proceedings of the National Academy of Sciences 116, no. 9 (February 11, 2019): 3594–603. http://dx.doi.org/10.1073/pnas.1821638116.

Abstract:
The contractile actin cortex is a thin layer of filamentous actin, myosin motors, and regulatory proteins beneath the plasma membrane crucial to cytokinesis, morphogenesis, and cell migration. However, the factors regulating actin assembly in this compartment are not well understood. Using the Dictyostelium model system, we show that the three Diaphanous-related formins (DRFs) ForA, ForE, and ForH are regulated by the RhoA-like GTPase RacE and synergize in the assembly of filaments in the actin cortex. Single or double formin-null mutants displayed only moderate defects in cortex function whereas the concurrent elimination of all three formins or of RacE caused massive defects in cortical rigidity and architecture as assessed by aspiration assays and electron microscopy. Consistently, the triple formin and RacE mutants encompassed large peripheral patches devoid of cortical F-actin and exhibited severe defects in cytokinesis and multicellular development. Unexpectedly, many forA−/E−/H− and racE− mutants protruded efficiently, formed multiple exaggerated fronts, and migrated with morphologies reminiscent of rapidly moving fish keratocytes. In 2D-confinement, however, these mutants failed to properly polarize and recruit myosin II to the cell rear essential for migration. Cells arrested in these conditions displayed dramatically amplified flow of cortical actin filaments, as revealed by total internal reflection fluorescence (TIRF) imaging and iterative particle image velocimetry (PIV). Consistently, individual and combined, CRISPR/Cas9-mediated disruption of genes encoding mDia1 and -3 formins in B16-F1 mouse melanoma cells revealed enhanced frequency of cells displaying multiple fronts, again accompanied by defects in cell polarization and migration. These results suggest evolutionarily conserved functions for formin-mediated actin assembly in actin cortex mechanics.
13

He, Quan. "Research on Basketball Teaching Method Based on the Dynamic Access Control Model of SOA." Advanced Materials Research 791-793 (September 2013): 1445–49. http://dx.doi.org/10.4028/www.scientific.net/amr.791-793.1445.

Abstract:
With the development of computer and information science and technology, a mode controlled by dynamic access and feedback can adjust the effectiveness of the physical education and sports teaching process. The basketball elective course is one of the favorite sports programs among college students. Game teaching is a new teaching model that not only breaks with the traditional, monotonous teaching method but also uses racing in the teaching and training process. It can mobilize students' interest and enthusiasm for learning and also improve their efficiency. The dynamic SOA access model is a service-oriented architecture model. Through the SOA access control model, teachers and students can carry out interactive teaching, communicate with each other about the teaching method and process in a timely manner, and review the effectiveness of the teaching. On this basis, the paper designs and develops a new SOA-ARBA dynamic access control model and combines it with the basketball teaching process. It also designs and evaluates a method for assessing the effect of the basketball teaching method and obtains a comparison chart of learning interest and achievement under the game teaching method.
14

Guevara-Escudero, Melisa, Angy N. Osorio, and Andrés J. Cortés. "Integrative Pre-Breeding for Biotic Resistance in Forest Trees." Plants 10, no. 10 (September 26, 2021): 2022. http://dx.doi.org/10.3390/plants10102022.

Abstract:
Climate change is unleashing novel biotic antagonistic interactions for forest trees that may jeopardize populations' persistence. Therefore, this review article envisions highlighting major opportunities from ecological evolutionary genomics to assist the identification, conservation, and breeding of biotic resistance in forest tree species. Specifically, we first discuss how assessing the genomic architecture of biotic stress resistance enables us to recognize a more polygenic nature for a trait typically regarded as Mendelian, an expectation from the Fisherian runaway pathogen–host concerted arms-race evolutionary model. Secondly, we outline innovative pipelines to capture and harness natural tree pre-adaptations to biotic stresses by merging tools from the ecology, phylo-geography, and omnigenetics fields within a predictive breeding platform. Promoting integrative ecological genomic studies promises a better understanding of antagonistic co-evolutionary interactions, as well as more efficient breeding utilization of resistant phenotypes.
15

Setyabudi, Irawan, and Petrus Paulus Pain Pati. "PERMUKIMAN TRADISIONAL DI KAWASAN LANSKAP PANTAI DI SENDIKI, DESA TAMBAKREJO KECAMATAN SUMBERMANJING WETAN KABUPATEN MALANG." BUANA SAINS 19, no. 1 (October 11, 2019): 69. http://dx.doi.org/10.33366/bs.v19i1.1528.

Abstract:
Traditional settlements are places that still hold customary and cultural values related to beliefs or religion that are specific or unique to a particular society. Each region of Nusantara has its own culture, and within it traditional settlements serve as identities. This research is located in the Sendiki beach area, a tourist attraction in the south of Malang regency, precisely in the village of Tambakrejo. The problem is the diminishing public awareness of preserving existing settlement forms under the pressure of modernization. The settlement pattern in the village of Tambakrejo is distinctive: houses line up along the road following the traditional Tanean Lanjeng settlement pattern, because the settlements are dominated by people of Madurese descent. On the other hand, because the village lies in East Java, the form of the houses has also been adapted to the joglo building type. Another problem is the low public awareness of maintaining environmental quality, which degrades the quality of the ecosystem. The aims of this research include identifying the architectural forms of the houses, landscapes, and traditional settlements in the village of Tambakrejo as an effort to preserve them. The research was conducted qualitatively, analyzing the data through Focus Group Discussion (FGD). The research framework is adapted from the ideas of Rapoport. The stages of the research run from the identification of physical, biophysical, socio-cultural, and economic aspects to the description, analysis, and synthesis of settlement patterns and traditional houses. The results take the form of a description of the traditional settlement patterns, the formation of the residential architecture, and the landscape patterns of the settlements. The conclusions of this study include documentation of the traditional architecture, landscapes, and settlements as knowledge for respecting the natural environment and the culture of the people living there.
16

Azarianpour Esfahani, Sepideh, Pingfu Fu, Haider Mahdi, and Anant Madabhushi. "Computational features of TIL architecture are differentially prognostic of uterine cancer between African and Caucasian American women." Journal of Clinical Oncology 39, no. 15_suppl (May 20, 2021): 5585. http://dx.doi.org/10.1200/jco.2021.39.15_suppl.5585.

Abstract:
5585 Background: Although the vast majority of endometrial cancer (EC) is early-stage and thus curable by surgery, chemotherapy, and radiotherapy (with at least 85% 5-year OS), a fraction of these cancers are aggressive neoplasms, such as high-grade or deeply invasive lesions, and thus exhibit poor prognosis. African American (AA) women are disproportionately affected by high-grade EC and have an 80% higher mortality rate compared with Caucasian American (CA) women. In this work, we evaluated the prognostic ability of computational measurements of the architecture of tumor-infiltrating lymphocytes (ArcTIL) from H&E slide images for EC. We also investigated the presence of morphologic differences in terms of ArcTIL features between AA and CA women and whether ArcTIL-based population-specific models were more prognostic of OS in AA women compared to a population-agnostic model. Methods: The study included digitized H&E tissue slides from 445 post-surgery EC patients from TCGA who received further chemotherapy or radiotherapy; only AA and CA patients were included, and patients without reported race or from other populations were excluded. The dataset was divided into a discovery set (D1, n = 300) and a validation set (D2, n = 145), while ensuring population balance between the two splits (D1(AA) = 65, D1(CA) = 235, D2(AA) = 37, D2(CA) = 108). A machine learning approach was employed to identify tumor regions and tumor-associated stroma on the diagnostic slides and then used to automatically identify TILs within these compartments. Graph network theory-based computational algorithms were used to capture 85 quantitative descriptors of the architectural patterns of intratumoral and stromal TILs. A multivariable Cox regression model (MCRM) was used to create population-specific prognostic models (MAA, MCA) and a population-agnostic model (MAA+CA) to predict OS. All 3 models were evaluated on D2(AA), D2(CA), and D2. Results: MAA identified 4 prognostic features relating to the interaction of TIL clusters with cancer nuclei in the stromal compartment and was prognostic of OS on D2(AA) (see Table) but not prognostic in D2(CA) or D2(AA+CA). MCA and MAA+CA identified 7 and 6 prognostic features, respectively, relating to the interaction of TIL clusters with cancer nuclei (both in the epithelial and stromal regions) and were prognostic of OS on D2(CA) and D2, but not prognostic in D2(AA). Conclusions: Our findings suggest an important role of stromal TIL architecture in prognosticating OS in AA women with EC, while epithelial TIL features were more prognostic in CA women. These findings need to be validated in larger, multi-site validation sets. [Table: see text]
17

Ghaffarian, Mohammad Saeid, Gholamreza Moradi, Somayyeh Khajehpour, Mohammad Mahdi Honari, and Rashid Mirzavand. "Dual-Band/Dual-Mode Rat-Race/Branch-Line Coupler Using Split Ring Resonators." Electronics 10, no. 15 (July 28, 2021): 1812. http://dx.doi.org/10.3390/electronics10151812.

Abstract:
A novel dual-band/dual-mode compact hybrid coupler which acts as a dual-band branch-line coupler at the lower band and as a rat-race coupler at the higher band is presented in this paper. One of the most interesting features of the proposed structure is that outputs of the proposed coupler in each mode of operation are on the same side. This unique design is implemented using artificial transmission lines (ATLs) based on open split ring resonators (OSRR). The low-cost miniaturized coupler could be operated as a dual-band 90° branch-line coupler at 3.3 and 3.85 GHz and 180° rat-race coupler at 5.3 GHz. The proposed coupler could be utilized in the antenna array feeding circuit to form the antenna beam. The structure’s analytical circuit design based on its equivalent circuit model is provided and verified by measurement results.
18

Spielberg, Nathan A., Matthew Brown, Nitin R. Kapania, John C. Kegelman, and J. Christian Gerdes. "Neural network vehicle models for high-performance automated driving." Science Robotics 4, no. 28 (March 27, 2019): eaaw1975. http://dx.doi.org/10.1126/scirobotics.aaw1975.

Abstract:
Automated vehicles navigate through their environment by first planning and subsequently following a safe trajectory. To prove safer than human beings, they must ultimately perform these tasks as well or better than human drivers across a broad range of conditions and in critical situations. We show that a feedforward-feedback control structure incorporating a simple physics-based model can be used to track a path up to the friction limits of the vehicle with performance comparable with a champion amateur race car driver. The key is having the appropriate model. Although physics-based models are useful in their transparency and intuition, they require explicit characterization around a single operating point and fail to make use of the wealth of vehicle data generated by autonomous vehicles. To circumvent these limitations, we propose a neural network structure using a sequence of past states and inputs motivated by the physical model. The neural network achieved better performance than the physical model when implemented in the same feedforward-feedback control architecture on an experimental vehicle. More notably, when trained on a combination of data from dry roads and snow, the model was able to make appropriate predictions for the road surface on which the vehicle was traveling without the need for explicit road friction estimation. These findings suggest that the network structure merits further investigation as the basis for model-based control of automated vehicles over their full operating range.
19

Azhar, M. Waqar, Miquel Pericàs, and Per Stenström. "Task-RM: A Resource Manager for Energy Reduction in Task-Parallel Applications under Quality of Service Constraints." ACM Transactions on Architecture and Code Optimization 19, no. 1 (March 31, 2022): 1–26. http://dx.doi.org/10.1145/3494537.

Abstract:
Improving energy efficiency is an important goal of computer system design. This article focuses on a general model of task-parallel applications under quality-of-service requirements on the completion time. Our technique, called Task-RM, exploits the variance in task execution times and the imbalance between tasks to allocate just enough resources, in terms of voltage-frequency and core allocation, so that the application completes before the deadline. Moreover, we provide a solution that can harness additional energy savings with the availability of additional processors. We observe that, for the proposed run-time resource manager to allocate resources, it requires a specification of the soft deadlines for the tasks. This is accomplished by analyzing the energy-saving scenarios offline and by providing Task-RM with the performance requirements of the tasks. The evaluation shows an energy saving of 33% compared to race-to-idle and 22% compared to dynamic slack allocation (DSA), with an overhead of less than 1%.
20

Salomatin, Aleksey Yu, and Natalya V. Makeeva. "Ethnic and Race Problems in a Federal State (Based on the Results of the U.S. Presidential Campaign of 2020)." Civil society in Russia and abroad 1 (March 11, 2021): 3–7. http://dx.doi.org/10.18572/2221-3287-2021-1-3-7.

Abstract:
The United States, originally formed as a haven for immigrants of different ethnic and racial backgrounds, has felt the inefficiency of its famous «melting pot» since the beginning of the XXI century. In this ethnically and racially conflicted state, one of the long-standing problems was considered to be the problem of African-Americans, who after World War II dispersed outside the southern states. This problem was only partially solved, and the policy of positive discrimination in favor of blacks has drawn increasing criticism in recent years. At the same time, the country, in which the proportion of white citizens is sharply decreasing and the number of Spanish-speaking people and people of Asian origin is increasing, is experiencing an increasing cultural, civilizational, and geographical split. During the 2020 presidential campaign, the United States faced not only an epidemic of coronavirus infection but also unprecedented protest activity, which was a test for the outdated mechanism of this federal state. The domestic political crisis might have been averted or mitigated if the United States had had a more centralized model of federalism, which would have strengthened administrative coordination and allowed ethno-racial egoism to be contained within certain limits.
21

Zhang, Zhuosheng, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, and Rui Wang. "SG-Net: Syntax-Guided Machine Reading Comprehension." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9636–43. http://dx.doi.org/10.1609/aaai.v34i05.6511.

Abstract:
For machine reading comprehension, the capacity to effectively model the linguistic knowledge from detail-riddled and lengthy passages and to get rid of the noise is essential to improve its performance. Traditional attentive models attend to all words without explicit constraint, which results in inaccurate concentration on some dispensable words. In this work, we propose using syntax to guide the text modeling by incorporating explicit syntactic constraints into the attention mechanism for better linguistically motivated word representations. In detail, for the self-attention network (SAN)-sponsored Transformer-based encoder, we introduce a syntactic dependency of interest (SDOI) design into the SAN to form an SDOI-SAN with syntax-guided self-attention. The syntax-guided network (SG-Net) is then composed of this extra SDOI-SAN and the SAN from the original Transformer encoder through a dual contextual architecture for better linguistics-inspired representation. To verify its effectiveness, the proposed SG-Net is applied to the typical pre-trained language model BERT, which is based on a Transformer encoder. Extensive experiments on popular benchmarks including SQuAD 2.0 and RACE show that the proposed SG-Net design helps achieve substantial performance improvement over strong baselines.
22

Yan, Yifan, and Weiwei Yu. "Cross-age face synthesis based on conditional adversarial autoencoder." Frontiers in Computing and Intelligent Systems 3, no. 1 (March 17, 2023): 65–71. http://dx.doi.org/10.54097/fcis.v3i1.6026.

Abstract:
Face aging aims to render face images with a desired age attribute. It has a tremendous impact on a wide range of applications, e.g., criminal investigation and entertainment. The rapid development of generative adversarial networks (GANs) has shown impressive results in face aging. Among them, the Conditional Adversarial Autoencoder (CAAE) proposed in 2017 has achieved good results in face aging. However, the generated faces still have the problems that the aging features are not obvious and the identity information is not well maintained. In addition, research has shown that the human aging process is affected by genes: different races have different external characteristics of aging. However, current research does not take the racial factor into account and ignores racial differences in the aging process, which affects the accuracy of the transformation. To solve the above problems, this paper proposes cross-age face synthesis based on a conditional adversarial autoencoder. First, a conditional adversarial autoencoder is used as the infrastructure to build a cross-age face synthesis model based on race constraints. Secondly, the discriminator is composed of a discriminant network and a classification network, and a category loss function is designed to generate a realistic face that matches the target age. Finally, the model uses an identity feature extractor and a discriminator with a multi-scale architecture. Through multi-level discrimination, from pixel values to high-level semantic information, the loss of personal identity features is minimized. The UTKFace and MegaAge-Asian datasets are used in the experiments. Three comparative experiments are designed for the above improvements. The results show that the racial constraints allow the generated images to effectively maintain racial characteristics such as skin color and texture; the classification function of the discriminator improves the aging effect; and the design of the multi-scale discriminator gives the generated face a more stable local structure and identity information. Qualitative and quantitative analysis shows that this method has higher aging accuracy and identity retention than the CAAE method.
23

Singer, Uriel, and Kira Radinsky. "EqGNN: Equalized Node Opportunity in Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8333–41. http://dx.doi.org/10.1609/aaai.v36i8.20808.

Abstract:
Graph neural networks (GNNs) have been widely used for supervised learning tasks on graphs, reaching state-of-the-art results. However, little work has been dedicated to creating unbiased GNNs, i.e., where the classification is uncorrelated with sensitive attributes such as race or gender. Some approaches ignore the sensitive attributes or optimize for the criterion of statistical parity for fairness. However, it has been shown that neither approach ensures fairness; rather, they cripple the utility of the prediction task. In this work, we present a GNN framework that allows optimizing representations for the Equalized Odds fairness criterion. The architecture is composed of three components: (1) a GNN classifier predicting the utility class, (2) a sampler learning the distribution of the sensitive attributes of the nodes given their labels, which generates samples fed into (3) a discriminator that discriminates between true and sampled sensitive attributes using a novel ``permutation loss'' function. Using these components, we train a model to neglect information regarding the sensitive attribute only with respect to its label. To the best of our knowledge, we are the first to optimize GNNs for the equalized odds criterion. We evaluate our classifier over several graph datasets and sensitive attributes and show that our algorithm reaches state-of-the-art results.
24

Du, Wenliao, Ansheng Li, Pengfei Ye, and Chengliang Liu. "Fault Diagnosis of Plunger Pump in Truck Crane Based on Relevance Vector Machine with Particle Swarm Optimization Algorithm." Shock and Vibration 20, no. 4 (2013): 781–92. http://dx.doi.org/10.1155/2013/610235.

Abstract:
Promptly and accurately dealing with equipment breakdown is very important in terms of enhancing reliability and decreasing downtime. A novel fault diagnosis method, PSO-RVM, based on relevance vector machines (RVM) with a particle swarm optimization (PSO) algorithm, is proposed for the plunger pump in a truck crane. The particle swarm optimization algorithm is utilized to determine the kernel width parameter of the kernel function in RVM, and five two-class RVMs with a binary tree architecture are trained to recognize the condition of the mechanism. The proposed method is employed in the diagnosis of the plunger pump in a truck crane. Six states, including the normal state, bearing inner race fault, bearing roller fault, plunger wear fault, thrust plate wear fault, and swash plate wear fault, are used to test the classification performance of the proposed PSO-RVM model, which is compared with classical models such as the back-propagation artificial neural network (BP-ANN), ant colony optimization artificial neural network (ANT-ANN), RVM, and support vector machines with particle swarm optimization (PSO-SVM). The experimental results show that the PSO-RVM is superior to the first three classical models and has performance comparable to the PSO-SVM, with corresponding diagnostic accuracies as high as 99.17% and 99.58%, respectively. However, the number of relevance vectors is far smaller than the number of support vectors, the former being about 1/12–1/3 of the latter, which indicates that the proposed PSO-RVM model is more suitable for applications that require low complexity and real-time monitoring.
25

Kyriakou, Kyriakos-Ioannis D., and Nikolaos D. Tselikas. "Complementing JavaScript in High-Performance Node.js and Web Applications with Rust and WebAssembly." Electronics 11, no. 19 (October 7, 2022): 3217. http://dx.doi.org/10.3390/electronics11193217.

Abstract:
We examine whether the novel systems programming language Rust can be utilized alongside JavaScript in Node.js and Web-based application development. The paper describes how JavaScript can be used as a high-level scripting language in combination with Rust in place of C++ in order to realize efficiency and be free of race conditions as well as memory-related software issues. Furthermore, we conducted stress tests in order to evaluate the performance of the proposed architecture in various scenarios. Rust-based implementations were able to outperform JS by 1.15 to over 115 times across the range of measurements and overpower Node.js's concurrency model by 14.5 times or more without the need for fine-tuning. In Web browsers, the single-thread WebAssembly implementation outperformed the respective pure JS implementation by about two to four times. WebAssembly executed inside Chromium, compared to the equivalent Node.js implementations, was able to deliver 93.5% of the performance of the single-threaded implementation and 67.86% of the performance of the multi-threaded implementation, which translates to 1.87 to over 24 times greater performance than the equivalent manually optimized pure JS implementation. Our findings provide substantial evidence that Rust is capable of providing the low-level features needed for non-blocking operations and hardware access while maintaining high-level similarities to JavaScript, aiding productivity.
26

Luitse, Dieuwertje, and Wiebke Denkena. "The great Transformer: Examining the role of large language models in the political economy of AI." Big Data & Society 8, no. 2 (July 2021): 205395172110477. http://dx.doi.org/10.1177/20539517211047734.

Abstract:
In recent years, AI research has become more and more computationally demanding. In natural language processing (NLP), this tendency is reflected in the emergence of large language models (LLMs) like GPT-3. These powerful neural network-based models can be used for a range of NLP tasks and their language generation capacities have become so sophisticated that it can be very difficult to distinguish their outputs from human language. LLMs have raised concerns over their demonstrable biases, heavy environmental footprints, and future social ramifications. In December 2020, critical research on LLMs led Google to fire Timnit Gebru, co-lead of the company’s AI Ethics team, which sparked a major public controversy around LLMs and the growing corporate influence over AI research. This article explores the role LLMs play in the political economy of AI as infrastructural components for AI research and development. Retracing the technical developments that have led to the emergence of LLMs, we point out how they are intertwined with the business model of big tech companies and further shift power relations in their favour. This becomes visible through the Transformer, which is the underlying architecture of most LLMs today and started the race for ever bigger models when it was introduced by Google in 2017. Using the example of GPT-3, we shed light on recent corporate efforts to commodify LLMs through paid API access and exclusive licensing, raising questions around monopolization and dependency in a field that is increasingly divided by access to large-scale computing power.
27

Jacqueline, Sébastien, Catherine Bunel, and Laurent Lengignon. "Enhancement of ESD performances of Silicon Capacitors for RFID solutions." International Symposium on Microelectronics 2020, no. 1 (September 1, 2020): 000085–89. http://dx.doi.org/10.4071/2380-4505-2020.1.000085.

Abstract:
Radio-Frequency IDentification devices such as smart cards and RFID tags are based on the presence of a resonant tuned LC circuit associated with the RFID Integrated Circuit (IC). The use of a discrete capacitor, external to the IC, gives greater flexibility and design freedom. In the race toward miniaturization, manufacturers of RFID devices always require smaller electronic components. To save space and at the same time improve performance, capacitors are exposed to height and volume constraints. At the same time, the capacitor has to withstand ESD stresses that can occur during the assembly of the device and during operation. Murata has developed a unique thin capacitor technology in silicon. This paper reports the development of a range of low-profile capacitors with enhanced ESD performance. The manufacturing process optimization and the design adjustments will be presented here. The process was optimized by taking into account the main electrical parameters: leakage current, breakdown voltage, capacitance density, capacitance accuracy, Equivalent Series Resistance (ESR), and Self-Resonant Frequency (SRF). The dielectric stack was defined in order to integrate up to 330 pF in a 0402 case. The process architecture, based on an accurate planar capacitor with a thick dielectric, will be discussed. With this architecture there is no constraint on reaching low thicknesses, such as 100 μm or even lower. The ESD threshold of each Silicon Capacitor was investigated with design variations associated with Human Body Model measurements. A Single Project Wafer (SPW) was produced with 36 different capacitor designs. Design modulations specifically addressed the orientation and position of the contact openings. Special care was taken to maximize the width of the contact holes and metal tracks. A mosaic approach, constructed out of a massive network of parallelized elementary cells, was also implemented, so that the charges of the ESD pulse do not concentrate at the same place and lead to electrical failure. Examples of defects due to ESD stress will be shown with failure-analysis cross-sections, and ways to enhance the ESD threshold by design will be illustrated.
28

Rajan, Sanju, and Linda Joseph. "An Adaptable Optimal Network Topology Model for Efficient Data Centre Design in Storage Area Networks." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 2s (January 31, 2023): 43–50. http://dx.doi.org/10.17762/ijritcc.v11i2s.6027.

Abstract:
In this research, we look at how different network topologies affect the energy consumption of modular data centre (DC) setups. We use a combined-input directed approach to assess the benefits of rack-scale and pod-scale fragmentation across a variety of electrical, optoelectronic, and composite network architectures in comparison to a conventional DC. When the optical transport architecture is implemented and the appropriate resource components are distributed, the findings reveal that fragmentation at the layer level is adequate, even compared to a pod-scale DC. Composable DCs can operate at peak efficiency because of the optical network topology. Logical separation of conventional DC servers across an optical network architecture is also investigated in this article. When compared to physical decentralisation at the rack scale, logical decomposition of data centres inside each rack offers a small decrease in the overall DC energy usage thanks to better allocation of resource needs. This allows for a flexible, composable architecture that can accommodate performance-based in-memory applications. Moreover, we look at the state of the fundamental model and its use in both static and dynamic data centres. According to our findings, typical DCs become more energy efficient when workload modularity increases, although excessive resource use still exists. By enabling optimal resource use and energy savings, disaggregation and micro-services were able to reduce the typical DC's energy use by up to 30%. Furthermore, we offer a heuristic to duplicate the mixed-integer model's output trends for energy-efficient allocation of caseloads in modularized DCs.
29

Hajtmanek, Roman, Peter Morgenstein, Tomáš Hubinský, Ján Legény, and Robert Špaček. "Determination of Solar-Surface-Area-to-Volume Ratio: Early Design Stage Solar Performance Assessment of Buildings." Buildings 13, no. 2 (January 19, 2023): 296. http://dx.doi.org/10.3390/buildings13020296.

Abstract:
One of the main targets of globally aimed strategies such as the UN-supported Race to Zero campaign or the European Green Deal is the decarbonisation of the building sector. The implementation of renewable energy sources in new urban structures, as well as the complex reconstruction of existing buildings, represents a key area of sustainable urban development. Supporting this approach, this paper introduces the solar-surface-area-to-volume ratio (Rsol) and the solar performance indicator (Psol), applicable to the evaluation of the energy performance of basic building shapes at early design stages. The indicators are based on preprocessors calculated using two different mathematical models (Robinson and Stone's cumulative sky algorithm and Kittler and Mikler's model), which are then compared and evaluated. Contrary to the commonly used surface-area-to-volume ratio, the proposed indicators estimate the potential for energy generation by active solar appliances integrated in the building envelope and allow optimisation of building shape in relation to potential energy losses and potential solar gains simultaneously. On the basis of the mathematical models, an online application optimising building shape to maximise sun-exposed surfaces has been developed. In connection with the solar-surface-area-to-volume ratio, it facilitates the quantitative evaluation of the energy efficiency of various shapes by the wider professional public. The proposed indicators, verified in the presented case study, should result in increased sustainability of the building sector by improving the utilisation of solar energy and the overall energy performance of buildings.
30

Goonetilleke, Samanthi C., Jeffrey P. Wong, and Brian D. Corneil. "Validation of a within-trial measure of the oculomotor stop process." Journal of Neurophysiology 108, no. 3 (August 1, 2012): 760–70. http://dx.doi.org/10.1152/jn.00174.2012.

Abstract:
The countermanding (or stop signal) task requires subjects try to withhold a planned movement upon the infrequent presentation of a stop signal. We have previously proposed a within-trial measure of movement cancellation based on neck muscle recruitment during the cancellation of eye-head gaze shifts. Here, we examined such activity after either a bright or dim stop signal, a manipulation known to prolong the stop signal reaction time (SSRT). Regardless of stop signal intensity, subjects generated an appreciable number of head-only errors during successfully cancelled gaze shifts (compensatory eye-in-head motion ensured gaze stability), wherein subtle head motion toward a peripheral target was ultimately stopped by a braking pulse of antagonist neck muscle activity. Both the SSRT and timing of antagonist muscle recruitment relative to the stop signal increased for dim stop signals and decreased for longer stop signal delays. Moreover, we observed substantial variation in the distribution of antagonist muscle recruitment latencies across our sample. The magnitude and variance of the SSRTs and antagonist muscle recruitment latencies correlated positively across subjects, as did the within-subject changes across bright and dim stop signals. Finally, we fitted our behavioral data with a race model architecture that incorporated a lower threshold for initiating head movements. This model allowed us to estimate the efferent delay between the completion of a central stop process and the recruitment of antagonist neck muscles; the estimated efferent delay remained consistent within subjects across stop signal intensity. Overall, these results are consistent with the hypothesis that neck muscle recruitment during a specific subset of cancelled trials provides a peripheral expression of oculomotor cancellation on a single trial. In the discussion, we briefly speculate on the potential value of this measure for research in basic or clinical domains and consider current issues that limit more widespread use.
31

Angel Valenzuela Ygnacio, Luis, Margarita Giraldo Retuerto, and Laberiano Andrade-Arenas. "Mobile application with business intelligence to optimize the control process of tourist agencies." Indonesian Journal of Electrical Engineering and Computer Science 29, no. 3 (March 1, 2023): 1708. http://dx.doi.org/10.11591/ijeecs.v29.i3.pp1708-1718.

Abstract:
Currently, tourism companies do not have strict control over clients, drivers, buses, guides, and tickets. Drivers are known to drive at excessive speed and put their lives and the lives of their customers at risk; this occurs because drivers want to pick up more passengers or race with a car from the same company. For this reason, this research work aims to optimize the travel control process for tourism companies by applying business intelligence. The Kimball methodology was used, which allows building a dimensional model with dimensions and a central table of the process or event that occurs in real time. Tourism companies would benefit because, in decision-making in the field of travel control, they would have the most relevant data. In addition, users also benefit: clients minimize risks when acquiring the tourism service, which would increase the profits of the tourism company and grow its clientele.
32

Suliman, Y. A., J. E. Lund-Jacobsen, R. Christensen, L. E. Kristensen, and D. Furst. "POS0204-HPR THE ROLE OF ARTIFICIAL INTELLIGENCE IN DETECTING DISTINCTIVE FACIAL FEATURES IN PATIENTS WITH SYSTEMIC SCLEROSIS, A PILOT STUDY." Annals of the Rheumatic Diseases 82, Suppl 1 (May 30, 2023): 327.2–328. http://dx.doi.org/10.1136/annrheumdis-2023-eular.3702.

Abstract:
Background: Scleroderma (SSc) is a rare autoimmune fibrosing multisystem disease with high rates of morbidity and mortality. Scleroderma is usually diagnosed by rheumatologists and/or dermatologists. However, a delay in disease recognition may occur due to a delay in referral, as internists and family physicians are not familiar with the disease and its features. Even though the facial features of the disease are characteristic of scleroderma, diagnosis is usually based on other signs and symptoms (e.g., skin thickening on the extremities) and confirmed with investigations such as autoantibody tests, HRCT, and endoscopies. Late presentation of SSc patients to rheumatologists is commonly reported [1]; patients with dcSSc generally presented to their primary health care practitioner (HCP) after symptoms had persisted for up to 1 year. We hypothesize that the facial features of SSc patients are distinctive and can be detected by a trained AI system after processing a mobile phone picture of an SSc patient's face through Convolutional Neural Networks (CNN). This system could be used by family practitioners and internists, aiding them to increase suspicion of SSc and refer patients in a timely manner. Objectives: In a pilot study, we aim to examine the ability of an AI facial recognition system to identify SSc-related facial features. Methods: Images of SSc patients were compared to a group of age- and sex-matched normal faces. Deep Learning (DL) - Artificial Intelligence (AI) algorithms evaluated all the pixels in the facial map and identified their utility in the facial recognition prediction models, using a transfer learning implementation developed by the Danish AI company Viceron ApS. This core model is a well-established algorithm for AI facial feature recognition, based on more than 1 million general-public faces. The AI evaluated multiple layers of mathematical models, either isolated or pooled, to generate a predictive model and eliminate unnecessary data. Smoothing and uniformity protocols were established for the obtained images through preprocessing for the Convolutional Neural Networks (CNN) (Figure 1). Results: Images of 60 SSc patients from the internet were compared to a group of age- and sex-matched normal faces. We developed models from the 60 SSc facial images and matched controls that were able to identify SSc distinctive facial features with variable specificity and sensitivity. Multiple AI models were applied to the training set, which included the first 40 patients, and as a partial validation step to the other 20 patients, with an equal baseline group built from normal faces. The best pre-trained model, fine-tuned on the training set, achieved 80-90% accuracy on the respective datasets. A larger data set may provide better accuracy, allowing for more generalized facial recognition; the latter is planned, given the initial reasonable success of the pilot study. Conclusion: Automated pre-processing and the application of AI algorithms in SSc face identification gave encouraging pilot results (80-90% accuracy). Further testing with a larger protocol to establish the effects of race/ethnicity, sex, age, disease duration, and disease activity is warranted. Reference: [1] Oliver Distler et al. Factors influencing early referral, early diagnosis and management in patients with diffuse cutaneous systemic sclerosis, Rheumatology, Issue 5, May 2018. Figure 1: The Visual Geometry Group (VGG)-16 CNN architecture and the Inception-V3 architecture represent baseline Deep Learning solutions.
Based on its robustness, the VGG-16 CNN architecture was chosen as a starting point. Acknowledgements: NIL. Disclosure of Interests: None Declared.
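As a rough illustration of the transfer-learning step this abstract describes, a minimal sketch of fine-tuning a pretrained VGG-16 as a binary SSc-versus-control face classifier could look like the following; the dataset folder, batch size, learning rate, and epoch count are illustrative assumptions rather than the study's actual configuration.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for VGG-16 input
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical folder with "ssc/" and "control/" subdirectories
train_set = datasets.ImageFolder("faces/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=8, shuffle=True)

model = models.vgg16(weights="IMAGENET1K_V1")   # start from ImageNet weights
for p in model.features.parameters():
    p.requires_grad = False                     # freeze the convolutional backbone
model.classifier[6] = nn.Linear(4096, 2)        # new SSc-vs-control output head

optimiser = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()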
APA, Harvard, Vancouver, ISO, and other styles
33

Dang, Xiaochao, Lin Su, Zhanjun Hao, and Xu Shang. "Dynamic Offloading Method for Mobile Edge Computing of Internet of Vehicles Based on Multi-Vehicle Users and Multi-MEC Servers." Electronics 11, no. 15 (July 26, 2022): 2326. http://dx.doi.org/10.3390/electronics11152326.

Full text
Abstract:
With the continuous development of intelligent transportation system technology, vehicle users have higher and higher requirements for low latency and high service quality of task computing. The computing offloading technology of mobile edge computing (MEC) has received extensive attention in the Internet of Vehicles (IoV) architecture. However, due to the limited resources of the MEC server, it cannot meet the task requests from multiple vehicle users simultaneously. For this reason, making correct and fast offloading decisions to provide users with a service with low latency, low energy consumption, and low cost is still a considerable challenge. Regarding the issue above, in the IoV environment where vehicle users race, this paper designs a three-layer system task offloading overhead model based on the Edge-Cloud collaboration of multiple vehicle users and multiple MEC servers. To solve the problem of minimizing the total cost of the system performing tasks, an Edge-Cloud collaborative, dynamic computation offloading method (ECDDPG) based on a deep deterministic policy gradient is designed. This method is deployed at the edge service layer to make fast offloading decisions for tasks generated by vehicle users. The simulation results show that the performance is better than the Deep Q-network (DQN) method and the Actor-Critic method regarding reward value and convergence. In the face of the change in wireless channel bandwidth and the number of vehicle users, compared with the basic method strategy, the proposed method has better performance in reducing the total computational cost, computing delay, and energy consumption. At the same time, the computational complexity of the system execution tasks is significantly reduced.
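To make the deep-deterministic-policy-gradient core of such an offloading agent concrete, here is a minimal, generic DDPG update sketch; the state and action dimensions, network sizes, and reward shaping are assumptions, not the ECDDPG design itself.

import copy
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2   # assumed: observed task/channel/queue state, offloading ratios

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Sigmoid())        # actions in [0, 1]
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s2, gamma=0.99, tau=0.005):
    """One update from a replay-buffer batch; r has shape (batch, 1)."""
    with torch.no_grad():
        q_next = critic_tgt(torch.cat([s2, actor_tgt(s2)], dim=1))
        target = r + gamma * q_next                                    # Bellman target
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()       # maximise Q
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):        # soft target update
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - tau).add_(tau * p.data)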
APA, Harvard, Vancouver, ISO, and other styles
34

Park, Jiheum, Michael Artin, Kate E. Lee, Yoanna S. Pumpalova, Myles Ingram, Benjamin May, Michael Park, Chin Hur, and Nicholas Tatonetti. "Deep learning on time series laboratory test results from electronic health records for early detection of pancreatic cancer." Journal of Clinical Oncology 40, no. 16_suppl (June 1, 2022): e16268-e16268. http://dx.doi.org/10.1200/jco.2022.40.16_suppl.e16268.

Full text
Abstract:
e16268 Background: Pancreatic cancer (PC) has a uniquely poor survival rate due to the absence of proven and effective methods for early detection. We thus aimed to leverage recent advances in deep learning towards the task of inferring early risk of PC from longitudinal laboratory test data contained within Electronic Health Record (EHR) data. Methods: In this study, we develop a novel deep learning framework for incorporating longitudinal clinical data from EHR to infer risk for PC. This framework includes a novel training protocol, which enforces an emphasis on early detection by applying an independent Poisson-random mask on proximal-time measurements for each variable. Data fusion for irregular multivariate time-series features is enabled by a “grouped” neural network (GrpNN) architecture, which uses representation learning to generate a dimensionally reduced vector for each measurement set before generating a final prediction. These models were evaluated using EHR data from the Tripartite Request Assessment Committee (TRAC). Results: Our framework demonstrated better performance on early detection (AUROC 0.671, CI 95% 0.667–0.675, p < 0.001) at 12 months prior to diagnosis compared to a logistic regression and a feedforward neural network baseline (black-box model). We demonstrate that our masking strategy results in greater improvements at distal times prior to diagnosis, and that our GrpNN model improves generalizability by reducing overfitting relative to the feedforward baseline (Table). The results were consistent across reported race. Conclusions: Our study presents new approaches for integrating multimodal longitudinal clinical data with bias reduction strategies, which result in improved early detection of PC. This study demonstrates for the first time the utility of multivariate time series laboratory test results for early detection of PC. Our proposed algorithm is potentially generalizable to improve risk predictions for other types of cancer and other diseases where early detection can improve survival. We split data into train set (80%) and hold-out set (20%) and presented mean AUROC and AUPRC with 95% confidence intervals.[Table: see text]
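The abstract's Poisson-random masking of proximal-time measurements could be read roughly as follows; this is one plausible interpretation for illustration only, and the rate parameter is arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def poisson_proximal_mask(series, lam=2.0):
    """series: one lab variable's values, ordered from oldest to most recent."""
    masked = series.astype(float).copy()
    k = min(rng.poisson(lam), len(series))   # independent draw per variable
    if k > 0:
        masked[-k:] = np.nan                 # hide the k measurements closest to diagnosis
    return masked

labs = {"CA19-9": np.array([31.0, 35.0, 60.0, 410.0]),
        "bilirubin": np.array([0.7, 0.8, 1.9])}
masked_labs = {name: poisson_proximal_mask(values) for name, values in labs.items()}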
APA, Harvard, Vancouver, ISO, and other styles
35

Williams, S., A. Seixas, G. Avirappattu, R. Robbins, L. Lough, A. Rogers, L. Beaugris, M. Bernard, and G. Jean-Louis. "1058 Modeling Self-reported Sleep Duration And Hypertension Using Deep Learning Network: Analysis Of The National Health And Nutrition Examination Survey Data." Sleep 43, Supplement_1 (April 2020): A402. http://dx.doi.org/10.1093/sleep/zsaa056.1054.

Full text
Abstract:
Introduction: Epidemiologic data show strong associations between self-reported sleep duration and hypertension (HTN). Modeling these associations is suboptimal when utilizing traditional logistic regressions. In this study, we modeled the associations of sleep duration and HTN using a Deep Learning Network. Methods: Data were extracted from participants (n=38,540) in the National Health and Nutrition Examination Survey (2006-2016), a nationally representative study of the US civilian non-institutionalized population. Self-reported demographics, medical history, and sleep duration were determined from household interview questions. HTN was determined as SBP ≥ 130 mmHg and DBP ≥ 80 mmHg. We used a deep neural network architecture with three hidden layers, two input features, and one binary output to model associations of sleep duration with HTN. The input features are the hours of sleep (limited to between 4 and 10 hours) and its square; the output variable is HTN. Probability predictions were generated 100 times from resampled (with replacement) data and averaged. Results: Participants ranged from 18 to 85 years old; 51% female, 41% white, 22% black, 26% Hispanic, 46% married, and 25% < high school. The model showed that sleeping 7 hours habitually was associated with the least observed HTN probabilities (P=0.023%). HTN probabilities increased as sleep duration decreased (6hrs=0.05%; 5hrs=0.110%; 4hrs=0.16%); HTN probabilities for long sleepers were: (8hrs=0.027; 9hrs=0.024; 10hrs=0.022). Whites showed that sleeping 7hrs or 9hrs was associated with the lowest HTN probabilities (0.008 vs. 0.005); blacks showed the lowest HTN probabilities associated with sleeping 8hrs (0.07), and Hispanics showed the lowest HTN probabilities sleeping 7hrs (0.04). Conclusion: We found that sleeping 7 hours habitually confers the least amount of risk for HTN. The probability of HTN varies as a function of an individual's sex and race/ethnicity. Likewise, the finding that blacks experience the lowest HTN probability when they habitually sleep 8 hours is of great public health importance. Support: This study was supported by funding from the NIH: R01MD007716, R01HL142066, R01AG056531, T32HL129953, K01HL135452, and K07AG052685.
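The described network is small enough to sketch directly: three hidden layers, two inputs (sleep hours and its square), and a sigmoid output for the probability of hypertension. Layer widths and the clamping of hours to the 4-10 range are assumptions consistent with the abstract.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(2, 16), nn.ReLU(),     # inputs: [hours, hours**2]
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),  # output: P(HTN)
)

def features(hours):
    h = torch.as_tensor(hours, dtype=torch.float32).clamp(4, 10)
    return torch.stack([h, h ** 2], dim=-1)

# After training on (sleep duration, HTN) pairs, predictions would be averaged
# over bootstrap resamples, as the abstract describes.
with torch.no_grad():
    p_htn = model(features([4, 5, 6, 7, 8, 9, 10])).squeeze(-1)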
APA, Harvard, Vancouver, ISO, and other styles
36

Ibrahim, Sarmad Khaleel, Nooruldeen Q. Ismaeel, and Saif A. Abdulhussien. "Comparison study of channel coding on non-orthogonal multiple access techniques." Bulletin of Electrical Engineering and Informatics 11, no. 2 (April 1, 2022): 909–16. http://dx.doi.org/10.11591/eei.v11i2.3348.

Full text
Abstract:
Some of the benefits of fifth-generation (5G) mobile communications include low latency, fast data rates, and increased perceived service quality for users and base station capacity. The purpose of this paper is to solve some of the problems of the traditional mobile system by increasing the channel capacity; non-orthogonal multiple access (NOMA) has a chance of winning the race, and power-domain NOMA (PD-NOMA) is widely used, but it requires a large power imbalance between the signals allocated to various users to work. This paper also proposes an improved mobile system model, compares it with a traditional mobile system, and then evaluates the effect of channel coding types on spectrum efficiency performance. The proposed mobile system relies on increasing the number of users as well as the frequency spectrum and is also designed to improve the error rate; it incorporates NOMA and orthogonal frequency division multiplexing (OFDM) schemes at the same time to provide great flexibility and compatibility with other services, such as the 5G and sixth-generation (6G) systems. The mobile gully system (MGS) is compared to a traditional system, and the results demonstrate that the proposed system outperforms the orthogonal multiple access (OMA) system in terms of sum-rate capacity and bit error rate (BER) performance.
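For readers unfamiliar with the power-domain NOMA principle the paper builds on, a toy two-user superposition-and-SIC simulation is sketched below; the power split, noise level, and ideal unit-gain channel are illustrative simplifications.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000
b1 = rng.integers(0, 2, n) * 2 - 1          # far (weak-channel) user, BPSK symbols
b2 = rng.integers(0, 2, n) * 2 - 1          # near (strong-channel) user
p1, p2 = 0.8, 0.2                           # power imbalance required by PD-NOMA
x = np.sqrt(p1) * b1 + np.sqrt(p2) * b2     # superposition coding

y = x + 0.05 * rng.standard_normal(n)       # ideal unit-gain channel plus noise

b1_hat = np.sign(y)                         # far user decodes, treating b2 as noise
y_sic = y - np.sqrt(p1) * b1_hat            # near user cancels the far user's signal
b2_hat = np.sign(y_sic)                     # successive interference cancellation (SIC)

print("BER user 1:", np.mean(b1_hat != b1), "BER user 2:", np.mean(b2_hat != b2))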
APA, Harvard, Vancouver, ISO, and other styles
37

Nyrkov, Anatoliy, Konstantin Ianiushkin, Andrey Nyrkov, Yulia Romanova, and Vagiz Gaskarov. "Data structures access model for remote shared memory." E3S Web of Conferences 244 (2021): 07001. http://dx.doi.org/10.1051/e3sconf/202124407001.

Full text
Abstract:
Recent achievements in high-performance computing significantly narrow the performance gap between single- and multi-node computing and open up opportunities for systems with remote shared memory. The combination of in-memory storage, remote direct memory access, and remote calls requires rethinking how data are organized, protected, and queried in distributed systems. The reviewed models let us implement new interpretations of distributed algorithms, allowing us to validate different approaches to avoiding race conditions and decreasing resource acquisition or synchronization time. In this paper, we describe a data model for mixed memory access with an analysis of optimized data structures. We also provide the results of experiments, which contain a performance comparison of data structures operating with different approaches, evaluate the limitations of these models, and show that the model does not always meet expectations. The purpose of this paper is to assist developers in designing data structures that will help to achieve architectural benefits or improve the design of existing distributed systems.
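As a toy illustration of one common way to keep one-sided reads consistent without locking the read path (not the paper's specific model), a version-validated record can be sketched as follows.

import threading

class VersionedRecord:
    """Seqlock-style record: readers retry until they observe a stable snapshot."""
    def __init__(self, value):
        self.version = 0                  # even = stable, odd = write in progress
        self.value = value
        self._write_lock = threading.Lock()

    def write(self, new_value):
        with self._write_lock:            # one writer at a time
            self.version += 1             # odd: readers started now will retry
            self.value = new_value
            self.version += 1             # even again: snapshot is consistent

    def read(self):
        while True:
            v1 = self.version
            snapshot = self.value
            v2 = self.version
            if v1 == v2 and v1 % 2 == 0:  # no writer overlapped this read
                return snapshot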
APA, Harvard, Vancouver, ISO, and other styles
38

Nawaz Jadoon, Rab, Mohsin Fayyaz, WuYang Zhou, Muhammad Amir Khan, and Ghulam Mujtaba. "PCOI: Packet Classification‐Based Optical Interconnect for Data Centre Networks." Mathematical Problems in Engineering 2020 (July 17, 2020): 1–11. http://dx.doi.org/10.1155/2020/2903157.

Full text
Abstract:
To support cloud services, Data Centre Networks (DCNs) are constructed to have many servers and network devices, thus increasing the routing complexity and energy consumption of the DCN. The introduction of optical technology in DCNs gives several benefits related to routing control and energy efficiency. This paper presents a novel Packet Classification based Optical interconnect (PCOI) architecture for DCN which simplifies the routing process by classifying the packet at the sender rack and reduces energy consumption by utilizing the passive optical components. This architecture brings some key benefits to optical interconnects in DCNs which include (i) routing simplicity, (ii) reduced energy consumption, (iii) scalability to large port count, (iv) packet loss avoidance, and (v) all-to-one communication support. The packets are classified based on destination rack and are arranged in the input queues. This paper presents the input and output queuing analysis of the PCOI architecture in terms of mathematical analysis, the TCP simulation in NS2, and the physical layer analysis by conducting simulation in OptiSystem. The packet loss in the PCOI has been avoided by adopting the input and output queuing model. The output queue of PCOI architecture represents an M/D/32 queue. The simulation results show that PCOI achieved a significant improvement in terms of throughput and low end-to-end delay. The eye-diagram results show that a good quality optical signal is received at the output, showing a very low Bit Error Rate (BER).
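The output-queue model mentioned at the end of the abstract, an M/D/32 queue, can be estimated with a short simulation; the arrival rate and service time below are placeholders, not the paper's traffic parameters.

import heapq
import numpy as np

rng = np.random.default_rng(2)

def md_c_mean_wait(arrival_rate=25.0, service_time=1.0, c=32, n=200_000):
    free_at = [0.0] * c                              # earliest time each of the c ports is free
    heapq.heapify(free_at)
    t, total_wait = 0.0, 0.0
    for _ in range(n):
        t += rng.exponential(1.0 / arrival_rate)     # Poisson (Markovian) arrivals
        earliest = heapq.heappop(free_at)
        start = max(t, earliest)                     # wait only if all ports are busy
        total_wait += start - t
        heapq.heappush(free_at, start + service_time)  # deterministic service
    return total_wait / n

print("estimated mean waiting time:", md_c_mean_wait())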
APA, Harvard, Vancouver, ISO, and other styles
39

Kundu, Sourav, and Rajshekhar Singhania. "Forecasting the United States Unemployment Rate by Using Recurrent Neural Networks with Google Trends Data." International Journal of Trade, Economics and Finance 11, no. 6 (December 2020): 135–40. http://dx.doi.org/10.18178/ijtef.2020.11.6.679.

Full text
Abstract:
We study the problem of obtaining an accurate forecast of the unemployment claims using online search data. The motivation for this study arises from the fact that there is a need for nowcasting or providing a reliable short-term estimate of the unemployment rate. The data regarding initial jobless claims are published by the US Department of labor weekly. To tackle the problem of getting an accurate forecast, we propose the use of the novel Long Short-Term Memory (LSTM) architecture of Recurrent Neural Networks, to predict the unemployment claims (initial jobless claims) using the Google Trends query share for certain keywords. We begin by analyzing the correlation of a large number of keywords belonging to different aspects of the economy with the US initial jobless claims data. We take 15-year weekly data from January 2004 to January 2019 and create two different models for analysis: a Vector Autoregressive Model (VAR) model combining the official unemployment claims series with the search trends for the keyword ‘job offers’ taken from Google Trends and an LSTM model with only the Google trends time series data for the complete set of identified keywords. Our analysis reveals that the LSTM model outperforms the VAR model by a significant margin in predicting the unemployment claims across different forecast horizons.
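A minimal version of the LSTM forecaster could be organised as below, mapping a window of weekly Google Trends query shares to next week's claims; the number of keywords, window length, and hidden size are assumptions.

import torch
import torch.nn as nn

n_keywords, window = 12, 8                    # assumed: 12 trend series, 8-week lookback

class TrendsLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_keywords, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # next week's (scaled) initial claims

    def forward(self, x):                     # x: (batch, window, n_keywords)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])       # read off the last time step

model = TrendsLSTM()
example = torch.randn(16, window, n_keywords) # placeholder batch of trend windows
prediction = model(example)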
APA, Harvard, Vancouver, ISO, and other styles
40

Krdzavac, Nenad, Dragan Gasevic, and Vladan Devedzic. "Model driven engineering of a tableau algorithm for description logics." Computer Science and Information Systems 6, no. 1 (2009): 23–43. http://dx.doi.org/10.2298/csis0901023k.

Full text
Abstract:
This paper presents a method for implementing a tableau algorithm for description logics (DLs). The architectures of present DL reasoners such as RACER or FaCT were developed using programming languages such as Java or LISP. These implementations are not based on the original definition of the abstract syntax; rather, they require a transformation of the abstract syntax into the concrete syntax used by the implementation languages. In order to address these issues, we propose the use of model-driven engineering principles for the development of a DL reasoner, where a definition of the DL abstract syntax is provided by means of metamodels. The presented approach is based on the use of a MOF-based model repository and QVT-like transformations, which transform models compliant with the DL metamodel taken from the OMG's Ontology Definition Metamodel specification into models compliant with the Tableau metamodel defined in this paper.
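The tableau idea itself, expanding concepts rule by rule and closing a branch on a clash, can be shown in a heavily simplified form; the sketch below handles only conjunction, disjunction, and atomic negation in negation normal form, so it omits the role and quantifier rules a real DL reasoner needs.

# Concepts are either atom names ("A") or tuples: ("and", C, D), ("or", C, D), ("not", "A").

def satisfiable(branch):
    branch = list(branch)
    for concept in branch:
        if isinstance(concept, tuple) and concept[0] == "and":
            rest = [c for c in branch if c is not concept]
            return satisfiable(rest + [concept[1], concept[2]])      # add both conjuncts
        if isinstance(concept, tuple) and concept[0] == "or":
            rest = [c for c in branch if c is not concept]
            return (satisfiable(rest + [concept[1]])                  # split the branch
                    or satisfiable(rest + [concept[2]]))
    atoms = {c for c in branch if isinstance(c, str)}
    negated = {c[1] for c in branch if isinstance(c, tuple) and c[0] == "not"}
    return not (atoms & negated)                                      # clash check

print(satisfiable([("and", "A", ("not", "A"))]))   # False: A and not-A clash
print(satisfiable([("or", "A", "B")]))             # True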
APA, Harvard, Vancouver, ISO, and other styles
41

Abbott, Erik C., Alexandria Brenkmann, Craig Galbraith, Joshua Ong, Ian J. Schwerdt, Brent D. Albrecht, Tolga Tasdizen, and Luther W. McDonald IV. "Dependence of UO2 surface morphology on processing history within a single synthetic route." Radiochimica Acta 107, no. 12 (November 26, 2019): 1121–31. http://dx.doi.org/10.1515/ract-2018-3065.

Full text
Abstract:
This study aims to determine forensic signatures for processing history of UO2 based on modifications in intermediate materials within the uranyl peroxide route. Uranyl peroxide was calcined to multiple intermediate U-oxides including Am-UO3, α-UO3, and α-U3O8 during the production of UO2. The intermediate U-oxides were then reduced to α-UO2 via hydrogen reduction under identical conditions. Powder X-ray diffractometry (p-XRD) and X-ray photoelectron spectroscopy (XPS) were used to analyze powders of the intermediate U-oxides and resulting UO2 to evaluate the phase and purity of the freshly synthesized materials. All U-oxides were also analyzed via scanning electron microscopy (SEM) to determine the morphology of the freshly prepared powders. The microscopy images were subsequently analyzed using the Morphological Analysis for Materials (MAMA) version 2.1 software to quantitatively compare differences in the morphology of UO2 from each intermediate U-oxide. In addition, the microscopy images were analyzed using a machine learning model which was trained based on a VGG-16 architecture. Results show no differences in the XRD or XPS spectra of the UO2 produced from each intermediate. However, results from both the segmentation and machine learning proved that the morphology was quantifiably different. In addition, the morphology of UO2 was very similar, if not identical, to the intermediate material from which it was prepared, thus making quantitative morphological analysis a reliable forensic signature of processing history.
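Although the study used the MAMA software and a trained VGG-16 model, the flavour of a quantitative morphology comparison can be sketched with scikit-image; the image path and the choice of descriptors are illustrative, not the authors' pipeline.

import numpy as np
from skimage import filters, io, measure

def morphology_summary(path):
    image = io.imread(path, as_gray=True)            # hypothetical SEM micrograph
    binary = image > filters.threshold_otsu(image)   # simple global threshold
    labels = measure.label(binary)
    regions = measure.regionprops(labels)
    circularity = [4 * np.pi * r.area / r.perimeter ** 2 for r in regions if r.perimeter > 0]
    return {"n_particles": len(regions),
            "mean_circularity": float(np.mean(circularity)),
            "mean_eccentricity": float(np.mean([r.eccentricity for r in regions]))}

# Summaries for UO2 powders from two intermediates could then be compared, e.g.
# morphology_summary("uo2_from_alpha_uo3.tif") versus morphology_summary("uo2_from_u3o8.tif").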
APA, Harvard, Vancouver, ISO, and other styles
42

Deng, Tianyang, Yu Niu, Lingfeng Yin, Zhiqiang Lin, and Zhanjie Li. "Load Distribution Optimization of Steel Storage Rack Based on Genetic Algorithm." Buildings 12, no. 11 (October 24, 2022): 1782. http://dx.doi.org/10.3390/buildings12111782.

Full text
Abstract:
The distribution of load has high uncertainty, which is the main cause of a rack structure's instabilities. The objective of this study was to identify the most unfavorable and favorable load distributions on steel storage racks with and without bracings under seismic loading through a stochastic optimization, a genetic algorithm (GA). This paper begins with optimizing the most unfavorable and favorable load distributions on the steel storage racks with and without bracings using the GA. Based on the optimization results, the failure position and seismic performance influencing factors, such as the load distributions on the racks and at hazardous positions, are then identified. In addition, it is demonstrated that the maximum stress ratio of the uprights under the most unfavorable load distribution is higher than that under the full-load normal design, and it is not the case that the higher the center of gravity the more dangerous the steel storage rack is, demonstrating that the load distribution pattern has a significant impact on the structural safety of steel storage racks. Statistics of the load distributions generated during the GA optimization are compiled, and contours of the load probability distributions are produced. Combining the probability distribution contours and the GA's optimization findings, the "convex" distribution hazard model and the "concave" distribution safety model for a steel storage rack with bracings are identified. In addition, the features of the distribution hazard model and the load distribution safety model are also identified for steel storage racks without bracings.
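The genetic-algorithm search for unfavourable load patterns can be sketched generically; the stress function below is a stand-in (a real run would evaluate each candidate layout with a structural analysis of the rack), and the population settings are arbitrary.

import numpy as np

rng = np.random.default_rng(3)
n_cells = 24                                  # assumed number of pallet positions

def stress_ratio(layout):
    """Placeholder objective; replace with the FE-based maximum upright stress ratio."""
    weights = np.repeat(np.arange(1, 7), 4)   # toy surrogate: higher cells matter more
    return float(layout @ weights) / weights.sum()

def find_worst_layout(pop_size=40, generations=100, p_mut=0.05):
    population = rng.integers(0, 2, (pop_size, n_cells))               # 1 = cell loaded
    for _ in range(generations):
        fitness = np.array([stress_ratio(ind) for ind in population])
        parents = population[np.argsort(fitness)[-pop_size // 2:]]     # keep worst-case layouts
        cuts = rng.integers(1, n_cells, pop_size)
        children = np.array([np.concatenate((parents[rng.integers(len(parents))][:c],
                                             parents[rng.integers(len(parents))][c:]))
                             for c in cuts])                           # one-point crossover
        flips = rng.random(children.shape) < p_mut                     # bit-flip mutation
        population = np.where(flips, 1 - children, children)
    best = max(population, key=stress_ratio)
    return best, stress_ratio(best)

layout, worst_ratio = find_worst_layout()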
APA, Harvard, Vancouver, ISO, and other styles
43

Agus, Minarno Eko, Sasongko Yoni Bagas, Munarko Yuda, Nugroho Adi Hanung, and Zaidah Ibrahim. "Convolutional Neural Network featuring VGG-16 Model for Glioma Classification." JOIV : International Journal on Informatics Visualization 6, no. 3 (September 30, 2022): 660. http://dx.doi.org/10.30630/joiv.6.3.1230.

Full text
Abstract:
Magnetic Resonance Imaging (MRI) is a body sensing technique that can produce detailed images of the condition of organs and tissues. Specifically related to brain tumors, the resulting images can be analyzed using image detection techniques so that tumor stages can be classified automatically. Detection of brain tumors requires a high level of accuracy because it is related to the effectiveness of medical actions and patient safety. So far, the Convolutional Neural Network (CNN), or its combination with GA, has given good results. For this reason, in this study, we used a similar method but with a variant of the VGG-16 architecture. The VGG-16 variant adds 16 layers by modifying the dropout layer (using softmax activation) to reduce overfitting and avoid using a lot of hyper-parameters. We also experimented with augmentation techniques to anticipate data limitations. The experiments used data from The Cancer Imaging Archive (TCIA) - The Repository of Molecular Brain Neoplasia Data (REMBRANDT), which contains MRI images of 130 patients of different ailments, grades, races, and ages, with 520 images in total. The tumor type was glioma, and the images were divided into grades II, III, and IV, with a composition of 226, 101, and 193 images, respectively. The data were divided into 68% and 32% for training and testing purposes. We found that VGG-16 was more effective for brain tumor image classification, with an accuracy of up to 100%.
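The two ingredients the abstract emphasises, augmentation and a modified VGG-16 classification head ending in dropout and a three-way softmax for grades II-IV, might be set up as follows; the specific augmentations and layer widths are assumptions.

import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),
    transforms.ToTensor(),
])

model = models.vgg16(weights="IMAGENET1K_V1")
model.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096), nn.ReLU(), nn.Dropout(0.5),   # dropout against overfitting
    nn.Linear(4096, 512), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(512, 3), nn.Softmax(dim=1),                       # grades II / III / IV
)
# Because the head already emits probabilities, training would use a loss that
# expects probabilities (for example NLLLoss applied to their logarithm).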
APA, Harvard, Vancouver, ISO, and other styles
44

Arviv, Eyal, Simo Hanouna, and Oren Tsur. "It’s a Thin Line Between Love and Hate: Using the Echo in Modeling Dynamics of Racist Online Communities." Proceedings of the International AAAI Conference on Web and Social Media 15 (May 22, 2021): 61–70. http://dx.doi.org/10.1609/icwsm.v15i1.18041.

Full text
Abstract:
The (((echo))) symbol - triple parentheses surrounding a name - made it to mainstream social networks in early 2016, with the intensification of the U.S. Presidential race. It was used by members of the alt-right, white supremacists, and internet trolls to tag people of Jewish heritage - a modern incarnation of the infamous yellow badge (Judenstern) used in Nazi Germany. Tracking this trending meme, its meaning, and its function has proved elusive due to its semantic ambiguity (e.g., a symbol for a virtual hug). In this paper we report on the construction of an appropriate dataset allowing the reconstruction of networks of racist communities and the way they are embedded in the broader community. We combine natural language processing and structural network analysis to study communities promoting hate. In order to overcome dog-whistling and linguistic ambiguity, we propose a multi-modal neural architecture based on a BERT transformer and a BiLSTM network on the tweet level, while also taking into account the users' ego-network and meta features. Our multi-modal neural architecture outperforms a set of strong baselines. We further show how the use of language and network structure in tandem allows the detection of the leaders of the hate communities. We further study the "intersectionality" of hate and show that the antisemitic echo correlates with hate speech that targets other minority and protected groups. Finally, we analyze the role IRA trolls assumed in this network as part of the Russian interference campaign. Our findings allow a better understanding of recent manifestations of racism and the dynamics that facilitate it.
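A stripped-down version of the multi-modal architecture, BERT token embeddings fed to a BiLSTM and concatenated with user meta features, is sketched below; the checkpoint name, meta-feature width, and pooling choice are assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class EchoClassifier(nn.Module):
    def __init__(self, meta_dim=10, hidden=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")   # assumed encoder
        self.bilstm = nn.LSTM(768, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden + meta_dim, 2)              # hateful vs. not

    def forward(self, input_ids, attention_mask, meta):
        tokens = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        sequence, _ = self.bilstm(tokens)
        pooled = sequence.mean(dim=1)                                # average over tokens
        return self.head(torch.cat([pooled, meta], dim=1))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["((( example )))"], return_tensors="pt", padding=True)
logits = EchoClassifier()(batch["input_ids"], batch["attention_mask"], torch.zeros(1, 10))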
APA, Harvard, Vancouver, ISO, and other styles
45

Di Rito, Gianpietro, Romain Kovel, Marco Nardeschi, Nicola Borgarelli, and Benedetto Luciano. "Minimisation of Failure Transients in a Fail-Safe Electro-Mechanical Actuator Employed for the Flap Movables of a High-Speed Helicopter-Plane." Aerospace 9, no. 9 (September 19, 2022): 527. http://dx.doi.org/10.3390/aerospace9090527.

Full text
Abstract:
The work deals with the model-based characterization of the failure transients of a fail-safe rotary EMA developed by Umbragroup (Italy) for the flap movables of the RACER helicopter-plane by Airbus Helicopters (France). Since the reference application requires quasi-static position-tracking with high disturbance-rejection capability, the attention is focused on control hardover faults, which determine an actuator runaway from the commanded setpoint. To perform the study, a high-fidelity nonlinear model of the EMA is developed from physical first principles, and the main features of the health-monitoring and closed-loop control functions (integrating the conventional nested loops architecture with a deformation feedback loop enhancing the actuator stiffness) are presented. The EMA model is then validated against experiments by identifying its parameters through ad-hoc tests. Simulation results are finally proposed to characterize the failure transients in worst-case scenarios, highlighting the importance of using specifically designed back-electromotive damper circuitry in the EMA power electronics to limit the position deviation after the fault detection.
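A hardover fault of the kind studied is typically confirmed by a persistence check on the tracking error before the fail-safe mode is commanded; the sketch below illustrates that monitoring logic with made-up thresholds, not the paper's health-monitoring design.

def hardover_monitor(setpoint, position, dt=0.001, err_max=2.0, t_persist=0.02):
    """setpoint, position: equal-length samples in degrees; returns trip time in s or None."""
    time_above = 0.0
    for k, (sp, pos) in enumerate(zip(setpoint, position)):
        time_above = time_above + dt if abs(sp - pos) > err_max else 0.0
        if time_above >= t_persist:   # fault confirmed: command the fail-safe (damped) state
            return k * dt
    return None

# Example: a runaway ramp starting at t = 0.05 s against a constant 10-degree setpoint.
setpoint = [10.0] * 200
position = [10.0 if k < 50 else 10.0 + 0.5 * (k - 50) for k in range(200)]
print(hardover_monitor(setpoint, position))   # trips shortly after the error exceeds 2 degrees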
APA, Harvard, Vancouver, ISO, and other styles
46

Aliu, Nora, Vlora Aliu, and Modest Gashi. "9. Digital Technologies - The Future Way of Learning in Higher Education." Review of Artistic Education 26, no. 1 (March 1, 2023): 285–92. http://dx.doi.org/10.2478/rae-2023-0039.

Full text
Abstract:
Digitalization trends have moved forward with accelerated steps, surrounding all spheres of our lives with the provision of light services, faster and less expensive communications, more functions, and a great influence on increasing quality of life. In developed countries, digitization of education is seen as one of the priority goals for achieving sustainable development. Higher education is an essential pillar in developing new knowledge economies for the twenty-first century, and Kosovo national authorities are strategically oriented toward the digitalization of higher education. Digitization includes a wide range of activities, ranging from lectures, group work, and inclusion in individual or group study, to exams, as an integral part of the revolutionization of higher education. The period of COVID-19 pushed forward the digitization of education in many countries of the world. In Kosovo, this was the period that established the dividing boundaries between the traditional multi-century teaching era and the new digital era. This period is also characterized by the challenges faced by teaching and learning in the use of efficient digitized methods. This paper explores the impact of digitization on teaching and learning, specifically in medicine and architecture. This work is also intended to offer a model of how digital transformation can be used to build competitive advantages for universities. Based on the conditions of the accreditation agency and the standards of the International Society for Technology in Education, we can say that the use of software and artistic methods in teaching processes affects the development and advancement of young people.
APA, Harvard, Vancouver, ISO, and other styles
47

Meqdad, Maytham N., Hafiz Tayyab Rauf, and Seifedine Kadry. "Bone Anomaly Detection by Extracting Regions of Interest and Convolutional Neural Networks." Applied System Innovation 6, no. 1 (February 2, 2023): 21. http://dx.doi.org/10.3390/asi6010021.

Full text
Abstract:
The most suitable method for assessing bone age is to check the degree of maturation of the ossification centers in radiograph images of the left wrist. Accordingly, a lot of effort has been made to help radiologists and provide reliable automated methods using these images. This study designs and tests AlexNet and GoogLeNet methods and a new architecture to assess bone age. All these methods are implemented fully automatically on the DHA dataset, which includes 1400 wrist images of healthy children aged 0 to 18 years from Asian, Hispanic, Black, and Caucasian races. For this purpose, the images are first segmented, and 4 different regions of the images are then separated. Bone age in each region is assessed by a separate network whose architecture is new and obtained by trial and error. The final assessment of bone age is performed by an ensemble that averages the outputs of the 4 CNN models. In the section on results and model evaluation, various tests are performed, including pre-trained network tests. The better performance of the designed system compared to other methods is confirmed by the results of all tests. The proposed method achieves an accuracy of 83.4% and an average error rate of 0.1%.
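The final averaging ensemble over four region-specific CNNs can be sketched as follows; the tiny region network here is a placeholder rather than the architecture found by the authors' trial-and-error search.

import torch
import torch.nn as nn

def region_cnn():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(32, 1),          # predicted bone age for this region
    )

region_models = [region_cnn() for _ in range(4)]  # one model per segmented wrist region

def ensemble_bone_age(region_crops):
    """region_crops: list of 4 tensors shaped (1, 1, H, W), one per region."""
    predictions = [m(x) for m, x in zip(region_models, region_crops)]
    return torch.stack(predictions).mean()        # average-based ensemble

crops = [torch.randn(1, 1, 64, 64) for _ in range(4)]
print(float(ensemble_bone_age(crops)))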
APA, Harvard, Vancouver, ISO, and other styles
48

Hossein Abbasi Abyaneh, Ali, Maizi Liao, and Seyed Majid Zahedi. "Malcolm: Multi-agent Learning for Cooperative Load Management at Rack Scale." ACM SIGMETRICS Performance Evaluation Review 51, no. 1 (June 26, 2023): 39–40. http://dx.doi.org/10.1145/3606376.3593550.

Full text
Abstract:
We consider the problem of balancing the load among servers in dense racks for microsecond-scale workloads. To balance the load in such settings, tens of millions of scheduling decisions have to be made per second. Achieving this throughput while providing microsecond-scale latency is extremely challenging. To address this challenge, we design a fully decentralized load-balancing framework, which allows servers to collectively balance the load in the system. We model the interactions among servers as a cooperative stochastic game. To find the game's parametric Nash equilibrium, we design and implement a decentralized algorithm based on multi-agent-learning theory. We empirically show that our proposed algorithm is adaptive and scalable while outperforming state-of-the-art alternatives. The full paper of this abstract can be found at https://doi.org/10.1145/3570611.
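As a toy analogue of decentralised cooperative load balancing (not Malcolm itself), each server below keeps a learned preference for serving locally versus forwarding to the shorter of two randomly probed peer queues, updated with a simple bandit-style rule; every parameter here is an assumption for illustration.

import random

class Server:
    def __init__(self):
        self.queue = 0
        self.pref = 0.5                                   # P(serve locally)

    def dispatch(self, peers, step=0.05):
        others = [p for p in peers if p is not self]
        peer = min(random.sample(others, 2), key=lambda s: s.queue)   # probe two peers
        local_cost, peer_cost = self.queue, peer.queue
        choice = self if random.random() < self.pref else peer
        choice.queue += 1                                 # enqueue the request
        # bandit-style update: prefer local service when it would be no slower
        self.pref += step if local_cost <= peer_cost else -step
        self.pref = min(max(self.pref, 0.05), 0.95)

servers = [Server() for _ in range(8)]
for _ in range(1000):
    random.choice(servers).dispatch(servers)              # request arrives at a random server
    for s in servers:
        s.queue = max(0, s.queue - 1)                     # each tick drains one request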
APA, Harvard, Vancouver, ISO, and other styles
49

Fakhruzzaman, Muhammad Noor, and Sie Wildan Gunawan. "CekUmpanKlik: an artificial intelligence-based application to detect Indonesian clickbait." IAES International Journal of Artificial Intelligence (IJ-AI) 11, no. 4 (December 1, 2022): 1232. http://dx.doi.org/10.11591/ijai.v11.i4.pp1232-1238.

Full text
Abstract:
This study attempted to deploy a high-performing natural language processing model specifically trained to flag clickbait Indonesian news headlines. The deployed model is accessible from any internet-connected device because it implements a representational state transfer application programming interface (RESTful API). The application is useful for avoiding clickbait news, which is often purposed solely to rake in money rather than deliver trustworthy news. With many online news outlets adopting click-based advertising, clickbait headlines have become ubiquitous, and newsworthy articles are often cluttered with clickbait news. Leveraging state-of-the-art bidirectional encoder representations from transformers (BERT), a lightweight web application was developed. This study offloaded the computing resources needed to train the model to a separate virtual server instance and then deployed the trained model on the cloud; the client-side application only needs to send a request to the API and the cloud server handles the rest, an approach often known as a three-layer architecture. This study designed and developed a web-based application to detect clickbait in Indonesian using IndoBERT as its language model. The application's usage and potential are discussed. The source code and running application are publicly available, with a mean receiver operating characteristic-area under the curve (ROC-AUC) of 89%.
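The serving layer the abstract describes, a fine-tuned IndoBERT classifier behind a small RESTful endpoint, could be sketched as below; the checkpoint path is a hypothetical placeholder for the authors' trained model, not a published identifier.

from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

# Hypothetical local path to a fine-tuned IndoBERT clickbait classifier
classifier = pipeline("text-classification", model="path/to/finetuned-indobert-clickbait")
app = FastAPI()

class Headline(BaseModel):
    text: str

@app.post("/predict")
def predict(item: Headline):
    result = classifier(item.text)[0]          # e.g. {"label": "clickbait", "score": 0.97}
    return {"label": result["label"], "score": float(result["score"])}

# Run with `uvicorn app:app`, then POST {"text": "..."} to /predict from any client.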
APA, Harvard, Vancouver, ISO, and other styles
50

Petrescu, Alexandru Radu. "7. Stylistic Considerations on Scene No. 8 from the Musical Performance PreȚioasele Ridicole [The Affected Young Ladies] by Vasile Spătărelu." Review of Artistic Education 21, no. 1 (June 1, 2021): 46–55. http://dx.doi.org/10.2478/rae-2021-0007.

Full text
Abstract:
Molière is one of the playwrights who not only marked and defined the seventeenth century in France but, as creator of the modern comedy and discoverer of the authentic comic, contributed fully to educating the public in his present and for the future. His comedies have stood the test of time, as they resonated with the audience, which found itself in them. In the twentieth century, his plays entered the artistic territories adjacent to the theater through other creators, who turned them into film scripts and librettos for musical theater. The subject of our research is a fragment pertaining to the farce Prețioasele ridicole [The Affected Young Ladies] turned into a musical show under the signature of Vasile Spătărelu. An important name in Romanian music, a creator with great melodic imagination, harmonic refinement, and perfect literary taste, the composer from Iași made a possible compositional model for the genre, which is part of the Romanian musical show's route opened by Paul Constantinescu and Pascal Bentoiu. His work is distinguished by the stylistic area to which it adheres, by rhythm and vivacity, by the original combination between the spoken and the sung text, by the ingenious architecture of the scenes, and by the adequacy of the writing technique to the desired effect and expression. Since it fully requests all the resources of the interpreters, we consider that the analysis of its important moments is very useful to them. If pages of theatrical exegesis were dedicated to the characters Magdelon and Cathos, for Mascaril we did not identify something similar, much less a stylistic-interpretive analysis from a musical perspective. In order to achieve a complete characterization of the character, useful to potential student performers, we will comment on the first segments of Scene no. 8, in which Mascaril's "identity" is established as he presents himself in front of the two "precious ones" (the entrance and the monologue).
APA, Harvard, Vancouver, ISO, and other styles