To view other types of publications on this topic, follow the link: Visual Digital Facial Markers.

Journal articles on the topic "Visual Digital Facial Markers"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Visual Digital Facial Markers".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these details are available in the publication's metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Liu, Jia, Xianjie Zhou, and Yue Sun. "14 ANALYSIS OF THE EFFECTIVENESS OF DIGITAL EMOTION RECOGNITION DISPLAY DESIGN IN EARLY INTERVENTION FOR SCHIZOPHRENIA." Schizophrenia Bulletin 51, Supplement_1 (February 18, 2025): S8. https://doi.org/10.1093/schbul/sbaf007.014.

Abstract:
Background: Schizophrenia is a chronic and severe mental disorder characterized by disordered thinking, hallucinations, delusions, and emotional and social dysfunction. Because of their unique stage of psychological development and social environment, college students are of special importance for the early identification of and intervention in schizophrenia. Early diagnosis and treatment of schizophrenia are essential to improve prognosis, reduce disability, and improve quality of life. However, owing to the complexity and heterogeneity of schizophrenia, early diagnosis and the choice of treatment pathway face many challenges. Knowledge management, as a systematic approach, can integrate and optimize information, knowledge, and skills to support the early diagnosis and treatment of schizophrenia. This study aims to explore how, from the perspective of knowledge management and in combination with clinical symptoms, drug response, cognitive health, and related mental factors, the early diagnosis and treatment pathway of schizophrenia can be optimized, and to examine health care utilization and cost-benefit in this special group of patients, in order to provide more accurate and more effective mental health services for college students. Methods: Using a case-control design, 300 college students who met the diagnostic criteria for schizophrenia and 150 healthy college students serving as the control group were selected. Cognitive and social functions of the two groups were evaluated using functional magnetic resonance imaging (fMRI) and the Wisconsin Card Sorting Test (WCST). In addition, visual scanning path pattern analysis was used to examine the patients' facial emotion perception. After data collection, multiple linear regression analysis was used to explore the relationship between cognitive function, social function, and facial emotion perception and the severity of schizophrenia symptoms. Results: The schizophrenia group performed significantly worse on cognitive function tests and social function assessments than the control group (P<0.01). Multiple linear regression showed that cognitive function (β=0.65, P<0.001) and social function (β=0.52, P<0.001) were significant predictors of symptom severity in schizophrenia. In addition, facial emotion perceptual processing ability was significantly correlated with symptom severity (β=0.48, P<0.01). Regarding treatment pathway optimization, patients' treatment adherence and quality of life were significantly improved through knowledge management strategies such as patient education, family support, and integration of community resources (P<0.05). Discussion: The results reveal the key role of cognitive function, social function, and facial emotion perception in the early recognition of and intervention in schizophrenia. These dimensions not only serve as important indicators for the early diagnosis of schizophrenia but are also important for assessing symptom severity. Impairments of cognitive and social functioning, together with a decline in facial emotion perceptual processing, constitute biological and behavioral markers for early recognition of schizophrenia. Applying knowledge management strategies such as patient education, family support, and integration of community resources can optimize treatment pathways for patients with schizophrenia and improve treatment outcomes. The significant improvement in symptom severity and quality-of-life scale scores after comprehensive treatment further confirms the effectiveness of these strategies in improving treatment outcomes and patient quality of life. Funding: No. YKSZ001.
2

Mai, Hang-Nga, and Du-Hyeong Lee. "Effects of Artificial Extraoral Markers on Accuracy of Three-Dimensional Dentofacial Image Integration: Smartphone Face Scan versus Stereophotogrammetry." Journal of Personalized Medicine 12, no. 3 (March 18, 2022): 490. http://dx.doi.org/10.3390/jpm12030490.

Abstract:
Recently, three-dimensional (3D) facial scanning has been gaining popularity in personalized dentistry. Integration of the digital dental model into the 3D facial image allows for a treatment plan to be made in accordance with the patients’ individual needs. The aim of this study was to evaluate the effects of extraoral markers on the accuracy of digital dentofacial integrations. Facial models were generated using smartphone and stereophotogrammetry. Dental models were generated with and without extraoral markers and were registered to the facial models by matching the teeth or markers (n = 10 in each condition; total = 40). Accuracy of the image integration was measured in terms of general 3D position, occlusal plane, and dental midline deviations. The Mann–Whitney U test and two-way analysis of variance were used to compare results among face-scanning systems and matching methods (α = 0.05). As result, the accuracy of dentofacial registration was significantly affected by the use of artificial markers and different face-scanning systems (p < 0.001). The deviations were smallest in stereophotogrammetry with the marker-based matching and highest in smartphone face scans with the tooth-based matching. In comparison between the two face-scanning systems, the stereophotogrammetry generally produced smaller discrepancies than smartphones.
3

Conley, Quincy. "Attracting Visual Attention in a Digital Age." International Journal of Cyber Behavior, Psychology and Learning 14, no. 1 (November 10, 2024): 1–24. http://dx.doi.org/10.4018/ijcbpl.359336.

Abstract:
The purpose of this study was to determine whether previously established visual attention patterns remained intact during video scenes designed to elicit specific emotions using a novel suite of biosensors. To examine the relationship between visual attention and emotion, data from eye tracking, facial expression recognition (FER), and galvanic skin response (GSR) combined with survey data were used to identify the bottom-up and top-down features of saliency in videos that contributed to their “interestingness.” Using a mixed-methods design and convenience sampling, participants (N = 42) watched 60 video clips designed to evoke different emotional responses (positive, neutral, or negative). The results indicated that using a suite of biosensors to examine the impacts of bottom-up and top-down features of visual attention was effective.
4

Leone, Massimo. "Digital Cosmetics." Chinese Semiotic Studies 16, no. 4 (November 25, 2020): 551–80. http://dx.doi.org/10.1515/css-2020-0030.

Abstract:
The earliest extant depictions of the human face are not simply realistic but represented through specific technologies (means) and techniques (styles). In these representations, the face was probably idealized in order to empower its agency through simulacra. The history of art sees humans become increasingly aware of the impact of technology and technique on the production of visual representations of the face. With photography, and even more so with its digital version, technology is developed, hidden, and miniaturized so as to democratize and market technique. The result, however, a naturalization of technology, is increasingly problematic in the era of algorithms: artificial intelligence absorbs the social bias of its engineers. This is particularly evident in the domain of “digital cosmetics”: successful apps are used to process and share billions of facial images, yet few critically reflect on the aesthetic ideology underpinning them. This is an urgent task for visual, social, and cultural semiotics.
5

Kristanto, Verry Noval, Imam Riadi, and Yudi Prayudi. "Analisa Deteksi dan Pengenalan Wajah pada Citra dengan Permasalahan Visual." JISKA (Jurnal Informatika Sunan Kalijaga) 8, no. 1 (January 30, 2023): 78–89. http://dx.doi.org/10.14421/jiska.2023.8.1.78-89.

Abstract:
Facial recognition is a significant part of criminal investigations because it may be used to identify the offender when the criminal's face is consciously or accidentally recorded on camera or video. However, a majority of these digital photos have poor picture quality, which complicates and lengthens the process of identifying a face image. The purpose of this study is to discover and identify faces in these low-quality digital photographs using the Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) face identification method and the Viola-Jones face recognition method. The success percentage for the labeled face in the wild (LFW) dataset is 63.33%, whereas the success rate for face94 is 46.66%, while LDA is only a maximum of 20% on noise and brightness. One of the names and faces from the dataset is displayed by the facial recognition system. The brightness of the image, where the facial item is located, and any new objects that have entered the scene have an impact on the success rate.
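For readers who want to experiment with the kind of pipeline this abstract describes, the following is a minimal Python sketch: Viola-Jones face detection via an OpenCV Haar cascade, followed by a PCA ("eigenfaces") projection and a nearest-neighbour match for identification. The dataset layout, image size, component count and neighbour count are illustrative assumptions rather than the authors' settings, and LDA could be substituted for the PCA stage.

```python
# Sketch of a Viola-Jones detection + PCA ("eigenfaces") identification pipeline.
# Gallery format and parameter values are assumptions for illustration only.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(path, size=(64, 64)):
    """Detect the largest face in an image and return it as a flat grayscale vector."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
    face = cv2.resize(gray[y:y + h, x:x + w], size)
    return face.flatten().astype(np.float32)

def train(gallery):
    """gallery: list of (image_path, person_name) pairs (hypothetical input format)."""
    X, y = [], []
    for path, name in gallery:
        vec = extract_face(path)
        if vec is not None:
            X.append(vec)
            y.append(name)
    pca = PCA(n_components=min(50, len(X)), whiten=True).fit(X)
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), y)
    return pca, clf

def identify(pca, clf, path):
    """Return the predicted name for the face in an image, or None if no face is found."""
    vec = extract_face(path)
    return None if vec is None else clf.predict(pca.transform([vec]))[0]
```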
6

Bhumika M. N., Amit Chauhan, Pavana M. S., Sinchan Ullas Nayak, Sujana S., Sujith P., Vedashree D., and Fr Jobi Xavier. "Role of Genetic Markers in Deformation of Lip Prints: A Review." UTTAR PRADESH JOURNAL OF ZOOLOGY 44, no. 21 (October 14, 2023): 334–40. http://dx.doi.org/10.56557/upjoz/2023/v44i213704.

Abstract:
Cheiloscopy is an application of lip phenotypes (sub-clinical cleft phenotype/lip whorls) to establish the identity of an individual. Any kind of change in lips can be caused by facial expression/ facial movement and allow an accurate clinical assessment. All along an orthodontic estimation of lip protrusion, lip competence and lip lines are examined by visual inspection and are recorded in the medical notes. It has been shown in molecular studies that initiation and growth of facial primordia are restrained by an interaction between fibroblast growth factors, sonic hedgehog, bone morphogenetic proteins, homeobox genes Barx1 and Msx1, the distal-less homeobox (Dlx) genes, and local retinoic acid gradients. While mesoderm proliferation during facial development may cause inadequate growth of the maxillary, medial and lateral nasal processes. This review study is mainly focused on the determination of the responsible genetic markers for the deformation of lip prints. This summarized study can be helpful in the identification of an individual especially wherever lip prints are recovered from the scene of the crime.
7

Hansen, Mark B. N. "Affect as Medium, or the `Digital-Facial-Image'." Journal of Visual Culture 2, no. 2 (August 2003): 205–28. http://dx.doi.org/10.1177/14704129030022004.

8

Ravelli, Louise J., and Theo Van Leeuwen. "Modality in the digital age." Visual Communication 17, no. 3 (April 13, 2018): 277–97. http://dx.doi.org/10.1177/1470357218764436.

Abstract:
Kress and Van Leeuwen’s book Reading Images: The Grammar of Visual Design (2006[1996]) provides a robust framework for describing modality in visual texts. However, in the digital age, familiar markers of modality are being creatively reconfigured. New technological affordances, including new modes of production, multiple platforms for distribution, and increased user control of modal variables, raise questions about the role of modality in contemporary communication practices and require the framework to be adapted and further developed. This article attempts to set the agenda for such adaptations and, more generally, for rethinking visual modality and its impact in the digital age.
9

Knific Košir, Aja, and Helena Gabrijelčič Tomc. "Visual effects and their importance in the field of visual media creation." Journal of graphic engineering and design 13, no. 2 (June 2022): 5–13. http://dx.doi.org/10.24867/jged-2022-2-005.

Abstract:
The paper presents visual effects and their importance in the creation of visual media and film industry. After defining the field and the term visual effects, the reader is introduced to the techniques and approaches used to create visual effects, i.e., computer-generated Imagery, 3D computer graphics, motion capture, matchmoving, chroma key, rotoscoping, matte painting, and digital compositing. This is followed by a presentation of the history of visual effects from its beginnings to the digital age, taking in the most successful examples of film production such as Terminator, Toy Story, The Matrix, and Star Wars. As an example of the most representative production, the paper includes a more detailed description of the techniques, methods, and approaches used in the Lord of the Rings film trilogy, focusing on the creation of the visual appearance of the Gollum character, his movement, and facial expressions, the creation of crowds with autonomous agents and the introduction of digital duplicates. The review concludes with an overview of trends for the future of the field.
10

Nagtode, Priti. "Research Paper on Transformative Innovations in Identity Verification and Recognition." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 31, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem35159.

Abstract:
Integrating real-time human detection into identity authentication greatly improves both security and user experience. This strategy decreases fraud risk by analysing physiological and behavioural markers such as facial and eye movements. Its implementation in the financial, healthcare, e-commerce, and law enforcement sectors promises to strengthen security measures. Although obstacles exist, the benefits of this human-centered approach are significant, paving the way for a safer digital future. Keywords: Identity verification, biometric authentication, digital security, real-time human detection, fraud prevention, user experience, computer vision, physiological and behavioural clues, facial and vocal patterns, banking, healthcare, e-commerce, law enforcement.
11

McLaughlin, Jason, Shiaofen Fang, Sandra W. Jacobson, H. Eugene Hoyme, Luther Robinson, and Tatiana Foroud. "Interactive Feature Visualization and Detection for 3D Face Classification." International Journal of Cognitive Informatics and Natural Intelligence 5, no. 2 (April 2011): 1–16. http://dx.doi.org/10.4018/jcini.2011040101.

Abstract:
A new visual approach to the surface shape analysis and classification of 3D facial images is presented. It allows the users to visually explore the natural patterns and geometric features of 3D facial scans to provide decision-making information for face classification which can be used for the diagnosis of diseases that exhibit facial characteristics. Using surface feature analysis under a digital geometry analysis framework, the method employs an interactive feature visualization technique that allows interactive definition, modification and exploration of facial features to provide the best discriminatory power for a given classification problem. OpenGL based surface shading and interactive lighting are employed to generate visual maps of discriminatory features to visually represent the salient differences between labeled classes. This technique will be applied to a medical diagnosis application for Fetal Alcohol Syndrome (FAS) which is known to exhibit certain facial patterns.
12

Babu, Shaikh, Ahankare Anand, Vatse Aditya, Waghmare Ajinkya, and Prof M. P. Shinde. "VISUAL CRYPTOGRAPHY: STRENGTHENING BANKING AUTHENTICATION WITH IMAGE PROCESSING." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 10 (October 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem26471.

Abstract:
In today's digital era, we present a multi-factor authentication system that combines Visual Cryptography, Face Authentication, and OTP Verification to fortify banking security. Visual Cryptography splits images into secure shares, Face Authentication verifies unique facial features, and OTP Verification adds an extra layer. The synergy of these factors forms a robust, secure, and user-friendly system, reducing unauthorized access and fraud. This project contributes to cyber-security advancements and improves the banking user experience. In an ever-growing digital banking landscape, this innovative approach ensures data confidentiality and addresses evolving threats. Keywords: Visual Cryptography, Image Processing, Face Recognition, Encryption, Multi-factor Authentication
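As context for the entry above, here is an illustrative sketch of the classic (2,2) visual-cryptography scheme that such systems build on: each secret pixel expands into a 2x2 block of subpixels, a white pixel receives identical random blocks in both shares, and a black pixel receives complementary blocks, so stacking the two shares reveals the secret. This is the textbook construction, not the authors' implementation.

```python
# (2,2) visual secret sharing: splits a binary secret image into two noise-like
# shares that reveal the secret only when overlaid.
import numpy as np

PATTERNS = [np.array([[0, 1], [1, 0]]), np.array([[1, 0], [0, 1]])]  # 1 = black subpixel

def make_shares(secret, rng=None):
    """secret: 2D numpy array with 1 for black pixels and 0 for white."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = secret.shape
    share1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    share2 = np.zeros_like(share1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]
            share1[2*i:2*i+2, 2*j:2*j+2] = p
            # complementary block for a black pixel, identical block for a white pixel
            share2[2*i:2*i+2, 2*j:2*j+2] = (1 - p) if secret[i, j] else p
    return share1, share2

def stack(share1, share2):
    """Simulate physically overlaying the two transparencies (black wins)."""
    return np.maximum(share1, share2)
```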
13

Wang, Zhi, Rui Zhou, Fu Feng Li, Peng Qian, and Zhu Mei Sun. "Facial Diagnosis of CHB Clinical Syndromes Based on Computer Technology." Applied Mechanics and Materials 423-426 (September 2013): 2968–72. http://dx.doi.org/10.4028/www.scientific.net/amm.423-426.2968.

Abstract:
To objectively describe the facial features of different clinical syndromes of chronic hepatitis B (CHB) with the computer visual and extraction technologies and analyze the changes of facial indexes in facial diagnosis of Traditional Chinese Medicine (TCM). Methods: We used Chinese medicine face consultation digital detection system to acquire face-on photos and analyze the complexion and gloss features. Results: (1) Compared with the normal people, the patients with CHB have different facial indexes. (2)There are differences for facial indexes of patients with CHB of varied clinical syndromes. Conclusion: We extracted objective facial indexes by computer technology, which is a useful method for clinical diagnosis of disease and provides a new thought and method for TCM diagnosis.
14

Corte-Real, Ana, Rita Ribeiro, Pedro Armelim Almiro, and Tiago Nunes. "Digital Orofacial Identification Technologies in Real-World Scenarios." Applied Sciences 14, no. 13 (July 5, 2024): 5892. http://dx.doi.org/10.3390/app14135892.

Abstract:
Three-dimensional technology using personal data records has been explored for human identification. The present study aimed to explore two methodologies, photography and orofacial scanning, for assessing orofacial records in forensic scenarios, highlighting their impact on human identification. A pilot and quasi-experimental study was performed using Canon 5D-Full Frame equipment (Tokyo, Japan) and an i700 scanner (Medit, Lusobionic, Portugal) (Seoul, Republic of Korea) with Medit Scan for Clinics (MSC) and Smile Design software (V3.3.2). The sample included living patients (n = 10) and individuals in forensic cases (n = 10). The study was divided into two complementary phases: (i) data collection using 2D and 3D technologies and (ii) visual comparison by superimposition procedures, 3D dental images with 3D facial records (3D–3D), and 2D photography with screen printing of 3D facial records (2D-3S). Statistical analyses were performed using descriptive procedures (Likert scale) and the Mann–Whitney U test. The Mann–Whitney U test comparing the data (n = 220 records) from living individuals and those in forensic cases identified statistically significant differences in the performance of the photographic methods for evaluating intraoral mineralisation (p = 0.004), intraoral soft tissues (p = 0.016), intraoral distortion (p = 0.005) and the scan methods for intraoral extra devices (p = 0.003) and extraoral soft tissues (p = 0.005). A visual comparison (n = 40) allowed 3D–3D superimposition. Additionally, 2D-3S superimposition qualitatively identified the middle third of the face as the corporal area within the anatomical features required for successful surgery. In conclusion, the present study presented evidence-based data suggesting that the IO scan method, as an emergent technology, should be explored as a valuable tool in forensic facial identification in real-world scenarios.
15

Garcia-Lara, Luis F., and Ignacio G. Bugueno-Cordova. "Transcultural language, native Chilean peoples and a new AI-based artistic-cultural expression." HUMAN REVIEW. International Humanities Review / Revista Internacional de Humanidades 19, no. 5 (March 8, 2023): 1–10. http://dx.doi.org/10.37467/revhuman.v19.4932.

Abstract:
This work aims to rescue, transcribe and create new artistic and cultural expressions through the use of native peoples’ historical visual recordings, integrating intelligent technologies. For this purpose, a Chilean native peoples’ digital repository is collected, in order to apply a Digital Humanities-based methodology. From the chosen material, portraits are selected and recoloured through an AI-based model; the facial mesh is constructed using a facial landmark detector; the points of the mesh are reconstructed by a Delaunay triangulation; and finally an additive manufacturing process is applied. These physical pieces thus allow comparison of the native peoples’ own physiognomies, creating new cultural expressions.
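The "facial landmark detector followed by Delaunay triangulation" step mentioned in the abstract above can be reproduced roughly as follows. This sketch assumes MediaPipe FaceMesh and SciPy; the original work may have used a different detector and mesh pipeline.

```python
# Detect facial landmarks and triangulate them into a mesh that could later be
# exported for 3D printing. Detector choice and parameters are assumptions.
import cv2
import numpy as np
import mediapipe as mp
from scipy.spatial import Delaunay

def face_triangulation(image_path):
    """Return (Nx2 landmark array in pixel coordinates, Mx3 triangle index array)."""
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesher:
        result = mesher.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        raise ValueError("no face detected")
    pts = np.array([(lm.x * w, lm.y * h)
                    for lm in result.multi_face_landmarks[0].landmark])
    tri = Delaunay(pts)           # 2D triangulation of the landmark cloud
    return pts, tri.simplices     # the triangles can then be turned into a printable mesh
```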
16

Naben, Maria Fridolin, Januarius Mujiyanto, and Abdurrachman Faridi. "The Existence of Pragmatic Markers in Americas’ Got Talent Judges’ Commentaries." English Education Journal 9, no. 3 (June 28, 2019): 327–33. http://dx.doi.org/10.15294/eej.v9i3.30965.

Abstract:
This study attempts to explain the use of pragmatic markers in Americas’ Got Talent judges’ commentaries. The aims of the study are to analyze the existence of verbal and visual pragmatic markers and explain their relationship. The verbal pragmatic markers are categorized into four types based on the typology of pragmatic markers proposed by Fraser (1996). They are basic markers, commentary markers, parallel markers, and discourse markers. While the visual pragmatic markers divided into thinking face, pointing with gaze and hand movement and smile following the pragmatic function facial gestures from Bavelas & Chovil (2013). This research employed descriptive method with qualitative approach. The object of the study is judges of Americas’ Got Talent season 13 which consist of Simon Cowell, Heidi Klum, Mell B, and Howie Mendel. The study revealed that the judges used the basic markers to express the main message of the comment, commentary markers to express the message contains in the comment toward the main message, parallel markers to express the complement message toward the main message and the discourse markers to express the relation between the main message and the other utterance. The visual pragmatic markers performed also signals certain message related to verbal markers. The judges performed the thinking face to signal the word search, pointing with gaze and hand movement to emphasize the messages convey in utterance and smile to signal pleasure. This research could provide an understanding of EFL learners in using pragmatic markers as a way to improve communication strategy in communication.
17

Jiang, Yaqian, and Camilla Vásquez. "Exploring local meaning-making resources." Pragmatics of Internet Memes 3, no. 2 (November 19, 2019): 260–82. http://dx.doi.org/10.1075/ip.00042.jia.

Abstract:
Abstract This study examines various combinations of visual and textual meaning-making resources in a popular Chinese meme. The meme features an exogenous image – the grinning facial expression of a U.S. wrestler, D’Angelo Dinero – that has been recontextualized into numerous other visual texts, to create semiotic ensembles with local meanings, which are then distributed across Chinese social media platforms. We analyzed 60 of these image macros, and our findings show that local meanings are created when Dinero’s facial expression is blended with visual references to Chinese digital culture, Chinese popular culture, Chinese social class issues, Chinese politics, and Chinese institutions. The majority of textual elements in the image macros are Chinese; however, the handful of examples that also include other languages typically involve multilingual wordplay and carnivalesque themes. We argue that although the multivalency of the wrestlers’ facial expression invites interpretations of a wide range of affective meanings, an overarching rebellious or transgressive stance is consistent across individual texts.
18

Zoss, Gaspard, Prashanth Chandran, Eftychios Sifakis, Markus Gross, Paulo Gotardo, and Derek Bradley. "Production-Ready Face Re-Aging for Visual Effects." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–12. http://dx.doi.org/10.1145/3550454.3555520.

Abstract:
Photorealistic digital re-aging of faces in video is becoming increasingly common in entertainment and advertising. But the predominant 2D painting workflow often requires frame-by-frame manual work that can take days to accomplish, even by skilled artists. Although research on facial image re-aging has attempted to automate and solve this problem, current techniques are of little practical use as they typically suffer from facial identity loss, poor resolution, and unstable results across subsequent video frames. In this paper, we present the first practical, fully-automatic and production-ready method for re-aging faces in video images. Our first key insight is in addressing the problem of collecting longitudinal training data for learning to re-age faces over extended periods of time, a task that is nearly impossible to accomplish for a large number of real people. We show how such a longitudinal dataset can be constructed by leveraging the current state-of-the-art in facial re-aging that, although failing on real images, does provide photoreal re-aging results on synthetic faces. Our second key insight is then to leverage such synthetic data and formulate facial re-aging as a practical image-to-image translation task that can be performed by training a well-understood U-Net architecture, without the need for more complex network designs. We demonstrate how the simple U-Net, surprisingly, allows us to advance the state of the art for re-aging real faces on video, with unprecedented temporal stability and preservation of facial identity across variable expressions, viewpoints, and lighting conditions. Finally, our new face re-aging network (FRAN) incorporates simple and intuitive mechanisms that provides artists with localized control and creative freedom to direct and fine-tune the re-aging effect, a feature that is largely important in real production pipelines and often overlooked in related research work.
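As a point of reference for the architecture the paper builds on, below is a deliberately small U-Net sketch in PyTorch for image-to-image translation. It is not the published FRAN network: the real model's depth, channel counts, losses and conditioning inputs (such as input and target age) are not reproduced here.

```python
# Tiny two-level U-Net with skip connections, the general shape of network used
# for image-to-image translation tasks such as re-aging. Sizes are illustrative.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.enc1 = block(in_ch, 32)
        self.enc2 = block(32, 64)
        self.bott = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.out = nn.Conv2d(32, out_ch, 1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        # input height/width must be divisible by 4 for this toy configuration
        e1 = self.enc1(x)                                      # full resolution
        e2 = self.enc2(self.pool(e1))                          # 1/2 resolution
        b = self.bott(self.pool(e2))                           # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.out(d1)                                    # per-pixel output

# Training would pair synthetic (input-age face, target-age face) images and
# minimise, e.g., an L1 loss between TinyUNet(input) and the target image.
```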
19

Yarovaya, N. Y. "Animation in digital media." Vestnik VGIK 16, no. 1(59) (May 8, 2024): 145–57. http://dx.doi.org/10.69975/2074-0832-2024-59-1-145-157.

Abstract:
The article is devoted to the creation of animated works in the digital environment. Modern technologies develop new approaches to the formation of communication between the viewer and the work. Analysis of the new creation technologies for computer animation reveals that animation performed on game engines with the use of motion capture and computer graphics transforms the production process, with characters “animated” via digitizing of the actor’s movement. Actor's facial expressions and movements conveyed to the digital character depend on the actor's skill in portraying their character's personality. This develops limitations of creating animated plastic movement, exaggeration, hyperbolization, and conventionality, thus making animators look for new approaches to visual imagery, as well as opportunities to improve the means of artistic expression for the modern computer animation.
20

Allison, Tanine. "Race and the digital face: Facial (mis)recognition in Gemini Man." Convergence: The International Journal of Research into New Media Technologies 27, no. 4 (July 29, 2021): 999–1017. http://dx.doi.org/10.1177/13548565211031041.

Abstract:
Ang Lee’s 2019 film Gemini Man features the most realistic digital human to grace the cinematic screen, specifically a computer-generated version of young Will Smith who battles his more aged self throughout the film. And for the first time in film history, this photorealistic digital human is Black. This essay explores why this groundbreaking achievement has not been acknowledged or celebrated by the film's production or publicity teams. I argue that Will Smith’s particular “post-racial” identity mediates contemporary concerns related to the racialized implications of facial recognition and other digital imaging technologies, as well as to the future of the film industry in the digital age. In the second half of the essay, I examine how the appearance of Will Smith in deepfake parody videos illustrates how race circulates on screens of various media formats. I conclude with a call to use digital visual effects, deepfake tools, and other advanced technologies to further racial justice instead of repeating the problematic usage of the past.
21

Abbott, Michael, and Charles Forceville. "Visual representation of emotion in manga: Loss of control is Loss of hands in Azumanga Daioh Volume 4." Language and Literature: International Journal of Stylistics 20, no. 2 (May 2011): 91–112. http://dx.doi.org/10.1177/0963947011402182.

Abstract:
Comics and manga have many ways to convey the expression of emotion, ranging from exaggerated facial expressions and hand/arm positions to the squiggles around body parts that Kennedy (1982) calls ‘pictorial runes’. According to Ekman at least some emotions — happiness, surprise, fear, sadness, anger, disgust — are universal, but this is not necessarily the case for their expression in comics and manga. While many of the iconic markers and pictorial runes that Forceville (2005) charted in an Asterix album to indicate that a character is angry occur also in Japanese manga, Shinohara and Matsunaka also found markers and runes that appear to be typical for manga. In this article we examine an unusual signal conveying that a character is emotionally affected in Volume 4 of Kiyohiko Azuma’s Azumanga Daioh: the ‘loss of hands’. Our findings (1) show how non-facial information helps express emotion in manga; (2) demonstrate how hand loss contributes to the characterization of Azuma’s heroines; (3) support the theorization of emotion in Conceptual Metaphor Theory.
22

Fink, Bernhard, and Ian Penton-Voak. "Evolutionary Psychology of Facial Attractiveness." Current Directions in Psychological Science 11, no. 5 (October 2002): 154–58. http://dx.doi.org/10.1111/1467-8721.00190.

Abstract:
The human face communicates an impressive number of visual signals. Although adults' ratings of facial attractiveness are consistent across studies, even cross-culturally, there has been considerable controversy surrounding attempts to identify the facial features that cause faces to be judged attractive or unattractive. Studies of physical attractiveness have attempted to identify the features that contribute to attractiveness by studying the relationships between attractiveness and (a) symmetry, (b) averageness, and (c) nonaverage sexually dimorphic features (hormone markers). Evolutionary psychology proposes that these characteristics all pertain to health, suggesting that humans have evolved to view certain features as attractive because they were displayed by healthy individuals. However, the question remains how single features that are considered attractive relate to each other, and if they form a single ornament that signals mate quality. Moreover, some researchers have recently explained attractiveness preferences in terms of individual differences that are predictable. This article briefly describes what is currently known from attractiveness research, reviews some recent advances, and suggests areas for future researchers' attention.
23

Švegar, Domagoj. "What does facial symmetry reveal about health and personality?" Polish Psychological Bulletin 47, no. 3 (September 1, 2016): 356–65. http://dx.doi.org/10.1515/ppb-2016-0042.

Abstract:
Abstract Over the last two decades, facial symmetry has been intensively researched. The present article aims to summarize empirical research concerning relations between facial symmetry and health and facial symmetry and personality. A systematic review of the literature shows that facial symmetry is one of the most influential visual markers of attractiveness and health, important for mate selection, while asymmetry can be considered a consequence of an individual’s inability to resist environmental and genetic stressors during development of the organism. However, in spite of evidence suggesting that preferences for facial symmetry are deeply rooted in our evolutionary history, a strong connection between facial symmetry and health is demonstrated only in studies measuring perceived health, while there is only scarce evidence corroborating the link between symmetry and actual health. The interconnections between facial symmetry and personality have not yet been extensively researched. Less than a dozen studies have addressed that issue and they have reached different conclusions. Some evidence suggests that facial symmetry signals personality attributes that indicate good psychological health, while other findings imply that pro-social personality traits negatively correlate with facial symmetry.
24

Aginor, Spanoulis, Yalatia Papastergiou, Panagiota Mitropoulou, and Georgina Burke. "WED 242 A challenging case of periorbital swelling." Journal of Neurology, Neurosurgery & Psychiatry 89, no. 10 (September 13, 2018): A35.2—A35. http://dx.doi.org/10.1136/jnnp-2018-abn.122.

Abstract:
A 54-year-old lady was referred with an eighteen-month history of slowly progressive, asymmetric, periorbital and facial oedema. She was thought to have inflammatory orbital pseudotumour. During this time, she had also developed a dry mouth, joint pains and enlarged salivary glands. A salivary gland ultrasound scan was suggestive of Sjogren’s disease although antinuclear antibody and rheumatoid factor were negative. She had recently been prescribed omeprazole for mild dysphagia and hoarse voice from vocal cord oedema. Past medical history included Hashimoto thyroiditis for which she was taking levothyroxine. Clinical examination revealed peri-orbital and facial oedema causing proptosis of the right globe and complete lid closure. Visual acuity, eye movements and visual fields of the left eye were normal. Her voice was hoarse and she had mouth ulcers. She had a widespread erythematous rash that was thought to be a drug reaction to omeprazole. Apart from mild lymphopenia and mildly deranged liver function, blood tests, including inflammatory markers and thyroid function, were unremarkable. MRI of the brain and orbits revealed diffuse oedema of facial structures, including the orbital muscles. A CT body scan was unremarkable. A temporalis muscle biopsy confirmed a high-grade NK/T cell lymphoma.
25

Yang, Yang, and Dingguo Yu. "Short Video Copyright Storage Algorithm Based on Blockchain and Expression Recognition." International Journal of Digital Multimedia Broadcasting 2022 (February 27, 2022): 1–11. http://dx.doi.org/10.1155/2022/8827815.

Abstract:
Blockchain technology is widely used in digital rights protection. Traditional digital rights protection schemes are not only inefficient and highly centralized but also carry the risk of records being modified. Because of its own characteristics, a blockchain cannot store all original files of digital resources in full. In this paper, a convolutional neural network algorithm based on a visual priority rule (CNNVP) is proposed. The algorithm recognizes facial expressions in the original files of digital resources (face-centred short videos), extracts facial expression features accurately, and turns these features into log files that can represent the original files. The paper then proposes a short-video copyright storage algorithm based on blockchain and facial expression recognition, and stores the log file in the blockchain. This approach improves the efficiency of short-video copyright storage, reduces the degree of storage centralization, and eliminates the risk of the copyright record being easily modified. Moreover, applying deep learning to the short video both preserves the privacy of the stored evidence and makes blockchain storage of video information feasible. Experiments show that the proposed algorithm is more efficient than traditional copyright storage methods and can provide technical support to media resource management departments.
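To make the storage idea above concrete, here is a minimal, hypothetical sketch of hash-chaining a compact expression-feature log instead of the full video: each block records the SHA-256 digest of the log plus the previous block's hash, so later modification of the log becomes detectable. The log schema and block fields are assumptions, and a production system would use an actual blockchain platform rather than this toy ledger.

```python
# Toy hash-chained ledger for expression-feature logs (illustrative only).
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def new_block(prev_hash: str, feature_log: dict) -> dict:
    """feature_log: e.g. {"video_id": ..., "frame_expressions": [...]} (hypothetical schema)."""
    payload = json.dumps(feature_log, sort_keys=True).encode()
    block = {
        "timestamp": time.time(),
        "prev_hash": prev_hash,
        "log_digest": sha256(payload),   # commits to the expression-feature log
    }
    block["block_hash"] = sha256(json.dumps(block, sort_keys=True).encode())
    return block

def verify_chain(chain) -> bool:
    """Check that every block still links to its predecessor's hash."""
    return all(chain[i]["prev_hash"] == chain[i - 1]["block_hash"]
               for i in range(1, len(chain)))
```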
26

Miranda, Luma da Silva, Carolina Gomes da Silva, João Antônio de Moraes, and Albert Rilliard. "Visual and auditory cues of assertions and questions in Brazilian Portuguese and Mexican Spanish." Journal of Speech Sciences 9 (September 9, 2020): 73–92. http://dx.doi.org/10.20396/joss.v9i00.14958.

Abstract:
The aim of this paper is to compare the multimodal production of questions in two different language varieties: Brazilian Portuguese and Mexican Spanish. Descriptions of the auditory and visual cues of two speech acts, assertions and questions, are presented based on Brazilian and Mexican corpora. The sentence “Como você sabe” was produced as a yes-no (echo) question and an assertion by ten speakers (five male) from Rio de Janeiro, and the sentence “Apaga la tele” was produced as a yes-no question and an assertion by five speakers (three male) from Mexico City. The results show that, whereas the Brazilian Portuguese and Mexican Spanish assertions are produced with different F0 contours and different facial expressions, questions in both languages are produced with specific F0 contours but similar facial expressions. The outcome of this comparative study suggests that lowering the eyebrows, tightening the lid and wrinkling the nose can be considered question markers in both language varieties.
27

Ali Saber Amsalam, Ali Al-Naji, Ammar Yahya Daeef, and Javaan Chahl. "Computer Vision System for Facial Palsy Detection." Journal of Techniques 5, no. 1 (March 31, 2023): 44–51. http://dx.doi.org/10.51173/jt.v5i1.1133.

Abstract:
Facial palsy (FP) is a disorder that affects the seventh facial nerve, which makes the patient unable to control facial movements and expressions with other vital activities. It affects one side of the face, and it is usually diagnosed by the asymmetry of the two sides of the face through visual inspection by a doctor. However, the visual inspection is human-based, which is prone to errors because the doctor is exposed to omission due to fatigue and work stress. Therefore, it is important to develop a new method for detecting FP through artificial intelligence and use a more accurate computerized system to reduce the effort and cost of patients and increase the accuracy of diagnosis. This work aims to establish a safe, useful and high-accuracy diagnostic system for FP that can be used by the patient and proposes to detect FP using a digital camera and deep learning techniques automatically. The system could be used by the patient himself at home without needing to visit the hospital. The proposed system trained 570 images, including 200 images of FP palsy. The proposed FP system achieved an accuracy of 98%. This confirms the effectiveness of the proposed system and makes it an efficient medical examination tool for detecting FP.
28

Chen, Jian. "Passive Infrared Markers for Indoor Robotic Positioning and Navigation." Electronic Imaging 2020, no. 6 (January 26, 2020): 13–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.6.iriacv-013.

Abstract:
By using a new materials system, we developed invisible passive infrared markers that can take on various visual foreground patterns and colors, including white. The material can be coated over many different surfaces such as paper, plastic, wood, metal, and others. Dual-purpose signs are demonstrated where the visual foreground is for human view while the infrared background is for machine view. By hiding digital information in the infrared spectral range, we can enable fiducial markers to enter public spaces without introducing any intrusive visual features for humans. These fiducial markers are robust and easy to detect using off-the-shelf near infrared cameras to assist robot positioning and object identification. This can reduce the barrier for low-cost robots, that are currently deployed in warehouses and factories, to enter offices, stores, and other public spaces and to work alongside with people.
29

Samad, Manar D., Norou Diawara, Jonna L. Bobzien, John W. Harrington, Megan A. Witherow, and Khan M. Iftekharuddin. "A Feasibility Study of Autism Behavioral Markers in Spontaneous Facial, Visual, and Hand Movement Response Data." IEEE Transactions on Neural Systems and Rehabilitation Engineering 26, no. 2 (February 2018): 353–61. http://dx.doi.org/10.1109/tnsre.2017.2768482.

30

Agustia, Km Tri Sutrisna, Putu Chrisma Dewi, and Ida Bagus Kurniawan. "The Semiotic Visual Analysis Of Non-Verbal Language In Hotel Advertisements." International Journal of Linguistics and Discourse Analytics (ijolida) 5, no. 1 (September 30, 2023): 24–33. http://dx.doi.org/10.52232/ijolida.v5i1.101.

Abstract:
Numerous digital advertisements deviate from the use of nonverbal communication techniques. Instead of conveying information about a particular product, it provides a misleading and incorrect interpretation. Using the theory of visual semiotics, this study seeks to interpret and comprehend the forms of non-verbal language used in digital advertisements to convey information about specific hotel products. The primary objectives of this study are as follows: (1) to describe the structure of advertising components as nonverbal language in digital advertising for hotels marketing in Bali; (2) to describe the relationship between signs and their references as signifiers and signs in digital advertising for hotel marketing; and (3) to describe and provide an overview of recommendations regarding the appropriate role of meaning in digital advertising for hotel marketing. This research design employs qualitative methods for collecting data from research participants in the form of Instagram images. Nonverbal behavior observed in digital advertising, such as facial expressions, gestures, body language, movement, contact, and appearance, is the subject of study. To provide an overview of the applicability of semiotics to the practice of communicating specific information through digital advertising for hotel marketing. This study's findings include the incorporation of semiotic science variables into the design of digital advertisements for hotel marketing. To avoid misunderstandings between advertisers and their intended audiences, it is possible to establish a perfect match between the desired outcome and the intended message. This research also seeks to expand linguistics into emerging disciplines, such as the business world and commercial advertising
31

Pellitteri, Federica, Luca Brucculeri, Giorgio Alfredo Spedicato, Giuseppe Siciliani, and Luca Lombardo. "Comparison of the accuracy of digital face scans obtained by two different scanners:." Angle Orthodontist 91, no. 5 (April 7, 2021): 641–49. http://dx.doi.org/10.2319/092720-823.1.

Abstract:
ABSTRACT Objectives To compare the degree of accuracy of the Face Hunter facial scanner and the Dental Pro application for facial scanning, with respect to both manual measurements and each other. Materials and Methods Twenty-five patients were measured manually and scanned using each device. Six reference markers were placed on each subject's face at the cephalometric points Tr, Na′, Prn, Pog′, and L–R Zyg. Digital measurement software was used to calculate the distances between the cephalometric reference points on each of the scans. Geomagic X Control was used to superimpose the scans, automatically determining the best-fit alignment and calculating the percentage of overlapping surfaces within the tolerance ranges. Results Individual comparisons of the four distances measured anthropometrically and on the scans yielded an intraclass correlation coefficient index greater than .9. The t-test for matched samples yielded a P value below the significance threshold. Right and left cheeks reached around 60% of the surface, with a margin of error between 0.5 mm and −0.5 mm. The forehead was the only area in which most of the surface fell within the poorly reproducible range, presenting values out of tolerance of more than 20%. Conclusions Three-dimensional scans of the facial surface provide an excellent analytical tool for clinical evaluation; it does not appear that one or the other of the measuring tools is systematically more accurate, and the cheeks are the area with the highest average percentage of surface in the highly reproducible range.
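A simplified numerical analogue of the "best-fit alignment plus percentage of surface within tolerance" analysis reported above is sketched below: a rigid Kabsch alignment of corresponding points from two scans, followed by the fraction of deviations within ±0.5 mm. Geomagic X Control's proprietary surface comparison is considerably more involved; this is only an assumption-laden illustration of the underlying idea.

```python
# Rigid best-fit alignment (Kabsch) of corresponding 3D points from two scans,
# then the share of points whose residual deviation lies within a tolerance.
import numpy as np

def best_fit_rigid(A, B):
    """Rotation R and translation t minimising ||R@a + t - b|| over corresponding rows of A, B (Nx3)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def percent_within_tolerance(A, B, tol_mm=0.5):
    """Percentage of aligned points of A whose distance to B stays within tol_mm."""
    R, t = best_fit_rigid(A, B)
    deviations = np.linalg.norm((A @ R.T + t) - B, axis=1)
    return 100.0 * np.mean(deviations <= tol_mm)
```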
32

Abdullah, Johari Yap, Cicero Moraes, Mokhtar Saidin, Zainul Ahmad Rajion, Helmi Hadi, Shaiful Shahidan, and Jafri Malin Abdullah. "Forensic Facial Approximation of 5000-Year-Old Female Skull from Shell Midden in Guar Kepah, Malaysia." Applied Sciences 12, no. 15 (August 5, 2022): 7871. http://dx.doi.org/10.3390/app12157871.

Abstract:
Forensic facial approximation was applied to a 5000-year-old female skull from a shell midden in Guar Kepah, Malaysia. The skull was scanned using a computed tomography (CT) scanner in the Radiology Department of the Hospital Universiti Sains Malaysia using a Light Speed Plus scanner with a 1 mm section thickness in spiral mode and a 512 × 512 matrix. The resulting images were stored in Digital Imaging and Communications in Medicine (DICOM) format. A three-dimensional (3D) model of the skull was obtained from the CT scan data using Blender’s 3D modelling and animation software. After the skull was reconstructed, it was placed on the Frankfurt plane, and soft tissue thickness markers were placed based on 34 Malay CT scan data of the nose and lips. The technique based on facial approximation by data extracted from facial measurements of living individuals showed greater anatomical coherence when combined with anatomical deformation. The facial approximation in this study will pave the way towards understanding face prediction based on skull structures, soft tissue prediction rules, and soft tissue thickness descriptors.
33

Mondragón, Edward Acero, and Eduardo Andrés Tuta Quintero. "Fluctuating facial asymmetry and visual perceptive background during a tissue diagnostic histopathological." Romanian Journal of Neurology 21, no. 3 (September 30, 2022): 225–29. http://dx.doi.org/10.37897/rjn.2022.3.5.

Abstract:
Background. Fluctuating facial asymmetry (FFA) is accentuated throughout life and has perceptual psychological implications; tissue diagnosis shows interindividual differences at first glance, for example, in the number of fixations, but no reports are available regarding the visual perceptual background in relation to individuals with less or more FFA during the tissue diagnostic task. Materials and methods. In medical students, including 13 men (SD = 19.4 years) and 8 women (SD = 18.1 years), FFA was determined as follows: n = 9 <FFA and n = 12 >FFA. The entire population performed tissue diagnostic analysis of normal skin and skin with squamous cell carcinoma pathology from digital images to establish the duration and number of fixations and the total time taken for diagnosis. Results. Individuals with > FFA show significant differences in the visual perceptual background during diagnostic analysis of normal and pathological skin, which are magnified by the fixation duration and the number of fixations when the tissue diagnosis is pathological. Conclusion. Compared to those with lower FFA, medical students with greater FFA performing tissue diagnosis of pathological tissue have visual perceptual backgrounds characterized by less time spent in each fixation but with more fixations.
34

Sheth, Kinjal, and Vishal Vora. "Transfer Learning Based Fine-Tuned Novel Approach for Detecting Facial Retouching." Iraqi Journal for Electrical and Electronic Engineering 20, no. 1 (November 11, 2023): 84–94. http://dx.doi.org/10.37917/ijeee.20.1.9.

Abstract:
Facial retouching, also referred to as digital retouching, is the process of modifying or enhancing facial characteristics in digital images or photographs. While it can be a valuable technique for fixing flaws or achieving a desired visual appeal, it also gives rise to ethical considerations. This study involves categorizing genuine and retouched facial images from the standard ND-IIITD retouched faces dataset using a transfer learning methodology. The impact of different primary optimization algorithms—specifically Adam, RMSprop, and Adadelta—utilized in conjunction with a fine-tuned ResNet50 model is examined to assess potential enhancements in classification effectiveness. Our proposed transfer learning ResNet50 model demonstrates superior performance compared to other existing approaches, particularly when the RMSprop and Adam optimizers are employed in the fine-tuning process. By training the transfer learning ResNet50 model on the ND-IIITD retouched faces dataset with the "ImageNet" weight, we achieve a validation accuracy of 98.76%, a training accuracy of 98.32%, and an overall accuracy of 98.52% for classifying real and retouched faces in just 20 epochs. Comparative analysis indicates that the choice of optimizer during the fine-tuning of the transfer learning ResNet50 model can further enhance the classification accuracy.
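The fine-tuning setup the abstract describes can be approximated as follows with Keras: an ImageNet-pretrained ResNet50 backbone, a new binary head for real versus retouched faces, and an interchangeable optimizer (Adam, RMSprop or Adadelta). The image size, head layers and learning rates here are illustrative guesses, not the authors' exact configuration.

```python
# Transfer-learning sketch: fine-tune a pretrained ResNet50 for binary
# real-vs-retouched classification with a selectable optimizer.
import tensorflow as tf

def build_model(optimizer_name="rmsprop", input_shape=(224, 224, 3)):
    base = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = True  # fine-tune the pretrained backbone
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # real (0) vs. retouched (1)
    ])
    optimizers = {
        "adam": tf.keras.optimizers.Adam(1e-4),
        "rmsprop": tf.keras.optimizers.RMSprop(1e-4),
        "adadelta": tf.keras.optimizers.Adadelta(1.0),
    }
    model.compile(optimizer=optimizers[optimizer_name],
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_model("adam")
# model.fit(train_ds, validation_data=val_ds, epochs=20)   # train_ds/val_ds assumed to exist
```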
35

Swoboda, Danya, Jared Boasen, Pierre-Majorique Léger, Romain Pourchon, and Sylvain Sénécal. "Comparing the Effectiveness of Speech and Physiological Features in Explaining Emotional Responses during Voice User Interface Interactions." Applied Sciences 12, no. 3 (January 25, 2022): 1269. http://dx.doi.org/10.3390/app12031269.

Abstract:
The rapid rise of voice user interface technology has changed the way users traditionally interact with interfaces, as tasks requiring gestural or visual attention are swapped by vocal commands. This shift has equally affected designers, required to disregard common digital interface guidelines in order to adapt to non-visual user interaction (No-UI) methods. The guidelines regarding voice user interface evaluation are far from the maturity of those surrounding digital interface evaluation, resulting in a lack of consensus and clarity. Thus, we sought to contribute to the emerging literature regarding voice user interface evaluation and, consequently, assist user experience professionals in their quest to create optimal vocal experiences. To do so, we compared the effectiveness of physiological features (e.g., phasic electrodermal activity amplitude) and speech features (e.g., spectral slope amplitude) to predict the intensity of users’ emotional responses during voice user interface interactions. We performed a within-subjects experiment in which the speech, facial expression, and electrodermal activity responses of 16 participants were recorded during voice user interface interactions that were purposely designed to elicit frustration and shock, resulting in 188 analyzed interactions. Our results suggest that the physiological measure of facial expression and its extracted feature, automatic facial expression-based valence, is most informative of emotional events lived through voice user interface interactions. By comparing the unique effectiveness of each feature, theoretical and practical contributions may be noted, as the results contribute to voice user interface literature while providing key insights favoring efficient voice user interface evaluation.
36

Rushton, Richard. "Response to Mark B.N. Hansen’s ‘Affect as Medium, or the “Digital-Facial-Image”’." Journal of Visual Culture 3, no. 3 (December 2004): 353–57. http://dx.doi.org/10.1177/1470412904048567.

37

Behiery, Valerie. "Muslim Women Visual Artist’ Online Organizations." HAWWA 13, no. 3 (October 15, 2015): 297–322. http://dx.doi.org/10.1163/15692086-12341284.

Abstract:
This study examines two American online organizations established as networks of support for Muslim women artists: Muslim Women in the Arts (mwia) and the International Muslimah Artists’ Network (iman). While the broader context is to explore the intersections of three important identity markers, namely, gender (woman), occupation (artist) and religion (Muslim) often overlooked in identity theory (Peek 2005), the more specific aim is to probe the effects of these digital culturescapes on Muslim women’s artistic agency and success. The data collected from interviews with member artists confirm the necessity of such organizations, offer suggestions on how they could be improved and outline the difficulties they face due to their largely volunteer and online nature.
38

Kimura, Namiko, Hyoungseop Kim, Takako Okawachi, Takao Fuchigami, Masahiro Tezuka, Toshiro Kibe, Muhammad Subhan Amir, et al. "Pilot Study of Visual and Quantitative Image Analysis of Facial Surface Asymmetry in Unilateral Complete Cleft Lip and Palate." Cleft Palate-Craniofacial Journal 56, no. 7 (December 26, 2018): 960–69. http://dx.doi.org/10.1177/1055665618819645.

Abstract:
Objective: To visualize and quantitatively analyze facial surface asymmetry following primary cleft lip repair in patients with unilateral cleft lip and palate (UCLP) and to compare this with noncleft controls. Design: Retrospective comparative study. Patients: Twenty-two patients with complete UCLP who underwent primary lip repair from 2009 to 2013 were enrolled in this study. The preserved 3-dimensional (3D) data of 23 healthy Japanese participants with the same age were used as controls. Interventions: All patients had received primary labioplasty in accordance with Cronin triangular flap method with orbicular oris muscle reconstruction. Main Outcome Measures: Shadow and zebra images established from moiré images, which were reconstructed from 3D facial data using stereophotogrammetry, were bisected and reversed by the symmetry axes (the middle line of the face). The discrepancies of the gravity and density between cleft and noncleft sides in 2 regions of interest, facial and lip areas, were then calculated and compared with those of healthy participants. Results: In the UCLP group, the mean discrepancies of gravity on shadow and zebra images were 1.76 ± 0.70 and 2.63 ± 1.72 pixels, respectively, in the facial area and 1.31 ± 0.36 and 3.83 ± 2.08 pixels, respectively, in the lip area. There was a significant difference in the mean discrepancies of gravity and density on zebra images in the lip area between the UCLP and control groups. Conclusions: Our image analysis of digital facial surface asymmetry in patients with UCLP provides visual and quantitative information, and it may contribute to improvements in muscle reconstruction on cleft lip repair.
39

Nadhan, Archana S., Dioline Sara, Boosi Shyamala, and Chetana Tukkoji. "Design a model of Image Restoration using AI in Digital Image Processing." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 862–65. http://dx.doi.org/10.17762/turcomat.v12i5.1497.

Abstract:
Image restoration is the process of taking a distorted or noisy image and producing an approximation of the original, clear image. Defocus, motion blur and noise are common forms of distortion. Restoration can be performed by inverting the degradation described by the point spread function (PSF), which models how a point source is imaged and blurred; this model can then be used to recover detail lost to the blurring process. Modern artificial intelligence (AI) applied to image processing includes facial recognition, object recognition and detection, video and image analysis, and visual search, and it supports the development of intelligent applications in digital image processing.
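As a hedged illustration of PSF-based restoration (the classical deconvolution idea the abstract refers to, not the paper's specific AI model), the following sketch blurs a sample image with a known point spread function and recovers it with Richardson-Lucy deconvolution from scikit-image.

```python
# Blur a test image with a known PSF, then restore it by deconvolution.
import numpy as np
from scipy.signal import convolve2d
from skimage import data, restoration

image = data.camera().astype(float) / 255.0   # sample grayscale image
psf = np.ones((5, 5)) / 25.0                  # simple box-blur point spread function

blurred = convolve2d(image, psf, mode="same", boundary="symm")
restored = restoration.richardson_lucy(blurred, psf, num_iter=30)

print("blurred MSE: ", float(((blurred - image) ** 2).mean()))
print("restored MSE:", float(((restored - image) ** 2).mean()))
```
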
40

Mazida, Fadhila, Riris Tiani, and Afidhatul Latifah. "Political Ecology on Instagram Digital Media as A Politeness Branding Image." E3S Web of Conferences 317 (2021): 05010. http://dx.doi.org/10.1051/e3sconf/202131705010.

Abstract:
The convergence of technology has a major impact on human life. Dependence on media forms a media ecology, and media ecology becomes a bridge in creating a group self-image. The main aim of this research was to analyze the forms of politeness in verbal messages on the digital media account @PandemicTalks. The Instagram account @PandemicTalks presents a new style of sharing information on digital media through graphic and visual content. The research orientation focused on visual verbal messages concerning COVID-19 and the related government policies. This qualitative research used both netnographic and descriptive phenomenological methods. The results show that the COVID-19 pandemic shaped a technologically literate society and made lines of communication more dynamically interactive. Language served as a means of social and political control in creating the politeness branding image. The politeness strategy used verbal wisdom markers, and the sharing function of politeness in visual verbal messages on the @PandemicTalks account was the most dominant. The branding image used a persuasive, euphemistic language style.
41

Pradhan, Yash, Chayan Khatry, Jignesh Lad, and Jaychand Upadhyay. "Vision to the Bi-Sionary." International Journal for Research in Applied Science and Engineering Technology 10, no. 8 (August 31, 2022): 1299–303. http://dx.doi.org/10.22214/ijraset.2022.46411.

Abstract:
Visual impairment is one of the biggest limitations for humanity, especially in this day and age when information is communicated largely by text messages (electronic and paper based) rather than by voice. Facial recognition is a category of biometric software that maps an individual’s facial features mathematically and stores the data as a faceprint; the software uses deep learning algorithms to compare a live capture or digital image with the stored faceprint in order to verify an individual’s identity. This project aims to develop a device to help people with visual impairment. In this project, we manufactured a device that converts an image’s text to speech. The basic outline is an embedded system that captures an image, extracts only the area of interest (i.e. the region of the image that contains text) and converts that text to speech. It is implemented using a Raspberry Pi 3 and a Raspberry Pi camera module. The project has two phases: Text to Speech and Facial Recognition. All modules for image handling and voice processing are available on the device, which can also play and pause the output while reading. The expectation is a low error rate, short processing time and low cost.
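A minimal sketch of the capture-OCR-speak pipeline described above, assuming a desktop webcam stands in for the Raspberry Pi camera module and that Tesseract OCR plus the pytesseract and pyttsx3 packages are installed; the thresholding step is only a rough stand-in for the region-of-interest extraction.

```python
# Capture a frame, OCR the text it contains, and speak the result aloud.
import cv2
import pytesseract
import pyttsx3

def read_text_aloud(camera_index: int = 0) -> str:
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not capture an image from the camera")

    # Grayscale + Otsu threshold as a rough stand-in for isolating the text region.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    text = pytesseract.image_to_string(binary).strip()
    if text:
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    return text

if __name__ == "__main__":
    print(read_text_aloud())
```
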
42

Pradhan, Yash, and Sonali Choudhary. "Analog Driven Robot using IoT and Machine Learning." International Journal for Research in Applied Science and Engineering Technology 10, no. 8 (August 31, 2022): 1510–14. http://dx.doi.org/10.22214/ijraset.2022.46434.

Abstract:
Visual impairment is one of the biggest limitations for humanity, especially in this day and age when information is communicated largely by text messages (electronic and paper based) rather than by voice. Facial recognition is a category of biometric software that maps an individual’s facial features mathematically and stores the data as a faceprint; the software uses deep learning algorithms to compare a live capture or digital image with the stored faceprint in order to verify an individual’s identity. This project aims to develop a device to help people with visual impairment. In this project, we manufactured a device that converts an image’s text to speech. The basic outline is an embedded system that captures an image, extracts only the area of interest (i.e. the region of the image that contains text) and converts that text to speech. It is implemented using a Raspberry Pi 3 and a Raspberry Pi camera module. The project has two phases: Text to Speech and Facial Recognition. All modules for image handling and voice processing are available on the device, which can also play and pause the output while reading. The expectation is a low error rate, short processing time and low cost.
43

Song, Kai-Tai, and Chen-Chu Chien. "Visual tracking of a moving person for a home robot." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 219, no. 4 (June 1, 2005): 259–69. http://dx.doi.org/10.1243/095965105x9597.

Abstract:
This paper presents a visual tracking system for a home robot to pursue a person. The system works by detecting a human face and tracking the person by controlling a two-degree-of-freedom robot head and the robot body. An image processing system has been developed to extract facial features using a complementary metal-oxide semiconductor (CMOS) web camera. An algorithm is proposed to recognize a human face using skin colour and the elliptical edge information of the face. A digital signal processing (DSP)-based motor control card is designed and implemented for robot motion control. The visual tracking control system has been integrated on a self-constructed prototype home robot. Experimental results show that the robot tracks a person in real time.
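The following sketch combines a heuristic skin-colour mask with an ellipse fit to localize a face, in the spirit of the cues described above; the colour thresholds and morphology are assumptions that would need tuning per camera, and the robot-head control loop is omitted.

```python
# Locate a face candidate from skin colour and an ellipse fit to its contour.
import cv2
import numpy as np

def locate_face(frame_bgr: np.ndarray):
    """Return the centre (x, y) of the largest ellipse fitted to skin-coloured pixels."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))   # heuristic skin range
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if len(c) >= 5]            # fitEllipse needs 5+ points
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), _axes, _angle = cv2.fitEllipse(largest)
    return int(cx), int(cy)   # a tracking controller would steer the head toward this point

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    print("face centre estimate:", locate_face(frame))
```
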
44

Malavika M., Afrin Dinusha J., A. Shenbagharaman, and B. Shunmugapriya. "Real-Time Gender and Age Detection Using Visual and Vocal Cues." International Research Journal on Advanced Science Hub 7, no. 02 (February 22, 2025): 94–102. https://doi.org/10.47392/irjash.2025.012.

Abstract:
This study describes a comprehensive multi-modal system that uses voice signals, real-time webcam feeds, and facial photos as three different input modalities for gender and age detection. Convolutional Neural Networks (CNNs) are used for image-based detection to extract facial traits, classify gender, and estimate age. OpenCV is used for face detection and image pre-processing, ensuring that the model can handle a range of lighting conditions, facial expressions, and occlusions. Deep Neural Networks (DNNs) are used for voice-based identification to evaluate speech features including pitch, tone, and rhythm, which serve as important markers of age and gender. Because it was trained on a wide range of voice sample datasets, the system is resilient to variations in ambient noise, accents, and languages. The webcam-based input continually detects gender and estimates age from live facial data using a real-time processing pipeline that combines CNN inference with video stream analysis, producing dynamic and accurate results even in difficult circumstances. Every modality is intended to work in concert with the others to provide a flexible and adaptable solution. A thorough evaluation of the system's performance reveals high accuracy for each of the three input types. This multi-input architecture is a flexible tool with real-world applications in a variety of industries, such as security, tailored marketing, human-computer interaction, and assistive technology for people with impairments. Subsequent research will centre on including other modalities, such as behavioural inputs, and on optimizing the model to achieve even faster processing. The integration of different inputs allows the system to be highly adaptive and expandable, delivering personalized user experiences. To provide a more complete picture of users, the proposed framework can potentially be expanded to include additional demographic predictions, such as ethnicity and emotion recognition. This work is an important advancement in the fields of artificial intelligence (AI) and human-computer interaction because it helps to build intelligent systems that can function effectively in real-time, multi-environment settings.
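A minimal sketch of the voice branch, assuming librosa and scikit-learn: pitch statistics and MFCCs are extracted from audio files and fed to a small neural-network classifier. The file names, labels and network size are placeholders rather than the paper's actual DNN.

```python
# Extract simple voice features (pitch, timbre) and train a tiny gender classifier.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def voice_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)          # rough pitch track
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # timbre / tone summary
    return np.concatenate([[np.nanmean(f0), np.nanstd(f0)],
                           mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical training set: one feature vector per labelled recording.
X = np.array([voice_features(p) for p in ["sample_female.wav", "sample_male.wav"]])
y = np.array(["female", "male"])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print(clf.predict([voice_features("unknown_speaker.wav")]))
```
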
45

Arnold, Taylor, and Lauren Tilton. "Distant viewing: analyzing large visual corpora." Digital Scholarship in the Humanities 34, Supplement_1 (March 16, 2019): i3—i16. http://dx.doi.org/10.1093/llc/fqz013.

Abstract:
In this article we establish a methodological and theoretical framework for the study of large collections of visual materials. Our framework, distant viewing, is distinguished from other approaches by making explicit the interpretive nature of extracting semantic metadata from images. In other words, one must ‘view’ visual materials before studying them. We illustrate the need for the interpretive process of viewing by simultaneously drawing on theories of visual semiotics, photography, and computer vision. Two illustrative applications of the distant viewing framework to our own research are drawn upon to explicate the potential and breadth of the approach. A study of television series shows how facial detection is used to compare the role of actors within the narrative arcs across two competing series. An analysis of the Farm Security Administration–Office of War Information corpus of documentary photography is used to establish how photographic style compared and differed amongst those photographers involved with the collection. We then aim to show how our framework engages with current methodological and theoretical conversations occurring within the digital humanities.
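As one concrete example of the kind of measurement such a framework relies on, the sketch below counts detected faces across sampled frames of a video with a stock OpenCV detector; the video path is a placeholder, and the authors' full pipeline is considerably richer.

```python
# Count detected faces in sampled frames of an episode as a simple "distant viewing" signal.
import cv2

def face_counts(video_path: str, sample_every: int = 24) -> list[int]:
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    counts, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:   # roughly once per second at 24 fps
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            counts.append(len(faces))
        frame_idx += 1
    cap.release()
    return counts

counts = face_counts("episode.mp4")
print("frames sampled:", len(counts),
      "mean faces per sampled frame:", sum(counts) / max(len(counts), 1))
```
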
46

Ning, Zihao. "Face Image Generation for Anime Characters based on Generative Adversarial Network." Theoretical and Natural Science 87, no. 1 (January 15, 2025): 166–72. https://doi.org/10.54254/2753-8818/2025.20348.

Abstract:
With the increasing demand for digital art, animation, and games, facial generation for anime characters has attracted growing research interest in recent years; the aim is to build models that automatically generate unique and high-quality character images. Thanks to the rapid advancement of deep learning techniques, particularly generative adversarial networks (GANs), GAN-based image generation methods have continuously achieved breakthroughs in generation quality and speed. Focusing on generating realistic anime face images, this paper proposes an anime character face image generation model based on GANs, which integrates Batch Normalization and Dropout to maintain strong stability and avoid overfitting. Comprehensive experiments show the efficacy of the proposed method, which achieves high diversity in facial features and styles while maintaining visual coherence.
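A minimal PyTorch sketch of a DCGAN-style generator and discriminator that use Batch Normalization and Dropout as the abstract describes; the layer sizes, 64x64 output resolution and activation choices are assumptions, not the paper's exact architecture.

```python
# DCGAN-style generator (with BatchNorm) and discriminator (with Dropout).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),   # 3x64x64 anime face
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True), nn.Dropout2d(0.3),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Dropout2d(0.3),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 1, 8), nn.Sigmoid(),   # real/fake probability
        )

    def forward(self, x):
        return self.net(x).view(-1)

z = torch.randn(4, 100)
fake = Generator()(z)                      # (4, 3, 64, 64) generated faces
print(fake.shape, Discriminator()(fake).shape)
```
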
47

Kenke, Ralph. "An art installation in action: Prototyping a virtual embodiment generator for the prevention of intrusive technology." Virtual Creativity 11, no. 2 (October 1, 2021): 237–52. http://dx.doi.org/10.1386/vcr_00050_3.

Abstract:
By engaging with network technologies on computers and digital devices equipped with sensors such as cameras, we are part of a telematic society that can connect with people in real time, whether near or far apart. While digital technology enables us to connect and maintain relationships with other people, we still rely on virtual embodiment to represent our persona in the digital world. There is a range of visual representations that function as a virtual embodiment in the digital world; often a profile image, an avatar or a graphic device is used as a proxy. In this practice-based research project, I explore how the risks of sharing biometric data can be prevented by prototyping an abstract virtual embodiment in an art installation. Computational procedures such as facial recognition, driven by artificial intelligence (AI) and machine learning, can result in intrusive experiences. If the act of being detected or identified by AI becomes intrusive, are there ways we can use abstract virtual embodiments to represent ourselves without being detected? This visual essay positions the practice-based research in an art context while documenting the conceptual process of an interactive prototype.
48

Saadon, Jordan R., Fan Yang, Ryan Burgert, Selma Mohammad, Theresa Gammel, Michael Sepe, Miriam Rafailovich, Charles B. Mikell, Pawel Polak, and Sima Mofakham. "Real-time emotion detection by quantitative facial motion analysis." PLOS ONE 18, no. 3 (March 10, 2023): e0282730. http://dx.doi.org/10.1371/journal.pone.0282730.

Abstract:
Background: Research into mood and emotion has often depended on slow and subjective self-report, highlighting a need for rapid, accurate, and objective assessment tools. Methods: To address this gap, we developed a method using digital image speckle correlation (DISC), which tracks subtle changes in facial expressions invisible to the naked eye, to assess emotions in real-time. We presented ten participants with visual stimuli triggering neutral, happy, and sad emotions and quantified their associated facial responses via detailed DISC analysis. Results: We identified key alterations in facial expression (facial maps) that reliably signal changes in mood state across all individuals based on these data. Furthermore, principal component analysis of these facial maps identified regions associated with happy and sad emotions. Compared with commercial deep learning solutions that use individual images to detect facial expressions and classify emotions, such as Amazon Rekognition, our DISC-based classifiers utilize frame-to-frame changes. Our data show that DISC-based classifiers deliver substantially better predictions, and they are inherently free of racial or gender bias. Limitations: Our sample size was limited, and participants were aware their faces were recorded on video. Despite this, our results remained consistent across individuals. Conclusions: We demonstrate that DISC-based facial analysis can be used to reliably identify an individual’s emotion and may provide a robust and economic modality for real-time, noninvasive clinical monitoring in the future.
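The frame-to-frame idea can be sketched with dense optical flow standing in for digital image speckle correlation, whose exact implementation the abstract does not detail: per-frame displacement maps are computed and then summarised with PCA. The video path is a placeholder.

```python
# Track small frame-to-frame facial motions and summarise the motion maps with PCA.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def motion_maps(video_path: str, max_frames: int = 50) -> np.ndarray:
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    maps = []
    while len(maps) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        maps.append(np.linalg.norm(flow, axis=2).ravel())   # per-pixel displacement magnitude
        prev_gray = gray
    cap.release()
    return np.array(maps)

maps = motion_maps("face_recording.mp4")
pca = PCA(n_components=3).fit(maps)
print("variance explained by the first components:", pca.explained_variance_ratio_)
```
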
49

Poonam, Srishti Srivastava, Sushma Srikanta Kurandwad, and Parvashree H. M. "Music Recommendation Based on Emotion Recognition." International Journal for Research in Applied Science and Engineering Technology 11, no. 11 (November 30, 2023): 1738–40. http://dx.doi.org/10.22214/ijraset.2023.56899.

Abstract:
In an increasingly digital and data-driven world, personalization of services and content recommendation is an important aspect of improving the user experience. In recent years, there has been a major evolution in music recommendation systems, focusing on user sentiment and song characteristics. The proposed system takes advantage of the idea that visual stimuli, such as the content of images, can be used to infer the user's emotional state. By processing facial expressions and other visual cues, the system detects and classifies emotions such as happiness, sadness, anger and others. This emotional data is then used to select music tracks that align with the detected emotional state. The recommendation framework uses the user's judgments to enable song recommendation on the basis of the user's preferences or the intensity of their current emotions. Playing music based on the mood of the user is an application of deep learning introduced here for listeners: the system determines the user's facial expression, which reflects their mood, and recommends music accordingly.
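A minimal sketch of the emotion-to-playlist step; the detect_emotion function is a hypothetical stand-in for the deep-learning facial-expression classifier the abstract describes, and the playlists are illustrative placeholders.

```python
# Map a detected facial emotion to a suitable playlist and pick tracks from it.
import random

PLAYLISTS = {
    "happy":   ["Upbeat Track 1", "Upbeat Track 2"],
    "sad":     ["Comfort Song 1", "Comfort Song 2"],
    "angry":   ["Calming Piece 1", "Calming Piece 2"],
    "neutral": ["Ambient Mix 1", "Ambient Mix 2"],
}

def detect_emotion(image_path: str) -> str:
    """Placeholder for a deep-learning facial-expression classifier."""
    return "happy"   # a real system would return the predicted emotion label

def recommend(image_path: str, n_tracks: int = 1) -> list[str]:
    emotion = detect_emotion(image_path)
    candidates = PLAYLISTS.get(emotion, PLAYLISTS["neutral"])
    return random.sample(candidates, k=min(n_tracks, len(candidates)))

print(recommend("webcam_frame.jpg"))
```
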
50

Barber, Tiffany E. "Ghostcatching and After Ghostcatching, Dances in the Dark." Dance Research Journal 47, no. 1 (April 2015): 45–67. http://dx.doi.org/10.1017/s0149767715000030.

Abstract:
In 1999, Bill T. Jones, in collaboration with digital artists Paul Kaiser and Shelley Eshkar, presented an installation at the intersection of dance, drawing, and digital imaging. Ghostcatching featured Jones's previously improvised movements recorded using motion capture technology. In 2010, Kaiser, Eshkar, and Marc Downie of the OpenEndedGroup revised Ghostcatching into a new piece titled After Ghostcatching, composed of unused sequences of Jones's movement and sound captured for Ghostcatching. This essay focuses on the extended relation between Ghostcatching and After Ghostcatching to track a shift from so-called identity politics to a discourse of post-racialism over a ten-year period in U.S. history. A consideration of various media—motion capture technology, digital art and imaging, and improvised, virtual dance—as well as formal analysis of each piece, highlights the political effects and visual implications of each work in a racially mediated world. In this article, I question the status of Jones's raced, sexed, and gendered body within neoliberal fantasies of post-racialism. In spite of the persistence of visible markers such as skin color that are mobilized to construct racial subjects, with the development of digital imaging and new visual technologies, to what degree is race actually visual? That is, how are race and the racialized body in motion subject to and determined by specific media, i.e., photography and digital art, improvised dance and choreographic form? This analysis of Ghostcatching and After Ghostcatching reveals how each piece tests the boundaries of choreographic form and digital imaging technologies as well as the category of race as inherently visual—a test that posits race as technology itself in visual, haptic, and spatial terms.
Стилі APA, Harvard, Vancouver, ISO та ін.