Journal articles on the topic "Face editing"

Cite a source in APA, MLA, Chicago, Harvard and many other citation styles

See the top 50 journal articles for research on the topic "Face editing".

Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, if it is included in the metadata.

Browse journal articles from many scientific fields and compile an accurate bibliography.

1

Liu, Shuang, Dan Li, Tianchi Cao, Yuke Sun, Yingsong Hu, and Junwen Ji. "GAN-Based Face Attribute Editing". IEEE Access 8 (2020): 34854–67. http://dx.doi.org/10.1109/access.2020.2974043.

2

Massey, Gregory D. "The Papers of Henry Laurens and Modern Historical Documentary Editing". Public Historian 27, no. 1 (2005): 39–60. http://dx.doi.org/10.1525/tph.2005.27.1.39.

Abstract:
In early 2003 the Papers of Henry Laurens published the sixteenth and last volume of its letterpress edition. Since its inception over forty years ago, the project has been a microcosm of the changes that have occurred in historical documentary editing. The project pioneered the use of computers to create more accurate and comprehensive indexes. It went further than most projects in adopting a literal transcription policy. Over the past twenty years, the Laurens Papers' difficulties in maintaining a staff and producing volumes in the face of budget constraints mirror the problems faced by other projects as federal support for documentary editing has decreased or remained stagnant.
3

Niu, Yongjie, Mingquan Zhou, and Zhan Li. "Disentangling the latent space of GANs for semantic face editing". PLOS ONE 18, no. 10 (October 26, 2023): e0293496. http://dx.doi.org/10.1371/journal.pone.0293496.

Abstract:
Disentanglement research is a critical and important issue in the field of image editing. In order to perform disentangled editing on images generated by generative models, this paper presents an unsupervised, model-agnostic, two-stage trained editing framework. This work addresses the problem of discovering interpretable, disentangled directions of edited image attributes in the latent space of generative models. This effort’s primary objective was to address the limitations discovered in previous research, mainly (a) the discovered editing directions are interpretable but significantly entangled, i.e., changes to one attribute affect the others and (b) Prior research has utilized direction discovery and direction disentanglement separately, and they can’t work synergistically. More specifically, this paper proposes a two-stage training method that discovers the editing direction with semantics, perturbs the dimension of the direction vector, adjusts it with a penalty mechanism, and makes the editing direction more disentangled. This allows easy distinguishable image editing, such as age and facial expressions in facial images. Experimentally compared to other methods, the proposed method outperforms them both qualitatively and quantitatively in terms of interpretability, disentanglement, and distinguishability of the generated images. The implementation of our method is available at https://github.com/ydniuyongjie/twoStageForFaceEdit.
4

Xu, Zhiliang, Xiyu Yu, Zhibin Hong, Zhen Zhu, Junyu Han, Jingtuo Liu, Errui Ding, and Xiang Bai. "FaceController: Controllable Attribute Editing for Face in the Wild". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3083–91. http://dx.doi.org/10.1609/aaai.v35i4.16417.

Abstract:
Face attribute editing aims to generate faces with one or multiple desired face attributes manipulated while other details are preserved. Unlike prior works such as GAN inversion, which has an expensive reverse mapping process, we propose a simple feed-forward network to generate high-fidelity manipulated faces. By simply employing existing and easily obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild. The proposed method can consequently be applied to various applications such as face swapping, face relighting, and makeup transfer. In our method, we decouple identity, expression, pose, and illumination by using 3D priors; separate texture and colors by using region-wise style codes. All the information is embedded into adversarial learning by our identity-style normalization module. Disentanglement losses are proposed to help the generator extract information independently from each attribute. Comprehensive quantitative and qualitative evaluations have been conducted. In a single framework, our method achieves the best or competitive scores on a variety of face applications.
5

Song, Linsen, Jie Cao, Lingxiao Song, Yibo Hu, and Ran He. "Geometry-Aware Face Completion and Editing". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2506–13. http://dx.doi.org/10.1609/aaai.v33i01.33012506.

Abstract:
Face completion is a challenging generation task because it requires generating visually pleasing new pixels that are semantically consistent with the unmasked face region. This paper proposes a geometry-aware Face Completion and Editing NETwork (FCENet) by systematically studying facial geometry from the unmasked region. Firstly, a facial geometry estimator is learned to estimate facial landmark heatmaps and parsing maps from the unmasked face image. Then, an encoder-decoder structure generator serves to complete a face image and disentangle its mask areas conditioned on both the masked face image and the estimated facial geometry images. Besides, since a low-rank property exists in manually labeled masks, a low-rank regularization term is imposed on the disentangled masks, enforcing our completion network to manage occlusion areas with various shapes and sizes. Furthermore, our network can generate diverse results from the same masked input by modifying the estimated facial geometry, which provides a flexible means to edit the completed face appearance. Extensive experimental results qualitatively and quantitatively demonstrate that our network is able to generate visually pleasing face completion results and edit face attributes as well.
6

Ambroziak, Tomasz. "Jak wydawać cyrylickie akta sejmikowe? Analiza rosyjskich, ukraińskich i białoruskich współczesnych zasad wydawniczych oraz wybranej praktyki edytorskiej. Część I". Miscellanea Historico-Iuridica 21, no. 1 (2022): 321–45. http://dx.doi.org/10.15290/mhi.2022.21.01.11.

Abstract:
One area in which significant progress has been made in the publication of sources in recent years is the editing of sejmik records. The fundamental question facing publishers of sources is the issue of editing principles. Publishers of Lithuanian sejmik records are in a peculiar situation, as they face the problem of publishing sources in Polish and Ruthenian in one volume. Meanwhile, in Polish editing practice, there are no strictly defined rules for publishing Cyrillic sources, and contemporary experience in this matter is quite modest. The team preparing the edition of the sejmik records of the Vilnius voivodship from 1566–1655 faced a similar problem. In the course of the work, in order to determine the principles of editing the text, the existing theoretical models and solutions adopted in publishing practice were analyzed and evaluated for their suitability for the tasks ahead. This article will analyze contemporary publishing instructions and methodological recommendations for the publication of Cyrillic sources formulated in Russian, Belarusian and Ukrainian editing, as well as selected source publications of Cyrillic documents from the territory of the Grand Duchy of Lithuania from the 16th–17th centuries. The conclusions of the analysis will be used to formulate proposals for specific solutions for the publishers of sejmik records.
7

Xiao, Zhujun, Jenna Cryan, Yuanshun Yao, Yi Hong Gordon Cheo, Yuanchao Shu, Stefan Saroiu, Ben Y. Zhao, and Haitao Zheng. ""My face, my rules": Enabling Personalized Protection Against Unacceptable Face Editing". Proceedings on Privacy Enhancing Technologies 2023, no. 3 (July 2023): 252–67. http://dx.doi.org/10.56553/popets-2023-0080.

Abstract:
Today, face editing is widely used to refine/alter photos in both professional and recreational settings. Yet it is also used to modify (and repost) existing online photos for cyberbullying. Our work considers an important open question: 'How can we support the collaborative use of face editing on social platforms while protecting against unacceptable edits and reposts by others?' This is challenging because, as our user study shows, users vary widely in their definition of what edits are (un)acceptable. Any global filter policy deployed by social platforms is unlikely to address the needs of all users, but hinders social interactions enabled by photo editing. Instead, we argue that face edit protection policies should be implemented by social platforms based on individual user preferences. When posting an original photo online, a user can choose to specify the types of face edits (dis)allowed on the photo. Social platforms use these per-photo edit policies to moderate future photo uploads, i.e., edited photos containing modifications that violate the original photo's policy are either blocked or shelved for user approval. Realizing this personalized protection, however, faces two immediate challenges: (1) how to accurately recognize specific modifications, if any, contained in a photo; and (2) how to associate an edited photo with its original photo (and thus the edit policy). We show that these challenges can be addressed by combining highly efficient hashing based image search and scalable semantic image comparison, and build a prototype protector (Alethia) covering nine edit types. Evaluations using IRB-approved user studies and data-driven experiments (on 839K face photos) show that Alethia accurately recognizes edited photos that violate user policies and induces a feeling of protection to study participants. This demonstrates the initial feasibility of personalized face edit protection. We also discuss current limitations and future directions to push the concept forward.
8

Upadhyay, Mukunda, Badri Raj Lamichhane, and Bal Krishna Nyaupane. "Facial Attribute Editing Using Generative Adversarial Network". Journal of Engineering and Sciences 2, no. 1 (December 6, 2023): 57–63. http://dx.doi.org/10.3126/jes2.v2i1.60394.

Abstract:
Facial attribute editing tasks have immense applications in today’s digital world, including virtual makeup, generating faces in the animation and gaming industry, social media face image enhancement and improving face recognition systems. This task can be achieved manually or automatically. Manual facial attribute editing, performed with software such as Adobe Photoshop, is a tedious and time-consuming process that requires an expert. However, automatic facial attribute editing, which can be performed within a few seconds, is achievable using encoder-decoder and deep learning-based generative models, such as conditional Generative Adversarial Networks. In our work, we use different attribute vectors as conditional information to generate desired target images, and encoder-decoder structures incorporate feature transfer units to choose and alter encoder-based features. Later, these encoder features are concatenated with the decoder features to strengthen the attribute editing ability of the model. For this research, we apply reconstruction loss to preserve details of a face image other than the target attributes. Adversarial loss is employed for visually realistic editing, and attribute manipulation loss is employed to ensure that the generated image possesses the correct attributes. Furthermore, we adopt the WGAN-GP loss function to improve training stability and reduce the mode collapse problem that often occurs in GANs. Experiments on the CelebA dataset show that this method produces visually realistic facial attribute edited images with PSNR/SSIM of 31.7/0.95 and an average attribute editing accuracy of 89.23% for 13 facial attributes including Bangs, Mustache, Bald, Bushy Eyebrows, Blond Hair, Eyeglasses, Black Hair, Brown Hair, Mouth Slightly Open, Male, No Beard, Pale Skin and Young.
9

Oktaviani, Reni, Siti Ansoriyah, and Etsa Purbarani. "Syllabus Development of Language Editing Courses Indonesia Based on Information and Communication Technology Integrated XXI Century". Aksis: Jurnal Pendidikan Bahasa dan Sastra Indonesia 6, no. 1 (June 29, 2022): 52–61. http://dx.doi.org/10.21009/aksis.060105.

Abstract:
The background of this research is the pandemic that changed learning patterns from face-to-face to online learning. This requires the syllabus that existed before the pandemic to be adapted to current conditions. In addition, this syllabus applies a project-based learning model and case studies. The purpose of this research is to explain and develop the syllabus of the Indonesian Language Editing course based on integrated Information and Communication Technology (ICT) in the XXI century. This research uses a research and development approach. The stages in this research are: first, explaining the process of developing the syllabus for the integrated ICT-based Indonesian Language Editing course in the XXI century; second, developing a syllabus for the XXI century integrated ICT-based Indonesian Language Editing course. Data collection techniques were obtained through tests and non-tests in the form of observations, questionnaires, and documentation, and then processed using content analysis techniques. Based on this, the development of an integrated ICT-based syllabus in the XXI century was applied to level IV students of the Indonesian Language and Literature Education Study Program to obtain effective learning outcomes in the Indonesian Language Editing course.
10

Hair, P. E. H. "On Editing Barbot". History in Africa 20 (1993): 53–59. http://dx.doi.org/10.2307/3171964.

Abstract:
In 1974 the very first issue of HA included an analysis of a small section of John Barbot's Description of the coasts of North and South Guinea. Since this represented the first fruits of a project to edit Barbot's writings on Guinea, it is appropriate that, now the completed edition is published, a review of the history of the editing, the methods and problems of the editors, and the problems that the consumer will face in using the edition, should also appear in HA. Why Barbot? When, twenty years ago, I decided that Barbot's account of Guinea should be edited, I already knew that it was partly unoriginal, and that in an ideal world priority would be given to editing the other, earlier, recognized compendium on Guinea, the relevant section of Dapper's account of all Africa. For although Dapper is also partly unoriginal, it has probably a wider range of new material than Barbot, not least the very detailed Kquoja account. Why then Barbot rather than Dapper? The answer is simple. I recognized the lack of critical editing of Guinea sources and felt I had to take the plunge somewhere. But whereas Dapper wrote in Dutch, a language of which I have only dictionary command, the earlier manuscript version of Barbot was in French, a language I could cope with. Dapper will have his turn. Adam Jones, one of the co-editors of “Barbot on Guinea,” having Dutch, has already published studies of Dapper's sources. Moreover, in the edition of Barbot we have taken the unusual step of including in the annotation fairly frequent references to the lines of transmission of information, for instance, not only noting the material Barbot borrowed from Dapper but also, where the material was not original to Dapper, the sources of his borrowing—thus doing part of the work of a critical edition of Dapper. In fact we have generally tried to make the edition of Barbot a starting point for the critical study of many other pre-1700 Guinea sources.
11

Pang, Min, Ligang He, Liqun Kuang, Min Chang, Zhiying He, and Xie Han. "Developing a Parametric 3D Face Model Editing Algorithm". IEEE Access 8 (2020): 167209–24. http://dx.doi.org/10.1109/access.2020.3022987.

12

Li, Xiaoyan, Tongliang Liu, Jiankang Deng, and Dacheng Tao. "Video Face Editing Using Temporal-Spatial-Smooth Warping". ACM Transactions on Intelligent Systems and Technology 7, no. 3 (April 2016): 1–28. http://dx.doi.org/10.1145/2819000.

13

Song, Xiaoxia, Mingwen Shao, Wangmeng Zuo, and Cunhe Li. "Face attribute editing based on generative adversarial networks". Signal, Image and Video Processing 14, no. 6 (February 27, 2020): 1217–25. http://dx.doi.org/10.1007/s11760-020-01660-0.

14

Hays, Richard, Trevor Gibbs, Julie Hunt, Barbara Jennings, and Ken Masters. "The changing face of MedEdPublish". MedEdPublish 11 (November 2, 2021): 1. http://dx.doi.org/10.12688/mep.17500.1.

Abstract:
MedEdPublish has come a long way since it was launched in 2016 by AMEE as an independent academic e-journal that supports scholarship in health professions education. Beginning as a relatively small, in-house publication on a web platform adapted for the purpose, we invited members of our community of practice to submit articles on any topic in health professions education, and encouraged a wide range of article types. All articles were published so long as they met editing criteria and were within scope. Reviews were welcomed from both members of our Review panel and the general readership, all published openly with contributors identified. Many articles attracted several reviews, responses and comments, creating interactive discussion threads that provided learning opportunities for all. The outcome surpassed our expectations, with over 500 articles submitted during 2020, beyond the capacity of our editing team and platform to achieve our promise of rapid publishing. We have now moved to a much larger and more powerful web platform, developed by F1000 Research and within the Taylor and Francis stable, the home of AMEE’s other journal, Medical Teacher. Most of our innovations are supported by the new platform and there is scope for further developments. We look forward to an exciting new phase of innovation, powered by the F1000 platform.
15

Park, Jungsik, Byung-Kuk Seo, and Jong-Il Park. "A Framework for Real-Time 3D Freeform Manipulation of Facial Video". Applied Sciences 9, no. 21 (November 4, 2019): 4707. http://dx.doi.org/10.3390/app9214707.

Abstract:
This paper proposes a framework that allows 3D freeform manipulation of a face in live video. Unlike existing approaches, the proposed framework provides natural 3D manipulation of a face without background distortion and interactive face editing by a user’s input, which leads to freeform manipulation without any limitation of range or shape. To achieve these features, a 3D morphable face model is fitted to a face region in a video frame and is deformed by the user’s input. The video frame is then mapped as a texture to the deformed model, and the model is rendered on the video frame. Because of the high computational cost, parallelization and acceleration schemes are also adopted for real-time performance. Performance evaluation and comparison results show that the proposed framework is promising for 3D face editing in live video.
16

Zhang, Yu, and Edmond C. Prakash. "Face to Face: Anthropometry-Based Interactive Face Shape Modeling Using Model Priors". International Journal of Computer Games Technology 2009 (2009): 1–15. http://dx.doi.org/10.1155/2009/573924.

Abstract:
This paper presents a new anthropometrics-based method for generating realistic, controllable face models. Our method establishes an intuitive and efficient interface to facilitate procedures for interactive 3D face modeling and editing. It takes 3D face scans as examples in order to exploit the variations presented in the real faces of individuals. The system automatically learns a model prior from the data-sets of example meshes of facial features using principal component analysis (PCA) and uses it to regulate the naturalness of synthesized faces. For each facial feature, we compute a set of anthropometric measurements to parameterize the example meshes into a measurement space. Using PCA coefficients as a compact shape representation, we formulate the face modeling problem in a scattered data interpolation framework which takes the user-specified anthropometric parameters as input. Solving the interpolation problem in a reduced subspace allows us to generate a natural face shape that satisfies the user-specified constraints. At runtime, the new face shape can be generated at an interactive rate. We demonstrate the utility of our method by presenting several applications, including analysis of facial features of subjects in different race groups, facial feature transfer, and adapting face models to a particular population group.
17

Sun, Jingxiang, Xuan Wang, Yichun Shi, Lizhen Wang, Jue Wang, and Yebin Liu. "IDE-3D". ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–10. http://dx.doi.org/10.1145/3550454.3555506.

Abstract:
Existing 3D-aware facial generation methods face a dilemma in quality versus editability: they either generate editable results in low resolution, or high-quality ones with no editing flexibility. In this work, we propose a new approach that brings the best of both worlds together. Our system consists of three major components: (1) a 3D-semantics-aware generative model that produces view-consistent, disentangled face images and semantic masks; (2) a hybrid GAN inversion approach that initializes the latent codes from the semantic and texture encoder, and further optimizes them for faithful reconstruction; and (3) a canonical editor that enables efficient manipulation of semantic masks in canonical view and produces high-quality editing results. Our approach is competent for many applications, e.g. free-view face drawing, editing and style control. Both quantitative and qualitative results show that our method reaches the state-of-the-art in terms of photorealism, faithfulness and efficiency.
18

Lumbantoruan, Gortap, Marlyna Infryanty Hutapea, Jamaluddin Jamaluddin, Emma Rosinta Simarmata, Eviyanti Novita Purba, Eva Julia Gunawati Harianja, Resianta Perangin-angin et al. "PELATIHAN VIDEO RECORDING DAN EDITING VIDEO PADA SMK SWASTA GELORA JAYA NUSANTARA MEDAN". Jurnal Pengabdian Pada Masyarakat METHABDI 1, no. 1 (June 30, 2021): 1–4. http://dx.doi.org/10.46880/methabdi.vol1no1.pp1-4.

Abstract:
The purpose of community service activities is to implement the “Tri Dharma” of Higher Education as well as to contribute ideas and transfer technology to teaching staff at SMK Gelora Jaya Nusantara Medan. This service activity was carried out for 2 days, with Video Recording and Video Editing Training materials. The topics given in this training are making video editing media and online learning content. The material given is the use of Filmora X software in video editing, and video recording techniques. This topic is very much needed in order to equip teachers in preparing and delivering subject matter during this COVID-19 pandemic. This topic was deliberately chosen considering that currently teachers are having difficulties in delivering subject matter face-to-face.
19

Ju, Yixuan, Jianhai Zhang, Xiaoyang Mao, and Jiayi Xu. "Adaptive semantic attribute decoupling for precise face image editing". Visual Computer 37, no. 9-11 (July 1, 2021): 2907–18. http://dx.doi.org/10.1007/s00371-021-02198-z.

20

Liu, Kanglin, Gaofeng Cao, Fei Zhou, Bozhi Liu, Jiang Duan, and Guoping Qiu. "Towards Disentangling Latent Space for Unsupervised Semantic Face Editing". IEEE Transactions on Image Processing 31 (2022): 1475–89. http://dx.doi.org/10.1109/tip.2022.3142527.

21

Gwin, Louis. "Prospective Reporters Face Writing/Editing Tests at Many Dailies". Newspaper Research Journal 9, no. 2 (January 1988): 101–11. http://dx.doi.org/10.1177/073953298800900210.

Abstract:
This national study confirms a documented trend that daily newspapers are using writing and editing tests to evaluate prospective reporters. It finds that nearly 45% of responding newspapers administer such tests and that the likelihood of testing decreases as circulation size increases. Those managing editors who do test consider test results a key factor in reaching a hiring decision on both experienced and inexperienced candidates for reporting jobs.
22

Deng, Qiyao, Qi Li, Jie Cao, Yunfan Liu, and Zhenan Sun. "Controllable Multi-Attribute Editing of High-Resolution Face Images". IEEE Transactions on Information Forensics and Security 16 (2021): 1410–23. http://dx.doi.org/10.1109/tifs.2020.3033184.

23

Guo, Ming, Feng Xu, Shunfei Wang, Zhibo Wang, Ming Lu, Xiufen Cui, and Xiao Ling. "Synthesis, Style Editing, and Animation of 3D Cartoon Face". Tsinghua Science and Technology 29, no. 2 (April 2024): 506–16. http://dx.doi.org/10.26599/tst.2023.9010028.

24

Baranowski, Andreas M., and H. Hecht. "The Auditory Kuleshov Effect: Multisensory Integration in Movie Editing". Perception 46, no. 5 (December 5, 2016): 624–31. http://dx.doi.org/10.1177/0301006616682754.

Abstract:
Almost a hundred years ago, the Russian filmmaker Lev Kuleshov conducted his now famous editing experiment in which different objects were added to a given film scene featuring a neutral face. It is said that the audience interpreted the unchanged facial expression as a function of the added object (e.g., an added soup made the face express hunger). This interaction effect has been dubbed “Kuleshov effect.” In the current study, we explored the role of sound in the evaluation of facial expressions in films. Thirty participants watched different clips of faces that were intercut with neutral scenes, featuring either happy music, sad music, or no music at all. This was crossed with the facial expressions of happy, sad, or neutral. We found that the music significantly influenced participants’ emotional judgments of facial expression. Thus, the intersensory effects of music are more specific than previously thought. They alter the evaluation of film scenes and can give meaning to ambiguous situations.
25

Gao, Tianle, Yaojun Li, and Yiwen Zhao. "CRISPR/Cas base-editing systems and their potential applications and prospects". Theoretical and Natural Science 20, no. 1 (December 20, 2023): 135–40. http://dx.doi.org/10.54254/2753-8818/20/20230739.

Abstract:
CRISPR-Cas9 and its derivatives, such as the cytosine base editor, adenine base editor and prime editing, are an important topic of research today. In recent years, with continuous development and updates by researchers, editing systems have been able to achieve various modifications of target genes, for example base substitution and the insertion and deletion of short gene fragments, with wide applications in plants, animals and other organisms. However, these editing systems derived from the CRISPR/Cas9 system still face challenges in terms of editing efficiency and accuracy. This review describes the current status of research using editing systems in plant breeding, genetic disease treatment, and knockdown of specific genes by establishing animal models, analyzes the advantages and disadvantages of editing technologies, and looks forward to their development.
26

Zhang, Qian, Vikas Thamizharasan, and James Tompkin. "Learning physically based material and lighting decompositions for face editing". Computational Visual Media 10, no. 2 (January 3, 2024): 295–308. http://dx.doi.org/10.1007/s41095-022-0309-1.

Abstract:
Lighting is crucial for portrait photography, yet the complex interactions between the skin and incident light are expensive to model computationally in graphics and difficult to reconstruct analytically via computer vision. Alternatively, to allow fast and controllable reflectance and lighting editing, we developed a physically based decomposition through deep learned priors from path-traced portrait images. Previous approaches that used simplified material models or low-frequency or low-dynamic-range lighting struggled to model specular reflections or relight directly without intermediate decomposition. However, we estimate the surface normal, skin albedo and roughness, and high-frequency HDRI maps, and propose an architecture to estimate both diffuse and specular reflectance components. In our experiments, we show that this approach can represent the true appearance function more effectively than simpler baseline methods, leading to better generalization and higher-quality editing.
27

Leimkühler, Thomas, and George Drettakis. "FreeStyleGAN". ACM Transactions on Graphics 40, no. 6 (December 2021): 1–15. http://dx.doi.org/10.1145/3478513.3480538.

Abstract:
Current Generative Adversarial Networks (GANs) produce photorealistic renderings of portrait images. Embedding real images into the latent space of such models enables high-level image editing. While recent methods provide considerable semantic control over the (re-)generated images, they can only generate a limited set of viewpoints and cannot explicitly control the camera. Such 3D camera control is required for 3D virtual and mixed reality applications. In our solution, we use a few images of a face to perform 3D reconstruction, and we introduce the notion of the GAN camera manifold, the key element allowing us to precisely define the range of images that the GAN can reproduce in a stable manner. We train a small face-specific neural implicit representation network to map a captured face to this manifold and complement it with a warping scheme to obtain free-viewpoint novel-view synthesis. We show how our approach - due to its precise camera control - enables the integration of a pre-trained StyleGAN into standard 3D rendering pipelines, allowing e.g., stereo rendering or consistent insertion of faces in synthetic 3D environments. Our solution proposes the first truly free-viewpoint rendering of realistic faces at interactive rates, using only a small number of casual photos as input, while simultaneously allowing semantic editing capabilities, such as facial expression or lighting changes.
28

Kawabe, Akihisa, Ryuto Haga, Yoichi Tomioka, Jungpil Shin, and Yuichi Okuyama. "A Dynamic Ensemble Selection of Deepfake Detectors Specialized for Individual Face Parts". Electronics 12, no. 18 (September 18, 2023): 3932. http://dx.doi.org/10.3390/electronics12183932.

Abstract:
The development of deepfake technology, based on deep learning, has made it easier to create images of fake human faces that are indistinguishable from the real thing. Many deepfake methods and programs are publicly available and can be used maliciously, for example, by creating fake social media accounts with images of non-existent human faces. To prevent the misuse of such fake images, several deepfake detection methods have been proposed as a countermeasure and have proven capable of detecting deepfakes with high accuracy when the target deepfake model has been identified. However, the existing approaches are not robust to partial editing and/or occlusion caused by masks, glasses, or manual editing, all of which can lead to an unacceptable drop in accuracy. In this paper, we propose a novel deepfake detection approach based on a dynamic configuration of an ensemble model that consists of deepfake detectors. These deepfake detectors are based on convolutional neural networks (CNNs) and are specialized to detect deepfakes by focusing on individual parts of the face. We demonstrate that a dynamic selection of face parts and an ensemble of selected CNN models is effective at realizing highly accurate deepfake detection even from partly edited and occluded images.
29

Shapiro, Maura. "CRISPR-Cas9 examined for potential and challenges treating Duchenne Muscular Dystrophy". Scilight 2023, no. 8 (February 24, 2023): 081103. http://dx.doi.org/10.1063/10.0017423.

30

Yunes, Maria Cristina, Zimbábwe Osório-Santos, Marina A. G. von Keyserlingk, and Maria José Hötzel. "Gene Editing for Improved Animal Welfare and Production Traits in Cattle: Will This Technology Be Embraced or Rejected by the Public?" Sustainability 13, no. 9 (April 28, 2021): 4966. http://dx.doi.org/10.3390/su13094966.

Abstract:
Integrating technology into agricultural systems has gained considerable traction, particularly over the last half century. Agricultural systems that incorporate the public’s concerns regarding farm animal welfare are more likely to be socially accepted in the long term, a key but often forgotten component of sustainability. Gene editing is a tool that has received considerable attention in the last five years, given its potential capacity to improve farm animal health, welfare, and production efficiency. This study aimed to explore the attitudes of Brazilian citizens regarding the applications of gene editing in cattle that generate offspring without horns; are more resistant to heat; and have increased muscle tissue. Using a mixed-methods approach, we surveyed participants via face-to-face, using in-depth interviews (Study 1) and an online questionnaire containing closed-ended questions (Study 2). Overall, the acceptability of gene editing was low and in cases where support was given it was highly dependent on the type and purpose of the application proposed. Using gene editing to improve muscle tissue growth was viewed as less acceptable compared to using gene editing to reduce heat stress or to produce hornless cattle. Support declined when the application was perceived to harm animal welfare, to be profit motivated or to reinforce the status quo of intensive livestock systems. The acceptability of gene editing was reduced when perceptions of risks and benefits were viewed as unevenly or unfairly distributed among consumers, corporations, different types of farmers, and the animals. Interviewees did not consider gene editing a “natural” process, citing dissenting reasons such as the high degree of human interference and the acceleration of natural processes. Our findings raised several issues that may need to be addressed for gene editing to comply with the social pillar of sustainable agriculture.
31

Takemae, Yoshinao. "Experimental Evaluation of Video Editing Based on Participants' Gaze for Face-to-face Multiparty Conversations". Journal of the Institute of Image Information and Television Engineers 59, no. 12 (2005): 1822–29. http://dx.doi.org/10.3169/itej.59.1822.

32

Liu, Feng-Lin, Shu-Yu Chen, Yu-Kun Lai, Chunpeng Li, Yue-Ren Jiang, Hongbo Fu, and Lin Gao. "DeepFaceVideoEditing". ACM Transactions on Graphics 41, no. 4 (July 2022): 1–16. http://dx.doi.org/10.1145/3528223.3530056.

Abstract:
Sketches, which are simple and concise, have been used in recent deep image synthesis methods to allow intuitive generation and editing of facial images. However, it is nontrivial to extend such methods to video editing due to various challenges, ranging from appropriate manipulation propagation and fusion of multiple editing operations to ensure temporal coherence and visual quality. To address these issues, we propose a novel sketch-based facial video editing framework, in which we represent editing manipulations in latent space and propose specific propagation and fusion modules to generate high-quality video editing results based on StyleGAN3. Specifically, we first design an optimization approach to represent sketch editing manipulations by editing vectors, which are propagated to the whole video sequence using a proper strategy to cope with different editing needs. Specifically, input editing operations are classified into two categories: temporally consistent editing and temporally variant editing. The former (e.g., change of face shape) is applied to the whole video sequence directly, while the latter (e.g., change of facial expression or dynamics) is propagated with the guidance of expression or only affects adjacent frames in a given time window. Since users often perform different editing operations in multiple frames, we further present a region-aware fusion approach to fuse diverse editing effects. Our method supports video editing on facial structure and expression movement by sketch, which cannot be achieved by previous works. Both qualitative and quantitative evaluations show the superior editing ability of our system to existing and alternative solutions.
33

Xu, Sen-Zhe, Hao-Zhi Huang, Fang-Lue Zhang, and Song-Hai Zhang. "FaceShapeGene: A disentangled shape representation for flexible face image editing". Graphics and Visual Computing 4 (June 2021): 200023. http://dx.doi.org/10.1016/j.gvc.2021.200023.

34

Hou, Xianxu, Xiaokang Zhang, Hanbang Liang, Linlin Shen, Zhihui Lai, and Jun Wan. "GuidedStyle: Attribute knowledge guided style manipulation for semantic face editing". Neural Networks 145 (January 2022): 209–20. http://dx.doi.org/10.1016/j.neunet.2021.10.017.

35

Mao, Aihua, and Hengge Situ. "Image-Driven Automatic 3D Human Face Modeling and Editing Algorithm". Journal of Computer-Aided Design & Computer Graphics 31, no. 1 (2019): 17. http://dx.doi.org/10.3724/sp.j.1089.2019.17362.

36

Ning, Xin, Shaohui Xu, Weijun Li, and Shuai Nie. "FEGAN: Flexible and Efficient Face Editing With Pre-Trained Generator". IEEE Access 8 (2020): 65340–50. http://dx.doi.org/10.1109/access.2020.2985086.

37

Zhang, Yu, and Norman I. Badler. "Face modeling and editing with statistical local feature control models". International Journal of Imaging Systems and Technology 17, no. 6 (2007): 341–58. http://dx.doi.org/10.1002/ima.20127.

38

Aliari, Mohammad Amin, Andre Beauchamp, Tiberiu Popa, and Eric Paquette. "Face Editing Using Part‐Based Optimization of the Latent Space". Computer Graphics Forum 42, no. 2 (May 2023): 269–79. http://dx.doi.org/10.1111/cgf.14760.

39

Fontanini, Tomaso, Claudio Ferrari, Giuseppe Lisanti, Leonardo Galteri, Stefano Berretti, Massimo Bertozzi, and Andrea Prati. "FrankenMask: Manipulating semantic masks with transformers for face parts editing". Pattern Recognition Letters 176 (December 2023): 14–20. http://dx.doi.org/10.1016/j.patrec.2023.10.010.

40

Lin, Yu-Tzu, Cheng-Chih Wu, and Chiung-Fang Chiu. "The Use of Wiki in Teaching Programming". International Journal of Distance Education Technologies 16, no. 3 (July 2018): 18–45. http://dx.doi.org/10.4018/ijdet.2018070102.

Abstract:
This article explores the feasibility of employing cooperative program editing tools in teaching programming. A quasi-experimental study was conducted, in which the experimental group co-edited the programs with peers using the wiki. The control group co-edited the programs with peers using only the face-to-face approach. The findings show that the co-editing platform was effective in assisting collaborative learning of programming, especially for program implementation. By observing editing histories, students could compare programs and then reflect more deeply about programming. The use of the wiki history tool also helped to illuminate nonlinear and dynamic procedures utilized in programming. Students who engaged more in the collaborative programming or interacted more with partners on the wiki showed greater program implementation achievements. The major benefit of using the wiki was the enhanced ability to observe the dynamic programming procedure and to encounter programming conflicts, which contributed to the process of procedural knowledge acquisition and elaboration.
41

Zegeye, Workie Anley, Mesfin Tsegaw, Yingxin Zhang, and Liyong Cao. "CRISPR-Based Genome Editing: Advancements and Opportunities for Rice Improvement". International Journal of Molecular Sciences 23, no. 8 (April 18, 2022): 4454. http://dx.doi.org/10.3390/ijms23084454.

Abstract:
To increase the potentiality of crop production for future food security, new technologies for plant breeding are required, including genome editing technology—being one of the most promising. Genome editing with the CRISPR/Cas system has attracted researchers in the last decade as a safer and easier tool for genome editing in a variety of living organisms including rice. Genome editing has transformed agriculture by reducing biotic and abiotic stresses and increasing yield. Recently, genome editing technologies have been developed quickly in order to avoid the challenges that genetically modified crops face. Developing transgenic-free edited plants without introducing foreign DNA has received regulatory approval in a number of countries. Several ongoing efforts from various countries are rapidly expanding to adopt the innovations. This review covers the mechanisms of CRISPR/Cas9, comparisons of CRISPR/Cas9 with other gene-editing technologies—including newly emerged Cas variants—and focuses on CRISPR/Cas9-targeted genes for rice crop improvement. We have further highlighted CRISPR/Cas9 vector construction model design and different bioinformatics tools for target site selection.
42

Chen, Dongyue, Qiusheng Chen, Jianjun Wu, Xiaosheng Yu, and Tong Jia. "Face Swapping: Realistic Image Synthesis Based on Facial Landmarks Alignment". Mathematical Problems in Engineering 2019 (March 14, 2019): 1–11. http://dx.doi.org/10.1155/2019/8902701.

Abstract:
We propose an image-based face swapping algorithm, which can be used to replace the face in the reference image with the same facial shape and features as the input face. First, a face alignment is made based on a group of detected facial landmarks, so that the aligned input face and the reference face are consistent in size and posture. Secondly, an image warping algorithm based on triangulation is presented to adjust the reference face and its background according to the aligned input faces. In order to achieve more accurate face swapping, a face parsing algorithm is introduced to realize the accurate detection of the face-ROIs, and then the face-ROI in the reference image is replaced with the input face-ROI. Finally, a Poisson image editing algorithm is adopted to realize the boundary processing and color correction between the replacement region and the original background, and then the final face swapping result is obtained. In the experiments, we compare our method with other face swapping algorithms and make a qualitative and quantitative analysis to evaluate the reality and the fidelity of the replaced face. The analysis results show that our method has some advantages in the overall performance of swapping effect.
43

BJ, Sowmya, Meeradevi Meeradevi, and Seems Shedole. "Generative adversarial networks with attentional multimodal for human face synthesis". Indonesian Journal of Electrical Engineering and Computer Science 33, no. 2 (February 1, 2024): 1205. http://dx.doi.org/10.11591/ijeecs.v33.i2.pp1205-1215.

Abstract:
Face synthesis and editing has received increasing attention with the improvement of generative adversarial networks (GANs). The proposed attentional GAN-deep attentional multimodal similarity modal (AttnGAN-DAMSM) model focuses on generating high-resolution images by removing discriminator components and generating realistic images from textual descriptions. The attention model creates the attention map on the image and automatically retrieves the features to produce various sub-areas of the image. The DAMSM delivers fine-grained image-text matching loss to the generative networks. In this study, text phrases are first described and the model generates a photorealistic high-resolution image composed of features with high accuracy. Next, the model fine-tunes the selected features of face images, which is left to the control of the user. The result shows that the proposed AttnGAN-DAMSM model delivers performance metrics such as the structural similarity index measure (SSIM), feature similarity index measure (FSIM) and Fréchet inception distance (FID) using the CelebA and CUHK face sketch (CUFS) datasets. For the CelebFaces Attributes (CelebA) dataset, the SSIM achieves 78.82%, and for the CUFS dataset, the SSIM achieves 81.45%, which ensures accurate face synthesis and editing compared with existing methods such as GAN, SuperstarGAN and identity-sensitive GAN (IsGAN) models.
44

Le, Minh-Ha, and Niklas Carlsson. "StyleAdv: A Usable Privacy Framework Against Facial Recognition with Adversarial Image Editing". Proceedings on Privacy Enhancing Technologies 2024, no. 2 (April 2024): 106–23. http://dx.doi.org/10.56553/popets-2024-0043.

Abstract:
In this era of ubiquitous surveillance and online presence, protecting facial privacy has become a critical concern for individuals and society as a whole. Adversarial attacks have emerged as a promising solution to this problem, but current methods are limited in quality or are impractical for sensitive domains such as facial editing. This paper presents a novel adversarial image editing framework called StyleAdv, which leverages StyleGAN's latent spaces to generate powerful adversarial images, providing an effective tool against facial recognition systems. StyleAdv achieves high success rates by employing meaningful facial editing with StyleGAN while maintaining image quality, addressing a challenge faced by existing methods. To do so, the comprehensive framework integrates semantic editing, adversarial attacks, and face recognition systems, providing a cohesive and robust tool for privacy protection. We also introduce the "residual attack" strategy, using residual information to enhance attack success rates. Our evaluation offers insights into effective editing, discussing tradeoffs in latent spaces, optimal edits for our optimizer, and the impact of utilizing residual information. Our approach is transferable to state-of-the-art facial recognition systems, making it a versatile tool for privacy protection. In addition, we provide a user-friendly interface with multiple editing options to help users create effective adversarial images. Extensive experiments are used to provide insights and demonstrate that StyleAdv outperforms state-of-the-art methods in terms of both attack success rate and image quality. By providing a versatile tool for generating high-quality adversarial samples, StyleAdv can be used both to enhance individual users' privacy and to stimulate advances in adversarial attack and defense research.
45

Zhao, Long, Fangda Han, Xi Peng, Xun Zhang, Mubbasir Kapadia, Vladimir Pavlovic, and Dimitris N. Metaxas. "Cartoonish sketch-based face editing in videos using identity deformation transfer". Computers & Graphics 79 (April 2019): 58–68. http://dx.doi.org/10.1016/j.cag.2019.01.004.

46

Santram, Abishay. "CRISPR: POTENTIAL USES AND CHALLENGES". International Journal of Advanced Research 11, no. 11 (November 30, 2023): 1094–99. http://dx.doi.org/10.21474/ijar01/17915.

Abstract:
CRISPR-Cas is a genome editing tool that has the potential to revolutionize the medical world. As a cheap and simple alternative to other gene editing technologies, it has grabbed scientists' attention and is being actively studied. Its uses range from the removal of disorders caused by genetic mutations to developing rapid diagnostic tests, as well as a potential cure for cancer. This paper aims to understand CRISPR's potential along with the challenges it may face, while providing a brief explanation of its mechanism.
47

He, Libo, Zhenping Qiang, Xiaofeng Shao, Hong Lin, Meijiao Wang, and Fei Dai. "Research on High-Resolution Face Image Inpainting Method Based on StyleGAN". Electronics 11, no. 10 (May 19, 2022): 1620. http://dx.doi.org/10.3390/electronics11101620.

Abstract:
In face image recognition and other related applications, incomplete facial imagery due to obscuring factors during acquisition represents an issue that requires solving. Aimed at tackling this issue, the research surrounding face image completion has become an important topic in the field of image processing. Face image completion methods require the capability of capturing the semantics of facial expression. A deep learning network has been widely shown to bear this ability. However, for high-resolution face image completion, the network training of high-resolution image inpainting is difficult to converge, thus rendering high-resolution face image completion a difficult problem. Based on the study of the deep learning model of high-resolution face image generation, this paper proposes a high-resolution face inpainting method. First, our method extracts the latent vector of the face image to be repaired through ResNet, then inputs the latent vector to the pre-trained StyleGAN model to generate the face image. Next, it calculates the loss between the known part of the face image to be repaired and the corresponding part of the generated face imagery. Afterward, the latent vector is cut to generate a new face image iteratively until the number of iterations is reached. Finally, the Poisson fusion method is employed to process the last generated face image and the face image to be repaired in order to eliminate the difference in boundary color information of the repaired image. Through the comparison and analysis between two classical face completion methods in recent years on the CelebA-HQ data set, we discovered our method can achieve better completion results of 256*256 resolution face image completion. For 1024*1024 resolution face image restoration, we have also conducted a large number of experiments, which prove the effectiveness of our method. Our method can obtain a variety of repair results by editing the latent vector. In addition, our method can be successfully applied to face image editing, face image watermark clearing and other applications without the network training process of different masks in these applications.
48

Jiang, Jiao. "Application of gene editing technology to DNA digital data storage". Highlights in Science, Engineering and Technology 73 (November 29, 2023): 452–58. http://dx.doi.org/10.54097/hset.v73i.14051.

Abstract:
While the archival digital storage industry is approaching its physical limits, demand is increasing significantly, so alternatives are emerging. The modern world is in dire need of durable, scalable and economical alternative storage media. Deoxyribonucleic acid (DNA), a promising storage medium, offers superior information durability, capacity and energy consumption, making it a promising candidate for long-term data storage. However, the design and realization of DNA digital data storage face many problems, but gene editing technology, as a technology that makes modifications to genes directly from the molecular level, provides a breakthrough in solving these problems. In this paper, I show some methods for designing DNA digital data storage based on gene editing technology. The method utilizes gene editing technology to modify DNA molecules to improve their storage capacity and stability. At the same time, this paper also introduces the application cases of gene editing technology in DNA bio storage devices and looks forward to its future development.
49

Suprapto, Maghfiroh Agustinasari, Amira Wahyu Anditasari, Siti Kholija Sitompul, and Lestari Setyowati. "Undergraduate Students’ Perceptions towards the Process of Writing". Journal of English Language Teaching and Linguistics 7, no. 1 (April 15, 2022): 185. http://dx.doi.org/10.21462/jeltl.v7i1.765.

Abstract:
Writing process has been proven to be an effective approach to enhance students’ writing performance. The writing process consists of pre-writing, drafting, revising, and editing. The writing process aids the students in developing ideas while writing, but it was claimed that students also face challenges during the process. Therefore, this current study aimed to investigate the students’ perceptions of the writing process and the challenges. The subjects taken were the first-year students of the English Educational program at the State University of Malang, in which 31 students of IC Writing Class were chosen. The perceptions were investigated through a survey study by distributing online questionnaires via Google Form. The questionnaire was divided into four aspects: pre-writing, drafting, revising, and editing, which aimed at exploring the students’ perceptions during the writing process and its challenges. The findings of the study showed that the students experienced difficulties in the writing process. The most challenging stages they had were the pre-writing and revising stage. The students faced challenges in generating and structuring ideas, respectively. Meanwhile, the students claimed that drafting and editing were less challenging since they followed the preceding stage. Nonetheless, the students also believed that proofreading was a complex process in the editing stage. The results of this study will be beneficial for English teachers and future researchers to vary the teaching methodology and reflect a better analysis.
50

ZEB, Aqib, Shakeel AHMAD, Javaria TABBASUM, Zhonghua SHENG, and Peisong HU. "Rice grain yield and quality improvement via CRISPR/Cas9 system: an updated review". Notulae Botanicae Horti Agrobotanici Cluj-Napoca 50, no. 3 (September 12, 2022): 12388. http://dx.doi.org/10.15835/nbha50312388.

Abstract:
Rice (Oryza sativa L.) is an important staple food crop worldwide. To meet the growing nutritional requirements of the increasing population in the face of climate change, qualitative and quantitative traits of rice need to be improved. During recent years, genome editing has played a great role in the development of superior varieties of grain crops. Genome editing and speed breeding have improved the accuracy and pace of rice breeding. New breeding technologies including genome editing have been established in rice, expanding the potential for crop improvement. Over a decade, site-directed mutagenesis tools like Zinc Finger Nucleases (ZFN), Transcriptional activator-like Effector Nucleases (TALENs), and Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)/CRISPR-associated protein 9 (Cas9) System were used and have played a great role in rice yield and quality enhancement. In addition, most recently other genome editing techniques like prime editing and base editors have also been used for efficient genome editing in rice. Since rice is an excellent model system for functional studies due to its small genome and close synthetic relationships with other cereal crops, new genome-editing technologies continue to be developed for use in rice. Genomic alteration employing genome editing technologies (GETs) like CRISPR/Cas9 for reverse genetics has opened new avenues in agricultural sciences such as rice yield and grain quality improvement. Currently, CRISPR/Cas9 technology is widely used by researchers for genome editing to achieve the desired biological objectives, because of its simple targeting, easy-to-design, cost-effective, and versatile tool for precise and efficient plant genome editing. Over the past few years many genes related to rice grain quality and yield enhancement have been successfully edited via CRISPR/Cas9 technology method to cater to the growing demand for food worldwide. The effectiveness of these methods is being verified by the researchers and crop scientists worldwide. In this review we focus on genome-editing tools for rice improvement to address the progress made and provide examples of genome editing in rice. We also discuss safety concerns and methods for obtaining transgene-free crops.
