To view other types of publications on this topic, follow the link: Multi-domain image translation.

Journal articles on the topic "Multi-domain image translation"

Consult the top 44 journal articles for your research on the topic "Multi-domain image translation".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Shao, Mingwen, Youcai Zhang, Huan Liu, Chao Wang, Le Li, and Xun Shao. "DMDIT: Diverse multi-domain image-to-image translation." Knowledge-Based Systems 229 (October 2021): 107311. http://dx.doi.org/10.1016/j.knosys.2021.107311.

2

Liu, Huajun, Lei Chen, Haigang Sui, Qing Zhu, Dian Lei, and Shubo Liu. "Unsupervised multi-domain image translation with domain representation learning." Signal Processing: Image Communication 99 (November 2021): 116452. http://dx.doi.org/10.1016/j.image.2021.116452.

3

Cai, Naxin, Houjin Chen, Yanfeng Li, Yahui Peng, and Linqiang Guo. "Registration on DCE-MRI images via multi-domain image-to-image translation." Computerized Medical Imaging and Graphics 104 (March 2023): 102169. http://dx.doi.org/10.1016/j.compmedimag.2022.102169.

4

Xia, Weihao, Yujiu Yang, and Jing-Hao Xue. "Unsupervised multi-domain multimodal image-to-image translation with explicit domain-constrained disentanglement." Neural Networks 131 (November 2020): 50–63. http://dx.doi.org/10.1016/j.neunet.2020.07.023.

5

Shen, Yangyun, Runnan Huang, and Wenkai Huang. "GD-StarGAN: Multi-domain image-to-image translation in garment design." PLOS ONE 15, no. 4 (April 21, 2020): e0231719. http://dx.doi.org/10.1371/journal.pone.0231719.

6

Zhang, Yifei, Weipeng Li, Daling Wang, and Shi Feng. "Unsupervised Image Translation Using Multi-Scale Residual GAN." Mathematics 10, no. 22 (November 19, 2022): 4347. http://dx.doi.org/10.3390/math10224347.

Abstract:
Image translation is a classic problem of image processing and computer vision for transforming an image from one domain to another by learning the mapping between an input image and an output image. A novel Multi-scale Residual Generative Adversarial Network (MRGAN) based on unsupervised learning is proposed in this paper for transforming images between different domains using unpaired data. In the model, a dual-generator architecture is used to eliminate the dependence on paired training samples, and a multi-scale layered residual network is introduced in the generators to reduce the semantic loss of images in the process of encoding. The Wasserstein GAN architecture with gradient penalty (WGAN-GP) is employed in the discriminator to optimize the training process and speed up the network convergence. Comparative experiments on several image translation tasks over style transfers and object migrations show that the proposed MRGAN outperforms strong baseline models by large margins.
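Since the abstract above leans on a WGAN-GP discriminator, a minimal sketch of the gradient-penalty term may help readers unfamiliar with it. This is an illustrative PyTorch snippet, not the MRGAN authors' code; `critic` stands for any discriminator network.

```python
# Illustrative WGAN-GP gradient penalty (not the paper's implementation).
import torch

def gradient_penalty(critic, real, fake):
    batch = real.size(0)
    # Random per-sample interpolation between real and generated images.
    eps = torch.rand(batch, 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grads = grads.reshape(batch, -1)
    # Penalise deviation of the gradient norm from 1 (soft Lipschitz constraint).
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```

A typical critic objective would then be `critic(fake).mean() - critic(real).mean() + lam * gradient_penalty(...)`, with `lam` conventionally around 10.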
7

Xu, Wenju, and Guanghui Wang. "A Domain Gap Aware Generative Adversarial Network for Multi-Domain Image Translation." IEEE Transactions on Image Processing 31 (2022): 72–84. http://dx.doi.org/10.1109/tip.2021.3125266.

8

Komatsu, Rina, and Tad Gonsalves. "Multi-CartoonGAN with Conditional Adaptive Instance-Layer Normalization for Conditional Artistic Face Translation." AI 3, no. 1 (January 24, 2022): 37–52. http://dx.doi.org/10.3390/ai3010003.

Abstract:
In CycleGAN, an image-to-image translation architecture was established without the use of paired datasets by employing both adversarial and cycle consistency loss. The success of CycleGAN was followed by numerous studies that proposed new translation models. For example, StarGAN works as a multi-domain translation model based on a single generator–discriminator pair, while U-GAT-IT aims to close the large face-to-anime translation gap by adapting its original normalization to the process. However, constructing robust and conditional translation models requires tradeoffs when the computational costs of training on graphic processing units (GPUs) are considered. This is because, if designers attempt to implement conditional models with complex convolutional neural network (CNN) layers and normalization functions, the GPUs will need to secure large amounts of memory when the model begins training. This study aims to resolve this tradeoff issue via the development of Multi-CartoonGAN, which is an improved CartoonGAN architecture that can output conditional translated images and adapt to large feature gap translations between the source and target domains. To accomplish this, Multi-CartoonGAN reduces the computational cost by using a pretrained VGGNet to calculate the consistency loss instead of reusing the generator. Additionally, we report on the development of the conditional adaptive layer-instance normalization (CAdaLIN) process for use with our model to make it robust to unique feature translations. We performed extensive experiments using Multi-CartoonGAN to translate real-world face images into three different artistic styles: portrait, anime, and caricature. An analysis of the visualized translated images and GPU computation comparison shows that our model is capable of performing translations with unique style features that follow the conditional inputs and at a reduced GPU computational cost during training.
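For readers who have not met adaptive layer-instance normalization, the sketch below shows the general shape of such a layer in PyTorch. It is a generic AdaLIN-style block with a conditioning input, not the paper's CAdaLIN; the linear conditioning head and the initial value of rho are assumptions.

```python
# Generic AdaLIN-style normalization with a conditioning vector (illustrative only).
import torch
import torch.nn as nn

class AdaLIN(nn.Module):
    def __init__(self, num_features, cond_dim, eps=1e-5):
        super().__init__()
        self.eps = eps
        # rho blends instance-norm and layer-norm statistics per channel.
        self.rho = nn.Parameter(torch.full((1, num_features, 1, 1), 0.9))
        # gamma/beta are predicted from the condition (e.g., a style or domain code).
        self.fc = nn.Linear(cond_dim, num_features * 2)

    def forward(self, x, cond):
        in_mean = x.mean(dim=[2, 3], keepdim=True)
        in_var = x.var(dim=[2, 3], keepdim=True, unbiased=False)
        ln_mean = x.mean(dim=[1, 2, 3], keepdim=True)
        ln_var = x.var(dim=[1, 2, 3], keepdim=True, unbiased=False)
        x_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        x_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        x_hat = self.rho * x_in + (1.0 - self.rho) * x_ln
        gamma, beta = self.fc(cond).chunk(2, dim=1)
        return x_hat * gamma.unsqueeze(-1).unsqueeze(-1) + beta.unsqueeze(-1).unsqueeze(-1)
```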
9

Feng, Long, Guohua Geng, Qihang Li, Yi Jiang, Zhan Li, and Kang Li. "CRPGAN: Learning image-to-image translation of two unpaired images by cross-attention mechanism and parallelization strategy." PLOS ONE 18, no. 1 (January 6, 2023): e0280073. http://dx.doi.org/10.1371/journal.pone.0280073.

Abstract:
Unsupervised image-to-image translation (UI2I) tasks aim to find a mapping between the source and the target domains from unpaired training data. Previous methods cannot effectively capture the differences between the source and the target domain at different scales; they often lead to poor quality of the generated images, with noise, distortion, and other artifacts that do not match human visual perception, and have high time complexity. To address this problem, we propose a multi-scale training structure and a progressive growth generator method to solve the UI2I task. Our method refines the generated images from global structures to local details by continuously adding new convolution blocks, and shares the network parameters across different scales as well as within the same scale of the network. Finally, we propose a new Cross-CBAM mechanism (CRCBAM), which uses a multi-layer spatial attention and channel attention cross structure to generate more refined style images. Experiments on our collected Opera Face dataset, and on the other open datasets Summer↔Winter, Horse↔Zebra, and Photo↔Van Gogh, show that the proposed algorithm is superior to other state-of-the-art algorithms.
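The Cross-CBAM mechanism builds on the standard CBAM idea of channel attention followed by spatial attention. Below is a minimal, generic CBAM-style block for orientation; the reduction ratio and kernel size are conventional defaults, not values taken from this paper.

```python
# Minimal CBAM-style attention block (channel attention, then spatial attention).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=[2, 3]))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```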
10

Tao, Rentuo, Ziqiang Li, Renshuai Tao, and Bin Li. "ResAttr-GAN: Unpaired Deep Residual Attributes Learning for Multi-Domain Face Image Translation." IEEE Access 7 (2019): 132594–608. http://dx.doi.org/10.1109/access.2019.2941272.

11

Gómez, Jose L., Gabriel Villalonga, and Antonio M. López. "Co-Training for Deep Object Detection: Comparing Single-Modal and Multi-Modal Approaches." Sensors 21, no. 9 (May 4, 2021): 3185. http://dx.doi.org/10.3390/s21093185.

Abstract:
Top-performing computer vision models are powered by convolutional neural networks (CNNs). Training an accurate CNN highly depends on both the raw sensor data and their associated ground truth (GT). Collecting such GT is usually done through human labeling, which is time-consuming and does not scale as we wish. This data-labeling bottleneck may be intensified due to domain shifts among image sensors, which could force per-sensor data labeling. In this paper, we focus on the use of co-training, a semi-supervised learning (SSL) method, for obtaining self-labeled object bounding boxes (BBs), i.e., the GT to train deep object detectors. In particular, we assess the goodness of multi-modal co-training by relying on two different views of an image, namely, appearance (RGB) and estimated depth (D). Moreover, we compare appearance-based single-modal co-training with multi-modal. Our results suggest that in a standard SSL setting (no domain shift, a few human-labeled data) and under virtual-to-real domain shift (many virtual-world labeled data, no human-labeled data) multi-modal co-training outperforms single-modal. In the latter case, by performing GAN-based domain translation both co-training modalities are on par, at least when using an off-the-shelf depth estimation model not specifically trained on the translated images.
12

Xu, Xiaowe, Jiawei Zhang, Jinglan Liu, Yukun Ding, Tianchen Wang, Hailong Qiu, Haiyun Yuan, et al. "Multi-Cycle-Consistent Adversarial Networks for Edge Denoising of Computed Tomography Images." ACM Journal on Emerging Technologies in Computing Systems 17, no. 4 (July 19, 2021): 1–16. http://dx.doi.org/10.1145/3462328.

Abstract:
As one of the most commonly ordered imaging tests, the computed tomography (CT) scan comes with inevitable radiation exposure that increases cancer risk to patients. However, CT image quality is directly related to radiation dose, and thus it is desirable to obtain high-quality CT images with as little dose as possible. CT image denoising tries to obtain high-dose-like high-quality CT images (domain Y) from low-dose, low-quality CT images (domain X), which can be treated as an image-to-image translation task where the goal is to learn the transform between a source domain X (noisy images) and a target domain Y (clean images). Recently, the cycle-consistent adversarial denoising network (CCADN) has achieved state-of-the-art results by enforcing a cycle-consistent loss without the need for paired training data, since paired data are hard to collect due to patients’ interests and cardiac motion. However, out of concerns about patients’ privacy and data security, protocols typically require clinics to perform medical image processing tasks including CT image denoising locally (i.e., edge denoising). Therefore, the network models need to achieve high performance under various computation resource constraints including memory and performance. Our detailed analysis of CCADN raises a number of interesting questions that point to potential ways to further improve its performance using the same or even fewer computation resources. For example, if the noise is large, leading to a significant difference between domain X and domain Y, can we bridge X and Y with an intermediate domain Z such that both the denoising process between X and Z and that between Z and Y are easier to learn? As such intermediate domains lead to multiple cycles, how do we best enforce cycle-consistency? Driven by these questions, we propose a multi-cycle-consistent adversarial network (MCCAN) that builds intermediate domains and enforces both local and global cycle-consistency for edge denoising of CT images. The global cycle-consistency couples all generators together to model the whole denoising process, whereas the local cycle-consistency imposes effective supervision on the process between adjacent domains. Experiments show that both local and global cycle-consistency are important for the success of MCCAN, which outperforms CCADN in terms of denoising quality with slightly less computation resource consumption.
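To make the local/global cycle-consistency distinction concrete, here is a schematic PyTorch sketch with one intermediate domain Z between noisy X and clean Y. The generator names and the L1 reconstruction losses are illustrative assumptions, not the MCCAN implementation.

```python
# Schematic local and global cycle-consistency terms with an intermediate domain Z.
import torch.nn.functional as F

def multi_cycle_losses(x, g_xz, g_zy, g_yz, g_zx):
    z = g_xz(x)          # X -> Z (partially denoised)
    y = g_zy(z)          # Z -> Y (clean)
    # Local cycles supervise each pair of adjacent domains.
    local = F.l1_loss(g_zx(z), x) + F.l1_loss(g_yz(y), z)
    # The global cycle couples all generators: X -> Z -> Y -> Z -> X.
    global_ = F.l1_loss(g_zx(g_yz(y)), x)
    return local, global_
```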
13

Liu, Wenjie, Wenkai Zhang, Xian Sun, and Zhi Guo. "Unsupervised Cross-Scene Aerial Image Segmentation via Spectral Space Transferring and Pseudo-Label Revising." Remote Sensing 15, no. 5 (February 22, 2023): 1207. http://dx.doi.org/10.3390/rs15051207.

Abstract:
Unsupervised domain adaptation (UDA) is essential since manually labeling pixel-level annotations is time-consuming and expensive. Since the domain discrepancies have not been well solved, existing UDA approaches yield poor performance compared with supervised learning approaches. In this paper, we propose a novel sequential learning network (SLNet) for unsupervised cross-scene aerial image segmentation. The whole system is decoupled into two sequential parts—the image translation model and the segmentation adaptation model. Specifically, we introduce the spectral space transferring (SST) approach to narrow the visual discrepancy. The high-frequency components between the source images and the translated images can be transferred in the Fourier spectral space for better preserving the important identity and fine-grained details. To further alleviate the distribution discrepancy, an efficient pseudo-label revising (PLR) approach was developed to guide pseudo-label learning via entropy minimization. Without additional parameters, the entropy map works as the adaptive threshold, constantly revising the pseudo labels for the target domain. Furthermore, numerous experiments for single-category and multi-category UDA segmentation demonstrate that our SLNet is the state of the art.
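The spectral space transferring step can be pictured as swapping frequency bands between the source image and its translated counterpart. The NumPy sketch below illustrates the idea for a single-channel image; the square low-frequency mask and its radius are assumptions rather than the paper's exact formulation.

```python
# Illustrative swap of Fourier components between a source image and its translation.
import numpy as np

def transfer_high_freq(src, translated, radius=16):
    """Keep the translated image's low frequencies but re-use the source's
    high frequencies, so fine-grained details of the source are preserved.
    Assumes single-channel 2D arrays of equal shape."""
    F_src = np.fft.fftshift(np.fft.fft2(src))
    F_trg = np.fft.fftshift(np.fft.fft2(translated))
    h, w = src.shape
    cy, cx = h // 2, w // 2
    mask = np.zeros((h, w), dtype=bool)
    mask[cy - radius:cy + radius, cx - radius:cx + radius] = True  # low-frequency region
    F_out = np.where(mask, F_trg, F_src)  # low freq from translation, high freq from source
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_out)))
```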
14

Wang, Biao, Lingxuan Zhu, Xing Guo, Xiaobing Wang, and Jiaji Wu. "SDTGAN: Generation Adversarial Network for Spectral Domain Translation of Remote Sensing Images of the Earth Background Based on Shared Latent Domain." Remote Sensing 14, no. 6 (March 11, 2022): 1359. http://dx.doi.org/10.3390/rs14061359.

Abstract:
The synthesis of spectral remote sensing images of the Earth’s background is affected by various factors such as the atmosphere, illumination and terrain, which makes it difficult to simulate random disturbance and real textures. Based on the shared latent domain hypothesis and generative adversarial networks, this paper proposes the SDTGAN method to mine the correlation between spectra and directly generate target spectral remote sensing images of the Earth’s background according to the source spectral images. The introduction of a shared latent domain allows multi-spectral domains to connect to each other without the need to build a one-to-one model. Meanwhile, additional feature maps are introduced to fill in the lack of information in the spectrum and improve the geographic accuracy. Through supervised training with a paired dataset, cycle consistency loss, and perceptual loss, the uniqueness of the output result is guaranteed. Finally, the experiments on the Fengyun satellite observation data show that the proposed SDTGAN method performs better than the baseline models in remote sensing image spectrum translation.
15

Sun, Pengyu, Miaole Hou, Shuqiang Lyu, Wanfu Wang, Shuyang Li, Jincheng Mao, and Songnian Li. "Enhancement and Restoration of Scratched Murals Based on Hyperspectral Imaging—A Case Study of Murals in the Baoguang Hall of Qutan Temple, Qinghai, China." Sensors 22, no. 24 (December 13, 2022): 9780. http://dx.doi.org/10.3390/s22249780.

Abstract:
Environmental changes and human activities have caused serious degradation of murals around the world. Scratches are one of the most common issues in these damaged murals. We propose a new method for virtually enhancing and removing scratches from murals, which can provide an auxiliary reference and support for actual restoration. First, principal component analysis (PCA) was performed on the hyperspectral data of a mural after reflectance correction, and high-pass filtering was performed on the selected first principal component image. Principal component fusion was used to replace the original first principal component with the high-pass filtered first principal component image, which was then inverse PCA transformed with the other original principal component images to obtain an enhanced hyperspectral image. The linear information in the mural was therefore enhanced, and the differences between the scratches and background improved. Second, the enhanced hyperspectral image of the mural was synthesized as a true colour image and converted to the HSV colour space. The light brightness component of the image was estimated using the multi-scale Gaussian function and corrected with a 2D gamma function, thus solving the problem of localised darkness in the murals. Finally, the enhanced mural images were applied as input to the pretrained triplet domain translation network model. The local branches in the translation network perform overall noise smoothing and colour recovery of the mural, while the partial nonlocal block is used to extract the information from the scratches. The mapping process was learned in the hidden space for virtual removal of the scratches. In addition, we added a Butterworth high-pass filter at the end of the network to generate the final restoration result of the mural with a clearer visual effect and richer high-frequency information. We verified and validated these methods for murals in the Baoguang Hall of Qutan Temple. The results show that the proposed method outperforms the restoration results of the total variation (TV) model, curvature-driven diffusion (CDD) model, and Criminisi algorithm. Moreover, the proposed combined method produces better recovery results and improves the visual richness, readability, and artistic expression of the murals compared with direct recovery using a triplet domain translation network.
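The illumination-correction step (multi-scale Gaussian brightness estimation followed by a 2D gamma map) is a fairly standard recipe; a rough sketch for the V channel is given below. The Gaussian scales and the base of the gamma function are conventional choices for this kind of adaptive correction, not values reported for this work.

```python
# Rough sketch of multi-scale Gaussian illumination estimation + 2D gamma correction.
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(v, sigmas=(15, 80, 250)):
    """v: 8-bit V (brightness) channel of an HSV image."""
    v = v.astype(np.float64) / 255.0
    # Multi-scale estimate of the illumination component.
    illum = np.mean([gaussian_filter(v, s) for s in sigmas], axis=0)
    m = illum.mean()
    # 2D gamma map: brighten dark regions, leave well-lit regions mostly untouched.
    gamma = np.power(0.5, (m - illum) / m)
    return np.clip(np.power(v, gamma) * 255.0, 0, 255).astype(np.uint8)
```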
16

Mizginov, V. A., V. V. Kniaz, and N. A. Fomin. "A METHOD FOR SYNTHESIZING THERMAL IMAGES USING GAN MULTI-LAYERED APPROACH." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIV-2/W1-2021 (April 15, 2021): 155–62. http://dx.doi.org/10.5194/isprs-archives-xliv-2-w1-2021-155-2021.

Abstract:
The active development of neural network technologies and optoelectronic systems has led to the introduction of computer vision technologies in various fields of science and technology. Deep learning made it possible to solve complex problems that a person had not been able to solve before. The use of multi-spectral optical systems has significantly expanded the field of application of video systems. Tasks such as image recognition, object re-identification, and video surveillance require high accuracy, speed and reliability. These qualities are provided by algorithms based on deep convolutional neural networks. However, they require large databases of multi-spectral images of various objects to achieve state-of-the-art results. While large and varied databases of color images of different objects are widely available in the public domain, similar databases of thermal images are either not available or represent only a small number of object types. The quality of three-dimensional modeling for the thermal imaging spectral range remains at an insufficient level for solving a number of important tasks, which require high precision and reliability. The realistic synthesis of thermal images is especially important due to the complexity and high cost of obtaining real data. This paper is focused on the development of a method for synthesizing thermal images based on generative adversarial neural networks. We developed an algorithm for multi-spectral image-to-image translation. We made changes to the original GAN architecture and to the loss function, and we present a new learning approach. For this, we prepared a special training dataset including about 2000 image tensors. The evaluation of the results obtained showed that the proposed method can be used to expand the available databases of thermal images.
17

Zeng, Wei, and Mingbo Zhao. "High-Resolution Tiled Clothes Generation from a Model." AATCC Journal of Research 8, no. 1_suppl (September 2021): 97–104. http://dx.doi.org/10.14504/ajr.8.s1.13.

Abstract:
Many image translation methods based on conditional generative adversarial networks can transform images from one domain to another, but the results of many methods are at a low resolution. We present a modified pix2pixHD model, which generates high-resolution tiled clothing from a model wearing clothes. We chose a single Markovian discriminator instead of a multi-scale discriminator for faster training, added a perceptual loss term, and improved the feature matching loss. Deeper feature maps have lower weights when calculating losses. A dataset was specifically built for this improved model, which contains over 20,000 pairs of high-quality tiled clothing images. The experimental results demonstrate the feasibility of our improved method, which can be extended to other fields.
18

Yang, Yuanbo, Qunbo Lv, Baoyu Zhu, Xuefu Sui, Yu Zhang, and Zheng Tan. "One-Sided Unsupervised Image Dehazing Network Based on Feature Fusion and Multi-Scale Skip Connection." Applied Sciences 12, no. 23 (December 2, 2022): 12366. http://dx.doi.org/10.3390/app122312366.

Abstract:
Haze and mist caused by air quality, weather, and other factors can reduce the clarity and contrast of images captured by cameras, which limits the applications of automatic driving, satellite remote sensing, traffic monitoring, etc. Therefore, the study of image dehazing is of great significance. Most existing unsupervised image-dehazing algorithms rely on a priori knowledge and simplified atmospheric scattering models, but the physical causes of haze in the real world are complex, resulting in inaccurate atmospheric scattering models that affect the dehazing effect. Unsupervised generative adversarial networks can be used for image-dehazing algorithm research; however, due to the information inequality between haze and haze-free images, the existing bi-directional mapping domain translation model often used in unsupervised generative adversarial networks is not suitable for image-dehazing tasks, and it also does not make good use of extracted features, which results in distortion, loss of image details, and poor retention of image features in the haze-free images. To address these problems, this paper proposes an end-to-end one-sided unsupervised image-dehazing network based on a generative adversarial network that directly learns the mapping between haze and haze-free images. The proposed feature-fusion module and multi-scale skip connection based on residual network consider the loss of feature information caused by convolution operation and the fusion of different scale features, and achieve adaptive fusion between low-level features and high-level features, to better preserve the features of the original image. Meanwhile, multiple loss functions are used to train the network, where the adversarial loss ensures that the network generates more realistic images and the contrastive loss ensures a meaningful one-sided mapping from the haze image to the haze-free image, resulting in haze-free images with good quantitative metrics and visual effects. The experiments demonstrate that, compared with existing dehazing algorithms, our method achieved better quantitative metrics and better visual effects on both synthetic haze image datasets and real-world haze image datasets.
19

Van den Broeck, Wouter A. J., Toon Goedemé, and Maarten Loopmans. "Multiclass Land Cover Mapping from Historical Orthophotos Using Domain Adaptation and Spatio-Temporal Transfer Learning." Remote Sensing 14, no. 23 (November 22, 2022): 5911. http://dx.doi.org/10.3390/rs14235911.

Abstract:
Historical land cover (LC) maps are an essential instrument for studying long-term spatio-temporal changes of the landscape. However, manual labelling on low-quality monochromatic historical orthophotos for semantic segmentation (pixel-level classification) is particularly challenging and time consuming. Therefore, this paper proposes a methodology for the automated extraction of very-high-resolution (VHR) multi-class LC maps from historical orthophotos under the absence of target-specific ground truth annotations. The methodology builds on recent evolutions in deep learning, leveraging domain adaptation and transfer learning. First, an unpaired image-to-image (I2I) translation between a source domain (recent RGB image of high quality, annotations available) and the target domain (historical monochromatic image of low quality, no annotations available) is learned using a conditional generative adversarial network (GAN). Second, a state-of-the-art fully convolutional network (FCN) for semantic segmentation is pre-trained on a large annotated RGB earth observation (EO) dataset that is converted to the target domain using the I2I function. Third, the FCN is fine-tuned using self-annotated data on a recent RGB orthophoto of the study area under consideration, after conversion using again the I2I function. The methodology is tested on a new custom dataset: the ‘Sagalassos historical land cover dataset’, which consists of three historical monochromatic orthophotos (1971, 1981, 1992) and one recent RGB orthophoto (2015) of VHR (0.3–0.84 m GSD) all capturing the same greater area around Sagalassos archaeological site (Turkey), and corresponding manually created annotations (2.7 km² per orthophoto) distinguishing 14 different LC classes. Furthermore, a comprehensive overview of open-source annotated EO datasets for multiclass semantic segmentation is provided, based on which an appropriate pretraining dataset can be selected. Results indicate that the proposed methodology is effective, increasing the mean intersection over union by 27.2% when using domain adaptation, and by 13.0% when using domain pretraining, and that transferring weights from a model pretrained on a dataset closer to the target domain is preferred.
20

Lu, Chien-Yu, Min-Xin Xue, Chia-Che Chang, Che-Rung Lee, and Li Su. "Play as You Like: Timbre-Enhanced Multi-Modal Music Style Transfer." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1061–68. http://dx.doi.org/10.1609/aaai.v33i01.33011061.

Abstract:
Style transfer of polyphonic music recordings is a challenging task when considering the modeling of diverse, imaginative, and reasonable music pieces in the style different from their original one. To achieve this, learning stable multi-modal representations for both domain-variant (i.e., style) and domain-invariant (i.e., content) information of music in an unsupervised manner is critical. In this paper, we propose an unsupervised music style transfer method without the need for parallel data. Besides, to characterize the multi-modal distribution of music pieces, we employ the Multi-modal Unsupervised Image-to-Image Translation (MUNIT) framework in the proposed system. This allows one to generate diverse outputs from the learned latent distributions representing contents and styles. Moreover, to better capture the granularity of sound, such as the perceptual dimensions of timbre and the nuance in instrument-specific performance, cognitively plausible features including mel-frequency cepstral coefficients (MFCC), spectral difference, and spectral envelope, are combined with the widely-used mel-spectrogram into a timbre-enhanced multi-channel input representation. The Relativistic average Generative Adversarial Networks (RaGAN) is also utilized to achieve fast convergence and high stability. We conduct experiments on bilateral style transfer tasks among three different genres, namely piano solo, guitar solo, and string quartet. Results demonstrate the advantages of the proposed method in music style transfer with improved sound quality and in allowing users to manipulate the output.
21

Guo, Hongjun, and Lili Chen. "An Image Similarity Invariant Feature Extraction Method Based on Radon Transform." International Journal of Circuits, Systems and Signal Processing 15 (April 8, 2021): 288–96. http://dx.doi.org/10.46300/9106.2021.15.33.

Abstract:
With the advancement of computer technology, image recognition has become more and more widely applied, and feature extraction is a core problem of image recognition. Image recognition classifies the processed image and identifies the category it belongs to. By selecting the feature to be extracted, it measures the necessary parameters and classifies according to the result. For better recognition, it needs to conduct structural analysis and image description of the entire image and enhance image understanding through multi-object structural relationships. The essence of the Radon transform is to reconstruct the original N-dimensional image in N-dimensional space from the (N-1)-dimensional projection data of the image in different directions. The Radon transform of an image extracts features in the transform domain and maps the image space to the parameter space. This paper studies the inverse problem of the Radon transform along an upper semicircular curve with compact support that is continuous on the support. When the center and radius of the circular curve vary within a certain range, the inversion problem has a unique solution provided the Radon transform along the upper semicircular curve is known. To further improve the robustness and discriminative power of the extracted features, and to remove the impact caused by image translation and proportional scaling, this paper proposes an image similarity-invariant feature extraction method based on the Radon transform, constructs Radon moment invariants, and demonstrates the descriptive capacity of the shape feature extraction method using the intra-class ratio. The experimental results show that the proposed method overcomes the flaws of cracks, overlapping, fuzziness and fake edges that occur when features are extracted alone; it can accurately extract the corners of a digital image, is robust to noise, and effectively improves the accuracy and continuity of complex image feature extraction.
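As a rough illustration of Radon-domain feature extraction, the snippet below computes a sinogram with scikit-image and reduces it to a per-angle energy descriptor. The simple energy normalisation is only a crude stand-in for the paper's Radon moment invariants.

```python
# Illustrative Radon-domain descriptor (not the paper's moment invariants).
import numpy as np
from skimage.transform import radon

def radon_features(image, n_angles=180):
    """image: 2D grayscale array. Returns one energy value per projection angle."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image, theta=theta, circle=False)   # projections along each angle
    # Normalising by total energy gives crude invariance to uniform intensity scaling.
    sinogram = sinogram / (np.abs(sinogram).sum() + 1e-12)
    return (sinogram ** 2).sum(axis=0)
```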
22

Islam, Naeem Ul, and Jaebyung Park. "Face Attribute Modification Using Fine-Tuned Attribute-Modification Network." Electronics 9, no. 5 (April 30, 2020): 743. http://dx.doi.org/10.3390/electronics9050743.

Abstract:
Multi-domain image-to-image translation with the desired attributes is an important approach for modifying single or multiple attributes of a face image, but is still a challenging task in the computer vision field. Previous methods were based on either attribute-independent or attribute-dependent approaches. The attribute-independent approach, in which the modification is performed in the latent representation, has performance limitations because it requires paired data for changing the desired attributes. In contrast, the attribute-dependent approach is effective because it can modify the required features while maintaining the information in the given image. However, the attribute-dependent approach is sensitive to attribute modifications performed while preserving the face identity, and requires a careful model design for generating high-quality results. To address this problem, we propose a fine-tuned attribute modification network (FTAMN). The FTAMN comprises a single generator and two discriminators. The discriminators use the modified image in two configurations with the binary attributes to fine tune the generator such that the generator can generate high-quality attribute-modification results. Experimental results obtained using the CelebA dataset verify the feasibility and effectiveness of the proposed FTAMN for editing multiple facial attributes while preserving the other details.
23

Zhang, Jianfu, Yuanyuan Huang, Yaoyi Li, Weijie Zhao, and Liqing Zhang. "Multi-Attribute Transfer via Disentangled Representation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9195–202. http://dx.doi.org/10.1609/aaai.v33i01.33019195.

Abstract:
Recent studies show significant progress in the image-to-image translation task, especially facilitated by Generative Adversarial Networks. They can synthesize highly realistic images and alter the attribute labels for the images. However, these works employ attribute vectors to specify the target domain, which diminishes image-level attribute diversity. In this paper, we propose a novel model formulating disentangled representations by projecting images to latent units, grouped feature channels of a Convolutional Neural Network, to disassemble the information between different attributes. Thanks to the disentangled representation, we can transfer attributes according to the attribute labels and moreover retain the diversity beyond the labels, namely, the styles inside each image. This is achieved by specifying some attributes and swapping the corresponding latent units to “swap” the attributes’ appearance, or applying channel-wise interpolation to blend different attributes. To verify the motivation of our proposed model, we train and evaluate our model on the face dataset CelebA. Furthermore, the evaluation on another facial expression dataset, RaFD, demonstrates the generalizability of our proposed model.
24

Sommervold, Oscar, Michele Gazzea, and Reza Arghandeh. "A Survey on SAR and Optical Satellite Image Registration." Remote Sensing 15, no. 3 (February 3, 2023): 850. http://dx.doi.org/10.3390/rs15030850.

Abstract:
After decades of research, automatic synthetic aperture radar (SAR)-optical registration remains an unsolved problem. SAR and optical satellites utilize different imaging mechanisms, resulting in imagery with dissimilar heterogeneous characteristics. Transforming and translating these characteristics into a shared domain has been the main challenge in SAR-optical matching for many years. Combining the two sensors will improve the quality of existing and future remote sensing applications across multiple industries. Several approaches have emerged as promising candidates in the search for combining SAR and optical imagery. In addition, recent research has indicated that machine learning-based approaches have great potential for filling the information gap posed by utilizing only one sensor type in Earth observation applications. However, several challenges remain, and combining them is a multi-step process where no one-size-fits-all approach is available. This article reviews traditional, state-of-the-art, and recent development trends in SAR-optical co-registration methods.
25

Wu, Haijun, Yijun Liu, Weikang Jiang, and Wenbo Lu. "A Fast Multipole Boundary Element Method for Three-Dimensional Half-Space Acoustic Wave Problems Over an Impedance Plane." International Journal of Computational Methods 12, no. 01 (January 23, 2015): 1350090. http://dx.doi.org/10.1142/s0219876213500904.

Abstract:
A high-frequency fast multipole boundary element method (FMBEM) based on the Burton–Miller formulation is proposed for three-dimensional acoustic wave problems over an infinite plane with impedance boundary conditions. The Green's function for the sound propagation over an impedance plane is employed explicitly in the boundary integral equation (BIE). To deal with the integral appearing in the half-space Green's function, the downward pass in the FMBEM is divided into two parts to compute contributions from the real domain to the real and image domains, respectively. A piecewise analytical method is proposed to compute the moment-to-local (M2L) translator from the real domain to the image domain accurately. An algorithm based on the multi-level tree structure is designed to compute the M2L translators efficiently. Correspondingly, the direct coefficient can also be computed efficiently by taking advantage of the algorithm of the efficient M2L. A flexible generalized minimal residual (fGMRES) is applied to accelerating the solution when the convergence is very slow. Numerical examples are presented to demonstrate the accuracy and efficiency of the developed FMBEM. Good solutions and high acceleration ratios compared with the conventional boundary element method clearly show the potential of the FMBEM for large-scale 3D acoustic wave problems over an infinite impedance plane which are of practical significance.
26

Kou, Rong, Bo Fang, Gang Chen, and Lizhe Wang. "Progressive Domain Adaptation for Change Detection Using Season-Varying Remote Sensing Images." Remote Sensing 12, no. 22 (November 20, 2020): 3815. http://dx.doi.org/10.3390/rs12223815.

Abstract:
The development of artificial intelligence technology has prompted an immense amount of research on improving the performance of change detection approaches. Existing deep learning-driven methods generally regard changes as a specific type of land cover, and try to identify them relying on the powerful expression capabilities of neural networks. However, in practice, different types of land cover changes are generally influenced by environmental factors at different degrees. Furthermore, seasonal variation-induced spectral differences seriously interfere with those of real changes in different land cover types. All these problems pose great challenges for season-varying change detection because the real and seasonal variation-induced changes are technically difficult to separate by a single end-to-end model. In this paper, by embedding a convolutional long short-term memory (ConvLSTM) network into a conditional generative adversarial network (cGAN), we develop a novel method, named progressive domain adaptation (PDA), for change detection using season-varying remote sensing images. In our idea, two cascaded modules, progressive translation and group discrimination, are introduced to progressively translate pre-event images from their own domain to the post-event one, where their seasonal features are consistent and their intrinsic land cover distribution features are retained. By training this hybrid multi-model framework with certain reference change maps, the seasonal variation-induced changes between paired images are effectively suppressed, and meanwhile the natural and human activity-caused changes are greatly emphasized. Extensive experiments on two types of season-varying change detection datasets and a comparison with other state-of-the-art methods verify the effectiveness and competitiveness of our proposed PDA.
27

Roberts, Mitchell, Orli Kehat, Michaela Gross, Nethanel Lasri, Gil Issachar, Erica Sappington VandeWeerd, Ziv Peremen, and Amir Geva. "TECHNOLOGY TRANSLATING THE RELATIONSHIP BETWEEN QUALITY OF LIFE AND MEMORY USING A NOVEL EEG TECHNOLOGY." Innovation in Aging 3, Supplement_1 (November 2019): S331—S332. http://dx.doi.org/10.1093/geroni/igz038.1207.

Abstract:
Existing research has postulated a relationship between cognition and quality of life (QoL). Components of QoL such as satisfaction with social support may be particularly influential in memory for those with comorbidities. Additional research is needed to characterize the relationship between memory and QoL domains. Findings are presented from a clinical trial using BNA memory scores to assess brain health. BNA uses EEG technology and machine learning to map networks of brain functioning including working memory. Participants were older adults living in The Villages, an active lifestyle community in Florida, between the ages of 55-85, from 8/30/2017-3/11/2019. Participants were stratified into 2 groups: healthy (no CNS/psychiatric conditions; n=158) and multi-morbid (>1 CNS and/or psychiatric conditions; n=106) and compared across memory and QoL indicators. Subjective QoL was measured by the WHOQOL-BREF across 4 domains (physical, psychological, social, environmental). Scores on QoL domains were divided into 3 levels (high-medium-low) and tested for their relationship to BNA memory scores using ANOVA. Results indicate a relationship between health status, subjective QoL and BNA memory scores. Healthy subjects who scored high in the psychological QoL domain had significantly higher memory scores [F(2,152)=4.30, p=.02]. In healthy subjects, satisfaction with social support (p=.001) had the strongest impact on memory for social QoL, while body image (p=.06) and concentration (p=.06) were the most salient predictors of psychological QoL and approached significance. Multi-morbid subjects who indicated high social ratings had higher memory scores (F(2,100)=3.75, p=.03), which relied heavily on satisfaction with social support (p=.003). Implications for policy and practice are discussed.
28

Monroe, Jonathan. "Urgent Matter." Konturen 8 (October 9, 2015): 8. http://dx.doi.org/10.5399/uo/konturen.8.0.3697.

Abstract:
Opening questions about “things” onto the bureaucratically-maintained, compartmentalized discursive, disciplinary claims of “philosophy,” “theory,” and “poetry,” “Urgent Matter” explores these three terms in relation to one another through attention to recent work by Giorgio Agamben, Jacques Rancière, the German-American poet Rosmarie Waldrop, and the German poet Ulf Stolterfoht, whose fachsprachen. Gedichte. I-IX (Lingos I-IX. Poems) Waldrop rendered into English in an award-winning translation. The difference between the "things" called "poetry" and "philosophy," as now institutionalized within the academy, is not epistemological, ontological, ahistorical, but a matter of linguistic domains, of so-called concrete "images" as the policed domain of the former and of "abstraction" as the policed domain of the latter. Challenging the binary logics that dominate language use in diverse discursive/disciplinary cultures, Waldrop’s linguistically self-referential, appositional procedures develop ways to use language that are neither linear, nor so much without direction, as multi-directional, offering complexes of adjacency, of asides, of digression, of errancy, of being “alongside,” in lieu of being “opposed to,” that constitute at once a poetics, an aesthetics, an ethics, and a politics. Elaborating a complementary understanding of poetry as “the most philosophic of all writing,” a medium of being “contemporary,” Waldrop and Stolterfoht question poetry’s purposes as one kind of language apparatus among others in the general economy. Whatever poetry might be, it aspires to be in their hands not a thing in itself but a form of self-questioning, of all discourses, all disciplines, that “thing” that binds “poetry” and “philosophy” together, as urgent matter, in continuing.
29

Kim, Hyunjong, Gyutaek Oh, Joon Beom Seo, Hye Jeon Hwang, Sang Min Lee, Jihye Yun, and Jong Chul Ye. "Multi-domain CT translation by a routable translation network." Physics in Medicine & Biology, September 26, 2022. http://dx.doi.org/10.1088/1361-6560/ac950e.

Abstract:
Objective. To unify the style of CT images from multiple sources, we propose a novel multi-domain image translation network to convert CT images from different scan parameters and manufacturers by simply changing a routing vector. Approach. Unlike the existing multi-domain translation techniques, our method is based on a shared encoder and a routable decoder architecture to maximize the expressivity and conditioning power of the network. Main results. Experimental results show that the proposed CT image conversion can minimize the variation of image characteristics caused by imaging parameters, reconstruction algorithms, and hardware designs. Quantitative results and clinical evaluation from radiologists also show that our method can provide accurate translation results. Significance. Quantitative evaluation of CT images from multi-site or longitudinal studies has been a difficult problem due to the image variation depending on CT scan parameters and manufacturers. The proposed method can be utilized to address this for the quantitative analysis of multi-domain CT images.
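The notion of a routable decoder can be pictured by letting a one-hot routing vector select per-domain affine parameters while the convolutional weights stay shared. The block below is a conceptual PyTorch sketch of that idea, not the architecture proposed in the paper.

```python
# Conceptual "routable" block: the routing vector picks per-domain affine parameters.
import torch
import torch.nn as nn

class RoutableBlock(nn.Module):
    def __init__(self, channels, num_domains):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        # One (gamma, beta) pair per target domain.
        self.gamma = nn.Parameter(torch.ones(num_domains, channels))
        self.beta = nn.Parameter(torch.zeros(num_domains, channels))

    def forward(self, x, route):          # route: (B, num_domains) one-hot vector
        h = self.norm(self.conv(x))
        g = (route @ self.gamma).unsqueeze(-1).unsqueeze(-1)
        b = (route @ self.beta).unsqueeze(-1).unsqueeze(-1)
        return torch.relu(g * h + b)
```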
30

Roy, Subhankar, Aliaksandr Siarohin, Enver Sangineto, Nicu Sebe, and Elisa Ricci. "TriGAN: image-to-image translation for multi-source domain adaptation." Machine Vision and Applications 32, no. 1 (January 2021). http://dx.doi.org/10.1007/s00138-020-01164-4.

Abstract:
Most domain adaptation methods consider the problem of transferring knowledge to the target domain from a single-source dataset. However, in practical applications, we typically have access to multiple sources. In this paper we propose the first approach for multi-source domain adaptation (MSDA) based on generative adversarial networks. Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style (characterized in terms of low-level features variations) and the content. For this reason, we propose to project the source image features onto a space where only the dependence from the content is kept, and then re-project this invariant representation onto the pixel space using the target domain and style. In this way, new labeled images can be generated which are used to train a final target classifier. We test our approach using common MSDA benchmarks, showing that it outperforms state-of-the-art methods.
31

Zhang, Xiaokang, Yuanlue Zhu, Wenting Chen, Wenshuang Liu, and Linlin Shen. "Gated SwitchGAN for multi-domain facial image translation." IEEE Transactions on Multimedia, 2021, 1. http://dx.doi.org/10.1109/tmm.2021.3074807.

32

Wang, Yuxi, Zhaoxiang Zhang, Wangli Hao, and Chunfeng Song. "Multi-Domain Image-to-Image Translation via a Unified Circular Framework." IEEE Transactions on Image Processing, 2020, 1. http://dx.doi.org/10.1109/tip.2020.3037528.

33

Chou, Chien-Hsing, Ping-Hsuan Han, Chia-Chun Chang, and Yi-Zeng Hsieh. "Garment Style Creator: Using StarGAN for Image-to-Image Translation of Multi-Domain Garments." IEEE MultiMedia, 2022, 1. http://dx.doi.org/10.1109/mmul.2021.3139760.

34

Zhang, Huaqi, Jie Liu, Pengyu Wang, Zekuan Yu, Weifan Liu, and Huang Chen. "Cross-Boosted Multi-Target Domain Adaptation for Multi-Modality Histopathology Image Translation and Segmentation." IEEE Journal of Biomedical and Health Informatics, 2022, 1. http://dx.doi.org/10.1109/jbhi.2022.3153793.

35

Chen, Yunfeng, Yalan Lin, Jinzhen Ding, Chuzhao Li, Yiming Zeng, Weifang Xie, and Jianlong Huang. "Multi-domain medical image translation generation for lung image classification based on Generative Adversarial Networks." Computer Methods and Programs in Biomedicine, November 2022, 107200. http://dx.doi.org/10.1016/j.cmpb.2022.107200.

36

"An Image Fusion Algorithm based on Modified Contourlet Transform." International Journal of Circuits, Systems and Signal Processing 14 (July 20, 2020). http://dx.doi.org/10.46300/9106.2020.14.46.

Abstract:
Multi-focus image fusion has established itself as a useful tool for reducing the amount of raw data, and it aims at overcoming imaging cameras’ finite depth of field by combining information from multiple images of the same scene. Most existing fusion algorithms use the method of multi-scale decompositions (MSD) to fuse the source images. MSD-based fusion algorithms provide much better performance than the conventional fusion methods. In an image fusion algorithm based on multi-scale decomposition, how to make full use of the characteristics of the coefficients to fuse images is a key problem. This paper proposes a modified contourlet transform (MCT) based on wavelets and nonsubsampled directional filter banks (NSDFB). The image is decomposed in the wavelet domain, and each highpass subband of the wavelets is further decomposed into multiple directional subbands by using NSDFB. The MCT has the important features of directionality and translation invariance. Furthermore, the MCT and a novel region energy strategy are exploited to perform the image fusion algorithm. Simulation results show that the proposed method can improve the fusion results visually and also improve the objective evaluation parameters.
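A toy version of multi-scale fusion with a region-energy rule is sketched below using an ordinary wavelet transform (PyWavelets) in place of the modified contourlet transform; the window size and the averaging of the approximation band are assumptions, not the paper's rule.

```python
# Toy multi-focus fusion in a wavelet domain with a region-energy selection rule.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse_wavelet(img_a, img_b, wavelet="db2"):
    (a_lo, a_hi), (b_lo, b_hi) = pywt.dwt2(img_a, wavelet), pywt.dwt2(img_b, wavelet)
    fused_lo = 0.5 * (a_lo + b_lo)                   # average the approximation band
    fused_hi = []
    for ca, cb in zip(a_hi, b_hi):                   # LH, HL, HH detail bands
        ea, eb = uniform_filter(ca ** 2, 3), uniform_filter(cb ** 2, 3)
        fused_hi.append(np.where(ea >= eb, ca, cb))  # keep the band with larger region energy
    return pywt.idwt2((fused_lo, tuple(fused_hi)), wavelet)
```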
37

Goswami, Mukesh M., Nidhi J. Dadiya, Suman Mitra, and Tanvi Goswami. "Multi-Script Text Detection from Image Using FRCNN." International Journal of Asian Language Processing 31, no. 02 (June 2021). http://dx.doi.org/10.1142/s2717554522500035.

Abstract:
Textual information is the most common way by which we can determine which text or texts we are looking for. In order to retrieve text from images, the first and foremost step is text detection. Text detection has a wide range of applications such as translation, smart car driving systems, information retrieval, indexing of multimedia archives, sign board reading, and countless others. Multilingual text detection from images adds an extra complication to a computer vision problem. As India is a multilingual country, multi-script texts can be found almost everywhere. A multi-script text differs in terms of formats, strokes, width, and height. Also, universal features for such an environment are unknown and difficult to determine. Therefore, detecting multi-script text from images is an important yet unsolved problem. In this work, we propose a Faster RCNN-based method for detecting English, Hindi, and Gujarati text from images. Faster RCNN is the state-of-the-art approach for object detection. As it works for objects of large size while text is of smaller size, the parameters are tuned to meet the objective of multi-script text detection. The dataset was created by collecting images, as there is no standard dataset available in the public domain that includes English, Gujarati, and Hindi texts.
38

Mariscal Harana, J., V. Vergani, C. Asher, R. Razavi, A. King, B. Ruijsink, and E. Puyol Anton. "Large-scale, multi-vendor, multi-protocol, quality-controlled analysis of clinical cine CMR using artificial intelligence." European Heart Journal - Cardiovascular Imaging 22, Supplement_2 (June 1, 2021). http://dx.doi.org/10.1093/ehjci/jeab090.046.

Abstract:
Abstract Funding Acknowledgements Type of funding sources: Public grant(s) – National budget only. Main funding source(s): Advancing Impact Award scheme of the EPSRC Impact Acceleration Account at King’s College London Background Artificial intelligence (AI) has the potential to facilitate the automation of CMR analysis for biomarker extraction. However, most AI algorithms are trained on a specific input domain (e.g., scanner vendor or hospital-tailored imaging protocol) and lack the robustness to perform optimally when applied to CMR data from other input domains. Purpose To develop and validate a robust CMR analysis tool for automatic segmentation and cardiac function analysis which achieves state-of-the-art performance for multi-vendor short-axis cine CMR images. Methods The current work is an extension of our previously published quality-controlled AI-based tool for cine CMR analysis [1]. We deployed an AI algorithm that is equipped to handle different image sizes and domains automatically - the ‘nnU-Net’ framework [2] - and retrained our tool using the UK Biobank (UKBB) cohort population (n = 4,872) and a large database of clinical CMR studies obtained from two NHS hospitals (n = 3,406). The NHS hospital data came from three different scanner types: Siemens Aera 1.5T (n = 1,419), Philips Achieva 1.5T and 3T (n = 1,160), and Philips Ingenia 1.5T (n = 827). The ‘nnU-net’ was used to segment both ventricles and the myocardium. The proposed method was evaluated on randomly selected test sets from UKBB (n = 488) and NHS (n = 331) and on two external publicly available databases of clinical CMRs acquired on Philips, Siemens, General Electric (GE), and Canon CMR scanners – ACDC (n = 100) [3] and M&Ms (n = 321) [4]. We calculated the Dice scores - which measure the overlap between manual and automatic segmentations - and compared manual vs AI-based measures of biventricular volumes and function. Results Table 1 shows that the Dice scores for the NHS, ACDC, and M&Ms scans are similar to those obtained in the highly controlled, single vendor and single field strength UKBB scans. Although our AI-based tool was only trained on CMR scans from two vendors (Philips and Siemens), it performs similarly in unseen vendors (GE and Canon). Furthermore, it achieves state-of-the-art performance in online segmentation challenges, without being specifically trained on these databases. Table 1 also shows good agreement between manual and automated clinical measures of ejection fraction and ventricular volume and mass. Conclusions We show that our proposed AI-based tool, which combines training on a large-scale multi-domain CMR database with a state-of-the-art AI algorithm, allows us to robustly deal with routine clinical data from multiple centres, vendors, and field strengths. This is a fundamental step for the clinical translation of AI algorithms. Moreover, our method yields a range of additional metrics of cardiac function (filling and ejection rates, regional wall motion, and strain) at no extra computational cost.
39

Tan, Daniel Stanley, Yong-Xiang Lin, and Kai-Lung Hua. "Incremental Learning of Multi-Domain Image-to-Image Translations." IEEE Transactions on Circuits and Systems for Video Technology, 2020, 1. http://dx.doi.org/10.1109/tcsvt.2020.3005311.

40

Guran, C. N. Alexandrina, Ronald Sladky, Sabrina Karl, Magdalena Boch, Elmar Laistler, Christian Windischberger, Ludwig Huber, and Claus Lamm. "Validation of a new coil array tailored for dog functional magnetic resonance imaging studies." eneuro, February 7, 2023, ENEURO.0083–22.2022. http://dx.doi.org/10.1523/eneuro.0083-22.2022.

Abstract:
Comparative neuroimaging allows for the identification of similarities and differences between species. It provides an important and promising avenue to answer questions about the evolutionary origins of the brain's organization, in terms of both structure and function. Dog fMRI has recently become one particularly promising and increasingly used approach to study brain function and coevolution. In dog neuroimaging, image acquisition has so far been mostly performed with coils originally developed for use in human MRI. Since such coils have been tailored to human anatomy, their sensitivity and data quality are likely not optimal for dog MRI. Therefore, we developed a multi-channel receive coil (K9 coil, read “canine”) tailored for high-resolution functional imaging in canines, optimized for dog cranial anatomy. In this paper we report structural (n = 9) as well as functional imaging data (resting-state, n = 6; simple visual paradigm, n = 9) collected with the K9 coil in comparison to reference data collected with a human knee coil. Our results show that the K9 coil significantly outperforms the human knee coil, improving the signal-to-noise ratio across the imaging modalities. We noted increases of roughly 45% signal-to-noise in the structural and functional domain. In terms of translation to functional fMRI data collected in a visual flickering checkerboard paradigm, group-level analyses show that the K9 coil performs better than the knee coil as well. These findings demonstrate how hardware improvements may be instrumental in driving data quality, and thus, quality of imaging results, for dog-human comparative neuroimaging. Significance Statement: Comparative neuroimaging is a powerful avenue to discover evolutionary mechanisms at the brain level. However, data quality is a major constraint in non-human functional magnetic resonance imaging. We describe a novel canine head coil for magnetic resonance imaging, designed specifically for dog cranial anatomy. Data quality performance and improvements over previously used human knee coils are described quantitatively. In brief, the canine coil improved signal quality substantially across both structural and functional imaging domains, with the strongest improvements noted on the cortical surface.
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Barua, Barun, Kangkana Bora, Anup Kr. Das, Gazi N. Ahmed, and Tashnin Rahman. "Stain color translation of multi-domain OSCC histopathology images using attention gated cGAN." Computerized Medical Imaging and Graphics, February 2023, 102202. http://dx.doi.org/10.1016/j.compmedimag.2023.102202.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Yang, Karren Dai, Anastasiya Belyaeva, Saradha Venkatachalapathy, Karthik Damodaran, Abigail Katcoff, Adityanarayanan Radhakrishnan, G. V. Shivashankar, and Caroline Uhler. "Multi-domain translation between single-cell imaging and sequencing data using autoencoders." Nature Communications 12, no. 1 (January 4, 2021). http://dx.doi.org/10.1038/s41467-020-20249-2.

Повний текст джерела
Анотація:
The development of single-cell methods for capturing different data modalities including imaging and sequencing has revolutionized our ability to identify heterogeneous cell states. Different data modalities provide different perspectives on a population of cells, and their integration is critical for studying cellular heterogeneity and its function. While various methods have been proposed to integrate different sequencing data modalities, coupling imaging and sequencing has been an open challenge. We here present an approach for integrating vastly different modalities by learning a probabilistic coupling between the different data modalities using autoencoders to map to a shared latent space. We validate this approach by integrating single-cell RNA-seq and chromatin images to identify distinct subpopulations of human naive CD4+ T-cells that are poised for activation. Collectively, our approach provides a framework to integrate and translate between data modalities that cannot yet be measured within the same cell for diverse applications in biomedical discovery.
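The abstract above describes mapping different single-cell modalities into a shared latent space with autoencoders. The PyTorch sketch below is a minimal, generic version of that idea (two modality-specific autoencoders with a common latent dimension and a simple moment-matching alignment penalty); the layer sizes, loss weighting, and alignment term are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

LATENT_DIM = 32

def autoencoder(input_dim: int):
    """One modality-specific encoder/decoder pair sharing a common latent size."""
    encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(), nn.Linear(256, LATENT_DIM))
    decoder = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(), nn.Linear(256, input_dim))
    return encoder, decoder

enc_img, dec_img = autoencoder(input_dim=1024)   # e.g., flattened chromatin-image features
enc_rna, dec_rna = autoencoder(input_dim=2000)   # e.g., gene-expression vectors

opt = torch.optim.Adam(
    list(enc_img.parameters()) + list(dec_img.parameters())
    + list(enc_rna.parameters()) + list(dec_rna.parameters()), lr=1e-3)
mse = nn.MSELoss()

# One hypothetical training step on mini-batches from each modality.
x_img = torch.randn(64, 1024)
x_rna = torch.randn(64, 2000)
z_img, z_rna = enc_img(x_img), enc_rna(x_rna)
recon = mse(dec_img(z_img), x_img) + mse(dec_rna(z_rna), x_rna)
# Crude alignment term: match the first two moments of the two latent distributions.
align = mse(z_img.mean(0), z_rna.mean(0)) + mse(z_img.std(0), z_rna.std(0))
loss = recon + 0.1 * align
opt.zero_grad(); loss.backward(); opt.step()
# Cross-modal translation then amounts to dec_rna(enc_img(x_img)) and vice versa.
```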
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Yadav, Ojasvi, Koustav Ghosal, Sebastian Lutz, and Aljosa Smolic. "Frequency-domain loss function for deep exposure correction of dark images." Signal, Image and Video Processing, May 15, 2021. http://dx.doi.org/10.1007/s11760-021-01915-4.

Повний текст джерела
Анотація:
We address the problem of exposure correction of dark, blurry and noisy images captured in low-light conditions in the wild. Classical image-denoising filters work well in the frequency space but are constrained by several factors such as the correct choice of thresholds and frequency estimates. On the other hand, traditional deep networks are trained end to end in the RGB space by formulating this task as an image translation problem. However, that is done without any explicit constraints on the inherent noise of the dark images and thus produces noisy and blurry outputs. To this end, we propose a DCT/FFT-based multi-scale loss function which, when combined with traditional losses, trains a network to translate the important features for visually pleasing output. Our loss function is end-to-end differentiable, scale-agnostic and generic; i.e., it can be applied to both RAW and JPEG images in most existing frameworks without additional overhead. Using this loss function, we report significant improvements over the state of the art using quantitative metrics and subjective tests.
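A frequency-domain loss of the general kind described above can be written compactly in PyTorch. The sketch below compares FFT magnitudes of prediction and target at a few scales and adds the result to an ordinary L1 term; the number of scales, the use of magnitude only, and the weighting are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def frequency_loss(pred: torch.Tensor, target: torch.Tensor, scales=(1, 2, 4)) -> torch.Tensor:
    """Multi-scale frequency-domain loss on FFT magnitudes of image batches (N, C, H, W)."""
    loss = pred.new_zeros(())
    for s in scales:
        p = F.avg_pool2d(pred, s) if s > 1 else pred
        t = F.avg_pool2d(target, s) if s > 1 else target
        # Compare magnitude spectra; this term is differentiable end to end.
        loss = loss + F.l1_loss(torch.abs(torch.fft.fft2(p)), torch.abs(torch.fft.fft2(t)))
    return loss / len(scales)

def total_loss(pred, target, freq_weight=0.1):
    """Traditional pixel-space L1 plus the frequency-domain term."""
    return F.l1_loss(pred, target) + freq_weight * frequency_loss(pred, target)

# Hypothetical usage with a batch of corrected dark images vs. well-exposed references.
pred = torch.rand(4, 3, 128, 128, requires_grad=True)
target = torch.rand(4, 3, 128, 128)
total_loss(pred, target).backward()
```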
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Broderick, Mick, Stuart Marshall Bender, and Tony McHugh. "Virtual Trauma: Prospects for Automediality." M/C Journal 21, no. 2 (April 25, 2018). http://dx.doi.org/10.5204/mcj.1390.

Повний текст джерела
Анотація:
Unlike some current discourse on automediality, this essay eschews most of the analysis concerning the adoption or modification of avatars to deliberately enhance, extend or distort the self. Rather than the automedial enabling of alternative, virtual selves modified by playful, confronting or disarming avatars, we concentrate instead on emerging efforts to present the self in hyper-realist, interactive modes. In doing so we ask, what is the relationship between traumatic forms of automediation and the affective impact on and response of the audience? We argue that, while on the one hand there are promising avenues for valuable individual and social engagements with traumatic forms of automediation, there is an overwhelming predominance of suffering as a theme in such virtual depictions, comingled with uncritically asserted promises of empathy, which are problematic as the technology assumes greater mainstream uptake.
As Smith and Watson note, embodiment is always a "translation" where the body is "dematerialized" in virtual representation ("Virtually" 78). Past scholarship has analysed the capacity of immersive realms, such as Second Life or online games, to highlight how users can modify their avatars in often spectacular, non-human forms. Critics of this mode of automediality note that users can adopt virtually any persona they like (racial, religious, gendered and sexual, human, animal or hybrid, and of any age), behaving as "identity tourists" while occupying virtual space or inhabiting online communities (Nakamura). Furthermore, recent work by Jaron Lanier, a key figure from the 1980s period of early Virtual Reality (VR) technology, has also explored so-called "homuncular flexibility", which describes the capacity for humans to seemingly adapt automatically to the control mechanisms of an avatar with multiple legs, other non-human appendages, or for two users to work in tandem to control a single avatar (Won et al.). But this article is concerned less with these single or multi-player online environments and the associated concerns over modifying interactive identities. We are principally interested in other automedial modes where the "auto" of autobiography is automated via Artificial Intelligences (AIs) to convincingly mimic human discourse as narrated life-histories.
We draw from case studies promoted by the 2017 season of ABC television's flagship science program, Catalyst, which opened with semi-regular host and biological engineer Dr Jordan Nguyen proclaiming in earnest, almost religious fervour: "I want to do something that has long been a dream. I want to create a copy of a human. An avatar. And it will have a life of its own in virtual reality." As the camera followed Nguyen's rapid pacing across real space he extolled: "Virtual reality, virtual human, they push the limits of the imagination and help us explore the impossible […] I want to create a virtual copy of a person. A digital addition to the family, using technology we have now."
The troubling implications of such rhetoric were stark and the next third of the program did little to allay such techno-scientific misgivings. Directed and produced by David Symonds, with Nguyen credited as co-developer and presenter, the episode "Meet the Avatars" immediately introduced scenarios where "volunteers" entered a pop-up inner-city virtual lab to experience VR for the first time.
The volunteers were shown on screen subjected to a range of experimental VR environments designed to elicit fear and/or adverse and disorienting responses such as vertigo, while the presenter and researchers from Sydney University constantly smirked and laughed at their participants' discomfort. We can only wonder what the ethics process was for both the ABC and university researchers involved in these broadcast experiments. There is little doubt that the participant/s experienced discomfort, if not distress, and that was televised to a national audience. Presenter Nguyen was also shown misleading volunteers on their way to the VR lab, when one asked "You're not going to chuck us out of a virtual plane are you?" to which Nguyen replied "I don't know what we're going to do yet," when it was next shown that they immediately underwent pre-programmed VR exposure scenarios, including a fear of falling exercise from atop a city skyscraper.
The sweat-inducing and heart rate-racing exposures to virtual plank walks high above a cityscape, or seeing subjects haptically viewing spiders crawl across their outstretched virtual hands, all elicited predictable responses, showcased as carnivalesque entertainment for the viewing audience. As we will see, this kind of trivialising of a virtual environment's capacity for immersion belies the serious use of the technology in a range of treatments for posttraumatic stress disorder (see Rizzo and Koenig; Rothbaum, Rizzo and Difede).
Figure 1: Nguyen and researchers enjoying themselves as their volunteers undergo VR exposure
Defining Automediality
In their pioneering 2008 work, Automedialität: Subjektkonstitution in Schrift, Bild und neuen Medien, Jörg Dünne and Christian Moser coined the term "automediality" to problematise the production, application and distribution of autobiographic modes across various media and genres—from literary texts to audiovisual media and from traditional expression to inter/transmedia and remediated formats. The concept of automediality was deployed to counter the conventional critical exclusion of analysis of the materiality/technology used for an autobiographical purpose (Gernalzick). Dünne and Moser proffered a concept of automediality that rejects the binary division of (a) self-expression determining the mediated form or (b) (self)subjectivity being solely produced through the mediating technology. Hence, automediality has been traditionally applied to literary constructs such as autobiography and life-writing, but is now expanding into the digital domain and other "paratextual sites" (Maguire).
As Nadja Gernalzick suggests, automediality should "encourage and demand not only a systematics and taxonomy of the constitution of the self in respectively genre-specific ways, but particularly also in medium-specific ways" (227). Emma Maguire has offered a succinct working definition that builds on this requirement to signal the automedial universally, noting it operates as "a way of studying auto/biographical texts (of a variety of forms) that take into account how the effects of media shape the kinds of selves that can be represented, and which understands the self not as a preexisting subject that might be distilled into story form but as an entity that is brought into being through the processes of mediation."
Sidonie Smith and Julia Watson point to automediality as a methodology, and in doing so emphasize how the telling or mediation of a life actually shapes the kind of story that can be told autobiographically.
They state "media cannot simply be conceptualized as 'tools' for presenting a preexisting, essential self […] Media technologies do not just transparently present the self. They constitute and expand it" (Smith and Watson "Virtually Me" 77).
This distinction is vital for understanding how automediality might be applied to self-expression in virtual domains, including the holographic avatar dreams of Nguyen throughout Catalyst. Although addressing this distinction in relation to online websites, following P. David Marshall's description of "the proliferation of the public self", Maguire notes: "The same integration of digital spaces and platforms into daily life that is prompting the development of new tools in autobiography studies […] has also given rise to the field of persona studies, which addresses the ways in which individuals engage in practices of self-presentation in order to form commoditised identities that circulate in affective communities."
For Maguire, these automedial works operate textually "to construct the authorial self or persona". An extension to this digital, authorial construction is apparent in the exponential uptake of screen-mediated, prosumer-generated content, whether online or theatrical (Miller). According to Gernalzick, unlike fictional drama films, screen autobiographies more directly enable "experiential temporalities". Based on Mary Anne Doane's promotion of the "indexicality" of film/screen representations to connote the real, Gernalzick suggests that despite semiotic theories of the index problematising realism as an index as representation, the film medium is still commonly comprehended as the "imprint of time itself": "Film and the spectator of film are said to be in a continuous present. Because the viewer is aware, however, that the images experienced in or even as presence have been made in the past, the temporality of the so-called filmic present is always ambiguous" (230).
When expressed as indexical, automedial works, the intrinsic audio-visual capacities of film and video (as media) far surpass the temporal limitations of print and writing (Gernalzick, 228). One extreme example can be found in an emergent trend of "performance crime" murder and torture videos live-streamed or broadcast after the fact using mobile phone cameras and Facebook (Bender). In essence, the political economy of the automedial ecology is important to understand in the overall context of self expression and the governance of content exhibition, access, distribution and—where relevant—interaction.
So what are the implications for automedial works that employ virtual interfaces, and how does this evolving medium inform both the expressive autobiographical mode and audiences' subjectivities?
Case Study
The Catalyst program described above strove to shed new light on the potential for emerging technology to capture and create virtual avatars from living participants who (self-)generate autobiographical narratives interactively. Once past the initial gee-whiz journalistic evangelism of VR, the episode turned towards host Nguyen's stated goal—using contemporary technology to create an autonomous virtual human clone. Nguyen laments that if he could create only one such avatar, his primary choice would be that of his grandfather, who died when Nguyen was two years old—a desire rendered impossible.
The awkward humour of the plank walk scenario sequence soon gives way as the enthusiastic Nguyen is surprised by his family's discomfort with the idea of digitally recreating his grandfather.
Nguyen next visits a Southern California digital media lab to experience the process by which 3D virtual human avatars are created. Inside a domed array of lights and cameras, in less than one second a life-size 3D avatar is recorded via 6,000 LEDs illuminating his face in 20 different combinations, with eight cameras capturing the exposures from multiple angles, all in ultra high definition. Called the Light Stage (Debevec), it is the same technology used to create a life-size, virtual holocaust survivor, Pinchas Gutter (Ziv).
We see Nguyen encountering a life-size, high-resolution 2D screen version of Gutter's avatar. Standing before a microphone, Nguyen asks a series of questions about Gutter's wartime experiences and life in the concentration camps. The responses are naturalistic and authentic, as are the pauses between questions. The high definition 4K screen is photo-realist but much more convincing in-situ (as an artifact of the Catalyst video camera recording, in some close-ups horizontal lines of transmission appear). According to the project's curator, David Traum, the real Pinchas Gutter was recorded in 3D as a virtual hologram. He spent 25 hours providing 1,600 responses to a broad range of questions that the curator maintained covered "a lot of what people want to say" (Catalyst).
Figure 2: The Museum of Jewish Heritage in Manhattan presented an installation of New Dimensions in Testimony, featuring Pinchas Gutter and Eva Schloss
It is here that the intersection between VR and auto/biography hybridises in complex and potentially difficult ways. It is where the concept of automediality may offer insight into this rapidly emerging phenomenon of creating interactive, hyperreal versions of our selves using VR. These hyperreal VR personae can be questioned and respond in real-time, where interrogators interact either as casual conversers or determined interrogators.
The impact on visitors is sobering and palpable. As Nguyen relates at the end of his session, "I just want to give him a hug". The demonstrable capacity for this avatar to engender a high degree of empathy from its automedial testimony is clear, although as we indicate below, it could simply indicate increased levels of emotion.
Regardless, an ongoing concern amongst witnesses, scholars and cultural curators of memorials and museums dedicated to preserving the history of mass violence, and its associated trauma, is that once the lived experience and testimony of survivors passes with that generation the impact of the testimony diminishes (Broderick). New media modes of preserving and promulgating such knowledge in perpetuity are certainly worthy of embracing. As Stephen Smith, the executive director of the USC Shoah Foundation, suggests, the technology could extend "to people who have survived cancer or catastrophic hurricanes […] from the experiences of soldiers with post-traumatic stress disorder or survivors of sexual abuse, to those of presidents or great teachers. Imagine if a slave could have told her story to her grandchildren?" (Ziv)
Yet questions remain as to the veracity of these recorded personae. The avatars are created according to a specific agenda and the autobiographical content controlled for explicit editorial purposes. It is unclear what and why material has been excluded.
If, for example, during the recorded questioning, the virtual holocaust survivor became mute at recollecting a traumatic memory, cried or sobbed uncontrollably—all natural, understandable and authentic responses given the nature of the testimony—should these genuine and spontaneous emotions be included along with various behavioural tics such as scratching, shifting about in the seat and other naturalistic movements, to engender a more profound realism?
The generation of the photorealist, mimetic avatar—remaining as an interactive persona long after the corporeal, authorial being is gone—reinforces Baudrillard's concept of simulacra, where a clone exists devoid of its original entity and unable to challenge its automedial discourse. And what if some unscrupulous hacker managed to corrupt and subvert Gutter's AI so that it responded antithetically to its purpose, by denying the holocaust ever happened? The ethical dilemmas of such a paradigm were explored in the dystopian 2013 film, The Congress, where Robyn Wright plays herself (and her avatar), as an out-of-work actor who sells off the rights to her digital self. A movie studio exploits her screen persona in perpetuity, enabling audiences to "become" and inhabit her avatar in virtual space while she is limited in the real world from undertaking certain actions due to copyright infringement. The inability of Wright to control her mimetic avatar's discourse or action means the assumed automedial agency of her virtual self as an immortal, interactive being remains ontologically perplexing.
Figure 3: Robyn Wright undergoing full-body photogrammetry to create her VR avatar in The Congress (2013)
The various virtual exposures/experiences paraded throughout Catalyst's "Meet the Avatars" paradoxically recorded and broadcast a range of troubling emotional responses to such immersion. Many participant responses suggest great caution and sensitivity be undertaken before plunging headlong into the new gold rush mentality of virtual reality, augmented reality, and AI affordances. Catalyst depicted their program subjects often responding in discomfort and distress, with some visibly overwhelmed by their encounters and left crying. There is some irony that presenter Nguyen was himself relying on the conventions of 2D linear television journalism throughout, adopting face-to-camera address in (unconscious) automedial style to excitedly promote the assumed socio-cultural boon such automedial VR avatars will generate.
Challenging Authenticity
There are numerous ethical considerations surrounding the potential for AIs to expand beyond automedial (self-)expression towards photorealist avatars interacting outside of their pre-recorded content. When such systems evolve it may be nigh impossible to discern on screen whether the person you are conversing with is authentic or an indistinguishable, virtual doppelganger. In the future, a variant on the Turing Test may be needed to challenge and identify such hyperreal simulacra. We may be witnessing the precursor to such a dilemma playing out in the arena of audio-only podcasts, with some public intellectuals such as Sam Harris already discussing the legal and ethical problems of technology that can create audio from typed text that convincingly replicates the actual voice of a person by sampling approximately 30 minutes of their original speech (Harris).
Such audio manipulation technology will soon be available to anybody with the motivation and a relatively minor level of technological ability to assume an identity and masquerade as automediated dialogue. However, for the moment, the ability to convincingly alter a real-time, computer-generated video image of a person remains at the level of scientific innovation.
Also of significance is the extent to which audience reactions to such automediated expressions are indeed empathetic or simply part of the broader range of affective responses that also include direct sympathy as well as emotions such as admiration, surprise, pity, disgust and contempt (see Plantinga). There remains much rhetorical hype surrounding VR as the "ultimate empathy machine" (Milk). Yet the current use of the term "empathy" in VR, AI and automedial forms of communication seems to be principally focused on the capacity for the user-viewer to ameliorate negatively perceived emotions and experiences, whether traumatic or phobic.
When considering comments about authenticity here, it is important to be aware of the occasional slippage of technological terminology into the mainstream. For example, the psychological literature does emphasise that patients respond strongly to virtual scenarios, events, and details that appear to be "authentic" (Pertaub, Slater, and Barker). Authentic in this instance implies a resemblance to a corresponding scenario/activity in the real world. This is not simply another word for photorealism; rather, it describes, for instance, the experimental design of one study in which virtual (AI) audience members in a virtual seminar room designed to treat public speaking anxiety were designed to exhibit "random autonomous behaviours in real-time, such as twitches, blinks, and nods, designed to encourage the illusion of life" (Kwon, Powell and Chalmers 980). The virtual humans in this study are regarded as having greater authenticity than an earlier project on social anxiety (North, North, and Coble), which did not have much visual complexity but did incorporate researcher-triggered audio clips of audience members "laughing, making comments, encouraging the speaker to speak louder or more clearly" (Kwon, Powell, and Chalmers 980). The small movements, randomly cued rather than according to a recognisable pattern, are described by the researchers as creating a sense of authenticity in the VR environment as they seem to correspond to the sorts of random minor movements that actual human audiences in a seminar can be expected to make.
Nonetheless, nobody should regard an interaction with these AIs, or the avatar of Gutter, as in any way an encounter with a real person. Rather, the characteristics above function to create a disarming effect and enable the real person-viewer to willingly suspend their disbelief and enter into a pseudo-relationship with the AI; not as if it is an actual relationship, but as if it is a simulation of an actual relationship (USC). Lucy Suchman and colleagues invoke these ideas in an analysis of a YouTube video of some apparently humiliating human interactions with the MIT-created AI robot Mertz.
Their analysis contends that, while it may appear at first glance that the humans' mocking exchange with Mertz is mean-spirited, there is clearly a playfulness and willingness to engage with a form of AI that is essentially continuous with "long-standing assumptions about communication as information processing, and in the robot's performance evidence for the limits to the mechanical reproduction of interaction as we know it through computational processes" (Suchman, Roberts, and Hird).
Thus, it will be important for future work in the area of automediated testimony to consider the extent to which audiences are willing to suspend disbelief and treat the recounted traumatic experience with appropriate gravitas. These questions deserve attention, and not the kind of hype displayed by the current iteration of techno-evangelism. Indeed, some of this resurgent hype has come under scrutiny. From the perspective of VR-based tourism, Janna Thompson has recently argued that "it will never be a substitute for encounters with the real thing" (Thompson). Alyssa K. Loh, for instance, also argues that many of the negatively themed virtual experiences—such as those that drop the viewer into a scene of domestic violence or the location of a terrorist bomb attack—function not to put you in the position of the actual victim but in the position of the general category of domestic violence victim, or bomb attack victim, thus "deindividuating trauma" (Loh).
Future work in this area should consider actual audience responses and rely upon mixed-methods research approaches to audience analysis. In an era of alt.truth and Cambridge Analytica personality profiling from social media interaction, automediated communication in the virtual guise of AIs demands further study.
References
Anon. "New Dimensions in Testimony." Museum of Jewish Heritage. 15 Dec. 2017. 19 Apr. 2018 <http://mjhnyc.org/exhibitions/new-dimensions-in-testimony/>.
Australian Broadcasting Corporation. "Meet The Avatars." Catalyst, 15 Aug. 2017.
Baudrillard, Jean. "Simulacra and Simulations." Jean Baudrillard: Selected Writings. Ed. Mark Poster. Stanford: Stanford UP, 1988. 166-184.
Bender, Stuart Marshall. Legacies of the Degraded Image in Violent Digital Media. Basingstoke: Palgrave Macmillan, 2017.
Broderick, Mick. "Topographies of Trauma, Dark Tourism and World Heritage: Hiroshima's Genbaku Dome." Intersections: Gender and Sexuality in Asia and the Pacific. 24 Apr. 2010. 14 Apr. 2018 <http://intersections.anu.edu.au/issue24/broderick.htm>.
Debevec, Paul. "The Light Stages and Their Applications to Photoreal Digital Actors." SIGGRAPH Asia. 2012.
Doane, Mary Ann. The Emergence of Cinematic Time: Modernity, Contingency, the Archive. Cambridge: Harvard UP, 2002.
Dünne, Jörg, and Christian Moser. "Allgemeine Einleitung: Automedialität." Automedialität: Subjektkonstitution in Schrift, Bild und neuen Medien. Eds. Jörg Dünne and Christian Moser. München: Wilhelm Fink, 2008. 7-16.
Harris, Sam. "Waking Up with Sam Harris #64 – Ask Me Anything." YouTube, 16 Feb. 2017. 16 Mar. 2018 <https://www.youtube.com/watch?v=gMTuquaAC4w>.
Kwon, Joung Huem, John Powell, and Alan Chalmers. "How Level of Realism Influences Anxiety in Virtual Reality Environments for a Job Interview." International Journal of Human-Computer Studies 71.10 (2013): 978-87.
Loh, Alyssa K. "I Feel You." Artforum, Nov. 2017. 10 Apr. 2018 <https://www.artforum.com/print/201709/alyssa-k-loh-on-virtual-reality-and-empathy-71781>.
Marshall, P. David. "Persona Studies: Mapping the Proliferation of the Public Self." Journalism 15.2 (2014): 153-170.
Mathews, Karen. "Exhibit Allows Virtual 'Interviews' with Holocaust Survivors." Phys.org Science X Network, 15 Dec. 2017. 18 Apr. 2018 <https://phys.org/news/2017-09-virtual-holocaust-survivors.html>.
Maguire, Emma. "Home, About, Shop, Contact: Constructing an Authorial Persona via the Author Website." M/C Journal 17.9 (2014).
Miller, Ken. More than Fifteen Minutes of Fame: The Evolution of Screen Performance. Unpublished PhD Thesis. Murdoch University. 2009.
Milk, Chris. "Ted: How Virtual Reality Can Create the Ultimate Empathy Machine." TED Conferences, LLC. 16 Mar. 2015. <https://www.ted.com/talks/chris_milk_how_virtual_reality_can_create_the_ultimate_empathy_machine>.
Nakamura, Lisa. "Cyberrace." Identity Technologies: Constructing the Self Online. Eds. Anna Poletti and Julie Rak. Madison, Wisconsin: U of Wisconsin P, 2014. 42-54.
North, Max M., Sarah M. North, and Joseph R. Coble. "Effectiveness of Virtual Environment Desensitization in the Treatment of Agoraphobia." International Journal of Virtual Reality 1.2 (1995): 25-34.
Pertaub, David-Paul, Mel Slater, and Chris Barker. "An Experiment on Public Speaking Anxiety in Response to Three Different Types of Virtual Audience." Presence: Teleoperators and Virtual Environments 11.1 (2002): 68-78.
Plantinga, Carl. "Emotion and Affect." The Routledge Companion to Philosophy and Film. Eds. Paisley Livingstone and Carl Plantinga. New York: Routledge, 2009. 86-96.
Rizzo, A.A., and Sebastian Koenig. "Is Clinical Virtual Reality Ready for Primetime?" Neuropsychology 31.8 (2017): 877-99.
Rothbaum, Barbara O., Albert "Skip" Rizzo, and JoAnne Difede. "Virtual Reality Exposure Therapy for Combat-Related Posttraumatic Stress Disorder." Annals of the New York Academy of Sciences 1208.1 (2010): 126-32.
Smith, Sidonie, and Julia Watson. Reading Autobiography: A Guide to Interpreting Life Narratives. 2nd ed. Minneapolis: U of Minnesota P, 2010.
———. "Virtually Me: A Toolbox about Online Self-Presentation." Identity Technologies: Constructing the Self Online. Eds. Anna Poletti and Julie Rak. Madison: U of Wisconsin P, 2014. 70-95.
Suchman, Lucy, Celia Roberts, and Myra J. Hird. "Subject Objects." Feminist Theory 12.2 (2011): 119-45.
Thompson, Janna. "Why Virtual Reality Cannot Match the Real Thing." The Conversation, 14 Mar. 2018. 10 Apr. 2018 <http://theconversation.com/why-virtual-reality-cannot-match-the-real-thing-92035>.
USC. "Skip Rizzo on Medical Virtual Reality: USC Global Conference 2014." YouTube, 28 Oct. 2014. 2 Apr. 2018 <https://www.youtube.com/watch?v=PdFge2XgDa8>.
Won, Andrea Stevenson, Jeremy Bailenson, Jimmy Lee, and Jaron Lanier. "Homuncular Flexibility in Virtual Reality." Journal of Computer-Mediated Communication 20.3 (2015): 241-59.
Ziv, Stan. "How Technology Is Keeping Holocaust Survivor Stories Alive Forever." Newsweek, 18 Oct. 2017. 19 Apr. 2018 <http://www.newsweek.com/2017/10/27/how-technology-keeping-holocaust-survivor-stories-alive-forever-687946.html>.
Стилі APA, Harvard, Vancouver, ISO та ін.
