Selected scientific literature on the topic "Gu dian yuan lin"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Gu dian yuan lin".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is included in the metadata.

Journal articles on the topic "Gu dian yuan lin"

1

Lin, Hao-yu, Yu-cui Gu, Zhen-hao Wang, Kai-yuan Huang, He-xing Sun, De Zeng, and Yuan-ke Liang. "Abstract 5340: GATA3-VMP1 axis induces trastuzumab resistance via promoting autophagy in HER2-positive breast cancer cells". Cancer Research 82, no. 12_Supplement (June 15, 2022): 5340. http://dx.doi.org/10.1158/1538-7445.am2022-5340.

Abstract:
Trastuzumab is the most important agent for HER-2 positive breast cancer in both the neoadjuvant/adjuvant and metastatic settings. However, primary or acquired resistance to trastuzumab often compromises its efficacy and impairs patients’ survival. Vacuole Membrane Protein-1 (VMP1), a transmembrane protein located in the endoplasmic reticulum membrane, has been reported to induce chemotherapy resistance by promoting autophagy. Our preliminary study found that the expression levels of GATA Binding Protein 3 (GATA3) and VMP1 are positively correlated in tissue samples from HER-2 positive breast cancer patients. GATA3 and VMP1 were significantly increased at both the protein and mRNA levels in trastuzumab-resistant breast cancer cell lines compared to their sensitive counterparts. Furthermore, the expression of the autophagy signature protein P62 significantly decreased in trastuzumab-resistant cell lines, while the expression of the autophagy activation protein ATG5 increased, indicating a close relationship between autophagy and trastuzumab resistance. In tissue samples from patients treated with neoadjuvant regimens containing trastuzumab, the expression of VMP1 was significantly higher in the non-pCR (non-pathological complete response) group than in the pCR group. Overexpression of VMP1 in wild-type breast cancer cells decreased their sensitivity to trastuzumab. Conversely, when VMP1 was knocked down in trastuzumab-resistant cells, their sensitivity to trastuzumab increased. Moreover, when GATA3 was overexpressed in wild-type HER-2 positive cell lines, the expression of VMP1 increased, as did the expression of ATG5, the signature protein of autophagy, suggesting that the transcription factor GATA3 may play a role in the transcriptional activation of VMP1. The promoter sequence was predicted, and the GATA3 regulatory core sequence "TGATA" was found upstream of the transcription start site, from -327 bp to -322 bp, indicating that GATA3 may transcriptionally activate the expression of VMP1 and promote autophagy. TCGA (The Cancer Genome Atlas) correlation analysis showed that both VMP1 and GATA3 are positively correlated with the classical autophagy-related factors ATG2B, ATG7, and BCL1, indicating that VMP1 and GATA3 may be involved in the positive regulation of autophagy in HER-2 positive breast cancer. Collectively, our results suggest that GATA3 transcriptionally activates VMP1 to promote autophagy, resulting in the induction of trastuzumab resistance in HER-2 positive breast cancer. Further mechanistic study of the contribution of autophagy to trastuzumab resistance is warranted. Citation Format: Hao-yu Lin, Yu-cui Gu, Zhen-hao Wang, Kai-yuan Huang, He-xing Sun, De Zeng, Yuan-ke Liang. GATA3-VMP1 axis induces trastuzumab resistance via promoting autophagy in HER2-positive breast cancer cells [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 5340.
2

Yao, Herui, Min Yan, Zhongsheng Tong, Xinhong Wu, Min-Hee Ryu, Jee Hyun Kim, John Park, et al. "Abstract CT175: Safety, tolerability, pharmacokinetics, and antitumor activity of SHR-A1811 in HER2-expressing/mutated advanced solid tumors: A global phase 1, multi-center, first-in-human study". Cancer Research 83, no. 8_Supplement (April 14, 2023): CT175. http://dx.doi.org/10.1158/1538-7445.am2023-ct175.

Abstract:
Background: SHR-A1811 is an ADC composed of a humanized anti-HER2 monoclonal antibody (trastuzumab), a cleavable linker, and a DNA topoisomerase I inhibitor payload. Here we assessed SHR-A1811 in HER2-expressing/mutated unresectable, advanced, or metastatic solid tumors. Methods: Pts were eligible if they had HER2 positive breast cancer (BC), HER2 positive gastric/GEJ carcinoma, HER2 low-expressing BC, HER2-expressing/mutated NSCLC, or other HER2-expressing/mutated solid tumors, and were refractory or intolerant to standard therapy. SHR-A1811 at doses of 1.0-8.0 mg/kg was given Q3W (IV). The primary endpoints were DLT, safety, and the RP2D. Results: From Sep 7, 2020 to Sep 28, 2022, 250 pts who had undergone a median of 3 prior treatment lines in the metastatic setting received at least one dose of SHR-A1811 in the dose-escalation, PK-expansion, and indication-expansion parts. As of the data cutoff on Sep 28, 2022, 1 pt had experienced a DLT. Treatment-related adverse events (TRAEs) were reported in 243 (97.2%) pts. Grade ≥3 TRAEs, serious TRAEs, and treatment-related deaths were reported in 131 (52.4%), 31 (12.4%), and 3 (1.2%) pts, respectively. Interstitial lung disease (an AESI) was reported in 8 (3.2%) pts. Exposures of SHR-A1811, total antibody, and the payload were generally proportional to dose from 3.2 to 8.0 mg/kg. ORR was 61.6% (154/250, 95% CI 55.3-67.7) in all pts. Objective responses were observed in pts with HER2 positive BC (88/108, ORR 81.5%, 95% CI 72.9-88.3), HER2-low BC (43/77, ORR 55.8%, 95% CI 44.1-67.2), urothelial carcinoma (7/11), colorectal cancer (3/10), gastric/GEJ carcinoma (5/9), biliary tract cancer (5/8), NSCLC (1/3), endometrial cancer (1/2), and H&N cancer (1/1). Subgroup analyses of ORR are shown in Table 1. The 6-month PFS rate was 73.9% in all pts. Conclusions: SHR-A1811 was well tolerated and showed promising antitumor activity in heavily pretreated advanced solid tumors.

Table 1. Subgroup analyses of ORR

No. of prior treatment lines in metastatic setting (all pts, N=250) | HER2 positive BC (N=108) | HER2-low BC (N=77) | Other tumor types (N=65)
≤3 | 81.8% (45/55) | 58.7% (27/46) | 36.7% (18/49)
>3 | 81.1% (43/53) | 51.6% (16/31) | 31.3% (5/16)

Prior anti-HER2 therapies in pts with BC (N=185)* | HER2 positive BC (N=108) | HER2-low BC (N=77) | All BC (N=185)
Any | 82.2% (88/107, 73.7-89.0) | 68.8% (11/16, 41.3-89.0) | 80.5% (99/123, 72.4-87.1)
Trastuzumab | 81.9% (86/105, 73.2-88.7) | 75.0% (9/12, 42.8-94.5) | 81.2% (95/117, 72.9-87.8)
Pertuzumab | 83.0% (39/47, 69.2-92.4) | 100% (5/5, 47.8-100) | 84.6% (44/52, 71.9-93.1)
Pyrotinib | 86.9% (53/61, 75.8-94.1) | 71.4% (5/7, 29.0-96.3) | 85.3% (58/68, 74.6-92.7)
Lapatinib | 80.0% (28/35, 63.1-91.6) | 100% (1/1, 2.5-100) | 80.6% (29/36, 64.0-91.8)
T-DM1 | 82.4% (14/17, 56.6-96.2) | 100% (3/3, 29.2-100) | 85.0% (17/20, 62.1-96.8)
Other HER2-ADC (except T-DM1)** | 60.0% (9/15, 32.3-83.7) | 50.0% (2/4, 6.8-93.2) | 57.9% (11/19, 33.5-79.8)

ORR in pts with tumor types other than BC (N=65) | HER2 IHC3+ or IHC2+/ISH+ (N=36) | HER2 IHC2+/ISH- or IHC1+ or unknown (N=29) | All other tumor types (N=65)
% (n/N) | 38.9% (14/36) | 31.0% (9/29) | 35.4% (23/65)

ORR is shown as % (n/N, 95% CI) or % (n/N). *ORR is calculated using the number of subjects previously treated with anti-HER2 cancer therapy in the advanced/metastatic setting as the denominator; 2-sided 95% CIs are estimated using the Clopper-Pearson method. **Includes RC48-ADC, A166, DP303c, MRG002, ARX788, TAA013, DX126-262, PF-06804103, and BAT8001.
Citation Format: Herui Yao, Min Yan, Zhongsheng Tong, Xinhong Wu, Min-Hee Ryu, Jee Hyun Kim, John Park, Yahua Zhong, Weiqing Han, Caigang Liu, Mark Voskoboynik, Qun Qin, Jian Zhang, Minal Barve, Ana Acuna-Villaorduna, Vinod Ganju, Seock-Ah Im, Changsheng Ye, Yongmei Yin, Amitesh C. Roy, Li-Yuan Bai, Yung-Chang Lin, Chia-Jui Yen, Hui Li, Ki Young Chung, Shanzhi Gu, Jun Qian, Yuee Teng, Yiding Chen, Yu Shen, Kaijing Zhao, Shangyi Rong, Xiaoyu Zhu, Erwei Song. Safety, tolerability, pharmacokinetics, and antitumor activity of SHR-A1811 in HER2-expressing/mutated advanced solid tumors: A global phase 1, multi-center, first-in-human study [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2023; Part 2 (Clinical Trials and Late-Breaking Research); 2023 Apr 14-19; Orlando, FL. Philadelphia (PA): AACR; Cancer Res 2023;83(8_Suppl):Abstract nr CT175.
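The footnote to Table 1 notes that the two-sided 95% CIs were estimated with the Clopper-Pearson (exact binomial) method. As a hedged illustration of that calculation only, not code from the cited study, the sketch below reproduces the interval for the overall response rate of 154/250 using the standard beta-quantile form of the Clopper-Pearson interval.

    from scipy.stats import beta

    def clopper_pearson(k: int, n: int, alpha: float = 0.05):
        # Exact two-sided (1 - alpha) confidence interval for a binomial proportion k/n.
        lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
        upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
        return lower, upper

    # Overall ORR reported in the abstract: 154 responders out of 250 patients.
    low, high = clopper_pearson(154, 250)
    print(f"ORR = {154 / 250:.1%}, 95% CI {low:.1%}-{high:.1%}")  # about 61.6%, 55.3%-67.7%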
3

M. Ch., R. "Kodansha Ltd., Kodansha Encyclopedia of Japan, Tokio, 1983. 9 vols. + suplemento." Estudios de Asia y África, September 1, 1990, 575–76. http://dx.doi.org/10.24201/eaa.v25i3.1217.

Abstract:
Tang Song ci jian shang ci dian [The lyric poetry of the Tang [618-906] and Song [960-1279] dynasties], Suzhou, 1986. 1517 pp. Song shi jian shang ci dian [The poetry of the Song dynasty], Shanghai, 1987. 1610 pp. Gu shi jian shang ci dian (Ancient poetry), Beijing, 1988. 1414 pp. Yuan qu jian shang ci dian (The songs of the Yuan dynasty [1279-1368]), Beijing, 1988. 1521 pp. R. M. Ch.
4

Thinh, Nguyen Hong, Tran Hoang Tung, and Le Vu Ha. "Depth-aware salient object segmentation". VNU Journal of Science: Computer Science and Communication Engineering 36, no. 2 (October 7, 2020). http://dx.doi.org/10.25073/2588-1086/vnucsce.217.

Abstract:
Object segmentation is an important task that is widely employed in many computer vision applications such as object detection, tracking, recognition, and retrieval. It can be seen as a two-phase process: object detection and segmentation. Object segmentation becomes more challenging when there is no prior knowledge about the object in the scene. In such conditions, visual attention analysis via saliency mapping may offer a means of predicting the object location by using visual contrast, local or global, to identify regions that draw strong attention in the image. However, in situations such as cluttered backgrounds, highly varied object surfaces, or shadows, salient object segmentation approaches based on a single image feature such as color or brightness have been shown to be insufficient for the task. This work proposes a new salient object segmentation method which uses a depth map obtained from the input image to enhance the accuracy of saliency mapping. A deep learning-based method is employed for depth map estimation. Our experiments showed that the proposed method outperforms other state-of-the-art object segmentation algorithms in terms of recall and precision. Keywords: saliency map, depth map, deep learning, object segmentation.
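The abstract describes the approach only at a high level: a contrast-based saliency map is refined with a depth map produced by a deep network, and the result is turned into an object mask. The snippet below is a minimal sketch of that kind of depth-weighted fusion under stated assumptions; it is not the authors' algorithm, and the global-contrast saliency term, the depth_weight parameter, and the mean-plus-one-standard-deviation threshold are illustrative choices only.

    import numpy as np

    def depth_aware_salient_mask(image_rgb: np.ndarray, depth: np.ndarray,
                                 depth_weight: float = 0.5) -> np.ndarray:
        # image_rgb: HxWx3 uint8 image; depth: HxW map of estimated distances,
        # e.g. from any off-the-shelf monocular depth estimator.
        img = image_rgb.astype(np.float32) / 255.0

        # Simple global-contrast saliency: distance of each pixel's colour from
        # the mean image colour (a crude stand-in for the paper's saliency model).
        mean_color = img.reshape(-1, 3).mean(axis=0)
        saliency = np.linalg.norm(img - mean_color, axis=2)
        saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

        # Normalize depth and turn it into a "closeness" prior: salient objects
        # are assumed to tend to lie closer to the camera than the background.
        d = depth.astype(np.float32)
        d = (d - d.min()) / (d.max() - d.min() + 1e-8)
        closeness = 1.0 - d

        # Depth-aware fusion: boost saliency where the scene is close to the camera.
        fused = (1.0 - depth_weight) * saliency + depth_weight * saliency * closeness

        # Simple adaptive threshold to produce a binary object mask (0 or 255).
        mask = fused >= fused.mean() + fused.std()
        return mask.astype(np.uint8) * 255

A full implementation would replace the toy saliency term with a stronger detector (for example, one of the contrast- or learning-based models the paper builds on) and feed in a depth map estimated by a learned monocular model, as the abstract states the authors do.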

Theses / dissertations on the topic "Gu dian yuan lin"

1

Jian, Hanqian. "Yi bang shi shi : Xie Lingyun shi Chu ci dian gu yan jiu = A study on literary quotation of Chu Ci in the poetry of Xie Ling-yun". 2002. http://net3.hkbu.edu.hk/~libres/cgi-bin/thesisab.pl?pdf=b17087351a.pdf.


Books on the topic "Gu dian yuan lin"

1

Renjuan, Wang, ed. Gu dian yuan lin. Changsha: Hunan ke xue ji shu chu ban she, 2009.

2

Zhang, Tingyu. Ci lin dian gu. [Yangzhou shi]: Jiangsu Guangling gu ji ke yin she, 1989.

3

Liyao, Cheng, and Guo Fangming, eds. Zhongguo gu dian yuan lin. Kunming shi: Yunnan ren min chu ban she, 1999.

4

Baoyuan, Jin, Sun Ruxian, and Suzhou yuan lin he lü hua guan li ju, eds. Suzhou gu dian yuan lin. Shanghai: Shanghai shi jie tu shu chu ban she, 2008.

5

Peng, Yigang. Zhongguo gu dian yuan lin fen xi. Taibei Shi: Di jing qi ye gu fen you xian gong si chu ban bu, 1988.

6

Xiaoli, Zhang, ed. You ya de jiang nan gu dian yuan lin. Beijing: Zhongguo lu you chu ban she, 2006.

7

Fan, Yiguang, and Chaoxiong Feng. The classical gardens of Suzhou: Suzhou gu dian yuan lin. Beijing: New World Press, 2007.

8

Liu, Tongtong. Zhongguo gu dian yuan lin de ru xue ji yin. Tianjin Shi: Tianjin da xue chu ban she, 2015.

9

Jinsheng, Xu, ed. Zhong Ri gu dian yuan lin wen hua bi jiao. Beijing: Zhongguo jian zhu gong ye chu ban she, 2004.

10

Gu, Hailin. Yi hai lin yuan: Gu Hailin gu dian jia ju diao ke yi shu cang pin ji. Shanghai: Shanghai ci shu chu ban she, 2008.
