To view the other types of publications on this topic, follow this link: Detection et segmentation des lignes.

Journal articles on the topic "Detection et segmentation des lignes"

Consult the top 50 journal articles for research on the topic "Detection et segmentation des lignes".

Next to every work in the list an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Lessard, Claude, and Creutzer Mathurin. "L'évolution du corps enseignant québécois : 1960-1986". Revue des sciences de l'éducation 15, no. 1 (26.11.2009): 43–71. http://dx.doi.org/10.7202/900617ar.

Annotation:
Abstract: In this article, the authors sketch the broad outlines of a framework for studying the evolution of the Quebec teaching profession at the primary and secondary levels, from the Quiet Revolution to the present day. Their essentially socio-historical approach addresses both the internal structuring of the teaching body and its parameters of integration, differentiation, and segmentation, as well as the evolution of the dominant conception of the teaching function. Attention is paid to the university as an instance of professional legitimation for teachers. At the theoretical level, the authors address the professionalization-proletarianization opposition as a way of grasping the teaching body and situating it within the dominant social relations.
2

Çiftci, Sadettin, and Bahattin Kerem Aydin. "Comment on Lee et al. Accuracy of New Deep Learning Model-Based Segmentation and Key-Point Multi-Detection Method for Ultrasonographic Developmental Dysplasia of the Hip (DDH) Screening. Diagnostics 2021, 11, 1174". Diagnostics 12, no. 7 (18.07.2022): 1738. http://dx.doi.org/10.3390/diagnostics12071738.

Annotation:
We have read the article titled “Accuracy of New Deep Learning Model-Based Segmentation and Key-Point Multi-Detection Method for Ultrasonographic Developmental Dysplasia of the Hip (DDH) Screening” by Lee et al. [...]
3

El-shazli, Alaa M. Adel, Sherin M. Youssef, and Marwa Elshennawy. "COMPUTER-AIDED MODEL FOR BREAST CANCER DETECTION IN MAMMOGRAMS". International Journal of Pharmacy and Pharmaceutical Sciences 8, no. 2 (17.09.2016): 31. http://dx.doi.org/10.22159/ijpps.2016v8s2.15216.

Annotation:
The objective of this research was to introduce a new system for the automated detection of breast masses in mammography images. The system is able to discriminate whether an image contains a mass or not, as well as between benign and malignant masses. A new automated ROI segmentation model is proposed, in which a profiling model is integrated with a new iterative region-growing scheme. The ROI segmentation is integrated with both statistical and texture feature extraction and selection to discriminate suspected regions effectively. A classifier model based on a linear Fisher classifier is designed for suspected-region identification. To assess the system's performance, a large mammogram database was used for experimental analysis, with sensitivity, specificity, and accuracy as performance measures. In this study, the methods yielded an accuracy of 93% for normal/abnormal classification and 79% for benign/malignant classification. The proposed model improved on Naga et al., 2001 by 8% for normal/abnormal classification and 7% for benign/malignant classification, and on Subashimi et al., 2015 by 8% for normal/abnormal classification. Early diagnosis of this disease plays a major role in its treatment, so the use of computer systems as a detection tool can be viewed as essential in helping with this disease.
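
The classification stage described above feeds statistical and texture ROI features to a linear Fisher discriminant. A minimal sketch of that stage with scikit-learn, using synthetic feature vectors in place of the authors' features:

```python
# Linear Fisher discriminant for mass classification; features and labels
# here are synthetic placeholders, not the authors' data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))              # 200 ROIs x 6 features (e.g. mean, variance, texture)
y = rng.integers(0, 2, size=200)           # 0 = normal, 1 = mass (synthetic labels)

clf = LinearDiscriminantAnalysis()         # Fisher's linear discriminant
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(scores.mean())
```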
4

Nour, Majid, Hakan Öcal, Adi Alhudhaif, and Kemal Polat. "Skin Lesion Segmentation Based on Edge Attention Vnet with Balanced Focal Tversky Loss". Mathematical Problems in Engineering 2022 (14.06.2022): 1–10. http://dx.doi.org/10.1155/2022/4677044.

Annotation:
Segmentation of skin lesions from dermoscopic images plays an essential role in the early detection of skin cancer. However, skin lesion segmentation is still challenging due to artifacts such as the indistinguishability of skin lesions from normal skin, hair on the skin, and reflections in the obtained dermoscopy images. In this study, an edge attention network (ET-Net) combining an edge guidance module (EGM) and a weighted aggregation module is added to the 2D volumetric convolutional neural network (Vnet 2D) to maximize the performance of skin lesion segmentation. In addition, the proposed fusion model presents a new fusion loss function combining balanced binary cross-entropy (BBCE) and the focal Tversky loss (FTL). The proposed model has been tested on the ISIC 2018 Task 1 Lesion Boundary Segmentation Challenge dataset, where it outperformed the state-of-the-art studies.
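
The focal Tversky loss combined above weights false negatives and false positives asymmetrically and then focuses on hard examples. A minimal numpy sketch of one common formulation; the alpha, beta, and gamma values are illustrative, not taken from the paper:

```python
# Focal Tversky loss in one common formulation; parameters are illustrative.
import numpy as np

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """pred: predicted foreground probabilities, target: binary mask."""
    tp = np.sum(pred * target)
    fn = np.sum((1.0 - pred) * target)
    fp = np.sum(pred * (1.0 - target))
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma        # gamma < 1 emphasizes hard examples

pred = np.array([[0.9, 0.2], [0.7, 0.1]])
mask = np.array([[1, 0], [1, 0]])
print(focal_tversky_loss(pred, mask))
```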
5

Chen, Shuo-Tsung, Tzung-Dau Wang, Wen-Jeng Lee, Tsai-Wei Huang, Pei-Kai Hung, Cheng-Yu Wei, Chung-Ming Chen, and Woon-Man Kung. "Coronary Arteries Segmentation Based on the 3D Discrete Wavelet Transform and 3D Neutrosophic Transform". BioMed Research International 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/798303.

Annotation:
Purpose. Most applications in the field of medical image processing require precise estimation. To improve segmentation accuracy, this study proposes a novel segmentation method for coronary arteries that allows automatic and accurate detection of coronary pathologies. Methods. The proposed segmentation method consists of two parts. First, 3D region growing is applied to give the initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, is detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation can accurately detect the coronary arteries. Results. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground-truth values obtained from commercial GE Healthcare software and with the level-set method proposed by Yang et al., 2007, and indicate that the proposed method is more efficient. Conclusion. Based on the initial segmentation of coronary arteries obtained from 3D region growing, a one-level 3D DWT and the 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
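
The HHH subband inspected above consists of the detail coefficients along all three axes of a one-level 3D wavelet decomposition. A small sketch with PyWavelets on a random stand-in volume (the db1 wavelet is an illustrative choice, not necessarily the authors'):

```python
# Extract the HHH (detail-detail-detail) subband of a one-level 3D DWT.
import numpy as np
import pywt

volume = np.random.rand(64, 64, 64)        # stand-in for a CT volume
coeffs = pywt.dwtn(volume, wavelet="db1")  # one-level 3D DWT
hhh = coeffs["ddd"]                        # 'd' on all three axes = HHH subband
print(hhh.shape)                           # (32, 32, 32)
```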
6

Zhong, Johnson. "Analyzing Out-of-Domain Generalization Performance of Pre-Trained Segmentation Models". Network and Communication Technologies 8, no. 1 (16.02.2023): 1. http://dx.doi.org/10.5539/nct.v8n1p1.

Annotation:
Artists illustrate objects to various degrees of complexity. As the amount of detail or the similarity to reality of a depiction decreases, the object tends to be reduced to its simplest, most relevant higher-level features (Harrison, 1981). One of the reasons Deep Neural Networks (DNN) may fail to identify objects in an image is that models are unable to recognize the order of importance of features such as shape, depth, or color within an image, which means even the most minute distortions of pixels within an image that would be imperceptible to humans would greatly impact the performance of the object detection models (Eykholt et al., 2018). However, by training DNN on artworks where the most prominent features defining specific objects are emphasized, perhaps a model can be made to be more resilient against small-scale changes in an image. In this paper, the correlation between the level of similarity to reality of images and artworks of an object and the accuracy of object detection models is investigated to test the ability of object detection models in identifying the most salient features of a particular object. The results of this report can help outline the efficacy of models only trained on real images in identifying increasingly abstract artworks that have simplified an object to its most prominent features. The experiment shows that the accuracies of models decrease as the images or illustrations provided become more abstract or simplified, which suggests the higher level features that identify a particular object are different in object detection models and humans.
7

Lefkovits, Szidónia, and László Lefkovits. "U-Net architecture variants for brain tumor segmentation of histogram corrected images". Acta Universitatis Sapientiae, Informatica 14, no. 1 (01.08.2022): 49–74. http://dx.doi.org/10.2478/ausi-2022-0004.

Annotation:
Abstract In this paper we propose to create an end-to-end brain tumor segmentation system that applies three variants of the well-known U-Net convolutional neural networks. In our results we obtain and analyse the detection performances of U-Net, VGG16-UNet and ResNet-UNet on the BraTS2020 training dataset. Further, we inspect the behavior of the ensemble model obtained as the weighted response of the three CNN models. We introduce essential preprocessing and post-processing steps so as to improve the detection performances. The original images were corrected and the different intensity ranges were transformed into the 8-bit grayscale domain to uniformize the tissue intensities, while preserving the original histogram shapes. For post-processing we apply region connectedness onto the whole tumor and conversion of background pixels into necrosis inside the whole tumor. As a result, we present the Dice scores of our system obtained for WT (whole tumor), TC (tumor core) and ET (enhanced tumor) on the BraTS2020 training dataset.
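
The ensemble above is described as the weighted response of the three networks. A minimal numpy sketch of that fusion step, with synthetic probability maps and assumed weights:

```python
# Weighted ensemble of three segmentation networks' class-probability maps;
# weights and maps are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
# three models x (classes, H, W) softmax outputs
probs = rng.dirichlet(np.ones(4), size=(3, 128, 128)).transpose(0, 3, 1, 2)
weights = np.array([0.4, 0.35, 0.25])            # per-model weights (assumed)

ensemble = np.tensordot(weights, probs, axes=1)  # weighted response, (4, H, W)
labels = ensemble.argmax(axis=0)                 # final segmentation map
print(labels.shape)
```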
8

Lee, Hongseok, Kyungdoc Kim, Guhyun Kang, Kyu-Hwan Jung, and Sunyoung S. Lee. "Abstract 1721: Spatial distribution of immune cells as quantitative prognosis indicator in hepatocellular carcinoma". Cancer Research 82, no. 12_Supplement (15.06.2022): 1721. http://dx.doi.org/10.1158/1538-7445.am2022-1721.

Annotation:
Abstract Background: We previously demonstrated that the analysis of the tumor microenvironment (TME) in histopathology images via tissue segmentation [1] and cell density in lymphocyte-rich areas [2] impacts prognosis and treatment in hepatocellular carcinoma (HCC). Few biomarker models exist to prognosticate patients with HCC via automated analysis of the TME at the cellular level. Methods: Clinical outcomes data and histopathology images of 351 patients with HCC were obtained from TCGA. We developed a deep learning-based algorithm to analyze the tumor volume and the spatial distribution of nuclei in the TME. It combines two models: a DenseNet-based HCC segmentation model trained on the PAIP2019 dataset, which achieved an F1-score of 0.8582 [3], and a HoverNet-based cell detection model trained on lymphocyte, macrophage, and neutrophil annotations from the MoNuSAC dataset, which achieved a binary PQ of 0.654 [4]. Results: The HCC segmentation model divided the TME into tumoral, marginal, and peritumoral areas by image processing. The marginal and peritumoral areas were defined as the inner 50 um band and the outer 100 um band from the estimated tumoral boundary, respectively. The ratios of neutrophils, lymphocytes, and macrophages to the total cell count in the marginal and peritumoral areas were calculated by integrating the HCC segmentation and cell detection models, and the resulting leukocyte proportions were entered into a Cox proportional hazards analysis. The macrophage proportion in the peritumoral area was a significant prognostic indicator, with a log(hazard ratio) of -2.42 ± 2.14 (p=0.026). The lymphocyte proportion in both the peritumoral and marginal areas showed a significant log(hazard ratio) of -1.70 ± 1.61 (p=0.042). Conclusions: Retrospective analysis of the TME using a deep learning-assisted algorithm combining tissue segmentation and cell detection models reveals that the ratios of lymphocytes and macrophages in the peritumoral areas of the HCC TME significantly impact prognosis. Further analyses in prospective studies may provide more information about cellular biomarkers. [1] Kim et al. Cancer Res 2020 (80) (16 Supp) 2631 [2] Park et al. Journal of Clinical Oncology 39, no. 15_suppl (May 20, 2021) 4107-4107 [3] Kim, Yoo Jung, Jang, Hyungjoon, Lee, Kyoungbun et al. Medical Image Analysis 67 (2021): 101854. [4] Verma, Ruchika. IEEE Transactions on Medical Imaging 39 (2020): 1380-1391. [5] Graham, Simon. Medical Image Analysis 58 (2019): 101563.
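
The marginal and peritumoral bands above are fixed-width regions around the estimated tumor boundary. A sketch of how such bands and a per-band cell-type ratio can be computed with distance transforms; the pixel size, masks, and cell maps are synthetic assumptions:

```python
# Marginal (inner 50 um) and peritumoral (outer 100 um) bands around a
# tumor boundary, plus a per-band cell-type ratio.
import numpy as np
from scipy import ndimage

um_per_px = 0.5                                     # assumed pixel size
tumor = np.zeros((512, 512), dtype=bool)
tumor[128:384, 128:384] = True                      # toy tumor mask

d_in = ndimage.distance_transform_edt(tumor) * um_per_px    # depth inside tumor
d_out = ndimage.distance_transform_edt(~tumor) * um_per_px  # distance outside tumor
marginal = tumor & (d_in <= 50.0)
peritumoral = ~tumor & (d_out <= 100.0)

rng = np.random.default_rng(0)
cells = rng.random((512, 512)) < 0.01               # toy map of detected cell centers
macrophage = cells & (rng.random((512, 512)) < 0.2) # toy macrophage subset

ratio = macrophage[peritumoral].sum() / max(int(cells[peritumoral].sum()), 1)
print("peritumoral macrophage ratio:", ratio)
```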
9

Gujjunoori, Sagar, Madhu Oruganti, N. Aparna, M. Srija, and Chaitrali Dangare. "Tracking and Size Estimation of Objects in Motion based on Median of Localized Thresholding". International Journal of Engineering & Technology 7, no. 4.6 (25.09.2018): 78. http://dx.doi.org/10.14419/ijet.v7i4.6.20241.

Annotation:
Motion detection and tracking play an important role in computer vision and robotics. Optical flow-based methods for estimating motion have been widely explored over the last decade, and the motion information retrieved by these techniques has numerous applications. Video analysis based on the size, speed, and direction of objects has wide applications in computer vision, robotics, and watermarking. Segmentation of moving objects based on optical flow is very challenging. In this paper, we present a model to estimate the size of a moving object based on the optical flow technique, together with a localized thresholding technique. Over-segmentation is reduced by the proposed local thresholding technique and the use of bilateral filtering. We compare our results with the scheme of Sagar et al.
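
A minimal sketch of the overall idea, using OpenCV's dense Farneback flow as a stand-in for the authors' estimator: per-block (localized) thresholds on the flow magnitude select moving pixels, whose count estimates the object's size. Frames, block size, and thresholds are illustrative assumptions:

```python
# Dense optical flow + per-block median-based thresholding to estimate a
# moving object's size in pixels.
import numpy as np
import cv2

prev = np.zeros((240, 320), np.uint8)
curr = np.zeros((240, 320), np.uint8)
cv2.rectangle(prev, (50, 60), (90, 100), 255, -1)   # object at time t
cv2.rectangle(curr, (55, 60), (95, 100), 255, -1)   # object shifted at t+1

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
mag = np.linalg.norm(flow, axis=2)                  # per-pixel flow magnitude

mask = np.zeros_like(mag, dtype=bool)
for y in range(0, mag.shape[0], 60):                # localized thresholding:
    for x in range(0, mag.shape[1], 80):            # each block uses its own
        block = mag[y:y+60, x:x+80]                 # median-based threshold
        mask[y:y+60, x:x+80] = block > max(2.0 * np.median(block), 0.5)

print("estimated object size (px):", int(mask.sum()))
```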
10

Bougrine, Asma, Rachid Harba, Raphael Canals, Roger Ledee, Meryem Jabloun, and Alain Villeneuve. "Segmentation of Plantar Foot Thermal Images Using Prior Information". Sensors 22, no. 10 (18.05.2022): 3835. http://dx.doi.org/10.3390/s22103835.

Annotation:
Diabetic foot (DF) complications are associated with temperature variations. The occurrence of DF ulceration could be reduced by using a contactless thermal camera. The aim of our study is to provide a decision support tool for the prevention of DF ulcers. The segmentation of the plantar foot in thermal images is thus a challenging step for a non-constraining acquisition protocol. This paper presents a new segmentation method for plantar foot thermal images. The method is designed to include five pieces of prior information regarding the aforementioned images. First, a new energy term is added to the snake of Kass et al. in order to force its curvature to match that of the prior shape, which has a known form. Second, we define the initial contour as the downsized prior-shape contour, placed inside the plantar foot surface in a vertical orientation. This choice makes the snake avoid strong false boundaries present outside the plantar region as it evolves. As a result, the snake produces a smooth contour that rapidly converges to the true boundaries of the foot. The proposed method is compared to two classical prior-shape snake methods, that of Ahmed et al. and that of Chen et al. A database of 50 plantar foot thermal images was processed. The results show that the proposed method outperforms the previous two methods, with a root-mean-square error of 5.12 pixels and a Dice similarity coefficient of 94%. The segmentation of the plantar foot regions in the thermal images helped us to assess point-to-point temperature differences between the two feet in order to detect hyperthermia regions, the presence of which is an early sign of ulceration in the diabetic foot. Furthermore, our method was applied to hyperthermia detection to illustrate the promising potential of thermography in the case of the diabetic foot. Combined with a user-friendly acquisition protocol, the proposed segmentation method is the first step toward a future mobile smartphone-based plantar foot thermal analysis for diabetic foot patients.
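
For reference, a baseline of the classical Kass et al. snake that the method above extends (the prior-shape curvature term itself is not reproduced here); scikit-image's active_contour on a synthetic blob, with illustrative parameters:

```python
# Classical active contour (snake) shrinking onto a synthetic region.
import numpy as np
from skimage.draw import disk
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = np.zeros((200, 200))
rr, cc = disk((100, 100), 40)
img[rr, cc] = 1.0                           # toy "plantar foot" region

s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 60 * np.sin(s),    # initial contour (row, col)
                        100 + 60 * np.cos(s)])   # placed around the region

snake = active_contour(gaussian(img, 3), init,
                       alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)                          # (200, 2) converged contour points
```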
11

Almukhtar, Mohammed, Ameer H. Morad, Hussein L. Hussein, and Mina H. Al-hashimi. "Brain Tumor Segmentation Using Enhancement Convolved and Deconvolved CNN Model". ARO-THE SCIENTIFIC JOURNAL OF KOYA UNIVERSITY 12, no. 1 (30.03.2024): 88–99. http://dx.doi.org/10.14500/aro.11333.

Annotation:
The brain assumes the role of the primary organ in the human body, serving as its ultimate controller and regulator. Nevertheless, malignant tumors may develop within the brain, and at present no definitive explanation of the etiology of brain cancer has been established. This study develops a model that can accurately identify the presence of a tumor in a given magnetic resonance imaging (MRI) scan and subsequently determine its size within the brain. The proposed methodology comprises a two-step process, namely tumor extraction and measurement (segmentation), followed by the application of deep learning techniques for the identification and classification of brain tumors. The detection and measurement of a brain tumor involve a series of steps, namely preprocessing, skull stripping, and tumor segmentation. Overfitting of the BTNet convolutional neural network (CNN) model occurs after long training times because the model is trained on a large number of images. The tuned CNN model shows better performance in the classification step, achieving an accuracy rate of 98%. The performance metrics imply that the BTNet model can reach the optimal classification accuracy for the brain tumor (BraTS 2020) dataset. In the segmentation analysis, the model achieves a WT specificity of 0.97, a TC specificity of 0.9259, an ET specificity of 0.9677, and Dice scores of 79.73% for ET, 91.64% for WT, and 87.73% for TC.
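
The Dice score and specificity reported above are standard overlap metrics on binary masks; a minimal numpy sketch on synthetic masks:

```python
# Dice score and specificity from binary prediction/ground-truth masks.
import numpy as np

def dice(pred, gt, eps=1e-7):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def specificity(pred, gt, eps=1e-7):
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    return tn / (tn + fp + eps)

rng = np.random.default_rng(0)
gt = rng.random((128, 128)) < 0.2
pred = gt ^ (rng.random((128, 128)) < 0.05)   # ground truth with 5% flipped pixels
print(dice(pred, gt), specificity(pred, gt))
```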
12

Makowski, Ryszard, and Robert Hossa. "Automatic speech signal segmentation based on the innovation adaptive filter". International Journal of Applied Mathematics and Computer Science 24, no. 2 (26.06.2014): 259–70. http://dx.doi.org/10.2478/amcs-2014-0019.

Annotation:
Abstract Speech segmentation is an essential stage in designing automatic speech recognition systems, and several algorithms have been proposed in the literature. It is a difficult problem, as speech is immensely variable. The aim of the authors' studies was to design an algorithm that could be employed at the stage of automatic speech recognition. This would make it possible to avoid some problems related to speech signal parametrization. Posing the problem in such a way requires the algorithm to be capable of working in real time. The only such algorithm was proposed by Tyagi et al. (2006), and it is a modified version of Brandt's algorithm. The article presents a new algorithm for unsupervised automatic speech signal segmentation. It performs segmentation without access to information about the phonetic content of the utterances, relying exclusively on second-order statistics of the speech signal. The starting point for the proposed method is the time-varying Schur coefficients of an innovation adaptive filter. The Schur algorithm is known to be fast, precise, stable and capable of rapidly tracking changes in second-order signal statistics. A transition from one phoneme to another in the speech signal always indicates a change in signal statistics caused by vocal tract changes. In order to allow for the properties of human hearing, detection of inter-phoneme boundaries is performed based on statistics defined on the mel spectrum determined from the reflection coefficients. The paper presents the structure of the algorithm, defines its properties, lists parameter values, describes detection efficiency results, and compares them with those for another algorithm. The obtained segmentation results are satisfactory.
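
The algorithm above detects boundaries as changes in second-order statistics tracked by Schur (reflection) coefficients. As a simplified illustration of the same principle, this sketch flags candidate boundaries at peaks in frame-to-frame spectral change of a synthetic two-phoneme signal (not the authors' Schur-based detector):

```python
# Boundary detection via spectral flux on a synthetic two-phoneme signal.
import numpy as np

fs = 16000
t = np.arange(fs) / fs
# two "phonemes": a 300 Hz segment followed by a 1200 Hz segment
x = np.where(t < 0.5, np.sin(2 * np.pi * 300 * t), np.sin(2 * np.pi * 1200 * t))

frame, hop = 400, 160                       # 25 ms frames, 10 ms hop
frames = np.lib.stride_tricks.sliding_window_view(x, frame)[::hop]
spec = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
flux = np.linalg.norm(np.diff(np.log1p(spec), axis=0), axis=1)

boundary_frames = np.nonzero(flux > flux.mean() + 3 * flux.std())[0]
print(boundary_frames * hop / fs)           # boundary times in seconds (~0.5 s)
```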
13

Stern, David L., Jan Clemens, Philip Coen, Adam J. Calhoun, John B. Hogenesch, Ben J. Arthur, and Mala Murthy. "Experimental and statistical reevaluation provides no evidence for Drosophila courtship song rhythms". Proceedings of the National Academy of Sciences 114, no. 37 (29.08.2017): 9978–83. http://dx.doi.org/10.1073/pnas.1707471114.

Annotation:
From 1980 to 1992, a series of influential papers reported on the discovery, genetics, and evolution of a periodic cycling of the interval between Drosophila male courtship song pulses. The molecular mechanisms underlying this periodicity were never described. To reinitiate investigation of this phenomenon, we previously performed automated segmentation of songs but failed to detect the proposed rhythm [Arthur BJ, et al. (2013) BMC Biol 11:11; Stern DL (2014) BMC Biol 12:38]. Kyriacou et al. [Kyriacou CP, et al. (2017) Proc Natl Acad Sci USA 114:1970–1975] report that we failed to detect song rhythms because (i) our flies did not sing enough and (ii) our segmenter did not identify many of the song pulses. Kyriacou et al. manually annotated a subset of our recordings and reported that two strains displayed rhythms with genotype-specific periodicity, in agreement with their original reports. We cannot replicate this finding and show that the manually annotated data, the original automatically segmented data, and a new dataset provide no evidence for either the existence of song rhythms or song periodicity differences between genotypes. Furthermore, we have reexamined our methods and analysis and find that our automated segmentation method was not biased to prevent detection of putative song periodicity. We conclude that there is no evidence for the existence of Drosophila courtship song rhythms.
14

Damian-Gaillard, Béatrice. "Sexuality Stereotypes and Fantasies of Consumers of French Male Heterosexual Pornographic Media." Sur le journalisme, About journalism, Sobre jornalismo 8, no. 2 (20.12.2019): 46–61. http://dx.doi.org/10.25200/slj.v8.n2.2019.401.

Annotation:
Based on a survey of the French male heterosexual pornographic press, this paper addresses the class and gender stereotypes writers and editors have of their readership and their sexuality. The analysis is based on publishing company data, editorial guidelines for publications representative of the sector and individual interviews with four managers and nine editors. It identifies the commercial, professional and social issues underlying the construction and deployment of these stereotypes. The study is divided into two parts. The first sheds light on the heterogeneity of existing editorial positions that result from strategies such as market segmentation, content specialization and targeting of readers (as practiced in the conventional magazine press). The second part analyzes the stereotypes advanced by editors-in-chief and journalists about the sexualities of their readers, and their own professional and social challenges, demonstrating how these stereotypes are part of discursive strategies to counteract the social abasement of their career due to the social illegitimacy of pornography. It also examines how these stereotypes influence pornographic discourses and their modes of production, and more broadly how the agents of this press confine their readers to a certain category of masculinity and sexuality, exercising a type of domination based on class and gender stereotypes.
15

Tanaka, Michio, Hiroki Matsubara, and Takashi Morie. "Human Detection and Face Recognition Using 3D Structure of Head and Face Surfaces Detected by RGB-D Sensor". Journal of Robotics and Mechatronics 27, no. 6 (18.12.2015): 691–97. http://dx.doi.org/10.20965/jrm.2015.p0691.

Annotation:
[Figure: summary of proposed method] Home service robots must possess the ability to communicate with humans, for which human detection and recognition methods are particularly important. This paper proposes methods for human detection and face recognition that are based on image processing and are suitable for home service robots. For the human detection method, we combine the method proposed by Xia et al. based on the use of head shape with the results of region segmentation based on depth information, and use the positional relations of the detected points. We obtained a detection rate of 98.1% when the method was evaluated for various postures and facing directions. We demonstrate the robustness of the proposed method against postural changes such as stretching the arms, resting the chin on one's hands, and drinking beverages. For the human recognition method, we combine the elastic bunch graph matching method proposed by Wiskott et al. with Face Tracking SDK to extract facial feature points, and use the 3D information in the deformation computation; we obtained a recognition rate of 93.6% during evaluation.
16

da Silva, Ricardo Dutra, Rosane Minghim, and Helio Pedrini. "3D Edge Detection Based on Boolean Functions and Local Operators". International Journal of Image and Graphics 15, no. 01 (January 2015): 1550003. http://dx.doi.org/10.1142/s0219467815500035.

Annotation:
Edge detection is one of the most commonly used operations in image processing and computer vision areas. Edges correspond to the boundaries between regions in an image, which are useful for object segmentation and recognition tasks. This work presents a novel method for 3D edge detection based on Boolean functions and local operators, which is an extension of the 2D edge detector introduced by Vemis et al. [Signal Processing 45(2), 161–172 (1995)]. The proposed method is composed of two main steps. An adaptive binarization process is initially applied to blocks of the image and the resulting binary map is processed with a set of Boolean functions to identify edge points within the blocks. A global threshold, calculated to estimate image intensity variation, is then used to reduce false edges in the image blocks. The proposed method is compared to other 3D gradient filters: Canny, Monga–Deriche, Zucker–Hummel and Sobel operators. Experimental results demonstrate the effectiveness of the proposed technique when applied to several 3D synthetic and real data sets.
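
One of the baselines named above is the Sobel operator extended to 3D; a small sketch of a 3D Sobel gradient-magnitude edge detector with SciPy on a synthetic volume:

```python
# 3D Sobel gradient magnitude with a simple global threshold.
import numpy as np
from scipy import ndimage

vol = np.zeros((64, 64, 64))
vol[16:48, 16:48, 16:48] = 1.0              # cube whose faces are the edges

grad = np.stack([ndimage.sobel(vol, axis=a) for a in range(3)])
mag = np.sqrt((grad ** 2).sum(axis=0))      # 3D gradient magnitude
edges = mag > 0.5 * mag.max()               # illustrative threshold
print(int(edges.sum()), "edge voxels")
```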
17

Zhang, Mingyu, Fei Gao, Wuping Yang, and Haoran Zhang. "Correction: Zhang et al. Wildlife Object Detection Method Applying Segmentation Gradient Flow and Feature Dimensionality Reduction. Electronics 2023, 12, 377". Electronics 12, no. 8 (19.04.2023): 1923. http://dx.doi.org/10.3390/electronics12081923.

18

Park, Jeonghyuk, Kyungdoc Kim, Hong-Seok Lee, Guhyun Kang, Kyu-Hwan Jung, Ahmed Omar Kaseb, and Sunyoung S. Lee. "Impact of cell density in lymphocyte-rich areas in the tumor microenvironment on prognosis and gene expression landscape in hepatocellular carcinoma." Journal of Clinical Oncology 39, no. 15_suppl (20.05.2021): 4107. http://dx.doi.org/10.1200/jco.2021.39.15_suppl.4107.

Annotation:
4107 Background: Cellular and non-cellular components in the tumor microenvironment (TME) impact prognosis and treatment in hepatocellular carcinoma (HCC). We previously reported a deep learning-based model of tissue segmentation in pathology images, showing an impact of stromal and malignant cell distribution, with respect to gene expression, on survival and molecular subtypes of cancer [1]. Methods: Clinical outcomes data, mRNA-seq, and histopathology images of 351 patients (pts) with HCC were obtained from TCGA. We established a combined algorithm of two deep learning models: a ResNet-based model for tissue segmentation and a YOLO-based model for cell detection, using published data sets [2, 3]. The tissue segmentation model defines six segments having the following predominant components: malignant cells, lymphocytes, adipose, stromal, mucinous, and normal liver tissues. The cell detection model calculates the density and mapping of cells in the TME. The immune landscape was analyzed via mRNA-seq of 770 genes enriched in the TME. This comprehensive analysis defined parameters including the cell density per lymphocyte-segmented area (CDpLA), representing the density of lymphocytes in a lymphocyte-rich area of the TME. Results: Pts were clustered into two groups with high and low CDpLA (212 and 139 pts). High CDpLA was defined as lymphocyte density > 0.5 (13,618 cells/mm2 lymphocyte area). Pts with high CDpLA showed significantly better median overall survival (OS) than those with low CDpLA (82.9 vs 37.8 months, p < 0.005). The hazard ratio of CDpLA for OS was 0.36 (95% CI 0.18-0.72, p < 0.005). Among pts with available clinical data, 29 had hepatitis C (HCV) and 21 had hepatitis B (HBV). Of the 29 HCV pts, 23 had high and 6 had low CDpLA; of the 21 HBV pts, 17 had high and 4 had low CDpLA. Fifty-three pts had a history of alcohol abuse, of whom 26 had high and 27 had low CDpLA. Of note, high CDpLA was associated with significantly better OS in HCV pts (61.7 vs 19.9 months, p < 0.005). Genomic analysis with mRNA-seq shows that HCV pts with high CDpLA have lower expression of genes related to myeloid-derived suppressor cells (TRANK1, MEGF9, HS3ST2, GPNMB) and higher expression of genes related to immune activation (PLD4, IL3RA, TNFRSF4). Conclusions: A deep learning-assisted model of TME segmentation and cell detection showed an impact on survival of CDpLA, rather than of the total number of lymphocytes in the TME. HCV pts are more likely to have high CDpLA, and CDpLA was a strong prognostic indicator in HCV pts. Pts with high CDpLA are those with elevated expression of genes related to immune activation and decreased expression of immunosuppressive genes. Retrospective and prospective analysis of clinical response to immunotherapy and tyrosine kinase inhibitors is underway. [1] Kim et al. Cancer Res 2020 (80) (16 Supp) 2631 [2] Kather et al. PLoS Med 2019 16(1): e1002730 [3] Gamper et al. arXiv 2020:10778
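
The survival comparison above is a Cox proportional-hazards analysis of overall survival against the high/low CDpLA grouping. A minimal sketch with the lifelines library on synthetic data (lifelines is an assumed library choice, not necessarily the authors'):

```python
# Cox proportional-hazards fit of overall survival against a binary
# high/low CDpLA indicator; the data are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 351
high_cdpla = rng.integers(0, 2, n)
time = rng.exponential(scale=np.where(high_cdpla == 1, 80.0, 38.0))  # months
event = (rng.random(n) < 0.6).astype(int)   # 1 = death observed, 0 = censored

df = pd.DataFrame({"time": time, "event": event, "high_cdpla": high_cdpla})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)                   # expect a hazard ratio < 1 for high_cdpla
```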
19

Ngcofe, Luncedo, and Nale Mudau. "An investigation of geographic object based image analysis (GEOBIA) for human settlement detection in South Africa". Abstracts of the ICA 1 (15.07.2019): 1. http://dx.doi.org/10.5194/ica-abs-1-269-2019.

Annotation:
Abstract. Changes to the landscape, both natural and human-induced, occur constantly. One such change is human settlement expansion. The ability to map human settlements is vital for a variety of studies, including urban development planning and management. For this study, human settlement detection is essential for topographic map updating. Newly identified human settlements also serve as change-detection indicators for the further updating of other features represented on the topographic map (such as roads). Semi-automated human settlement detection was conducted using the geographic object-based image analysis (GEOBIA) method on 2012 SPOT 5 imagery of the KwaZulu-Natal Province of South Africa. GEOBIA is a relatively new development in the processing and analysis of remotely sensed imagery. It involves partitioning an image into discrete entities or segments from which meaningful image objects, based on spatial and spectral attributes, can be generated. Image segmentation was attained through a multiresolution segmentation model implemented in the eCognition software. This entailed evaluating different segmentation parameters in order to obtain suitable objects of interest. The next step was to determine appropriate variables obtained from image segmentation to classify the image, including layer values, geometry, position, texture, hierarchy, and thematic attributes. The layer-value option entails spectral statistics such as the mean value and mean brightness of the image reflectance bands, together with the capability of further applying band-ratio combinations. Several alternatives are also applicable under the texture and geometry options (such as length/width under extent, and asymmetry under shape properties). Using the grey-level co-occurrence matrix (GLCM), the study explored the contrast texture measurement developed by Haralick et al. (1973). Other assessed variables were mean brightness, density, length/width, and band ratios. The last step of GEOBIA was to determine suitable variables for the rule-set-based classification, which resulted in a 70.7% overall accuracy.

These results were further compared to the existing South African global human settlement layer (SA_GHSL) for the same study area, which also used the same 2012 SPOT 5 imagery; the SA_GHSL had an overall accuracy of 60%. GEOBIA presents an opportunity to target areas of new settlement development semi-automatically, more efficiently, and in a consistent, repeatable manner, helping topographic update analysts attend to areas of new settlement development at an enhanced rate. However, the spectral variability of rooftops proved to be the most challenging obstacle for both semi-automated settlement detection methods.
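
The Haralick contrast measure mentioned above is derived from a grey-level co-occurrence matrix; a small sketch with scikit-image on a synthetic 8-bit patch:

```python
# GLCM contrast (Haralick et al., 1973) on a synthetic image patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
glcm = graycomatrix(patch, distances=[1], angles=[0],
                    levels=256, symmetric=True, normed=True)
print(graycoprops(glcm, "contrast"))        # contrast texture measure
```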
20

Eckstein, F., A. Wisser, F. Roemer, F. Berenbaum, J. Kemnitz, G. Duda, S. Maschek, and W. Wirth. "POS0403 SENSITIVITY OF AUTOMATED, U-NET-BASED SEGMENTATION TO LAMINAR CARTILAGE TRANSVERSE RELAXATION TIME (T2) IN KNEES WITH DIFFERENCES IN CONTRALATERAL OSTEOARTHRITIS STATUS – DATA FROM THE OA-BIO CONSORTIUM". Annals of the Rheumatic Diseases 82, Suppl 1 (30.05.2023): 457.1–457. http://dx.doi.org/10.1136/annrheumdis-2023-eular.3125.

Annotation:
Background: Radiographically normal knees with contralateral (CL) radiographic joint space narrowing (JSN) are at elevated risk of incident radiographic osteoarthritis (ROA). We previously observed increased superficial femorotibial cartilage layer transverse relaxation time (T2) on magnetic resonance images (MRI) of 39 KLG0 knees (0=normal) with advanced ROA in the contralateral knee (CL JSN), compared with 39 (1:1-matched) KLG0 knees without evidence of CL ROA (bilateral KLG0) [1]. These results suggest that cartilage matrix degeneration occurs in radiographically normal knees with CL JSN, and can be detected in vivo using MRI [1]. Application to larger cohorts, however, will benefit from fully automated image segmentation, as manual segmentation is a labor-intensive process. Objectives: To evaluate the performance of U-Net-based cartilage segmentation, using an artificial intelligence (AI), i.e. convolutional neural network (CNN), approach, combined with fully automated detection of bony landmarks and of the specific MRI slices requiring cartilage segmentation. The U-Net results were compared with those from manual segmentation, and the fully automated technology was applied to a larger (extended) control group. Methods: U-Nets were trained from manual, quality-controlled cartilage segmentations in sagittal MESE images of the Osteoarthritis Initiative healthy reference cohort (n=92; HRC), one on the medial (MFTC) and one on the lateral femorotibial compartment (LFTC). All 7 echoes (10-70 ms) were used. A third U-Net was trained from manual, quality-controlled bone segmentations in 60 OAI HRC knees and used for automated detection of the weight-bearing femoral region of interest. The automated bone segmentation was registered to an atlas comprising both bone and cartilage segmentation, to identify the slices required for cartilage segmentation. Automated post-processing was employed to correct obvious segmentation errors. This pipeline was first applied to n=39 dataset pairs (KLG0 with CL JSN vs. bilateral KLG0 knees) [1]. It was then applied to n=642 bilateral KLG0 knees, to extend the limited paired case-control design (n=39) to a much larger control cohort. The agreement of the U-Net-based vs. manual segmentation was evaluated using the Dice similarity coefficient (DSC). Actual differences in cartilage T2 between a) 39 case vs. 39 control knees were compared between manual [1] vs. fully automated segmentation, and b) 39 case vs. 642 control knees (U-Net). Cohen's D was used as a measure of effect size. Results: The DSC for the 2x39 knees with manual segmentations available ranged from 0.83±0.05 to 0.87±0.04 across the medial/lateral tibia and femur. When applied to the 39 pairs of case vs. control knees, fully automated segmentation identified differences in superficial layer T2, with an apparently larger effect size (Cohen's D MFTC/LFTC: 0.62/0.50) than that obtained from manual segmentation (0.46/0.48). Further, the U-Net was more sensitive to deep layer T2 differences (MFTC/LFTC: 0.36/0.50) than manual segmentation (0.25/0.35). When comparing the case knees to the 642 control knees, the fully automated segmentation showed similar between-group differences (0.57/0.48 for superficial T2). Conclusion: Fully automated segmentation of the femorotibial cartilage, in combination with automated detection of the relevant slices and femoral region of interest, showed high agreement with manual segmentation. The pipeline was able to reproduce differences in laminar T2 observed with manual segmentations [1], indicating early superficial cartilage matrix degeneration in radiographically normal knees with advanced contralateral radiographic KOA (JSN). The proposed fully automated analysis pipeline was readily expanded to a much larger cohort and thus represents a promising tool for future clinical studies on early cartilage change. References: [1] Wirth W, et al. Osteoarthritis Cartilage. 2019; 27:1663. Acknowledgements: This project has received funding from the Eurostars-2 joint programme with co-funding from the European Union Horizon 2020 research and innovation programme. The local funding agency supporting this work in Germany is the project management agency DLR, which acts on behalf of the Federal Ministry of Education and Research, BMBF (OA-BIO Eurostars-2 project - E! 114932). Disclosure of Interests: Felix Eckstein Shareholder of: Chondrometrics GmbH, Grant/research support from: European Union: OA-BIO Eurostars-2 project (E! 114932), Employee of: Chondrometrics GmbH, Anna Wisser Grant/research support from: European Union: OA-BIO Eurostars-2 project (E! 114932), Employee of: Chondrometrics GmbH, Frank Roemer: None declared, Francis Berenbaum Shareholder of: 4Moving Biotech, Grant/research support from: European Union: OA-BIO Eurostars-2 project (E! 114932), Employee of: 4Moving Biotech, Jana Kemnitz Employee of: Chondrometrics GmbH, Georg Duda: None declared, Susanne Maschek Shareholder of: Chondrometrics GmbH, Grant/research support from: European Union: OA-BIO Eurostars-2 project (E! 114932), Employee of: Chondrometrics GmbH, Wolfgang Wirth Shareholder of: Chondrometrics GmbH, Grant/research support from: European Union: OA-BIO Eurostars-2 project (E! 114932), Employee of: Chondrometrics GmbH.
21

Chen, Yuhong, Weilong Peng, Keke Tang, Asad Khan, Guodong Wei, and Meie Fang. "PyraPVConv: Efficient 3D Point Cloud Perception with Pyramid Voxel Convolution and Sharable Attention". Computational Intelligence and Neuroscience 2022 (13.05.2022): 1–9. http://dx.doi.org/10.1155/2022/2286818.

Annotation:
Designing efficient deep learning models for 3D point cloud perception is becoming a major research direction. Point-voxel convolution (PVConv; Liu et al., 2019) is a pioneering work on this topic. However, since it relies on a few layers of simple 3D convolutions and linear point-voxel feature fusion operations, it still has considerable room for improvement in performance. In this paper, we propose a novel pyramid point-voxel convolution (PyraPVConv) block with two key structural modifications to address the above issues. First, PyraPVConv uses a voxel pyramid module to fully extract voxel features in the manner of a feature pyramid, such that sufficient voxel features can be obtained efficiently. Second, a sharable attention module is utilized to capture compatible features between the multi-scale voxels in the pyramid and the point cloud for aggregation, as well as to reduce complexity via structure sharing. Extensive results on three point cloud perception tasks, i.e., indoor scene segmentation, object part segmentation and 3D object detection, validate that networks constructed by stacking PyraPVConv blocks are efficient in terms of both GPU memory consumption and computational complexity, and are superior to the state-of-the-art methods.
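
Point-voxel designs like the one above rasterize point features onto a regular grid before applying volumetric convolutions. A minimal numpy sketch of that voxelization step, with synthetic points and an illustrative grid resolution:

```python
# Scatter per-point features into a voxel grid (mean per occupied voxel).
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((2048, 3))              # xyz in the unit cube
feats = rng.random((2048, 16))              # per-point features

res = 32
idx = np.minimum((points * res).astype(int), res - 1)
flat = np.ravel_multi_index(idx.T, (res, res, res))

grid = np.zeros((res ** 3, 16))
count = np.zeros(res ** 3)
np.add.at(grid, flat, feats)                # scatter-add features into voxels
np.add.at(count, flat, 1)
grid[count > 0] /= count[count > 0, None]   # mean feature per occupied voxel
voxels = grid.reshape(res, res, res, 16)
print(voxels.shape)
```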
22

Gourmelon, Nora, Thorsten Seehaus, Matthias Braun, Andreas Maier, and Vincent Christlein. "Calving fronts and where to find them: a benchmark dataset and methodology for automatic glacier calving front extraction from synthetic aperture radar imagery". Earth System Science Data 14, no. 9 (22.09.2022): 4287–313. http://dx.doi.org/10.5194/essd-14-4287-2022.

Annotation:
Abstract. Exact information on the calving front positions of marine- or lake-terminating glaciers is a fundamental glacier variable for analyzing ongoing glacier change processes and assessing other variables like frontal ablation rates. In recent years, researchers started implementing algorithms that can automatically detect the calving fronts on satellite imagery. Most studies use optical images, as calving fronts are often easy to distinguish in these images due to the sufficient spatial resolution and the presence of different spectral bands, allowing the separation of ice features. However, detecting calving fronts on synthetic aperture radar (SAR) images is highly desirable, as SAR images can also be acquired during the polar night and are independent of weather conditions (e.g., cloud cover), facilitating year-round monitoring worldwide. In this paper, we present a benchmark dataset (Gourmelon et al., 2022b) of SAR images from multiple regions of the globe with corresponding manually defined labels providing information on the position of the calving front (https://doi.org/10.1594/PANGAEA.940950). With this dataset, different approaches for the detection of glacier calving fronts can be implemented, tested, and their performance fairly compared so that the most effective approach can be determined. The dataset consists of 681 samples, making it large enough to train deep learning segmentation models. It is the first dataset to provide long-term glacier calving front information from multi-mission data. As the dataset includes glaciers from Antarctica, Greenland, and Alaska, the wide applicability of models trained and tested on this dataset is ensured. The test set is independent of the training set so that the generalization capabilities of the models can be evaluated. We provide two sets of labels: one binary segmentation label to discern the calving front from the background, and one label for multi-class segmentation of different landscape classes. Unlike other calving front datasets, the presented dataset contains not only the labels but also the corresponding preprocessed and geo-referenced SAR images as PNG files. The ease of access to the dataset will allow scientists from other fields, such as data science, to contribute their expertise. With this benchmark dataset, we enable comparability between different front detection algorithms and improve the reproducibility of front detection studies. Moreover, we present one baseline model for each kind of label type. Both models are based on the U-Net, one of the most popular deep learning segmentation architectures. In the following two post-processing procedures, the segmentation results are converted into 1-pixel-wide front delineations. By providing both types of labels, both approaches can be used to address the problem. To assess the performance of different models, we suggest first reviewing the segmentation results using the recall, precision, F1 score, and the Jaccard index. Second, the front delineation can be evaluated by calculating the mean distance error to the labeled front. The presented vanilla models provide a baseline of 150 m ± 24 m mean distance error for the Mapple Glacier in Antarctica and 840 m ± 84 m for the Columbia Glacier in Alaska, which has a more complex calving front, consisting of multiple sections, compared with a laterally well constrained, single calving front of Mapple Glacier.
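
The mean distance error suggested above measures how far the predicted front delineation lies from the labeled one. A small sketch of a symmetric version on toy pixel coordinates:

```python
# Symmetric mean distance error between predicted and labeled front pixels;
# multiply by the ground sampling distance to obtain metres.
import numpy as np
from scipy.spatial.distance import cdist

pred = np.array([[10, y] for y in range(50)])    # predicted front (row, col)
label = np.array([[12, y] for y in range(50)])   # labeled front

d = cdist(pred, label)                           # pairwise pixel distances
mde = 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
print(mde)                                       # in pixels
```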
23

Jiang, Hou, Ling Yao, Ning Lu, Jun Qin, Tang Liu, Yujun Liu, and Chenghu Zhou. "Multi-resolution dataset for photovoltaic panel segmentation from satellite and aerial imagery". Earth System Science Data 13, no. 11 (19.11.2021): 5389–401. http://dx.doi.org/10.5194/essd-13-5389-2021.

Annotation:
Abstract. In the context of global carbon emission reduction, solar photovoltaic (PV) technology is experiencing rapid development. Accurate localized PV information, including location and size, is the basis for PV regulation and potential assessment of the energy sector. Automatic information extraction based on deep learning requires high-quality labeled samples that should be collected at multiple spatial resolutions and under different backgrounds due to the diversity and variable scale of PVs. We established a PV dataset using satellite and aerial images with spatial resolutions of 0.8, 0.3, and 0.1 m, which focus on concentrated PVs, distributed ground PVs, and fine-grained rooftop PVs, respectively. The dataset contains 3716 samples of PVs installed on shrub land, grassland, cropland, saline–alkali land, and water surfaces, as well as flat concrete, steel tile, and brick roofs. The dataset is used to examine the model performance of different deep networks on PV segmentation. On average, an intersection over union (IoU) greater than 85 % is achieved. In addition, our experiments show that direct cross application between samples with different resolutions is not feasible and that fine-tuning of the pre-trained deep networks using target samples is necessary. The dataset can support more work on PV technology for greater value, such as developing a PV detection algorithm, simulating PV conversion efficiency, and estimating regional PV potential. The dataset is available from Zenodo on the following website: https://doi.org/10.5281/zenodo.5171712 (Jiang et al., 2021).
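
The intersection over union reported above is the standard overlap metric for segmentation masks; a minimal numpy sketch on synthetic PV masks:

```python
# Intersection over union of binary PV masks (synthetic here).
import numpy as np

def iou(pred, gt, eps=1e-7):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + eps)

rng = np.random.default_rng(0)
gt = rng.random((256, 256)) < 0.1             # toy ground-truth mask
pred = gt ^ (rng.random((256, 256)) < 0.02)   # ground truth with 2% flipped pixels
print(iou(pred, gt))
```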
24

Saeed, Muhammad Usman, Ghulam Ali, Wang Bin, Sultan H. Almotiri, Mohammed A. AlGhamdi, Arfan Ali Nagra, Khalid Masood, and Riaz ul Amin. "RMU-Net: A Novel Residual Mobile U-Net Model for Brain Tumor Segmentation from MR Images". Electronics 10, no. 16 (14.08.2021): 1962. http://dx.doi.org/10.3390/electronics10161962.

Annotation:
The most aggressive form of brain tumor is the glioma, which, when high grade, leads to a short life expectancy. The early detection of glioma is important to save the lives of patients. MRI is a commonly used approach for brain tumor evaluation. However, the massive amount of data provided by MRI prevents manual segmentation in a reasonable time, restricting the use of accurate quantitative measurements in clinical practice. An automatic and reliable method is required that can segment tumors accurately. To achieve end-to-end brain tumor segmentation, a hybrid deep learning model, RMU-Net, is proposed. The architecture of MobileNetV2 is modified by adding residual blocks to learn in-depth features. This modified MobileNetV2 is used as the encoder in the proposed network, and the upsampling layers of U-Net are used as the decoder part. The proposed model has been validated on the BraTS 2020, BraTS 2019, and BraTS 2018 datasets. RMU-Net achieved Dice coefficient scores for WT, TC, and ET of 91.35%, 88.13%, and 83.26% on the BraTS 2020 dataset, 91.76%, 91.23%, and 83.19% on the BraTS 2019 dataset, and 90.80%, 86.75%, and 79.36% on the BraTS 2018 dataset, respectively. The proposed method outperforms previous methods at a lower computational cost and time.
25

Yadlapalli, Priyanka, and D. Bhavana. "Segmentation and Pre-processing of Interstitial Lung Disease using Deep Learning Model". Scalable Computing: Practice and Experience 23, no. 4 (24.12.2022): 403–20. http://dx.doi.org/10.12694/scpe.v23i4.2051.

Annotation:
Medical image processing involves using and examining 3D human body images, most frequently acquired with a computed tomography scanner, to diagnose disorders; it helps radiologists, engineers, and clinicians better comprehend the anatomy of specific patients or groups of patients. Due to recent advancements in deep learning techniques, medical image analysis is now a quickly expanding area of research. Interstitial lung disease is a chronic lung disease that worsens with time and causes lung scarring; the condition cannot be completely treated once the lungs have been damaged, but early detection aids in controlling the disease. The first methodology characterizes lung tissue using first-order statistics, grey-level co-occurrence matrices, run-length matrices, and fractal analysis, as suggested by Uppaluri et al. In the preprocessing step, patients' CT scans are presented using various color-map models for a better understanding of the dataset, and the patients' final Forced Vital Capacity and Confidence values are determined using a PyTorch model with a leaky ReLU activation function; these variables can be used to determine whether a person has the disease. Segmentation is a crucial stage in employing a computer-assisted diagnosis system to assess interstitial lung disease, and accurate segmentation of the abnormal lung is essential for a trustworthy computer-aided diagnosis. Using separate training, validation, and test sets, we propose an efficient deep learning model based on the U-Net architecture with DenseNet121 to segment lungs with interstitial lung disease. The proposed segmentation model distinguishes the exact lung region from the CT slice background. To train and evaluate the algorithm, 176 sparsely annotated computed tomography scans were utilized, and training was completed in a supervised, end-to-end manner. Contrary to current approaches, the suggested method yields accurate segmentation results without the requirement for re-initialization. We achieved an accuracy of 92.59 percent after training the proposed model on an Nvidia CUDA GPU.
26

Pirotti, F., C. Paterno, and M. Pividori. "APPLICATION OF TREE DETECTION METHODS OVER LIDAR DATA FOR FOREST VOLUME ESTIMATION". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (21.08.2020): 1055–60. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-1055-2020.

Annotation:
Abstract. Lidar (light detection and ranging) data are becoming more and more important in the analysis of the most relevant forest parameters. This study compares recent segmentation methods for single trees using the ALS (airborne laser scanning) point cloud and the CHM (canopy height model): the method of Li et al., developed in 2012, and the MultiCHM method, developed in 2015. The parameters analysed were the height and diameter of individual trees and the volume and density of the entire forest. The efficiency of each method was verified by comparing the estimated parameters with those measured in 30 test areas. To better identify the parameters useful for the correct calibration of the algorithms, the population was divided into three layers according to vertical structure and chronological class. From the comparison of the volumes obtained with the above methods and those calculated for the test areas, a tendency to over-segment emerges for the MultiCHM method, while the appropriately calibrated Li method corresponds better to reality. The F-score values for the volumes obtained with the Li method are between 0.52 and 0.69, while those obtained with the MultiCHM method are between 0.47 and 0.55. When compared with relascopic measures for each of the 48 parcels, mean absolute differences of ∼127 m3/ha and ∼141 m3/ha were found for Li2012 and MultiCHM, respectively.
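
The F-score above combines the precision and recall of matched tree detections; a tiny sketch with illustrative match counts:

```python
# Precision, recall and F-score from matched single-tree detections;
# the counts are illustrative, not the study's results.
def f_score(tp, fp, fn):
    precision = tp / (tp + fp)              # correct detections / all detections
    recall = tp / (tp + fn)                 # correct detections / reference trees
    return 2 * precision * recall / (precision + recall)

print(round(f_score(tp=42, fp=18, fn=15), 2))
```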
APA, Harvard, Vancouver, ISO and other citation styles
27

Schofield, Andrew J., and Timothy A. Yates. "Interactions between Orientation and Contrast Modulations Suggest Limited Cross-Cue Linkage". Perception 34, No. 7 (July 2005): 769–92. http://dx.doi.org/10.1068/p5294.

Full text of the source
Annotation:
Recent studies of texture segmentation and second-order vision have proposed very similar models for the detection of orientation modulation and contrast modulation (OM and CM). From the similarity of the models it is tempting to assume that the two cues might be processed by a single generalised texture mechanism; however, recent results (Kingdom et al., 2003, Visual Neuroscience 20, 65–76) have suggested that these cues are detected independently, or at least by a mechanism that is able to maintain an apparent independence between the cues. We tested new combinations of OM and CM and found that CM at 0.4 cycles deg⁻¹ facilitates the detection of OM at 0.2 cycles deg⁻¹ when the peaks of contrast align with the extremes of orientation. There is also some evidence of weak facilitation of CM by OM under the same conditions. Further, this facilitation can be predicted by filter-rectify-filter channels optimised for the detection of each cue, adding weight to the argument that texture cues are processed in a single generalised mechanism that nonetheless achieves cue independence or near-independence in many circumstances. We also found that the amount of suprathreshold masking produced by an orientation cue depends on the overall percept formed by that cue.
APA, Harvard, Vancouver, ISO and other citation styles
28

Ercan, Caner, Mairene Coto-Llerena, Salvatore Piscuoglio and Luigi M. Terracciano. "Abstract 453: Establishing quantitative image analysis methods for tumor microenvironment evaluation". Cancer Research 82, No. 12_Supplement (15.06.2022): 453. http://dx.doi.org/10.1158/1538-7445.am2022-453.

Full text of the source
Annotation:
Abstract Introduction: There is growing evidence supporting the role of the tumor microenvironment (TME) in the development and progression of hepatocellular carcinoma (HCC). However, the correlation between its composition and prognosis remains unclear. TME evaluation requires a combination of cell-type and spatial information, which can be obtained using immunohistochemistry (IHC) on patient-derived slides; IHC quantification, however, remains a challenge. Computational methods, such as artificial intelligence-based tools, may expedite the detection and classification of thousands of different cells, expanding our understanding of the TME. Here, we aim to develop an AI-based image analysis pipeline to define morphological and immunological characteristics of the HCC TME, as well as their relationship with clinicopathological features. Materials and Methods: We collected 98 HCC samples from liver resections. The TME composition of the tumors was evaluated and classified as inflamed, immune excluded, or immune desert. Tumor slides were stained by IHC with a panel of TME markers (CD3, CD8, FOXP3, TIGIT, RORgt, ICOS, Granzyme B, CD163, iNOS, PD-1, PD-L1). The slides were digitalised and the whole-slide images were used for quantification. The samples were split into training (80%) and test (20%) datasets and used to train convolutional neural network (CNN) models. For the quantification of immune cells, we trained two separate CNNs: cell detection and tumor-stroma segmentation. Cell nucleus instance segmentation was achieved using the StarDist package (Schmidt et al., 2018). We trained a model with our slides and also tested the pretrained model (2D_versatile_he) intended for H&E-stained images. Immune cells were classified using a random forest classifier in QuPath. Finally, we trained a CNN with a UNET architecture and a ResNet34 backbone, using the fastai deep learning library, for semantic segmentation of tumor tissue into parenchyma, stroma, and debris classes. Results: The accuracy of the pretrained StarDist model was limited to 72% on IHC slide images. We therefore trained a new cell nucleus instance segmentation StarDist model on our dataset, which reached 84% accuracy, 91.3% F1-score, and 92% true positive and 90.6% true negative rates on IHC slide images. The random forest classifier annotated immune cells at 98% accuracy. The tissue segmentation model classified tumor regions into parenchyma, stroma, and debris at 95.8% accuracy, 92.5% Dice, and 86.3% IoU. Conclusions: In this study we developed a pipeline implementing open-source solutions to quantify IHC slides. This semi-automated computational pathology workflow can provide robust information on TME composition, augmenting the discovery of tumour-specific TME features and paving the way for the discovery of novel prognostic and therapeutic targets. Citation Format: Caner Ercan, Mairene Coto-Llerena, Salvatore Piscuoglio, Luigi M. Terracciano. Establishing quantitative image analysis methods for tumor microenvironment evaluation [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 453.
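The StarDist step named in this abstract is typically invoked as below; the pretrained 2D_versatile_he model and the predict_instances call follow the library's documented usage, while the input file name is a hypothetical placeholder and the authors' retraining configuration is not reproduced.

```python
# Sketch of the nucleus instance-segmentation step described above: applying
# the pretrained StarDist '2D_versatile_he' model to a slide tile.
from stardist.models import StarDist2D
from csbdeep.utils import normalize
from skimage.io import imread

img = imread("ihc_tile.tif")  # hypothetical example tile
model = StarDist2D.from_pretrained("2D_versatile_he")

# predict_instances returns a label image (one integer id per nucleus)
# plus polygon details for each detected nucleus.
labels, details = model.predict_instances(normalize(img))
print(f"{labels.max()} nuclei detected")
```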
APA, Harvard, Vancouver, ISO and other citation styles
29

Paing, May Phu, Kazuhiko Hamamoto, Supan Tungjitkusolmun and Chuchart Pintavirooj. "Automatic Detection and Staging of Lung Tumors using Locational Features and Double-Staged Classifications". Applied Sciences 9, No. 11 (06.06.2019): 2329. http://dx.doi.org/10.3390/app9112329.

Full text of the source
Annotation:
Lung cancer is a life-threatening disease with the highest morbidity and mortality rates of any cancer worldwide. Clinical staging of lung cancer can significantly reduce the mortality rate, because effective treatment options strongly depend on the specific stage of the cancer. Unfortunately, manual staging remains a challenge due to the intensive effort required. This paper presents a computer-aided diagnosis (CAD) method for detecting and staging lung cancer from computed tomography (CT) images. The CAD works in three fundamental phases: segmentation, detection, and staging. In the first phase, lung anatomical structures are segmented from the input tomography scans using gray-level thresholding. In the second, tumor nodules inside the lungs are detected using features extracted from the segmented tumor candidates. In the last phase, the clinical stages of the detected tumors are determined from locational features. For accurate and robust predictions, the CAD applies a double-staged classification: the first stage detects tumors and the second assigns their stage. In both classification stages, five alternative classifiers, namely the Decision Tree (DT), K-nearest neighbor (KNN), Support Vector Machine (SVM), Ensemble Tree (ET), and Back Propagation Neural Network (BPNN), are applied and compared to ensure high classification performance. Average accuracies of 92.8% for detection and 90.6% for staging are achieved using the BPNN. Experimental findings reveal that the proposed CAD method compares favorably with previous methods; it is thus applicable as a clinical diagnostic tool for lung cancer.
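The first CAD phase, gray-level thresholding of lung structures, might look like the following sketch, which assumes Otsu's method as the thresholding rule; the paper does not prescribe this exact variant.

```python
# Rough sketch of gray-level thresholding for lung-field extraction from a CT
# slice, assuming Otsu's method. A real pipeline would add body-mask and
# morphological cleanup steps.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def segment_lungs(ct_slice):
    """Rough lung-field mask from a CT slice via gray-level thresholding."""
    t = threshold_otsu(ct_slice)
    air = ct_slice < t  # lungs are darker (air-filled) than soft tissue
    regions = label(air)
    # keep the two largest air regions as lung candidates
    props = sorted(regionprops(regions), key=lambda r: r.area, reverse=True)
    mask = np.zeros_like(air)
    for r in props[:2]:
        mask[regions == r.label] = True
    return mask
```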
APA, Harvard, Vancouver, ISO and other citation styles
30

Guiotte, F., M. B. Rao, S. Lefèvre, P. Tang and T. Corpetti. "RELATION NETWORK FOR FULL-WAVEFORMS LIDAR CLASSIFICATION". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (21.08.2020): 515–20. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-515-2020.

Full text of the source
Annotation:
Abstract. LiDAR data are widely used in various domains related to geosciences (flow, erosion, rock deformations, etc.), computer graphics (3D reconstruction), and earth observation (detection of trees, roads, buildings, etc.). Because of the unstructured nature of the 3D points and the cost of acquisition, LiDAR data processing remains challenging (few learning data, difficult spatial neighboring relationships, etc.). In practice, one can directly analyze the 3D points using feature extraction and then classify the points via machine learning techniques (Brodu, Lague, 2012, Niemeyer et al., 2014, Mallet et al., 2011). In addition, recent neural network developments have allowed precise point cloud segmentation, especially using the seminal PointNet network and its extensions (Qi et al., 2017a, Riegler et al., 2017). Other authors instead rasterize or voxelize the point cloud and use more conventional computer vision strategies to analyze structures (Lodha et al., 2006). In a recent work, we demonstrated that Digital Elevation Models (DEMs) are reductive of the complexity of the vertical component describing objects in urban environments (Guiotte et al., 2020). These results highlighted the necessity of preserving the 3D structure of the point cloud as long as possible in the processing. In this paper, we therefore rely on ortho-waveforms to compute a land cover map. Ortho-waveforms are computed directly from the waveforms in a regular 3D grid. This method provides volumes somewhat "similar" to hyperspectral data, where each pixel is associated with one ortho-waveform. We then exploit efficient neural networks adapted to the classification of hyperspectral data when few samples are available. Our results, obtained on the 2018 Data Fusion Contest (DFC) dataset, demonstrate the efficiency of the approach.
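The core idea of ortho-waveforms, accumulating LiDAR returns into a regular 3D grid so that each ground pixel carries a vertical pseudo-waveform, can be sketched as follows. Real ortho-waveforms are built from the full return waveform, so this intensity-binning version only approximates the concept, and the cell sizes are assumed values.

```python
# Sketch of rasterising LiDAR returns into a regular 3D grid, the structure
# underlying ortho-waveforms: grid[i, j, :] acts as a pseudo-waveform for
# ground pixel (i, j).
import numpy as np

def voxelize(points, intensity, cell=1.0, z_cell=0.15):
    """points: (N, 3) x, y, z coordinates; intensity: (N,) return amplitude."""
    mins = points.min(axis=0)
    idx = ((points - mins) / np.array([cell, cell, z_cell])).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1)
    # accumulate each return's intensity into its voxel
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), intensity)
    return grid
```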
APA, Harvard, Vancouver, ISO and other citation styles
31

Meena, Sansar Raj, Lorenzo Nava, Kushanav Bhuyan, Silvia Puliero, Lucas Pedrosa Soares, Helen Cristina Dias, Mario Floris and Filippo Catani. "HR-GLDD: a globally distributed dataset using generalized deep learning (DL) for rapid landslide mapping on high-resolution (HR) satellite imagery". Earth System Science Data 15, No. 7 (27.07.2023): 3283–98. http://dx.doi.org/10.5194/essd-15-3283-2023.

Full text of the source
Annotation:
Abstract. Multiple landslide events occur frequently across the world and have the potential to cause significant harm to both human life and property. Although a substantial amount of research has addressed the mapping of landslides using Earth observation (EO) data, several gaps and uncertainties remain in developing models that are operational at the global scale. The lack of a high-resolution, globally distributed, and event-diverse dataset for landslide segmentation poses a challenge in developing machine learning models that can accurately and robustly detect landslides in various regions, as the limited representation of landslide and background classes can result in poor generalization performance of the models. To address this issue, we present the High-Resolution Global landslide Detector Database (HR-GLDD), a high-resolution (HR) satellite dataset (PlanetScope, 3 m pixel resolution) for landslide mapping, composed of landslide instances from 10 different physiographical regions globally: South and South-East Asia, East Asia, South America, and Central America. The dataset contains five rainfall-triggered and five earthquake-triggered multiple landslide events that occurred in varying geomorphological and topographical regions, in the form of standardized image patches containing four PlanetScope image bands (red, green, blue, and NIR) and a binary mask for landslide detection. The HR-GLDD can be accessed through this link: https://doi.org/10.5281/zenodo.7189381 (Meena et al., 2022a, c). HR-GLDD is one of the first high-resolution satellite imagery datasets for landslide detection and can be useful for applications of artificial intelligence in landslide segmentation and detection studies. Five state-of-the-art deep learning models were used to test the transferability and robustness of the HR-GLDD. Moreover, three recent landslide events were used to test the performance and usability of the dataset for detecting newly occurring significant landslide events. The deep learning models showed similar results when testing the HR-GLDD at individual test sites, indicating the robustness of the dataset for such purposes. The HR-GLDD is open access and has the potential to calibrate and develop models to produce reliable inventories using high-resolution satellite imagery after the occurrence of new significant landslide events. The HR-GLDD will be updated regularly by integrating data from new landslide events.
APA, Harvard, Vancouver, ISO and other citation styles
32

Marshak, Charlie, Marc Simard and Michael Denbina. "Monitoring Forest Loss in ALOS/PALSAR Time-Series with Superpixels". Remote Sensing 11, No. 5 (07.03.2019): 556. http://dx.doi.org/10.3390/rs11050556.

Full text of the source
Annotation:
We present a flexible methodology to identify forest loss in synthetic aperture radar (SAR) L-band ALOS/PALSAR images. Instead of single pixel analysis, we generate spatial segments (i.e., superpixels) based on local image statistics to track homogeneous patches of forest across a time-series of ALOS/PALSAR images. Forest loss detection is performed using an ensemble of Support Vector Machines (SVMs) trained on local radar backscatter features derived from superpixels. This method is applied to time-series of ALOS-1 and ALOS-2 radar images over a boreal forest within the Laurentides Wildlife Reserve in Québec, Canada. We evaluate four spatial arrangements including (1) single pixels, (2) square grid cells, (3) superpixels based on segmentation of the radar images, and (4) superpixels derived from ancillary optical Landsat imagery. Detection of forest loss using superpixels outperforms single pixel and regular square grid cell approaches, especially when superpixels are generated from ancillary optical imagery. Results are validated with official Québec forestry data and Hansen et al. forest loss products. Our results indicate that this approach can be applied to monitor forest loss across large study areas using L-band radar instruments such as ALOS/PALSAR, particularly when combined with superpixels generated from ancillary optical data.
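A minimal sketch of the superpixel-plus-SVM idea follows: segment a backscatter feature stack into homogeneous patches, average features per patch, and classify patches as loss or no loss. The SLIC segmentation and SVM settings here are illustrative, not the paper's exact ensemble.

```python
# Sketch of superpixel-based forest-loss classification: homogeneous patches
# are formed from local image statistics, then classified from mean
# per-patch backscatter features. Settings are illustrative assumptions.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_features(image, n_segments=500):
    """image: (H, W, C) stack of backscatter features (e.g., two dates)."""
    segments = slic(image, n_segments=n_segments, compactness=10)
    feats = np.array([image[segments == s].mean(axis=0)
                      for s in np.unique(segments)])
    return segments, feats

# Training would use reference polygons rasterized to superpixel labels:
# clf = SVC(kernel="rbf").fit(feats_train, labels_train)
# loss_map = clf.predict(feats_test)
```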
APA, Harvard, Vancouver, ISO and other citation styles
33

Borges, Flavia, Sandra Ofori and Maura Marcucci. "Myocardial Injury after Noncardiac Surgery and Perioperative Atrial Fibrillation". Canadian Journal of General Internal Medicine 16, SP1 (26.03.2021): 18–26. http://dx.doi.org/10.22374/cjgim.v16isp1.530.

Full text of the source
Annotation:
One in 60 patients who undergo major noncardiac surgery dies within 30 days following surgery. The most common cause is cardiac complications, of which myocardial injury after noncardiac surgery (MINS) and perioperative atrial fibrillation (POAF) are common, affecting about 18% and 11% of adults, respectively, after noncardiac surgery. Patients who suffer MINS are at a higher risk of death compared to patients without MINS. Similarly, patients who develop POAF are at a higher risk of stroke and death compared to patients who do not. Most patients who suffer MINS are asymptomatic, and its diagnosis is not possible without routine troponin monitoring. Observational studies support the use of statins and aspirin in the management of patients with MINS. The only randomized controlled trial to date that has specifically addressed the management of MINS is the MANAGE trial, which demonstrated the efficacy and safety of intermediate-dose dabigatran in this population. There are no specific prediction models for POAF and no randomized controlled trial evidence to guide its specific management. Management guidelines in the acute period follow those for nonoperative atrial fibrillation. The role of long-term anticoagulation in this population is still uncertain and should be guided by a shared decision-making model with the patient, weighing the individual risk of stroke against the risk of bleeding. In this review, we present a case-based approach to the detection, prognosis, and management of MINS and POAF based on the existing evidence.
APA, Harvard, Vancouver, ISO and other citation styles
34

Lee, Sangho, Kyoung Hee Choi, Sohyun Hwang and Jihyang Kim. "#319 : Blastocyst Formation Prediction Based on Deep Learning Model from 3-Day Embryo Images in Time-Lapse Incubator Using Data Augmentation". Fertility & Reproduction 05, No. 04 (December 2023): 701. http://dx.doi.org/10.1142/s2661318223744132.

Full text of the source
Annotation:
Background and Aims: Current literature suggests that blastocyst ET (Embryo Transfer) at day 5 improves pregnancy outcomes compared with cleavage-stage ET at day 3. However, blastocyst ET poses potential challenges due to the risk of developmental arrest at the cleavage stage. Accurately predicting blastocyst formation therefore helps determine the optimal day for embryo culture. The aim of our study is to develop a deep learning-based classification model that predicts blastocyst formation from 3-day embryo images captured in Time-Lapse incubators. Method: A total of 200 embryo images were collected at 72 hours after ICSI (Intracytoplasmic Sperm Injection), with 100 forming blastocysts and 100 failing to do so. The images were annotated with RectLabel for object detection and segmentation, and data augmentation techniques were applied to enhance the dataset. A pre-trained ResNet model was used as the basis for our classifier, optimized with the Adam optimizer and BCEWithLogitsLoss as the loss function. Model performance was evaluated through classification accuracy and AUC score. Results: The CNN-based classifier demonstrated an accuracy of 87.5% and an AUC of 0.896 in predicting blastocyst formation, with sensitivity and specificity of 91.7% and 81.3%, respectively. Without data augmentation, the model degraded to an accuracy of 62.5% and an AUC of 0.631. Conclusion: Our study developed a classification model for predicting blastocyst formation from 3-day embryo images in a Time-Lapse incubator and demonstrated the effectiveness of data augmentation techniques on limited datasets. This deep learning-based model may contribute a beneficial tool to the IVF (In Vitro Fertilization) process.
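A minimal sketch of the described classifier, a pretrained ResNet fine-tuned with Adam and BCEWithLogitsLoss, is shown below; the specific ResNet depth and learning rate are assumptions not stated in the abstract.

```python
# Sketch: pretrained ResNet fine-tuned to predict blastocyst formation
# (binary outcome) from day-3 embryo images. Depth and lr are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)  # single logit: blastocyst yes/no

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, targets):
    """images: (B, 3, H, W); targets: (B, 1) in {0, 1}."""
    optimizer.zero_grad()
    loss = criterion(model(images), targets.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```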
APA, Harvard, Vancouver, ISO and other citation styles
35

Orthuber, E., and J. Avbelj. "3D BUILDING RECONSTRUCTION FROM LIDAR POINT CLOUDS BY ADAPTIVE DUAL CONTOURING". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3/W4 (11.03.2015): 157–64. http://dx.doi.org/10.5194/isprsannals-ii-3-w4-157-2015.

Full text of the source
Annotation:
This paper presents a novel workflow for data-driven building reconstruction from Light Detection and Ranging (LiDAR) point clouds. The method comprises building extraction, a detailed roof segmentation using region growing with adaptive thresholds, segment boundary creation, and a structural 3D building reconstruction approach using adaptive 2.5D Dual Contouring. First, a 2D-grid is overlain on the segmented point cloud. Second, in each grid cell 3D vertices of the building model are estimated from the corresponding LiDAR points. Then, the number of 3D vertices is reduced in a quad-tree collapsing procedure, and the remaining vertices are connected according to their adjacency in the grid. Roof segments are represented by a Triangular Irregular Network (TIN) and are connected to each other by common vertices or - at height discrepancies - by vertical walls. Resulting 3D building models show a very high accuracy and level of detail, including roof superstructures such as dormers. The workflow is tested and evaluated for two data sets, using the evaluation method and test data of the “ISPRS Test Project on Urban Classification and 3D Building Reconstruction” (Rottensteiner et al., 2012). Results show that the proposed method is comparable with the state of the art approaches, and outperforms them regarding undersegmentation and completeness of the scene reconstruction.
APA, Harvard, Vancouver, ISO and other citation styles
36

Xu, Xiaoping, Meirong Ji, Bobin Chen and Guowei Lin. "Analysis on Characteristics of Dysplasia in 345 Patients with Myelodysplastic Syndrome". Blood 112, No. 11 (16.11.2008): 5100. http://dx.doi.org/10.1182/blood.v112.11.5100.5100.

Full text of the source
Annotation:
Abstract Objective: To investigate the characteristics of dysplasia in myelodysplastic syndrome (MDS). Methods: We collected 716 samples from adult patients with abnormal routine blood results of unclear cause between July 4, 2003 and March 14, 2007. Using the WHO MDS classification as the gold standard, all cases were assessed by cytomorphological observation, cytochemical staining, bone marrow pathology, cytogenetics, flow cytometry, etc. Bone marrow cytological study of abnormal hematopoietic cells has diagnostic value for distinguishing clonal from non-clonal diseases and for assessing sensitivity and specificity. Results: Among the varied and complicated dysplasias of hematopoietic cells, the following characteristics can serve as the main basis of cytomorphological diagnosis: granular Auer bodies, micronuclei (MN), or nuclear budding; erythroid nuclear budding; megakaryocytes present in peripheral blood; myeloblasts or prorubricytes in peripheral blood; ringed sideroblasts >1%. The subordinate basis of cytomorphological diagnosis was as follows: granular pseudo Pelger-Huët anomaly, abnormal nuclear segmentation, unsynchronized nuclear development, ring-shaped nuclei, and aggregation of nuclear chromatin; erythroid multinuclearity, odd nuclei, mother-daughter nuclei, nuclear fragmentation, vacuoles, and anisocytosis; micromegakaryocytes. Conclusion: Cytomorphology is the basis for the diagnosis of MDS; however, it has certain limits. In particular, cytomorphological changes are not specific for early MDS, so other detection methods must be combined.
APA, Harvard, Vancouver, ISO and other citation styles
37

Schmidhuber, Jürgen. "Learning Factorial Codes by Predictability Minimization". Neural Computation 4, No. 6 (November 1992): 863–79. http://dx.doi.org/10.1162/neco.1992.4.6.863.

Full text of the source
Annotation:
I propose a novel general principle for unsupervised learning of distributed nonredundant internal representations of input patterns. The principle is based on two opposing forces. For each representational unit there is an adaptive predictor, which tries to predict the unit from the remaining units. In turn, each unit tries to react to the environment such that it minimizes its predictability. This encourages each unit to filter "abstract concepts" out of the environmental input such that these concepts are statistically independent of those on which the other units focus. I discuss various simple yet potentially powerful implementations of the principle that aim at finding binary factorial codes (Barlow et al. 1989), i.e., codes where the probability of the occurrence of a particular input is simply the product of the probabilities of the corresponding code symbols. Such codes are potentially relevant for (1) segmentation tasks, (2) speeding up supervised learning, and (3) novelty detection. Methods for finding factorial codes automatically implement Occam's razor for finding codes using a minimal number of units. Unlike previous methods the novel principle has a potential for removing not only linear but also nonlinear output redundancy. Illustrative experiments show that algorithms based on the principle of predictability minimization are practically feasible. The final part of this paper describes an entirely local algorithm that has a potential for learning unique representations of extended input sequences.
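A toy sketch of the principle follows, assuming a small encoder and one linear predictor per code unit: the predictors minimize the mean squared prediction error while the encoder is updated to maximize it. Sizes and the alternating schedule are illustrative, not taken from the paper.

```python
# Toy sketch of predictability minimization: each predictor guesses one code
# unit from the others (minimizing MSE), while the encoder makes its code
# units unpredictable (maximizing the same MSE).
import torch
import torch.nn as nn

n_in, n_code = 16, 4
encoder = nn.Sequential(nn.Linear(n_in, 32), nn.Tanh(),
                        nn.Linear(32, n_code), nn.Sigmoid())
# one predictor per code unit, each seeing the other n_code - 1 units
predictors = nn.ModuleList([nn.Linear(n_code - 1, 1) for _ in range(n_code)])

opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_pred = torch.optim.Adam(predictors.parameters(), lr=1e-3)

def prediction_error(code):
    errs = []
    for i, p in enumerate(predictors):
        others = torch.cat([code[:, :i], code[:, i + 1:]], dim=1)
        errs.append(((p(others).squeeze(1) - code[:, i]) ** 2).mean())
    return torch.stack(errs).sum()

def step(x):
    # predictors minimize the error on a frozen code...
    opt_pred.zero_grad()
    prediction_error(encoder(x).detach()).backward()
    opt_pred.step()
    # ...then the encoder maximizes it (note the minus sign)
    opt_enc.zero_grad()
    (-prediction_error(encoder(x))).backward()
    opt_enc.step()
```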
APA, Harvard, Vancouver, ISO and other citation styles
38

NUNES, JEAN-CLAUDE, and ÉRIC DELÉCHELLE. "EMPIRICAL MODE DECOMPOSITION: APPLICATIONS ON SIGNAL AND IMAGE PROCESSING". Advances in Adaptive Data Analysis 01, No. 01 (January 2009): 125–75. http://dx.doi.org/10.1142/s1793536909000059.

Full text of the source
Annotation:
In this paper, we present recent work on data analysis and synthesis based on Empirical Mode Decomposition (EMD). First, we describe a direct 2D extension of the original Huang EMD algorithm, with applications to texture analysis and fractional Brownian motion synthesis. Second, an analytical version of EMD based on a PDE in 1D space is presented. We propose a 2D extension of the so-called "sifting process" used in the original Huang EMD. The 2D sifting process is performed in two steps: extrema detection (by neighboring window or morphological operators) and surface interpolation by splines (thin-plate splines or multigrid B-splines). We propose a multiscale segmentation approach using the zero-crossings of each 2D intrinsic mode function (IMF) obtained by 2D-EMD. We apply the Hilbert–Huang transform (which consists of two parts: (a) empirical mode decomposition, and (b) Hilbert spectral analysis) to texture analysis. We analyze each 2D IMF obtained by 2D-EMD by studying local properties (amplitude, phase, isotropy, and orientation) extracted from its monogenic signal. The monogenic signal proposed by Felsberg et al. is a 2D generalization of the analytic signal, in which the Riesz transform replaces the Hilbert transform. These local properties are obtained by the structure multivector as proposed by Felsberg and Sommer. We present numerical simulations of fractional Brownian textures. Recent works published by Flandrin et al. report that, in the case of fractional Gaussian noise (fGn), EMD acts essentially as a dyadic filter bank comparable to wavelet decompositions. Moreover, in the context of fGn identification, Flandrin et al. show that variance progression across IMFs is related to the Hurst exponent H through a scaling law. Starting from these results, we propose an algorithm to generate fGn and fractional Brownian motion (fBm) of Hurst exponent H from the IMFs obtained from EMD of white noise, i.e., ordinary Gaussian noise (fGn with H = 1/2). Deléchelle et al. proposed an analytical approach (formulated as a partial differential equation (PDE)) to the sifting process; this PDE-based approach is applied to signals and behaves similarly to Huang's EMD.
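One pass of the 1D sifting process described here can be sketched as follows, with cubic-spline envelopes through the extrema; stopping criteria and boundary handling are simplified relative to any full EMD implementation.

```python
# Bare-bones sketch of one EMD sifting pass in 1D: spline envelopes through
# the extrema are computed and their mean is subtracted from the signal.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(t, x):
    maxi = argrelextrema(x, np.greater)[0]
    mini = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[maxi], x[maxi])(t)  # upper envelope
    lower = CubicSpline(t[mini], x[mini])(t)  # lower envelope
    return x - (upper + lower) / 2.0          # candidate IMF detail

t = np.linspace(0, 1, 1000)
x = np.sin(40 * t) + 0.5 * np.sin(7 * t)
h = sift_once(t, x)  # in practice, iterate until the IMF criteria are met
```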
APA, Harvard, Vancouver, ISO and other citation styles
39

Bunnell, Arianna, Dustin Valdez, Fredrik Strand, Yannik Glaser, Peter Sadowski and John A. Shepherd. "Abstract 3449: Is AI-enhanced breast ultrasound ready for breast cancer screening in low-resource environments? A systematic review". Cancer Research 84, No. 6_Supplement (22.03.2024): 3449. http://dx.doi.org/10.1158/1538-7445.am2024-3449.

Full text of the source
Annotation:
Abstract Purpose. Screening mammography is unavailable in many low-resource areas. We ask if the state-of-the-art in artificial intelligence (AI)-enhanced breast ultrasound (BUS) is sufficiently accurate to be used for primary breast cancer screening in low-resource regions. Background. Since the 1980s, high-income countries have implemented mammographic screening programs, leading to breast cancer mortality reduction in screened women [1]. Mammography is unavailable in many low-resource regions, such as the USAPI. Furthermore, travel difficulties and lack of radiologists hinder implementation. AI combined with portable BUS may address limitations of the high-income paradigm. In this systematic review, we ask if AI-enhanced BUS can detect/segment lesions (Objective 1) and classify lesions as cancerous (Objective 2). Methods. Two reviewers independently assessed articles from 1/1/2016 to 8/6/2023 from PubMed, Google Scholar, and citation searching. Studies which report on AI development and report performance on a patient-wise, held-out test set met the inclusion criteria. Studies were characterized by AI task and clinical application time. Dataset composition and performance were examined via narrative data synthesis. QUADAS-2 bias assessment was performed using criteria for each AI task. Success in (2) is defined by meeting minimum screening performance guidelines [2,3]. Results. PubMed yielded 281 studies, Google Scholar yielded 225 studies, and a manual citation search yielded 41 studies. From 382 unique full texts evaluated, 52 articles met all inclusion criteria: 3 frame selection, 2 real-time detection, 2 combination, 14 segmentation-only, and 31 classification-only. Lesion segmentation-only models achieved a 90th percentile Dice similarity coefficient of 0.913 on generally small datasets. The best evidence for lesion cancer classification reported 0.976 area under the curve. All studies faced elevated bias and applicability concerns under QUADAS-2. Conclusion. Reported performance for (1) is insufficient to introduce AI-enhanced BUS for breast cancer screening. Evidence supporting AI-enhanced BUS for (2) is dependent on few studies relying on internal datasets, limiting generalizability. Geographically diverse clinical trials are needed to confirm and improve robustness of performance of AI-enhanced BUS for (1) and (2). References. 1. Marmot MG, et al. The benefits and harms of breast cancer screening: an independent review. British journal of cancer. 2013;108(11):2205-2240. 2. Lehman CD, et al. National Performance Benchmarks for Modern Screening Digital Mammography: Update from the Breast Cancer Surveillance Consortium. Radiology. 2017;283(1):49-58. doi:10.1148/radiol.2016161174 3. Rosenberg RD, et al. Performance Benchmarks for Screening Mammography. Radiology. 2006;241(1):55-66. doi:10.1148/radiol.2411051504 Citation Format: Arianna Bunnell, Dustin Valdez, Fredrik Strand, Yannik Glaser, Peter Sadowski, John A. Shepherd. Is AI-enhanced breast ultrasound ready for breast cancer screening in low-resource environments? A systematic review [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2024; Part 1 (Regular Abstracts); 2024 Apr 5-10; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2024;84(6_Suppl):Abstract nr 3449.
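For reference, the Dice similarity coefficient quoted above can be computed from two binary masks as follows.

```python
# Dice similarity coefficient between a predicted and a reference binary mask.
import numpy as np

def dice(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```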
APA, Harvard, Vancouver, ISO and other citation styles
40

Navikas, Vytautas, Quentin Juppet, Samuel Aubert, Benjamin Pelz, Joanna Kowal and Diego Dupouy. "Abstract 4620: Automated multiplex immunofluorescence workflow to interrogate the cellular composition of the tumor microenvironment". Cancer Research 83, No. 7_Supplement (04.04.2023): 4620. http://dx.doi.org/10.1158/1538-7445.am2023-4620.

Full text of the source
Annotation:
Abstract Background: The tumor microenvironment (TME) is a constantly changing niche shaped by dynamic interactions between tumor cells and their surroundings. It has recently become evident that a better understanding of the TME is needed to target the disease more accurately (Binnewies et al., Nat Med 2018; Vitale et al., Nat Med 2021). Spatial hyperplex immunofluorescence enables the simultaneous visualization of cellular and non-cellular tissue components. However, the increasing number of biomarkers detected by spatial proteomics quickly escalates the complexity of images and renders their interpretation challenging. Extracting quantitative data from images remains difficult, as it requires extensive training and experience in data analysis, and inferring cellular interactions and dependencies from spatial assays is an important milestone for image analysis approaches. Here, we present an end-to-end solution that combines automated hyperplex execution with image data extraction to study the spatial composition of the TME. Methods: Sequential hyperplex immunofluorescence was performed on COMET™ on an FFPE Breast Tissue Microarray (TMA), generating images containing 42 layers: nuclear DAPI, tissue autofluorescence, and 39 single biomarker layers. All layers were delivered as a single ome.tiff file and processed using HORIZON™ image analysis software in combination with downstream statistical analysis. Tissue composition was interrogated with an in-house trained nuclei detection algorithm (Schmid et al., MICCAI 2018; Weigert et al., WACV 2020), an in-house developed image artifact exclusion approach, and spatial analysis based on a Squidpy workflow (Palla G. et al., Nat Meth 2022). Results: We interrogated the composition of breast tumor tissues (n=9). Our in-house pipeline filters out both autofluorescent objects and segmentation artifacts based on tissue autofluorescence and morphological features. Once the quality of segmented cell detection was assured, over 20 different cell phenotypes were identified by unsupervised Leiden clustering and visualized using the UMAP dimensionality reduction method. This revealed several myeloid cell subsets with heterogeneous phenotypes, including but not limited to immunosuppressive neutrophils and macrophages. In contrast, T regulatory lymphocytes were uniformly present in the majority of cores, and their co-occurrence with CD14+CD163+ macrophages was observed. Conclusion: We showcased an innovative automated workflow that highlights the ease of adopting multiplex imaging to explore TME composition at single-cell resolution using a simple slide-in, data-out workflow. This workflow is easily transferable to various cohorts of specimens, providing a toolset for the spatial cellular dissection of the cancer milieu. Citation Format: Vytautas Navikas, Quentin Juppet, Samuel Aubert, Benjamin Pelz, Joanna Kowal, Diego Dupouy. Automated multiplex immunofluorescence workflow to interrogate the cellular composition of the tumor microenvironment. [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2023; Part 1 (Regular and Invited Abstracts); 2023 Apr 14-19; Orlando, FL. Philadelphia (PA): AACR; Cancer Res 2023;83(7_Suppl):Abstract nr 4620.
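The downstream phenotyping steps named in this abstract (Leiden clustering, UMAP, and Squidpy spatial statistics) can be sketched on a per-cell marker matrix as below; the simulated inputs and preprocessing choices are assumptions, not the authors' pipeline.

```python
# Sketch: Leiden phenotyping and spatial co-occurrence on per-cell marker
# intensities, using the scanpy/squidpy APIs named in the abstract.
# Inputs are simulated placeholders for segmented-cell measurements.
import numpy as np
import anndata as ad
import scanpy as sc
import squidpy as sq

rng = np.random.default_rng(0)
marker_intensities = rng.random((500, 39))  # (n_cells, n_markers), simulated
coords = rng.random((500, 2)) * 1000        # (n_cells, 2) cell centroids

adata = ad.AnnData(X=marker_intensities, obsm={"spatial": coords})
sc.pp.log1p(adata)      # simple variance-stabilising transform (assumption)
sc.pp.neighbors(adata)  # kNN graph in marker space
sc.tl.leiden(adata)     # unsupervised phenotype clusters
sc.tl.umap(adata)       # 2D embedding for visualisation

# spatial co-occurrence of phenotypes, e.g. Tregs vs. macrophage clusters
sq.gr.co_occurrence(adata, cluster_key="leiden")
```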
APA, Harvard, Vancouver, ISO and other citation styles
41

Mironov, S. V., K. N. Dudkin and A. K. Doudkine. "Separation of Figure from Ground as an Adaptive Image Processing". Perception 26, No. 1_suppl (August 1997): 270. http://dx.doi.org/10.1068/v970139.

Full text of the source
Annotation:
We define the separation of figure from ground as a visual-attribute-dependent and task-dependent representation of sensory information in higher-level visual processes. A computer model for the adaptive segmentation of 2D visual objects (Dudkin et al., 1995, Proceedings of SPIE 122) was developed in these studies. The description and separation of figure from ground are implemented by spatial-frequency filters and feature detectors performing as self-organising mechanisms. The simulation of control processes caused by attention (top-down) and by lateral, frequency-selective, and cross-orientation inhibition (bottom-up) determines the adaptive image processing. The first stage estimates the input image by analysing the spatial brightness distribution with algorithms that calculate a vector of primary descriptive attributes. These results drive the synthesis of control processes based on several algorithms, each of which transforms descriptive attributes into separate control parameters. The creation of two primary descriptions, 'sustained' (contours) and 'transient' (fragments with homogeneous intensity), and the selection of feature-detection operators are governed by the complete set of control parameters. The primary descriptions allow the formation of an intermediate image description in which similar elements are grouped by identical brightness, colour, spatial position, curvature, and texture, according to Gestalt concepts. To divide the image into basic areas and extract fragments belonging to a putative figure, all these descriptions are combined into a final integrated image representation. The model has been tested on various images.
APA, Harvard, Vancouver, ISO and other citation styles
42

Kuo, C. F., K. Zheng, S. Miao, L. Lu, C. I. Hsieh, C. Lin and T. Y. Fan. "OP0062 PREDICTIVE VALUE OF BONE TEXTURE FEATURES EXTRACTED BY DEEP LEARNING MODELS FOR THE DETECTION OF OSTEOARTHRITIS: DATA FROM THE OSTEOARTHRITIS INITIATIVE". Annals of the Rheumatic Diseases 79, Suppl 1 (June 2020): 41.2–42. http://dx.doi.org/10.1136/annrheumdis-2020-eular.2858.

Full text of the source
Annotation:
Background: Osteoarthritis is a degenerative disorder characterized by radiographic features of asymmetric loss of joint space, subchondral sclerosis, and osteophyte formation. Conventional plain films are essential to detect structural changes in osteoarthritis. Recent evidence suggests that fractal- and entropy-based bone texture parameters may improve the prediction of radiographic osteoarthritis [1]. In contrast to fixed texture features, deep learning models allow comprehensive extraction and recognition of texture features relevant to osteoarthritis. Objectives: To assess the predictive value of deep learning-extracted bone texture features in the detection of radiographic osteoarthritis. Methods: We used data from the Osteoarthritis Initiative, a longitudinal study with 4,796 patients followed up and assessed for osteoarthritis. We used a training set of 25,978 images from 3,086 patients to develop the textural model. We used the BoneFinder software [2] for segmentation of the distal femur and proximal tibia, and the Deep Texture Encoding Network (Deep-TEN) [3] to encode the bone texture features into a vector, which is fed to a 5-way linear classifier for Kellgren and Lawrence (KL) grading of osteoarthritis. We also developed a Residual Network with 18 layers (ResNet18) for comparison, since it deals with contours as well. Spearman's correlation coefficient was used to assess the correlation between predicted and reference KL grades. We also tested the performance of the models in identifying osteoarthritis (KL grade ≥ 2). Results: We obtained 6,490 knee radiographs from 446 female and 326 male patients who were not in the training set to validate the performance of the models. The distribution of KL grades in the training and testing sets is shown in Table 1. The Spearman's correlation coefficient was 0.60 for the Deep-TEN and 0.67 for the ResNet18 model. Table 2 shows the performance of the models in detecting osteoarthritis. The positive predictive value of the Deep-TEN and ResNet18 classifications for OA was 81.37% and 87.46%, respectively.

Table 1. Distribution of KL grades in the training and testing sets.
KL grade       0              1             2             3             4           Total
Training set   10893 (41.9%)  4582 (18.7%)  6114 (23.5%)  3320 (12.8%)  799 (3.1%)  25,978
Testing set    2472 (38.1%)   1353 (20.8%)  1696 (26.1%)  775 (11.9%)   194 (3.0%)  6,490

Table 2. Performance of the Deep-TEN and ResNet18 models in detecting osteoarthritis.
                           Deep-TEN                           ResNet18
Sensitivity                62.29% (95% CI, 60.42%–64.13%)     59.14% (95% CI, 57.24%–61.01%)
Specificity                90.07% (95% CI, 89.07%–91.00%)     94.09% (95% CI, 93.30%–94.82%)
Positive predictive value  81.37% (95% CI, 79.81%–82.84%)     87.46% (95% CI, 85.96%–88.82%)
Negative predictive value  77.42% (95% CI, 77.64%–79.65%)     76.77% (95% CI, 75.93%–77.59%)

Conclusion: This study demonstrates that the bone texture model performs reasonably well in detecting radiographic osteoarthritis, with performance similar to the bone contour model. References: [1] Bertalan Z, Ljuhar R, Norman B, et al. Combining fractal- and entropy-based bone texture analysis for the prediction of osteoarthritis: data from the multicenter osteoarthritis study (MOST). Osteoarthritis Cartilage 2018;26:S49. [2] Lindner C, Wang CW, Huang CT, et al. Fully Automatic System for Accurate Localisation and Analysis of Cephalometric Landmarks in Lateral Cephalograms. Sci Rep 2016;6:33581. [3] Zhang H, Xue J, Dana K. Deep TEN: Texture Encoding Network. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017:708-17. Disclosure of Interests: None declared
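The screening metrics reported in Table 2 follow directly from confusion-matrix counts; a small helper for reproducing such summaries:

```python
# Sensitivity, specificity, PPV and NPV from confusion-matrix counts.
def screening_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }
```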
APA, Harvard, Vancouver, ISO and other citation styles
43

Wang, Xingzhu, Jiyu Wei, Yang Liu, Jinhao Li, Zhen Zhang, Jianyu Chen and Bin Jiang. "Research on Morphological Detection of FR I and FR II Radio Galaxies Based on Improved YOLOv5". Universe 7, No. 7 (25.06.2021): 211. http://dx.doi.org/10.3390/universe7070211.

Full text of the source
Annotation:
Recently, astronomy has witnessed great advances in detectors and telescopes. Imaging data collected by these instruments are organized into very large datasets that form data-oriented astronomy. The imaging data contain many radio galaxies (RGs) of interest to astronomers. However, given the extremely large scale of astronomical databases in the information age, a manual search for these galaxies is impractical. The ability to detect specific types of galaxies therefore depends largely on computer algorithms, and applying machine learning algorithms to large astronomical datasets can detect galaxies in photometric images more effectively. Astronomers are motivated to develop tools that can automatically analyze massive imaging data, including automatic morphological detection of specified radio sources. Galaxy Zoo projects have generated great interest in visually classifying galaxy samples using CNNs, and Banfield et al. studied radio morphologies and host galaxies derived from visual inspection in the Radio Galaxy Zoo project. However, galaxy classification has received relatively more attention than galaxy detection. We develop a galaxy detection model that locates and classifies Fanaroff–Riley class I (FR I) and Fanaroff–Riley class II (FR II) galaxies. The field of object detection has developed rapidly since the convolutional neural network was proposed; You Only Look Once: Unified, Real-Time Object Detection (YOLO) is a neural-network-based detection model proposed by Redmon et al. We made several improvements to the detection of dense galaxies based on the original YOLOv5, mainly the following. (1) We use Varifocal loss, which weighs positive and negative samples asymmetrically and highlights the principal positive samples during training. (2) Our neural network adds an attention mechanism to the convolution kernels so that the feature extraction network can dynamically adjust the size of its receptive field in deep convolutional neural networks; in this way, the model adapts well to identifying galaxies of different sizes in an image. (3) We use empirical practices suited to small-target detection, such as image segmentation and reducing the stride of the convolutional layers. Beyond these three main contributions, the study also combined different data sources, i.e., radio images and optical images, aiming at better classification performance and more accurate positioning. We used optical image data from SDSS, radio image data from FIRST, and label data from FR I and FR II catalogs to create a dataset of FR Is and FR IIs. We then trained our improved YOLOv5 model on this dataset and realized the automatic classification and detection of FR Is and FR IIs. Experimental results show that our improved method achieves better performance: the model's mAP@0.5 reaches 82.3%, and the locations (RA and Dec) of the galaxies are identified more accurately. Our model has clear astronomical value; for example, it can help astronomers find FR I and FR II galaxies to build larger galaxy catalogs, and our detection method can be extended to other types of RGs. Astronomers can thus locate galaxies of a specific type in considerably less time and with minimal human intervention, or combine the detections with other observational data (spectra and redshifts) to explore further properties of the galaxies.
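The Varifocal loss mentioned in improvement (1) can be sketched following the VarifocalNet formulation: positives are weighted by their target score q, negatives are down-weighted by alpha * p**gamma. Its integration into YOLOv5's heads is not shown here, and the default alpha and gamma are assumptions.

```python
# Sketch of Varifocal loss: asymmetric weighting of positive and negative
# samples on top of binary cross-entropy with logits.
import torch
import torch.nn.functional as F

def varifocal_loss(logits, targets, alpha=0.75, gamma=2.0):
    """logits: raw predictions; targets: 0 for negatives, q in (0, 1] for positives."""
    p = torch.sigmoid(logits)
    positive = (targets > 0).float()
    # positives keep their target score q as weight; negatives are focal-damped
    weight = positive * targets + (1 - positive) * alpha * p.pow(gamma)
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (weight * bce).mean()
```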
APA, Harvard, Vancouver, ISO and other citation styles
44

Schneider, Christine A., Dario X. Figueroa Velez, Ricardo Azevedo, Evelyn M. Hoover, Cuong J. Tran, Chelsie Lo, Omid Vadpey, Sunil P. Gandhi and Melissa B. Lodoen. "Imaging the dynamic recruitment of monocytes to the blood–brain barrier and specific brain regions during Toxoplasma gondii infection". Proceedings of the National Academy of Sciences 116, No. 49 (14.11.2019): 24796–807. http://dx.doi.org/10.1073/pnas.1915778116.

Full text of the source
Annotation:
Brain infection by the parasite Toxoplasma gondii in mice is thought to generate vulnerability to predation by mechanisms that remain elusive. Monocytes play a key role in host defense and inflammation and are critical for controlling T. gondii. However, the dynamic and regional relationship between brain-infiltrating monocytes and parasites is unknown. We report the mobilization of inflammatory (CCR2+Ly6Chi) and patrolling (CX3CR1+Ly6Clo) monocytes into the blood and brain during T. gondii infection of C57BL/6J and CCR2RFP/+CX3CR1GFP/+ mice. Longitudinal analysis of mice using 2-photon intravital imaging of the brain through cranial windows revealed that CCR2-RFP monocytes were recruited to the blood–brain barrier (BBB) within 2 wk of T. gondii infection, exhibited distinct rolling and crawling behavior, and accumulated within the vessel lumen before entering the parenchyma. Optical clearing of intact T. gondii-infected brains using iDISCO+ and light-sheet microscopy enabled global 3D detection of monocytes. Clusters of T. gondii and individual monocytes across the brain were identified using an automated cell segmentation pipeline, and monocytes were found to be significantly correlated with sites of T. gondii clusters. Computational alignment of brains to the Allen annotated reference atlas [E. S. Lein et al., Nature 445:168–176 (2007)] indicated a consistent pattern of monocyte infiltration during T. gondii infection to the olfactory tubercle, in contrast to LPS treatment of mice, which resulted in a diffuse distribution of monocytes across multiple brain regions. These data provide insights into the dynamics of monocyte recruitment to the BBB and the highly regionalized localization of monocytes in the brain during T. gondii CNS infection.
APA, Harvard, Vancouver, ISO and other citation styles
45

Kang, Hyung W., and Sung Yong Shin. "Creating Walk-Through Images from a Video Sequence of a Dynamic Scene". Presence: Teleoperators and Virtual Environments 13, No. 6 (December 2004): 638–55. http://dx.doi.org/10.1162/1054746043280556.

Full text of the source
Annotation:
Tour into the picture (TIP), proposed by Horry et al. (Horry, Anjyo, & Arai, 1997, ACM SIGGRAPH '97 Conference Proceedings, 225–232), is a method for generating a sequence of walk-through images from a single reference image. By navigating a 3D scene model constructed from the image, TIP provides convincing 3D effects. This paper presents a comprehensive scheme for creating walk-through images from a video sequence by generalizing the idea of TIP. To address the various problems of dealing with a video sequence rather than a single image, the proposed scheme has the following features. First, it incorporates a new modeling scheme based on a vanishing circle identified in the video, assuming that the input video contains a negligible amount of motion parallax and that dynamic objects move on a flat terrain. Second, we propose a novel scheme for automatic background detection from video, based on a 4-parameter motion model and statistical background color estimation. Third, to assist the extraction of static or dynamic foreground objects from video, we devised a semiautomatic boundary-segmentation scheme based on enhanced lane (Kang & Shin, 2002, Graphical Models, 64 (5), 282–303). The purpose of this work is to let users experience the feel of navigating into a video sequence with their own interpretation and imagination of a given scene. The proposed scheme covers various types of video footage of dynamic scenes, such as sports coverage, cartoon animation, and movie films, in which objects continuously change their shapes and locations. It can also be used to produce a variety of synthetic video sequences by importing and merging dynamic foreign objects with the original video.
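One common realization of statistical background color estimation for an aligned (motion-compensated) sequence is the per-pixel temporal median, sketched below; the paper's 4-parameter motion model and lane-based segmentation are not reproduced.

```python
# Sketch of statistical background estimation: after motion compensation, the
# per-pixel temporal median approximates the background color of the scene.
import numpy as np

def estimate_background(frames):
    """frames: (T, H, W, 3) aligned video frames -> (H, W, 3) background."""
    return np.median(frames, axis=0).astype(frames.dtype)

def foreground_mask(frame, background, thresh=30):
    """Pixels far from the background estimate are flagged as foreground."""
    diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=-1)
    return diff > thresh
```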
APA, Harvard, Vancouver, ISO and other citation styles
46

Lee, Si-Wook, Hee-Uk Ye, Kyung-Jae Lee, Woo-Young Jang, Jong-Ha Lee, Seok-Min Hwang and Yu-Ran Heo. "Reply to Çiftci, S.; Aydin, B.K. Comment on 'Lee et al. Accuracy of New Deep Learning Model-Based Segmentation and Key-Point Multi-Detection Method for Ultrasonographic Developmental Dysplasia of the Hip (DDH) Screening. Diagnostics 2021, 11, 1174'". Diagnostics 12, No. 7 (18.07.2022): 1739. http://dx.doi.org/10.3390/diagnostics12071739.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
47

Mokhlespour Esfahani, Mohammad Iman, and Maury A. Nussbaum. "Detection of Occupational Physical Activities using a Smart Textile System". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, No. 1 (September 2018): 800–801. http://dx.doi.org/10.1177/1541931218621182.

Full text of the source
Annotation:
High-risk occupational physical activities (OPAs) may adversely impact the physical health of workers (Valero, Sivanathan, Bosché, & Abdel-Wahab, 2016). For example, adverse OPA-related outcomes are linked to repetitive physical activities or postures that target certain muscle groups or anatomical areas, and to a sedentary workstyle that appears to contribute to work-related musculoskeletal disorders (WMSDs), including back pain, spinal disorders, and joint complaints (Smith et al., 2016). To offset the risk of the range of disorders noted above, there is a need to identify occupational tasks and to detect high-risk OPAs in the workplace. For these purposes, activity monitoring represents a potentially effective approach, and physical monitoring systems are potential assessment devices for detecting undesired or high-risk occupational activities during the workday. A promising area of research, and a new alternative for detecting and monitoring a range of OPAs, is the smart textile system (STS). STSs are composed of sensitive/actuating fiber, yarn, or fabric that can sense an external stimulus. All required components of an STS (sensors, electronic boards, energy supply, etc.) can be conveniently embedded into a garment, providing a fully textile-based system. In this study, our goal was to assess the feasibility and accuracy of using an STS to monitor several occupational activities. A particular STS was examined, consisting of a commercially available system (i.e., "smart" socks) using textile pressure sensors, and a custom smart shirt (Mokhlespour Esfahani & Nussbaum, (Under review)) using textile strain sensors. We also explored the relative merits of these two approaches, separately and in combination. Nine manual material handling and lifting tasks were simulated in a lab, representing common occupational tasks (e.g., as performed by a delivery driver) that involve variations of manual box lifting, carrying, pushing, pulling, and reaching: Activity 1 = squat lifting of a box from mid-shank height to elbow height and lowering; Activity 2 = stoop lifting a box from mid-shank height to elbow height and lowering; Activity 3 = semi-squat lifting a box from mid-shank height to elbow height and lowering; Activity 4 = lifting and turning a box from the table on the right side and inversely; Activity 5 = carrying a box from one table to another (3 meters); Activity 6 = turning and placing a box on the table on the left side and inversely; Activity 7 = pushing a box away from the body over a distance of 70 cm and inversely; Activity 8 = pulling a box toward the body over 70 cm and inversely; Activity 9 = lifting a box from elbow height to overhead height and inversely. Eleven participants performed these tasks while wearing the STS. Task classification accuracy using data from the STS was determined using several methods: we implemented three of the more popular algorithms, k-nearest neighbors (k-NN), support vector machine (SVM), and artificial neural network (ANN), and compared their relative performance. The classification algorithms were implemented both for each participant (individual level) and for all 11 participants (group level), and classification models were developed separately for the shirt only, the socks only, and the complete STS. Global accuracy at the group level using k-NN, SVM, and ANN ranged from ~95 to 99%, from ~29 to 97%, and from ~89 to 97%, respectively; all algorithms performed very well, and comparably, at the individual level. According to the results, data from either the shirt or the socks yielded similar task classification performance in terms of global accuracy using k-NN and ANN, though the SVM performed relatively poorly with data from the shirt. Other performance criteria nevertheless indicated that the shirt outperformed the smart socks (e.g., in terms of sensitivity, specificity, and precision). Both the smart shirt and socks, and their combination, could detect occupational tasks with better than 97% accuracy. Our results support the feasibility of using an STS for identifying occupational tasks in jobs that require lifting, carrying, reaching, pulling, and pushing. However, some limitations should be noted. First, our participant sample involved healthy, young volunteers; the results may not generalize to an older population or to those with medical conditions. Second, the simulated occupational activities involved only manual material handling and did not cover all common workplace activities, which limits the extent to which the results generalize to a wider set of occupational activities. Third, the investigation relied on standardized activities in a laboratory setting; the performance of an STS in real workplaces may be less accurate. Fourth, the investigation did not address data segmentation, i.e., identifying the initiation and termination times of a given task. Based on our findings, however, we hope to facilitate future work that more effectively detects additional types of occupational activity that may help or hinder health and fitness. Such information will likely be of use to both workers and ergonomists; more specifically, results from future investigations may provide strategies for more accurately identifying occupational injury risk factors associated with human movement. RibbedTee (Nevada, USA) kindly donated its products for developing the SUS. The first author was supported by a fellowship from the United Parcel Service (UPS); any opinions expressed here do not necessarily represent those of UPS.
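A sketch of the three-classifier comparison on windowed sensor features, using scikit-learn; dataset shapes and hyperparameters are placeholders rather than the study's settings.

```python
# Sketch: comparing k-NN, SVM, and an ANN (MLP) on windowed STS features with
# cross-validation. Simulated data stands in for the textile sensor features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(450, 20))     # placeholder feature windows
y = rng.integers(1, 10, size=450)  # activity labels 1-9

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "ANN": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```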
APA, Harvard, Vancouver, ISO, and other citation styles
48

Oliveira, Mafalda, Fiorella Ruiz-Pace, Judit Matito, Raquel Perez-Lopez, Anna Suñol, Meritxell Bellet, Santiago Escriva-de-Romani et al. "Determinants of concordance in clinically relevant genes (CRG) from synchronously acquired tumor biopsies (tBx) and ctDNA in metastatic breast cancer (MBC)." Journal of Clinical Oncology 37, No. 15_suppl (May 20, 2019): 1075. http://dx.doi.org/10.1200/jco.2019.37.15_suppl.1075.

Full text of the source
Annotation:
1075 Background: NGS in ctDNA from MBC is feasible and results may be informative for patients' management, especially in non-luminal tumors (Oliveira et al, ASCO 2018). We aimed to study the determinants of concordance in CRG in a cohort of 60 MBC patients undergoing tBx and ctDNA collection. Methods: MiSeq amplicon-based NGS (59 cancer-related genes) was performed on a single metastatic lesion per patient and compared with liquid biopsies taken at the same time point, at disease progression on prior treatment. Concordance in CRG (PIK3CA, AKT1, ERBB2, ESR1, PTEN, BRAF, FGFR1, HRAS, KRAS, and PIK3R1) between tBx and ctDNA was determined at the patient and at the mutation (mut) level and correlated with mutant allele fraction (MAF), total disease volume (TDV), and clinical characteristics. True positive in plasma (TPP): a patient with a mut detected both in ctDNA and tBx. TDV was defined as the volume of all metastases assessed by CT scan (excluding sclerotic bone metastases) and was analyzed by an experienced radiologist using the 3DSlicer semiautomatic segmentation tool (TDV = pixel size x number of pixels). Results: Concordance in CRG at the patient and mut level was 72% and 55%, respectively. Concordance for ERBB2 (1/1; 100%) and PIK3CA (17/22; 77%) was higher than for ESR1 (8/20; 40%) and AKT1 (2/6; 33%). ctDNA failed to detect 14 muts present in tBx (ESR1 n = 5, PIK3CA n = 5, AKT1 n = 3, BRAF n = 1). Concordance was 100% for non-luminal and 60% for luminal cases (P = 0.01). In univariate analysis, concordance was not associated with MAF in tBx (P = 0.15), TDV (P = 0.86), number of prior lines of therapy (P = 0.57), number of metastatic sites (P = 0.56), or presence of visceral metastasis (P = 1.0). In patients with PIK3CA mut (N = 22), those with TPP had a numerically higher TDV than those in whom a PIK3CA mut was not detected in ctDNA (20.9 cm3 vs 5.1 cm3, P = 0.28). Across all patients, in a multivariate logistic model adjusted for other factors, TDV was a determinant of TPP (OR 1.02, 95% CI 1.0-1.06; P = 0.059): for each 1 cm3 increase in TDV, the odds of detecting a mut in ctDNA increased by approximately 2%. Conclusions: Our results suggest that liquid biopsy testing for the detection of actionable CRG is clinically valid in MBC, although its yield depends on several factors – tumor subtype, analyzed gene, and possibly tumor volume – that reflect both tumor heterogeneity and tumor shedding rate. Given the potential clinical implications, the observation that mutation detection in ctDNA may correlate with tumor volume merits further study in a larger dataset.
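
Two quantities in this abstract lend themselves to a short sketch: TDV computed from a segmentation mask ("pixel size x number of pixels") and a logistic model of TPP status on TDV, whose exponentiated coefficient is the reported odds ratio. The variable names and toy data below are assumptions for illustration, not the study's code.

# Illustrative sketch: TDV from a binary segmentation mask, and a logistic
# model of TPP (1 = mut seen in both tBx and ctDNA, 0 = missed in ctDNA)
# on TDV. The cohort values here are placeholders.
import numpy as np
import statsmodels.api as sm

def total_disease_volume(mask: np.ndarray, voxel_volume_cm3: float) -> float:
    """TDV = voxel volume x number of segmented voxels, over all lesions."""
    return float(mask.sum()) * voxel_volume_cm3

tdv = np.array([20.9, 5.1, 12.0, 3.4, 40.2, 2.8, 7.5, 25.0])  # cm3, placeholder
tpp = np.array([1, 0, 1, 0, 1, 1, 0, 1])                      # placeholder

model = sm.Logit(tpp, sm.add_constant(tdv)).fit(disp=False)
odds_ratio = np.exp(model.params[1])  # OR per 1 cm3 increase in TDV
print(f"OR per 1 cm3 of TDV: {odds_ratio:.3f}")

With the study's fitted model, an OR of 1.02 per cm3 corresponds to the reported ~2% increase in the odds of plasma detection per 1 cm3 of TDV.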
APA, Harvard, Vancouver, ISO, and other citation styles
49

Bunnell, Arianna, Dustin Valdez, Thomas Wolfgruber, Aleen Altamirano, Brenda Hernandez, Peter Sadowski and John Shepherd. "Abstract P3-04-05: Artificial Intelligence Detects, Classifies, and Describes Lesions in Clinical Breast Ultrasound Images." Cancer Research 83, No. 5_Supplement (March 1, 2023): P3–04–05. http://dx.doi.org/10.1158/1538-7445.sabcs22-p3-04-05.

Full text of the source
Annotation:
Abstract. Purpose: Many low- and middle-income countries (LMICs) suffer from chronic shortages of resources that inhibit the implementation of effective breast cancer screening programs. Advanced breast cancer rates in the U.S. Affiliated Pacific Islands substantially exceed those of the United States. We propose the use of portable breast ultrasound coupled with artificial intelligence (AI) algorithms to assist non-radiologist field personnel in real-time field lesion detection, classification, and biopsy, as well as determination of breast density for risk assessment. In this study, we examine the ability of an AI algorithm to detect and describe breast cancer lesions in clinically acquired breast ultrasound images in 40,000 women participating in a Hawaii screening program. Materials and Methods: The Hawaii and Pacific Islands Mammography Registry (HIPIMR) collects breast health questionnaires and breast imaging (mammography, ultrasound, and MRI) from participating centers in Hawaii and the Pacific and links this information to the Hawaii Tumor Registry for cancer findings. From the women with either screening or diagnostic B-mode breast ultrasound exams, we selected 3 negative cases (no cancer) for every positive case, matched by age, excluding Doppler and elastography images. The blinded images were read by the study radiologist, who delineated all lesions and described them in terms of the BI-RADS lexicon. The images were split by woman into training (70%), validation and hyperparameter selection (20%), and testing (20%) subsets. An AI model was fine-tuned for lesion and BI-RADS category classification from a Detectron2 framework [1] pre-trained on the COCO Instance Segmentation Dataset [2]. Model performance was evaluated by computing precision and sensitivity percentages, as well as the area under the receiver operating characteristic curve (AUROC). Detections were considered positive if they overlapped a ground-truth lesion delineation by at least 50% (Intersection over Union = 0.5), and a maximum of 4 detections were generated per image. Timing experiments were run on a GPU-enabled (Nvidia Tesla V100) machine on unbatched images. Results: Over the 10-year observation period, we identified 5,214 women with US images meeting our criteria. Of these, 111 were diagnosed with malignant breast cancer and 333 were selected as non-cases, for a total of 444 women. These 444 women had a total of 4,623 ultrasound images, with 2,028 benign and 1,431 malignant lesions identified by the study radiologist. For cancerous lesions, the AI algorithm had 8% precision at a sensitivity of 90% on the testing set. For benign lesions, a sensitivity of 90% resulted in 5% precision on the testing set. The AUROC for bounding-box detections of cancerous lesions was 0.90; for benign lesions, it was 0.87. The model made predictions at a rate of 25 frames per second (38.7 milliseconds per image). Conclusion: Detection, segmentation, and cancer classification of breast lesions are possible in clinically acquired ultrasound images using AI. Based on our timing experiments, the model is capable of detecting and classifying lesions in real time during ultrasound capture. Model performance is expected to improve as more data become available for training. Future work will involve further fine-tuning of the model on portable breast ultrasound images and increasing model evaluation speed in order to assess utility in low-resource populations. [1] Wu Y, Kirillov A, Massa F, Lo W-Y, Girshick R. Detectron2. https://github.com/facebookresearch/detectron2. [2] Lin T-Y, Maire M, Belongie S, et al. Microsoft COCO: Common Objects in Context. Computer Vision – ECCV 2014. Springer International Publishing; 2014:740-755. Citation Format: Arianna Bunnell, Dustin Valdez, Thomas Wolfgruber, Aleen Altamirano, Brenda Hernandez, Peter Sadowski, John Shepherd. Artificial Intelligence Detects, Classifies, and Describes Lesions in Clinical Breast Ultrasound Images [abstract]. In: Proceedings of the 2022 San Antonio Breast Cancer Symposium; 2022 Dec 6-10; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2023;83(5 Suppl):Abstract nr P3-04-05.
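
The evaluation rule stated in this abstract (a detection counts as positive if it overlaps a ground-truth delineation with IoU >= 0.5) can be made concrete with a short, framework-independent sketch. The box format and the greedy matching scheme are assumptions, since the abstract does not specify the matching procedure.

# Minimal sketch of IoU-based detection matching. Boxes are (x1, y1, x2, y2).
from typing import List, Tuple

Box = Tuple[float, float, float, float]

def iou(a: Box, b: Box) -> float:
    """Intersection over Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(dets: List[Box], gts: List[Box], thr: float = 0.5):
    """Greedily match each detection to an unused ground-truth lesion."""
    tp, used = 0, set()
    for d in dets:
        for i, g in enumerate(gts):
            if i not in used and iou(d, g) >= thr:
                tp += 1
                used.add(i)
                break
    fp = len(dets) - tp  # unmatched detections
    fn = len(gts) - tp   # missed lesions
    return tp, fp, fn    # precision = tp/(tp+fp), sensitivity = tp/(tp+fn)

dets = [(10, 10, 50, 50), (60, 60, 90, 90)]
gts = [(12, 12, 48, 48)]
print(match_detections(dets, gts))  # -> (1, 1, 0)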
APA, Harvard, Vancouver, ISO, and other citation styles
50

Vosberg, Sebastian, Luise Hartmann, Stephanie Schneider, Klaus H. Metzeler, Bianka Ksienzyk, Kathrin Bräundl, Martin Neumann et al. "Detection of Chromosomal Aberrations in Acute Myeloid Leukemia By Copy Number Alteration Analysis of Exome Sequencing Data." Blood 126, No. 23 (December 3, 2015): 3859. http://dx.doi.org/10.1182/blood.v126.23.3859.3859.

Full text of the source
Annotation:
Abstract: Exome sequencing is widely used and well established for detecting tumor-specific sequence variants such as point mutations and small insertions/deletions. Beyond single-nucleotide resolution, sequencing data can also be used to identify changes in sequence coverage between samples, enabling the detection of copy number alterations (CNAs). Somatic CNAs represent gain or loss of genomic material in tumor cells, such as aneuploidies (e.g., monosomies and trisomies), duplications, or deletions. To test the feasibility of somatic CNA detection from exome data, we analyzed 13 acute myeloid leukemia (AML) patients with known cytogenetic alterations detected at diagnosis (n=8) and/or at relapse (n=11). Corresponding remission exomes from all patients were available as germline controls, resulting in 19 comparisons of paired leukemia and remission exome data sets. Exome sequencing was performed on a HiSeq 2500 instrument (Illumina) with a mean target coverage of >100x. Exons with divergent coverage were detected using a linear regression model on mean exon coverage, and CNAs were called by an exact segmentation algorithm (Rigaill et al. 2012, Bioinformatics). For all samples, cytogenetic information was available either from routine chromosomal analysis or from fluorescence in situ hybridization (FISH). Blast counts were known for all but one AML sample (n=19). Copy-number-neutral cytogenetic alterations such as balanced translocations were excluded from the comparative analysis. By CNA analysis of exomes we were able to detect chromosomal aberrations consistent with routine cytogenetics in 18 out of 19 (95%) AML samples. In particular, we confirmed 2 out of 2 monosomies (both -7) and 9 out of 10 trisomies (+4, n=1; +8, n=8; +21, n=1), e.g., trisomy 8 in Figure 1A. Partial amplifications or deletions of chromosomes were confirmed in 10 out of 10 AML samples (dup(1q), n=3; dup(8q), n=1; del(5q), n=3; del(17p), n=1; del(20q), n=2), e.g., del(5q) in Figure 1B. In the one case with inconsistent findings between exome and cytogenetic data, the alteration was carried by a small subclone described in only 4 out of 21 metaphases (19%). To assess the specificity of our CNA approach, we analyzed the exomes of 44 cytogenetically normal (CN) AML samples. We did not detect any CNAs larger than 5 Mb in the vast majority of these samples (43/44, 98%); only one large CNA was detected, indicating a trisomy 8. Estimates of clone size were highly correlated between CNA analysis of exomes and the parameters from cytogenetics and cytomorphology (p=0.0076, Fisher's exact test, Figure 1C). In CNA analysis of exomes, we derived the clone size from the tumor-to-remission coverage ratio. Clone size estimation by cytogenetics and cytomorphology was performed by calculating the mean of the blast count and the abnormal metaphase/interphase count. Of note, clone sizes estimated by CNA analysis of exomes tended to be slightly larger. This may result from purification by Ficoll gradient centrifugation prior to DNA extraction for sequencing and/or from the fact that the fraction of cells analyzed by cytogenetics does not accurately represent the true size of the malignant clone because of differences in the mitotic index between normal and malignant cells. Overall, there was high agreement between our CNA analysis of exome sequencing data and routine cytogenetics, with limitations in the detection of small subclones.
Our results confirm that high-throughput sequencing is a versatile, valuable, and robust method to detect chromosomal changes resulting in copy number alterations in AML, with high specificity and sensitivity (98% and 95%, respectively). Figure 1: (A) Detection of trisomy 8 with an estimated clone size of 100%. (B) Detection of a deletion on chromosome 5q with an estimated clone size of 90%. (C) Correlation of clone size estimation by routine diagnostics and exome sequencing (p=0.0076). Disclosures: No relevant conflicts of interest to declare.
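
The clone-size estimate referenced in the abstract can be reconstructed under a standard assumption: in a diploid genome, a single-copy gain in a fraction f of cells shifts the tumor-to-remission coverage ratio r to 1 + f/2, and a single-copy loss shifts it to 1 - f/2, giving f = 2|r - 1|. The abstract does not spell out its exact formula, so the sketch below is a hedged reconstruction, not the authors' code.

# Assumed reconstruction: clone fraction from the coverage ratio of a CNA
# segment, valid for heterozygous single-copy gains or losses in a diploid
# genome. The exact formula used in the study is not given in the text.
import numpy as np

def clone_size_from_ratio(tumor_cov: np.ndarray, remission_cov: np.ndarray) -> float:
    """Estimate clone fraction within a CNA segment from mean exon coverage."""
    r = tumor_cov.mean() / remission_cov.mean()
    return min(2.0 * abs(r - 1.0), 1.0)  # cap the estimate at 100%

# Example: a trisomy present in every tumor cell gives r = 3/2 = 1.5,
# hence an estimated clone size of 2 * |1.5 - 1| = 1.0 (100%).
tumor = np.full(200, 150.0)      # placeholder mean coverage per exon
remission = np.full(200, 100.0)
print(clone_size_from_ratio(tumor, remission))  # -> 1.0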
APA, Harvard, Vancouver, ISO, and other citation styles