Journal articles on the topic "3D visualisation and segmentation"

To see other types of publications on this topic, follow the link: 3D visualisation and segmentation.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "3D visualisation and segmentation".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the online abstract of the work, if these details are available in its metadata.

Browse journal articles from a wide range of disciplines and put together your bibliography correctly.

1

Gaifas, Lorenzo, Moritz A. Kirchner, Joanna Timmins, and Irina Gutsche. "Blik is an extensible 3D visualisation tool for the annotation and analysis of cryo-electron tomography data." PLOS Biology 22, no. 4 (April 30, 2024): e3002447. http://dx.doi.org/10.1371/journal.pbio.3002447.

Abstract:
Powerful, workflow-agnostic and interactive visualisation is essential for the ad hoc, human-in-the-loop workflows typical of cryo-electron tomography (cryo-ET). While several tools exist for visualisation and annotation of cryo-ET data, they are often integrated as part of monolithic processing pipelines, or focused on a specific task and offering limited reusability and extensibility. With each software suite presenting its own pros and cons and tools tailored to address specific challenges, seamless integration between available pipelines is often a difficult task. As part of the effort to enable such flexibility and move the software ecosystem towards a more collaborative and modular approach, we developed blik, an open-source napari plugin for visualisation and annotation of cryo-ET data (source code: https://github.com/brisvag/blik). blik offers fast, interactive, and user-friendly 3D visualisation thanks to napari, and is built with extensibility and modularity at the core. Data is handled and exposed through well-established scientific Python libraries such as numpy arrays and pandas dataframes. Reusable components (such as data structures, file read/write, and annotation tools) are developed as independent Python libraries to encourage reuse and community contribution. By easily integrating with established image analysis tools—even outside of the cryo-ET world—blik provides a versatile platform for interacting with cryo-ET data. On top of core visualisation features—interactive and simultaneous visualisation of tomograms, particle picks, and segmentations—blik provides an interface for interactive tools such as manual, surface-based and filament-based particle picking, and image segmentation, as well as simple filtering tools. Additional self-contained napari plugins developed as part of this work also implement interactive plotting and selection based on particle features, and label interpolation for easier segmentation. Finally, we highlight the differences with existing software and showcase blik’s applicability in biological research.
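As a purely illustrative aside, the following minimal sketch shows the kind of interactive 3D visualisation that napari (the viewer blik builds on) provides for a tomogram and a set of particle picks. The volume and coordinates below are synthetic placeholders, and blik's own plugin interface and file readers are not shown.

```python
# Minimal napari sketch: interactive 3D view of a (synthetic) tomogram volume
# and particle picks. This is not blik's API, only the underlying viewer calls.
import numpy as np
import napari

tomogram = np.random.rand(64, 256, 256).astype(np.float32)                # placeholder volume (z, y, x)
particles = np.random.uniform(low=0, high=[64, 256, 256], size=(100, 3))  # placeholder picks

viewer = napari.Viewer(ndisplay=3)                       # open the viewer in 3D display mode
viewer.add_image(tomogram, name="tomogram")              # volume layer
viewer.add_points(particles, name="particles", size=4)   # particle picks as a points layer
napari.run()                                             # start the event loop
```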
2

Jung, Y., H. Kim, B. Park, H. Lee, B. Kim, M. Bang, J. Lee, M. Oh, and G. Cho. "EP02.14: The new 3D‐based fetal segmentation and visualisation method." Ultrasound in Obstetrics & Gynecology 62, S1 (October 2023): 107. http://dx.doi.org/10.1002/uog.26634.

3

Kang, Hanwen, and Chao Chen. "Fruit detection, segmentation and 3D visualisation of environments in apple orchards." Computers and Electronics in Agriculture 171 (April 2020): 105302. http://dx.doi.org/10.1016/j.compag.2020.105302.

4

Colombo, E., T. Fick, G. Esposito, M. Germans, L. Regli, and T. van Doormaal. "Segmentation techniques of cerebral arteriovenous malformations for 3D visualisation: a systematic review." Brain and Spine 2 (2022): 101415. http://dx.doi.org/10.1016/j.bas.2022.101415.

5

Dury, Richard, Rob Dineen, Anbarasu Lourdusamy, and Richard Grundy. "Semi-automated medulloblastoma segmentation and influence of molecular subgroup on segmentation quality." Neuro-Oncology 21, Supplement_4 (October 2019): iv14. http://dx.doi.org/10.1093/neuonc/noz167.060.

Abstract:
Medulloblastoma is the most common malignant brain tumour in children. Segmenting the tumour itself from the surrounding tissue on MRI scans has been shown to be useful for neurosurgical planning, by allowing a better understanding of the tumour margin with 3D visualisation. However, manual segmentation of medulloblastoma is time consuming and prone to bias and inter-observer discrepancies. Here we propose a semi-automatic patient-based segmentation pipeline with little sensitivity to tumour location and minimal user input. Using SPM12 “Segment” as a base, an additional tissue component describing the medulloblastoma is included in the algorithm. The user is required to define the centre of mass and a single surface point of the tumour, creating an approximate enclosing sphere. The calculated volume is confined to the cerebellum to minimise misclassification of other intracranial structures. This process typically takes 5 minutes from start to finish. This method was applied to 97 T2-weighted scans of paediatric medulloblastoma (7 WNT, 6 SHH, 17 Gr3, 26 Gr4, 41 unknown subtype); the resulting segmented volumes were compared to manual segmentations. An average Dice coefficient of 0.85±0.07 was found, with the Group 4 subtype demonstrating a significantly higher similarity with manual segmentation than other subgroups (0.88±0.04). When visually assessing the 10 cases with the lowest Dice coefficients, it was found that the misclassification of oedema was the most common source of error. As this method is independent of image contrast, segmentation could be improved by applying it to images that are less sensitive to oedema, such as T1.
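The comparison above is reported as a Dice similarity coefficient. As a hedged illustration of that metric only (not the authors' SPM12-based pipeline), the snippet below computes the Dice overlap between two synthetic binary masks.

```python
# Dice similarity coefficient between two boolean masks: 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total > 0 else 1.0

# Two overlapping synthetic 3D masks standing in for automatic and manual segmentations.
auto = np.zeros((32, 32, 32), dtype=bool)
manual = np.zeros((32, 32, 32), dtype=bool)
auto[8:20, 8:20, 8:20] = True
manual[10:22, 10:22, 10:22] = True
print(f"Dice: {dice_coefficient(auto, manual):.3f}")
```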
6

Patekar, Rahul, Prashant Shukla Kumar, Hong-Seng Gan, and Muhammad Hanif Ramlee. "Automated Knee Bone Segmentation and Visualisation Using Mask RCNN and Marching Cube: Data From The Osteoarthritis Initiative." ASM Science Journal 17 (April 13, 2022): 1–7. http://dx.doi.org/10.32802/asmscj.2022.968.

Abstract:
In this work, an automated knee bone segmentation model is proposed. A mask region-based convolutional neural network (Mask R-CNN) is developed to segment the bone, which is then reconstructed into a 3D object using the Marching Cubes algorithm. The proposed method is divided into two stages. First, the Mask R-CNN is introduced to segment the subchondral knee bone from the input MRI sequence. In the second stage, the segmented output from the Mask R-CNN is fed as input to the Marching Cubes algorithm for the 3D reconstruction of the knee subchondral bone. The proposed method achieved high Dice similarity scores of 95.35% for the femur, 95.3% for the tibia, and 94.40% for the patella using a Mask R-CNN with ResNet-50 as the backbone architecture. Improved Dice similarity scores of 97.11% for the femur, 97.33% for the tibia, and 97.05% for the patella are obtained by the Mask R-CNN with ResNet-101 as the backbone. The Mask R-CNN framework thus demonstrated efficient and accurate detection and segmentation of knee subchondral bone in MRI sequences.
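To illustrate the second stage described above, the sketch below extracts a triangle mesh from a binary segmentation volume using the Marching Cubes implementation in scikit-image. The spherical mask is a synthetic placeholder rather than a knee MRI segmentation, and mesh export is only indicated in a comment.

```python
# Marching Cubes surface extraction from a (synthetic) binary segmentation volume.
import numpy as np
from skimage import measure

z, y, x = np.mgrid[:64, :64, :64]
mask = ((z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2) < 20 ** 2   # synthetic "bone" mask

# Extract the iso-surface at 0.5; returns vertices, triangular faces, normals, values.
verts, faces, normals, values = measure.marching_cubes(mask.astype(np.float32), level=0.5)
print(f"{len(verts)} vertices, {len(faces)} faces")
# verts/faces could now be written to an STL/OBJ file or rendered for 3D visualisation.
```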
7

Luo, Tess X. H., Wallace W. L. Lai, and Zhanzhan Lei. "Intensity Normalisation of GPR C-Scans." Remote Sensing 15, no. 5 (February 27, 2023): 1309. http://dx.doi.org/10.3390/rs15051309.

Abstract:
The three-dimensional (3D) ground-penetrating radar (GPR) has been widely applied in subsurface surveys and imaging, and the quality of the resulting C-scan images is determined by the spatial resolution and visualisation contrast. Previous studies have standardised the suitable spatial resolution of GPR C-scans; however, their measurement normalisation remains arbitrary. Human bias is inevitable in C-scan interpretation because different visualisation algorithms lead to different interpretation results. Therefore, an objective scheme for mapping GPR signals after standard processing to the visualisation contrast should be established. Focusing on two typical scenarios, a reinforced concrete structure and an urban underground, this study illustrated that the essential parameters were greyscale thresholding and transformation mapping. By quantifying the normalisation performance with the integration of image segmentation and structural similarity index measure, a greyscale threshold was developed in which the normalised standard deviation of the unit intensity of any surveyed object was two. A transformation function named “bipolar” was also shown to balance the maintenance of real reflections at the target objects. By providing academia/industry with an object-based approach, this study contributes to solving the final unresolved issue of 3D GPR imaging (i.e., image contrast) to better eliminate the interfering noise and better mitigate human bias for any one-off/touch-based imaging and temporal change detection.
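The normalisation performance above is quantified partly through the structural similarity index measure (SSIM). As a hedged illustration of that metric alone (not the authors' segmentation-plus-SSIM scheme), the snippet below compares two synthetic greyscale slices with scikit-image.

```python
# SSIM between two greyscale images; the arrays are synthetic stand-ins for
# greyscale-mapped C-scan slices, not real GPR data.
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
reference = rng.random((128, 128))                                           # "reference" slice
candidate = np.clip(reference + 0.05 * rng.standard_normal((128, 128)), 0.0, 1.0)

print(f"SSIM: {ssim(reference, candidate, data_range=1.0):.3f}")
```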
8

Medved, M. S., S. D. Rud, G. E. Trufanov, and D. S. Lebedev. "The intraoperative visualisation technique during lead implantation into the cardiac conductive system: aspects of computed tomography: prospective study." Diagnostic radiology and radiotherapy 14, no. 3 (October 5, 2023): 46–52. http://dx.doi.org/10.22328/2079-5343-2023-14-3-46-52.

Abstract:
INTRODUCTION: Lead implantation into the cardiac conduction system (CCS) is currently the most physiological method of pacing. "The method of intraoperative visualization and control of the lead position for permanent electrocardiostimulation during implantation of the lead in the CCS" was developed to reduce the number of non-targeted implantations. The method is based on integrating into the angiography system a 3D reconstruction of the heart derived from computed tomography (CT), displayed as a mask over the fluoroscopy background. CT is an important stage of the intraoperative visualization technique (IVT). OBJECTIVE: The aim of the study was to adapt the contrast-enhanced cardiac CT protocol for constructing a partially segmented 3D reconstruction of the heart on an angiographic complex, for subsequent use during lead implantation into the CCS within the framework of the authors' IVT. MATERIALS AND METHODS: As part of the development of the IVT, 21 cardiac CT studies were selected from the authors' own database, with a gradient step of about 10 HU in the density of the contrasted blood and a range of 0 HU to 200 HU in the densitometric difference between the left ventricle (LV) and the right ventricle (RV). A further 11 cardiac CT studies were selected, with a gradient step of about 10 HU in the densitometric difference between the contrasted blood in the RV cavity and the myocardium and a range of 0 HU to 100 HU. All CT scans were loaded in turn into the angiograph, followed by the creation of a 3D model of the heart using the basic software. RESULTS: The contrast of the LV cavity must exceed that of the RV cavity by at least 80 HU to perform partial segmentation of the left and right chambers of a 3D heart model in an angiographic complex that does not have a specialized segmentation module; with a smaller gradient, a substantial part of the LV cavity disappears when the RV cavity is suppressed. The minimum gradient between the ventricular cavity and the myocardium is at least 20 HU; with a smaller contrast gradient the boundaries of the right ventricular edge of the interventricular septum (IVS) are not visualized, which is important for determining the insertion site of the lead into the IVS. CONCLUSION: To perform partial segmentation of the left and right chambers of a 3D heart model in an angiographic complex without a specialized segmentation module, the contrast of the LV cavity must exceed that of the RV cavity by at least 80 HU and that of the RV cavity must exceed the myocardium by at least 20 HU.
9

Forte, Mari Nieves Velasco, Tarique Hussain, Arno Roest, Gorka Gomez, Monique Jongbloed, John Simpson, Kuberan Pushparajah, Nick Byrne, and Israel Valverde. "Living the heart in three dimensions: applications of 3D printing in CHD." Cardiology in the Young 29, no. 06 (June 2019): 733–43. http://dx.doi.org/10.1017/s1047951119000398.

Abstract:
Advances in biomedical engineering have led to three-dimensional (3D)-printed models being used for a broad range of different applications. Teaching medical personnel, communicating with patients and relatives, planning complex heart surgery, or designing new techniques for repair of CHD via cardiac catheterisation are now options available using patient-specific 3D-printed models. The management of CHD can be challenging owing to the wide spectrum of morphological conditions and the differences between patients. Direct visualisation and manipulation of the patients’ individual anatomy has opened new horizons in personalised treatment, providing the possibility of performing the whole procedure in vitro beforehand, thus anticipating complications and possible outcomes. In this review, we discuss the workflow to implement 3D printing in clinical practice, the imaging modalities used for anatomical segmentation, the applications of this emerging technique in patients with structural heart disease, and its limitations and future directions.
10

Gende, Mateo, Joaquim De Moura, Jorge Novo, Pablo Charlon, and Marcos Ortega. "Automatic Segmentation and Intuitive Visualisation of the Epiretinal Membrane in 3D OCT Images Using Deep Convolutional Approaches." IEEE Access 9 (2021): 75993–6004. http://dx.doi.org/10.1109/access.2021.3082638.

11

Pacheco-Gutierrez, Salvador, Hanlin Niu, Ipek Caliskanelli, and Robert Skilton. "A Multiple Level-of-Detail 3D Data Transmission Approach for Low-Latency Remote Visualisation in Teleoperation Tasks." Robotics 10, no. 3 (July 14, 2021): 89. http://dx.doi.org/10.3390/robotics10030089.

Abstract:
In robotic teleoperation, the knowledge of the state of the remote environment in real time is paramount. Advances in the development of highly accurate 3D cameras able to provide high-quality point clouds appear to be a feasible solution for generating live, up-to-date virtual environments. Unfortunately, the exceptional accuracy and high density of these data represent a burden for communications requiring a large bandwidth affecting setups where the local and remote systems are particularly geographically distant. This paper presents a multiple level-of-detail (LoD) compression strategy for 3D data based on tree-like codification structures capable of compressing a single data frame at multiple resolutions using dynamically configured parameters. The level of compression (resolution) of objects is prioritised based on: (i) placement on the scene; and (ii) the type of object. For the former, classical point cloud fitting and segmentation techniques are implemented; for the latter, user-defined prioritisation is considered. The results obtained are compared using a single LoD (whole-scene) compression technique previously proposed by the authors. Results showed a considerable improvement to the transmitted data size and updated frame rate while maintaining low distortion after decompression.
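As a rough, illustrative analogue of the multi-resolution idea described above (and not the authors' tree-based codec), the sketch below represents one synthetic point cloud at several levels of detail by voxel downsampling with the Open3D library; coarser voxels yield fewer points and hence less data to transmit.

```python
# Multiple levels of detail from one point cloud via voxel downsampling (Open3D).
import numpy as np
import open3d as o3d

points = np.random.rand(200_000, 3)                 # synthetic dense frame from a 3D camera
cloud = o3d.geometry.PointCloud()
cloud.points = o3d.utility.Vector3dVector(points)

for voxel_size in (0.005, 0.02, 0.08):              # finer -> coarser resolution
    lod = cloud.voxel_down_sample(voxel_size=voxel_size)
    print(f"voxel {voxel_size:.3f} -> {len(lod.points)} points")
```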
12

Santarossa, Monty, Ayse Tatli, Claus von der Burchard, Julia Andresen, Johann Roider, Heinz Handels, and Reinhard Koch. "Chronological Registration of OCT and Autofluorescence Findings in CSCR: Two Distinct Patterns in Disease Course." Diagnostics 12, no. 8 (July 22, 2022): 1780. http://dx.doi.org/10.3390/diagnostics12081780.

Abstract:
Optical coherence tomography (OCT) and fundus autofluorescence (FAF) are important imaging modalities for the assessment and prognosis of central serous chorioretinopathy (CSCR). However, setting the findings from both into spatial and temporal contexts as desirable for disease analysis remains a challenge due to both modalities being captured in different perspectives: sparse three-dimensional (3D) cross sections for OCT and two-dimensional (2D) en face images for FAF. To bridge this gap, we propose a visualisation pipeline capable of projecting OCT labels to en face image modalities such as FAF. By mapping OCT B-scans onto the accompanying en face infrared (IR) image and then registering the IR image onto the FAF image by a neural network, we can directly compare OCT labels to other labels in the en face plane. We also present a U-Net inspired segmentation model to predict segmentations in unlabeled OCTs. Evaluations show that both our networks achieve high precision (0.853 Dice score and 0.913 Area under Curve). Furthermore, medical analysis performed on exemplary, chronologically arranged CSCR progressions of 12 patients visualized with our pipeline indicates that, on CSCR, two patterns emerge: subretinal fluid (SRF) in OCT preceding hyperfluorescence (HF) in FAF and vice versa.
13

Ge, Ting, Tianming Zhan, Qinfeng Li, and Shanxiang Mu. "Optimal Superpixel Kernel-Based Kernel Low-Rank and Sparsity Representation for Brain Tumour Segmentation." Computational Intelligence and Neuroscience 2022 (June 24, 2022): 1–12. http://dx.doi.org/10.1155/2022/3514988.

Abstract:
Given the need for quantitative measurement and 3D visualisation of brain tumours, more and more attention has been paid to the automatic segmentation of tumour regions from brain tumour magnetic resonance (MR) images. In view of the uneven grey distribution of MR images and the fuzzy boundaries of brain tumours, a representation model based on the joint constraints of kernel low-rank and sparsity (KLRR-SR) is proposed to mine the characteristics and structural prior knowledge of brain tumour image in the spectral kernel space. In addition, the optimal kernel based on superpixel uniform regions and multikernel learning (MKL) is constructed to improve the accuracy of the pairwise similarity measurement of pixels in the kernel space. By introducing the optimal kernel into KLRR-SR, the coefficient matrix can be solved, which allows brain tumour segmentation results to conform with the spatial information of the image. The experimental results demonstrate that the segmentation accuracy of the proposed method is superior to several existing methods under different indicators and that the sparsity constraint for the coefficient matrix in the kernel space, which is integrated into the kernel low-rank model, has certain effects in preserving the local structure and details of brain tumours.
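For orientation only, one common way such a joint kernel low-rank and sparse representation is written is sketched below; the exact KLRR-SR objective, its multikernel construction and its solver may differ from this generic form, in which Z is the coefficient matrix, E an error term, and φ the implicit kernel feature map.

```latex
% Generic kernelised low-rank + sparse representation objective (illustrative only).
\min_{Z,\,E}\ \|Z\|_{*} + \lambda \|Z\|_{1} + \gamma \|E\|_{2,1}
\quad \text{s.t.} \quad \varphi(X) = \varphi(X)\, Z + E
```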
14

Geerlings-Batt, Jade, Carley Tillett, Ashu Gupta, and Zhonghua Sun. "Enhanced Visualisation of Normal Anatomy with Potential Use of Augmented Reality Superimposed on Three-Dimensional Printed Models." Micromachines 13, no. 10 (October 10, 2022): 1701. http://dx.doi.org/10.3390/mi13101701.

Abstract:
Anatomical knowledge underpins the practice of many healthcare professions. While cadaveric specimens are generally used to demonstrate realistic anatomy, high cost, ethical considerations and limited accessibility can often impede their suitability for use as teaching tools. This study aimed to develop an alternative to traditional teaching methods; a novel teaching tool using augmented reality (AR) and three-dimensional (3D) printed models to accurately demonstrate normal ankle and foot anatomy. An open-source software (3D Slicer) was used to segment a high-resolution magnetic resonance imaging (MRI) dataset of a healthy volunteer ankle and produce virtual bone and musculature objects. Bone and musculature were segmented using seed-planting and interpolation functions, respectively. Virtual models were imported into Unity 3D, which was used to develop user interface and achieve interactability prior to export to the Microsoft HoloLens 2. Three life-size models of bony anatomy were printed in yellow polylactic acid and thermoplastic polyurethane, with another model printed in white Visijet SL Flex with a supporting base attached to its plantar aspect. Interactive user interface with functional toggle switches was developed. Object recognition did not function as intended, with adequate tracking and AR superimposition not achieved. The models accurately demonstrate bony foot and ankle anatomy in relation to the associated musculature. Although segmentation outcomes were sufficient, the process was highly time consuming, with effective object recognition tools relatively inaccessible. This may limit the reproducibility of augmented reality learning tools on a larger scale. Research is required to determine the extent to which this tool accurately demonstrates anatomy and ascertain whether use of this tool improves learning outcomes and is effective for teaching anatomy.
15

Lee, Yee Sye, Ali Rashidi, Amin Talei, and Daniel Kong. "Innovative Point Cloud Segmentation of 3D Light Steel Framing System through Synthetic BIM and Mixed Reality Data: Advancing Construction Monitoring." Buildings 14, no. 4 (March 30, 2024): 952. http://dx.doi.org/10.3390/buildings14040952.

Abstract:
In recent years, mixed reality (MR) technology has gained popularity in construction management due to its real-time visualisation capability to facilitate on-site decision-making tasks. The semantic segmentation of building components provides an attractive solution towards digital construction monitoring, reducing workloads through automation techniques. Nevertheless, data shortages remain an issue in maximizing the performance potential of deep learning segmentation methods. The primary aim of this study is to address this issue through synthetic data generation using Building Information Modelling (BIM) models. This study presents a point-cloud-based deep learning segmentation approach to a 3D light steel framing (LSF) system through synthetic BIM models and as-built data captured using MR headsets. A standardisation workflow between BIM and MR models was introduced to enable seamless data exchange across both domains. A total of five different experiments were set up to identify the benefits of synthetic BIM data in supplementing actual as-built data for model training. The results showed that the average testing accuracy using solely as-built data stood at 82.88%. Meanwhile, the introduction of synthetic BIM data into the training dataset led to an improved testing accuracy of 86.15%. A hybrid dataset also enabled the model to segment both the BIM and as-built data captured using an MR headset at an average accuracy of 79.55%. These findings indicate that synthetic BIM data have the potential to supplement actual data, reducing the costs associated with data acquisition. In addition, this study demonstrates that deep learning has the potential to automate construction monitoring tasks, aiding in the digitization of the construction industry.
16

Capellini, Katia, Vincenzo Positano, Michele Murzi, Pier Andrea Farneti, Giovanni Concistrè, Luigi Landini, and Simona Celi. "A Decision-Support Informatics Platform for Minimally Invasive Aortic Valve Replacement." Electronics 11, no. 12 (June 17, 2022): 1902. http://dx.doi.org/10.3390/electronics11121902.

Abstract:
Minimally invasive aortic valve replacement is performed by mini-sternotomy (MS) or less invasive right anterior mini-thoracotomy (RT). The possibility of adopting RT is assessed by anatomical criteria derived from manual 2D image analysis. We developed a semi-automatic tool (RT-PLAN) to assess the criteria of RT, extract other parameters of surgical interest and generate a view of the anatomical region in a 3D space. Twenty-five 3D CT images from a dataset were retrospectively evaluated. The methodology starts with segmentation to reconstruct 3D surface models of the aorta and anterior rib cage. Secondly, the RT criteria and geometric information from these models are automatically and quantitatively evaluated. A comparison is made between the values of the parameters measured by the standard manual 2D procedure and our tool. The RT-PLAN procedure was feasible in all cases. Strong agreement was found between RT-PLAN and the standard manual 2D procedure. There was no difference between the RT-PLAN and the standard procedure when selecting patients for the RT technique. The tool developed is able to effectively perform the assessment of the RT criteria, with the addition of a realistic visualisation of the surgical field through virtual reality technology.
17

Kharroubi, A., R. Hajji, R. Billen, and F. Poux. "CLASSIFICATION AND INTEGRATION OF MASSIVE 3D POINTS CLOUDS IN A VIRTUAL REALITY (VR) ENVIRONMENT." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W17 (November 29, 2019): 165–71. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w17-165-2019.

Abstract:
With the increasing volume of 3D applications using immersive technologies such as virtual, augmented and mixed reality, it is very interesting to create better ways to integrate unstructured 3D data such as point clouds as a source of data. Indeed, this can lead to an efficient workflow from 3D capture to 3D immersive environment creation without the need to derive a 3D model or run lengthy optimization pipelines. In this paper, the main focus is on the direct classification and integration of massive 3D point clouds in a virtual reality (VR) environment. The emphasis is put on leveraging open-source frameworks for easy replication of the findings. First, we develop a semi-automatic segmentation approach to provide semantic descriptors (mainly classes) to groups of points. We then build an octree data structure leveraged through out-of-core algorithms to load in real time, and continuously, only the points that are in the VR user's field of view. Then, we provide an open-source solution using Unity with a user interface for VR point cloud interaction and visualisation. Finally, we provide a full semantic VR data integration enhanced through developed shaders for future spatio-semantic queries. We tested our approach on several datasets, including a point cloud composed of 2.3 billion points representing the heritage site of the castle of Jehay (Belgium). The results underline the efficiency and performance of the solution for visualising classified massive point clouds in virtual environments at more than 100 frames per second.
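As an illustration of the kind of spatial structure mentioned above (and not the authors' out-of-core, field-of-view-driven loader), the sketch below builds an octree over a synthetic point cloud with Open3D and locates the leaf containing a query point.

```python
# Octree over a synthetic point cloud (Open3D); deeper trees give smaller leaves.
import numpy as np
import open3d as o3d

points = np.random.rand(100_000, 3)
cloud = o3d.geometry.PointCloud()
cloud.points = o3d.utility.Vector3dVector(points)

octree = o3d.geometry.Octree(max_depth=8)
octree.convert_from_point_cloud(cloud, size_expand=0.01)

# Find the leaf node containing a query point (e.g. a location near the viewer).
leaf_node, leaf_info = octree.locate_leaf_node(np.array([0.5, 0.5, 0.5]))
print("leaf depth:", leaf_info.depth if leaf_info is not None else "not found")
```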
18

Herráez, Borja Javier, and Eduardo Vendrell. "Segmentación de mallas 3d de edificios históricos para levantamiento arquitectónico." Virtual Archaeology Review 9, no. 18 (January 10, 2018): 66. http://dx.doi.org/10.4995/var.2018.5858.

Abstract:
Advances in three-dimensional (3D) acquisition systems have introduced this technology to more fields of study, such as archaeology or architecture. In the architectural field, scanning a building is one of the first possible steps from which a 3D model can be obtained and can be later used for visualisation and/or feature analysis, thanks to computer-based pattern recognition tools. The automation of these tools allows for temporal savings and has become a strong aid for professionals, so that more and more methods are developed with this objective. In this article, a method for 3D mesh segmentation focused on the representation of historic buildings is proposed. This type of building is characterised by having singularities and features in façades, such as doors or windows. The main objective is to recognise these features, understanding them as those parts of the model that differ from the main structure of the building. The idea is to use a recognition algorithm for planar faces that allows users to create a graph showing the connectivity between them, therefore reflecting the shape of the 3D model. At a later step, this graph is matched against some pre-defined graphs that represent the patterns to look for. Each coincidence between both graphs indicates the position of one of the characteristics sought. The developed method has proved to be effective for feature detection and suitable for inclusion in architectural surveying applications.
19

Avena, M., E. Colucci, G. Sammartano, and A. Spanò. "HBIM MODELLING FOR AN HISTORICAL URBAN CENTRE." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 831–38. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-831-2021.

Abstract:
Research into geospatial data structuring and format interoperability is a crucial task for creating a 3D geodatabase at the urban scale. Both geometric and semantic data structuring should be considered, mainly regarding the interoperability of objects and formats generated outside the geographical space. Current reflections on 3D database generation, based on geospatial data, are mostly related to visualisation issues and context-related applications. The purposes and scale of representation according to LoDs require some reflection, particularly for the transmission of semantic information. This contribution adopts and develops the integration of some tools to derive object-oriented modelling in the HBIM environment, both at the urban and architectural scale, from point clouds obtained by UAV (Unmanned Aerial Vehicle) photogrammetry. One of the paper's objectives is retracing the analysis phases of the point clouds acquired by the UAV photogrammetry technique and their suitability for multiscale modelling. Starting from UAV clouds, through optimisation and segmentation, the proposed workflow tries to trigger the modelling of the objects according to the LoDs, comparing the one coming from CityGML and the one in use in the BIM community. The experimentation proposed is focused on the case study of the city of Norcia, which, like many other historic centres spread over the territory of central Italy, was deeply damaged by the 2016-17 earthquake.
20

Sun, Ruixue, Ruiting Chang, Tianshu Yu, Dongxin Wang, and Lijie Jiang. "U-Net Modelling-Based Imaging MAP Score for Tl Stage Nephrectomy: An Exploratory Study." Journal of Healthcare Engineering 2022 (January 5, 2022): 1–9. http://dx.doi.org/10.1155/2022/1084853.

Abstract:
We evaluate the stability of the clinical application of the MAP scoring system based on the anatomical features of renal tumour images, explore the relevance of this scoring system to the choice of surgical procedure for patients with limited renal tumours, and investigate the effectiveness of automated segmentation and reconstruction of 3D models of renal tumour images based on U-Net for interpretative cognitive navigation during laparoscopic T1-stage radical renal cancer surgery. A total of 5,000 kidney tumour images containing manual annotations were applied to the training set, and a stable and efficient full CNN model oriented to clinical needs was constructed to perform regional, multi-structure and finely automated segmentation of kidney tumour images, output the modelling information in STL format, and apply a tablet computer to intraoperatively display the T1-stage kidney tumour model for cognitive navigation. Based on a training sample of MR images from 201 patients with stage T1 renal cancer, an adaptation of the classical U-Net allows individual segmentation of important structures such as renal tumours and 3D visualisation of the structural relationships and the extent of tumour invasion at key surgical sites. The preoperative CT and clinical data of 225 patients with limited renal tumours treated surgically at our hospital from August 2011 to August 2012 were retrospectively analysed by three imaging physicians using the MAP scoring system for the total score and the variables R (maximum diameter), E (exogenous/endogenous), N (distance from the renal sinus), A (ventral/dorsal), L (relationship along the longitudinal axis of the kidney), and h (whether in contact with the renal hilum). The score for each variable was statistically compared between the three observers. Patients were divided into three groups according to the total score—low, medium, and high—and according to the surgical procedure—radical and partial resection. The correlation between the total score and the score of each variable and the choice of surgical procedure was analysed. The agreement rate of the total score and the score of each variable for all three observers was over 90% (P ≤ 0.001). The MAP scoring system based on the anatomical features of renal tumour imaging was well stabilized, and the scores were significantly correlated with the surgical approach.
21

Anderson, Matthew, Salman Sadiq, Muzammil Nahaboo Solim, Hannah Barker, David H. Steel, Maged Habib, and Boguslaw Obara. "Biomedical Data Annotation: An OCT Imaging Case Study." Journal of Ophthalmology 2023 (August 22, 2023): 1–9. http://dx.doi.org/10.1155/2023/5747010.

Abstract:
In ophthalmology, optical coherence tomography (OCT) is a widely used imaging modality, allowing visualisation of the structures of the eye with objective and quantitative cross-sectional three-dimensional (3D) volumetric scans. Due to the quantity of data generated from OCT scans and the time taken for an ophthalmologist to inspect for various disease pathology features, automated image analysis in the form of deep neural networks has seen success for the classification and segmentation of OCT layers and quantification of features. However, existing high-performance deep learning approaches rely on huge training datasets with high-quality annotations, which are challenging to obtain in many clinical applications. The collection of annotations from less experienced clinicians has the potential to alleviate time constraints from more senior clinicians, allowing faster data collection of medical image annotations; however, with less experience, there is the possibility of reduced annotation quality. In this study, we evaluate the quality of diabetic macular edema (DME) intraretinal fluid (IRF) biomarker image annotations on OCT B-scans from five clinicians with a range of experience. We also assess the effectiveness of annotating across multiple sessions following a training session led by an expert clinician. Our investigation shows a notable variance in annotation performance, with a correlation that depends on the clinician’s experience with OCT image interpretation of DME, and that having multiple annotation sessions has a limited effect on the annotation quality.
22

Ramandi, Hamed Lamei, Peyman Mostaghimi, Ryan T. Armstrong, Christoph H. Arns, Mohammad Saadatfar, Rob M. Sok, Val Pinczewski, and Mark A. Knackstedt. "Pore scale imaging and modelling of coal properties." APPEA Journal 55, no. 2 (2015): 468. http://dx.doi.org/10.1071/aj14103.

Abstract:
A key parameter in determining the productivity and commercial success of coal seam gas wells is the permeability of individual seams. Laboratory testing of core plugs is commonly used as an indicator of potential seam permeability. The highly heterogeneous and stress-dependent nature of coal makes laboratory measurements difficult to perform and the results difficult to interpret. Consequently, permeability in coal is poorly understood. The permeability of coal at the core scale depends on the geometry, topology, connectivity, mineralisation and spatial distribution of the cleat system, so a better understanding of coal permeability and the factors that control it depends on a detailed characterisation of this system. The authors used high resolution micro-focus X-ray computed tomography and 2D-3D image registration techniques to overlay tomograms of the same core plug, with and without X-ray attenuating fluids present in the pore space, with 2D scanning electron microscope images to produce detailed 3D visualisations of the geometry and topology of the cleat systems in the coal plugs. Novel filtering algorithms were used to produce segmentations that preserve cleat aperture dimensions and, together with large-scale fluid flow simulations performed directly on the images, were used to compute porosities and permeabilities. Image resolution and segmentation sensitivity studies are also described, which show that the core scale permeability is controlled by a small number of well-connected percolating cleats. The results of this study demonstrate the potential of simple image-based analysis techniques to provide rapid estimates of core plug permeabilities.
23

Clement, Alice M., Richard Cloutier, Jing Lu, Egon Perilli, Anton Maksimenko, and John Long. "A fresh look at Cladarosymblema narrienense, a tetrapodomorph fish (Sarcopterygii: Megalichthyidae) from the Carboniferous of Australia, illuminated via X-ray tomography." PeerJ 9 (December 10, 2021): e12597. http://dx.doi.org/10.7717/peerj.12597.

Abstract:
Background: The megalichthyids are one of several clades of extinct tetrapodomorph fish that lived throughout the Devonian–Permian periods. They are advanced “osteolepidid-grade” fishes that lived in freshwater swamp and lake environments, with some taxa growing to very large sizes. They bear cosmine-covered bones and a large premaxillary tusk that lies lingually to a row of small teeth. Diagnosis of the family remains controversial with various authors revising it several times in recent works. There are fewer than 10 genera known globally, and only one member definitively identified from Gondwana. Cladarosymblema narrienense Fox et al. 1995 was described from the Lower Carboniferous Raymond Formation in Queensland, Australia, on the basis of several well-preserved specimens. Despite this detailed work, several aspects of its anatomy remain undescribed. Methods: Two especially well-preserved 3D fossils of Cladarosymblema narrienense, including the holotype specimen, are scanned using synchrotron or micro-computed tomography (µCT), and 3D modelled using specialist segmentation and visualisation software. New anatomical detail, in particular internal anatomy, is revealed for the first time in this taxon. A novel phylogenetic matrix, adapted from other recent work on tetrapodomorphs, is used to clarify the interrelationships of the megalichthyids and confirm the phylogenetic position of C. narrienense. Results: Never before seen morphological details of the palate, hyoid arch, basibranchial skeleton, pectoral girdle and axial skeleton are revealed and described. Several additional features are confirmed or updated from the original description. Moreover, the first full, virtual cranial endocast of any tetrapodomorph fish is presented and described, giving insight into the early neural adaptations in this group. Phylogenetic analysis confirms the monophyly of the Megalichthyidae with seven genera included (Askerichthys, Cladarosymblema, Ectosteorhachis, Mahalalepis, Megalichthys, Palatinichthys, and Sengoerichthys). The position of the megalichthyids as sister group to canowindrids, crownward of “osteolepidids” (e.g., Osteolepis and Gogonasus), but below “tristichopterids” such as Eusthenopteron is confirmed, but our findings suggest further work is required to resolve megalichthyid interrelationships.
24

Toro, Roberto, Rembrandt Bakker, Thierry Delzescaux, Alan Evans, and Paul Tiesinga. "FIIND: Ferret Interactive Integrated Neurodevelopment Atlas." Research Ideas and Outcomes 4 (March 30, 2018): e25312. http://dx.doi.org/10.3897/rio.4.e25312.

Abstract:
The first days after birth in ferrets provide a privileged view of the development of a complex mammalian brain. Unlike mice, ferrets develop a rich pattern of deep neocortical folds and cortico-cortical connections. Unlike humans and other primates, whose brains are well differentiated and folded at birth, ferrets are born with a very immature and completely smooth neocortex: folds, neocortical regionalisation and cortico-cortical connectivity develop in ferrets during the first postnatal days. After a period of fast neocortical expansion, during which brain volume increases by up to a factor of 4 in 2 weeks, the ferret brain reaches its adult volume at about 6 weeks of age. Ferrets could thus become a major animal model to investigate the neurobiological correlates of the phenomena observed in human neuroimaging. Many of these phenomena, such as the relationship between brain folding, cortico-cortical connectivity and neocortical regionalisation cannot be investigated in mice, but could be investigated in ferrets. Our aim is to provide the research community with a detailed description of the development of a complex brain, necessary to better understand the nature of human neuroimaging data, create models of brain development, or analyse the relationship between multiple spatial scales. We have already started a project to constitute an open, collaborative atlas of ferret brain development, integrating multi-modal and multi-scale data. We have acquired data for 28 ferrets (4 animals per time point from P0 to adults), using high-resolution MRI and diffusion tensor imaging (DTI). We have developed an open-source pipeline to segment and produce – online – 3D reconstructions of brain MRI data. We propose to process the brains of 16 of our specimens (from P0 to P16) using high-throughput 3D histology, staining for cytoarchitectonic landmarks, neuronal progenitors and neurogenesis. This would allow us to relate the MRI data that we have already acquired with multi-dimensional cell-scale information. Brains will be sectioned at 25 μm, stained, scanned at 0.25 μm of resolution, and processed for real-time multi-scale visualisation. We will extend our current web-platform to integrate an interactive multi-scale visualisation of the data. Using our combined expertise in computational neuroanatomy, multi-modal neuroimaging, neuroinformatics, and the development of inter-species atlases, we propose to build an open-source web platform to allow the collaborative, online, creation of atlases of the development of the ferret brain. The web platform will allow researchers to access and visualise interactively the MRI and histology data. It will also allow researchers to create collaborative, human curated, 3D segmentations of brain structures, as well as vectorial atlases. Our work will provide a first integrated atlas of ferret brain development, and the basis for an open platform for the creation of collaborative multi-modal, multi-scale, multi-species atlases.
25

Leahey, Lucy G., Ralph E. Molnar, Kenneth Carpenter, Lawrence M. Witmer, and Steven W. Salisbury. "Cranial osteology of the ankylosaurian dinosaur formerly known asMinmisp. (Ornithischia: Thyreophora) from the Lower Cretaceous Allaru Mudstone of Richmond, Queensland, Australia." PeerJ 3 (December 8, 2015): e1475. http://dx.doi.org/10.7717/peerj.1475.

Abstract:
Minmi is the only known genus of ankylosaurian dinosaur from Australia. Seven specimens are known, all from the Lower Cretaceous of Queensland. Only two of these have been described in any detail: the holotype specimen Minmi paravertebra from the Bungil Formation near Roma, and a near complete skeleton from the Allaru Mudstone on Marathon Station near Richmond, preliminarily referred to a possible new species of Minmi. The Marathon specimen represents one of the world’s most complete ankylosaurian skeletons and the best-preserved dinosaurian fossil from eastern Gondwana. Moreover, among ankylosaurians, its skull is one of only a few in which the majority of sutures have not been obliterated by dermal ossifications or surface remodelling. Recent preparation of the Marathon specimen has revealed new details of the palate and narial regions, permitting a comprehensive description and thus providing new insights into the cranial osteology of a basal ankylosaurian. The skull has also undergone computed tomography, digital segmentation and 3D computer visualisation enabling the reconstruction of its nasal cavity and endocranium. The airways of the Marathon specimen are more complicated than those of non-ankylosaurian dinosaurs but less so than derived ankylosaurians. The cranial (brain) endocast is superficially similar to those of other ankylosaurians but is strongly divergent in many important respects. The inner ear is extremely large and unlike that of any dinosaur yet known. Based on a high number of diagnostic differences between the skull of the Marathon specimen and other ankylosaurians, we consider it prudent to assign this specimen to a new genus and species of ankylosaurian. Kunbarrasaurus ieversi gen. et sp. nov. represents the second genus of ankylosaurian from Australia and is characterised by an unusual melange of both primitive and derived characters, shedding new light on the evolution of the ankylosaurian skull.
26

Grzelka, Kornelia, Jarosław Bydłosz, and Agnieszka Bieda. "Analysis of the prospects for the development of 3D cadastral visualisation." Acta Scientiarum Polonorum Administratio Locorum 22, no. 1 (March 31, 2023): 45–57. http://dx.doi.org/10.31648/aspal.8550.

Abstract:
Motives: In the past twenty years, considerable progress has been made in 3D real estate cadastres and 3D visualisation technologies. These developments require advanced solutions for the visualisation of 3D cadastral objects. Aim: The main aim of this study was to propose an optimal 3D cadastre visualisation strategy that accounts for user needs, the types of visualised data, and visualisation platforms. Results: The optimal 3D cadastre visualisation strategy was determined by performing a SWOT/TOWS analysis. Both internal and external factors that can influence the development of 3D cadastre visualisation policies were considered in the analysis. The results of the study were used to propose an aggressive strategy (based on the identified strengths and opportunities) for the development of 3D cadastre visualisation.
27

Wodehouse, Andrew, and Mohammed Abba. "3D Visualisation for Online Retail." International Journal of Market Research 58, no. 3 (May 2016): 451–72. http://dx.doi.org/10.2501/ijmr-2016-027.

Abstract:
This work investigates the effect of 3D product visualisation on online shopping behaviour. A virtual shopping interface with product categories projected in both 2D and 3D was developed and deployed. The main purpose of this system was to determine the suitability of a 3D virtual catalogue as a shopping outlet for consumers and the potential impact on consumer shopping behaviour. The virtual catalogue was implemented as a web-based interface, with products displayed with the intent of determining whether the level of presence experienced affected consumer motivations to shop. Participants completed an immersive tendency questionnaire to ascertain their alertness and levels of immersion before viewing the interface, and afterwards completed a presence questionnaire related to the viewing experience. The results showed significant correlations between individual immersive tendencies and presence experienced. In addition, items in the presence questionnaire were aligned with ease of use, interactivity and realism. This leads to a number of recommendations for the design of future virtual shopping environments and considerations for the assessment of online consumer behaviour.
28

Segerman, Henry. "3D Printing for Mathematical Visualisation." Mathematical Intelligencer 34, no. 4 (September 26, 2012): 56–62. http://dx.doi.org/10.1007/s00283-012-9319-7.

29

Roslan, Rose Khairunnisa, and Azlina Ahmad. "3D Spatial Visualisation Skills Training Application for School Students Using Hologram Pyramid." JOIV : International Journal on Informatics Visualization 1, no. 4 (November 13, 2017): 170. http://dx.doi.org/10.30630/joiv.1.4.61.

Abstract:
Students need good visualisation skills to perform well in the fields of Science, Technology, Engineering and Mathematics (STEM). However, two preliminary studies conducted with six teachers and fifty (50) Year 4 students at a local school showed that most of them face difficulties in visualising 3D objects. From these findings and a review of the literature, we proposed a conceptual model called the 3D Spatial Visual Skills Training (3D SVST) model. The 3D SVST model was applied as a basis to develop an application to improve 3D visualisation skills among elementary school students using a floating image technology known as the hologram pyramid. In this paper, we report the findings from an evaluation of students' performance in a visualisation skills test using the 3D SVST application. Fifty (50) Year 4 students from a local school in the state of Selangor participated in the study. Two types of tests were conducted with the students: a 3D visualisation skills test using paper and a 3D visualisation skills test using the hologram pyramid. The tests include the Paper Folding Task (PFT), Mental Rotating Task (MRT) and Virtual Building Component (VBC). The results show that students' visualisation skills improved when using the hologram pyramid application. The study also found that students performed better in the PFT but lower in the MRT and VBC. From the findings, we can conclude that the hologram pyramid has a positive impact on students' visualisation skills. Therefore, it has the potential to be used in the classroom to complement other teaching and learning materials.
30

de la Losa, Arnaud, and Bernard Cervelle. "3D Topological modeling and visualisation for 3D GIS." Computers & Graphics 23, no. 4 (August 1999): 469–78. http://dx.doi.org/10.1016/s0097-8493(99)00066-7.

31

Neuville, R., J. Pouliot, F. Poux, P. Hallot, L. De Rudder, and R. Billen. "TOWARDS A DECISION SUPPORT TOOL FOR 3D VISUALISATION: APPLICATION TO SELECTIVITY PURPOSE OF SINGLE OBJECT IN A 3D CITY SCENE." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-4/W5 (October 23, 2017): 91–97. http://dx.doi.org/10.5194/isprs-annals-iv-4-w5-91-2017.

Abstract:
This paper deals with the establishment of a comprehensive methodological framework that defines 3D visualisation rules and its application in a decision support tool. Whilst the use of 3D models grows in many application fields, their visualisation remains challenging from the point of view of the mapping and rendering aspects to be applied to suitably support the decision-making process. Indeed, a great number of 3D visualisation techniques exist but, as far as we know, a decision support tool that facilitates the production of an efficient 3D visualisation is still missing. This is why a comprehensive methodological framework is proposed in order to build decision tables for specific data, tasks and contexts. Based on the second-order logic formalism, we define a set of functions and propositions among and between two collections of entities: on the one hand static retinal variables (hue, size, shape…) and 3D environment parameters (directional lighting, shadow, haze…) and on the other hand their effect(s) regarding specific visual tasks. This makes it possible to define 3D visualisation rules according to four categories: consequence, compatibility, potential incompatibility and incompatibility. In this paper, the application of the methodological framework is demonstrated for an urban visualisation at high density considering a specific set of entities. On the basis of our analysis and the results of many studies conducted in 3D semiotics, which refers to the study of symbols and how they relay information, the truth values of the propositions are determined. 3D visualisation rules are then extracted for the considered context and set of entities and are presented in a decision table with colour coding. Finally, the decision table is implemented in a plugin developed with three.js, a cross-browser JavaScript library. The plugin consists of a sidebar and warning windows that help the designer in the use of a set of static retinal variables and 3D environment parameters.
32

Noordegraaf, Julia, Loes Opgenhaffen, and Norbert Bakker. "Cinema Parisien 3D." Alphaville: Journal of Film and Screen Media, no. 11 (August 17, 2016): 45–61. http://dx.doi.org/10.33178/alpha.11.03.

Abstract:
In this article we evaluate the relevance of 3D visualisation as a research tool for the history of cinemagoing. How does the process of building a 3D model of cinema theatres relate to what we already know about this history? In which ways does the modelling process allow for the synthesis of different types of archived cinema heritage assets? To what extent does this presentation of “content in context” helps us to better understand the history of film consumption? We will address these questions via a discussion of a specific case study, our visualisation of Jean Desmet’s Amsterdam Cinema Parisien theatre, one of the first permanent cinemas of the Dutch capital. First, we reflect on 3D as a research tool, outlining its technology and methodological principles and its usefulness for research into the historiography of moviegoing. Then we describe our 3D visualisation of Cinema Parisien, discussing the process of researching and building the model. Finally, we evaluate the result against the existing knowledge about the history of cinemagoing in Amsterdam and of this cinema theatre in particular, and answer the question to what extent 3D as a research tool can aid our understanding of the history of cinema consumption.
33

Isaacs, John P., Ruth E. Falconer, Daniel J. Gilmour, and David J. Blackwood. "Enhancing urban sustainability using 3D visualisation." Proceedings of the Institution of Civil Engineers - Urban Design and Planning 164, no. 3 (September 2011): 163–73. http://dx.doi.org/10.1680/udap.900034.

34

Kočevar, Tanja Nuša, and Helena Gabrijelčič Tomc. "3D Visualisation of Woven Fabric Porosity." Tekstilec 59, no. 1 (March 25, 2016): 28–40. http://dx.doi.org/10.14502/tekstilec2016.59.28-40.

35

Teles, Bruno, Pedro Mariano, and Pedro Santana. "Game-Like 3D Visualisation of Air Quality Data." Multimodal Technologies and Interaction 4, no. 3 (August 17, 2020): 54. http://dx.doi.org/10.3390/mti4030054.

Abstract:
The data produced by sensor networks for urban air quality monitoring is becoming a valuable asset for informed health-aware human activity planning. However, in order to properly explore and exploit these data, citizens need intuitive and effective ways of interacting with it. This paper presents CityOnStats, a visualisation tool developed to provide users, mainly adults and young adults, with a game-like 3D environment populated with air quality sensing data, as an alternative to the traditionally passive visualisation techniques. CityOnStats provides several visual cues of pollution presence with the purpose of meeting each user’s preferences. Usability tests with a sample of 30 participants have shown the value of air quality 3D game-based visualisation and have provided empirical support for which visual cues are most adequate for the task at hand.
36

Wada, Tomohito, Mirai Mizutani, James Lee, David Rowlands, and Daniel James. "3D Visualisation of Wearable Inertial/Magnetic Sensors." Proceedings 2, no. 6 (February 22, 2018): 292. http://dx.doi.org/10.3390/proceedings2060292.

37

Chunjiang, Zhao, Wang Gongming, Guo Xinyu, Li Changfeng, Lu Shenglian, Du Xiaohong, and Hao Ruirui. "Soil pore visualisation using 3D Koch curves." New Zealand Journal of Agricultural Research 50, no. 5 (December 2007): 919–26. http://dx.doi.org/10.1080/00288230709510368.

38

Kleut, Duska, Milorad Jovanovic, and Branimir Reljin. "3D visualisation of MRI images using MATLAB." Journal of Automatic Control 16, no. 1 (2006): 1–3. http://dx.doi.org/10.2298/jac0601001k.

Abstract:
This paper describes the use of MATLAB in the three-dimensional reconstruction of human brain MRI images. The programme that was designed enables the observation of dissections of the resulting 3D structure along three axes.
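As a minimal illustration of dissecting a reconstructed volume along three axes, the sketch below extracts axial, sagittal and coronal slices from a 3D array stored in a flat buffer. It is written in TypeScript rather than the paper's MATLAB; the volume layout and all names are assumptions, not the paper's implementation.

// Sketch of dissecting a reconstructed 3D volume along the three axes.
// The volume is assumed to be stored as a flat Float32Array in x-fastest
// order; all names and the layout convention are illustrative.

interface Volume {
  data: Float32Array; // length = nx * ny * nz
  nx: number;
  ny: number;
  nz: number;
}

// Flat-array index of voxel (x, y, z).
const voxelIndex = (v: Volume, x: number, y: number, z: number) =>
  x + v.nx * (y + v.ny * z);

// Axial slice: fixed z, varying x and y.
function axialSlice(v: Volume, z: number): Float32Array {
  const out = new Float32Array(v.nx * v.ny);
  for (let y = 0; y < v.ny; y++)
    for (let x = 0; x < v.nx; x++)
      out[x + v.nx * y] = v.data[voxelIndex(v, x, y, z)];
  return out;
}

// Sagittal slice: fixed x, varying y and z.
function sagittalSlice(v: Volume, x: number): Float32Array {
  const out = new Float32Array(v.ny * v.nz);
  for (let z = 0; z < v.nz; z++)
    for (let y = 0; y < v.ny; y++)
      out[y + v.ny * z] = v.data[voxelIndex(v, x, y, z)];
  return out;
}

// Coronal slice: fixed y, varying x and z.
function coronalSlice(v: Volume, y: number): Float32Array {
  const out = new Float32Array(v.nx * v.nz);
  for (let z = 0; z < v.nz; z++)
    for (let x = 0; x < v.nx; x++)
      out[x + v.nx * z] = v.data[voxelIndex(v, x, y, z)];
  return out;
}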
39

Stojanovic, Nikola, and Vladan Vuckovic. "Real-Time 3D Visualisation for inverted pendulum." Facta universitatis - series: Electronics and Energetics 23, no. 3 (2010): 299–309. http://dx.doi.org/10.2298/fuee1003299s.

Abstract:
Real-Time 3D Visualisation for Inverted Pendulum was developed by the authors as the result of advanced research on the real-time 3D simulation of an inverted pendulum. Practical applications and robots, mostly controlled by microprocessor-based fuzzy controllers, have generated renewed interest in these machines. Our application is intended to simulate the movements of the inverted pendulum in real time and can interact with the user during the generation process. It includes the Inverted Pendulum Generator (written in Delphi) and the 3D Pendulum Viewer. The application is standalone.
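As a rough illustration of the kind of state update a real-time 3D viewer could consume each frame, the following TypeScript sketch integrates a heavily simplified inverted pendulum model: a rigid rod pivoting about a fixed point, with the angle measured from the upright position. The model, the integrator and all names are assumptions; they do not reproduce the authors' Delphi generator.

// Heavily simplified inverted-pendulum state update: a rigid rod pivoting
// about a fixed point, with the angle measured from the upright vertical so
// gravity pushes the rod away from theta = 0. A 3D viewer would read `theta`
// each frame to orient the rod. All names and the model are illustrative.

interface PendulumState {
  theta: number; // angle from the upright vertical [rad]
  omega: number; // angular velocity [rad/s]
}

const g = 9.81;        // gravitational acceleration [m/s^2]
const rodLength = 1.0; // rod length [m]
const damping = 0.1;   // viscous damping coefficient [1/s]

// Advance the state by dt seconds using semi-implicit Euler integration.
// `controlInput` stands in for a stabilising torque (already divided by the
// moment of inertia), e.g. one computed by a fuzzy controller.
function step(s: PendulumState, dt: number, controlInput = 0): PendulumState {
  const alpha = (g / rodLength) * Math.sin(s.theta) - damping * s.omega + controlInput;
  const omega = s.omega + alpha * dt;
  const theta = s.theta + omega * dt;
  return { theta, omega };
}

// Example: one simulated second at 60 updates per second with no control;
// the uncontrolled pendulum falls away from the upright position.
let state: PendulumState = { theta: 0.05, omega: 0 };
for (let i = 0; i < 60; i++) state = step(state, 1 / 60);
console.log(state.theta);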
40

Smessaert, Hans. "On the 3D Visualisation of Logical Relations." Logica Universalis 3, no. 2 (July 25, 2009): 303–32. http://dx.doi.org/10.1007/s11787-009-0010-5.

41

Tziakos, Ioannis, Andrea Cavallaro, and Li-Qun Xu. "Video event segmentation and visualisation in non-linear subspace." Pattern Recognition Letters 30, no. 2 (January 2009): 123–31. http://dx.doi.org/10.1016/j.patrec.2008.02.028.

42

Isikdag, U., and K. Sahin. "WEB BASED 3D VISUALISATION OF TIME-VARYING AIR QUALITY INFORMATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4 (September 19, 2018): 267–74. http://dx.doi.org/10.5194/isprs-archives-xlii-4-267-2018.

Abstract:
Many countries with high rates of industrial development and production face the side effects of low air quality and air pollution. There is an evident correlation between the topographic and climatic properties of a location and the air pollution and air quality at that location. As the variation of air quality is location-dependent, air quality information should be acquired, utilised, stored and presented in the form of Geo-Information. On the other hand, as this information relates to public health concerns, it should be publicly available and presented through an easily accessible medium and a commonly used interface. Efficient storage of time-varying air quality information, combined with an efficient mechanism for web-based 3D visualisation, would greatly help in disseminating air quality information to the public. This research is focused on web-based 3D visualisation of time-varying air quality data. A web-based interactive system was developed to visualise pollutant levels acquired at hourly intervals from more than 100 stations in Turkey between 2008 and 2017. The research also concentrated on the visualisation of high-volume geospatial data. In the system, visualisation can be achieved on demand by querying an air pollutant information database of 10,330,629 records and a city object database with more than 700,000 records. The paper elaborates on the details of this research. Following a background on air quality, air quality models, and Geo-Information visualisation, the system architecture and functionality are presented. The paper concludes with the results of usability tests of the system.
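As a sketch of the on-demand query step that such a system needs before rendering anything in 3D, the following TypeScript function filters a set of hourly pollutant records by a requested time window and averages them per station. The record shape, station identifiers and all names are illustrative assumptions, not the system's actual database schema or API.

// Sketch of the on-demand query step: filter hourly pollutant records by a
// requested time window and average them per station, producing one value
// per station for the 3D scene to render. The record shape and all names
// are illustrative assumptions, not the system's actual schema.

interface PollutantRecord {
  stationId: string;
  timestamp: number; // Unix time of the hourly measurement [ms]
  pm10: number;      // pollutant concentration [µg/m³]
}

interface StationLevel {
  stationId: string;
  meanPm10: number;
}

// Average PM10 per station over the [from, to] window.
function stationLevels(records: PollutantRecord[], from: number, to: number): StationLevel[] {
  const sums = new Map<string, { total: number; count: number }>();
  for (const r of records) {
    if (r.timestamp < from || r.timestamp > to) continue;
    const acc = sums.get(r.stationId) ?? { total: 0, count: 0 };
    acc.total += r.pm10;
    acc.count += 1;
    sums.set(r.stationId, acc);
  }
  return Array.from(sums.entries()).map(([stationId, acc]) => ({
    stationId,
    meanPm10: acc.total / acc.count,
  }));
}

// Example: per-station means for one day across three hypothetical readings.
const sample: PollutantRecord[] = [
  { stationId: "TR-034", timestamp: 0, pm10: 42 },
  { stationId: "TR-034", timestamp: 3_600_000, pm10: 58 },
  { stationId: "TR-006", timestamp: 3_600_000, pm10: 17 },
];
console.log(stationLevels(sample, 0, 86_400_000));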
43

Guerrero, J., S. Zlatanova, and M. Meijers. "3D VISUALISATION OF UNDERGROUND PIPELINES: BEST STRATEGY FOR 3D SCENE CREATION." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-2/W1 (September 13, 2013): 139–45. http://dx.doi.org/10.5194/isprsannals-ii-2-w1-139-2013.

44

Klesper, C. "IVIS-3D: A tool for interactive 3D-visualisation of gravity models." Physics and Chemistry of the Earth 23, no. 3 (January 1998): 279–83. http://dx.doi.org/10.1016/s0079-1946(98)00025-1.

45

Bolliger, Michael J., Ursula Buck, Michael J. Thali, and Stephan A. Bolliger. "Reconstruction and 3D visualisation based on objective real 3D based documentation." Forensic Science, Medicine, and Pathology 8, no. 3 (October 7, 2011): 208–17. http://dx.doi.org/10.1007/s12024-011-9288-8.

46

Opiyo, Eliab Z., and Imre Horvath. "Towards an interactive spatial product visualisation: a comparative analysis of prevailing 3D visualisation paradigms." International Journal of Product Development 11, no. 1/2 (2010): 6. http://dx.doi.org/10.1504/ijpd.2010.032987.

47

Moore, Andrew, David J. Storey, and Darren Stanton. "Better reservoir visualisation." APPEA Journal 52, no. 1 (2012): 475. http://dx.doi.org/10.1071/aj11037.

Abstract:
Santos, a significant Australian energy company, sponsors open source software to improve 3D reservoir visualisation. The software, TurboVNC, allows users of standard laptops to connect to servers running Paradigm exploration and production software from any network location. Performance, collaboration and data management benefits are coupled with capital and operational savings of $2.5 million AUD. Santos’s TurboVNC project won the global Innovator of the Year award (2011) with Red Hat, suppliers of Linux, the server operating system. Beyond these immediate benefits, the real value of thin client application delivery is the ability to centralise data in one large database. This facilitates consistent data standards and quality procedures to be applied. New insights and value can be derived from the consolidated big data gathered from the full exploration and production spectrum. The hypothesis is that access to larger, integrated data sets can result in better reservoir models, reduced uncertainty and optimised production.
48

Cerreta, Maria, Roberta Mele, and Giuliano Poli. "Urban Ecosystem Services (UES) Assessment within a 3D Virtual Environment: A Methodological Approach for the Larger Urban Zones (LUZ) of Naples, Italy." Applied Sciences 10, no. 18 (September 7, 2020): 6205. http://dx.doi.org/10.3390/app10186205.

Abstract:
The complexity of the urban spatial configuration, which affects human wellbeing and landscape functioning, necessitates data acquisition and three-dimensional (3D) visualisation to support effective decision-making processes. One of the main challenges in sustainability research is to conceive spatial models adapting to changes in scale and recalibrate the related indicators, depending on scale and data availability. From this perspective, the inclusion of the third dimension in the Urban Ecosystem Services (UES) identification and assessment can enhance the detail in which urban structure–function relationships can be studied. Moreover, improving the modelling and visualisation of 3D UES indicators can aid decision-makers in localising, analysing, assessing, and managing urban development strategies. The main goal of the proposed framework is concerned with evaluating, planning, and monitoring UES within a 3D virtual environment, in order to improve the visualisation of spatial relationships among services and to support site-specific planning choices.
49

Jarna, A., B. O. Grøtan, I. H. C. Henderson, S. Iversen, E. Khloussy, B. Nordahl, and B. I. Rindstad. "MANAGING GEOLOGICAL PROFILES IN DATABASES FOR 3D VISUALISATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W2 (October 6, 2016): 115–21. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w2-115-2016.

Abstract:
Geology and all geological structures are three-dimensional in space. GIS and databases are common tools used by geologists to interpret and communicate geological data. The NGU (Geological Survey of Norway) is the national institution for the study of bedrock, mineral resources, surficial deposits, groundwater and marine geology. 3D geology is usually described by geological profiles, or vertical sections through a map, which show the rock structure below the surface. The goal is to gradually expand the usability of existing and new geological profiles, to make them more readily available in retail applications and to simplify the entry and registration of profiles. The project target is to develop a methodology for the acquisition, modification and use of data and its presentation on the web, by creating a user interface directly linked to NGU’s webpage. This will allow users to visualise profiles in a 3D model.
50

Jeffrey, Stuart, Siân Jones, Mhairi Maxwell, Alex Hale, and Cara Jones. "3D visualisation, communities and the production of significance." International Journal of Heritage Studies 26, no. 9 (February 27, 2020): 885–900. http://dx.doi.org/10.1080/13527258.2020.1731703.

