Journal articles on the topic "Imagerie fUS"

Follow this link to see other types of publications on this topic: Imagerie fUS.

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Consult the 50 best journal articles on the topic "Imagerie fUS".

An "Add to bibliography" button is available next to each work in the list. Use it and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, provided the relevant details are available in the work's metadata.

Browse journal articles from many different fields and compile your bibliography accordingly.

1

Pascual, Javier, Ander Ramos and Carmen Vidaurre. "Classifying motor imagery with FES induced EEG patterns". Neuroscience Letters 500 (July 2011): e48. http://dx.doi.org/10.1016/j.neulet.2011.05.209.

2

Choi, Inchul, Gyu Hyun Kwon, Sangwon Lee and Chang S. Nam. "Functional Electrical Stimulation Controlled by Motor Imagery Brain-Computer Interface for Rehabilitation". Brain Sciences 10, no. 8 (2.08.2020): 512. http://dx.doi.org/10.3390/brainsci10080512.

Abstract:
Sensorimotor rhythm (SMR)-based brain–computer interface (BCI) controlled Functional Electrical Stimulation (FES) has gained importance in recent years for the rehabilitation of motor deficits. However, there still remain many research questions to be addressed, such as unstructured Motor Imagery (MI) training procedures; a lack of methods to classify different MI tasks in a single hand, such as grasping and opening; and difficulty in decoding voluntary MI-evoked SMRs compared to FES-driven passive-movement-evoked SMRs. To address these issues, a study that is composed of two phases was conducted to develop and validate an SMR-based BCI-FES system with 2-class MI tasks in a single hand (Phase 1), and investigate the feasibility of the system with stroke and traumatic brain injury (TBI) patients (Phase 2). The results of Phase 1 showed that the accuracy of classifying 2-class MIs (approximately 71.25%) was significantly higher than the true chance level, while that of distinguishing voluntary and passive SMRs was not. In Phase 2, where the patients performed goal-oriented tasks in a semi-asynchronous mode, the effects of the FES existence type and adaptive learning on task performance were evaluated. The results showed that adaptive learning significantly increased the accuracy, and the accuracy after applying adaptive learning under the No-FES condition (61.9%) was significantly higher than the true chance level. The outcomes of the present research would provide insight into SMR-based BCI-controlled FES systems that can connect those with motor disabilities (e.g., stroke and TBI patients) to other people by greatly improving their quality of life. Recommendations for future work with a larger sample size and kinesthetic MI were also presented.
3

Savic, A., N. Malešević and M. B. Popovic. "11. Motor imagery based BCI for control of FES". Clinical Neurophysiology 124, no. 7 (July 2013): e11-e12. http://dx.doi.org/10.1016/j.clinph.2012.12.020.

4

Boone, C. D., P. F. Bernath and M. Lecours. "Version 5 retrievals for ACE-FTS and ACE-imagers". Journal of Quantitative Spectroscopy and Radiative Transfer 310 (December 2023): 108749. http://dx.doi.org/10.1016/j.jqsrt.2023.108749.

5

Zhou, Jing, Huawei Mou, Jianfeng Zhou, Md Liakat Ali, Heng Ye, Pengyin Chen and Henry T. Nguyen. "Qualification of Soybean Responses to Flooding Stress Using UAV-Based Imagery and Deep Learning". Plant Phenomics 2021 (28.06.2021): 1–13. http://dx.doi.org/10.34133/2021/9892570.

Abstract:
Soybean is sensitive to flooding stress that may result in poor seed quality and significant yield reduction. Soybean production under flooding could be sustained by developing flood-tolerant cultivars through breeding programs. Conventionally, soybean tolerance to flooding in field conditions is evaluated by visually rating the shoot injury/damage due to flooding stress, which is labor-intensive and subjective to human error. Recent developments of field high-throughput phenotyping technology have shown great potential in measuring crop traits and detecting crop responses to abiotic and biotic stresses. The goal of this study was to investigate the potential in estimating flood-induced soybean injuries using UAV-based image features collected at different flight heights. The flooding injury score (FIS) of 724 soybean breeding plots was taken visually by breeders when soybean showed obvious injury symptoms. Aerial images were taken on the same day using a five-band multispectral and an infrared (IR) thermal camera at 20, 50, and 80 m above ground. Five image features, i.e., canopy temperature, normalized difference vegetation index, canopy area, width, and length, were extracted from the images at three flight heights. A deep learning model was used to classify the soybean breeding plots to five FIS ratings based on the extracted image features. Results show that the image features were significantly different at three flight heights. The best classification performance was obtained by the model developed using image features at 20 m with 0.9 for the five-level FIS. The results indicate that the proposed method is very promising in estimating FIS for soybean breeding.
6

Chandra Agustina, Haris, I Made Oka Widyantara and I. G. A. K. Diafari Djuni H. "PEMBANGKITAN CITRA TIME EXPOSURE MENGGUNAKAN FILTER MEDIAN". Jurnal SPEKTRUM 8, no. 2 (12.07.2021): 281. http://dx.doi.org/10.24843/spektrum.2021.v08.i02.p32.

Abstract:
Time exposure (timex) image is a type of image generated from image acquisition at a certain time or image acquisition with different exposure criteria, the advantage of timex image is High Dynamic Range which can help provide details in a digital image processing. To produce a good timex image, a method is needed that is able to create a timex image from either a video or photo source. There are several methods used to produce timex images, namely Gradient-Based Synthesized, Multi-Exposure Image, and Median Filter. In this study, the three methods are compared in producing timex images with input in the form of images with over exposure and low exposure as well as video images. For testing the timex image, the Histogram, Standard Deviation, Variance, Mean, Median, and Mode parameters are used. Based on the results of research conducted on timex images with the Median Filter method, the standard deviation value is 0.0739 and the variance value is 0.0054 where this value indicates that the intensity distribution on the median timex filter image has a wider dynamic range than the other two methods. The comparison of the timex image with the fps variation shows that the higher the fps used, the better the timex image.
7

Sari, Dewi Mutiara, Bayu Sandi Marta, Muhammad Amin A and Haryo Dwito Armono. "The Analysis of Underwater Imagery System for Armor Unit Monitoring Application". International Journal of Artificial Intelligence & Robotics (IJAIR) 5, no. 1 (29.04.2023): 1–12. http://dx.doi.org/10.25139/ijair.v5i1.5918.

Abstract:
The placement of armor units for breakwaters in Indonesia is still done manually, which depends on divers in each placement of the armor unit. The use of divers is less effective due to limited communication between divers and excavator operators, making divers in the water take a long time. This makes the diver's job risky and expensive. This research presents a vision system to reduce the diver's role in adjusting the position of each armor unit. This vision system is built with two cameras connected to a mini-computer. This system has an image improvement process by comparing three methods. The results obtained are an average frame per second is 20.71 without applying the method, 0.45 fps for using the multi-scale retinex with color restoration method, 16.75 fps for applying the Contrast Limited Adaptive Histogram Equalization method, 16.17 fps for applying the Histogram Equalization method. The image quality evaluation uses the underwater color quality evaluation with 48 data points. The method that has experienced the most improvement in image quality is multi-scale retinex with color restoration. Forty data have improved image quality with an average of 14,131, or 83.33%. The number of images that experienced the highest image quality improvement was using the multi-scale retinex with color restoration method. Meanwhile, for image quality analysis based on Underwater Image Quality Measures, out of a total of 48 images, the method with the highest value for image quality is the contrast limited adaptive histogram equalization method. 100% of images have the highest image matrix value with an average value is 33.014.
8

Buchholz, Jan, Jan Krieger, Claudio Bruschini, Samuel Burri, Andrei Ardelean, Edoardo Charbon and Jörg Langowski. "Widefield High Frame Rate Single-Photon SPAD Imagers for SPIM-FCS". Biophysical Journal 114, no. 10 (May 2018): 2455–64. http://dx.doi.org/10.1016/j.bpj.2018.04.029.

9

Opasatian, Ithiphat, and Tofael Ahamed. "Driveway Detection for Weed Management in Cassava Plantation Fields in Thailand Using Ground Imagery Datasets and Deep Learning Models". AgriEngineering 6, no. 3 (18.09.2024): 3408–26. http://dx.doi.org/10.3390/agriengineering6030194.

Abstract:
Weeds reduce cassava root yields and infest furrow areas quickly. The use of mechanical weeders has been introduced in Thailand; however, manually aligning the weeders with each planting row and at headland turns is still challenging. It is critical to clear weeds on furrow slopes and driveways via mechanical weeders. Automation can support this difficult work for weed management via driveway detection. In this context, deep learning algorithms have the potential to train models to detect driveways through furrow image segmentation. Therefore, the purpose of this research was to develop an image segmentation model for automated weed control operations in cassava plantation fields. To achieve this, image datasets were obtained from various fields to aid weed detection models in automated weed management. Three models—Mask R-CNN, YOLACT, and YOLOv8n-seg—were used to construct the image segmentation model, and they were evaluated according to their precision, recall, and FPS. The results show that YOLOv8n-seg achieved the highest accuracy and FPS (114.94 FPS); however, it experienced issues with frame segmentation during video testing. In contrast, YOLACT had no segmentation issues in the video tests (23.45 FPS), indicating its potential for driveway segmentation in cassava plantations. In summary, image segmentation for detecting driveways can improve weed management in cassava fields, and the further automation of low-cost mechanical weeders in tropical climates can be performed based on the YOLACT algorithm.
10

Han, Seongkyun, Jisang Yoo and Soonchul Kwon. "Real-Time Vehicle-Detection Method in Bird-View Unmanned-Aerial-Vehicle Imagery". Sensors 19, no. 18 (13.09.2019): 3958. http://dx.doi.org/10.3390/s19183958.

Abstract:
Vehicle detection is an important research area that provides background information for the diversity of unmanned-aerial-vehicle (UAV) applications. In this paper, we propose a vehicle-detection method using a convolutional-neural-network (CNN)-based object detector. We design our method, DRFBNet300, with a Deeper Receptive Field Block (DRFB) module that enhances the expressiveness of feature maps to detect small objects in the UAV imagery. We also propose the UAV-cars dataset that includes the composition and angular distortion of vehicles in UAV imagery to train our DRFBNet300. Lastly, we propose a Split Image Processing (SIP) method to improve the accuracy of the detection model. Our DRFBNet300 achieves 21 mAP with 45 FPS in the MS COCO metric, which is the highest score compared to other lightweight single-stage methods running in real time. In addition, DRFBNet300, trained on the UAV-cars dataset, obtains the highest AP score at altitudes of 20–50 m. The gap of accuracy improvement by applying the SIP method became larger when the altitude increases. The DRFBNet300 trained on the UAV-cars dataset with SIP method operates at 33 FPS, enabling real-time vehicle detection.
11

Xia, Haiyang, Baohua Yang, Yunlong Li and Bing Wang. "An Improved CenterNet Model for Insulator Defect Detection Using Aerial Imagery". Sensors 22, no. 8 (8.04.2022): 2850. http://dx.doi.org/10.3390/s22082850.

Abstract:
For the issue of low accuracy and poor real-time performance of insulator and defect detection by an unmanned aerial vehicle (UAV) in the process of power inspection, an insulator detection model MobileNet_CenterNet was proposed in this study. First, the lightweight network MobileNet V1 was used to replace the feature extraction network Resnet-50 of the original model, aiming to ensure the detection accuracy of the model while speeding up its detection speed. Second, a spatial and channel attention mechanism convolutional block attention module (CBAM) was introduced in CenterNet, aiming to improve the prediction accuracy of small target insulator position information. Then, three transposed convolution modules were added for upsampling, aiming to better restore the semantic information and position information of the image. Finally, the insulator dataset (ID) constructed by ourselves and the public dataset (CPLID) were used for model training and validation, aiming to improve the generalization ability of the model. The experimental results showed that compared with the CenterNet model, MobileNet_CenterNet improved the detection accuracy by 12.2%, the inference speed by 1.1 f/s for FPS-CPU and 4.9 f/s for FPS-GPU, and the model size was reduced by 37 MB. Compared with other models, our proposed model improved both detection accuracy and inference speed, indicating that the MobileNet_CenterNet model had better real-time performance and robustness.
12

Vavoulis, Athanasios, Patricia Figueiredo and Athanasios Vourvopoulos. "A Review of Online Classification Performance in Motor Imagery-Based Brain–Computer Interfaces for Stroke Neurorehabilitation". Signals 4, no. 1 (20.01.2023): 73–86. http://dx.doi.org/10.3390/signals4010004.

Abstract:
Motor imagery (MI)-based brain–computer interfaces (BCI) have shown increased potential for the rehabilitation of stroke patients; nonetheless, their implementation in clinical practice has been restricted due to their low accuracy performance. To date, although a lot of research has been carried out in benchmarking and highlighting the most valuable classification algorithms in BCI configurations, most of them use offline data and are not from real BCI performance during the closed-loop (or online) sessions. Since rehabilitation training relies on the availability of an accurate feedback system, we surveyed articles of current and past EEG-based BCI frameworks who report the online classification of the movement of two upper limbs in both healthy volunteers and stroke patients. We found that the recently developed deep-learning methods do not outperform the traditional machine-learning algorithms. In addition, patients and healthy subjects exhibit similar classification accuracy in current BCI configurations. Lastly, in terms of neurofeedback modality, functional electrical stimulation (FES) yielded the best performance compared to non-FES systems.
13

Ezenwa, K. O., E. O. Iguisi, Y. O. Yakubu and M. Ismail. "A SCS-CN TECHNIQUE FOR GEOSPATIAL ESTIMATION OF RUNOFF PEAK DISCHARGE IN THE KUBANNI DRAINAGE BASIN, ZARIA, NIGERIA". FUDMA JOURNAL OF SCIENCES 6, no. 1 (5.04.2022): 314–22. http://dx.doi.org/10.33003/fjs-2022-0601-901.

Abstract:
The problem of soil loss is becoming widespread due to increasing unwholesome land use practices and population pressure on limited landscape. This study employed the integration of satellite imageries, rainfall and soil data and modern GIS technology to estimate runoff peak discharge in the Kubanni drainage basin. Some of the contributions of this study include the determination of the Hydrologic Soil Group (HSG) and Soil Conservation Service Curve Number (SCS CN) for the Kubanni drainage basin with a view to investigating runoff peak discharge using geospatial technology. Satellite images of Landsat OLI for February, July and November 2019, rainfall data from 2014 to 2018, soil data and SRTM DEM of 30-meter resolution were utilized for the study. A maximum likelihood supervised classification method was adopted in processing the satellite images to determine the Land Use and Land Cover (LULC) classes for the Kubanni drainage basin landscape. The LULC classes for the study area include built up area, water, vegetation, farmland and bare land. The SCS CN values for Goruba, Maigamo, Tukurwa and Malmo sub basins were discovered to be 79.72, 76.51, 71.47 and 66.00 respectively. The runoff peak discharges for the Kubanni drainage basin was found to be , , and for the years 2014, 2015, 2016, 2017 and 2018 respectively. The study has demonstrated the viability of adopting the SCS CN technique, satellite images, rainfall data and geospatial tools for the estimation of runoff peak discharge in the Kubanni drainage basin.
14

Ijafiya, Danjuma Jijuwa, U. Y. Abubakar and A. B. Liman. "PERCEPTIONS ON THE IMPACTS OF MORPHOLOGICAL CHANGES ON THE LOWER COURSE OF RIVER MAYO-INNE, YOLA SOUTH, ADAMAWA STATE, NIGERIA". FUDMA JOURNAL OF SCIENCES 7, no. 3 (8.07.2023): 146–51. http://dx.doi.org/10.33003/fjs-2023-0703-1858.

Abstract:
Rivers are important natural resources that support the existence of humans and other living organisms right from time immemorial. Despite their significance to human livelihood; changes in their morphology can impact on the socio-economic, cultural and environmental values of the riparian environment. Therefore, this study focused on the perception of the impacts of morphological changes on the lower course of River Mayo-Inne, Yola South, Adamawa State, Nigeria. This was done with the view to examine the perceived impacts of morphological changes on riparian land uses; factors influencing morphological changes and estimate the land area affected by changes in river channel morphology of the study area. An integrated approach of remote sensing, GIS, questionnaire survey and interview, were employed in this study. Descriptive statistics such as percentage and sum were used to analyse the data sets. The results revealed that fishing activities, agricultural land, damage on crops, plantation and residential land use have high impacts in the study area. While grazing land and commercial land uses have medium and low impacts respectively. The perceived factors influencing morphological changes in the study area were: discharge 50.75%, sand mining 17.10%, channel bed siltation 13.15% and urbanization 6.80%; all of which provide 81% of the total response of people living in the riparian environments. The temporal analysis of satellite imageries from 1990 to 2015 revealed that the river channel area increased from 486.34ha to 594.90ha respectively; impacting riparian land uses through bank under cutting, chute cutoffs and meander migration...
15

Ren, Shixin, Weiqun Wang, Zeng-Guang Hou, Xu Liang, Jiaxing Wang and Weiguo Shi. "Enhanced Motor Imagery Based Brain-Computer Interface via FES and VR for Lower Limbs". IEEE Transactions on Neural Systems and Rehabilitation Engineering 28, no. 8 (August 2020): 1846–55. http://dx.doi.org/10.1109/tnsre.2020.3001990.

16

Shi, Wenxu, Qingyan Meng, Linlin Zhang, Maofan Zhao, Chen Su and Tamás Jancsó. "DSANet: A Deep Supervision-Based Simple Attention Network for Efficient Semantic Segmentation in Remote Sensing Imagery". Remote Sensing 14, no. 21 (27.10.2022): 5399. http://dx.doi.org/10.3390/rs14215399.

Abstract:
Semantic segmentation for remote sensing images (RSIs) plays an important role in many applications, such as urban planning, environmental protection, agricultural valuation, and military reconnaissance. With the boom in remote sensing technology, numerous RSIs are generated; this is difficult for current complex networks to handle. Efficient networks are the key to solving this challenge. Many previous works aimed at designing lightweight networks or utilizing pruning and knowledge distillation methods to obtain efficient networks, but these methods inevitably reduce the ability of the resulting models to characterize spatial and semantic features. We propose an effective deep supervision-based simple attention network (DSANet) with spatial and semantic enhancement losses to handle these problems. In the network, (1) a lightweight architecture is used as the backbone; (2) deep supervision modules with improved multiscale spatial detail (MSD) and hierarchical semantic enhancement (HSE) losses synergistically strengthen the obtained feature representations; and (3) a simple embedding attention module (EAM) with linear complexity performs long-range relationship modeling. Experiments conducted on two public RSI datasets (the ISPRS Potsdam dataset and Vaihingen dataset) exhibit the substantial advantages of the proposed approach. Our method achieves 79.19% mean intersection over union (mIoU) on the ISPRS Potsdam test set and 72.26% mIoU on the Vaihingen test set with speeds of 470.07 FPS on 512 × 512 images and 5.46 FPS on 6000 × 6000 images using an RTX 3090 GPU.
17

Boone, C. D., P. F. Bernath, D. Cok, S. C. Jones and J. Steffen. "Version 4 retrievals for the atmospheric chemistry experiment Fourier transform spectrometer (ACE-FTS) and imagers". Journal of Quantitative Spectroscopy and Radiative Transfer 247 (May 2020): 106939. http://dx.doi.org/10.1016/j.jqsrt.2020.106939.

18

Oh, Dong Sik, and Jong Duk Choi. "Effects of Motor Imagery Training on Balance and Gait in Older Adults: A Randomized Controlled Pilot Study". International Journal of Environmental Research and Public Health 18, no. 2 (14.01.2021): 650. http://dx.doi.org/10.3390/ijerph18020650.

Abstract:
The aim of this study was to demonstrate the effects of motor imagery training on balance and gait abilities in older adults and to investigate the possible application of the training as an effective intervention against fall prevention. Subjects (n = 34) aged 65 years and over who had experienced falls were randomly allocated to three groups: (1) motor imagery training group (MITG, n = 11), (2) task-oriented training group (TOTG, n = 11), and (3) control group (CG, n = 12). Each group performed an exercise three times a week for 6 weeks. The dependent variables included Path Length of center of pressure (COP)-based static balance, Berg Balance Scale (BBS) score, Timed Up and Go Test (TUG) score, which assesses a person’s mobility based on changes in both static and dynamic balance, Falls Efficacy Scale (FES) score, which evaluates changes in fear of falls, and gait parameters (velocity, cadence, step length, stride length, and H-H base support) to evaluate gait. After the intervention, Path Length, BBS, TUG, velocity, cadence, step length, and stride length showed significant increases in MITG and TOTG compared to CG (p < 0.05). Post hoc test results showed a significantly greater increase in BBS, TUG, and FES in MITG compared with TOTG and CG (p < 0.05). Our results suggest that motor imagery training combined with functional training has positive effects on balance, gait, and fall efficacy for fall prevention in the elderly.
19

Kamlun, Kamlisa U., and Mui-How Phua. "Anthropogenic influences on deforestation of a peat swamp forest in Northern Borneo using remote sensing and GIS". Forest Systems 33, no. 1 (9.01.2024): eSC02. http://dx.doi.org/10.5424/fs/2024331-20585.

Abstract:
Aim of study: To study the anthropogenic factors that influence the fire occurrences in a peat swamp forest (PSF) in the northern part of Borneo Island. Area of study: Klias Peninsula, Sabah Borneo Island, Malaysia. Material and methods: Supervised classification using the maximum likelihood algorithm of multitemporal satellite imageries from the mid-80s to the early 20s was used to quantify the wetland vegetation change on Klias Peninsula. GIS-based buffering analysis was made to generate three buffer zones with distances of 1000 m, 2000 m, and 3000 m based on each of three anthropogenic factors (settlement, agriculture, and road) that influence the fire events. Main results: The results showed that PSF, barren land, and grassland have significantly changed between 1991 and 2013. PSF plummeted by about 70% during the 19-year period. Agriculture exhibited the most significant anthropogenic factor that contributes to the deforestation of the PSF in this study area with the distance of 1001-2000 m in 1998 fire event and 0-1000 m in 2003. Additionally, the distance to settlement played an increasingly important role in the fire affected areas, as shown by the increase of weightages from 0.26 to 0.35. Research highlights: Our results indicate that agriculture is the most influential anthropogenic factor associated with the fire-affected areas. The distance to settlement played an increasingly important role in the fire affected areas and contributes to the deforestation of the PSF in these study areas.
20

Niu, Shanwei, Zhigang Nie, Guang Li and Wenyu Zhu. "Multi-Altitude Corn Tassel Detection and Counting Based on UAV RGB Imagery and Deep Learning". Drones 8, no. 5 (14.05.2024): 198. http://dx.doi.org/10.3390/drones8050198.

Abstract:
In the context of rapidly advancing agricultural technology, precise and efficient methods for crop detection and counting play a crucial role in enhancing productivity and efficiency in crop management. Monitoring corn tassels is key to assessing plant characteristics, tracking plant health, predicting yield, and addressing issues such as pests, diseases, and nutrient deficiencies promptly. This ultimately ensures robust and high-yielding corn growth. This study introduces a method for the recognition and counting of corn tassels, using RGB imagery captured by unmanned aerial vehicles (UAVs) and the YOLOv8 model. The model incorporates the Pconv local convolution module, enabling a lightweight design and rapid detection speed. The ACmix module is added to the backbone section to improve feature extraction capabilities for corn tassels. Moreover, the CTAM module is integrated into the neck section to enhance semantic information exchange between channels, allowing for precise and efficient positioning of corn tassels. To optimize the learning rate strategy, the sparrow search algorithm (SSA) is utilized. Significant improvements in recognition accuracy, detection efficiency, and robustness are observed across various UAV flight altitudes. Experimental results show that, compared to the original YOLOv8 model, the proposed model exhibits an increase in accuracy of 3.27 percentage points to 97.59% and an increase in recall of 2.85 percentage points to 94.40% at a height of 5 m. Furthermore, the model optimizes frames per second (FPS), parameters (params), and GFLOPs (giga floating point operations per second) by 7.12%, 11.5%, and 8.94%, respectively, achieving values of 40.62 FPS, 14.62 MB, and 11.21 GFLOPs. At heights of 10, 15, and 20 m, the model maintains stable accuracies of 90.36%, 88.34%, and 84.32%, respectively. This study offers technical support for the automated detection of corn tassels, advancing the intelligence and precision of agricultural production and significantly contributing to the development of modern agricultural technology.
21

Bernini, Marco. "Affording innerscapes: Dreams, introspective imagery and the narrative exploration of personal geographies". Frontiers of Narrative Studies 4, no. 2 (26.11.2018): 291–311. http://dx.doi.org/10.1515/fns-2018-0024.

Abstract:
The essay presents an interdisciplinary theory of what it will call “innerscapes”: artefactual representations of the mind as a spatially extended world. By bringing examples of innerscapes from literature (Kafka’s short story The Bridge), radio plays (Samuel Beckett’s Embers), and a creative documentary about auditory-verbal hallucinations (a voice-hearer’s short film, Adam + 1), it suggests that these spatial renditions of the mind are constructed by transforming the quasi-perceptual elements of inner experience into affording ecologies. In so doing, they enable an enactive exploration of inner worlds as navigable environments. The resulting storyworlds display features that resemble the logic and ontology of dreams. Cognitive research on dreams and cartographical studies of the personal geographies of dreamscapes will thus inform the understanding of what innerscapes are, do and can do if used, as the essay argues they should be, as enhancing devices for what Jesse Butler has called “extended introspection” (2013: 95).
22

Bell, K. "Literature and Painting in Quebec: From Imagery to Identity". French Studies 67, no. 3 (1.07.2013): 448–49. http://dx.doi.org/10.1093/fs/knt123.

23

Balamuralidhar, Navaneeth, Sofia Tilon and Francesco Nex. "MultEYE: Monitoring System for Real-Time Vehicle Detection, Tracking and Speed Estimation from UAV Imagery on Edge-Computing Platforms". Remote Sensing 13, no. 4 (5.02.2021): 573. http://dx.doi.org/10.3390/rs13040573.

Abstract:
We present MultEYE, a traffic monitoring system that can detect, track, and estimate the velocity of vehicles in a sequence of aerial images. The presented solution has been optimized to execute these tasks in real-time on an embedded computer installed on an Unmanned Aerial Vehicle (UAV). In order to overcome the limitation of existing object detection architectures related to accuracy and computational overhead, a multi-task learning methodology was employed by adding a segmentation head to an object detector backbone resulting in the MultEYE object detection architecture. On a custom dataset, it achieved 4.8% higher mean Average Precision (mAP) score, while being 91.4% faster than the state-of-the-art model and while being able to generalize to different real-world traffic scenes. Dedicated object tracking and speed estimation algorithms have been then optimized to track reliably objects from an UAV with limited computational effort. Different strategies to combine object detection, tracking, and speed estimation are discussed, too. From our experiments, the optimized detector runs at an average frame-rate of up to 29 frames per second (FPS) on frame resolution 512 × 320 on a Nvidia Xavier NX board, while the optimally combined detector, tracker and speed estimator pipeline achieves speeds of up to 33 FPS on an image of resolution 3072 × 1728. To our knowledge, the MultEYE system is one of the first traffic monitoring systems that was specifically designed and optimized for an UAV platform under real-world constraints.
24

Liu, Ye, Mingfen Li, Hao Zhang, Hang Wang, Junhua Li, Jie Jia, Yi Wu and Liqing Zhang. "A tensor-based scheme for stroke patients’ motor imagery EEG analysis in BCI-FES rehabilitation training". Journal of Neuroscience Methods 222 (January 2014): 238–49. http://dx.doi.org/10.1016/j.jneumeth.2013.11.009.

25

Zasetsky, A. Y., K. Gilbert, I. Galkina, S. McLeod and J. J. Sloan. "Properties of polar stratospheric clouds obtained by combined ACE-FTS and ACE-Imager extinction measurements". Atmospheric Chemistry and Physics Discussions 7, no. 5 (12.09.2007): 13271–90. http://dx.doi.org/10.5194/acpd-7-13271-2007.

Abstract:
Abstract. We report the compositions and size distributions of aerosol particles in typical polar stratospheric clouds (PSCs) observed between 24 January and 28 February 2005 in the Arctic stratosphere. The results are obtained by combining the extinction measurements made by the Atmospheric Chemistry Experiment (ACE) Fourier-Transform Spectrometer and the visible/near IR imagers on the SCISAT satellite. The extended wavenumber range provided by this combination (750 to 20 000 cm−1) enables the retrieval of aerosol particle sizes between 0.05 and 10 μm as well as providing extensive information about the compositions. Our results indicate that liquid ternary solutions with a high (>30 wt%) content of HNO3 were the most probable component of the clouds at the (60–70° N) latitudes accessible by ACE. The mean size of these ternary aerosol particles is in the range of 0.3 to 0.8 μm. Less abundant, although still frequent, were clouds composed of NAT particles having radii in the range of 1 μm and clouds of ice particles having mean radii in the 4–5 μm range. In some cases, these last two types were found in the same observation.
26

Tilby, M. "Imagery and Ideology: Fiction and Painting in Nineteenth-Century France". French Studies 63, no. 1 (1.01.2009): 103–4. http://dx.doi.org/10.1093/fs/knn199.

27

Field, Ryan M., Simeon Realov and Kenneth L. Shepard. "A 100 fps, Time-Correlated Single-Photon-Counting-Based Fluorescence-Lifetime Imager in 130 nm CMOS". IEEE Journal of Solid-State Circuits 49, no. 4 (April 2014): 867–80. http://dx.doi.org/10.1109/jssc.2013.2293777.

28

Almanza Sepúlveda, Mayra Linné, Julio Llamas Alonso, Miguel Angel Guevara and Marisela Hernández González. "Increased Prefrontal-Parietal EEG Gamma Band Correlation during Motor Imagery in Expert Video Game Players". Actualidades en Psicología 28, no. 117 (19.11.2014): 27–36. http://dx.doi.org/10.15517/ap.v28i117.14095.

Abstract:
Abstract. The aim of this study was to characterize the prefrontal-parietal EEG correlation in experienced video game players (VGPs) in relation to individuals with little or no video game experience (NVGPs) during a motor imagery condition for an action-type video game. The participants in both groups watched a first-person shooter (FPS) gameplay from Halo Reach during five minutes. None of the participants was notified as to the content of the video before watching it. Only the VGPs showed an increased right intrahemispheric prefrontal-parietal correlation (F4-P4) in the gamma band (31-50 Hz) during the observation of the gameplay. These data provide novel information on the participation of the gamma band during motor imagery for an action-type video game. It is probable that this higher degree of coupling between the prefrontal and parietal cortices could represent a characteristic pattern of brain functionality in VGPs as they make motor representations.
29

Li, Shouliang, Jiale Han, Fanghui Chen, Rudong Min, Sixue Yi and Zhen Yang. "Fire-Net: Rapid Recognition of Forest Fires in UAV Remote Sensing Imagery Using Embedded Devices". Remote Sensing 16, no. 15 (2.08.2024): 2846. http://dx.doi.org/10.3390/rs16152846.

Abstract:
Forest fires pose a catastrophic threat to Earth’s ecology as well as threaten human beings. Timely and accurate monitoring of forest fires can significantly reduce potential casualties and property damage. Thus, to address the aforementioned problems, this paper proposed an unmanned aerial vehicle (UAV) based on a lightweight forest fire recognition model, Fire-Net, which has a multi-stage structure and incorporates cross-channel attention following the fifth stage. This is to enable the model’s ability to perceive features at various scales, particularly small-scale fire sources in wild forest scenes. Through training and testing on a real-world dataset, various lightweight convolutional neural networks were evaluated on embedded devices. The experimental outcomes indicate that Fire-Net attained an accuracy of 98.18%, a precision of 99.14%, and a recall of 98.01%, surpassing the current leading methods. Furthermore, the model showcases an average inference time of 10 milliseconds per image and operates at 86 frames per second (FPS) on embedded devices.
30

Troscianko, Emily T. "Samuel Beckett and Experimental Psychology: Perception, Attention, Imagery. By Joshua Powell". French Studies 75, no. 2 (25.01.2021): 283–84. http://dx.doi.org/10.1093/fs/knab002.

31

BROWN, C. "Review. Building Resemblance: Analogical Imagery in the Early French Renaissance. Randall, Michael". French Studies 52, no. 3 (1.07.1998): 330. http://dx.doi.org/10.1093/fs/52.3.330.

32

FOLKIERSKA, A. "Review. Bird Imagery in the Lyric Poetry of Tristan l'Hermite. Belcher, Margaret". French Studies 44, no. 1 (1.01.1990): 57. http://dx.doi.org/10.1093/fs/44.1.57.

33

Qiu, Yue, Fang Wu, Jichong Yin, Chengyi Liu, Xianyong Gong and Andong Wang. "MSL-Net: An Efficient Network for Building Extraction from Aerial Imagery". Remote Sensing 14, no. 16 (12.08.2022): 3914. http://dx.doi.org/10.3390/rs14163914.

Abstract:
There remains several challenges that are encountered in the task of extracting buildings from aerial imagery using convolutional neural networks (CNNs). First, the tremendous complexity of existing building extraction networks impedes their practical application. In addition, it is arduous for networks to sufficiently utilize the various building features in different images. To address these challenges, we propose an efficient network called MSL-Net that focuses on both multiscale building features and multilevel image features. First, we use depthwise separable convolution (DSC) to significantly reduce the network complexity, and then we embed a group normalization (GN) layer in the inverted residual structure to alleviate network performance degradation. Furthermore, we extract multiscale building features through an atrous spatial pyramid pooling (ASPP) module and apply long skip connections to establish long-distance dependence to fuse features at different levels of the given image. Finally, we add a deformable convolution network layer before the pixel classification step to enhance the feature extraction capability of MSL-Net for buildings with irregular shapes. The experimental results obtained on three publicly available datasets demonstrate that our proposed method achieves state-of-the-art accuracy with a faster inference speed than that of competing approaches. Specifically, the proposed MSL-Net achieves 90.4%, 81.1% and 70.9% intersection over union (IoU) values on the WHU Building Aerial Imagery dataset, Inria Aerial Image Labeling dataset and Massachusetts Buildings dataset, respectively, with an inference speed of 101.4 frames per second (FPS) for an input image of size 3 × 512 × 512 on an NVIDIA RTX 3090 GPU. With an excellent tradeoff between accuracy and speed, our proposed MSL-Net may hold great promise for use in building extraction tasks.
34

Rodríguez-Puerta, Francisco, Carlos Barrera, Borja García, Fernando Pérez-Rodríguez and Angel M. García-Pedrero. "Mapping Tree Canopy in Urban Environments Using Point Clouds from Airborne Laser Scanning and Street Level Imagery". Sensors 22, no. 9 (24.04.2022): 3269. http://dx.doi.org/10.3390/s22093269.

Abstract:
Resilient cities incorporate a social, ecological, and technological systems perspective through their trees, both in urban and peri-urban forests and linear street trees, and help promote and understand the concept of ecosystem resilience. Urban tree inventories usually involve the collection of field data on the location, genus, species, crown shape and volume, diameter, height, and health status of these trees. In this work, we have developed a multi-stage methodology to update urban tree inventories in a fully automatic way, and we have applied it in the city of Pamplona (Spain). We have compared and combined two of the most common data sources for updating urban tree inventories: Airborne Laser Scanning (ALS) point clouds combined with aerial orthophotographs, and street-level imagery from Google Street View (GSV). Depending on the data source, different methodologies were used to identify the trees. In the first stage, the use of individual tree detection techniques in ALS point clouds was compared with the detection of objects (trees) on street level images using computer vision (CV) techniques. In both cases, a high success rate or recall (number of true positive with respect to all detectable trees) was obtained, where between 85.07% and 86.42% of the trees were well-identified, although many false positives (FPs) or trees that did not exist or that had been confused with other objects were always identified. In order to reduce these errors or FPs, a second stage was designed, where FP debugging was performed through two methodologies: (a) based on the automatic checking of all possible trees with street level images, and (b) through a machine learning binary classification model trained with spectral data from orthophotographs. After this second stage, the recall decreased to about 75% (between 71.43 and 78.18 depending on the procedure used) but most of the false positives were eliminated. The results obtained with both data sources were robust and accurate. We can conclude that the results obtained with the different methodologies are very similar, where the main difference resides in the access to the starting information. While the use of street-level images only allows for the detection of trees growing in trafficable streets and is a source of information that is usually paid for, the use of ALS and aerial orthophotographs allows for the location of trees anywhere in the city, including public and private parks and gardens, and in many countries, these data are freely available.
36

Estupinan-Suarez, L. M., C. Florez-Ayala, M. J. Quinones, A. M. Pacheco and A. C. Santos. "Detection and characterizacion of Colombian wetlands using Alos Palsar and MODIS imagery". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W3 (29.04.2015): 375–82. http://dx.doi.org/10.5194/isprsarchives-xl-7-w3-375-2015.

Abstract:
Wetlands regulate the flow of water and play a key role in risk management of extreme flooding and drought. In Colombia, wetland conservation has been a priority for the government. However, there is an information gap neither an inventory nor a national baseline map exists. In this paper, we present a method that combines a wetlands thematic map with remote sensing derived data, and hydrometeorological stations data in order to characterize the Colombian wetlands. Following the adopted definition of wetlands, available spatial data on land forms, soils and vegetation was integrated in order to characterize spatially the occurrence of wetlands. This data was then complemented with remote sensing derived data from active and passive sensors. A flood frequency map derived from dense time series analysis of the ALOS PALSAR FBD /FBS data (2007-2010) at 50m resolution was used to analyse the recurrence of flooding. In this map, flooding under the canopy and open water classes could be mapped due to the capabilities of the L-band radar. In addition, MODIS NDVI profiles (2007-2012) were used to characterize temporally water mirrors and vegetation, founding different patterns at basin levels. Moreover, the Colombian main basins were analysed and typified based on hydroperiods, highlighting different hydrological regimes within each basin. The combination of thematic maps, SAR data, optical imagery and hydrological data provided information on the spatial and temporal dynamics of wetlands at regional scales. Our results provide the first validated baseline wetland map for Colombia, this way providing valuable information for ecosystem management.
37

Luo, Yiyun, Jinnian Wang, Xiankun Yang, Zhenyu Yu and Zixuan Tan. "Pixel Representation Augmented through Cross-Attention for High-Resolution Remote Sensing Imagery Segmentation". Remote Sensing 14, no. 21 (28.10.2022): 5415. http://dx.doi.org/10.3390/rs14215415.

Abstract:
Natural imagery segmentation has been transferred to land cover classification in remote sensing imagery with excellent performance. However, two key issues have been overlooked in the transfer process: (1) some objects were easily overwhelmed by the complex backgrounds; (2) interclass information for indistinguishable classes was not fully utilized. The attention mechanism in the transformer is capable of modeling long-range dependencies on each sample for per-pixel context extraction. Notably, per-pixel context from the attention mechanism can aggregate category information. Therefore, we proposed a semantic segmentation method based on pixel representation augmentation. In our method, a simplified feature pyramid was designed to decode the hierarchical pixel features from the backbone, and then decode the category representations into learnable category object embedding queries by cross-attention in the transformer decoder. Finally, pixel representation is augmented by an additional cross-attention in the transformer encoder under the supervision of auxiliary segmentation heads. The results of extensive experiments on the aerial image dataset Potsdam and satellite image dataset Gaofen Image Dataset with 15 categories (GID-15) demonstrate that the cross-attention is effective, and our method achieved the mean intersection over union (mIoU) of 86.2% and 62.5% on the Potsdam test set and GID-15 validation set, respectively. Additionally, we achieved an inference speed of 76 frames per second (FPS) on the Potsdam test dataset, higher than all the state-of-the-art models we tested on the same device.
38

Xu, Haiqing, Mingyang Yu, Fangliang Zhou and Hongling Yin. "Segmenting Urban Scene Imagery in Real Time Using an Efficient UNet-like Transformer". Applied Sciences 14, no. 5 (28.02.2024): 1986. http://dx.doi.org/10.3390/app14051986.

Abstract:
Semantic segmentation of high-resolution remote sensing urban images is widely used in many fields, such as environmental protection, urban management, and sustainable development. For many years, convolutional neural networks (CNNs) have been a prevalent method in the field, but the convolution operations are deficient in modeling global information due to their local nature. In recent years, the Transformer-based methods have demonstrated their advantages in many domains due to the powerful ability to model global information, such as semantic segmentation, instance segmentation, and object detection. Despite the above advantages, Transformer-based architectures tend to incur significant computational costs, limiting the model’s real-time application potential. To address this problem, we propose a U-shaped network with Transformer as the decoder and CNN as the encoder to segment remote sensing urban scene images. For efficient segmentation, we design a window-based, multi-head, focused linear self-attention (WMFSA) mechanism and further propose the global–local information modeling module (GLIM), which can capture both global and local contexts through a dual-branch structure. Experimenting on four challenging datasets, we demonstrate that our model not only achieves a higher segmentation accuracy compared with other methods but also can obtain competitive speeds to enhance the model’s real-time application potential. Specifically, the mIoU of our method is 68.2% and 52.8% on the UAVid and LoveDA datasets, respectively, while the speed is 114 FPS, with a 1024 × 1024 input on a single 3090 GPU.
39

Yang, Shuowen, Hanlin Qin, Xiang Yan, Shuai Yuan and Qingjie Zeng. "Mid-Wave Infrared Snapshot Compressive Spectral Imager with Deep Infrared Denoising Prior". Remote Sensing 15, no. 1 (3.01.2023): 280. http://dx.doi.org/10.3390/rs15010280.

Abstract:
Although various infrared imaging spectrometers have been studied, most of them are developed under the Nyquist sampling theorem, which severely burdens 3D data acquisition, storage, transmission, and processing, in terms of both hardware and software. Recently, computational imaging, which avoids direct imaging, has been investigated for its potential in the visible field. However, it has been rarely studied in the infrared domain, as it suffers from inconsistency in spectral response and reconstruction. To address this, we propose a novel mid-wave infrared snapshot compressive spectral imager (MWIR-SCSI). This design scheme provides a high degree of randomness in the measurement projection, which is more conducive to the reconstruction of image information and makes spectral correction implementable. Furthermore, leveraging the explainability of model-based algorithms and the high efficiency of deep learning algorithms, we designed a deep infrared denoising prior plug-in for the optimization algorithm to perform in terms of both imaging quality and reconstruction speed. The system calibration obtains 111 real coded masks, filling the gap between theory and practice. Experimental results on simulation datasets and real infrared scenarios prove the efficacy of the designed deep infrared denoising prior plug-in and the proposed acquisition architecture that acquires mid-infrared spectral images of 640 pixels × 512 pixels × 111 spectral channels at an acquisition frame rate of 50 fps.
40

Zhang, Zhiqi, Wendi Xia, Guangqi Xie and Shao Xiang. "Fast Opium Poppy Detection in Unmanned Aerial Vehicle (UAV) Imagery Based on Deep Neural Network". Drones 7, no. 9 (30.08.2023): 559. http://dx.doi.org/10.3390/drones7090559.

Abstract:
Opium poppy is a medicinal plant, and its cultivation is illegal without legal approval in China. Unmanned aerial vehicle (UAV) is an effective tool for monitoring illegal poppy cultivation. However, targets often appear occluded and confused, and it is difficult for existing detectors to accurately detect poppies. To address this problem, we propose an opium poppy detection network, YOLOHLA, for UAV remote sensing images. Specifically, we propose a new attention module that uses two branches to extract features at different scales. To enhance generalization capabilities, we introduce a learning strategy that involves iterative learning, where challenging samples are identified and the model’s representation capacity is enhanced using prior knowledge. Furthermore, we propose a lightweight model (YOLOHLA-tiny) using YOLOHLA based on structured model pruning, which can be better deployed on low-power embedded platforms. To evaluate the detection performance of the proposed method, we collect a UAV remote sensing image poppy dataset. The experimental results show that the proposed YOLOHLA model achieves better detection performance and faster execution speed than existing models. Our method achieves a mean average precision (mAP) of 88.2% and an F1 score of 85.5% for opium poppy detection. The proposed lightweight model achieves an inference speed of 172 frames per second (FPS) on embedded platforms. The experimental results showcase the practical applicability of the proposed poppy object detection method for real-time detection of poppy targets on UAV platforms.
41

Nikfar, Maryam, Mohammad Zoej, Mehdi Mokhtarzade and Mahdi Shoorehdeli. "Designing a New Framework Using Type-2 FLS and Cooperative-Competitive Genetic Algorithms for Road Detection from IKONOS Satellite Imagery". Remote Sensing 7, no. 7 (25.06.2015): 8271–99. http://dx.doi.org/10.3390/rs70708271.

42

Cheng, Jijie, Yi Liu and Xiaowei Li. "Coal Mine Rock Burst and Coal and Gas Outburst Perception Alarm Method Based on Visible Light Imagery". Sustainability 15, no. 18 (7.09.2023): 13419. http://dx.doi.org/10.3390/su151813419.

Abstract:
To solve the current reliance of coal mine rock burst and coal and gas outburst detection on mainly manual methods and the problem wherein it is still difficult to ensure disaster warning required to meet the needs of coal mine safety production, a coal mine rock burst and coal and gas outburst perception alarm method based on visible light imagery is proposed. Real-time video images were collected by color cameras in key areas of underground coal mines; the occurrence of disasters was determined by noting when the black area of a video image increases greatly, when the average brightness is less than the set brightness threshold, and when the moving speed of an object resulting in a large increase in the black area is greater than the set speed threshold (V > 13 m/s); methane concentration characteristics were used to distinguish rock burst and coal and gas outburst accidents, and an alarm was created. A set of disaster-characteristic simulation devices was designed. A Φ315 mm white PVC pipe was used to simulate the roadway and background equipment; Φ10 mm rubber balls were used to replace crushed coal rocks; a color camera with a 2.8 mm focal length, 30 FPS, and 110° field angle was used for image acquisition. The results of our study show that the recognition effect is good, which verifies the feasibility and effectiveness of the method.
APA, Harvard, Vancouver, ISO and other styles
43

Mustapha, S., S. Suleman, S. R. Iliyasu, E. E. Udensi, Y. A. Sanusi, D. Dahuwa and L. Abba. "INTERPRETATION OF AEROMAGNETIC DATA AND LANDSAT IMAGERY OVER THE NIGERIAN YOUNGER GRANITES IN AND AROUND KAFANCHAN AREA, NORTH-CENTRAL NIGERIA". FUDMA JOURNAL OF SCIENCES 4, no. 4 (14 June 2021): 323–33. http://dx.doi.org/10.33003/fjs-2020-0404-489.

Full text source
Abstract:
In this research the lineaments of the Kafanchan area in North-Central Nigeria were investigated in order to explore the mineralization zones of the area. Aeromagnetic data over Kafanchan and environs within the Younger Granite Province were collated and analyzed. The aeromagnetic map of the area was interpreted both qualitatively and quantitatively to identify the nature of the magnetic sources and the trend directions in the study area. The trend of the Total Magnetic Intensity (TMI) map is predominantly NE-SW. The First Vertical Derivative (FVD) lineament map was also correlated with the LANDSAT lineament map, and the two maps agree in most areas. The study area is characterized by a predominant NE-SW magnetic lineament trend and a subordinate E-W trend. The results also show that the most significant structural trend affecting the distribution of the magnetic anomalies in the study area is NE-SW. The TMI map indicates three major mineralization zones in the study area. The high magnetization contrast in the NE and SE parts of the study area correlates with the migmatite-gneiss, biotite granites, granites and basalts, which are associated with high magnetic contrasts. The high magnetization contrast in the NW part of the area likewise correlates with the basalt and the biotite granite. However, the predominantly low magnetization contrast observed in the western half does not correlate with the basic igneous rocks.
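The First Vertical Derivative (FVD) map mentioned in this abstract is a standard enhancement of the TMI grid for lineament mapping. A generic wavenumber-domain sketch is shown below (multiplying the field's Fourier spectrum by the radial wavenumber |k|); the synthetic grid and 100 m cell size are assumptions, not the survey parameters:

```python
# Generic FVD sketch: the vertical derivative of a potential field equals
# multiplication of its spectrum by |k| in the wavenumber domain.
import numpy as np

def first_vertical_derivative(tmi: np.ndarray, dx: float, dy: float) -> np.ndarray:
    ny, nx = tmi.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    kxx, kyy = np.meshgrid(kx, ky)
    k = np.sqrt(kxx**2 + kyy**2)           # radial wavenumber |k|
    spectrum = np.fft.fft2(tmi)
    return np.real(np.fft.ifft2(spectrum * k))

tmi_grid = np.random.default_rng(0).normal(33000, 50, size=(256, 256))  # synthetic TMI (nT)
fvd = first_vertical_derivative(tmi_grid, dx=100.0, dy=100.0)           # assumed 100 m cells
print(fvd.shape, fvd.std())
```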
APA, Harvard, Vancouver, ISO and other styles
44

Wijaya, Firnandino, Wen-Cheng Liu, Suharyanto and Wei-Che Huang. "Comparative Assessment of Different Image Velocimetry Techniques for Measuring River Velocities Using Unmanned Aerial Vehicle Imagery". Water 15, no. 22 (12 November 2023): 3941. http://dx.doi.org/10.3390/w15223941.

Full text source
Abstract:
The accurate measurement of river velocity is essential due to its multifaceted significance. In response to this demand, remote measurement techniques have emerged, including large-scale particle image velocimetry (LSPIV), which can be implemented with fixed cameras or unmanned aerial vehicles (UAVs). This study measured water surface velocity in the Xihu River, Miaoli County, Taiwan. The measurements were analyzed with five algorithms (PIVlab, Fudaa-LSPIV, OpenPIV, KLT-IV, and STIV) and compared with surface velocity radar (SVR) results. Among these algorithms, Fudaa-LSPIV consistently gave the lowest mean error (ME) and root mean squared error (RMSE) and the highest coefficient of determination (R2 = 0.8053). Subsequent investigations with Fudaa-LSPIV examined the influence of the velocity calculation parameters and showed that the interrogation area (IA) size, image acquisition frequency, and pixel size all significantly affect the computed surface velocity; an IA size of 32 pixels × 32 pixels, an acquisition frequency of 12 frames per second (fps), and a pixel size of 20.5 mm/pixel consistently yielded the lowest ME and RMSE. This parameter set was then used in an experiment on the incorporation of artificial particles in the velocimetry analysis. The results indicate that introducing artificial particles has a discernible effect on the computed surface velocity and enhances the capability of Fudaa-LSPIV to detect patterns on the water surface.
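The comparison in this abstract hinges on two simple error statistics, the mean error (ME) and root mean squared error (RMSE), between each algorithm's velocities and the SVR reference. A minimal sketch with invented sample values:

```python
# Minimal sketch of the error statistics used to rank the velocimetry
# algorithms against the SVR reference; the sample velocities are invented.
import numpy as np

def mean_error(estimated: np.ndarray, reference: np.ndarray) -> float:
    return float(np.mean(estimated - reference))

def rmse(estimated: np.ndarray, reference: np.ndarray) -> float:
    return float(np.sqrt(np.mean((estimated - reference) ** 2)))

svr = np.array([0.82, 0.95, 1.10, 1.25])      # hypothetical SVR velocities (m/s)
lspiv = np.array([0.85, 0.91, 1.14, 1.22])    # hypothetical Fudaa-LSPIV output (m/s)
print(mean_error(lspiv, svr), rmse(lspiv, svr))
```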
APA, Harvard, Vancouver, ISO and other styles
45

Elsheshtawy, A. M., M. A. Amasha, D. A. Mohamed, M. E. Abdelmomen, M. A. Ahmed, M. A. Diab, M. M. Elnabawy, D. M. Ahmed and A. R. Aldakiki. "ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING POTENTIAL AS TOOLS FOR GEOREFERENCING AND FEATURES DETECTION USING UAV IMAGERY". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (14 December 2023): 1887–93. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-1887-2023.

Full text source
Abstract:
Abstract. While satellite imagery holds the advantage of covering expansive geographical regions spanning square kilometers, its spatial resolution (SR) may prove inadequate for specific tasks. Conversely, unmanned aerial vehicles (UAVs) excel at capturing high-resolution images with spatial resolutions of a few centimeters or even millimeters. However, the sensor positions recorded during UAV flights by the onboard Global Navigation Satellite System (GNSS) receiver are not accurate enough. One key objective of this research is to evaluate a technique for generating precise sensor locations. The technique uses raw data from the drone's GNSS receiver and a minimum number of Ground Control Points (GCPs) placed within a 2-meter-diameter circle in the study area, with the goal of producing accurate Digital Elevation Models (DEMs) and orthomosaic images. A second focus of the research is road lane detection, addressed by enhancing the You Only Look Once (YOLO) v3 algorithm. The proposed approach optimizes the grid division, detection scales, and network architecture to improve accuracy and real-time performance. The experimental results show an impressive 92.03% accuracy at a processing speed of 48 frames per second (fps), surpassing the original YOLOv3. In the rapidly evolving landscape of Artificial Intelligence (AI) and drone technology, this investigation underscores both the potential and the complexities of applying advanced AI models, such as YOLOv8, to building detection from UAV and satellite imagery. Furthermore, the research examines robustness and real-time capability in building detection algorithms. The outlined strategy encompasses precise pre-processing, Field-Programmable Gate Array (FPGA) validation, and algorithm refinement, a comprehensive framework aimed at improving feature detection in intricate scenarios while ensuring accuracy, real-time efficiency, and adaptability.
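The 48 fps figure in this abstract is a throughput measurement. A generic way to measure a detector's frames-per-second rate is sketched below; the dummy detector and frame sizes are placeholders, not the modified YOLOv3 from the paper:

```python
# Generic throughput (fps) measurement for any per-frame detector callable.
import time
import numpy as np

def measure_fps(detector, frames, warmup: int = 5) -> float:
    for frame in frames[:warmup]:            # warm-up runs are not timed
        detector(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        detector(frame)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

dummy_detector = lambda img: (img > 0.5).sum()              # placeholder workload
frames = [np.random.rand(416, 416, 3) for _ in range(55)]   # synthetic input frames
print(f"{measure_fps(dummy_detector, frames):.1f} fps")
```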
APA, Harvard, Vancouver, ISO and other styles
46

Wang, Xu, Hewei Wang, Xin Xiong, Changhui Sun, Bing Zhu, Yiming Xu, Mingxia Fan, Shanbao Tong, Limin Sun and Xiaoli Guo. "Motor Imagery Training After Stroke Increases Slow-5 Oscillations and Functional Connectivity in the Ipsilesional Inferior Parietal Lobule". Neurorehabilitation and Neural Repair 34, no. 4 (26 February 2020): 321–32. http://dx.doi.org/10.1177/1545968319899919.

Full text source
Abstract:
Background. Reorganization of motor areas has been suggested after motor imagery training (MIT). However, motor imagery involves a large-scale brain network, in which many regions, and not only the motor areas, potentially constitute the neural substrate for MIT. Objective. This study aimed to identify the targets of MIT in stroke rehabilitation from a voxel-based whole-brain analysis of resting-state functional magnetic resonance imaging (fMRI). Methods. Thirty-four chronic stroke patients were recruited and randomly assigned to either an MIT group or a control group. The MIT group received a 4-week treatment of MIT plus conventional rehabilitation therapy (CRT), whereas the control group received CRT only. Before and after the intervention, the Fugl-Meyer Assessment Upper Limb subscale (FM-UL) and resting-state fMRI were collected. The fractional amplitude of low-frequency fluctuations (fALFF) in the slow-5 band (0.01-0.027 Hz) was calculated across the whole brain to identify brain areas with distinct changes between the 2 groups. These brain areas were then used as seeds for seed-based functional connectivity (FC) analysis. Results. In comparison with the control group, the MIT group showed greater improvement in FM-UL and increased slow-5 fALFF in the ipsilesional inferior parietal lobule (IPL). The change in slow-5 oscillations in the ipsilesional IPL was positively correlated with the improvement in FM-UL. The MIT group also showed distinct alterations in FC of the ipsilesional IPL, which were correlated with the improvement in FM-UL. Conclusions. The rehabilitation efficiency of MIT was associated with increased slow-5 oscillations and altered FC in the ipsilesional IPL. Clinical Trial Registration. http://www.chictr.org.cn. Unique Identifier. ChiCTR-TRC-08003005.
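The key measure in this abstract, fALFF in the slow-5 band (0.01-0.027 Hz), is the ratio of spectral amplitude inside the band to the amplitude over the whole frequency range of a voxel's time series. A minimal single-voxel sketch, with an assumed TR of 2 s and a synthetic signal rather than study data:

```python
# Illustrative single-voxel fALFF computation for the slow-5 band.
import numpy as np

def falff(timeseries: np.ndarray, tr: float, band=(0.01, 0.027)) -> float:
    detrended = timeseries - timeseries.mean()
    freqs = np.fft.rfftfreq(detrended.size, d=tr)
    amplitude = np.abs(np.fft.rfft(detrended))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = amplitude[freqs > 0].sum()            # exclude the DC component
    return float(amplitude[in_band].sum() / total) if total > 0 else 0.0

rng = np.random.default_rng(1)
voxel = rng.normal(size=240)          # hypothetical 240-volume resting-state run
print(falff(voxel, tr=2.0))           # TR = 2 s assumed
```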
APA, Harvard, Vancouver, ISO and other styles
47

Broniera Junior, Paulo, Daniel Prado Campos, André Eugenio Lazzaretti, Percy Nohama, Aparecido Augusto Carvalho, Eddy Krueger and Marcelo Carvalho Minhoto Teixeira. "EEG-FES-Force-MMG closed-loop control systems of a volunteer with paraplegia considering motor imagery with fatigue recognition and automatic shut-off". Biomedical Signal Processing and Control 68 (July 2021): 102662. http://dx.doi.org/10.1016/j.bspc.2021.102662.

Full text source
APA, Harvard, Vancouver, ISO and other styles
48

Yeh, Shang-Fu, Chih-Cheng Hsieh and Ka-Yi Yeh. "A 3 Megapixel 100 Fps 2.8 µm Pixel Pitch CMOS Image Sensor Layer With Built-in Self-Test for 3D Integrated Imagers". IEEE Journal of Solid-State Circuits 48, no. 3 (March 2013): 839–49. http://dx.doi.org/10.1109/jssc.2012.2233331.

Full text source
APA, Harvard, Vancouver, ISO and other styles
49

Guerra-Hernandez, Juan, Eduardo Gonzalez-Ferreiro, Alexandre Sarmento, João Silva, Alexandra Nunes, Alexandra Cristina Correia, Luis Fontes, Margarida Tomé and Ramon Diaz-Varela. "Short Communication. Using high resolution UAV imagery to estimate tree variables in Pinus pinea plantation in Portugal". Forest Systems 25, no. 2 (20 July 2016): eSC09. http://dx.doi.org/10.5424/fs/2016252-08895.

Full text source
Abstract:
Aim of study: The study aims to analyse the potential use of low-cost unmanned aerial vehicle (UAV) imagery for the estimation of Pinus pinea L. variables at the individual tree level (position, tree height and crown diameter). Area of study: This study was conducted under the PINEA project on 16 ha of umbrella pine afforestation (Portugal) subjected to different treatments. Material and methods: The workflow involved: a) image acquisition with consumer-grade cameras on board a UAV; b) orthomosaic and digital surface model (DSM) generation using structure-from-motion (SfM) image reconstruction; and c) automatic individual tree segmentation using a mixed pixel- and region-based algorithm. Main results: The results of individual tree segmentation (position, height and crown diameter) were validated against field measurements from 3 inventory plots in the study area. All the trees in the plots were correctly detected. The RMSE values for the predicted heights and crown widths were 0.45 m and 0.63 m, respectively. Research highlights: The results demonstrate that tree variables can be automatically extracted from high-resolution imagery. We highlight the use of UAV systems as a fast, reliable and cost-effective technique for small-scale applications. Keywords: Unmanned aerial systems (UAS); forest inventory; tree crown variables; 3D image modelling; canopy height model (CHM); object-based image analysis (OBIA); structure-from-motion (SfM).
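The height RMSE of 0.45 m reported here comes from comparing segmented tree tops with field-measured trees. A rough sketch of one way such a comparison can be set up (nearest-position matching within an assumed 2 m radius, then RMSE over the matched pairs; all coordinates and heights are invented):

```python
# Illustrative matching of detected tree tops to field trees and height RMSE.
import numpy as np

def match_and_rmse(pred_xy, pred_h, field_xy, field_h, max_dist=2.0):
    errors = []
    for (x, y), h_field in zip(field_xy, field_h):
        d = np.hypot(pred_xy[:, 0] - x, pred_xy[:, 1] - y)
        i = int(np.argmin(d))
        if d[i] <= max_dist:                 # accept a match within 2 m (assumed)
            errors.append(pred_h[i] - h_field)
    errors = np.asarray(errors)
    if errors.size == 0:
        return float("nan"), 0
    return float(np.sqrt(np.mean(errors ** 2))), int(errors.size)

pred_xy = np.array([[0.5, 0.4], [5.2, 4.9], [10.1, 9.8]])   # detected positions (m)
pred_h = np.array([7.8, 8.4, 6.9])                          # heights from the CHM (m)
field_xy = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 10.0]]) # field positions (m)
field_h = np.array([7.5, 8.8, 7.1])                         # field-measured heights (m)
print(match_and_rmse(pred_xy, pred_h, field_xy, field_h))
```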
APA, Harvard, Vancouver, ISO and other styles
50

WHITEHOUSE, J. C. "Review. From Heaven to Hell: Imagery of Earth, Air, Water and Fire in the Novels of Georges Bernanos. Morris, Daniel R." French Studies 45, no. 1 (1 January 1991): 98. http://dx.doi.org/10.1093/fs/45.1.98.

Full text source
APA, Harvard, Vancouver, ISO and other styles
