To see the other types of publications on this topic, follow the link: High framerate.

Journal articles on the topic "High framerate"

Cite your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 40 journal articles for your research on the topic "High framerate".

Next to every source in the list of references, there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen source in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the research publication in ".pdf" format and read its abstract online, whenever these details are available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Schmid, S., and D. Fritsch. "A VARIANT OF LSD-SLAM CAPABLE OF PROCESSING HIGH-SPEED LOW-FRAMERATE MONOCULAR DATASETS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W8 (November 14, 2017): 243–47. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w8-243-2017.

Full text
Abstract:
We develop a new variant of LSD-SLAM, called C-LSD-SLAM, which is capable of performing monocular tracking and mapping in high-speed low-framerate situations such as those of the KITTI datasets. The methods used here are robust against the influence of erroneously triangulated points near the epipolar direction, which otherwise causes tracking divergence.
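The abstract does not spell out how C-LSD-SLAM rejects such points. As a rough, hypothetical illustration of the underlying geometry (not the authors' actual method): a point whose viewing ray is nearly parallel to the camera baseline lies close to the epipole, where triangulated depth is ill-conditioned, so a minimal reliability check could threshold on that angle.

```python
import numpy as np

def reliable_for_triangulation(ray, baseline, min_angle_deg=5.0):
    """Heuristic: rays nearly parallel to the camera baseline point
    toward the epipole, where triangulation depth error explodes.
    The 5-degree threshold is an arbitrary illustrative value."""
    ray = np.asarray(ray, float)
    baseline = np.asarray(baseline, float)
    ray = ray / np.linalg.norm(ray)
    baseline = baseline / np.linalg.norm(baseline)
    cosang = abs(float(ray @ baseline))
    angle = np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
    return bool(angle >= min_angle_deg)
```

Real SLAM systems typically weight points by estimated depth variance rather than hard-thresholding them, but the geometric intuition is the same.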
2

Antink, Christoph Hoog, and Steffen Leonhardt. "QUANTIFICATION OF RESPIRATORY SINUS ARRHYTHMIA WITH HIGH-FRAMERATE ELECTRICAL IMPEDANCE TOMOGRAPHY." Acta Polytechnica 53, no. 6 (December 31, 2013): 854–61. http://dx.doi.org/10.14311/ap.2013.53.0854.

Full text
Abstract:
Respiratory Sinus Arrhythmia, the variation in the heart rate synchronized with the breathing cycle, forms an interconnection between cardiac-related and respiratory-related signals. It can be used by itself for diagnostic purposes, or by exploiting the redundancies it creates, for example by extracting respiratory rate from an electrocardiogram (ECG). To perform quantitative analysis and patient specific modeling, however, simultaneous information about ventilation as well as cardiac activity needs to be recorded and analyzed. The recent advent of medically approved Electrical Impedance Tomography (EIT) devices capable of recording up to 50 frames per second facilitates the application of this technology. This paper presents the automated selection of a cardiac-related signal from EIT data and quantitative analysis of this signal. It is demonstrated that beat-to-beat intervals can be extracted with a median absolute error below 20 ms. A comparison between ECG and EIT data shows a variation in peak delay time that requires further analysis. Finally, the known coupling of heart rate variability and tidal volume can be shown and quantified using global impedance as a surrogate for tidal volume.
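At 50 frames per second, one EIT sample spans 20 ms, which matches the reported median absolute error. The paper's own signal-selection pipeline is not reproduced here; a generic sketch of extracting beat-to-beat intervals from an already-selected cardiac-related waveform (threshold and refractory period are hypothetical choices) might look like:

```python
import numpy as np

def beat_to_beat_intervals(signal, fs, min_rr=0.3):
    """Find local maxima above a mean+0.5*std threshold, enforce a
    refractory period of min_rr seconds between beats, and return
    the successive peak-to-peak intervals in milliseconds."""
    x = np.asarray(signal, dtype=float)
    thresh = x.mean() + 0.5 * x.std()
    peaks = []
    last = -np.inf
    for i in range(1, len(x) - 1):
        if x[i] > thresh and x[i] >= x[i - 1] and x[i] > x[i + 1]:
            if (i - last) / fs >= min_rr:
                peaks.append(i)
                last = i
    return np.diff(peaks) / fs * 1000.0  # intervals in ms
```

With a 50 Hz sampling rate, the quantization of these intervals is exactly the 20 ms resolution limit the abstract reports.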
3

Gawehn, Matthijs, Rafael Almar, Erwin W. J. Bergsma, Sierd de Vries, and Stefan Aarninkhof. "Depth Inversion from Wave Frequencies in Temporally Augmented Satellite Video." Remote Sensing 14, no. 8 (April 12, 2022): 1847. http://dx.doi.org/10.3390/rs14081847.

Full text
Abstract:
Optical satellite images of the nearshore water surface offer the possibility to invert water depths and thereby constitute the underlying bathymetry. Depth inversion techniques based on surface wave patterns can handle clear and turbid waters in a variety of global coastal environments. Common depth inversion algorithms require video from shore-based camera stations, UAVs or Xband-radars with a typical duration of minutes and at framerates of 1–2 fps to find relevant wave frequencies. These requirements are often not met by satellite imagery. In this paper, satellite imagery is augmented from a sequence of 12 images of Capbreton, France, collected over a period of ∼1.5 min at a framerate of 1/8 fps by the Pleiades satellite, to a pseudo-video with a framerate of 1 fps. For this purpose, a recently developed method is used, which considers spatial pathways of propagating waves for temporal video reconstruction. The augmented video is subsequently processed with a frequency-based depth inversion algorithm that works largely unsupervised and is openly available. The resulting depth estimates approximate ground truth with an overall depth bias of −0.9 m and an interquartile range of depth errors of 5.1 m. The acquired accuracy is sufficiently high to correctly predict wave heights over the shoreface with a numerical wave model and to find hotspots where wave refraction leads to focusing of wave energy that has potential implications for coastal hazard assessments. A more detailed depth inversion analysis of the nearshore region furthermore demonstrates the possibility to detect sandbars. The combination of image augmentation with a frequency-based depth inversion method shows potential for broad application to temporally sparse satellite imagery and thereby aids in the effort towards globally available coastal bathymetry data.
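Frequency-based depth inversion generally rests on the linear dispersion relation for surface gravity waves, ω² = gk·tanh(kh). Assuming a wave frequency f and wavenumber k have been estimated from the (pseudo-)video, the depth follows in closed form; the sketch below illustrates that relation only, not the cited algorithm:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def invert_depth(f, k):
    """Invert water depth h from the linear dispersion relation
    omega^2 = g * k * tanh(k * h), given a wave frequency f (Hz)
    and wavenumber k (rad/m) observed in the imagery."""
    omega = 2.0 * np.pi * f
    ratio = omega ** 2 / (G * k)
    if ratio >= 1.0:
        # deep-water limit: tanh saturates and depth is unresolved
        return float("inf")
    return float(np.arctanh(ratio) / k)
```

The deep-water branch is why such methods only resolve depth where waves still "feel" the bottom, i.e. in the nearshore zone discussed above.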
4

Hossain, Md Mamun, Md Ashiqur Rahman, and Humayra Ahmed. "Identifying Objects in Real-Time at the Lowest Framerate." International Journal of Research and Innovation in Applied Science 07, no. 08 (2022): 60–63. http://dx.doi.org/10.51584/ijrias.2022.7809.

Full text
Abstract:
The practice of finding instances of semantic objects of a certain class, such as people, cars, and traffic signs, in digital photos and videos is known as object identification or detection. Due to the development of high-resolution cameras and their widespread use in everyday life, object detection is one of the most challenging and rapidly expanding research fields in computer science, particularly in computer vision. For automatic object recognition, several researchers have experimented with a variety of techniques, including image processing and computer vision. In this research, we employed a deep-learning-based framework, YOLOv3, using Python, TensorFlow, and OpenCV to identify objects in real time. We ran a number of tests on the COCO dataset to verify the effectiveness of the suggested strategy. The results of the experiments show that our suggested solution is resource- and cost-effective since it uses the fewest frames per second.
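The paper does not state its frame-selection rule; one simple way to run a detector at "the lowest framerate" is to subsample the incoming video to a target processing rate and feed only those frames to the network. A hypothetical helper:

```python
def frames_to_process(n_frames, src_fps, target_fps):
    """Indices of source frames to feed the detector so that the
    effective processing rate is roughly target_fps instead of
    src_fps. Both rates are assumed constant."""
    step = max(1, round(src_fps / target_fps))
    return list(range(0, n_frames, step))
```

For example, a 30 fps stream processed at a 10 fps budget keeps every third frame, cutting detector invocations (and thus compute cost) to a third.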
5

HU, Tingting, Ryuji FUCHIKAMI, and Takeshi IKENAGA. "High Temporal Resolution-Based Temporal Iterative Tracking for High Framerate and Ultra-Low Delay Dynamic Tracking System." IEICE Transactions on Information and Systems E105.D, no. 5 (May 1, 2022): 1064–74. http://dx.doi.org/10.1587/transinf.2021edp7200.

Full text
6

Tomioka, Kohei, Toshio Yasue, Ryohei Funatsu, Kodai Kikuchi, Tomoki Matsubara, Takayuki Yamashita, and Shoji Kawahito. "Improved Correlated Multiple Sampling by Using Interleaved Pixel Source Follower for High-Resolution and High-Framerate CMOS Image Sensor." IEEE Transactions on Electron Devices 68, no. 5 (May 2021): 2326–34. http://dx.doi.org/10.1109/ted.2021.3069177.

Full text
7

Scapin, Martina, Lorenzo Peroni, and Massimiliano Avalle. "Dynamic Brazilian Test for Mechanical Characterization of Ceramic Ballistic Protection." Shock and Vibration 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/7485856.

Full text
Abstract:
The aim of this work is to identify the tensile strength of alumina (Corbit98) by performing Brazilian tests at different loading rates. In this kind of test, generally used for brittle materials in static loading conditions, a cylindrical specimen is diametrically compressed and failure is generated in the middle of the component as a consequence of a positive tensile stress. In this work, this experimental technique was also applied in dynamic loading conditions using a setup based on the Split Hopkinson Pressure Bar. Due to the properties of the investigated material, among which are high hardness, high compressive strength, and brittle behaviour, some precautions were needed to ensure the validity of the tests. Digital Image Correlation techniques were applied for the analysis of high-framerate videos.
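In the Brazilian (diametral compression) test, the indirect tensile strength is conventionally computed as σ_t = 2P/(πDt) from the peak load P, the specimen diameter D, and the thickness t. A small helper implementing this standard formula (the SI units in the docstring are an assumption for illustration):

```python
import math

def brazilian_tensile_strength(P, D, t):
    """Indirect tensile strength from a Brazilian test:
    sigma_t = 2P / (pi * D * t), with peak load P (N), specimen
    diameter D (m) and thickness t (m). Result in Pa."""
    return 2.0 * P / (math.pi * D * t)
```

For instance, a 10 kN peak load on a 20 mm diameter, 10 mm thick disc gives roughly 31.8 MPa.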
8

Lind, Jannik, Christian Hagenlocher, David Blazquez-Sanchez, Marc Hummel, A. Olowinsky, Rudolf Weber, and Thomas Graf. "Influence of the laser cutting front geometry on the striation formation analysed with high-speed synchrotron X-ray imaging." IOP Conference Series: Materials Science and Engineering 1135, no. 1 (November 1, 2021): 012009. http://dx.doi.org/10.1088/1757-899x/1135/1/012009.

Full text
Abstract:
The generation of low surface roughness of the cut edge during laser beam cutting is a challenge. The striation pattern, which determines the surface roughness, can be distinguished into regular and interrupted striations, the latter resulting in an increased surface roughness. In order to analyse their formation, the space- and time-resolved cutting front geometry and melt film thickness were captured during laser beam fusion cutting of aluminium sheets with a framerate of 1000 Hz by means of high-speed synchrotron X-ray imaging. The comparison of the contours of the cutting fronts for cut results with regular and interrupted striations shows that the contour fluctuates significantly more in the case of interrupted striations. This leads to a strong fluctuation of the local angle of incidence. In addition, the average angle of incidence decreases, which results in an increase of the average absorbed irradiance. Both phenomena, the local increase of absorbed irradiance and its dynamic fluctuation, result in a local increase of the melt film thickness at the cutting front, which is responsible for the formation of the interrupted striations.
9

Uhrina, Miroslav, Anna Holesova, Juraj Bienik, and Lukas Sevcik. "Impact of Scene Content on High Resolution Video Quality." Sensors 21, no. 8 (April 19, 2021): 2872. http://dx.doi.org/10.3390/s21082872.

Full text
Abstract:
This paper deals with the impact of content on the perceived video quality evaluated using the subjective Absolute Category Rating (ACR) method. The assessment was conducted on eight types of video sequences with diverse content obtained from the SJTU dataset. The sequences were encoded at 5 different constant bitrates in two widely used video compression standards, H.264/AVC and H.265/HEVC, at Full HD and Ultra HD resolutions, which means 160 annotated video sequences were created. The length of the Group of Pictures (GOP) was set to half the framerate value, as is typical for video intended for transmission over a noisy communication channel. The evaluation was performed in two laboratories: one situated at the University of Zilina, and the second at the VSB—Technical University in Ostrava. The results acquired in both laboratories showed a high correlation. Notwithstanding the fact that the sequences with low Spatial Information (SI) and Temporal Information (TI) values reached a better Mean Opinion Score (MOS) than the sequences with higher SI and TI values, these two parameters are not sufficient for scene description, and this domain should be the subject of further research. The evaluation results led us to the conclusion that it is unnecessary to use the H.265/HEVC codec for compression of Full HD sequences, and that the compression efficiency of the H.265 codec at Ultra HD resolution reaches the compression efficiency of both codecs at Full HD resolution. This paper also includes recommendations for minimum bitrate thresholds at which the video sequences at both resolutions retain good and fair subjectively perceived quality.
10

Ning, Keqing, Zhihao Zhang, Kai Han, Siyu Han, and Xiqing Zhang. "Single-Core Multiscale Residual Network for the Super Resolution of Liquid Metal Specimen Images." Machine Learning and Knowledge Extraction 3, no. 2 (May 27, 2021): 453–66. http://dx.doi.org/10.3390/make3020023.

Full text
Abstract:
In a gravity-free or microgravity environment, liquid metals without crystalline nuclei achieve a deep undercooling state. The resulting melts exhibit unique properties, and the research of this phenomenon is critical for exploring new metastable materials. Owing to the rapid crystallization rates of deeply undercooled liquid metal droplets, as well as cost concerns, experimental systems meant for the study of liquid metal specimens usually use low-resolution, high-framerate, high-speed cameras, which result in low-resolution photographs. To facilitate subsequent studies by material scientists, it is necessary to use super-resolution techniques to increase the resolution of these photographs. However, existing super-resolution algorithms cannot quickly and accurately restore the details contained in images of deeply undercooled liquid metal specimens. To address this problem, we propose the single-core multiscale residual network (SCMSRN) algorithm for photographic images of liquid metal specimens. In this model, multiple cascaded filters are used to obtain feature information, and the multiscale features are then fused by a residual network. Compared to existing state-of-the-art artificial neural network super-resolution algorithms, such as SRCNN, VDSR and MSRN, our model was able to achieve higher PSNR and SSIM scores and reduce network size and training time.
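PSNR, one of the two scores reported above, is defined directly from the mean squared error between the reference image and the reconstruction. A minimal implementation (an 8-bit pixel range is assumed here):

```python
import numpy as np

def psnr(ref, rec, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and a reconstruction; higher means closer to the reference."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM, the other reported metric, additionally compares local luminance, contrast, and structure, which is why papers usually report both.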
11

d’Angelo, P., and P. Reinartz. "DIGITAL ELEVATION MODELS FROM STEREO, VIDEO AND MULTI-VIEW IMAGERY CAPTURED BY SMALL SATELLITES." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 77–82. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-77-2021.

Full text
Abstract:
Small satellites play an increasing role in earth observation. This article evaluates different possibilities of utilizing data from Planet’s SkySat and PlanetScope satellite constellations for the derivation of digital elevation models. While SkySat provides high-resolution image data with a ground sampling distance of up to 50 cm, the PlanetScope constellation, consisting of Dove 3U cubesats, provides images with a resolution of around 4 m. The PlanetScope acquisition strategy was not designed for stereo acquisitions, but for daily acquisition of nadir-viewing imagery. Multiple different products can be acquired by the SkySat satellites: collects covering an area of usually 12 by 6 km, tri-stereo collects, and video products with a framerate of 30 Hz. This study evaluates DSM generation using Semi-Global Matching from multi-date stereo pairs for SkySat and PlanetScope, and from the dedicated video and tri-stereo SkySat acquisitions. DSMs obtained by merging many PlanetScope across-track stereo pairs show a normalized median absolute deviation (NMAD) against LiDAR first-pulse data of 5.2 m over diverse landcover at the test sites around the city of Terrassa in Catalonia, Spain. SkySat tri-stereo products with 80 cm resolution reach an NMAD of 1.3 m over Terrassa.
12

Berge, L., N. Estre, D. Tisseur, E. Payan, D. Eck, V. Bouyer, N. Cassiaut-Louis, C. Journeau, R. Le Tellier, and E. Pluyette. "Fast high-energy X-ray imaging for Severe Accidents experiments on the future PLINIUS-2 platform." EPJ Web of Conferences 170 (2018): 08003. http://dx.doi.org/10.1051/epjconf/201817008003.

Full text
Abstract:
The future PLINIUS-2 platform of CEA Cadarache will be dedicated to the study of corium interactions in severe nuclear accidents, and will host innovative large-scale experiments. The Nuclear Measurement Laboratory of CEA Cadarache is in charge of real-time high-energy X-ray imaging set-ups for the study of the corium-water and corium-sodium interaction, and of the corium stratification process. Imaging such large and high-density objects requires a 15 MeV linear electron accelerator coupled to a tungsten target creating a high-energy Bremsstrahlung X-ray flux, with a corresponding dose rate of about 100 Gy/min at 1 m. The signal is detected by phosphor screens coupled to high-framerate scientific CMOS cameras. The imaging set-up is established using experimentally validated home-made simulation software (MODHERATO). The code computes quantitative radiographic signals from the description of the source, object geometry and composition, detector, and geometrical configuration (magnification factor, etc.). It accounts for several noise sources (photonic and electronic noise, Swank and readout noise), and for image blur due to the source spot size and to the detector unsharpness. With a view to PLINIUS-2, the simulation has been improved to account for the scattered flux, which is expected to be significant. The paper presents the scattered-flux calculation using the MCNP transport code, and its integration into the MODHERATO simulation. The validation of the improved simulation is then presented through comparison with real measurement images taken on a small-scale equivalent set-up on the PLINIUS platform. Excellent agreement is achieved. This improved simulation is therefore being used to design the PLINIUS-2 imaging set-ups (source, detectors, cameras, etc.).
13

Gwatimba, Alphons, Tim Rosenow, Stephen M. Stick, Anthony Kicic, Thomas Iosifidis, and Yuliya V. Karpievitch. "AI-Driven Cell Tracking to Enable High-Throughput Drug Screening Targeting Airway Epithelial Repair for Children with Asthma." Journal of Personalized Medicine 12, no. 5 (May 17, 2022): 809. http://dx.doi.org/10.3390/jpm12050809.

Full text
Abstract:
The airway epithelium of children with asthma is characterized by aberrant repair that may be therapeutically modifiable. The development of epithelial-targeting therapeutics that enhance airway repair could provide a novel treatment avenue for childhood asthma. Drug discovery efforts utilizing high-throughput live cell imaging of patient-derived airway epithelial culture-based wound repair assays can be used to identify compounds that modulate airway repair in childhood asthma. Manual cell tracking has been used to determine cell trajectories and wound closure rates, but is time consuming, subject to bias, and infeasible for high-throughput experiments. We therefore developed software, EPIC, that automatically tracks low-resolution low-framerate cells using artificial intelligence, analyzes high-throughput drug screening experiments and produces multiple wound repair metrics and publication-ready figures. Additionally, unlike available cell trackers that perform cell segmentation, EPIC tracks cells using bounding boxes and thus has simpler and faster training data generation requirements for researchers working with other cell types. EPIC outperformed publicly available software in our wound repair datasets by achieving human-level cell tracking accuracy in a fraction of the time. We also showed that EPIC is not limited to airway epithelial repair for children with asthma but can be applied in other cellular contexts by outperforming the same software in the Cell Tracking with Mitosis Detection Challenge (CTMC) dataset. The CTMC is the only established cell tracking benchmark dataset that is designed for cell trackers utilizing bounding boxes. We expect our open-source and easy-to-use software to enable high-throughput drug screening targeting airway epithelial repair for children with asthma.
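EPIC's tracker itself is not reproduced here; bounding-box trackers commonly associate detections across frames by intersection-over-union (IoU). A simplified greedy matcher illustrating the idea (the box format and threshold are illustrative assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_boxes(prev, curr, min_iou=0.3):
    """Greedy association of previous-frame boxes to current-frame
    boxes by best IoU; returns a {prev_index: curr_index} mapping."""
    matches, used = {}, set()
    for i, p in enumerate(prev):
        best_j, best = None, min_iou
        for j, c in enumerate(curr):
            if j in used:
                continue
            s = iou(p, c)
            if s >= best:
                best_j, best = j, s
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches
```

Greedy IoU matching is the simplest scheme; production trackers often solve the assignment optimally (e.g. Hungarian algorithm) and add motion prediction.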
14

Farkas, Csaba, László Fenyvesi, and Károly Petróczki. "Multiple linear regression model of Golden apple's failure characteristics under repeated compressive load." Potravinarstvo Slovak Journal of Food Sciences 13, no. 1 (October 28, 2019): 793–99. http://dx.doi.org/10.5219/1168.

Full text
Abstract:
In this paper, the multiple linear regression model of mechanical properties related to the failure mechanism of apple tissue under repeated compressive load was investigated. More refined failure characteristics may lead to improved processing and logistics aspects of the given fruits. For our study, the following failure-related factors are considered during the cyclic measurements of Golden Delicious apples: the viscoelastic parameters, the dissipated energy, and the rupture point of the cell structure, which is described with the time-to-failure parameter (TTF). For the determination of the viscoelastic components, the three-element Poynting-Thomson body was applied, and a closed-loop control system is identified with the measured creep data. From the hysteresis loop – in each cycle of the force-deformation parametric curve – the dissipated energy can be calculated with a numeric integration method. The rupture point of the fruit tissue – where the measuring pin is breaking through the peel and the cortex – is observed with a high-framerate video analysis, so that the time index of the failure point can be evaluated. The focus is to define the influence of the mentioned factors on the TTF parameter of the examined fruit material. During the statistical evaluation of the resulting data, the time of failure can be successfully determined with a multiple linear regression model of the determined viscoelastic and dissipated-energy variables. With the resulting equation, the failure time of Golden Delicious apples can be predicted based on the measured failure-related parameters obtained during the compressive load tests.
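The dissipated energy per cycle is the area enclosed by the loading-unloading hysteresis loop in the force-deformation plane. A sketch of that numeric integration using the shoelace formula (the paper's exact integration scheme is not specified):

```python
import numpy as np

def dissipated_energy(deformation, force):
    """Area enclosed by one loading-unloading hysteresis loop of the
    force-deformation curve (shoelace formula). The points must trace
    the loop in order; result is in joules if inputs are in m and N."""
    x = np.asarray(deformation, dtype=float)
    y = np.asarray(force, dtype=float)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
```

A perfectly elastic cycle retraces the same curve on unloading, encloses zero area, and therefore dissipates no energy.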
15

Tisseur, D., M. Cavaro, F. Rey, K. Paumel, N. Chikhi, J. Delacroix, P. Fouquart, R. Le Tellier, and V. Bouyer. "Study of online measurements techniques of metallic phase spatial distribution into a corium pool." EPJ Web of Conferences 225 (2020): 08003. http://dx.doi.org/10.1051/epjconf/202022508003.

Full text
Abstract:
In the context of in-vessel retention (IVR) strategy in order to better assess the risk of reactor vessel failure, the knowledge related to the kinetics of immiscible liquid phases stratification phenomenon needs to be further improved. So far, only one medium-scale experiment (MASCA-RCW, in the frame of the OECD MASCA program) gives direct information regarding the transient relocation of metal below the oxide phase through post-mortem measurements. No experimental characterization of the stratification inversion kinetics when heavy metal becomes lighter and relocates at the top exists. Further investigation of these hydrodynamic and thermochemical processes could be made possible thanks to on line instrumentation enabling to follow displacement of oxidic and metallic phases into the corium pool. At CEA Cadarache, studies are under progress to set up innovative technologies for corium stratification monitoring which would be integrated to a cold crucible induction melting furnace. Based on space and time resolution specifications, three on-line measurements techniques were selected and studied. The first one is an ultrasonic technique using a refractory material waveguide and based on a time-of-flight measurement. We present the feasibility approach with the preliminary results obtained during experiments at high temperature on VITI facility. The second method consists in electromagnetic characterization of the corium pool thanks to an excitation by a magnetic field induced by surroundings coils and measurement of magnetic response by sensors placed around the crucible. A modelling study has enabled to define an appropriate experimental configuration. An experimental set up has also been tested to verify the calculation results. The third technique is 2D X-rays imaging. 
A feasibility study for real-time X-ray imaging with a framerate of 1 image/s has been performed using the home-made simulation software MODHERATO, accounting for scattering, based on corium behavior previsions. Results on the detection of interfaces between different types of corium phases (oxide, light metal, heavy metal) are shown.
16

Duarte Cavalcante Pinto, David, Masahisa Yanagisawa, Marcelo Luiz do Prado Villarroel Zurita, Romualdo Arthur Alencar Caldas, Marcelo Domingues, Rafaela Lisboa Costa, Rodrigo Lins da Rocha Júnior, et al. "Analysis of the First Optical Detection of a Meteoroidal Impact on the Lunar Surface Recorded from Brazil." Remote Sensing 14, no. 13 (June 22, 2022): 2974. http://dx.doi.org/10.3390/rs14132974.

Повний текст джерела
Анотація:
Two lunar flashes are reported and fully analyzed, with one of them fulfilling every criterion preconized in the literature for the characterization of an impact, including confirmation by two simultaneous observations. It happened at 07:13:46 UT on 14 December 2017, at the selenographic coordinates of 9.79° (±0.06°)N and 45.42 (±0.07°)E. The peak magnitudes in the R and V bands vary from 6.3 to 7.9 and from 7.4 to 9.0, respectively, depending on the observatory, as the cameras’ exposure times were considerably different. The impactor mass is estimated to be between 1.6 and 2.0 kg, with a diameter of 10 to 11 cm, having produced a crater of 8.4 to 8.9 m in diameter. Results for the second flash are also presented and discussed, although the confirmation of an impact was not possible due to a pause in the recordings at one of the sites. The observations took place as part of an inaugural observing campaign in Brazil for lunar impact flash (LIF) detection conceived by the Brazilian Meteor Observation Network (BRAMON) and were carried out by two teams located in different states in the Northeast Region of Brazil, about 353 km apart from each other, at a time when the Moon was crossing the densest part of the Geminid meteoroid stream in 2017. The observing setups included 0.13 m and 0.2 m telescopes, both equipped with sensitive cameras. The Maceió setup probably delivered the finest definition ever reported in the literature for lunar impact monitoring, resulting in high-accuracy positioning. This will certainly aid in finding the associated crater from orbiter images, which will substantiate another work, aimed at performing a comparative analysis between the results from our photometry and the data retrieved by the LRO images. These observations were also very likely the first and the only one so far made by a normal framerate camera and a long-exposure camera simultaneously. The associated benefits are commented on. The source of the impactors is also discussed. 
In view of the successful results of this experience, national observing campaigns of this kind will be continued.
17

Esposito, Daniele, Jessica Centracchio, Emilio Andreozzi, Sergio Savino, Gaetano D. Gargiulo, Ganesh R. Naik, and Paolo Bifulco. "Design of a 3D-Printed Hand Exoskeleton Based on Force-Myography Control for Assistance and Rehabilitation." Machines 10, no. 1 (January 13, 2022): 57. http://dx.doi.org/10.3390/machines10010057.

Full text
Abstract:
Voluntary hand movements are usually impaired after a cerebral stroke, affecting millions of people per year worldwide. Recently, the use of hand exoskeletons for assistance and motor rehabilitation has become increasingly widespread. This study presents a novel hand exoskeleton, designed to be low cost, wearable, easily adaptable and suitable for home use. Most of the components of the exoskeleton are 3D printed, allowing for easy replication, customization and maintenance at a low cost. A strongly underactuated mechanical system allows one to synergically move the four fingers by means of a single actuator through a rigid transmission, while the thumb is kept in an adduction or abduction position. The exoskeleton’s ability to extend a typical hypertonic paretic hand of stroke patients was firstly tested using the SimScape Multibody simulation environment; this helped in the choice of a proper electric actuator. Force-myography was used instead of the standard electromyography to voluntarily control the exoskeleton with more simplicity. The user can activate the flexion/extension of the exoskeleton by a weak contraction of two antagonist muscles. A symmetrical master–slave motion strategy (i.e., the paretic hand motion is activated by the healthy hand) is also available for patients with severe muscle atrophy. An inexpensive microcontroller board was used to implement the electronic control of the exoskeleton and provide feedback to the user. The entire exoskeleton including batteries can be worn on the patient’s arm. The ability to provide a fluid and safe grip, like that of a healthy hand, was verified through kinematic analyses obtained by processing high-framerate videos. The trajectories described by the phalanges of the natural and the exoskeleton finger were compared by means of cross-correlation coefficients; a similarity of about 80% was found. The time required for both closing and opening of the hand exoskeleton was about 0.9 s. 
A rigid cylindrical handlebar containing a load cell measured an average power grasp force of 94.61 N, enough to assist the user in performing most activities of daily living. The exoskeleton can be used as an aid and to promote motor function recovery during the patient's neurorehabilitation therapy.
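The trajectory comparison described above can be illustrated with a zero-lag normalized cross-correlation (Pearson) coefficient, where a value near 1 indicates matching trajectory shapes. This is a generic sketch, not the authors' exact pipeline:

```python
import numpy as np

def trajectory_similarity(a, b):
    """Normalized cross-correlation (Pearson r) of two equal-length
    1-D trajectories; 1.0 means identical shape up to scale/offset."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Expressed as a percentage, this coefficient is one way to arrive at a "similarity of about 80%" figure like the one reported.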
18

Sadavarte, Koushal S., Kaushal Kshirsagar, Aditya A. Kshirsagar, Aditya K. Kshirsagar, Ayushi Kowe, and Krishna Chaitanya Amonker. "Motion Detection and its Applications." International Journal for Research in Applied Science and Engineering Technology 10, no. 12 (December 31, 2022): 43–46. http://dx.doi.org/10.22214/ijraset.2022.47799.

Full text
Abstract:
Motion detection is a widely used tool in computing applications. Modern-day techniques have explored this paradigm vastly, and its applications are equally widespread. Proximity detection is a popular means of measuring motion, and it is of high importance today: from vehicle collisions to aided systems, it is used everywhere. Alternatively, in video recognition, changes in coordinates between image frames can be used to determine on-screen motion. This can be implemented using various software tools.
19

Gordon-Soffer, Racheli, Lucien E. Weiss, Ran Eshel, Boris Ferdman, Elias Nehme, Moran Bercovici, and Yoav Shechtman. "Microscopic scan-free surface profiling over extended axial ranges by point-spread-function engineering." Science Advances 6, no. 44 (October 2020): eabc0332. http://dx.doi.org/10.1126/sciadv.abc0332.

Повний текст джерела
Анотація:
The shape of a surface, i.e., its topography, influences many functional properties of a material; hence, characterization is critical in a wide variety of applications. Two notable challenges are profiling temporally changing structures, which requires high-speed acquisition, and capturing geometries with large axial steps. Here, we leverage point-spread-function engineering for scan-free, dynamic, microsurface profiling. The presented method is robust to axial steps and acquires full fields of view at camera-limited framerates. We present two approaches for implementation: fluorescence-based and label-free surface profiling, demonstrating the applicability to a variety of sample geometries and surface types.
20

Christen, M. "OPENWEBGLOBE 2: VISUALIZATION OF COMPLEX 3D-GEODATA IN THE (MOBILE) WEBBROWSER." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 6, 2016): 401–6. http://dx.doi.org/10.5194/isprsannals-iii-3-401-2016.

Abstract:
Providing worldwide high resolution data for virtual globes consists of compute and storage intense tasks for processing data. Furthermore, rendering complex 3D-Geodata, such as 3D-City models with an extremely high polygon count and a vast amount of textures at interactive framerates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large scale, out-of-core, highly scalable 3D scene rendering on a web based virtual globe. Cloud computing is used for processing large amounts of geospatial data and also for providing 2D and 3D map data to a large amount of (mobile) web clients. In this paper the approach for processing, rendering and caching very large datasets in the currently developed virtual globe “OpenWebGlobe 2” is shown, which displays 3D-Geodata on nearly every device.
21

Christen, M. "OPENWEBGLOBE 2: VISUALIZATION OF COMPLEX 3D-GEODATA IN THE (MOBILE) WEBBROWSER." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 6, 2016): 401–6. http://dx.doi.org/10.5194/isprs-annals-iii-3-401-2016.

Abstract:
Providing worldwide high resolution data for virtual globes consists of compute and storage intense tasks for processing data. Furthermore, rendering complex 3D-Geodata, such as 3D-City models with an extremely high polygon count and a vast amount of textures at interactive framerates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large scale, out-of-core, highly scalable 3D scene rendering on a web based virtual globe. Cloud computing is used for processing large amounts of geospatial data and also for providing 2D and 3D map data to a large amount of (mobile) web clients. In this paper the approach for processing, rendering and caching very large datasets in the currently developed virtual globe “OpenWebGlobe 2” is shown, which displays 3D-Geodata on nearly every device.
22

Hünermund, Martin, Maik Groneberg, and Nils Brauckmann. "Fast Timeline Based Multi Object Online Tracking." Transport and Telecommunication Journal 24, no. 1 (February 1, 2023): 65–72. http://dx.doi.org/10.2478/ttj-2023-0007.

Abstract:
Abstract Fast state-of-the-art multi-object-tracking (MOT) schemes, such as those reported in the challenges MOT16 and MOT20, perform tracking on a single sensor, often couple tracking and detection, support only one kind of object representation, or do not take varying latencies and update rates into account. We propose a fast generic MOT system for use in real-world applications which is capable of tracking objects from different sensor/detector types with their respective latencies and update rates. A SORT-inspired online tracking scheme is extended with time awareness, using timelines as the unifying concept. The system supports different object, sensor, filter, and tracking types by modularizing and generalizing the online tracking scheme, while ensuring high performance using an efficient data-oriented C++-template-based implementation. Using the proposed system we achieve, with comparable evaluation metrics, framerates up to ten times higher than the fastest MOT schemes publicly listed for the axis-aligned bounding-box tracking challenges MOT17 and MOT20.
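The timeline idea described above — every track carries a timestamp and is predicted forward to each detection's own capture time before being updated, so sensors with different latencies and rates can feed one tracker — can be sketched with a toy one-dimensional constant-velocity model. This is an illustrative sketch, not the authors' C++ implementation; the blending gain and all values are made up:

```python
# Toy time-aware track: state is (timestamp, position, velocity), and every
# update first predicts to the measurement's timestamp, whatever its source.

class Track:
    def __init__(self, t, pos, vel):
        self.t, self.pos, self.vel = t, pos, vel

    def predict(self, t):
        """Constant-velocity prediction to an arbitrary timestamp."""
        return self.pos + self.vel * (t - self.t)

    def update(self, t, measured_pos, gain=0.5):
        """Blend prediction and measurement at the detection's capture time."""
        pred = self.predict(t)
        new_pos = pred + gain * (measured_pos - pred)
        self.vel = (new_pos - self.pos) / (t - self.t)
        self.pos, self.t = new_pos, t

track = Track(t=0.0, pos=0.0, vel=1.0)   # moving at 1 unit/s
track.update(t=0.5, measured_pos=0.6)    # low-latency, fast sensor
track.update(t=0.8, measured_pos=0.9)    # slower sensor, different rate
print(round(track.pos, 3))
```

Because the prediction step takes an arbitrary timestamp, out-of-order or late measurements only need the correct capture time, not a fixed frame cadence.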
23

Bulum, Antonio, Gordana Ivanac, Filip Mandurić, Luka Pfeifer, Marta Bulum, Eugen Divjak, Stipe Radoš, and Boris Brkljačić. "Contribution of UltraFast™ Ultrasound and Shear Wave Elastography in the Imaging of Carotid Artery Disease." Diagnostics 12, no. 5 (May 8, 2022): 1168. http://dx.doi.org/10.3390/diagnostics12051168.

Abstract:
Carotid artery disease is one of the main global causes of disability and premature mortality in the spectrum of cardiovascular diseases. One of its main consequences, stroke, is the second biggest global contributor to disability and burden via Disability Adjusted Life Years after ischemic heart disease. In the last decades, B-mode and Doppler-based ultrasound imaging techniques have become an indispensable part of modern medical imaging of carotid artery disease. However, they have limited abilities in carotid artery plaque and wall characterization and are unable to provide simultaneous quantitative and qualitative flow information while the images are burdened by low framerates. UltraFast™ ultrasound is able to overcome these obstacles by providing simultaneous quantitative and qualitative flow analysis information in high frame rates via UltraFast™ Doppler. Another newly developed ultrasound technique, shear wave elastography, is based on the visualization of induced shear waves and the measurement of the shear wave propagation speed in the examined tissues which enables real-time carotid plaque and wall analysis. These newly developed ultrasound modalities have potential to significantly improve workflow efficiency and are able to provide a plethora of additional imaging information of carotid artery disease in comparison to conventional ultrasound techniques.
24

Reina, Francesco, John M. A. Wigg, Mariia Dmitrieva, Joël Lefebvre, Jens Rittscher, and Christian Eggeling. "TRAIT2D: a Software for Quantitative Analysis of Single Particle Diffusion Data." F1000Research 10 (August 20, 2021): 838. http://dx.doi.org/10.12688/f1000research.54788.1.

Abstract:
Single particle tracking (SPT) is one of the most widely used tools in optical microscopy to evaluate particle mobility in a variety of situations, including cellular and model membrane dynamics. Recent technological developments, such as Interferometric Scattering microscopy, have allowed recording of long, uninterrupted single particle trajectories at kilohertz framerates. The resulting data, where particles are continuously detected and do not displace much between observations, thereby do not require complex linking algorithms. Moreover, while these measurements offer more details into the short-term diffusion behaviour of the tracked particles, they are also subject to the influence of localisation uncertainties, which are often underestimated by conventional analysis pipelines. We thus developed a Python library, under the name of TRAIT2D (Tracking Analysis Toolbox – 2D version), in order to track particle diffusion at high sampling rates, and analyse the resulting trajectories with an innovative approach. The data analysis pipeline introduced is more localisation-uncertainty aware, and also selects the most appropriate diffusion model for the data provided on a statistical basis. A trajectory simulation platform also allows the user to handily generate trajectories and even synthetic time-lapses to test alternative tracking algorithms and data analysis approaches. A high degree of customisation for the analysis pipeline, for example with the introduction of different diffusion modes, is possible from the source code. Finally, the presence of graphical user interfaces lowers the access barrier for users with little to no programming experience.
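A hedged sketch of the kind of analysis such SPT toolboxes automate: the time-averaged mean squared displacement (MSD), whose intercept absorbs the static localisation uncertainty (for 2D Brownian motion, MSD(n·dt) ≈ 4·D·n·dt + 4·σ²). The function names, the plain least-squares fit, and the synthetic trajectory below are illustrative, not the TRAIT2D API:

```python
# Time-averaged MSD of a 2D trajectory, plus a simple line fit whose slope
# relates to the diffusion coefficient and whose intercept absorbs the
# localisation-uncertainty term.

def msd(xs, ys, max_lag):
    out = []
    for lag in range(1, max_lag + 1):
        d = [
            (xs[i + lag] - xs[i]) ** 2 + (ys[i + lag] - ys[i]) ** 2
            for i in range(len(xs) - lag)
        ]
        out.append(sum(d) / len(d))
    return out

def fit_line(ts, vals):
    """Ordinary least squares: vals ~ slope * ts + intercept."""
    n = len(ts)
    mt, mv = sum(ts) / n, sum(vals) / n
    slope = sum((t - mt) * (v - mv) for t, v in zip(ts, vals)) / sum(
        (t - mt) ** 2 for t in ts
    )
    return slope, mv - slope * mt

# Noise-free straight-line "trajectory", just to exercise the code path:
xs = [0.1 * i for i in range(100)]
ys = [0.0] * 100
curve = msd(xs, ys, max_lag=4)
slope, intercept = fit_line([1, 2, 3, 4], curve)
```

On real data one would fit only the first few lags, where the localisation-error term dominates least-squares bias the least, and compare candidate diffusion models statistically, as the abstract describes.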
25

Reina, Francesco, John M. A. Wigg, Mariia Dmitrieva, Bela Vogler, Joël Lefebvre, Jens Rittscher, and Christian Eggeling. "TRAIT2D: a Software for Quantitative Analysis of Single Particle Diffusion Data." F1000Research 10 (January 31, 2022): 838. http://dx.doi.org/10.12688/f1000research.54788.2.

Abstract:
Single particle tracking (SPT) is one of the most widely used tools in optical microscopy to evaluate particle mobility in a variety of situations, including cellular and model membrane dynamics. Recent technological developments, such as Interferometric Scattering microscopy, have allowed recording of long, uninterrupted single particle trajectories at kilohertz framerates. The resulting data, where particles are continuously detected and do not displace much between observations, thereby do not require complex linking algorithms. Moreover, while these measurements offer more details into the short-term diffusion behaviour of the tracked particles, they are also subject to the influence of localisation uncertainties, which are often underestimated by conventional analysis pipelines. We thus developed a Python library, under the name of TRAIT2D (Tracking Analysis Toolbox – 2D version), in order to track particle diffusion at high sampling rates, and analyse the resulting trajectories with an innovative approach. The data analysis pipeline introduced is more localisation-uncertainty aware, and also selects the most appropriate diffusion model for the data provided on a statistical basis. A trajectory simulation platform also allows the user to handily generate trajectories and even synthetic time-lapses to test alternative tracking algorithms and data analysis approaches. A high degree of customisation for the analysis pipeline, for example with the introduction of different diffusion modes, is possible from the source code. Finally, the presence of graphical user interfaces lowers the access barrier for users with little to no programming experience.
26

Ipsen, Svenja, Ralf Bruder, Esben Schjødt Worm, Rune Hansen, Per Rugaard Poulsen, Morten Høyer, and Achim Schweikard. "Simultaneous acquisition of 4D ultrasound and wireless electromagnetic tracking for in-vivo accuracy validation." Current Directions in Biomedical Engineering 3, no. 2 (September 7, 2017): 75–78. http://dx.doi.org/10.1515/cdbme-2017-0016.

Abstract:
Abstract Ultrasound is being increasingly investigated for real-time target localization in image-guided interventions. Yet, in-vivo validation remains challenging due to the difficulty to obtain a reliable ground truth. For this purpose, real-time volumetric (4D) ultrasound imaging was performed simultaneously with electromagnetic localization of three wireless transponders implanted in the liver of a radiotherapy patient. 4D ultrasound and electromagnetic tracking were acquired at framerates of 12 Hz and 8 Hz, respectively, during free breathing over 8 min following treatment. The electromagnetic antenna was placed directly above and the ultrasound probe on the right side of the patient to visualize the liver transponders. It was possible to record 25.7 s of overlapping ultrasound and electromagnetic position data of one transponder. Good spatial alignment with 0.6 mm 3D root-mean-square error between both traces was achieved using a rigid landmark transform. However, data acquisition was impaired since the electromagnetic tracking highly influenced the ultrasound equipment and vice versa. High intensity noise streaks appeared in the ultrasound scan lines irrespective of the chosen frequency (1.7-3.3 MHz, 2/4 MHz harmonic). To allow for target visualization and tracking in the ultrasound volumes despite the artefacts, an online filter was designed where corrupted pixels in the newest ultrasound frame were replaced with non-corrupted pixels from preceding frames. Aside from these artefacts, the recorded electromagnetic tracking data was fragmented and only the transponder closest to the antenna could be detected over a limited period of six consecutive breathing cycles. This problem was most likely caused by interference from the metal holder of the ultrasound probe and was solved in a subsequent experiment using a 3D-printed non-metal probe fixation. Real-time wireless electromagnetic tracking was compared with 4D ultrasound imaging in-vivo for the first time. For stable tracking, large metal components need to be avoided during data acquisition, and ultrasound filtering is required.
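The online repair filter described in this abstract — corrupted pixels in the newest frame are replaced with non-corrupted pixels from preceding frames — can be sketched as follows. The corruption test (a plain intensity threshold standing in for the high-intensity streak detection) and the frame format are simplifying assumptions, not the authors' actual detector:

```python
# Online pixel-repair filter: keep a buffer of the most recent clean value
# per pixel; corrupted pixels in the incoming frame fall back to the buffer.

def repair_frame(frame, last_clean, noise_threshold=250):
    """Return a repaired copy of `frame`, updating the clean-value buffer."""
    repaired = []
    for r, row in enumerate(frame):
        out_row = []
        for c, v in enumerate(row):
            if v >= noise_threshold:      # corrupted: use last clean value
                out_row.append(last_clean[r][c])
            else:                          # clean: keep it and remember it
                out_row.append(v)
                last_clean[r][c] = v
        repaired.append(out_row)
    return repaired

clean_buffer = [[0, 0], [0, 0]]
f1 = [[10, 20], [30, 40]]            # clean frame fills the buffer
f2 = [[255, 21], [255, 41]]          # a streak corrupts the first column
repair_frame(f1, clean_buffer)
print(repair_frame(f2, clean_buffer))  # [[10, 21], [30, 41]]
```

Because only the per-pixel buffer is kept, the filter runs online with one frame of state, which matches the real-time constraint of the experiment.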
27

Dimiccoli, Mariella, and Petia Radeva. "Visual Lifelogging in the Era of Outstanding Digitization." Digital Presentation and Preservation of Cultural and Scientific Heritage 5 (September 30, 2015): 59–64. http://dx.doi.org/10.55630/dipp.2015.5.4.

Abstract:
In this paper, we give an overview of the emerging trend of the digitized self, focusing on visual lifelogging through wearable cameras. This is about continuously recording our life from a first-person view by wearing a camera that passively captures images. On one hand, visual lifelogging has opened the door to a large number of applications, including health. On the other, it has also boosted new challenges in the field of data analysis as well as new ethical concerns. While currently increasing efforts are being devoted to exploiting lifelogging data for the improvement of personal well-being, we believe there are still many interesting applications to explore, ranging from tourism to the digitization of human behavior.

1 Introduction
We are already living in a world where digitization thoroughly affects our daily lives and socio-economic models, from education and art to industry. In essence, digitization is about implementing new ways to put together physical and digital resources for creating more competitive models. Recently, lifelogging appeared as just another powerful manifestation of this digitization process, embraced by people to different extents. Lifelogging refers to the process of automatically, passively and digitally recording our own daily experience, hence connecting digital resources and daily life for a variety of purposes. In the last century, there was a small number of dedicated individuals who actively tried to log their lives. Today, thanks to advancements in sensing technology and the significant reduction of computer storage cost, one's personal daily life can be recorded efficiently, discreetly and in a hands-free fashion (see Fig. 1). The most common way of lifelogging, commonly called visual lifelogging, is through a wearable camera that captures images at a reduced framerate, ranging from the 2 fpm of the Narrative Clip to the 35 fps of the GoPro.

The first commercially available wearable camera, called SenseCam, was presented by Microsoft in 2005, and during the last decade it has been largely deployed in health research. As summarized in a collection of studies published in a special theme issue of the American Journal of Preventive Medicine [5], information collected by a wearable camera over long periods of time has a large number of potential applications, both at the individual and the population level. At the individual level, lifelogging can aid in contrasting dementia through cognitive training based on digital memories, or in improving well-being by monitoring lifestyle. At the population level, lifelogging could be used as an objective tool for understanding and tracking lifestyle behavior, hence enabling a better understanding of the causal relations between noncommunicable diseases and unhealthy trends and risky profiles (such as obesity, depression, etc.).

Fig. 1. Evolution of wearable camera technology. From left to right: Mann (1998), GoPro (2002), SenseCam (2005), Narrative Clip (2013).

However, the huge potential of these applications is currently strongly limited by technical challenges and ethical concerns. The large amount of data generated, the high variability of object appearance and the free motion of the camera are some of the difficulties to be handled when mining information from, and managing, lifelogging data. On the other hand, legality and social acceptance are the major ethical challenges to be faced. This paper discusses these issues and is organized as follows: in the next section, we give an overview of potential applications; in section 3, we analyze technical challenges and current solutions. Section 4 is devoted to ethical issues and, finally, in section 5, we draw some conclusions.

2 Potential Applications
Humans have always been interested in recording their life experiences for future reference and for storytelling purposes. Therefore, a natural application would be summarizing lifelog collections into a story that can be shared with other people, most likely through a social network. Since the end-users may have very different tastes, storytelling algorithms should incorporate some knowledge of the social context surrounding the photos, such as who the user and the target audience are. However, lifelogging technology allows capturing our entire life, not only those moments that we would like to share with others (see Fig. 2). This offers a great potential to make people aware of their lifestyle, understood as a pattern of behavioral choices that an individual makes over a period of time. This feedback could provide education and motivation to improve health trends, detecting risky profiles, with a personal trainer "in the loop". Indeed, by providing a symbiosis between health professionals and wearable technology, it could be possible to design and implement individualized strategies for changing behavior. Considering that physical activity and poor diet are major risk factors for heart disease and obesity, and leading causes of premature mortality, the social impact of such applications would be huge. On the other hand, lifelogging could be useful in monitoring patients affected by neurological disorders such as depression or bipolar disorder by aiding in predicting crises.

Fig. 2. Images recorded by a Narrative Clip. From left to right and from the 1st to the 2nd row: in a bus, biking, attending a seminar, having lunch, in a market, in a shop, in the street, working.

Finally, digital memories could be used as a tool for cognitive training for people affected by Mild Cognitive Impairment (MCI), a condition that represents a window for novel intervention tools against Alzheimer's disease. Although the emphasis nowadays is on the use of wearable cameras for health applications, their potential spreads to many other domains, ranging from tourism to the digitization of intangible heritage. For instance, data collected during a long trip could be used to make short and original photostreams for storytelling purposes and be shared in a network of visitors of a country. On the other hand, probably in the next century, these data would be useful for people interested in comparing how transportation and landscape have changed over time. During the last few decades, there has been an increasing interest in the use of digital media in the preservation, management, interpretation, and representation of cultural heritage. Intangible cultural heritage consists of nonphysical aspects of a particular culture, among which folklore, traditions and behavior. The intangible aspects of our cultural heritage represent a treasure of significant historical and socio-economic importance. Naturally, intangible cultural heritage is more difficult to preserve than physical objects. The digital documentation of intangible cultural heritage represents a huge market potential, which is largely unexplored. Wearable cameras could be used in this field to collect, preserve and make digitally available part of the intangible cultural heritage of the 21st century, such as human behavior.

3 Technical Challenges
Wearing a camera over a long period of time generates a large amount of data (up to 70,000 images per month), making the problem of retrieving specific information difficult. Besides data organization, the high variability of object appearance in the real world and the free motion of the camera make state-of-the-art object recognition algorithms fail. Fig. 3 shows two sequences acquired by wearing a Narrative Clip (2 fpm): one can appreciate the frequency of abrupt changes of the field of view even in temporally adjacent images, which makes motion estimation unreliable, and the frequent occlusions that cause an important drop in object recognition performance.

Fig. 3. Example of photostreams captured by a Narrative Clip while (first row) biking and having a coffee (second row).

As shown in [2], the interest of the computer vision community is rapidly increasing and this trend is expected to continue in the next years. Most available works have been conceived to analyze data captured by high temporal resolution wearable cameras, such as GoPro or Google Glass, and they can be broadly classified, depending on the task they try to solve, into: activity recognition [15, 11, 10, 13, 6], social interaction analysis [1, 3, 19], and summarization [4, 16, 12]. Activity recognition usually relies on cues such as ego-motion [15, 10], object-hand interaction [11, 10] or attention [13, 6]. Generally, the major difficulties to be faced in the task of activity recognition are the large variability of objects and hands and the free motion of the camera, which make it very difficult to estimate body movements and attention. Social interaction detection is based on the concept of F-formation, which models orientation relationships of groups of people in space. F-formations require estimating the pose and 3D location of people, which are challenging tasks due to the continuous changes of aspect ratio, scale and orientation. A common approach to summarization is to try to maximize the relevance of the selected images and minimize redundancy. Relevancy can be captured by relying on mid-level or high-level features. Mid-level features may be motion or global CNN features [4, 16], whereas high-level features may be important objects [12] or topics [18].

4 Ethical Issues
Lifelog technology can be considered still in its infancy, and assuring that the related ethical issues receive full consideration at this moment is crucial for a responsible development of the field. In the last few years, a number of papers have tried to inquire into the ethical aspects of lifelogs held by individuals [17, 7, 14], discussing issues to do with privacy, autonomy, and beneficence. Images captured by a wearable camera clearly impact the privacy of lifeloggers as well as of bystanders captured in such images. In [7], the authors identified various factors that make a photo sensitive and proposed to embed into the devices an algorithm that uses these factors to automatically delete sensitive images. The most general meaning of autonomy is to be a law to oneself. The authors of [8] recognize that lifelogging offers a great opportunity towards autonomy, since it allows us to better understand ourselves. Moreover, they provide recommendations and guidelines to meet the challenges that lifelogs pose towards autonomy. Beneficence concerns the responsibility to do good by maximizing the benefits to an individual or to society, while minimizing harm to the individual. A critical component is informed consent, which should be signed by participants in research or clinical projects. More general specifications for wearable camera research are provided in [9], which proposes an ethical framework for health research.

5 Conclusions
This paper has reviewed some of the most important aspects of visual lifelogging, focusing on the technical and ethical challenges it raises, and on its potential applications. We believe that a responsible development of the field could be highly beneficial for society. In order to become a widely used technology, a large amount of effort should be invested in the development of efficient information retrieval systems, to allow fast and easy access to lifelogging content at a semantic level. Further advances in the field of deep learning will allow filling this semantic gap.

Acknowledgments
This work was partially funded by TIN2012-38187-C03-01 and SGR 1219. M. Dimiccoli is supported by a Beatriu de Pinós grant (Marie Curie COFUND action).
28

Blignaut, Pieter. "The Effect of Real-time Headbox Adjustments on Data Quality." Journal of Eye Movement Research 11, no. 1 (March 21, 2018). http://dx.doi.org/10.16910/jemr.11.1.4.

Abstract:
Following a patent owned by Tobii, the framerate of a CMOS camera can be increased by reducing the size of the recording window so that it fits the eyes with minimum room to spare. The position of the recording window can be dynamically adjusted within the camera sensor area to follow the eyes as the participant moves the head. Since only a portion of the camera sensor data is communicated to the computer and processed, much higher framerates can be achieved with the same CPU and camera. Eye trackers can be expected to present data at a high speed, with good accuracy and precision, small latency and with minimal loss of data, while allowing participants to behave as normally as possible. In this study, the effect of headbox adjustments in real time is investigated with respect to the above-mentioned parameters. It was found that, for the specific camera model and tracking algorithm, one or two headbox adjustments per second, as would normally be the case during recording of human participants, could be tolerated in favour of a higher framerate. The effect of adjustment of the recording window can be reduced by using a larger recording window at the cost of the framerate.
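The trade-off this abstract rests on — CMOS readout time scales with the number of sensor rows transferred, so a smaller recording window yields a higher framerate — can be captured in a rough model. The per-row readout time and per-frame overhead below are made-up illustrative constants, not figures from the study or the patent:

```python
# Rough ROI-vs-framerate model: frame time = rows read * per-row readout
# time + fixed per-frame overhead; framerate is its reciprocal.

def achievable_fps(window_rows, row_readout_us=15.0, overhead_us=200.0):
    """Approximate framerate for a recording window of `window_rows` rows."""
    frame_time_us = window_rows * row_readout_us + overhead_us
    return 1e6 / frame_time_us

full_sensor = achievable_fps(1080)   # reading out the whole sensor
eye_window = achievable_fps(120)     # tight headbox around the eyes
print(round(full_sensor, 1), round(eye_window, 1))
```

The model also shows the paper's closing point: enlarging the window to tolerate head movement directly lowers the achievable framerate.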
29

Liu, Xiaodong, Enhao Zheng, and Qining Wang. "Real-Time Wrist Motion Decoding with High Framerate Electrical Impedance Tomography (EIT)." IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2022, 1. http://dx.doi.org/10.1109/tnsre.2022.3228018.

30

Fan, Bin, Yuchao Dai, and Hongdong Li. "Rolling Shutter Inversion: Bring Rolling Shutter Images to High Framerate Global Shutter Video." IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 1–16. http://dx.doi.org/10.1109/tpami.2022.3212912.

31

Kumar, Deepak, Sushil Raut, Kohei Shimasaki, Taku Senoo, and Idaku Ishii. "Projection-mapping-based object pointing using a high-frame-rate camera-projector system." ROBOMECH Journal 8, no. 1 (March 1, 2021). http://dx.doi.org/10.1186/s40648-021-00197-2.

Abstract:
Abstract A novel approach to physical security based on visible light communication (VLC), using informative object pointing and simultaneous recognition by high-framerate (HFR) vision systems, is presented in this study. In the proposed approach, a convolutional neural network (CNN) based object detection method is used to detect the environmental objects that assist a spatiotemporal-modulated-pattern (SMP) based imperceptible projection mapping for pointing at the desired objects. The distantly located HFR vision systems that operate at hundreds of frames per second (fps) can recognize and localize the pointed objects in real time. The prototype of an artificial intelligence-enabled camera-projector (AiCP) system is used as a transmitter that detects multiple objects in real time at 30 fps and simultaneously projects the detection results by means of encoded 480-Hz SMP masks onto the objects. The multiple 480-fps HFR vision systems as receivers can recognize the pointed objects by decoding pixel-brightness variations in HFR sequences without any camera calibration or complex recognition methods. Several experiments were conducted to demonstrate the proposed method's usefulness using miniature and real-world objects under various conditions.
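The decoding step described above — a pointed region blinks with a temporal pattern invisible at normal framerates but visible as strong per-pixel brightness variation in an HFR sequence — can be sketched with a simple temporal-variance test. The variance threshold and the eight-sample pattern are illustrative assumptions, not the paper's actual codec:

```python
# Per-pixel decoder: pixels whose brightness varies strongly across an HFR
# sequence are flagged as belonging to the SMP-modulated (pointed) region.

def temporal_variance(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

def decode_pointed(pixels_over_time, threshold=100.0):
    """1 for pixels with strong temporal modulation, 0 otherwise."""
    return [1 if temporal_variance(seq) > threshold else 0
            for seq in pixels_over_time]

static_pixel = [120] * 8                      # unmodulated background
blinking_pixel = [120, 180, 120, 180] * 2     # SMP-modulated object
print(decode_pointed([static_pixel, blinking_pixel]))  # [0, 1]
```

At 480 fps the on/off modulation averages out to a steady brightness for a human observer, which is why the projection can stay imperceptible while remaining machine-readable.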
32

Lee, Byounghyo, Dongyeon Kim, Seungjae Lee, Chun Chen, and Byoungho Lee. "High-contrast, speckle-free, true 3D holography via binary CGH optimization." Scientific Reports 12, no. 1 (February 18, 2022). http://dx.doi.org/10.1038/s41598-022-06405-2.

Abstract:
AbstractHolography is a promising approach to implement the three-dimensional (3D) projection beyond the present two-dimensional technology. True 3D holography requires abilities of arbitrary 3D volume projection with high-axial resolution and independent control of all 3D voxels. However, it has been challenging to implement the true 3D holography with high-reconstruction quality due to the speckle. Here, we propose the practical solution to realize speckle-free, high-contrast, true 3D holography by combining random-phase, temporal multiplexing, binary holography, and binary optimization. We adopt the random phase for the true 3D implementation to achieve the maximum axial resolution with fully independent control of the 3D voxels. We develop the high-performance binary hologram optimization framework to minimize the binary quantization noise, which provides accurate and high-contrast reconstructions for 2D as well as 3D cases. Utilizing the fast operation of binary modulation, the full-color high-framerate holographic video projection is realized while the speckle noise of random phase is overcome by temporal multiplexing. Our high-quality true 3D holography is experimentally verified by projecting multiple arbitrary dense images simultaneously. The proposed method can be adopted in various applications of holography, where we show additional demonstration that realistic true 3D hologram in VR and AR near-eye displays. The realization will open a new path towards the next generation of holography.
33

Juliano Martins, Renato, Emil Marinov, M. Aziz Ben Youssef, Christina Kyrou, Mathilde Joubert, Constance Colmagro, Valentin Gâté, et al. "Metasurface-enhanced light detection and ranging technology." Nature Communications 13, no. 1 (September 29, 2022). http://dx.doi.org/10.1038/s41467-022-33450-2.

Abstract:
Abstract Deploying advanced imaging solutions to robotic and autonomous systems by mimicking human vision requires simultaneous acquisition of multiple fields of view, named the peripheral and fovea regions. Among 3D computer vision techniques, LiDAR is currently considered at the industrial level for robotic vision. Notwithstanding the efforts on LiDAR integration and optimization, commercially available devices have slow frame rates and low resolution, notably limited by the performance of mechanical or solid-state deflection systems. Metasurfaces are versatile optical components that can distribute the optical power in desired regions of space. Here, we report on an advanced LiDAR technology that leverages ultrafast low-FoV deflectors cascaded with large-area metasurfaces to achieve a large FoV (150°) and high framerate (kHz), which can provide simultaneous peripheral and central imaging zones. The use of our disruptive LiDAR technology with advanced learning algorithms offers perspectives to improve the perception and decision-making processes of ADAS and robotic systems.
34

Sun, Cong, and Patrick Gaydecki. "A Visual Tracking System for Honey Bee (Hymenoptera: Apidae) 3D Flight Trajectory Reconstruction and Analysis." Journal of Insect Science 21, no. 2 (March 1, 2021). http://dx.doi.org/10.1093/jisesa/ieab023.

Abstract:
Abstract We describe the development, field testing, and results from an automated 3D insect flight detection and tracking system for honey bees (Apis mellifera L.) (Hymenoptera: Apidae) that is capable of providing remarkable insights into airborne behavior. It comprises two orthogonally mounted video cameras with an observing volume of over 200 m3 and an offline analysis software system that outputs 3D space trajectories and inflight statistics of the target honey bees. The imaging devices require no human intervention once set up and are waterproof, providing high resolution and framerate videos. The software module uses several forms of modern image processing techniques with GPU-enabled acceleration to remove both stationary and moving artifact while preserving flight track information. The analysis system has thus far provided information not only on flight statistics (such as speeds and accelerations), but also on subtleties associated with flight behavior by generating heat maps of density and classifying flight patterns according to patrol and foraging behavior. Although the results presented here focus on behavior in the locale of a beehive, the system could be adapted to study a wide range of airborne insect activity.
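A toy reconstruction in the spirit of the two-camera setup above: with orthogonally mounted cameras, one view provides (x, z) and the other (y, z) image coordinates, so a shared z lets a 3D flight point be assembled directly. The coordinate convention and the averaging of the two z estimates are simplifying assumptions, not the paper's calibration-based method:

```python
# Combine two orthogonal 2D observations of the same bee into a 3D point.

def reconstruct_3d(front_xy, side_xy):
    """front camera sees (x, z); side camera sees (y, z)."""
    x, z1 = front_xy
    y, z2 = side_xy
    return (x, y, (z1 + z2) / 2.0)   # average the two height estimates

# Two synchronized observations per camera (illustrative values):
track_front = [(0.0, 1.0), (0.5, 1.1)]
track_side = [(2.0, 1.0), (2.1, 1.3)]
path = [reconstruct_3d(f, s) for f, s in zip(track_front, track_side)]
print(path)
```

From such a 3D path, per-frame speeds and accelerations — the flight statistics the system reports — follow from finite differences over the known frame interval.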
35

Kominsky, Jonathan F., Katarina Begus, Ilona Bass, Joseph Colantonio, Julia A. Leonard, Allyson P. Mackey, and Elizabeth Bonawitz. "Organizing the Methodological Toolbox: Lessons Learned From Implementing Developmental Methods Online." Frontiers in Psychology 12 (September 13, 2021). http://dx.doi.org/10.3389/fpsyg.2021.702710.

Full text of the source
Abstract:
Adapting studies typically run in the lab, preschool, or museum to online data collection presents a variety of challenges. The solutions to those challenges depend heavily on the specific questions pursued, the methods used, and the constraints imposed by available technology. We present a partial sample of solutions, discussing approaches we have developed for adapting studies targeting a range of different developmental populations, from infants to school-aged children, and utilizing various online methods such as high-framerate video presentation, having participants interact with a display on their own computer, having the experimenter interact with both the participant and an actor, recording free-play with physical objects, recording infant looking times both offline and live, and more. We also raise issues and solutions regarding recruitment and representativeness in online samples. By identifying the concrete needs of a given approach, tools that meet each of those individual needs, and interfaces between those tools, we have been able to implement many (but not all) of our studies using online data collection during the COVID-19 pandemic. This systematic review aligning available tools and approaches with different methods can inform the design of future studies, in and outside of the lab.
36

Zanon, Mirko, Bastien S. Lemaire, and Giorgio Vallortigara. "Steps towards a computational ethology: an automatized, interactive setup to investigate filial imprinting and biological predispositions." Biological Cybernetics, July 17, 2021. http://dx.doi.org/10.1007/s00422-021-00886-6.

Full text of the source
Abstract:
Abstract Soon after hatching, the young of precocial species, such as domestic chicks or ducklings, learn to recognize their social partner by simply being exposed to it (imprinting process). Even artificial objects or stimuli displayed on monitor screens can effectively trigger filial imprinting, though learning is canalized by spontaneous preferences for animacy signals, such as certain kinds of motion or a face-like appearance. Imprinting is used as a behavioural paradigm for studies on memory formation, early learning and predispositions, as well as number and space cognition, and brain asymmetries. Here, we present an automatized setup to expose and/or test animals for a variety of imprinting experiments. The setup consists of a cage with two high-frequency screens at the opposite ends where stimuli are shown. Provided with a camera covering the whole space of the cage, the behaviour of the animal is recorded continuously. A graphic user interface implemented in Matlab allows a custom configuration of the experimental protocol, which, together with Psychtoolbox, drives the presentation of images on the screens with accurate time scheduling and a highly precise framerate. The setup can be implemented into a complete workflow to analyse behaviour in a fully automatized way by combining Matlab (and Psychtoolbox) to control the monitor screens and stimuli, DeepLabCut to track animals' behaviour, and Python (and R) to extract data and perform statistical analyses. The automated setup allows neuro-behavioural scientists to perform standardized protocols during their experiments, with faster data collection and analyses, and reproducible results.
37

Leth-Olsen, M., G. Doehlen, H. Torp, and S. A. Nyrnes. "Monitoring of cerebral high intensity transient signals during catheter interventions and surgery for congenital heart disease in infants using NeoDoppler." European Heart Journal - Cardiovascular Imaging 22, Supplement_1 (January 1, 2021). http://dx.doi.org/10.1093/ehjci/jeaa356.403.

Full text of the source
Abstract:
Abstract Funding Acknowledgements Type of funding sources: Public grant(s) – National budget only. Main funding source(s): The Joint Research Committee between St. Olavs Hospital and the Faculty of Medicine, NTNU; The Norwegian Association for Children with Congenital Heart Disease Research Foundation, FFHB. Background: There is a risk of gaseous and solid microembolus formation during transcatheter procedures (CATH) and surgery in children with congenital heart disease (CHD). Silent strokes during surgery or CATH may contribute to neurological impairment. NeoDoppler is a non-invasive ultrasound system based on plane wave transmissions to continuously monitor cerebral blood flow in infants with an open fontanelle. Gaseous and solid microemboli passing through the ultrasound beam create High Intensity Transient Signals (HITS) in the Doppler signal. Purpose: We aimed to study the amount of HITS during CATH and surgery in infants using NeoDoppler. Methods: The NeoDoppler probe operates at a frequency of 7.8 MHz. The frame rate is 300 fps and the beam covers a wide cylindrical area (10/35 mm width/depth). The system displays a color M-mode Doppler and a spectrogram. The broad ultrasound beam permits prolonged scanning time of each event as the HITS move through the ultrasound beam. The high framerate and color M-mode allow for tracking of emboli in depth. In this study the NeoDoppler probe was attached to the anterior fontanelle of infants with CHD during CATH (n = 15) and cardiac surgery (n = 13). HITS were defined as high intensity signals creating skewed lines in the color M-mode Doppler, moving away from or towards the probe (blue/red), with a corresponding high intensity signal in the spectrogram. HITS were grouped into single HITS and HITS with curtain effect. Single HITS were defined as single skewed lines in the color M-mode Doppler and spectrogram. HITS with curtain effect were defined as skewed broad lines or multiple intensity-increase lines in the color M-mode Doppler with a corresponding intensity increase that filled the entire Doppler curve. HITS with curtain effect are believed to represent numerous HITS that could not be separated from each other in the spectrogram. HITS were manually detected in an in-house MatLab application. Results: The study group consisted of 28 infants (17 males) with different CHD who underwent CATH or surgery. The median age and weight were 96 days (range 3-240 days) and 5650 g (range 2400-8085 g). HITS were detected in 13/15 patients during CATH, with a total of 392 HITS (median 12, range 0-149), and in all patients during surgery, with a total of 772 HITS (median 45, range 11-150). The picture shows examples of single HITS (panel A) and HITS with curtain effect (panel B). One can appreciate the embolic trajectory pattern in depth over time in the color M-mode display. Conclusion: In this study we found that NeoDoppler enables detection of frequent HITS in patients with CHD undergoing surgery or CATH. NeoDoppler could become a useful tool to guide modifications of procedures, with the aim of reducing the risk of silent stroke. However, further studies are needed to validate the technique. Abstract Figure.
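The abstract notes that HITS were detected manually. A minimal sketch of how single HITS might instead be flagged automatically, by thresholding transient intensity peaks in the spectrogram against a robust background level, is shown below. The function `detect_hits`, its decibel threshold, and the gap-merging heuristic are illustrative assumptions, not the authors' in-house MatLab application:

```python
import numpy as np

def detect_hits(spectrogram_db, threshold_db=12.0, min_gap=5):
    """Flag High Intensity Transient Signals (HITS) in a Doppler
    spectrogram (time x frequency, in dB): time steps whose peak
    power exceeds the background estimate by `threshold_db`.
    Flagged runs closer than `min_gap` samples are merged into
    one event; returns a list of (start, end) sample indices."""
    # Per-time-step peak intensity across all Doppler frequencies.
    peak = spectrogram_db.max(axis=1)
    # Robust background level: median over the whole recording.
    background = np.median(peak)
    above = peak - background > threshold_db
    events, start, gap = [], None, 0
    for i, flag in enumerate(above):
        if flag:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap > min_gap:
                events.append((start, i - gap))
                start, gap = None, 0
    if start is not None:
        events.append((start, len(above) - 1 - gap))
    return events
```

Curtain-effect events, which fill the entire Doppler trace, would need an additional criterion such as the fraction of frequency bins above threshold.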
38

Maassen, Ken, Farzad Poursadegh, and Caroline Genzale. "Spectral Microscopy Imaging System for High-Resolution and High-Speed Imaging of Fuel Sprays." Journal of Engineering for Gas Turbines and Power 142, no. 9 (August 24, 2020). http://dx.doi.org/10.1115/1.4048057.

Full text of the source
Abstract:
Abstract Modern high-efficiency engines utilize direct injection for charge preparation at extremely high pressures. At these conditions, the scales of atomization become challenging to measure, as primary breakup occurs on micrometer and nanosecond scales. As such, fuel sprays at these conditions have proven difficult to study via direct imaging. While high-speed cameras now exist that can shutter at tens to hundreds of nanoseconds, and long-range microscopes can be coupled to these cameras to provide high-resolution images, the resolving power of these systems is typically limited by pixel size and field of view (FOV). The large pixel sizes make the realization of the diffraction-limited optical resolution quite challenging. On the other hand, limited data throughput under high repetition rate operation limits the FOV due to reduced sensor area. Therefore, a novel measurement technique is critical to study fuel spray formation at engine-relevant conditions. In this work, we demonstrate a new high-resolution imaging technique, spectral microscopy, which aims to realize diffraction-limited imaging at effective framerates sufficient for capturing primary breakup in engine-relevant sprays. A spectral microscopy system utilizing a consumer-grade DSLR allows for a significantly wider FOV with improved resolving power compared to high-speed cameras. Temporal shuttering is accomplished via separate and independently triggered back-illumination sources, with wavelengths selected to overlap with the detection bands of the camera sensor's RGB filter array. The RGB detection channels act as filters to capture independently timed red, green, and blue light pulses, enabling the capture of three consecutive images at effective framerates exceeding 20 × 10⁶ fps. To optimize system performance, a backlit illumination system is designed to maximize light throughput, a multilens setup is created, and an image-processing algorithm is demonstrated that formulates a three-frame image from the camera sensor. The system capabilities are then demonstrated by imaging engine-relevant diesel sprays. The spectral microscopy system detailed in this paper allows for micron-scale feature recognition at framerates exceeding 20 × 10⁶ fps, thus expanding the capability for experimental research on primary breakup in fuel sprays for modern direct-injection engines.
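The core channel-separation step can be sketched as follows, assuming the three pulses map onto the sensor's three colour channels with negligible or separately measured crosstalk. The function name and the `crosstalk` unmixing matrix are illustrative assumptions, not the paper's actual image-processing algorithm:

```python
import numpy as np

def unmix_frames(rgb_image, crosstalk=None):
    """Recover three time-ordered frames from a single RGB exposure.
    Each illumination pulse (red, green, blue in firing order) is
    captured mainly by the matching colour channel. `crosstalk` is an
    optional 3x3 matrix C with C[i, j] = response of channel i to
    pulse j, measured once per setup; if given, the channel mixing
    is inverted to undo spectral crosstalk."""
    h, w = rgb_image.shape[:2]
    channels = rgb_image.reshape(-1, 3).T.astype(np.float64)  # 3 x N
    if crosstalk is not None:
        channels = np.linalg.solve(crosstalk, channels)
    frames = channels.reshape(3, h, w)
    return [frames[k] for k in range(3)]
```

With an identity crosstalk matrix this reduces to plain channel splitting; in practice the matrix would be measured by firing each pulse alone.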
39

Sawall, Stefan, Jan Beckendorf, Carlo Amato, Joscha Maier, Johannes Backs, Greetje Vande Velde, Marc Kachelrieß, and Jan Kuntz. "Coronary micro-computed tomography angiography in mice." Scientific Reports 10, no. 1 (October 8, 2020). http://dx.doi.org/10.1038/s41598-020-73735-4.

Full text of the source
Abstract:
Abstract Coronary computed tomography angiography is an established technique in clinical practice and a valuable tool in the diagnosis of coronary artery disease in humans. Imaging of coronaries in preclinical research, i.e. in small animals, is very difficult due to the high demands on spatial and temporal resolution. Mice exhibit heart rates of up to 600 beats per minute, motivating the need for the highest detector framerates, while the coronaries show diameters below 100 μm, indicating the requirement for the highest spatial resolution. We herein use a custom-built micro-CT equipped with dedicated reconstruction algorithms to illustrate that coronary imaging in mice is possible. The scanner provides a spatial and temporal resolution sufficient for imaging of the smallest moving anatomical structures, and the dedicated reconstruction algorithms reduce radiation dose to less than 1 Gy but do not yet allow for longitudinal studies. Imaging studies were performed in ten mice administered with a blood-pool contrast agent. Results show that the course of the left coronary artery can be visualized in all mice and all major branches can be identified for the first time using micro-CT. This reduces the gap in cardiac imaging between clinical practice and preclinical research.
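Cardiac micro-CT at such heart rates typically relies on retrospective gating: sorting projections into cardiac-phase bins before reconstruction. The sketch below illustrates that binning step under the assumption that R-peak times are available; it is an illustrative simplification, not the authors' dedicated reconstruction algorithm:

```python
import numpy as np

def bin_projections_by_phase(timestamps, r_peaks, n_bins=10):
    """Retrospective cardiac gating: assign each projection to a
    cardiac-phase bin (0..n_bins-1), where phase is the fraction of
    the R-R interval elapsed at acquisition time.
    timestamps: sorted projection times; r_peaks: sorted R-peak times
    spanning the acquisition."""
    bins = np.empty(len(timestamps), dtype=int)
    for i, t in enumerate(timestamps):
        # Index of the last R-peak at or before this projection.
        k = np.searchsorted(r_peaks, t, side="right") - 1
        k = np.clip(k, 0, len(r_peaks) - 2)
        phase = (t - r_peaks[k]) / (r_peaks[k + 1] - r_peaks[k])
        bins[i] = min(int(np.clip(phase, 0.0, 1.0) * n_bins), n_bins - 1)
    return bins
```

Each bin's projections would then be reconstructed separately, yielding a quasi-static image per cardiac phase.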
40

Blignaut, Pieter. "Idiosyncratic Feature-Based Gaze Mapping." Journal of Eye Movement Research 9, no. 3 (April 21, 2016). http://dx.doi.org/10.16910/jemr.9.3.2.

Full text of the source
Abstract:
It is argued that the polynomial expressions normally used for remote, video-based, low-cost eye tracking systems are not always ideal to accommodate individual differences in eye cleft, position of the eye in the socket, corneal bulge, astigmatism, etc. A procedure to identify a set of polynomial expressions that will provide the best possible accuracy for a specific individual is proposed. It is also proposed that regression coefficients are recalculated in real time, based on a subset of calibration points in the region of the current gaze, and that a real-time correction is applied, based on the offsets from calibration targets that are close to the estimated point of regard. It was found that if no correction is applied, the choice of polynomial is critically important to get an accuracy that is just acceptable. Previously identified polynomial sets were confirmed to provide good results in the absence of any correction procedure. By applying real-time correction, the accuracy of any given polynomial improves while the choice of polynomial becomes less critical. Identification of the best polynomial set per participant, in combination with the aforementioned correction techniques, led to an average error of 0.32° (sd = 0.10°) over 134 participant recordings. The proposed improvements could lead to low-cost systems that are accurate and fast enough to do reading research or other studies where high accuracy is expected at framerates in excess of 200 Hz.
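The kind of polynomial gaze mapping the article evaluates can be sketched as a least-squares fit from eye-feature coordinates to screen coordinates. The term set below, [1, x, y, xy, x², y²], is one common choice (the article's point is precisely that the best set varies per participant), and the function names are illustrative:

```python
import numpy as np

def _design(eye_xy):
    """Second-order polynomial design matrix for eye-feature points
    (e.g. pupil-glint vectors), shape (N, 2) -> (N, 6)."""
    x, y = eye_xy[:, 0], eye_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_gaze_polynomial(eye_xy, screen_xy):
    """Least-squares fit of the polynomial mapping from calibration
    data; returns a (6, 2) coefficient matrix, one column per
    screen axis."""
    coeffs, *_ = np.linalg.lstsq(_design(eye_xy), screen_xy, rcond=None)
    return coeffs

def map_gaze(coeffs, eye_xy):
    """Apply fitted coefficients to new eye-feature samples."""
    return _design(eye_xy) @ coeffs
```

The article's real-time refinement would amount to re-running `fit_gaze_polynomial` on the calibration points nearest the current gaze estimate, then subtracting the local residual offsets.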