Journal articles on the topic 'Vision-based Force Sensing'

Consult the top 45 journal articles for your research on the topic 'Vision-based Force Sensing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and the bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Chawda, Vinay, and Marcia K. O'Malley. "Vision-based force sensing for nanomanipulation." IEEE/ASME Transactions on Mechatronics 16, no. 6 (December 2011): 1177–83. http://dx.doi.org/10.1109/tmech.2010.2093535.

2

Reddy, Annem Narayana, Nandan Maheshwari, Deepak Kumar Sahu, and G. K. Ananthasuresh. "Miniature Compliant Grippers With Vision-Based Force Sensing." IEEE Transactions on Robotics 26, no. 5 (October 2010): 867–77. http://dx.doi.org/10.1109/tro.2010.2056210.

3

Adam, Georges, and David J. Cappelleri. "Towards a real-time 3D vision-based micro-force sensing probe." Journal of Micro-Bio Robotics 16, no. 1 (January 11, 2020): 23–32. http://dx.doi.org/10.1007/s12213-019-00122-2.

4

Ye, X. W., C. Z. Dong, and T. Liu. "Force monitoring of steel cables using vision-based sensing technology: methodology and experimental verification." Smart Structures and Systems 18, no. 3 (September 25, 2016): 585–99. http://dx.doi.org/10.12989/sss.2016.18.3.585.

5

Huang, Xiaoqian, Rajkumar Muthusamy, Eman Hassan, Zhenwei Niu, Lakmal Seneviratne, Dongming Gan, and Yahya Zweiri. "Neuromorphic Vision Based Contact-Level Classification in Robotic Grasping Applications." Sensors 20, no. 17 (August 21, 2020): 4724. http://dx.doi.org/10.3390/s20174724.

Abstract:
In recent years, robotic sorting has been widely used in industry, driven by both necessity and opportunity. In this paper, a novel neuromorphic vision-based tactile sensing approach for robotic sorting applications is proposed. This approach has low latency and low power consumption compared to conventional vision-based tactile sensing techniques. Two machine learning (ML) methods, namely Support Vector Machine (SVM) and Dynamic Time Warping-K Nearest Neighbor (DTW-KNN), are developed to classify material hardness, object size, and grasping force. An Event-Based Object Grasping (EBOG) experimental setup is developed to acquire datasets, where 243 experiments are produced to train the proposed classifiers. Based on the classifiers' predictions, objects can be sorted automatically. If the prediction accuracy is below a certain threshold, the gripper re-adjusts and re-grasps until a proper grasp is reached. The proposed ML methods achieve good prediction accuracy, which shows the effectiveness and applicability of the proposed approach. The experimental results show that the developed SVM model outperforms the DTW-KNN model in terms of accuracy and efficiency for real-time contact-level classification.
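The classifiers named here are standard techniques. As a rough illustration of the DTW-KNN idea (not the authors' code), a minimal Python sketch over 1-D event-rate signals might look like this; all names and the choice of k are placeholders.

```python
# Minimal DTW-KNN sketch for 1-D tactile time series; illustrative only.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_knn_predict(query, train_signals, train_labels, k=3):
    """Label a query signal by majority vote among its k DTW-nearest neighbours."""
    dists = [dtw_distance(query, s) for s in train_signals]
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

An SVM over summary features of the same signals would be the natural baseline to compare against, as the abstract reports.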
6

Sun, Huanbo, Katherine J. Kuchenbecker, and Georg Martius. "A soft thumb-sized vision-based sensor with accurate all-round force perception." Nature Machine Intelligence 4, no. 2 (February 2022): 135–45. http://dx.doi.org/10.1038/s42256-021-00439-3.

Abstract:
Vision-based haptic sensors have emerged as a promising approach to robotic touch due to affordable high-resolution cameras and successful computer vision techniques; however, their physical design and the information they provide do not yet meet the requirements of real applications. We present a robust, soft, low-cost, vision-based, thumb-sized three-dimensional haptic sensor named Insight, which continually provides a directional force-distribution map over its entire conical sensing surface. Constructed around an internal monocular camera, the sensor has only a single layer of elastomer over-moulded on a stiff frame to guarantee sensitivity, robustness and soft contact. Furthermore, Insight uniquely combines photometric stereo and structured light using a collimator to detect the three-dimensional deformation of its easily replaceable flexible outer shell. The force information is inferred by a deep neural network that maps images to the spatial distribution of three-dimensional contact force (normal and shear). Insight has an overall spatial resolution of 0.4 mm, a force magnitude accuracy of around 0.03 N and a force direction accuracy of around five degrees over a range of 0.03–2 N for numerous distinct contacts with varying contact area. The presented hardware and software design concepts can be transferred to a wide variety of robot parts.
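The abstract specifies only that a deep network maps camera images to a spatial map of three-dimensional contact force. A toy PyTorch stand-in with made-up layer sizes (not Insight's actual architecture) conveys the input/output structure:

```python
# Toy image-to-force-map network: 3-channel camera image in,
# 3-channel (normal + two shear) force map out. Sizes are invented.
import torch
import torch.nn as nn

class ForceMapNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 3, 1)  # per-location force vector

    def forward(self, img):
        return self.head(self.encoder(img))

force_map = ForceMapNet()(torch.randn(1, 3, 64, 64))  # shape (1, 3, 16, 16)
```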
7

Jo, Kensei, Yasuaki Kakehi, Kouta Minamizawa, Katsunari Sato, Hideaki Nii, Naoki Kawakami, and Susumu Tachi. "1P1-I08 A Basic Study on Vision-Based Force Vector Sensing with Movable Input Surface." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2008 (2008): _1P1—I08_1—_1P1—I08_2. http://dx.doi.org/10.1299/jsmermd.2008._1p1-i08_1.

8

Liu, Lijun, Xian Yi Li, and Hong Ming Gao. "The Friction Interaction Mechanism of Welding Seam Identifying Based on Force Sensing." Advanced Materials Research 291-294 (July 2011): 995–98. http://dx.doi.org/10.4028/www.scientific.net/amr.291-294.995.

Abstract:
Welding seam identification (WSI) is a precondition for remote welding. The welding seam is usually identified with a vision sensor, and investigations of WSI based on force sensing are rarely reported. Because the interaction mechanism of friction within the six-dimensional (6D) force used for WSI is not clear, the influence of friction on the WSI feed and direction is studied. The experimental results show that friction decreases the WSI feed in the X-Y plane and increases it in the Z direction, but both remain within the range permitted by WSI. Friction makes the WSI feed direction point toward the middle of the welding seam track and does not hinder WSI. With these techniques, WSI of an S-shaped groove is achieved. The average deviation of WSI is less than ±0.5 mm when friction is present in the 6D force, which meets the precision required for WSI in remote welding.
9

Min, Kyung-Won, Seok-Jung Jang, and Junhee Kim. "A Standalone Vision Sensing System for Pseudodynamic Testing of Tuned Liquid Column Dampers." Journal of Sensors 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/8152651.

Abstract:
Experimental investigation of the tuned liquid column damper (TLCD) is a primary factory task prior to its installation on site and is mainly undertaken by a pseudodynamic test. In this study, a noncontact standalone vision sensing system is developed to replace the series of conventional sensors installed on the TLCD under test. The fast vision sensing system is based on binary pixel counting of a portion of the images streamed in a pseudodynamic test and achieves near real-time measurements of the wave height, lateral motion, and control force of the TLCD. The versatile measurements of the system are theoretically and experimentally evaluated through a wide range of lab-scale dynamic tests.
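Binary pixel counting itself is a one-line image operation. A hedged sketch, assuming a dark liquid column against a bright background and a hypothetical pixel-to-millimetre calibration, is given below.

```python
# Wave-height estimate by counting binarized 'liquid' rows in a column ROI.
# Threshold choice and scale factor are assumptions, not the paper's values.
import cv2
import numpy as np

MM_PER_PIXEL = 0.5  # hypothetical calibration from a reference target

def wave_height_mm(frame_gray, x0, x1):
    roi = frame_gray[:, x0:x1]  # vertical strip covering the liquid column
    _, binary = cv2.threshold(roi, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    liquid_rows = np.count_nonzero(binary.max(axis=1))  # rows containing liquid
    return liquid_rows * MM_PER_PIXEL
```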
10

Huang, Shouren, Niklas Bergström, Yuji Yamakawa, Taku Senoo, and Masatoshi Ishikawa. "Robotic Contour Tracing with High-Speed Vision and Force-Torque Sensing based on Dynamic Compensation Scheme." IFAC-PapersOnLine 50, no. 1 (July 2017): 4616–22. http://dx.doi.org/10.1016/j.ifacol.2017.08.654.

11

Sadak, Ferhat, Mozafar Saadat, and Amir Hajiyavand. "Vision-Based Sensor for Three-Dimensional Vibrational Motion Detection in Biological Cell Injection." Sensors 19, no. 23 (November 20, 2019): 5074. http://dx.doi.org/10.3390/s19235074.

Abstract:
Intracytoplasmic sperm injection (ICSI) is an infertility treatment where a single sperm is immobilised and injected into the egg using a glass injection pipette. Minimising vibration in three orthogonal axes is essential to have precise injector motion and full control during the egg injection procedure. Vibration displacement sensing using physical sensors in ICSI operation is challenging since sensor interfacing is not practically feasible. This study proposes a non-invasive technique to measure the three-dimensional vibrational motion of the injection pipette with a single microscope camera during egg injection. The contrast-limited adaptive histogram equalization (CLAHE) method and a blob analysis technique were employed to measure the vibration displacement in the axial and lateral axes, while the actual dimension of the focal axis was measured directly using the Brenner gradient algorithm as a focus measurement algorithm. The proposed algorithm operates over a magnification range of 4× to 40× with a resolution of half a pixel. Experiments using the proposed vision-based algorithm were conducted to measure and verify the vibration displacement in the axial and lateral axes at various magnifications. The results were compared against manual procedures and the differences in measurements were up to 2% across all magnifications. Additionally, the effect of injection speed on lateral vibration displacement was measured experimentally and was used to determine values for egg deformation, force fluctuation, and penetration force. It was shown that increases in injection speed significantly increase the lateral vibration displacement of the injection pipette, by as much as 54%. It was demonstrated successfully that visual sensing plays a key role in identifying the limit on egg injection speed imposed by lateral vibration displacement of the injection pipette tip.
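The Brenner gradient used for the focal axis is a standard focus measure; a compact sketch (not the authors' implementation) with CLAHE preprocessing follows.

```python
# Brenner focus measure with CLAHE preprocessing; generic sketch.
import cv2
import numpy as np

def brenner_focus(gray):
    """Sum of squared intensity differences two pixels apart."""
    diff = gray[:, 2:].astype(np.float64) - gray[:, :-2].astype(np.float64)
    return float((diff ** 2).sum())

def preprocess(gray):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

# The in-focus frame is the one maximizing brenner_focus(preprocess(frame)).
```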
12

Sferrazza, Carmelo, and Raffaello D’Andrea. "Design, Motivation and Evaluation of a Full-Resolution Optical Tactile Sensor." Sensors 19, no. 4 (February 22, 2019): 928. http://dx.doi.org/10.3390/s19040928.

Abstract:
Human skin is capable of sensing various types of forces with high resolution and accuracy. The development of an artificial sense of touch needs to address these properties, while retaining scalability to large surfaces with arbitrary shapes. The vision-based tactile sensor proposed in this article exploits the extremely high resolution of modern image sensors to reconstruct the normal force distribution applied to a soft material, whose deformation is observed on the camera images. By embedding a random pattern within the material, the full resolution of the camera can be exploited. The design and the motivation of the proposed approach are discussed with respect to a simplified elasticity model. An artificial deep neural network is trained on experimental data to perform the tactile sensing task with high accuracy for a specific indenter, and with a spatial resolution and a sensing range comparable to the human fingertip.
13

Haraguchi, Rintaro, Yukiyasu Domae, Koji Shiratsuchi, Yasuo Kitaaki, Haruhisa Okuda, Akio Noda, Kazuhiko Sumi, Takayuki Matsuno, Shun’ichi Kaneko, and Toshio Fukuda. "Development of Production Robot System that can Assemble Products with Cable and Connector." Journal of Robotics and Mechatronics 23, no. 6 (December 20, 2011): 939–50. http://dx.doi.org/10.20965/jrm.2011.p0939.

Abstract:
To realize automatic robot-based assembly of electrical and electronic products, we developed handling techniques for cables with connectors - flexible goods that are an obstacle to automation. The element technologies we developed include 3D vision sensing for cable extraction, force control for connector insertion, error recovery for improving system stability, and task-level programming for quick system start-up. We assembled FA control equipment to verify the feasibility of our developments.
14

Chen, Meng, Feng Chen, Wen Zhou, and Ruoyu Zuo. "Design of Flexible Spherical Fruit and Vegetable Picking End-effector Based on Vision Recognition." Journal of Physics: Conference Series 2246, no. 1 (April 1, 2022): 012060. http://dx.doi.org/10.1088/1742-6596/2246/1/012060.

Abstract:
In order to solve the problems of poor versatility and the high clamping-damage rate faced by the end-effectors of today's picking robots, a universal spherical fruit and vegetable picking end-effector based on vision recognition with adaptive flexible force clamping is designed. The end-effector has a pneumatic three-finger structure and integrates a small air-source device under the control of a variety of sensors. Through the fusion of multiple sensing modalities and algorithmic improvements, the end-effector determines the type of fruit or vegetable by visual recognition during picking and then applies the optimal gripping force. A pressure sensor at the fingertip adjusts the pressure output in real time so that the gripping pressure remains constant throughout the picking process, and a torque sensor judges the completion of picking from the twisting force applied to the fruit. Finally, an actual prototype was built and several fruit and vegetable simulation experiments were carried out. The experimental results are good, verifying that the end-effector has good versatility and flexibility.
15

Chen, Tao, Kejian Ni, Minglu Zhu, and Lining Sun. "Microforce Sensing and Flexible Assembly Method for Key Parts of ICF Microtargets." Actuators 12, no. 1 (December 20, 2022): 1. http://dx.doi.org/10.3390/act12010001.

Abstract:
Microassembly is one of the key techniques in various advanced industrial applications. Meanwhile, achieving high success rates for axial hole assembly of thin-walled deep-cavity parts remains a challenging issue. Hence, a flexible assembly approach for thin-walled deep-cavity parts is investigated in this study, using the assembly of the key components of ICF (inertial confinement fusion) research, the microtarget component TMP (thermomechanical package) and the hohlraum, as examples. A clamping force-assembly force mapping model based on multisource microforce sensors was developed to overcome the inability of microscopic vision to properly identify the condition of components after contact. An ICF microtarget flexible assembly system, which integrates multisource microforce sensing and a six-degrees-of-freedom micromotion sliding table, is presented to address the limitation that standard microassembly approaches are difficult to operate once the parts are in contact. This method can detect contact forces down to the mN level, efficiently correct deviations in component posture, and achieve nondestructive ICF microtarget assembly.
16

Adam, Georges, Subramanian Chidambaram, Sai Swarup Reddy, Karthik Ramani, and David J. Cappelleri. "Towards a Comprehensive and Robust Micromanipulation System with Force-Sensing and VR Capabilities." Micromachines 12, no. 7 (June 30, 2021): 784. http://dx.doi.org/10.3390/mi12070784.

Abstract:
With the increasing complexity of many modern technologies, especially at the micro- and nanoscale, the field of robotic manipulation has grown tremendously. Microrobots and other complex microscale systems are often too laborious to fabricate using standard microfabrication techniques, so there is a trend towards fabricating them in parts and then assembling them together, mainly using micromanipulation tools. Here, a comprehensive and robust micromanipulation platform is presented, in which four micromanipulators can be used simultaneously to perform complex tasks, providing the user with an intuitive environment. The system utilizes a vision-based force sensor to aid with manipulation tasks and provides a safe environment for biomanipulation. Lastly, virtual reality (VR) was incorporated into the system, allowing the user to control the probes from a more intuitive standpoint and providing an immersive platform for the future of micromanipulation.
17

Zhao, Yinan, Feng Gao, Yue Zhao, and Zhijun Chen. "Peg-in-Hole Assembly Based on Six-Legged Robots with Visual Detecting and Force Sensing." Sensors 20, no. 10 (May 18, 2020): 2861. http://dx.doi.org/10.3390/s20102861.

Abstract:
Manipulators with multiple degrees of freedom (DOF) are widely used for peg-in-hole tasks. Compared with manipulators, six-legged robots have better mobility in addition to completing operational tasks. However, there are almost no previous studies of six-legged robots performing the peg-in-hole task. In this article, a peg-in-hole approach for six-legged robots is studied and tested experimentally on a six-parallel-legged robot. First, we propose a method whereby a vision sensor and a force/torque (F/T) sensor are used to determine the relative location of the hole and the peg. Based on the visual information, the robot approaches the hole. Next, based on the force feedback, the robot plans its trajectory in real time to mate the peg with the hole. Then, during insertion, admittance control is implemented to guarantee smooth insertion. Throughout the assembly process, the peg is held by the gripper attached to the robot body; connected to the body, the peg has sufficient workspace and six DOF to perform the assembly task. Finally, experiments were conducted to prove the suitability of the approach.
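The abstract does not give the admittance law; a common discrete one-axis formulation, M x'' + B x' + K x = f_ext, with invented gains, is sketched below to illustrate force-compliant insertion.

```python
# One-axis discrete admittance filter; gains and time step are illustrative.
class Admittance1D:
    def __init__(self, M=1.0, B=50.0, K=200.0, dt=0.001):
        self.M, self.B, self.K, self.dt = M, B, K, dt
        self.x = 0.0   # compliant position offset
        self.xd = 0.0  # its velocity

    def step(self, f_ext):
        """Integrate the virtual mass-spring-damper one time step."""
        xdd = (f_ext - self.B * self.xd - self.K * self.x) / self.M
        self.xd += xdd * self.dt
        self.x += self.xd * self.dt
        return self.x  # offset added to the nominal insertion trajectory
```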
18

Mennella, Ciro, Susanna Alloisio, Antonio Novellino, and Federica Viti. "Characteristics and Applications of Technology-Aided Hand Functional Assessment: A Systematic Review." Sensors 22, no. 1 (December 28, 2021): 199. http://dx.doi.org/10.3390/s22010199.

Abstract:
Technology-aided hand functional assessment has received considerable attention in recent years. Its applications are required to obtain objective, reliable, and sensitive methods for clinical decision making. This systematic review aims to investigate and discuss the characteristics of technology-aided hand functional assessment and its applications, in terms of the adopted sensing technology, evaluation methods, and purposes. Based on the shortcomings of current applications and the opportunities offered by emerging systems, this review aims to support the design of technology-aided hand functional assessment and its translation to clinical practice. To this end, a systematic literature search was conducted, according to the recommended PRISMA guidelines, in the PubMed and IEEE Xplore databases. The search yielded 208 records, resulting in 23 articles included in the study. Glove-based systems, instrumented objects, and body-networked sensor systems emerged from the search, together with vision-based motion capture systems, end-effector systems, and exoskeleton systems. Inertial measurement units (IMU) and force sensing resistors (FSR) were the sensing technologies most used for kinematic and kinetic analysis. A lack of standardization in system metrics and assessment methods emerged. Future studies that pertinently discuss the pathophysiological content and clinimetric properties of new systems are required for leading these technologies to clinical acceptance.
19

Xu, Baochun, Yu Wang, Haoao Cui, Haoran Niu, Yijian Liu, Zhongli Li, and Da Chen. "Full Soft Capacitive Omnidirectional Tactile Sensor Based on Micro-Spines Electrode and Hemispheric Dielectric Structure." Biosensors 12, no. 7 (July 10, 2022): 506. http://dx.doi.org/10.3390/bios12070506.

Abstract:
Flourishing in recent years, intelligent electronics are eagerly pursued in many fields, including bio-symbiotics, human physiology regulation, robot operation, and human–computer interaction. To support this appealing vision, human-like tactile perception is urgently needed for dexterous object manipulation. In particular, real-time force perception that captures strength and orientation simultaneously is critical for intelligent electronic skin. However, it is still very challenging to achieve directional tactile sensing that has excellent properties and, at the same time, is feasible to scale up. Here, a fully soft capacitive omnidirectional tactile (ODT) sensor was developed based on an MWCNT-coated stripe electrode and an Ecoflex hemisphere-array dielectric. A theoretical analysis of this structure for omnidirectional force detection was conducted by finite element simulation. Combined with the micro-spines and the hemispheric-hills dielectric structure, this sensing structure achieves omnidirectional detection with high sensitivity (0.306 ± 0.001 kPa⁻¹ under 10 kPa) and a wide response range (2.55 Pa to 160 kPa). Moreover, to overcome the inherent disunity of flexible sensor units built from nanomaterials and polymers, machine learning approaches were introduced as a prospective technical route to recognize various loading angles, finally achieving more than 99% recognition accuracy. The practical validity of the design was demonstrated by the detection of human motion, physiological activities, and the gripping of a cup, showing great potential as a tactile e-skin for digital medicine and soft robotics.
20

Zhai, Zhiqiang, Zuohui Jin, and Ruoyu Zhang. "Information integration of force sensing and machine vision for in‐shell shrivelled walnut detection based on the golden‐section search optimal discrimination threshold." Journal of the Science of Food and Agriculture 99, no. 8 (March 18, 2019): 3941–49. http://dx.doi.org/10.1002/jsfa.9618.
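Only the citation is available here, but the golden-section search named in the title is a textbook routine for optimizing a scalar threshold. A generic maximization sketch, with a placeholder scoring function, is:

```python
# Golden-section search for the threshold maximizing a unimodal score;
# the score function (e.g., classification accuracy) is a placeholder.
import math

def golden_section_max(score, lo, hi, tol=1e-3):
    inv_phi = (math.sqrt(5) - 1) / 2  # ~0.618
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if score(c) > score(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2
```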

21

Kobayashi, Yuichi, and Takahiro Nomura. "Learning of Obstacle Avoidance with Redundant Manipulator by Hierarchical SOM." Journal of Advanced Computational Intelligence and Intelligent Informatics 15, no. 5 (July 20, 2011): 525–31. http://dx.doi.org/10.20965/jaciii.2011.p0525.

Abstract:
This paper proposes a method of obstacle avoidance motion generation for a redundant manipulator with a Self-Organizing Map (SOM) and reinforcement learning. To handle redundancy, two types of SOMs - a hand position map and a joint angle map - are combined. Multiple joint angles corresponding to the same hand position are memorized in the proposed map. The preserved redundant configuration information is used to generate motions based on tasks and situations, while resolving inverse kinematics problems for the redundant manipulator. The proposed map is applied to motion planning and control using reinforcement learning in an unknown environment, where collision with obstacles is detected only by tactile sensing. The feasibility of the proposed framework was verified by simulation and by experiments with an arm robot equipped with force and vision sensors.
22

Zhou, Hongyu, Jinhui Xiao, Hanwen Kang, Xing Wang, Wesley Au, and Chao Chen. "Learning-Based Slip Detection for Robotic Fruit Grasping and Manipulation under Leaf Interference." Sensors 22, no. 15 (July 22, 2022): 5483. http://dx.doi.org/10.3390/s22155483.

Abstract:
Robotic harvesting research has seen significant achievements in the past decade, with breakthroughs being made in machine vision, robot manipulation, autonomous navigation and mapping. However, the missing capability of handling obstacles during the grasping process has severely reduced the harvest success rate and limited the overall performance of robotic harvesting. This work focuses on the detection and handling of slip caused by leaf interference, and solutions for robotic grasping in an unstructured environment are proposed. Through analysis of the motion and force of fruit grasping under leaf interference, the connection between object slip caused by leaf interference and inadequate harvest performance is identified for the first time in the literature. A learning-based perception and manipulation method is proposed to detect slip that causes problematic grasps of objects, allowing the robot to react in a timely manner. Our results indicate that the proposed algorithm detects grasp slip with an accuracy of 94%. The proposed sensing-based manipulation demonstrated great potential in robotic fruit harvesting and could be extended to other pick-and-place applications.
23

Liu, Weiting, Guoshi Zhang, Binpeng Zhan, Liang Hu, and Tao Liu. "Fine Texture Detection Based on a Solid–Liquid Composite Flexible Tactile Sensor Array." Micromachines 13, no. 3 (March 14, 2022): 440. http://dx.doi.org/10.3390/mi13030440.

Abstract:
Surface texture information plays an important role in the cognition and manipulation of an object. Vision and touch are the two main methods for extracting an object's surface texture information. However, vision is often limited since the viewing angle is uncertain during manipulation. In this article, we propose a fine surface texture detection method based on a stochastic resonance algorithm and a novel solid–liquid composite flexible tactile sensor array. A thin flexible layer and a solid–liquid composite conduction structure on the sensor effectively reduce the attenuation of the contact force and enhance the sensitivity of the sensor. A series of ridge texture samples with different heights (0.9, 4, 10 μm) and widths (0.3, 0.5, 0.7, 1 mm) but the same spatial period of ridges (2 mm) was used in the experiments. The experimental results prove that the stochastic resonance algorithm can significantly improve the characteristics of the sensor's output signal. The sensor is capable of detecting fine ridge texture information: the mean relative error of the spatial-period estimate was 1.085%, and the ridge width and ridge height each have a monotonic mapping relationship with the corresponding model output parameters. The sensor's capability to sense fine textures surpasses the limit of human fingers.
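The abstract names a stochastic resonance algorithm without giving its form; the classic over-damped bistable model, sketched below with illustrative parameters, conveys the idea of letting noise amplify a weak periodic texture signal.

```python
# Over-damped bistable stochastic-resonance filter: x' = a*x - b*x**3 + s(t).
# Parameters are illustrative, not the paper's.
import numpy as np

def bistable_sr(signal, a=1.0, b=1.0, dt=1e-3):
    x = 0.0
    out = np.empty(len(signal))
    for i, s in enumerate(signal):
        x += (a * x - b * x ** 3 + s) * dt  # Euler step
        out[i] = x
    return out  # spectral peak at the texture's spatial frequency is enhanced
```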
24

Wen, Bor-Jiunn, and Che-Chih Yeh. "Automatic Fruit Harvesting Device Based on Visual Feedback Control." Agriculture 12, no. 12 (November 29, 2022): 2050. http://dx.doi.org/10.3390/agriculture12122050.

Abstract:
With aging populations and people's demand for high-quality or high-unit-price fruits and vegetables, the development of automatic fruit harvesting has attracted significant attention. According to the required operating functions, and based on the fruit planting environment and harvesting requirements, this study designed a harvesting mechanism that independently drives a gripper and scissors for their individual tasks, corresponding to forward or reverse rotation of a single motor. The study combined a robotic arm with the harvesting mechanism, supported by a single machine-vision component, to recognize fruits with deep-learning neural networks based on the YOLOv3-tiny algorithm. The study completed the coordinate positioning of the fruit using a two-dimensional visual sensing method (TVSM) to achieve image depth measurement. Finally, impedance control based on visual feedback from YOLOv3-tiny and the TVSM was used to grip the fruits according to their size and rigidity, so as to avoid gripping the fruits with excessive force; the apple-harvesting task was completed with a 3.6 N contact force for an apple weighing 235 g with a diameter of 80 mm. During the cutting process, the contact point of the metal scissors of the motor-driven mechanism provided a shear force of 9.9 N, significantly smaller than the simulation result of 94 N obtained with ADAMS and MATLAB software, even though the scissors were slightly blunted after many cuts. This study established an automatic fruit harvesting device based on visual feedback control, which can provide automatic and convenient fruit harvesting while reducing harvesting manpower.
25

Fagogenis, G., M. Mencattelli, Z. Machaidze, B. Rosa, K. Price, F. Wu, V. Weixler, M. Saeed, J. E. Mayer, and P. E. Dupont. "Autonomous robotic intracardiac catheter navigation using haptic vision." Science Robotics 4, no. 29 (April 24, 2019): eaaw1977. http://dx.doi.org/10.1126/scirobotics.aaw1977.

Abstract:
Although all minimally invasive procedures involve navigating from a small incision in the skin to the site of the intervention, it has not been previously demonstrated how this can be performed autonomously. To show that autonomous navigation is possible, we investigated it in the hardest place to do it—inside the beating heart. We created a robotic catheter that can navigate through the blood-filled heart using wall-following algorithms inspired by positively thigmotactic animals. The catheter uses haptic vision, a hybrid sense using imaging for both touch-based surface identification and force sensing, to accomplish wall following inside the blood-filled heart. Through in vivo animal experiments, we demonstrate that the performance of an autonomously controlled robotic catheter rivaled that of an experienced clinician. Autonomous navigation is a fundamental capability on which more sophisticated levels of autonomy can be built, e.g., to perform a procedure. Similar to the role of automation in a fighter aircraft, such capabilities can free the clinician to focus on the most critical aspects of the procedure while providing precise and repeatable tool motions independent of operator experience and fatigue.
26

Rouhafzay, Ghazal, Ana-Maria Cretu, and Pierre Payeur. "Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition." Sensors 21, no. 1 (December 27, 2020): 113. http://dx.doi.org/10.3390/s21010113.

Abstract:
Transfer of learning or leveraging a pre-trained network and fine-tuning it to perform new tasks has been successfully applied in a variety of machine intelligence fields, including computer vision, natural language processing and audio/speech recognition. Drawing inspiration from neuroscience research that suggests that both visual and tactile stimuli rouse similar neural networks in the human brain, in this work, we explore the idea of transferring learning from vision to touch in the context of 3D object recognition. In particular, deep convolutional neural networks (CNN) pre-trained on visual images are adapted and evaluated for the classification of tactile data sets. To do so, we ran experiments with five different pre-trained CNN architectures and on five different datasets acquired with different technologies of tactile sensors including BathTip, Gelsight, force-sensing resistor (FSR) array, a high-resolution virtual FSR sensor, and tactile sensors on the Barrett robotic hand. The results obtained confirm the transferability of learning from vision to touch to interpret 3D models. Due to its higher resolution, tactile data from optical tactile sensors was demonstrated to achieve higher classification rates based on visual features compared to other technologies relying on pressure measurements. Further analysis of the weight updates in the convolutional layer is performed to measure the similarity between visual and tactile features for each technology of tactile sensing. Comparing the weight updates in different convolutional layers suggests that by updating a few convolutional layers of a pre-trained CNN on visual data, it can be efficiently used to classify tactile data. Accordingly, we propose a hybrid architecture performing both visual and tactile 3D object recognition with a MobileNetV2 backbone. MobileNetV2 is chosen due to its smaller size and thus its capability to be implemented on mobile devices, such that the network can classify both visual and tactile data. An accuracy of 100% for visual and 77.63% for tactile data are achieved by the proposed architecture.
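As a rough illustration of the transfer setup described (not the authors' training code), freezing an ImageNet-pretrained MobileNetV2 backbone and swapping its classifier for tactile classes might look like this; the class count is a placeholder.

```python
# Fine-tuning sketch: reuse visual features, retrain the classifier on tactile data.
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights="IMAGENET1K_V1")
for p in model.features.parameters():
    p.requires_grad = False  # freeze the visual feature extractor
model.classifier[1] = nn.Linear(model.last_channel, 10)  # 10 tactile classes (assumed)
# Unfreezing the last few feature blocks mimics the partial updates studied above.
```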
27

Kim, Hyunseok, Yuchul Jung, and Yong K. Hwang. "Taxonomy of Atomic Actions for Home-Service Robots." Journal of Advanced Computational Intelligence and Intelligent Informatics 9, no. 2 (March 20, 2005): 114–20. http://dx.doi.org/10.20965/jaciii.2005.p0114.

Abstract:
In household environments, robots are expected to conduct many tasks. It is difficult, however, to write programs for all tasks beforehand due to task diversity and changing environmental conditions. One basic task in developing autonomous multifunctional robots is to define a set of basic robot actions that can be executed unambiguously and checked for completion. A task planner then uses these actions to accomplish the complex tasks that home-service robots are expected to do. This paper first proposes a set of tasks for first-generation home-service robots, then systematically decomposes them into sequences of smaller but meaningful actions called molecular actions. Molecular actions are then broken down into yet more primitive actions called atomic actions. Because vision, sound, range sensors, and force sensors are the main means of monitoring task progress and completion, atomic actions are classified based on the complexities and frequencies of the sensing algorithms used. The resulting taxonomy of atomic actions serves as a set of basic building blocks for a knowledge-based task planner. Its advantages are verified and demonstrated through experiments.
28

Wang, Enliang, Shengbo Hu, Hongwei Han, Yuang Li, Zhifeng Ren, and Shilin Du. "Ice Velocity in Upstream of Heilongjiang Based on UAV Low-Altitude Remote Sensing and the SIFT Algorithm." Water 14, no. 12 (June 18, 2022): 1957. http://dx.doi.org/10.3390/w14121957.

Abstract:
In river management, it is important to obtain ice velocity quickly and accurately during ice flood periods. However, traditional ice velocity monitoring methods require buoys, which are costly and inefficient to distribute. It was found that UAV remote sensing images combined with machine vision technology yielded obvious practical advantages in ice velocity monitoring. Current research has mainly monitored sea ice velocity through GPS or satellite remote sensing technology, with few reports available on river ice velocity monitoring. Moreover, traditional river ice velocity monitoring methods are subjective. To solve the problems of existing time-consuming and inaccurate ice velocity monitoring methods, a new ice velocity extraction method based on UAV remote sensing technology is proposed in this article. In this study, the Mohe River section in Heilongjiang Province was chosen as the research area. High-resolution orthoimages were obtained with a UAV during the ice flood period, and feature points in drift ice images were then extracted with the scale-invariant feature transform (SIFT) algorithm. Moreover, the extracted feature points were matched with the brute force (BF) algorithm. According to optimization results obtained with the random sample consensus (RANSAC) algorithm, the motion trajectories of these feature points were tracked, and an ice displacement rate field was finally established. The results indicated that the average ice velocities in the research area reached 2.00 and 0.74 m/s, and the maximum ice velocities on the right side of the river center were 2.65 and 1.04 m/s at 16:00 on 25 April 2021 and 8:00 on 26 April 2021, respectively. The ice velocity decreased from the river center toward the river banks. The proposed ice velocity monitoring technique and reported data in this study could provide an effective reference for the prediction of ice flood disasters.
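The described pipeline maps directly onto standard OpenCV calls. A condensed sketch follows; the frame interval and ground sampling distance are assumptions.

```python
# SIFT features -> brute-force matching -> RANSAC outlier rejection -> velocity.
import cv2
import numpy as np

def ice_velocities(img1, img2, m_per_px=0.05, dt_s=1.0):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC inliers
    good = mask.ravel().astype(bool)
    return (dst[good] - src[good]) * m_per_px / dt_s  # per-feature velocity, m/s
```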
29

Stansfield, Sharon A. "Haptic Perception with an Articulated, Sensate Robot Hand." Robotica 10, no. 6 (November 1992): 497–508. http://dx.doi.org/10.1017/s0263574700005828.

Abstract:
SUMMARY: In this paper we present a series of haptic exploratory procedures, or EPs, implemented for a multi-fingered, articulated, sensate robot hand. These EPs are designed to extract specific tactile and kinesthetic information from an object via their purposive invocation by an intelligent robotic system. Taken together, they form an active robotic touch perception system to be used both in extracting information about the environment for internal representation and in acquiring grasps for manipulation. The theory and structure of this robotic haptic system is based upon models of human haptic exploration and information processing. The haptic system presented utilizes an integrated robotic system consisting of a PUMA 560 robot arm, a JPL/Stanford robot hand with joint torque sensing in the fingers, a wrist force/torque sensor, and a 256-element, spatially-resolved fingertip tactile array. We describe the EPs implemented for this system and provide experimental results which illustrate how they function and how the information which they extract may be used. In addition to the sensate hand and arm, the robot also contains structured-lighting vision and a Prolog-based reasoning system capable of grasp generation and object categorization. We present a set of simple tasks which show how both grasping and recognition may be enhanced by the addition of active touch perception.
30

Wang, Xiaoye, G. K. Ananthasuresh, and James P. Ostrowski. "Vision-based sensing of forces in elastic objects." Sensors and Actuators A: Physical 94, no. 3 (November 2001): 142–56. http://dx.doi.org/10.1016/s0924-4247(01)00705-1.

31

Kuang, Winnie, Michael Yip, and Jun Zhang. "Vibration-Based Multi-Axis Force Sensing: Design, Characterization, and Modeling." IEEE Robotics and Automation Letters 5, no. 2 (April 2020): 3082–89. http://dx.doi.org/10.1109/lra.2020.2975726.

32

Zhang, Zhongkai, Jeremie Dequidt, and Christian Duriez. "Vision-Based Sensing of External Forces Acting on Soft Robots Using Finite Element Method." IEEE Robotics and Automation Letters 3, no. 3 (July 2018): 1529–36. http://dx.doi.org/10.1109/lra.2018.2800781.

33

Kim, Woojong, Won Dong Kim, Jeong-Jung Kim, Chang-Hyun Kim, and Jung Kim. "UVtac: Switchable UV Marker-Based Tactile Sensing Finger for Effective Force Estimation and Object Localization." IEEE Robotics and Automation Letters 7, no. 3 (July 2022): 6036–43. http://dx.doi.org/10.1109/lra.2022.3163450.

34

Li, Can, Ping Chen, Xin Xu, Xinyu Wang, and Aijun Yin. "A Coarse-to-Fine Method for Estimating the Axis Pose Based on 3D Point Clouds in Robotic Cylindrical Shaft-in-Hole Assembly." Sensors 21, no. 12 (June 12, 2021): 4064. http://dx.doi.org/10.3390/s21124064.

Abstract:
In this work, we propose a novel coarse-to-fine method for object pose estimation coupled with admittance control to promote robotic shaft-in-hole assembly. Considering that traditional approaches to locate the hole by force sensing are time-consuming, we employ 3D vision to estimate the axis pose of the hole. Thus, robots can locate the target hole in both position and orientation and enable the shaft to move into the hole along the axis orientation. In our method, first, the raw point cloud of a hole is processed to acquire the keypoints. Then, a coarse axis is extracted according to the geometric constraints between the surface normals and axis. Lastly, axis refinement is performed on the coarse axis to achieve higher precision. Practical experiments verified the effectiveness of the axis pose estimation. The assembly strategy composed of axis pose estimation and admittance control was effectively applied to the robotic shaft-in-hole assembly.
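The geometric constraint exploited for the coarse axis, namely that every wall normal is perpendicular to the hole axis, has a compact linear-algebra form. A sketch (not the paper's code) is:

```python
# Coarse cylinder-axis estimate: the axis is the direction least spanned
# by the stacked surface normals, i.e. the smallest right-singular vector.
import numpy as np

def coarse_axis(normals):
    """normals: (N, 3) array of unit surface normals from the hole wall."""
    N = np.asarray(normals, dtype=float)
    _, _, Vt = np.linalg.svd(N)
    axis = Vt[-1]
    return axis / np.linalg.norm(axis)
```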
35

Schmitt, M., and M. Recla. "COMPARISON OF SINGLE-IMAGE URBAN HEIGHT RECONSTRUCTION FROM OPTICAL AND SAR DATA." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 1139–44. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-1139-2022.

Abstract:
Deep learning-based depth estimation has become an important topic in recent years, not only in the field of computer vision. Also in the context of remote sensing, scientists started a few years ago to adapt or develop suitable approaches to realize a reconstruction of the Earth's surface without requiring several images. There are many reasons for this: First, of course, the aspect of general economization, since especially high-resolution satellite images are often accompanied by high data acquisition costs. In addition, there is also the desire to be able to acquire high-quality geoinformation as quickly as possible in time-critical cases – for example, the provision of up-to-date maps for emergency forces in disaster scenarios. Finally, a reconstruction of topography based only on single images can also provide important approximate values for the classic multi-image methods. For example, various processing steps in a classical InSAR process chain require a rough knowledge of the Earth's surface in order to achieve the most accurate and reliable results. In this paper, we review the developments documented in the remote sensing literature so far. Using an established neural network architecture, we produce example results for both very-high-resolution SAR and optical imagery. The comparison shows that SAR-based single-image height reconstruction seems to bear an even greater potential than single-image height reconstruction from optical data.
36

Lei, Lei, Dongli Song, Zhendong Liu, Xiao Xu, and Zejun Zheng. "Displacement Identification by Computer Vision for Condition Monitoring of Rail Vehicle Bearings." Sensors 21, no. 6 (March 17, 2021): 2100. http://dx.doi.org/10.3390/s21062100.

Abstract:
Bearings of rail vehicles bear various dynamic forces. Any fault in a bearing seriously threatens running safety. For fault diagnosis, vibration and temperature measured from the bogie and acoustic signals measured from the trackside are often used. However, installing additional sensing devices on the bogie increases manufacturing cost, while trackside monitoring is susceptible to ambient noise. In other applications, structural displacement measurement based on computer vision is widely applied for deflection measurement and damage identification of bridges. This article proposes to monitor the health condition of rail vehicle bearings by detecting the displacement of bolts on the end cap of the bearing box. This study is performed on an experimental platform of bearing systems. The displacement is monitored by computer vision, which images the real-time displacement of the bolts. The health condition of the bearings is reflected by the amplitude of the displacement detected with a phase correlation method, which is studied separately by simulation. To improve the calculation rate, the computer vision focuses locally on three bolts rather than the whole image. The displacement amplitudes of the bearing system in the vertical direction are derived by comparing the correlations of the images' gray-level co-occurrence matrices (GLCM). For verification, the measured displacement is checked against measurements from laser displacement sensors, which shows that the displacement accuracy is 0.05 mm while the calculation rate is improved by 68%. This study also found that the displacement of the bearing system increases with rotational speed and decreases with static load.
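The paper's GLCM-based correlation comparison is not reproduced here, but standard FFT phase correlation, one common form of the phase correlation method the abstract names, can be sketched for a bolt ROI as follows.

```python
# FFT phase correlation between two ROI crops of consecutive frames.
import numpy as np

def phase_correlation_shift(roi_a, roi_b):
    """Return the (dy, dx) integer shift best aligning roi_b to roi_a."""
    R = np.fft.fft2(roi_a) * np.conj(np.fft.fft2(roi_b))
    R /= np.maximum(np.abs(R), 1e-12)          # keep phase information only
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > roi_a.shape[0] // 2:               # map wrap-around to signed shifts
        dy -= roi_a.shape[0]
    if dx > roi_a.shape[1] // 2:
        dx -= roi_a.shape[1]
    return dy, dx
```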
37

Sabieleish, Muhannad, Katarzyna Heryan, Axel Boese, Christian Hansen, Michael Friebe, and Alfredo Illanes. "Study of needle punctures into soft tissue through audio and force sensing: can audio be a simple alternative for needle guidance?" International Journal of Computer Assisted Radiology and Surgery 16, no. 10 (October 2021): 1683–97. http://dx.doi.org/10.1007/s11548-021-02479-x.

Abstract:
Purpose: Percutaneous needle insertion is one of the most common minimally invasive procedures. The clinician's experience and medical imaging support are essential to the procedure's safety. However, imaging comes with inaccuracies due to artifacts, and therefore sensor-based solutions have been proposed to improve accuracy. However, sensors are usually embedded in the needle tip, leading to design limitations. A novel concept was proposed for capturing tip-tissue interaction information through audio sensing, showing promising results for needle guidance. This work demonstrates that the audio approach can provide important puncture information by comparing audio and force signal dynamics during insertion. Methods: An experimental setup for inserting a needle into soft tissue was prepared. Audio and force signals were synchronously recorded at four different insertion velocities, and a dataset of 200 recordings was acquired. Indicators related to different aspects of the force and audio were compared through signal-to-signal and event-to-event correlation analysis. Results: High signal-to-signal correlations between force and audio indicators were obtained regardless of the insertion velocity. The force curvature indicator showed the best correlation with audio, with more than 70% of the correlations higher than 0.6. The event-to-event correlation analysis shows that a puncture event in the force signal is generally identifiable in audio and that their intensities are firmly related. Conclusions: Audio contains valuable information for monitoring needle tip/tissue interaction. Significant dynamics obtained from a well-known sensor such as force can also be extracted from audio, regardless of insertion velocity.
38

Lee, Geonu, Kimin Yun, and Jungchan Cho. "Occluded Pedestrian-Attribute Recognition for Video Sensors Using Group Sparsity." Sensors 22, no. 17 (September 1, 2022): 6626. http://dx.doi.org/10.3390/s22176626.

Abstract:
Pedestrians are often obstructed by other objects or people in real-world vision sensors. These obstacles make pedestrian-attribute recognition (PAR) difficult; hence, occlusion processing for visual sensing is a key issue in PAR. To address this problem, we first formulate the identification of non-occluded frames as temporal attention based on the sparsity of a crowded video. In other words, a model for PAR is guided to prevent paying attention to the occluded frame. However, we deduced that this approach cannot include a correlation between attributes when occlusion occurs. For example, “boots” and “shoe color” cannot be recognized simultaneously when the foot is invisible. To address the uncorrelated attention issue, we propose a novel temporal-attention module based on group sparsity. Group sparsity is applied across attention weights in correlated attributes. Accordingly, physically-adjacent pedestrian attributes are grouped, and the attention weights of a group are forced to focus on the same frames. Experimental results indicate that the proposed method achieved 1.18% and 6.21% higher F1-scores than the advanced baseline method on the occlusion samples in DukeMTMC-VideoReID and MARS video-based PAR datasets, respectively.
39

Chou, Jui-Sheng, and Chia-Hsuan Liu. "Automated Sensing System for Real-Time Recognition of Trucks in River Dredging Areas Using Computer Vision and Convolutional Deep Learning." Sensors 21, no. 2 (January 14, 2021): 555. http://dx.doi.org/10.3390/s21020555.

Abstract:
Sand theft or illegal mining in river dredging areas has been a problem in recent decades. For this reason, increasing the use of artificial intelligence in dredging areas, building automated monitoring systems, and reducing human involvement can effectively deter crime and lighten the workload of security guards. In this investigation, a smart dredging construction site system was developed using automated techniques arranged to suit various areas. The aim in the initial period of the smart dredging construction was to automate the audit work at the control point, which manages trucks in river dredging areas. Images of dump trucks entering the control point were captured using monitoring equipment in the construction area. The obtained images and the deep learning technique YOLOv3 were used to detect the positions of vehicle license plates. Framed images of the vehicle license plates were captured and used as input to an image classification model, C-CNN-L3, to identify the number of characters on the license plate. Based on the classification results, the images of the vehicle license plates were transmitted to a text recognition model, R-CNN-L3, corresponding to the characters of the license plate. Finally, the models of each stage were integrated into a real-time truck license plate recognition (TLPR) system; the single-character recognition rate was 97.59%, the overall recognition rate was 93.73%, and the speed was 0.3271 s/image. The TLPR system reduces the labor and time spent identifying license plates, effectively reducing the probability of crime and increasing the transparency, automation, and efficiency of the frontline personnel's work. The TLPR is the first step toward an automated operation to manage trucks at the control point, and the subsequent development of system functions can advance dredging operations toward the goal of a smart construction site. By providing a vehicle license plate recognition system intended to facilitate intelligent and highly efficient management for dredging-related departments, this paper contributes to the current body of knowledge by presenting an objective approach for the TLPR system.
40

Massari, Luca, Giulia Fransvea, Jessica D’Abbraccio, Mariangela Filosa, Giuseppe Terruso, Andrea Aliperta, Giacomo D’Alesio, et al. "Functional mimicry of Ruffini receptors with fibre Bragg gratings and deep neural networks enables a bio-inspired large-area tactile-sensitive skin." Nature Machine Intelligence 4, no. 5 (May 2022): 425–35. http://dx.doi.org/10.1038/s42256-022-00487-3.

Abstract:
Collaborative robots are expected to physically interact with humans in daily living and the workplace, including industrial and healthcare settings. A key related enabling technology is tactile sensing, which currently requires addressing the outstanding scientific challenge to simultaneously detect contact location and intensity by means of soft conformable artificial skins adapting over large areas to the complex curved geometries of robot embodiments. In this work, the development of a large-area sensitive soft skin with a curved geometry is presented, allowing for robot total-body coverage through modular patches. The biomimetic skin consists of a soft polymeric matrix, resembling a human forearm, embedded with photonic fibre Bragg grating transducers, which partially mimics Ruffini mechanoreceptor functionality with diffuse, overlapping receptive fields. A convolutional neural network deep learning algorithm and a multigrid neuron integration process were implemented to decode the fibre Bragg grating sensor outputs for inference of contact force magnitude and localization through the skin surface. Results of 35 mN (interquartile range 56 mN) and 3.2 mm (interquartile range 2.3 mm) median errors were achieved for force and localization predictions, respectively. Demonstrations with an anthropomorphic arm pave the way towards artificial intelligence based integrated skins enabling safe human–robot cooperation via machine intelligence.
41

Teixidó, Pedro, Juan Antonio Gómez-Galán, Rafael Caballero, Francisco J. Pérez-Grau, José M. Hinojo-Montero, Fernando Muñoz-Chavero, and Juan Aponte. "Secured Perimeter with Electromagnetic Detection and Tracking with Drone Embedded and Static Cameras." Sensors 21, no. 21 (November 6, 2021): 7379. http://dx.doi.org/10.3390/s21217379.

Abstract:
Perimeter detection systems detect intruders penetrating protected areas, but modern solutions require the combination of smart detectors, information networks, and controlling software to reduce false alarms and extend detection range. The current solutions available to secure a perimeter (infrared and motion sensors, fiber optics, cameras, radar, among others) have several problems, such as sensitivity to weather conditions or the high false alarm rate that forces the need for human supervision. The system presented in this paper overcomes these problems by combining a perimeter security system based on CEMF (control of electromagnetic fields) sensing technology with a set of video cameras that remain powered off until an event is detected. An autonomous drone is also informed of where the event was initially detected; it then flies, guided by computer vision, to follow the intruder for as long as they remain within the perimeter. This paper gives a detailed view of how all three components cooperate in harmony to protect a perimeter effectively, without having to worry about false alarms, blinding due to weather conditions, clearance areas, or privacy issues. The system also provides extra information on where the intruder is or has been at all times, even if they mingle with other people during the attack.
42

Iacobescu, Ciprian, Gabriel Oltean, Camelia Florea, and Bogdan Burtea. "Unified InterPlanetary Smart Parking Network for Maximum End-User Flexibility." Sensors 22, no. 1 (December 29, 2021): 221. http://dx.doi.org/10.3390/s22010221.

Abstract:
Technological breakthroughs have offered innovative solutions for smart parking systems, independent of the use of computer vision, smart sensors, gap sensing, and other variations. We now have a high degree of confidence in spot classification or object detection at the parking level. The only thing missing is end-user satisfaction, as users are forced to use multiple interfaces to find a parking spot in a geographical area. We propose a trustless federated model that will add a layer of abstraction between the technology and the human interface to facilitate user adoption and responsible data acquisition by leveraging a federated identity protocol based on Zero Knowledge Cryptography. No central authority is needed for the model to work; thus, it is trustless. Chained trust relationships generate a graph of trustworthiness, which is necessary to bridge the gap from one smart parking program to an intelligent system that enables smart cities. With the help of Zero Knowledge Cryptography, end users can attain a high degree of mobility and anonymity while using a diverse array of service providers. From an investor’s standpoint, the usage of IPFS (InterPlanetary File System) lowers operational costs, increases service resilience, and decentralizes the network of smart parking solutions. A peer-to-peer content addressing system ensures that the data are moved close to the users without deploying expensive cloud-based infrastructure. The result is a macro system with independent actors that feed each other data and expose information in a common protocol. Different client implementations can offer the same experience, even though the parking providers use different technologies. We call this InterPlanetary Smart Parking Architecture NOW—IPSPAN.
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Zhen, Pan Fu, Bing-Ting Wei, Jie Wang, An-Long Li, Ming-Jun Li, and Gui-Bin Bian. "An automatic drug injection device with spatial micro-force perception guided by an microscopic image for robot-assisted ophthalmic surgery." Frontiers in Robotics and AI 9 (August 3, 2022). http://dx.doi.org/10.3389/frobt.2022.913930.

Full text
Abstract:
Retinal vein injection guided by microscopic images is an innovative procedure for treating retinal vein occlusion. However, retinal tissue is complex, fine, and fragile, and the operating scale and forces involved are small. The limits of a surgeon's manual precision and force perception make it difficult to perform precise, stable drug injections on the retina, even in a magnified visual field. In this paper, a 3-DOF automatic drug injection mechanism was designed for microscope-image-guided, robot-assisted needle delivery and automatic drug injection. In addition, a real-time three-dimensional micro-force-sensing method for robot-assisted retinal vein injection is proposed. Three FBG sensors are arranged in a circular array on the hollow outer wall of a nested nickel-titanium needle tube, enabling real-time sensing of the contact force between the intraoperative instrument and the blood vessel. Experimental data from 15 groups of porcine retinal veins with diameters of 100–200 μm showed that the piercing force between the surgical instrument and the vessel is 5.95–12.97 mN, with an average of 9.98 mN. A further 20 groups of measurements on chicken embryo blood vessels with diameters of 150–500 μm showed a piercing force of 4.02–23.4 mN, with an average of 12.05 mN.
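The usual way to turn the wavelength shifts of a small FBG array into a force vector is a linear calibration, F = K Δλ, with K identified against a reference force sensor. The sketch below shows that mapping; the calibration matrix values are invented for illustration and are not from the paper.

```python
# Hedged sketch of recovering a 3-D contact force from the wavelength
# shifts of three FBG sensors arranged in a circular array, as the
# abstract describes. The calibration matrix K is hypothetical; in
# practice it is identified by least squares against a reference sensor.
import numpy as np

# Hypothetical 3x3 calibration matrix (mN per nm of Bragg wavelength shift).
K = np.array([[ 80.0, -12.0,  -9.0],
              [-11.0,  78.0, -10.0],
              [ 25.0,  24.0,  26.0]])

def force_from_shifts(dlam_nm: np.ndarray) -> np.ndarray:
    """Map the three Bragg wavelength shifts (nm) to a force vector (mN)."""
    return K @ dlam_nm

dlam = np.array([0.04, 0.05, 0.06])  # example shifts in nm
f = force_from_shifts(dlam)
print(f, "|F| =", np.linalg.norm(f), "mN")  # magnitude in the mN range
```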
APA, Harvard, Vancouver, ISO, and other styles
44

Abeln, B. G. S., V. F. Van Dijk, J. C. Balt, M. C. E. F. Wijffels, and L. V. A. Boersma. "Initial experience with dielectric-based local tissue assessment for catheter ablation." European Heart Journal 43, Supplement_2 (October 1, 2022). http://dx.doi.org/10.1093/eurheartj/ehac544.456.

Full text
Abstract:
Background: Dielectric imaging systems can be used to guide catheter ablation for cardiac arrhythmia. Currently, the KODEX-EPD system can acquire high-resolution anatomical data for electroanatomical mapping. Future software versions of this system enable tissue assessment utilizing local dielectric sensing. These dielectric-based tissue assessments (KODEX Vision) enable new features, including wall thickness measurement, catheter-tissue contact assessment, and ablation lesion assessment. Purpose: To gain insight into the dielectric-based tissue assessment features. Methods: The KODEX-EPD system was used to perform repeat ablations in patients with recurrent atrial fibrillation after pulmonary vein isolation. A primary system running the current software version (1.4.8) was used by the operator. Ablation was performed with an irrigated radiofrequency catheter without contact-force sensing, with power set at 30–35 W and ablation times of 30–60 s. A secondary system, blinded to the operator, was loaded with beta software (1.5.0) to enable the KODEX Vision features. Only successful local tissue assessments are presented in this abstract. All patients were enrolled in a prospective registry study approved by the local ethics committee. Results: Electroanatomical mapping in 25 patients (age 65±8 years, 84% male, LAVI 31±9 ml/m²) revealed electrical reconnection in 66 of 100 pulmonary veins. A total of 308 radiofrequency applications were used to re-isolate the veins and create additional lesions (superior vena cava isolation in 6 patients, box lesion in 1). Wall thickness at the sites of ablation ranged from 1.3 to 3.9 mm. Tissue contact assessment at the start of RF applications indicated "touch" in 77% of results, "no touch" in 17.5%, and "high touch" in 5.5%. The Tissue Response Viewer indicated a low dielectric response in 18.5% and a high dielectric response in 81.5% of results. A high dielectric response was associated with lower wall thickness (OR: 0.86, 95% CI [0.76–0.97], p=0.014). Conclusion: This preliminary evaluation underlines the potential of the dielectric-based tissue assessment features. Wall thickness measurement, catheter-tissue contact assessment, and ablation lesion assessment seem feasible with the beta software. Further evaluation of the feasibility, accuracy, and clinical impact of the finalized software is warranted. Funding: None.
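The reported association is easiest to read as the output of a logistic regression of dielectric response on wall thickness, although the abstract does not state the model; under that assumption:

```latex
% Assuming (not stated in the abstract) a logistic regression of
% dielectric response on wall thickness t in mm:
\[
  \log\frac{P(\text{high response})}{1-P(\text{high response})}
    = \beta_0 + \beta_1 t,
  \qquad
  \mathrm{OR} = e^{\beta_1} = 0.86,
\]
% so each additional millimetre of wall thickness multiplies the odds of
% a high dielectric response by 0.86, i.e. a 14\% reduction in the odds.
```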
APA, Harvard, Vancouver, ISO, and other styles
45

Jethani, Suneel. "Lists, Spatial Practice and Assistive Technologies for the Blind." M/C Journal 15, no. 5 (October 12, 2012). http://dx.doi.org/10.5204/mcj.558.

Full text
Abstract:
Introduction

Supermarkets are functionally challenging environments for people with vision impairments. A supermarket is likely to house an average of 45,000 products in a median floor-space of 4,529 square meters, and many visually impaired people are unable to shop without assistance, which greatly impedes personal independence (Nicholson et al.). The task of selecting goods in a supermarket is an "activity that is expressive of agency, identity and creativity" (Sutherland) from which many vision-impaired persons are excluded. In response to this, a number of proof-of-concept (demonstrating feasibility) and prototype assistive technologies are being developed which aim to use smart phones as potential sensorial aides for vision-impaired persons. In this paper, I discuss two such prototypic technologies, Shop Talk and BlindShopping. I engage with this issue's list theme by suggesting that, on the one hand, list making is a uniquely human activity that demonstrates our need for order and reliance on memory, reveals our idiosyncrasies, and provides insights into our private lives (Keaggy 12). On the other hand, lists feature in the creation of spatial inventories that represent physical environments (Perec 3-4, 9-10). The use of lists in the architecture of assistive technologies for shopping illuminates the interaction between these two modalities of list use, where items contained in a list are not only textual but also cartographic elements that link the material and immaterial in space and time (Haber 63). I argue that despite the emancipatory potential of assistive shopping technologies, their efficacy in practical situations is highly dependent on the extent to which they can integrate a number of lists to produce representations of space that are meaningful for vision-impaired users. I suggest that the extent to which these prototypes may become commercially viable, widely adopted technologies is heavily reliant upon commercial and institutional infrastructures, data sources, and regulation. Thus, their design, manufacture, and adoption potential are shaped by the extent to which certain data inventories are accessible and made interoperable. To overcome such constraints, it is important to better understand the "spatial syntax" associated with the shopping task for a vision-impaired person; that is, the connected ordering of real and virtual spatial elements that results in a supermarket as a knowable space within which an assisted "spatial practice" of shopping can occur (Kellerman 148, Lefebvre 16).

In what follows, I use the concept of lists to discuss the production of supermarket-space in relation to the enabling and disabling potentials of assistive technologies. First, I discuss mobile digital technologies relative to disability and impairment and describe how the shopping task produces a disabling spatial practice. Second, I present a case study showing how assistive technologies function in aiding vision-impaired users in completing the task of supermarket shopping. Third, I discuss various factors that may inhibit the liberating potential of technology-assisted shopping by vision-impaired people.

Addressing Shopping as a Disabling Spatial Practice

Consider how a shopping list might inform one's experience of supermarket space. The way shopping lists are written demonstrates the variability in the logic that governs list writing.
As Bill Keaggy demonstrates in his found-shopping-list Web project and subsequent book, Milk, Eggs, Vodka, a shopping list may be written on a variety of materials and arranged in a number of orientations, and the writer may use differing textual attributes, such as size or underlining, to show emphasis. The writer may use longhand, abbreviate, write neatly, scribble, and use an array of alternate spelling and naming conventions. For example, items may be listed based on knowledge of the location of products, they may be arranged on a list as the result of an inventory of a pantry or fridge, or they may be copied in the order they appear in a recipe. Whilst shopping, some may follow the order of their list strictly, crossing back and forth between aisles. Some may work through their list item by item, perhaps forward-scanning to achieve greater economies of time and space. As a person shops, their memory may be stimulated by visual cues reminding them of products they need that may not be included on their list. For the vision impaired, this task is near impossible to complete without the assistance of a relative, friend, agency volunteer, or store employee. Such forms of assistance are often unsatisfactory, as delays may be caused by the unavailability of an assistant, or the assistant may have limited literacy, knowledge, or patience to adequately meet the shopper's needs. Home delivery services, though readily available, impede personal independence (Nicholson et al.).

Katie Ellis and Mike Kent argue that "an impairment becomes a disability due to the impact of prevailing ableist social structures" (3). It can be said, then, that supermarkets function as a disability-producing space for the vision-impaired shopper. For the vision impaired, a supermarket is a "hegemonic modern visual infrastructure" where, for example, merchandisers may reposition items regularly to induce customers to explore areas of the shop that they would not usually visit, a move which adds to the difficulty faced by customers with impaired vision who work on the assumption that items remain where they usually are (Schillmeier 161).

In addressing this issue, much emphasis has been placed on the potential of mobile communications technologies to afford vision-impaired users greater mobility and flexibility (Jolley 27). However, as Gerard Goggin argues, the adoption of mobile communication technologies has not necessarily "gone hand in hand with new personal and collective possibilities", given the limited access to standard features, even if the device is text-to-speech enabled (98). Issues with Digital Rights Management (DRM) limit the way a device accesses and reproduces information, and confusion over whether audio rights are needed to convert text to speech impedes the accessibility of mobile communications technologies for vision-impaired users (Ellis and Kent 136). Accessibility and functionality issues like these arise because the needs, desires, and expectations of the visually impaired as a user group are considered as an afterthought rather than as a significant factor in the early phases of design and prototyping (Goggin 89). Thus, the development of assistive technologies for the vision impaired has been left to third parties who must adapt their solutions to fit within certain technical parameters. It is valuable to consider what is involved in the task of shopping in order to appreciate the considerations that must be made in the design of assistive technologies intended for shopping.
Shopping generally consists of five sub-tasks: travelling to the store; finding items in-store; paying for and bagging items at the register; exiting the store and getting home; and, often overlooked, putting items away once at home. In this process supermarkets exhibit a "trichotomous spatial ontology" consisting of the locomotor space through which a shopper moves around the store, the haptic space in the immediate vicinity of the shopper, and the search space where individual products are located (Nicholson et al.). In completing these tasks, a shopper constantly moves through and switches between all three of these spaces. In the next section I examine how assistive technologies function in producing supermarkets as both enabling and disabling spaces for the vision impaired.

Assistive Technologies for Vision Impaired Shoppers

Jason Farman (43) and Adriana de Souza e Silva both argue that in many ways spaces have always acted as information interfaces where data of all types can reside. Global Positioning System (GPS), Radio Frequency Identification (RFID), and Quick Response (QR) codes all allow practically every spatial encounter to be an encounter with information. Site-specific and location-aware technologies address the desire for meaningful representations of space for use in everyday situations by the vision impaired. Further, the possibility of an "always-on" connection to spatial information via a mobile phone with WiFi or 3G connections transforms spatial experience by "enfolding remote [and latent] contexts inside the present context" (de Souza e Silva).

A range of GPS navigation systems adapted for vision-impaired users are currently on the market. Typically, these systems convert GPS information into text-to-speech instructions and are either standalone devices, such as the Trekker Breeze, or they use the compass, accelerometer, and 3G or WiFi functions found on most smart phones, such as Loadstone. Whilst both these products are adequate for guiding a vision-impaired user from their home to a supermarket, there are significant differences in their interfaces and data architectures. Trekker Breeze is a standalone hardware device that produces talking menus, maps, and GPS information. While its navigation functionality relies on a worldwide radio-navigation system that uses a constellation of 24 satellites to triangulate one's position (May and LaPierre 263-64), its map and text-to-speech functionality relies on data on a DVD provided with the unit. Loadstone is an open-source software system for Nokia devices that has been developed within the vision-impaired community. Loadstone is built on GNU General Public License (GPL) software and is developed from private and user-based funding; this overcomes Trekker Breeze's reliance on the trading policies and pricing models of the few global vendors of satellite navigation data. Both products have significant shortcomings when viewed in the broader context of the five shopping sub-tasks described above. Trekker Breeze and Loadstone require that additional devices be connected: in the case of Trekker Breeze a tactile keypad, and with Loadstone an aftermarket screen reader. To function optimally, Trekker Breeze requires that routes be pre-recorded and, according to a review conducted by the American Foundation for the Blind, it requires a 30-minute warm-up time to properly orient itself.
Both Trekker Breeze and Loadstone allow users to create and share Points of Interest (POI) databases showing the location of various places along a given route. Non-standard or duplicated user-generated content in POI databases may, however, have a negative effect on usability (Ellis and Kent 2). Furthermore, GPS-based navigation systems are accurate to approximately ten metres, which means that users must rely on their own mobility skills when required to change direction or stop for traffic. This issue with GPS accuracy is more pronounced when a vision-impaired user approaches a supermarket, where they are likely to encounter environmental hazards more frequently, along with both pedestrian and vehicular traffic in greater density. Here the relations between space defined, and spaces poorly defined or undefined, by the GPS device interact to produce the supermarket surrounds as a disabling space (Galloway).

Prototype Systems for Supermarket Navigation and Product Selection

In the discussion to follow, I look at two prototype systems using QR codes and RFID that are designed to be used in-store by vision-impaired shoppers. Shop Talk is a proof-of-concept system developed by researchers at Utah State University that uses synthetic verbal route directions to assist vision-impaired shoppers with supermarket navigation, product search, and selection (Nicholson et al.). Its hardware consists of a portable computational unit, a numeric keypad, a wireless barcode scanner and base station, headphones for the user to receive the synthetic speech instructions, a USB hub to connect all the components, and a backpack to carry them (with the exception of the barcode scanner), slightly modified with a plastic stabiliser to assist in correct positioning. Shop Talk represents the supermarket environment using two data structures. The first comprises two elements: a topological map of locomotor space that allows directional labels of "left," "right," and "forward" to be added to the supermarket floor plan; and, for navigation of haptic space, the supermarket inventory management system, which is used to create verbal descriptions of product information. The second data structure is a Barcode Connectivity Matrix (BCM), which associates each shelf barcode with several pieces of information, such as aisle, aisle side, section, shelf, position, Universal Product Code (UPC) barcode, product description, and price. Nicholson et al. suggest that one of their "most immediate objectives for future work is to migrate the system to a more conventional mobile platform" such as a smart phone (see Mobile Shopping).

The Personalisable Interactions with Resources on AMI-Enabled Mobile Dynamic Environments (PRIAmIDE) research group at the University of Deusto is also approaching Ambient Assisted Living (AAL) by exploring the smart phone's sensing, communication, computing, and storage potential. As part of their work, the prototype system BlindShopping was developed to address the issue of assisted shopping using entirely off-the-shelf technology with minimal environmental adjustments, allowing users to navigate the store and to search, browse, and select products (López-de-Ipiña et al. 34). BlindShopping's architecture is based on three components. Firstly, a navigation system provides the user with synthetic verbal instructions via headphones connected to the smart phone, in order to guide them around the store.
This requires an RFID reader to be attached to the tip of the user's white cane and road-marking-like RFID tag lines to be distributed throughout the aisles. A smartphone application processes the RFID data received via Bluetooth and generates the verbal navigation commands. Secondly, products are recognised by pointing a QR-code-reader-enabled smart phone at an embossed code located on a shelf. Thirdly, the system is managed by a Rich Internet Application (RIA) interface, which operates via a Web browser and is used to register the RFID tags situated in the aisles and the QR codes located on shelves (López-de-Ipiña et al. 37-38). A typical use scenario for BlindShopping involves a user activating the system by tracing an "L" on the screen or issuing the "Location" voice command, which activates the supermarket navigation system; this then asks the user either to touch an RFID floor marking with their cane or to scan a QR code on a nearby shelf to orient the system. The application then asks the user to dictate the product or category of product that they wish to locate. The smart phone maintains a continuous Bluetooth connection with the RFID reader to keep track of the user's location at all times. By drawing a "P" or issuing the "Product" voice command, the user can switch the device into product recognition mode, in which the smart phone camera is pointed at an embossed QR code on a shelf to retrieve information about a product, such as manufacturer, name, weight, and price, via synthetic speech (López-de-Ipiña et al. 38-39).

Despite both systems aiming to operate with as little environmental adjustment as possible, and to minimise the extent to which a supermarket would need to allocate infrastructural, administrative, and human resources to implementing assistive technologies for vision-impaired shoppers, there will undoubtedly be significant establishment and maintenance costs associated with the adoption of production versions of systems resembling either prototype described in this paper. First, as both systems rely on data obtained from a server by invoking Web services, supermarkets would need to provide in-store WiFi. Further, both systems' dependence on store inventory data means that commercial versions of either system are likely to be supermarket-specific or exclusive, given the policies in place that forbid third-party access to inventory systems containing pricing information. Second, a design assumption of both prototypes is that the shopping task ends with the user arriving at home; this overlooks the important task of being able to recognise products in order to put them away or to use them at a later time.

The BCM and QR product-recognition components of the respective prototype systems associate information with products in order to assist users in the product search and selection sub-tasks. However, information such as use-by dates, discount offers, country of manufacture, country of manufacturer's origin, nutritional information, and the labelling of products as Halal, Kosher, containing alcohol, nuts, gluten, lactose, phenylalanine, and so on, creates further challenges for how different data sources are managed within the devices' software architecture. The reliance of both systems on existing smartphone technology is also problematic. Changes in the production and uptake of mobile communication devices, and the software that they operate on, occur rapidly.
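As a concrete illustration of the kind of inventory record these systems coordinate, the following is a minimal sketch of the Barcode Connectivity Matrix described above. The field names follow the essay's description; the class, helper function, and example values are our own and purely illustrative.

```python
# Minimal sketch of the Barcode Connectivity Matrix (BCM) described for
# Shop Talk: each shelf barcode keys a record of location and product
# data. Field names follow the essay; the implementation is illustrative.
from dataclasses import dataclass

@dataclass
class BCMEntry:
    aisle: int
    aisle_side: str   # e.g. "left" or "right"
    section: int
    shelf: int
    position: int
    upc: str          # Universal Product Code of the product
    description: str
    price: float

bcm: dict[str, BCMEntry] = {
    "0012345": BCMEntry(aisle=4, aisle_side="left", section=2, shelf=3,
                        position=11, upc="036000291452",
                        description="whole milk 1L", price=1.85),
}

def describe(shelf_barcode: str) -> str:
    """Build a verbal description of where a product is and what it costs."""
    e = bcm[shelf_barcode]
    return (f"aisle {e.aisle}, {e.aisle_side} side, section {e.section}, "
            f"shelf {e.shelf}: {e.description}, ${e.price:.2f}")

print(describe("0012345"))
```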
Moreover, once a retail space has been fitted out with the instrumentation needed to accommodate a particular system, that system is unlikely to be able to cater to the requirement for frequent upgrades, as built environments are less flexible in the upgrading of their technological infrastructure (Kellerman 148). This sets up a scenario in which the supermarket may persist as a disabling space, owing to a gap between the functional capacities of applications designed for mobile communication devices and the environments in which they are to be used.

Lists and Disabling Spatial Practice

The development of, and provision of access to, assistive technologies and the data they rely upon is a commercial issue (Ellis and Kent 7). The use of assistive technologies in supermarket-spaces that rely on the inter-functional coordination of multiple inventories may have the unintended effect of excluding people with disabilities from access to legitimate content (Ellis and Kent 7). With de Certeau, we can ask of supermarket-space: "What spatial practices correspond, in the area where discipline is manipulated, to these apparatuses that produce a disciplinary space?" (96).

In designing assistive technologies such as those discussed in this paper, developers must strive to achieve integration across multiple data inventories. Software architectures must be optimised to overcome issues relating to intellectual property, cross-platform access, standardisation, fidelity, potential duplication, and mass storage. This need for "cross sectioning," however, "merely adds to the muddle" (Lefebvre 8). This is a predicament that only intensifies as space and objects in space become increasingly "representable" (Galloway), and as the impetus for the project of spatial politics for the vision impaired moves beyond representation to centre on access and meaning-making.

Conclusion

Supermarkets act as sites of hegemony, resistance, difference, and transformation, where the vision impaired and their allies resist the "repressive socialization of impaired bodies" through their own social movements relating to environmental accessibility and the technology-assisted spatial practice of shopping (Gleeson 129). It is undeniable that the prototype technologies described in this paper, and those like them, have a great deal of emancipatory potential. However, it should be understood that these devices produce representations of supermarket-space as a simulation within a framework that attempts to mimic the real, and these representations are pre-determined by the industrial, technological, and regulatory forces that govern their production (Lefebvre 8). Thus, the potential of assistive technologies is dependent upon a range of constraints relating to data accessibility, and upon the interaction of various kinds of lists across the geographic area that surrounds the supermarket, the locomotor, haptic, and search spaces of the supermarket, the home-space, and the internal spaces of a shopper's imaginary. These interactions contribute to the reproduction of disability in supermarkets through the use of assistive shopping technologies. The ways in which people make and read shopping lists complicate the relations between supermarket-space as location data and product inventories, and supermarket-space as it is intuited and experienced by a shopper (Sutherland).
Not only should we be creating inventories of supermarket locomotor, haptic, and search spaces; the attention of developers working in this area of assistive technologies should also look beyond the challenges of spatial representation towards issues of interoperability and expanded access to spatial inventory databases, and to data within and beyond supermarket-space.

References

De Certeau, Michel. The Practice of Everyday Life. Berkeley: University of California Press, 1984.
De Souza e Silva, Adriana. "From Cyber to Hybrid: Mobile Technologies as Interfaces of Hybrid Spaces." Space and Culture 9.3 (2006): 261-78.
Ellis, Katie, and Mike Kent. Disability and New Media. New York: Routledge, 2011.
Farman, Jason. Mobile Interface Theory: Embodied Space and Locative Media. New York: Routledge, 2012.
Galloway, Alexander. "Are Some Things Unrepresentable?" Theory, Culture and Society 28 (2011): 85-102.
Gleeson, Brendan. Geographies of Disability. London: Routledge, 1999.
Goggin, Gerard. Cell Phone Culture: Mobile Technology in Everyday Life. London: Routledge, 2006.
Haber, Alex. "Mapping the Void in Perec's Species of Spaces." Tattered Fragments of the Map. Ed. Adam Katz and Brian Rosa. S.l.: Thelimitsoffun.org, 2009.
Jolley, William M. When the Tide Comes In: Towards Accessible Telecommunications for People with Disabilities in Australia. Sydney: Human Rights and Equal Opportunity Commission, 2003.
Keaggy, Bill. Milk Eggs Vodka: Grocery Lists Lost and Found. Cincinnati, Ohio: HOW Books, 2007.
Kellerman, Aharon. Personal Mobilities. London: Routledge, 2006.
Kleege, Georgia. "Blindness and Visual Culture: An Eyewitness Account." The Disability Studies Reader. 2nd ed. Ed. Lennard J. Davis. New York: Routledge, 2006. 391-98.
Lefebvre, Henri. The Production of Space. Oxford, UK: Blackwell, 1991.
López-de-Ipiña, Diego, Tania Lorido, and Unai López. "Indoor Navigation and Product Recognition for Blind People Assisted Shopping." Ambient Assisted Living. Ed. J. Bravo, R. Hervás, and V. Villarreal. Berlin: Springer-Verlag, 2011. 25-32.
May, Michael, and Charles LaPierre. "Accessible Global Positioning System (GPS) and Related Orientation Technologies." Assistive Technology for Visually Impaired and Blind People. Ed. Marion A. Hersh and Michael A. Johnson. London: Springer-Verlag, 2008. 261-88.
Nicholson, John, Vladimir Kulyukin, and Daniel Coster. "ShopTalk: Independent Blind Shopping through Verbal Route Directions and Barcode Scans." The Open Rehabilitation Journal 2.1 (2009): 11-23.
Perec, Georges. Species of Spaces and Other Pieces. Trans. and ed. John Sturrock. London: Penguin Books, 1997.
Schillmeier, Michael W. J. Rethinking Disability: Bodies, Senses, and Things. New York: Routledge, 2010.
Sutherland, I. "Mobile Media and the Socio-Technical Protocols of the Supermarket." Australian Journal of Communication 36.1 (2009): 73-84.
APA, Harvard, Vancouver, ISO, and other styles
