Surface Estimation from Multi-Modal Tactile Data
Abstract
Robotic applications have seen increasing use in healthcare, surgery, and industry. These robots are expected to make physical contact with objects in their environment, which enables tasks such as grasping and manipulation while also providing information about object properties such as shape, texture, and hardness. In an ideal world, a complete model of the environment would be known beforehand, and robots would not need to explore objects and surfaces, since this information would already be available in the world model. In the real world, most environments are unstructured, and robots must operate safely, without harming themselves or surrounding objects, while accounting for environmental uncertainty and building models of the environment and its objects.
To address this, the trend has been to use computer vision to detect objects in the environment. Although computer vision has advanced greatly in this regard, some problems cannot be solved by vision alone. Objects that are occluded, transparent, or lacking rich visual features cannot be detected visually, and properties such as hardness or tactile texture cannot be estimated from vision at all. To this end, we use a bio-inspired tactile sensor, consisting of a compliant structure, a MARG (Magnetic, Angular Rate, and Gravity) sensor, and a pressure sensor, together with a robotic manipulator to explore surfaces under the sole assumption that the general location of the surface is known. This sensing module allows the robotic manipulator to maintain a predetermined angle of approach, which is essential when exploring unseen surfaces. [...]