Lightweight Deep Learning for Monocular Depth Estimation

Abstract
Monocular depth estimation is a challenging but significant problem in computer vision, with many applications in other areas of study. It aims to predict relative depth from a single input image. In the past, conventional methods were able to produce rough depth estimates, but their accuracy was insufficient. In recent years, the rise of deep convolutional neural networks (DCNNs) has greatly increased the accuracy of depth estimation; however, DCNNs achieve this at the expense of compute resources and time. This motivates the need for more lightweight solutions to the task.
In this thesis, we use recent advances in lightweight network design to reduce complexity, and we use conventional methods to increase the performance of lightweight networks. Specifically, we propose a novel lightweight network architecture that has significantly reduced complexity compared to current methods while still maintaining competitive accuracy. The network is an encoder-decoder architecture that uses DiCE units [47] to reduce the complexity of the encoder, together with a custom-designed decoder based on depthwise-separable convolutions. Furthermore, we propose a novel lightweight self-supervised training framework that leverages conventional methods to remove the need for the pose estimation required by current self-supervised methods. Like current unsupervised and self-supervised methods, our method needs a pair of stereo images during training; however, we take advantage of this requirement to compute a ground-truth approximation, which eliminates the pose estimation step that other self-supervised approaches require. Both our lightweight network and our self-supervised framework reduce the size and complexity of current state-of-the-art methods while maintaining competitive results in their respective areas.
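To make the decoder idea concrete, the following is a minimal illustrative sketch of a depthwise-separable convolution block, not the thesis implementation: a PyTorch-style module is assumed, and the kernel size, normalization, and activation choices are placeholders for illustration.

```python
# Illustrative sketch (assumed PyTorch; not the thesis code): a
# depthwise-separable convolution block of the kind the decoder is
# described as using.
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Factorizes a standard KxK convolution into a depthwise KxK
    convolution (one filter per input channel) followed by a 1x1
    pointwise convolution that mixes channels, reducing parameters
    and multiply-adds roughly by a factor of K*K."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```

The savings come from the factorization: a standard 3x3 convolution costs in_ch * out_ch * 9 weights, while the depthwise plus pointwise pair costs in_ch * 9 + in_ch * out_ch.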
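The ground-truth approximation can likewise be illustrated with a classical stereo matcher: given the rectified stereo pair already required for training, a conventional algorithm can produce an approximate disparity (inverse-depth) target, so no pose network is needed. The use of OpenCV's semi-global block matching and the parameter values below are assumptions for illustration, not necessarily the conventional method used in the thesis.

```python
# Illustrative sketch (assumed OpenCV SGBM; not the thesis pipeline):
# turn the training stereo pair into an approximate disparity target.
import cv2
import numpy as np

def pseudo_ground_truth_disparity(left_gray, right_gray):
    """Compute an approximate disparity map from a rectified stereo
    pair using semi-global block matching, a conventional method."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # must be divisible by 16
        blockSize=5,
    )
    # SGBM returns fixed-point disparities scaled by 16.
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan  # mark invalid / occluded pixels as missing
    return disp
```

Because the matcher runs offline on the training pairs, its cost does not affect inference, where the lightweight network still takes only a single image as input.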