
Anti-occlusion Light-Field Optical Flow Estimation Using Light-Field Super-Pixels

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNIP,volume 11367))

Abstract

Optical flow estimation is one of the most important problems in the computer vision community. However, current methods still cannot provide reliable results in occlusion boundary areas. Light field cameras capture hundreds of views in a single shot, so the ambiguity at occlusions can be better analysed using other views. In this paper, we present a novel method for anti-occlusion optical flow estimation in a dynamic light field. We first model the light field superpixel (LFSP) as a slanted plane in 3D. The motion of pixels that are occluded in the central view slice can then be optimized using the un-occluded pixels in other views, so that the optical flow in occlusion boundary areas can be computed reliably. Experimental results on both synthetic and real light fields demonstrate the advantages over state-of-the-art methods and the performance on 4D optical flow computation.
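The slanted-plane model summarized in the abstract can be grounded in a standard result from multi-view geometry: a rigid motion \((\varvec{R},\varvec{t})\) of a plane \(\varvec{n}^\top X = d\) induces a homography between views, which directly yields per-pixel optical flow for every pixel on that plane. The sketch below illustrates this mechanism only; it is not the paper's implementation, and the function name `plane_induced_flow` and all parameter values are illustrative.

```python
import numpy as np

def plane_induced_flow(K, R, t, n, d, pts):
    """Optical flow induced by a rigid motion (R, t) for pixels lying on
    the plane n^T X = d in the reference camera frame.

    K   : (3, 3) camera intrinsics
    R, t: rigid motion (rotation matrix, translation vector)
    n, d: unit plane normal and plane distance
    pts : (N, 2) pixel coordinates in the reference view
    """
    # Plane-induced homography: H = K (R + t n^T / d) K^{-1}
    H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # lift to homogeneous
    warped = (H @ homog.T).T
    warped = warped[:, :2] / warped[:, 2:3]            # back to pixel coords
    return warped - pts                                # flow vectors

# Example: translation along x for a fronto-parallel plane at depth d = 5.
# The induced flow is uniform: fx * tx / d = 500 * 0.1 / 5 = 10 px.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
pts = np.array([[320.0, 240.0], [100.0, 50.0]])
flow = plane_induced_flow(K, R, t, n, 5.0, pts)
```

Because all views of a light field see (parts of) the same slanted plane, the same plane parameters constrain the flow in every view, which is what lets un-occluded pixels in off-center views stand in for pixels occluded in the central view.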

The work was supported in part by NSFC under Grant 61531014.


Notes

  1. The motion parameters \(\varvec{R}_i,\varvec{t}_i\) are constant, while the normal \(\varvec{n}_i^{u,v}\) changes with each view.

  2. The OSF code yields a runtime error when processing the low-resolution data for the “Drawing” scene, so OSF results are omitted for that scene.

References

  1. Adelson, E.H., Bergen, J.R.: The plenoptic function and the elements of early vision. Comput. Models Vis. Process. 1(2), 3–20 (1991)


  2. Geiger, A., Lenz, P., Urtasun, R., Menze, M.: The KITTI vision benchmark suite (2012). http://www.cvlibs.net/datasets/kitti/

  3. Brox, T., Malik, J.: Large displacement optical flow: descriptor matching in variational motion estimation. IEEE T-PAMI 33(3), 500–513 (2011). https://doi.org/10.1109/TPAMI.2010.143


  4. Chen, C., Lin, H., Yu, Z., Kang, S., Yu, J.: Light field stereo matching using bilateral statistics of surface cameras. In: IEEE CVPR, pp. 1518–1525 (2014)


  5. Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M.F.: The lumigraph. In: SIGGRAPH, pp. 43–54. ACM (1996)


  6. Heber, S., Pock, T.: Scene flow estimation from light fields via the preconditioned primal-dual algorithm. In: Jiang, X., Hornegger, J., Koch, R. (eds.) GCPR 2014. LNCS, vol. 8753, pp. 3–14. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11752-2_1


  7. Hog, M., Sabater, N., Guillemot, C.: Super-rays for efficient light field processing. IEEE J-STSP (2017, in press). https://doi.org/10.1109/JSTSP.2017.2738619


  8. Honauer, K., Johannsen, O., Kondermann, D., Goldluecke, B.: 4D light field dataset (2016). http://hci-lightfield.iwr.uni-heidelberg.de/

  9. Honauer, K., Johannsen, O., Kondermann, D., Goldluecke, B.: A dataset and evaluation methodology for depth estimation on 4D light fields. In: Lai, S.-H., Lepetit, V., Nishino, K., Sato, Y. (eds.) ACCV 2016. LNCS, vol. 10113, pp. 19–34. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-54187-7_2


  10. Jeon, H.G., et al.: Accurate depth map estimation from a lenslet light field camera. In: IEEE CVPR, pp. 1547–1555 (2015)


  11. Levoy, M., Hanrahan, P.: Light field rendering. In: SIGGRAPH, pp. 31–42. ACM (1996)


  12. Lytro: The ultimate creative tool for cinema and broadcast (2016). http://blog.lytro.com/?s=cinema

  13. Menze, M., Geiger, A.: Object scene flow for autonomous vehicles. In: IEEE CVPR, pp. 3061–3070, June 2015. https://doi.org/10.1109/CVPR.2015.7298925

  14. Ng, R.: Digital light field photography. Ph.D. thesis, Stanford University (2006)


  15. Pan, L., Dai, Y., Liu, M., Porikli, F.: Simultaneous stereo video deblurring and scene flow estimation. In: IEEE CVPR, July 2017


  16. Srinivasan, P.P., Tao, M.W., Ng, R., Ramamoorthi, R.: Oriented light-field windows for scene flow. In: IEEE ICCV, pp. 3496–3504 (2015)


  17. Tao, M., Hadap, S., Malik, J., Ramamoorthi, R.: Depth from combining defocus and correspondence using light-field cameras. In: IEEE ICCV, pp. 673–680 (2013)


  18. Vogel, C., Schindler, K., Roth, S.: 3D scene flow estimation with a piecewise rigid scene model. IJCV 115(1), 1–28 (2015)


  19. Wang, T.C., Efros, A.A., Ramamoorthi, R.: Depth estimation with occlusion modeling using light-field cameras. IEEE T-PAMI 38(11), 2170–2181 (2016)


  20. Wanner, S., Goldluecke, B.: Variational light field analysis for disparity estimation and super-resolution. IEEE T-PAMI 36(3), 606–619 (2014)


  21. Zhu, H., Wang, Q., Yu, J.: Occlusion-model guided anti-occlusion depth estimation in light field. IEEE J-STSP 11(7), 965–978 (2017). https://doi.org/10.1109/JSTSP.2017.2730818


  22. Zhu, H., Zhang, Q., Wang, Q.: 4D light field superpixel and segmentation. In: IEEE CVPR, pp. 1–8. IEEE (2017)



Author information


Corresponding author

Correspondence to Qing Wang.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhu, H., Sun, X., Zhang, Q., Wang, Q., Robles-Kelly, A., Li, H. (2019). Anti-occlusion Light-Field Optical Flow Estimation Using Light-Field Super-Pixels. In: Carneiro, G., You, S. (eds) Computer Vision – ACCV 2018 Workshops. ACCV 2018. Lecture Notes in Computer Science(), vol 11367. Springer, Cham. https://doi.org/10.1007/978-3-030-21074-8_1


  • DOI: https://doi.org/10.1007/978-3-030-21074-8_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-21073-1

  • Online ISBN: 978-3-030-21074-8

  • eBook Packages: Computer Science, Computer Science (R0)
