Prasad B @Dolby Laboratories, SF: Comments and reviews.
This post contains edited, non-confidential feedback given by Prasad Balasubramanian on the answers he received on GapJumpers.
Design and implement a layered depth of field blur based on Gaussian Blur.
I am looking for an implementation of a layered depth-of-field blur on an image, based on knowledge of its corresponding depth values. If depth values are in the range 0…1 (32-bit float), you may assume that an image pixel closest to you (depth of 1.0) is in focus. You may then define a set of 10 blur kernels (2x2, 3x3, 4x4, …) at a step of 0.1 and, for instance, pick the kernel for an image pixel based on its corresponding depth value.
Depth-of-field blur is used in 2D content to give the impression of 3D. It is a fairly common filter applied to the main camera in games and uses the z (depth) buffer to blur objects that are distant from the focal point.
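One way the layered approach above could be sketched, on the CPU in NumPy for clarity: precompute one blurred copy of the image per layer, then select per pixel by depth. This is an illustrative sketch, not the expected answer; it uses odd kernel sizes (3x3, 5x5, …) rather than the 2x2, 3x3, 4x4 sequence in the prompt, because odd kernels center cleanly on a pixel, and the sigma-per-layer choice is an assumption.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # normalized 2D Gaussian kernel of a given odd size
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    # naive 'same' convolution with edge padding
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, kh - 1 - ph), (pw, kw - 1 - pw)), mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def layered_dof_blur(img, depth, n_layers=10):
    """Blur each pixel with a kernel chosen by its depth.

    depth in [0, 1]; depth == 1.0 (closest) stays in focus,
    smaller depths get progressively larger Gaussian kernels.
    """
    # layer 0 -> in focus (no blur); layers 1..n-1 -> growing kernels
    layers = [img.astype(float)]
    for k in range(1, n_layers):
        size = 2 * k + 1                     # 3, 5, 7, ... (odd for symmetry)
        layers.append(convolve2d(img, gaussian_kernel(size, sigma=k / 2.0)))
    # map depth to layer index: depth 1.0 -> layer 0, depth 0.0 -> layer n-1
    idx = np.clip(((1.0 - depth) * n_layers).astype(int), 0, n_layers - 1)
    stacked = np.stack(layers)               # (n_layers, H, W)
    rows, cols = np.indices(img.shape)
    return stacked[idx, rows, cols]          # per-pixel layer selection
```

On a GPU this maps naturally to one blur pass per layer plus a final gather pass; blending between adjacent layers (rather than a hard index) would reduce banding at layer boundaries.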
Study approaches to automatic logo detection in broadcast video and propose strategies for real-time GPU implementation.
Prepare a research report that compares existing approaches. Evaluate feasibility of real-time implementation on a GPU.
I am interested in your thoughts on how you could take advantage of the temporal redundancy inherent in video applications to track logos once they are detected, and how this approach can be extended to further improve detection accuracy. What are its likely implications for GPU video/texture memory requirements and performance?
Would you also consider implementing scale-space computation on uncompressed HD video using OpenCL or CUDA?
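For reference, the scale-space computation mentioned above can be sketched as a Gaussian pyramid: repeated separable Gaussian smoothing at increasing sigma, downsampling between octaves. This is a CPU sketch in NumPy; the octave/scale parameters are illustrative assumptions (SIFT-style defaults), and a GPU port would map each separable 1D pass onto OpenCL/CUDA kernels per frame.

```python
import numpy as np

def gaussian_1d(sigma):
    # normalized 1D Gaussian truncated at ~3 sigma
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    # separable Gaussian blur: filter rows, then columns
    k = gaussian_1d(sigma)
    pad = len(k) // 2
    def conv_rows(a):
        p = np.pad(a, ((0, 0), (pad, pad)), mode='edge')
        return np.stack([np.convolve(row, k, mode='valid') for row in p])
    return conv_rows(conv_rows(img).T).T

def scale_space(img, n_octaves=3, scales_per_octave=3, sigma0=1.6):
    """Gaussian scale-space pyramid (list of octaves, each a list of scales)."""
    pyramid = []
    base = img.astype(float)
    for _ in range(n_octaves):
        octave = []
        for s in range(scales_per_octave):
            sigma = sigma0 * (2.0 ** (s / scales_per_octave))
            octave.append(smooth(base, sigma))
        pyramid.append(octave)
        base = octave[-1][::2, ::2]   # downsample 2x for the next octave
    return pyramid
```

For 1080p frames the memory cost is one full-resolution buffer per scale in the first octave, halving per octave thereafter, which frames the video/texture memory question above.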
(Answer questions on GapJumpers and receive top quality feedback from industry experts. The complete feedback is confidential only to the applicants.)