IITH organized a two-day training session on 'Camera' & 'Computer Vision', conducted by the Qualcomm Multimedia team. It took place on 19th and 20th (today) October, 2016. We were informed through email (as with any other event). You can find the agenda below -
The topic interested me quite a bit; being a camera enthusiast, I thought it would be good to delve into the technicalities of cameras. I must say it didn't disappoint. The first day was quite interesting and the turnout was good as well. It was mostly focused on cameras and their various features, the computing that goes on behind the scenes, the development aspects and, of course, how Qualcomm, as a company, is consistently working on making them better.
There were continuous references to the image-processing courses (which I haven't taken), but thanks to their elegant explanations, I could understand most of it without knowing the jargon. The presentation had a lot of flowcharts, some of which you can find in the attached notes. I couldn't copy everything because, again, it's a tradeoff between listening and taking notes.
The second day was mostly focused on FastCV - a computer vision library developed by Qualcomm - and the Hexagon DSP. FastCV is similar to OpenCV, just a lot faster, as the comparison chart showed. The latter part became highly monotonous and boring because the speaker went too deep into the technical aspects of vector extensions rather than explaining the use cases for beginners. It was followed by demos of some of the multimedia features like corner detection, object tracking, etc.
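To give a flavor of what "corner detection" means, here's a tiny Harris-style corner score in plain Python - my own illustrative sketch, not FastCV or code from the demo. The idea: a corner is a point where the image gradient varies strongly in two directions, so the corner of a bright square scores high, an edge scores negative, and a flat patch scores zero.

```python
def gradients(img):
    """Central-difference gradients (interior pixels only; borders stay 0)."""
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return ix, iy

def harris_score(img, y, x, win=1, k=0.04):
    """Harris response det(M) - k*trace(M)^2 over a small window around (y, x)."""
    ix, iy = gradients(img)
    sxx = sxy = syy = 0.0
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
            sxx += gx * gx
            sxy += gx * gy
            syy += gy * gy
    return sxx * syy - sxy * sxy - k * (sxx + syy) ** 2

# 8x8 test image: a bright 4x4 square in the bottom-right quadrant.
img = [[255.0 if (y >= 4 and x >= 4) else 0.0 for x in range(8)]
       for y in range(8)]

corner = harris_score(img, 4, 4)  # at the square's corner: large positive
edge = harris_score(img, 6, 4)    # along a straight edge: negative
flat = harris_score(img, 2, 2)    # in the flat dark region: zero
```

Real libraries do the same computation vectorized over every pixel and then keep local maxima above a threshold; the per-pixel score above is just the core idea.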
I don't remember all the questions raised during the sessions but here are mine -
Ques. - Nowadays, digital cameras support variable white balance. So how do you achieve that?
Ans. - Variable white balance is a secondary thing and is in fact much easier than auto white balance. In the latter case, you have to find the source of illumination and tune your post-processing to adjust for it, while in the former, you've already been given a probable source. So you can easily negate it to get the corrected image.
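That answer can be sketched in a few lines of Python (my own illustration, not from the talk). With a known illuminant, white balance is just a per-channel rescale; "auto" white balance has to *estimate* the illuminant first - here using the classic gray-world assumption (the average of the scene should be gray):

```python
def apply_wb(pixels, illuminant):
    """Divide each channel by the illuminant, so a neutral patch ends up with equal R=G=B."""
    r, g, b = illuminant
    return [(pr / r, pg / g, pb / b) for (pr, pg, pb) in pixels]

def gray_world_illuminant(pixels):
    """Estimate the illuminant as the per-channel mean (gray-world assumption)."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

# A gray scene lit by a warm (reddish) light: every pixel comes out red-tinted.
scene = [(200.0, 100.0, 100.0), (100.0, 50.0, 50.0), (160.0, 80.0, 80.0)]

# Variable/manual WB: the light source is given, so just negate it.
manual = apply_wb(scene, (2.0, 1.0, 1.0))

# Auto WB: estimate the source first, then correct.
estimate = gray_world_illuminant(scene)
auto = apply_wb(scene, estimate)
```

The hard part of auto white balance is entirely in the estimation step - gray-world is the simplest possible estimator and real pipelines use far more robust ones.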
Ques. - Regarding the motorized camera (as in the Oppo N3): if the subject is moving and the camera needs to track it, how fast is the tracking (i.e., the refresh rate)? It has to send a signal to the actuator for movement (if used in a follow-me drone).
Ans. - Generally, tracking happens from the bottom to the top of the pyramid, where the frame is successively scaled down. So if the object moved by 120px in the full frame and you go one level up in the pyramid, scaling it to half, the object has only moved by 60px at that level. I didn't understand much of this because it needs thorough reading.
Talking numbers, it generally tracks at 30fps at 1080p resolution.
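After reading up a bit, the pyramid idea can be sketched like this (my own minimal illustration, not code from the session): each level halves the resolution by averaging 2x2 blocks, so a motion of 120px at the base shrinks to 60px, 30px, 15px as you go up - and a cheap, small-window search at the top level can find the object quickly before the match is refined back down.

```python
def downsample(img):
    """Halve the resolution by averaging each 2x2 block of pixels."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w)] for y in range(h)]

def build_pyramid(img, levels):
    """Base image plus `levels` successively half-resolution copies."""
    pyr = [img]
    for _ in range(levels):
        pyr.append(downsample(pyr[-1]))
    return pyr

# A 16x16 gradient image -> pyramid of sizes 16, 8, 4, 2.
img = [[float(x + y) for x in range(16)] for y in range(16)]
pyr = build_pyramid(img, 3)
sizes = [len(level) for level in pyr]

# A displacement measured at the base halves at every level:
base_motion = 120
motions = [base_motion >> level for level in range(4)]  # 120, 60, 30, 15
```

This coarse-to-fine scheme is the same trick used by pyramidal Lucas-Kanade optical flow: search where pixels are few, refine where they are many.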
Conclusion
All in all, it was a great session. I got to know what new stuff these companies are working on and where the future is heading.
Presentation matters. Boring stuff can be made interesting if explained in an engaging way, and vice versa. It's better if you keep some humor in your talks and connect with the audience.
Demos are usually nerve-racking because sometimes things just don't work the way you expect them to.
Till the next post. Peace ✌