My Realm

Adventure Time: A beginner's guide to OpenCV for Android

The iconic image used for image processing

I always wanted my own motion tracker, so I decided to make one. Before embarking on this adventure, I needed my tools and weapons. So, I downloaded the NDK, OpenCV for Android, and the OpenCV Manager app on my phone. The language I chose was Xtend. Because OpenCV's Java API is quite old, Xtend's functional features couldn't be exploited to the fullest. Nevertheless, I used it to get the hang of it.

This "motion tracker" of mine is quite simple and does even simpler jobs. Take your phone and fire up the app. A camera feed with a frame appears; place any black rectangular object in the frame. After 4-5 seconds, it locks onto that object and starts tracking its position. It can also send the positions, and the differences between them, to your PC.

Below is the code for the app. Don't freak out, I am explaining it all down below.

Now that you have scanned the code, you are absolutely ready. One thing I must tell you: I had no prior experience with image processing or OpenCV, and I did all of this in roughly 18 hours. So it might seem a bit intimidating and overwhelming, but trust me, this is NO BIG DEAL. Let's begin now.

The first thing to do is to include the OpenCV library in your project. Now that the power of OpenCV is included in the project, you can start implementing the features. The basic flow in OpenCV apps is:

1) Implement CvCameraViewListener2 in your Android Activity. This gives you the hooks to show a camera feed, grab the input, and process it.

2) In onCreate, initialize the cameraView and set its listener.

3) In onResume, check whether the OpenCV libraries are available. If they are, finally ENABLE the cameraView (in onCreate we only initialized it). Then apply this view to your activity through setContentView().

4) If everything goes well, the cameraView gets enabled and becomes visible on the screen. It will be blank, though.

5) When you implemented CvCameraViewListener2, you must have added 3 unimplemented methods (onCameraViewStarted, onCameraViewStopped and onCameraFrame). onCameraFrame is the one we are most interested in, and it receives the current frame as an argument. Go inside the method block and initialize a Mat object (a Mat is the fundamental container used to store images). Copy the frame's rgba output into this Mat, and then finally return it. For example:

    override onCameraFrame(CvCameraViewFrame inputFrame) {
        val mat = inputFrame.rgba
        return mat
    }

And it's done. Now you have a live camera feed in your app.
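The five steps above can be stitched together into one activity. Here is a minimal sketch in Xtend, under my assumptions: a layout called activity_main containing a JavaCameraView with the id camera_view, OpenCV 2.4.x loaded through the OpenCV Manager app, and names like MainActivity and Tracker chosen by me for illustration. Error handling is left out.

```
import android.app.Activity
import android.os.Bundle
import org.opencv.android.BaseLoaderCallback
import org.opencv.android.CameraBridgeViewBase
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2
import org.opencv.android.LoaderCallbackInterface
import org.opencv.android.OpenCVLoader

class MainActivity extends Activity implements CvCameraViewListener2 {

    CameraBridgeViewBase cameraView

    // Called by OpenCV Manager once the native libraries are loaded
    val loaderCallback = new BaseLoaderCallback(this) {
        override onManagerConnected(int status) {
            if (status == LoaderCallbackInterface.SUCCESS)
                cameraView.enableView   // step 3: enable only after the libs are in
            else
                super.onManagerConnected(status)
        }
    }

    override protected onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)   // layout holds a JavaCameraView
        cameraView = findViewById(R.id.camera_view) as CameraBridgeViewBase
        cameraView.setCvCameraViewListener(this)  // step 2
    }

    override protected onResume() {
        super.onResume()
        // step 3: ask OpenCV Manager for the libraries, asynchronously
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_9, this, loaderCallback)
    }

    override onCameraViewStarted(int width, int height) {}
    override onCameraViewStopped() {}

    // step 5: hand each frame straight back, giving a live camera feed
    override onCameraFrame(CvCameraViewFrame inputFrame) {
        inputFrame.rgba
    }
}
```

The key design point is that enableView is only called from inside the loader callback, never directly from onResume, because the native libraries arrive asynchronously.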

It's actually time for my class; I will continue this saga sometime later. Stay tuned.


So, I am back. Yes, that class was pretty long and I nearly died there. I can't understand why a software student is taught how BJTs and diodes work, and in such detail. I feel like dropping out. But anyway, let's continue with our adventure. I guess (I wish, actually) some of you tried something with OpenCV and tinkered with the sample apps it provides. But for our friends who were busy with Halloween, I will write up everything that I understood and learnt. In making this project, a lot of little bugs were swatted, and some really weird ones too; I will be listing them as well.

1) If you take any sample or example code for OpenCV on Android, the basic structure is: declare the camera object, implement the necessary interface (CvCameraViewListener2 in our case), have a function that checks whether the libraries are available and loads them, and call that function from onResume(). The important unimplemented methods from the interface are included automatically. Now, my first mistake in testing my first working OpenCV app (which is analogous to "hello world") was to declare a Mat object in onCreate. Because onResume is called after onCreate, the libraries are yet to be loaded at that point, so doing anything related to OpenCV in onCreate is not an option. Do all the work in the onCameraFrame method.
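As a sketch of the fix, assuming the same listener class as in the earlier steps: a common pattern in the OpenCV sample apps is to allocate Mats in onCameraViewStarted, which only runs after the camera view (and hence the native library) is up:

```
// WRONG: onCreate runs before OpenCV Manager has loaded the native
// libraries, so the Mat constructor fails (an UnsatisfiedLinkError).
//
// override protected onCreate(Bundle savedInstanceState) {
//     super.onCreate(savedInstanceState)
//     frame = new Mat   // don't do this here
// }

Mat frame

// RIGHT: the camera view only starts after the libraries are loaded,
// so native OpenCV calls are safe from here on.
override onCameraViewStarted(int width, int height) {
    frame = new Mat(height, width, CvType.CV_8UC4)
}

override onCameraViewStopped() {
    frame.release   // free the native buffer when the view stops
}
```

This assumes imports of org.opencv.core.Mat and org.opencv.core.CvType in the activity.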

2) Now that you have a live camera feed (the steps were told before my AE class), it's time for some fun. A "Mat" is just a 2D array which is used to store images. Images here tend to have different formats; the most relevant to me was RGBA. In an RGBA image, as received from inputFrame.rgba, each element of the matrix has 4 channels: R, G and B for the usual red, green and blue, and A for alpha, which corresponds to opacity. So each pixel of the mat object is a 4-element vector [R, G, B, A].
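To make that composition concrete, here is a sketch of reading one pixel inside onCameraFrame; the row/column values (100, 100) are arbitrary, and the log tag is my own:

```
override onCameraFrame(CvCameraViewFrame inputFrame) {
    val mat = inputFrame.rgba   // a CV_8UC4 Mat: 4 channels per pixel
    // Mat.get returns one pixel as a double[] of its channel values,
    // here [R, G, B, A], each in the range 0..255
    val pixel = mat.get(100, 100)
    Log.d("Tracker", "R=" + pixel.get(0) + " G=" + pixel.get(1)
            + " B=" + pixel.get(2) + " A=" + pixel.get(3))
    return mat
}
```

Note that Mat.get is fine for poking at a pixel or two, but it is far too slow for per-pixel loops; bulk operations from Core and Imgproc are the way to process whole frames.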


3) The Mat is the thing which you actually process and flip around. Core and Imgproc are two statically accessed classes which, to me, were pretty important. To get started, with the help of Core you can split that RGBA matrix into 4 matrices containing the 4 separate channels: the first one for R, the next for G, etc. And with Imgproc, you can convert an image to grayscale. These are just some of the most basic ways the classes can be used, but their power is immense. With some testing and play, you can see how awesome they can be. Check out functions like Core.flip, Core.split, Core.merge, Core.max, Imgproc.cvtColor (yeah, the title of this article contains the word 'guide'...). I learnt this "search it on your own" method from my Digital System Design teacher: anything that might take time, 'do it as your assignment'.
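A sketch of both of those operations inside onCameraFrame, under the same setup as before (assumes imports of org.opencv.core.Core, org.opencv.core.Mat, org.opencv.imgproc.Imgproc and java.util.ArrayList):

```
override onCameraFrame(CvCameraViewFrame inputFrame) {
    val rgba = inputFrame.rgba

    // Core.split: one single-channel Mat per channel, in order R, G, B, A
    val channels = new ArrayList<Mat>
    Core.split(rgba, channels)
    val red = channels.get(0)   // just the red channel, as its own Mat

    // Imgproc.cvtColor: collapse the 4 channels into one grayscale image
    val gray = new Mat
    Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY)

    return gray   // the live feed now shows in black and white
}
```

Whatever Mat you return from onCameraFrame is what gets drawn, so returning gray instead of rgba is an easy way to see cvtColor working.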

4) I swear I will continue tomorrow...


I am on Wordpress too. | No Copyright © 2014
