Thursday, August 08, 2013

Downsampling images to speed up GrabCut

In the previous post [1] I tried the GrabCut function of OpenCV and noticed it was time-consuming, but I didn't measure the exact processing time. I was still wondering just ``how slow'' GrabCut could be, so I added the clock() function to find out [2].

Later, I realized that the processing time is greatly affected by the image size, so I searched for functions that could shrink the image to speed up GrabCut. What I found were cv::pyrDown() and cv::pyrUp(), and I used them in my test code (listed below).

#include "opencv2/opencv.hpp"
#include <iostream>
#include <time.h>

using namespace std;

const bool DOWN_SAMPLED = true;
const unsigned int BORDER = 1;
const unsigned int BORDER2 = BORDER + BORDER;

int main( )
{
    clock_t tStart_all = clock();
    // Open the image
    cv::Mat image;
    image = cv::imread("sunflower02.jpg");

    if (!image.data) { // Check for invalid input
        cout << "Could not open or find the image" << std::endl;
        return -1;
    }

    cv::Mat result; // segmentation result (4 possible values)
    cv::Mat bgModel, fgModel; // the models (internally used)

    if (DOWN_SAMPLED) {
        // downsample the image
        cv::Mat downsampled;
        cv::pyrDown(image, downsampled, cv::Size(image.cols/2, image.rows/2));

        cv::Rect rectangle(BORDER,BORDER,downsampled.cols-BORDER2,downsampled.rows-BORDER2);

        clock_t tStart = clock();
        // GrabCut segmentation
        cv::grabCut(downsampled,    // input image
            result,   // segmentation result
            rectangle,// rectangle containing foreground
            bgModel,fgModel, // models
            1,        // number of iterations
            cv::GC_INIT_WITH_RECT); // use rectangle
        printf("Time taken by GrabCut with downsampled image: %f s\n", (clock() - tStart)/(double)CLOCKS_PER_SEC);

        // Get the pixels marked as likely foreground
        cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ);
        // upsample the resulting mask
        cv::Mat resultUp;
        cv::pyrUp(result, resultUp, cv::Size(result.cols*2, result.rows*2));
        // Generate output image
        cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
        image.copyTo(foreground,resultUp); // bg pixels not copied

        // display original image
        cv::namedWindow("Original Image");
        cv::imshow("Original Image",image);

        // display downsampled image
        cv::rectangle(downsampled, rectangle, cv::Scalar(255,255,255),1);
        cv::namedWindow("Downsampled Image");
        cv::imshow("Downsampled Image",downsampled);

        // display downsampled mask
        cv::namedWindow("Downsampled Mask");
        cv::imshow("Downsampled Mask",result);

        // display final mask
        cv::namedWindow("Final Mask");
        cv::imshow("Final Mask",resultUp);

        // display result
        cv::namedWindow("Segmented Image");
        cv::imshow("Segmented Image",foreground);
    }
    else {
        cv::Rect rectangle(BORDER,BORDER,image.cols-BORDER2,image.rows-BORDER2);

        clock_t tStart = clock();
        // GrabCut segmentation
        cv::grabCut(image,    // input image
            result,   // segmentation result
            rectangle,// rectangle containing foreground
            bgModel,fgModel, // models
            1,        // number of iterations
            cv::GC_INIT_WITH_RECT); // use rectangle
        printf("Time taken by GrabCut with original image: %f s\n", (clock() - tStart)/(double)CLOCKS_PER_SEC);

        // Get the pixels marked as likely foreground
        cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ);
        // Generate output image
        cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
        image.copyTo(foreground,result); // bg pixels not copied

        // display original image with the rectangle drawn on it
        cv::rectangle(image, rectangle, cv::Scalar(255,255,255),1);
        cv::namedWindow("Original Image");
        cv::imshow("Original Image",image);

        // display result
        cv::namedWindow("Segmented Image");
        cv::imshow("Segmented Image",foreground);
    }

    printf("Total processing time: %f s\n", (clock() - tStart_all)/(double)CLOCKS_PER_SEC);

    cv::waitKey(0); // keep the windows open until a key is pressed
    return 0;
}

The key idea was to downsample the image for GrabCut and then upsample the result (I thought of it as a mask) back to the original size. The result showed a remarkable speed-up in both debug and release mode.
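The downsample-process-upsample pattern can be sketched independently of OpenCV. Here is a minimal numpy sketch with a made-up toy mask; it uses nearest-neighbor repetition for the upsampling step, whereas cv::pyrUp additionally smooths with a Gaussian kernel, which is one source of the rough edges discussed below:

```python
import numpy as np

# Suppose GrabCut ran on a half-resolution image and produced this
# label mask (0 = background, 1 = foreground) -- toy values only.
small_mask = np.array([[0, 0, 0, 0],
                       [0, 1, 1, 0],
                       [0, 1, 1, 0],
                       [0, 0, 0, 0]], dtype=np.uint8)

# Nearest-neighbor 2x upsampling: each label becomes a 2x2 block.
full_mask = small_mask.repeat(2, axis=0).repeat(2, axis=1)

print(full_mask.shape)      # twice the size in each dimension: (8, 8)
print(full_mask[2:6, 2:6])  # the foreground region, now a 4x4 block of ones
```

Because each coarse label simply becomes a block of pixels, any detail finer than the downsampling factor is lost, which is exactly the trade-off measured in the table below.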

Here are the output images with the downsampling strategy:

Fig 1. Original image
Fig 2. Downsampled image

Fig 3. Mask obtained by using GrabCut

Fig 4. Upsampled mask

Fig 5. Final result
Here is the result without the downsampling strategy:
Fig 6. GrabCut result without downsampling

Comparing Figures 5 and 6, we can easily notice the differences between the segmented results. With the downsampling strategy, some image details were lost, and the resulting mask was different and had rougher edges.

Although the downsampling strategy has the drawback of losing image details, the benefit of reduced processing time was significant. The following table lists the processing times obtained by running the above code with and without the downsampling strategy.

processing time (sec.)     without downsampling    with downsampling
GrabCut (debug mode)              3.078                  0.717
GrabCut (release mode)            0.599                  0.123
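From the numbers in the table, the speed-up factor works out to roughly 4-5x in both build modes:

```python
# speed-up = time without downsampling / time with downsampling
debug_speedup = 3.078 / 0.717
release_speedup = 0.599 / 0.123
print(round(debug_speedup, 2), round(release_speedup, 2))  # -> 4.29 4.87
```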

[1] Try GrabCut using OpenCV
[2] How to use clock() in C++

Friday, August 02, 2013

Building OpenCV libs and dlls using CMake in Windows 7

(This is a simple note of my building process.)

As mentioned in the previous post, I tried GrabCut using OpenCV's library. Because I didn't have the libs and DLLs for debug mode, I used CMake to build them for my own usage.

First I went to the OpenCV website to download the latest stable version, 2.4.6. The source code for Windows is packed in an exe file. Don't worry about that; just download and run it, and 7-Zip will extract the whole source package for you. In my case, the extracted folder was named ``opencv''.

Then I launched the CMake GUI, chose the location of the extracted folder, and chose a build directory for the build files.

Next I clicked the ``Configure'' button and, once everything was okay, the ``Generate'' button. I had chosen ``Visual Studio 2005'' as the generator (at a step I don't remember exactly), so the generated result contained an OpenCV.sln in the build folder.

The final step was simply to open OpenCV.sln in Visual Studio and build the project in both Debug and Release modes. The products were located in the build/bin and build/lib directories.

Try GrabCut using OpenCV

I was considering using GrabCut to cut out the target in one of my work projects. After testing it in Python, I thought it necessary to try it in C++ as well, so I looked for some example code and picked one for my test [1].

Here is my test code, the sample photo, and the result:

#include "opencv2/opencv.hpp"
#include <iostream>

using namespace cv;
using namespace std;

int main( )
{
    // Open the image
    Mat image;
    image = cv::imread("sunflower02.jpg");

    if (!image.data) { // Check for invalid input
        cout << "Could not open or find the image" << std::endl;
        return -1;
    }

    // define bounding rectangle
    int border = 20;
    int border2 = border + border;
    cv::Rect rectangle(border,border,image.cols-border2,image.rows-border2);

    cv::Mat result; // segmentation result (4 possible values)
    cv::Mat bgModel,fgModel; // the models (internally used)

    // GrabCut segmentation
    cv::grabCut(image,    // input image
        result,   // segmentation result
        rectangle,// rectangle containing foreground
        bgModel,fgModel, // models
        1,        // number of iterations
        cv::GC_INIT_WITH_RECT); // use rectangle
    // Get the pixels marked as likely foreground
    cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ);
    // Generate output image
    cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
    image.copyTo(foreground,result); // bg pixels not copied

    // draw rectangle on original image
    cv::rectangle(image, rectangle, cv::Scalar(255,255,255),1);

    // display result
    cv::namedWindow("Segmented Image");
    cv::imshow("Segmented Image",foreground);
    cv::waitKey(0); // keep the window open until a key is pressed

    return 0;
}


The sample photo used in the test

The result of applying GrabCut

During the test, I ran into an old familiar problem: odd runtime bugs in debug mode when using OpenCV. The solution seems to be NOT to mix the debug and release libraries [2].

Oh, by the way, the processing time of GrabCut was too long (about 2 seconds in the test case), so I don't think it's feasible for realtime applications. Orz


Thursday, August 01, 2013

cv2.waitKey bug

I was trying a GrabCut sample code (don't use this version) written in Python by Abid Rahman and got stuck at a strange problem: cv2.waitKey() didn't work normally!

Yesterday I searched for the problem on Google and found nothing useful (given my skill level, I might have overlooked something that could have been a hint), so I decided to ask in the G+ Python community [1].

Based on Brett Ponsler's helpful suggestion, I rewrote my test code and also downloaded the GrabCut sample code again (this version is fine). This time, the sample code ran successfully.

Then I noticed a magic word ``0xFF'' in the newly downloaded sample code. Using that hint, I finally found the bug report about cv2.waitKey() and came up with a tiny test code:

import cv2
import numpy as np


while True:
    #key = cv2.waitKey(33) #this won't work
    #key = 0xFF & cv2.waitKey(33) #this is ok
    key = np.int16(cv2.waitKey(33)) #this is ok [2]

    if key == 27:
        print key, hex(key), key % 256
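The effect of the mask can be illustrated without OpenCV. On the affected setups, waitKey() returned the key code with extra high bits set, so a direct comparison with 27 (ESC) failed; keeping only the low byte recovers the key. The raw value below is just an illustrative example of the kind of value the bug report describes, not an exact reproduction:

```python
raw = 0x10001B       # hypothetical buggy return value for ESC (27 + high bits)
key = raw & 0xFF     # keep only the lowest byte, as in the fixed sample code
print(raw == 27, key == 27)  # -> False True
```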

[1] The question I posted in the Python community:

Wednesday, July 31, 2013

Trying sklearn for machine learning (with the face recognition sample)

I was trying to evaluate the feasibility of a project, and Python was of course my first choice. While setting up the development environment, however, I got frustrated by the installation of the scikit-learn package.

Quick tip: download the latest stable version (0.14a1) of scikit-learn and play with the sample code given in the source package.

Installation by pip (failed)

The first frustration might be caused by my stupidity.

I followed the instructions on the scikit-learn page and used pip to complete the installation. Then I googled an example code and found it couldn't be run successfully. The Python interpreter always complained with
ImportError: cannot import name scikits.learn
I googled for a solution again and again, and all the answers pointed to ``multiple versions of Python installed in the system.'' But I had only Python 2.7 in my Ubuntu!

What I did was uninstall scikit-learn and reinstall it. I also tried installing it from source, but nothing changed.

Then I thought of something and looked for sample code on the scikit-learn page itself. It turned out that the module name should be sklearn instead of scikits.learn... Orz

So the sample code I had found was simply using the old module name.

Using version 0.13.1 (failed)

I am not sure whether this is a bug. I could not run the example code located in the source package of version 0.13.1, even though my installed scikit-learn modules were also version 0.13.1. The error message was:
ImportError: cannot import name column_or_1d
and I found ``import sklearn.datasets'' would trigger this error.

I also tried to follow the traceback given by the interpreter, but only learned that it was due to an import; my debugging skill couldn't bring me further.

Version 0.14a1 (succeeded)

Okay, I had run out of approaches... I almost gave up, but then I thought of the possibility of using the latest version to solve the problem. So I downloaded the source of version 0.14a1 and installed it. Finally, I got the sample code running with the expected outputs.

Face recognition example test

If you have downloaded the source package, you can find the example in the path of: YOUR_FOLDER/scikit-learn-0.14a1/examples/applications/

Frankly, I don't fully understand the output yet, but I would like to post the text output of the run along with the result figures.

Text output

Faces recognition example using eigenfaces and SVMs

The dataset used in this example is a preprocessed excerpt of the
"Labeled Faces in the Wild", aka LFW_: (233MB)

.. _LFW:

Expected results for the top 5 most represented people in the dataset::

                     precision    recall  f1-score   support

  Gerhard_Schroeder       0.91      0.75      0.82        28
    Donald_Rumsfeld       0.84      0.82      0.83        33
         Tony_Blair       0.65      0.82      0.73        34
       Colin_Powell       0.78      0.88      0.83        58
      George_W_Bush       0.93      0.86      0.90       129

        avg / total       0.86      0.84      0.85       282

2013-07-31 08:04:43,243 Downloading LFW metadata:
2013-07-31 08:04:46,028 Downloading LFW metadata:
2013-07-31 08:04:46,740 Downloading LFW metadata:
2013-07-31 08:04:48,140 Downloading LFW data (~200MB):
2013-07-31 08:11:24,620 Decompressing the data archive to /home/thk/scikit_learn_data/lfw_home/lfw_funneled
2013-07-31 08:11:33,822 Loading LFW people faces from /home/thk/scikit_learn_data/lfw_home
2013-07-31 08:11:33,981 Loading face #00001 / 01288
2013-07-31 08:11:36,218 Loading face #01001 / 01288
Total dataset size:
n_samples: 1288
n_features: 1850
n_classes: 7
Extracting the top 150 eigenfaces from 966 faces
done in 0.806s
Projecting the input data on the eigenfaces orthonormal basis
done in 0.065s
Fitting the classifier to the training set
done in 16.244s
Best estimator found by grid search:
SVC(C=1000.0, cache_size=200, class_weight=auto, coef0=0.0, degree=3,
  gamma=0.001, kernel=rbf, max_iter=-1, probability=False,
  random_state=None, shrinking=True, tol=0.001, verbose=False)
Predicting people's names on the test set
done in 0.049s
                   precision    recall  f1-score   support

     Ariel Sharon       0.67      0.78      0.72        18
     Colin Powell       0.77      0.80      0.78        61
  Donald Rumsfeld       0.71      0.76      0.73        29
    George W Bush       0.90      0.89      0.89       134
Gerhard Schroeder       0.71      0.63      0.67        27
      Hugo Chavez       0.93      0.58      0.72        24
       Tony Blair       0.69      0.83      0.75        29

      avg / total       0.81      0.80      0.80       322

[[ 14   2   1   1   0   0   0]
 [  3  49   1   3   0   1   4]
 [  1   3  22   2   0   0   1]
 [  2   6   4 119   1   0   2]
 [  1   1   1   4  17   0   3]
 [  0   3   1   0   5  14   1]
 [  0   0   1   3   1   0  24]]


Compiling NiSimpleViewer -- from Kinect OpenNI sample code

(This was an old article kept in draft state for about four months)

In a previous post I installed the OpenNI SDK and tested some of its samples [1]. I am trying to learn something from the sample code, and what I've chosen is the NiSimpleViewer example (which can be found in openni/Samples/NiSimpleViewer).

To avoid messing up the original sample code, I copied the whole directory of NiSimpleViewer and renamed it to mySimpleViewer.

The first task was to compile the source code. Of course the samples included makefiles, but they were meant for more general and more complicated cases; what I needed was a simple, self-contained makefile. My first attempt was to simplify the original makefile, but unfortunately it was too complicated for me to understand, let alone modify. So I wrote a simple one as follows (I knew it was unnecessary for such a simple case, but I just wanted a bit of exercise in writing makefiles):

CC = g++
CFLAGS = -g -Wall -I /usr/include/ni
LDFLAGS = -lglut -lOpenNI
EXECUTABLE = mySimpleViewer

# build rule (source file name assumed to match the sample's)
$(EXECUTABLE): NiSimpleViewer.cpp
	$(CC) $(CFLAGS) -o $@ $< $(LDFLAGS)


After running mySimpleViewer, the program complained that the file SamplesConfig.xml could not be found. I checked the source code as well as the files in the related directories and found that the path had to be changed. In fact, the relative path from my binary to the xml file is ``../Config'', not ``../../Config''. This is because the original Makefile put the binaries in openni/Samples/Bin/Release/, which is one directory deeper than my test example (thanks to my colleague for the reminder).

Also, I changed some #define macros to const variables, which to my knowledge is more ``C++ style'':
const XnChar* SAMPLE_XML_PATH = "../Config/SamplesConfig.xml";

const int GL_WIN_SIZE_X = 1280;
const int GL_WIN_SIZE_Y = 1024;

const unsigned int DISPLAY_MODE_OVERLAY = 1;
const unsigned int DISPLAY_MODE_DEPTH = 2;
const unsigned int DISPLAY_MODE_IMAGE = 3;
[1] Test Kinect in Ubuntu 12.04

Monday, July 15, 2013

Network problems of Linux Mint

I have installed and used Linux Mint (Maya) on my Toshiba Satellite for several months. The network setting has always troubled and annoyed me, especially the wireless one.

I had encountered three problems:
  1. If the cable is not connected to the laptop or the network is not working, the system always waits for a long time with messages reading:
    ``Waiting for network configuration...''
    ``Waiting up to 60 more seconds for network configuration...''
  2. When the network is interrupted, it doesn't recover automatically. So every time I close the laptop lid to put the system to sleep and open it to resume, I have to open a terminal and type ``sudo pon dsl-provider'' to get the network connection back.
  3. I couldn't connect to my wireless network at home and found nowhere to get the settings done. My wireless network is set as hidden, and I had added it in Network Connections. But when I tried to connect to it, the icon always showed the connecting state and the connection was never established.
Today I got all these problems solved and I am happy now.

Here are the solutions I found and tested successfully:
  1. Edit the file: /etc/init/failsafe.conf
    Find the lines with ``sleep'' and comment out the two which related to the system messages just mentioned above.
  2. Reinstall network-manager by
    sudo apt-get --reinstall install network-manager
    and use the following command to start the manager:
    sudo /etc/init.d/network-manager restart
  3. Edit the file: /etc/NetworkManager/NetworkManager.conf
    change ``managed=false'' to ``managed=true''
    restart the network manager by
    sudo /etc/init.d/network-manager restart
After doing these, the available wireless networks will show up. My hidden wireless network still couldn't be connected, so I clicked its icon and a window popped up for me to enter the password. I entered the password and everything went as expected, as shown in the figure:

Monday, May 20, 2013

i3 -- a tiling window manager

I had used wmii for several months and had almost forgotten it until today, when my Ubuntu 12.04 got stuck. I launched top and found that compiz was consuming most of my PC's resources. Of course, I didn't find out the cause (due to limited time and skill) and finally restarted the system in command-line mode.

So I recalled wmii, the wonderful and lightweight window manager.

But I also recalled some reasons which had prevented me from using wmii as my main window manager:

  1. I didn't know how to make wmii show the system panel (that is, the gnome panel), which holds things like the volume control and the daemon icons of ibus, Dropbox, etc.
  2. I have two monitors and didn't know how to make wmii work with a dual-monitor setup.

I did some quick search and found an interesting article written by Tanguy: Tiling window managers.

The article lists three tiling window managers, of which I had only used wmii. I had heard of awesome but haven't tried it yet. After reading Tanguy's introduction, I decided to try i3.

I also did some more searching about the system panel and finally got what I wanted. Here are some settings from my i3 configuration file (~/.i3/config):

# start-ups
exec unity-2d-panel
exec nm-applet
exec ibus-daemon
exec dropbox start -i
exec ~/Downloads/copy/x86/CopyAgent

# for dual-monitor
exec xrandr --output DVI-I-1 --auto --left-of DVI-I-2

where the DVI-I-1 and DVI-I-2 are my monitors detected by using the command xrandr.

Here are my working monitors with i3 as the window manager:

Using i3 window manager with dual monitors.

Close-up of the Unity panel.

I am not sure whether I got the dual-monitor setting right. In my case, the monitors show two different workspaces rather than a single workspace extended across both.

Wednesday, April 24, 2013

Some notes on find and replace using Vim, rename, and perl in command line...

Using rename in bash script

As mentioned in the previous post [1], I needed to replace the dots in the filenames of a bunch of eps figures with other characters. The following script did the work [2]:

for f in $(find . -name "*.eps" -type f)
do
    echo "found: "$f
    # option -n is useful to preview the renamed results:
    rename -v 's/(\d+)\.(\d+)\.(\d+)/$1-$2-$3/' $f
done

In this case, I learned how to keep some parts of the old string while replacing others. The key concept is to use parentheses to group the parts we want to keep and then use $n to refer to the nth group in the substitution expression. Using ``91.1.8.eps'' as the example:
  • \d stands for digits
  • \d+ means at least ONE digit
  • each (\d+) captures a group, which are respectively 91, 1, and 8 in this example
  • $1 corresponds to the first group which is 91
  • $2 corresponds to the second group which is 1
  • $3 corresponds to the third group which is 8
Therefore, the dots between the digits will be replaced by dashes.
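The same grouping-and-backreference idea can be checked with Python's re module (re.sub writes backreferences as \1 instead of rename's $1, but the pattern is identical):

```python
import re

# keep the digit groups, replace the dots between them with dashes
new_name = re.sub(r'(\d+)\.(\d+)\.(\d+)', r'\1-\2-\3', '91.1.8.eps')
print(new_name)  # -> 91-1-8.eps
```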

Substitution in Vim

After renaming all the eps files, I had another, more complicated problem: all the corresponding filename strings residing in the tex files also had to be changed! At first I edited one of the tex files in Vim and played with the substitution command. The final command I used was [3][4]:

:%s/\(\d\+\)\.\(\d\+\)\.\(\d\+\)/\1-\2-\3/g

Note that there are some minor differences in how the expressions are written. Some of the modifications, e.g. the escaping backslashes, are due to the difference between BRE and ERE [5].
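The BRE/ERE difference can be seen directly with grep: BRE needs the backslash-escaped \( \) and \+, while ERE (grep -E) uses the bare forms. Both commands below extract the same ``91.1.8'' from the filename:

```shell
echo '91.1.8.eps' | grep -o '\([0-9]\+\)\.\([0-9]\+\)\.\([0-9]\+\)'   # BRE
echo '91.1.8.eps' | grep -Eo '([0-9]+)\.([0-9]+)\.([0-9]+)'           # ERE
```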

Find and Replace in multiple files

Although I could do the find-and-replace work in Vim, that was not a good idea when there might be hundreds of such files. Writing a bash script was my first thought, and with information found on the internet [6][7][8] I got a usable script as follows:

for f in $(find . -name "*.tex" -type f)
do
    echo "found: "$f
    perl -p -i -e 's/(\d+)\.(\d+)\.(\d+)/$1-$2-$3/' $f
done

[1] XeTeX -- using dots in eps filenames would cause errors
[2] batch renaming with the rename command

[3] Vim Regular Expressions 101: Grouping and Backreferences
[4] Search/Replace in Vim

[5] Basic Regular Expressions and Extended Regular Expressions
[6] Easy Search and Replace in Multiple Files on Linux Command Line
[7] bash find directories
[8] Find file or directory in whole directory structure

Monday, April 15, 2013

Ubuntu 12.04 doesn't print PDF file via network printer

My office PC is linked to two printers via the network (an intranet, I think...), and printing had worked fine until recently. Although I have installed and used Ubuntu 12.04 for a while, I have not printed files often. Several weeks ago I tried to print some documents, but the printers just gave me strange error messages and stopped working. Today I tried to print something again and had the same problem. This time I decided to make it work.

At the beginning I had only vague keywords and got no useful search results. I launched LibreOffice Writer to create a simple test file, and it printed successfully; but after saving it as PDF, printing failed. Then I noticed that the documents which had failed to print were all PDF files, so the problem could be the file type.

Using PDF as one of the search keywords, I found some Ubuntu bug reports. I followed some suggestions in one of the threads [1] with little success.

The approach of updating cups-filters with precise-proposed didn't work for me [2]. Actually, I didn't see any update packages even after enabling the precise-proposed option.

What worked was changing the printers' settings via the command line [3]. I made one of the printers work with PDF documents by using the following settings:
$ lpadmin -p Hewlett-Packard-HP-LaserJet-P3005 -o pdftops-renderer-default=gs
$ lpadmin -p Hewlett-Packard-HP-LaserJet-P3005 -o pdftops-max-image-resolution-default=0
where Hewlett-Packard-HP-LaserJet-P3005 is the printer name. I deleted the second setting by using
$ lpadmin -p Hewlett-Packard-HP-LaserJet-P3005 -R pdftops-max-image-resolution-default
and the HP printer still worked fine when printing PDF documents.

The other Xerox printer, however, still didn't work after changing the settings.

[1] Printing on PostScript printers (or printers with PostScript-based driver) not working
[2] #20 of the above thread
[3] #17 of the above thread

Thursday, April 11, 2013

XeTeX -- using dots in eps filenames would cause errors

I had a set of tex files which included many eps figures and compiled successfully with the latex+dvips+ps2pdf commands. But due to some Unicode issues, I shifted to XeTeX at least several months ago [1]. A strange problem, however, emerged when I invoked xelatex to compile the same set of tex files. The error message was:
! Unable to load picture or PDF file './EPS_FILE_DIR/91.1.8.eps'.
That's strange, because I remembered having compiled other tex files with eps figures flawlessly with xelatex. I dug up those successfully compiled files to make sure they could still be compiled on my machine. They could. So the problem had to be caused by 91.1.8.eps itself. Suddenly it occurred to me that maybe the dots confused the xelatex command, so I renamed the file to 91_1_8.eps, and that solved the problem.

Although the problem has been solved, I still have no idea about why the dots could cause such a problem. I also tested with a 91.1.8.jpg file and to my surprise it passed the compilation without the error message.

I don't know whether other files (such as png, bmp, pdf, ...) also have similar problems, but I decided not to use dots to name my files anymore.
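My guess (unverified) is that the graphics code takes everything after the first dot as the file extension, and then fails to recognize it. A commonly cited workaround, if the dots must stay, is to brace the base name so that only the final dot starts the extension; this assumes the graphicx package is being used:

```latex
% assumed usage with \usepackage{graphicx}
\includegraphics{{91.1.8}.eps}  % braces keep "91.1.8" together as the base name
```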

[1] XeTeX -- using the system fonts for CJK tex file

Monday, April 08, 2013

Some problems caused by Vistalizator

In the previous post I said that using Vistalizator to change the display language in Win7 is easy. It was easy indeed, but there were some problems, and I have only solved one of them.

Thursday, March 28, 2013

Rename the blog title

My original blog title was ``人 teh 飛, 天 teh 看'' in Taiwanese, meaning ``humans are flying and God is watching.'' It came from my old hobby: RC airplanes and helicopters.

After graduating from the university and leaving the lab, my work has had nothing to do with aerial vehicles. I have had no time to keep up the RC flying hobby, and gradually my interest (partly due to my work) has shifted to programming for vision applications.

Maybe one day I will have time to fly RC aerial vehicles again, but for now I am focusing on image processing and programming and hoping to gain more experience in the future. So I changed the blog title to something simple with no non-ASCII characters; this tiny change helps me focus on the techniques I want to develop.

Monday, March 25, 2013

Test Kinect in Ubuntu 12.04

In the previous post I tried to make the Kinect (for Windows) work in my Win7 with the x86 driver. Actually, my first attempt was to make the Kinect work in my Ubuntu 12.04, but it failed, so I switched back to Win7 to make sure the Kinect was workable.

Several days ago I wanted to test the Kinect in my Ubuntu 12.04, and of course I googled and found OpenKinect. I followed the installation procedure but had no luck, and before proceeding I decided to make sure the Kinect was okay (hence `the previous post' about the test in Win7).

This time I followed another tutorial by igorbarbosa and finally got the Kinect working in my Ubuntu 12.04.

My installation steps were slightly different from those described in igorbarbosa's tutorial. I believe this is due to version changes in avin2's SensorKinect driver.

Now I am sharing my installation steps in the hope that they might be useful to you.

Friday, March 22, 2013

Vistalizator -- Easy way to change display language in Win7

Warning: Do NOT use Vistalizator to change your display language. I have encountered something annoying and decided to restore my system. I will post some screenshots of the applications in my Win7 which were affected by  Vistalizator.

For the problems cause by Vistalizator in my Win7, see this post.

Test Kinect in Win7 x64 using x86 driver

In the previous posts [1][2] I tried to rebuild a VS Kinect project given to me by my colleague and encountered some problems. Because my Win7 is the x64 version but the project had been built for the x86 (Win32) platform, the rebuilding work on my Win7 machine cost my colleague and me almost a whole working day (without my colleague's help, I might have given up).

Although I changed the project configuration to x64 [2], there were still problems with other libraries built along with the project for the x86 platform, for instance OpenCV. While I was wondering about the next step, my colleague suggested trying the x86 driver of the Kinect and, if it worked, just sticking with the Win32 project configuration.

Hmm, it seemed I had taken a long route and headed back to the origin... XD

Okay, the happy news is the x86 driver really worked in the x64 Win7, and here is a working log of the test.

Thursday, March 21, 2013

Change VS2005 project from Win32 to x64

I was rebuilding a Win32 project on my x64 Win7 PC. The online sources [1][2] told me just to change the active project configuration from Win32 to x64, but my VS2005 had no x64 option. Then it occurred to me that maybe I hadn't installed the x64-related modules when installing VS2005, so I checked with the installation package.

Wednesday, March 20, 2013

'WIN32': No such file or directory

I was rebuilding a project given by my colleague and encountered strange error message:

c1xx : fatal error C1083: Cannot open source file: 'WIN32': No such file or directory

This error was caused by an empty path variable [1], $(OPEN_NI_INCLUDE), which (I guess) indicates the directory containing the installed OpenNI SDK. It was empty because I forgot to restart Visual Studio after installing the SDK... :-p

So the solution is simple: restart Visual Studio and the path variable will be set.


After solving the previous problem, I got another similar problem which related to the Linker path.

I used the set command [2] in the Windows cmd to list the environment variables related to OpenNI:
OPEN_NI_BIN64=C:\Program Files\OpenNI\Bin64
OPEN_NI_INCLUDE=C:\Program Files\OpenNI\Include
OPEN_NI_INSTALL_PATH64=C:\Program Files\OpenNI\
OPEN_NI_LIB64=C:\Program Files\OpenNI\Lib64
Then I changed $(OPEN_NI_LIB) to $(OPEN_NI_LIB64), and the new error message said that the file OpenNI.lib could not be found; the reason was that it should be OpenNI64.lib on my system. The right place to change the library file name is in the following path:
Properties -> Configuration Properties -> Linker -> Input -> Additional Dependencies

[1] Cannot open source file: 'WIN32': No such file or directory
[2] How can I display the contents of an environment variable from the command prompt in Windows 7?

Friday, March 01, 2013

[PCVP] Gaussian blurring and a minor imshow problem

As mentioned in the previous post (To-Do List of 2013), I am trying to learn something by following the examples listed in Programming Computer Vision with Python (PCVP).

I've done some exercises without any systematic or consistent record, so I think it's time to post some results here to record my own understanding as well as to push myself to keep going.

The exercise here is a test of Gaussian blurring given in pp.31-32 of the PCVP book. The following is my code and output images.

from PIL import Image
import numpy as np
from scipy.ndimage import filters
import matplotlib.pylab as plt

img = Image.open('../data/empire.jpg')
im0 = np.array(img) #original image
im1 = np.array(img.convert('L')) #convert to grayscale
im2 = filters.gaussian_filter(im0,2)
im3 = filters.gaussian_filter(im0,5)
im4 = filters.gaussian_filter(im0,10)

for i in range(5):
    plt.subplot(1, 5, i+1)
    plt.title( '(' + str(unichr(97+i)) + ')' )
    #plt.imshow(eval('im'+str(i))) # This will output the grayscale image with colors
    plt.imshow(eval('im'+str(i)), cmap=plt.cm.gray) # the cmap option fixes the grayscale display [1]
plt.show()

From left to right: (a) original image, (b) grayscale image, (c)  Gaussian filter with σ = 2, (d) σ = 5, (e) σ = 10.

Actually, the Gaussian blurring caused no problems, but the convert('L') and imshow() functions bothered me for a while. I expected imshow() to give me a grayscale image like the one shown in the PCVP book, but it gave me an image with colors, like the second one in the following figure.

The solution I found [1] and tested is to add a cmap option telling imshow() which colormap [2] to apply to the output image.

[1] Display image as grayscale using matplotlib
[2] Matplotlib Color Maps

Wednesday, January 23, 2013

iBus 的行列輸入法 (Array30 input method of iBus)

If you want to use the Array30 input method in iBus, remember to choose the ``ibus-array'' package instead of ``ibus-table-array30''. The former acts identically to the version that runs on Windows and makes fast input possible.

Once you have installed ibus-array successfully, there will be an icon with a blue character like the following:
ibus-array icon

If your icon is a red one that looks like this:
ibus-table-array30 icon
then you have made the wrong choice.

Wednesday, January 16, 2013

To-Do List of 2013

Hmm... It's already the 16th of January, and suddenly I feel like making a to-do list for this year. Actually, I've never made an annual plan before, but I think it would be good to list what I want to do this year on my own blog and to keep track of my progress on the to-do items.

The following list shows things I am doing or want to do:
  1. to take open courses on Coursera
  2. to complete EGGN 512 Computer Vision course on YouTube
  3. to read and to do exercises of Programming Computer Vision with Python

Coursera is a great place to find something to learn, and I think joining a course to keep up the learning momentum is a good idea. So far I've taken two courses and found them fun: Introduction to Astronomy and Programming Languages.

For EGGN 512 Computer Vision by William Hoff, I followed the videos last year but failed to keep up. So this is my second attempt to follow the course at my own tempo. I don't want to fail it again.

It is not only the videos: there is also a reference book, Computer Vision: Algorithms and Applications by Richard Szeliski, and reading it thoroughly could be a heavy burden for me.

Finally, I would like to pick up Python programming again. It has been out of my life for a while, and I want it back. Because I have to learn all of this with limited time and energy, I decided to learn computer vision techniques with Python as the programming tool.

In fact my list could be longer, but the items presented will definitely consume all my leisure time. So I think it's better not to be too greedy, or I might end up with nothing at the end of 2013.

Okay. List has been determined.

Get things done. And just do it. :-)