Object Detection using Tensorflow: bee and butterfly Part II

Object Detection using Tensorflow: bee and butterflies

  1. Part 1: set up tensorflow in a virtual environment
  2. adhoc functions
  3. Part 2: preparing annotation in PASCAL VOC format
  4. Part 3: preparing tfrecord files
  5. more scripts
  6. Part 4: start training our machine learning algorithm!
  7. COCO API for Windows
  8. Part 5: perform object detection

Tips: do remember to activate the virtual environment if you have deactivated it. A virtual environment helps ensure that the packages we download do not interfere with the system or other projects, especially when we need older versions of some packages.
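For example, in cmd.exe, assuming the environment was created with python -m venv at a path like the one below (the path here is hypothetical; use wherever you created yours in Part I):

C:\Users\acer\Desktop\adhoc\venv\Scripts\activate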

We continue from Part I. Let us prepare the data to feed into the algorithm for training. We will not feed the images into the algorithm directly; instead, we will convert them into tfrecord files. Create the following directory for the preparation. I named it keropb; you can name it anything.

adhoc/keropb
+ butterflies_and_bees
  + Butterflies
    - butterflyimage1.png 
    - ...
  + Butterflies_canvas
    - butterflyimage1.png
    - ...
  + Bees
    - beeimage1.png
    - ...
  + Bees_canvas
    - beeimage1.png
    - ...
+ do_clone_to_annotate.py
+ do_convert_to_PASCALVOC.py
+ do_move_a_fraction.py
+ adhoc_functions.py

Note: the image folders and the corresponding canvas folders can be downloaded here. Also, do not worry, the last 4 python files will be provided along the way.

We store all our butterfly images in the folder Butterflies and all bee images in the folder Bees. The _canvas folders start out as exact replicas of the corresponding folders: you can copy-paste both the Butterflies and Bees folders and rename the copies. In the canvas folders, however, we will mark out the butterflies and the bees. In a sense, we are teaching the algorithm which objects in the pictures are butterflies and which are bees. To mark out a butterfly, block it out in white, RGB (255,255,255). This is easy to do: just open the good ol’ Paint program and paint over the butterfly in white, or use the eraser. See the example below. Note that each image and its canvas counterpart must have exactly the same file name.

Tips: if an image contains white patches other than the object, they might be wrongly detected as a butterfly too. This is bad. In that case, paint these irrelevant white patches with another obvious color, such as black.
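To double-check a canvas image before processing, here is a quick standalone sketch (not part of kero; it assumes we run it from the keropb folder and that the file name matches the tree above). It measures what fraction of the image is bright enough to be treated as the white mask, using the same 250 cutoff as the thresh argument we will pass later:

import cv2
import numpy as np

canvas = cv2.imread("butterflies_and_bees\\Butterflies_canvas\\butterflyimage1.png")
gray = cv2.cvtColor(canvas, cv2.COLOR_BGR2GRAY)
mask = gray >= 250  # pixels painted (near-)white
print("white fraction:", np.mean(mask))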

(Image: butterfly.jpg — an example butterfly image; in the canvas copy the butterfly is painted over in white.)

Install the package kero and its dependencies.

pip install kero
pip install opencv-python
pip install pandas
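To quickly verify that the installation worked (a simple sanity check; not required):

python -c "import kero, cv2, pandas; print('ok')"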

Tips. Consider using clone_to_annotate_faster(). It is A LOT faster, with a small trade-off in the accuracy of bounding boxes on rotated images. The step-by-step instructions can be found in Object Detection using Tensorflow: bee and butterfly Part II, faster. If you use it, you can skip the following steps and the front part of Part III; follow the instructions there.

Create and run the following script, do_clone_to_annotate.py, from adhoc/keropb, i.e. in cmd.exe, cd into keropb and run the command

python do_clone_to_annotate.py

Tips: we have set check_missing_mode=False. It is good to set it to True first. This checks that each image in Butterflies has a corresponding image in Butterflies_canvas, so that we can identify and fix any missing images before processing. If everything is fine, “ALL GREEN. No missing files.” will be printed. Then set it back to False and run the script again.
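Conceptually, check_missing_mode=True performs a comparison like this standalone sketch (an illustration, not kero’s actual implementation; assumes we run it from the keropb folder):

import os

src = "butterflies_and_bees\\Butterflies"
canvas = "butterflies_and_bees\\Butterflies_canvas"
missing = sorted(set(os.listdir(src)) - set(os.listdir(canvas)))
print(missing if missing else "ALL GREEN. No missing files.")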

do_clone_to_annotate.py

import kero.ImageProcessing.photoBox as kip

# adjust the folders accordingly
this_folder = "C:\\Users\\acer\\Desktop\\adhoc\\keropb\\butterflies_and_bees\\Butterflies"
tag_folder = "C:\\Users\\acer\\Desktop\\adhoc\\keropb\\butterflies_and_bees\\Butterflies_canvas"

gsw = kip.GreyScaleWorkShop()
rotate_angle_set = [0, 30, 60, 90, 120, 150, 180]  # None
annotation_name = "butterfly"
gsw.clone_to_annotate(this_folder, tag_folder, 1, annotation_name,
    order_name="imgBUT_",
    tag_name="imgBUT_",
    check_missing_mode=False,
    rotate_angle_set=rotate_angle_set,
    thresh=250,
    significant_fraction=0.01)

Note: set order_name and tag_name to be the same so that adhoc_functions.py need not be adjusted later. Note also that Bees_LOG.txt and Butterflies_LOG.txt are created, listing how the image files are renamed.

Tips: read ahead. We will be doing the same thing for the Bees folder, so go ahead and open a new cmd.exe, create a copy named do_clone_to_annotate2.py (a sketch is shown below), and run the two processes in parallel to save time.
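Here is a sketch of what do_clone_to_annotate2.py would look like; the imgBEE_ prefix is my own choice, not prescribed by the package, and order_name and tag_name are kept identical as noted above:

import kero.ImageProcessing.photoBox as kip

# adjust the folders accordingly
this_folder = "C:\\Users\\acer\\Desktop\\adhoc\\keropb\\butterflies_and_bees\\Bees"
tag_folder = "C:\\Users\\acer\\Desktop\\adhoc\\keropb\\butterflies_and_bees\\Bees_canvas"

gsw = kip.GreyScaleWorkShop()
rotate_angle_set = [0, 30, 60, 90, 120, 150, 180]
annotation_name = "bee"
gsw.clone_to_annotate(this_folder, tag_folder, 1, annotation_name,
    order_name="imgBEE_",  # hypothetical prefix; keep order_name == tag_name
    tag_name="imgBEE_",
    check_missing_mode=False,
    rotate_angle_set=rotate_angle_set,
    thresh=250,
    significant_fraction=0.01)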

Tips: if annotation fails for one reason or another after the ground truth image generation is complete, make sure to set skip_ground_truth=True before rerunning the script, so that we do not waste time regenerating the ground truth images.

Running do_clone_to_annotate.py will create the Butterflies_CLONE, Butterflies_GT and Butterflies_ANNOT folders (the Bees script will create the corresponding Bees_ folders).

  1. The CLONE folder contains the images from Butterflies folder, but rotated to different angles as specified by the variable rotate_angle_set. This is to create more training images, so that the algorithm will learn to recognise the object even if it is tilted.
  2. The GT folder contains the ground truth images, converted to black and white. Each white patch should (ideally) be the object we marked out. Note that this may not be perfect, and more settings will become available as we develop the package to optimize this.
  3. The ANNOT folder contains annotations, which are bounding boxes showing where the object, butterfly or bee, is. This information is stored in a .txt file in the format:
    label height width xmin ymin xmax ymax

    where label is either bee or butterfly, and height and width are the height and width of the entire image. The image is also saved with the annotation box drawn on it, as shown below.

    (Image: imgan.JPG — a clone image with its annotation box drawn on it.)
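For instance, a 640×480 image containing one butterfly might produce a line like the following (the numbers here are purely illustrative):

butterfly 480 640 152 96 431 377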

Notice that we have done this for the Butterflies folder; do it for the Bees folder as well. Also, I am using only about 30 images for each category, bee and butterfly (you should use more). Using the above code, we rotate each image through the angles specified in the variable rotate_angle_set, i.e. the original orientation plus six rotations. This is so that the algorithm will be able to recognise the same object even if it appears in a different orientation. Note that at the time of writing, research on DNNs is still ongoing, and more robust image classification that can handle more transformations such as rotation might become available in the future. In total, then, we have about 180 images for each category.
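For intuition, the rotation step inside clone_to_annotate is conceptually similar to the following OpenCV sketch (a simplification, not kero’s actual code: here the corners of rotated images are simply clipped, and the canvas masks and bounding boxes are not handled):

import cv2

img = cv2.imread("butterflies_and_bees\\Butterflies\\butterflyimage1.png")
(h, w) = img.shape[:2]
for angle in [0, 30, 60, 90, 120, 150, 180]:
    # rotate about the image center, keeping the original frame size
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(img, M, (w, h))
    cv2.imwrite("rotated_%d.png" % angle, rotated)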

To make the tfrecord files that we will feed into the algorithm, we need to convert this information further into PASCAL VOC format. Create and run the following script, do_convert_to_PASCALVOC.py, from adhoc/keropb. (See adhoc_functions.py here.)

import adhoc_functions as af

# convert the butterfly annotations
annot_foldername = "C:\\Users\\acer\\Desktop\\adhoc\\keropb\\butterflies_and_bees\\Butterflies_ANNOT"
annot_filetype = ".txt"
img_foldername = "C:\\Users\\acer\\Desktop\\adhoc\\keropb\\butterflies_and_bees\\Butterflies_CLONE"
img_filetype = ".png"
af.mass_convert_to_PASCAL_VOC_xml(annot_foldername, annot_filetype,
    img_foldername, img_filetype)

# convert the bee annotations
annot_foldername = "C:\\Users\\acer\\Desktop\\adhoc\\keropb\\butterflies_and_bees\\Bees_ANNOT"
annot_filetype = ".txt"
img_foldername = "C:\\Users\\acer\\Desktop\\adhoc\\keropb\\butterflies_and_bees\\Bees_CLONE"
img_filetype = ".png"
af.mass_convert_to_PASCAL_VOC_xml(annot_foldername, annot_filetype,
    img_foldername, img_filetype)

A bunch of xml files, one for each butterfly or bee image, will be created in the _ANNOT folders. The format of these xml files is like this:
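The exact fields written by mass_convert_to_PASCAL_VOC_xml may differ slightly, but a typical PASCAL VOC annotation looks roughly like the following (file names and numbers are illustrative, matching the example line shown earlier):

<annotation>
  <folder>Butterflies_CLONE</folder>
  <filename>imgBUT_1.png</filename>
  <size>
    <width>640</width>
    <height>480</height>
    <depth>3</depth>
  </size>
  <object>
    <name>butterfly</name>
    <bndbox>
      <xmin>152</xmin>
      <ymin>96</ymin>
      <xmax>431</xmax>
      <ymax>377</ymax>
    </bndbox>
  </object>
</annotation>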

Good! We are ready to create tfrecords files in Part III.