Blender multiple rendering

Pretty often, when we create a 3D model, for example using Blender, we would like to automate the rendering of images from different camera angles and perspectives. See, for example, this pre-processing part of an image processing project. The Blender file containing the 3D model can be found here.

After creating the model, use the following code to run auto_capture.py for rendering. The code is to be entered into the Python console in Blender (to access it, press Shift+F4).

import os, sys, importlib
current_dir = "path\\to\\current\\directory"
sys.path.append(current_dir) # add current path
import auto_capture
# importlib.reload(auto_capture) # reload to rerun the script after the first import

auto_capture.py

import os, sys, importlib
current_dir = "path\\to\\current\\directory"
sys.path.append(current_dir)  # add current path
# import auto_capture
# importlib.reload(auto_capture)

print("Hello!")

import math
import bpy

obj = bpy.data.objects['Animal.001']
obj_cam = bpy.data.objects['Camera']

# x, y, z, euler_x, euler_y, euler_z
pos = [
    [0, -3, 5, 30, 0, 0],
    [0, -3.5, 5, 40, 0, 10],
    [0, -4, 5, 45, 0, -5],
    [-2, -3, 3, 50, 0, -20],
]

count = 1

def convert_deg_to_rad(x):
    return x * math.pi / 180

for x in pos:
    obj_cam.location = x[0:3]
    obj_cam.rotation_euler = [convert_deg_to_rad(y) for y in x[3:]]
    print(obj_cam.location)
    print(obj_cam.rotation_euler)

    # render and save one image per camera pose
    bpy.data.scenes['Scene'].render.filepath = current_dir + '\\a' + str(count) + '.jpg'
    bpy.ops.render.render(write_still=True)
    count = count + 1

We use pos to specify the position of the camera x, y, z and then the extrinsic Euler angles (in degrees) of the camera w.r.t. the absolute x, y, z axes. We have specified 4 positions; of course, you can automate further by creating a longer list. The output is a set of 4 images, a1.jpg, a2.jpg, a3.jpg and a4.jpg, as shown in figure (D) below.
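Instead of hand-writing the pos list, it can also be generated programmatically. Below is a minimal sketch (plain Python, no Blender required) that places the camera at a fixed height on a circle around the origin, producing entries in the same [x, y, z, euler_x, euler_y, euler_z] format; the radius, height and tilt values, and the function name, are illustrative assumptions:

```python
import math

def circle_positions(n, radius=4.0, height=5.0, tilt_deg=45.0):
    """Generate n camera poses evenly spaced on a circle around the z-axis.

    Each pose is [x, y, z, euler_x, euler_y, euler_z] with angles in degrees,
    matching the format of the pos list above. The camera is tilted down by
    tilt_deg, and euler_z turns it to keep facing the centre.
    """
    poses = []
    for i in range(n):
        theta = 2 * math.pi * i / n       # angle around the circle
        x = radius * math.sin(theta)
        y = -radius * math.cos(theta)     # start at (0, -radius), like pos[0]
        poses.append([x, y, height, tilt_deg, 0.0, math.degrees(theta)])
    return poses

pos = circle_positions(8)  # 8 viewpoints instead of the 4 hand-written ones
```

The resulting list can be dropped straight into the rendering loop above.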

process

Ground Truth Images from 3D model

Look at this photo of a katydid. Amazing camouflage, isn’t it?

Katydid_camouflaged_in_basil_plant

By Jeff Kwapil – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=50923289

Recently I have been tasked with a project related to the detection of a rather elusive object. Our aim is to use an image processing algorithm to detect the object, and the fact that we cannot obtain many samples of the real object is a problem. I mean, imagine you have to capture images of a katydid: you search high and low, but it is so easy to overlook! Perhaps we can automate the photo taking, and then, from the many photos obtained, use an algorithm to detect which photos actually contain a katydid. We need to train the model for the image processing algorithm, but have too few samples. What are we to do? Let us assume we know the shape of this object.

Here, we use 3D modelling to generate many samples of the object seen from different perspectives against a camouflage background. The images used for training need to be marked with, say, a red box marking out the region where the elusive object is present, as shown above. We will be using ground truth images to help draw the red boxes. What we want to show here are the simple steps to create this ground truth image. We will be using Blender and Python. The code can be found here.

process

Let us suppose we want to detect the elusive “animal” from figure (A). The model of the object might not be easy to create, but let us also assume we have created a 3D model that replicates well the “real animal”, as shown in figure (B).

  1. In Object mode (see yellow box in figure (B)), select the object by right-clicking it. Once selected, change to Edit mode.
  2. Select the entire object. You can do this by selecting one vertex, edge or face and pressing A twice.
  3. Press SHIFT+D (duplicate the object), then left-click wherever the mouse cursor is. Press P and choose Selection. Notice that a separate object has been created. The name of my object is Animal, and the duplicated object generated is called Animal.001. This can be seen in the Outliner panel; see figure (C), yellow dashed rectangle.
  4. Scale Animal.001 to be just slightly larger than the actual Animal object, so that Animal.001 completely covers it. Now we can change the material to a color very different from the background; I change it to red. To do this, use the icon marked with the dashed yellow circle in figure (C).
  5. Render and save the images from different points of view, as shown in figure (D). You can use a Python script to automate the process (see example here). The files are saved as a1.png, a2.png, a3.png and a4.png.
  6. Use the following code ground.py to generate the ground truth images b1.png, b2.png, b3.png and b4.png.
import cv2

# Set current_dir to the directory where the Blender file is located.
current_dir = "path\\to\\current\\directory"
for i in range(4):
    count = i + 1
    myimg = cv2.imread(current_dir + '\\a' + str(count) + '.png')
    myimg_hsv = cv2.cvtColor(myimg, cv2.COLOR_BGR2HSV)
    output = cv2.inRange(myimg_hsv, (0, 40, 50), (30, 255, 255))
    cv2.imwrite(current_dir + '\\b' +str(count) + '.png', output)

In this code, the image is converted to HSV format, and we extract the part of the image with red color: more specifically, hue between 0 and 30, saturation from 40 to 255, and value (brightness) from 50 to 255. This is done through inRange() with arguments (0, 40, 50) and (30, 255, 255). The desired color is marked white and the rest black; this is our ground truth image, and we are done! Of course, these images are to be further processed for image processing, though that is not within the scope of this post.
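To see what inRange() does per pixel, here is a minimal pure-Python sketch (no OpenCV needed) of the same threshold test, using OpenCV's HSV conventions (hue 0–179, saturation and value 0–255); the function name in_range is ours, not an OpenCV API:

```python
def in_range(hsv, lower=(0, 40, 50), upper=(30, 255, 255)):
    """Return 255 if the (h, s, v) pixel lies inside the bounds, else 0.

    Mirrors cv2.inRange for a single pixel: every channel must satisfy
    lower[i] <= hsv[i] <= upper[i].
    """
    inside = all(lo <= c <= hi for c, lo, hi in zip(hsv, lower, upper))
    return 255 if inside else 0

print(in_range((15, 200, 180)))  # a strong red/orange hue: inside -> 255
print(in_range((90, 200, 180)))  # a green hue: outside -> 0
```

cv2.inRange simply applies this test to every pixel at once, producing the white-on-black mask.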

ground_truth.JPG

One final note: of course, we could just set the object Animal to red directly. However, when we need to extract only a part of an object, use the above method: highlight the part, duplicate it, change its color to red and capture the ground truth image.
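For the follow-up step of drawing the red boxes, the bounding box can be read straight off a ground-truth mask. A minimal sketch in plain Python (the mask here is a nested list of 0/255 values standing in for the b*.png images; with OpenCV one would typically use cv2.boundingRect instead, and the helper name is ours):

```python
def mask_bbox(mask):
    """Return (x_min, y_min, x_max, y_max) of the white (255) pixels, or None."""
    xs = [x for row in mask for x, v in enumerate(row) if v == 255]
    ys = [y for y, row in enumerate(mask) if 255 in row]
    if not xs:
        return None  # no object pixels in this ground truth image
    return (min(xs), min(ys), max(xs), max(ys))

# toy 4x4 mask with a small white blob
mask = [
    [0,   0,   0, 0],
    [0, 255, 255, 0],
    [0, 255,   0, 0],
    [0,   0,   0, 0],
]
print(mask_bbox(mask))  # (1, 1, 2, 2)
```

The returned corners are exactly where the red training box would be drawn on the corresponding a*.png image.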

Image Processing #2: Common Blender Functions

We will list some common functions and shortcuts.

Navigation. Use the arrows on the numpad to navigate, as well as their combinations with Ctrl or Shift. Have fun trying them out!

Selecting multiple objects. Hold Shift and left-click each object.

Real time rendering.

Look at the red circles. We can change the sections of Blender’s interface by clicking on them, depending on our preferences. To edit a scene and see the changes in real time, set both sections to 3D View as shown below. In the bottom half (green rectangle), press Shift + Z, then edit in the top half. See how it changes; pretty neat!

blender3.PNG

Round edge

blender_round.png

Click the edge, press Ctrl + B, and then increase the number of segments as shown in red.

Bending

blender2

Highlight the region to bend, then press Shift + W. Place the cursor (crosshair) at the pivot, and move the mouse around to bend the object.

Merging Vertices

You might be faced with a lot of meshes while working with Blender. Sometimes we want to merge vertices. The following shows 4 vertices being merged into 1.

Select all four vertices simultaneously, press Alt+M, and choose how you want to merge (in this example, we merge the vertices towards the crosshair cursor).

blender4.png

Image Processing #1: Using Blender

I am embarking on a project on image processing. The first task is to generate some images to help with model training, since actual training images are hard to come by. Let us use Blender to generate these images.

In this example, I will create the following directory. The folder image_save is empty; this is where we will save the rendered images. The folder somemodule shows how to import an external module we would like to include in the project. The file practice.blend is created through Blender; it starts off with a scene containing the following objects: Camera, Cube, Lamp, World and RenderLayers. Also, the __init__.py file is, as usual, the file required for Python to treat somemodule as a package. It can be left empty, or be used just like any other Python script.

blender
├── image_save
├── somemodule
│   ├── __init__.py
│   └── test_mod.py
├── practice.blend
└── mytest.py
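The layout above can also be scaffolded from plain Python. A minimal sketch using pathlib (the root path "blender" is an assumption; practice.blend itself is created from inside Blender):

```python
from pathlib import Path

root = Path("blender")  # assumed target directory
(root / "image_save").mkdir(parents=True, exist_ok=True)
(root / "somemodule").mkdir(exist_ok=True)
(root / "somemodule" / "__init__.py").touch()  # marks somemodule as a package
(root / "somemodule" / "test_mod.py").touch()
(root / "mytest.py").touch()
# practice.blend is saved from Blender itself, not created here
```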

What to expect? We show how to

  1. render an image of a cube using Python from inside Blender, then save the image.
  2. tilt the same cube, render the image, and save it as well.

mytest.py

print("test")

import bpy
obj = bpy.data.objects['Cube']
current_dir = "your\\directory\\to\\blender"

print(" - location = ", obj.location)
print(" - angle = ", obj.rotation_euler)
bpy.data.scenes['Scene'].render.filepath = current_dir + '\\image_save\\mytest\\ggwp.jpg'
bpy.ops.render.render(write_still=True)


obj.location[0] = 1.0
obj.rotation_euler[0] = 30  # note: rotation_euler is in radians, so this is 30 rad
print(" - location (after) = ", obj.location)
print(" - angle (after) = ", obj.rotation_euler)
bpy.data.scenes['Scene'].render.filepath = current_dir + '\\image_save\\mytest\\ggwp2.jpg'
bpy.ops.render.render(write_still=True)

test_mod.py

print("inside test_mod")

Now we are ready. Inside Blender, after creating the new file practice.blend, press Shift+F4 to access Blender’s internal Python console.

>>> import os, sys, importlib
>>> cur_dir = "your\\directory\\to\\blender"
>>> sys.path.append(cur_dir) # add current path
>>> import somemodule.test_mod
inside test_mod
>>> import mytest
test
 - location =  <Vector (0.0000, 0.0000, 0.0000)>
 - angle =  <Euler (x=0.0000, y=0.0000, z=0.0000), order='XYZ'>
 - location (after) =  <Vector (1.0000, 0.0000, 0.0000)>
 - angle (after) =  <Euler (x=30.0000, y=0.0000, z=0.0000), order='XYZ'>

and two images, ggwp.jpg and ggwp2.jpg, are created in image_save/mytest, as shown below

blender1

We have run the script by importing it, for example through import mytest. If we need to rerun the script, use instead

importlib.reload(mytest)

since import mytest will no longer work: Python caches imported modules and will not re-execute them on a second import.
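This import-then-reload dance can be wrapped in a small helper so that the same call works both the first time and on every rerun (the helper name is ours, not a Blender API):

```python
import importlib
import sys

def import_or_reload(name):
    """Import a module on the first call; reload (and re-execute) it on later calls."""
    if name in sys.modules:
        return importlib.reload(sys.modules[name])
    return importlib.import_module(name)

# In the Blender console, this replaces both `import mytest`
# and `importlib.reload(mytest)`:
# mytest = import_or_reload("mytest")
```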