Simulating a Working Solar Tracker With Blender, EEVEE, And OpenCV

Johan Schwind
Feb 26, 2024


TL;DR: Full project code and Blender scene on GitHub.

Solar trackers keep photovoltaic panels aligned with the sun's arc throughout the day, maximizing power output. They're only viable in niche applications, but I've always loved them for their sci-fi appeal. I'd wanted to build my own tracker for a long time, but I have too many hardware design projects on my hands to start another one. Then I thought: "Why not do this in Blender?" There's an obvious way to do it: model a tracker in 3D and rig it to a sun in the scene. Pretty straightforward.

But I was interested in whether it was possible to program a tracker in Python that uses computer vision to find the sun in the scene and adjusts accordingly. Such a tracking algorithm could be fine-tuned and tested inside a virtual world created in Blender, then applied to an actual hardware system in the real world.

Blender is ideally suited for this because all of its functionality is exposed through a Python API. It also ships with a real-time render engine (EEVEE) that we will use the way a video camera would be used in a real-world computer vision application.

Solar Tracker Principles

Solar trackers use motors or hydraulics to align solar panels with the changing position of the sun throughout the day. There are many different approaches, but a common one is to rotate a panel along two axes, one vertical and one horizontal. Combining both axes, the solar panel can be held perpendicular to the sun's incident rays.

Example of a two-axis motor system for solar tracking.

The 3D model used for this project was created by María Culman on GrabCAD and uses two slewing drives — ideal for our application, because we can easily control each axis through Blender’s Python API.

There are different ways to control the panel position, too. One approach is to hard-code the sun's arc (which changes throughout the year and with location) into the system. A more common approach is to use some kind of light-sensitive sensor to find the sun's position in the sky. Often, this sensor is a camera, so we'll use a camera object in Blender to simulate a physical real-world camera.

Using OpenCV in Blender

Blender ships with its own Python installation, so any packages you want to use inside Blender need to be installed into Blender's Python, not your system's Python. It took me a while to figure out the best way to do this. Below is a step-by-step guide for Windows.

First, find the path to Blender’s Python installation. In Blender’s Python console:
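
import sys
sys.exec_prefix  # one way to print the bundled Python's install path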

This is the path of your Blender Python installation.

Now launch PowerShell as an Administrator* and change to the directory found above. In my case:

cd 'C:\Program Files\Blender Foundation\Blender 4.0\4.0\python'

Ensure you have pip installed:

.\bin\python.exe -m ensurepip

Then install the desired Python package, for example OpenCV:

.\bin\python.exe -m pip install opencv-python
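
To sanity-check the install, this should print the OpenCV version:

.\bin\python.exe -c "import cv2; print(cv2.__version__)"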

*Running PowerShell as administrator can pose security risks. Never execute any scripts from untrusted sources in an admin shell.

Scene Setup

The Blender scene setup is fairly simple. It contains the 3D geometry for the tracker and panel, with the moving parts split into separate objects. These objects are parented to two "Empty" objects, one controlling rotation and one controlling tilt. Note that the tilt mechanism is a child of the rotation mechanism, so the tilt subsystem follows the larger rotating system.
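
In code, that parent-child relationship looks roughly like this (using the Empty names that appear in the scripts later on):

import bpy

# The tilt Empty is a child of the rotation Empty, so tilt happens
# within the rotating frame of the array.
bpy.data.objects['motor_tilt'].parent = bpy.data.objects['motor_rotate']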

The scene also contains a camera fixed perpendicular to the top of the solar array. We'll use the data from this camera for our OpenCV tracking algorithm.

The camera that provides data for our tracking algorithm.

Note: I use the Physical Starlight And Atmosphere add-on for this scene, which is fantastic for visuals. It isn't required for the tracking to work; a normal physical sky and sun light should also do the job.

Recursive Tracking Algorithm

To find and track the sun, we’ll use a recursive approach. Our recursive base case is reached when the sun is roughly in the center of our sensor or camera frame. If it’s not, we take the steps below:

  1. Take a camera image.
  2. Identify where the sun is relative to the solar panel.
  3. Nudge the solar panel toward that location along its rotation and/or tilt axes.
  4. While the array is not perpendicular to the sun (meaning the sun is not centered in the camera frame), start over from step 1.

We can do all of this directly in Blender’s scripting environment.

Take A Camera Image

Using EEVEE, we can get an (almost) real-time view of the sky from our scene's camera. To do this, we first have to set up the scene with a Viewer node in "Compositing". I've also added an exposure node, which ensures that the resulting image is not blown out and the sun is a nice bright spot in the sky.
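
If you prefer to wire the node graph up in code, here is a rough sketch. The node identifiers are from recent Blender releases (the Exposure node requires Blender 3.0 or newer), and the exposure value is just a starting point to tune:

import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Render Layers -> Exposure -> Viewer
render_layers = tree.nodes.new("CompositorNodeRLayers")
exposure = tree.nodes.new("CompositorNodeExposure")
exposure.inputs["Exposure"].default_value = -6.0  # darken the sky so only the sun stays bright
viewer = tree.nodes.new("CompositorNodeViewer")

tree.links.new(render_layers.outputs["Image"], exposure.inputs["Image"])
tree.links.new(exposure.outputs["Image"], viewer.inputs["Image"])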

We can now define a function to turn the pixels from the Viewer node into an array we can use in OpenCV:

# Assumes module-level imports: bpy, datetime, numpy as np, and PIL's Image.
def _update_sensor(self):
    """
    Function to update the camera image, convert it to an array, and save
    it to disk.
    """
    # Render image and get pixels
    self.scene.view_layers["ViewLayer"].update()  # Ensure the view layer is up to date
    bpy.context.scene.frame_set(self.frame_no)  # Make sure we're on the current frame
    bpy.ops.render.render(write_still=False)
    pixels = bpy.data.images['Viewer Node'].pixels

    # Get the dimensions of the image
    width = bpy.data.images['Viewer Node'].size[0]
    height = bpy.data.images['Viewer Node'].size[1]

    # Convert the pixels to a 1D NumPy array, then reshape and convert to 8-bit integer
    np_pixels = np.array(pixels)  # This creates a flat array of floats in [0, 1]
    np_pixels = np.clip(np_pixels * 255, 0, 255).astype(np.uint8)  # Convert to [0, 255] range
    np_pixels = np_pixels.reshape((height, width, 4))  # Reshape to (height, width, RGBA)
    np_pixels = np.flipud(np_pixels)  # Blender's origin is bottom-left; flip to top-left for OpenCV

    self.data = np_pixels

    # Create an image using Pillow and save it
    path = self.path + "\\view\\"
    filename = path + datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + ".png"
    img = Image.fromarray(np_pixels, 'RGBA')
    img.save(filename)

The Viewer node provides the rendered pixels as a flat list, so we have to perform a few NumPy operations to turn them into a three-dimensional array where the first two dimensions are height and width, and the third dimension is the number of channels (R, G, B, and alpha). Each entry in this array is a value between 0 and 255 for the respective channel. This is a standard representation of an image that we can use in OpenCV.

Identify Where the Sun Is

To find where the sun is, we can use cv2.minMaxLoc(), which returns (among other values) the (x, y) coordinate of the brightest pixel in the image, which is generally right around the sun. Even if the sun is not in the camera's frame, the atmosphere is generally brightest in the direction of the sun*.

We can then do some simple math and comparisons to determine whether we want to move the array up, down, left, or right. To prevent over-adjusting when the sun is very close to the center of the image, we also introduce a tolerance factor t: a pixel range within which the tracker simply asserts that it has centered the sun on the array.

def _find_bright_spot(self):
    """
    Function to find the sun in the camera image. This function creates the move command.
    """
    self._update_sensor()  # Get a new image before trying to find the bright spot
    self.image = cv2.cvtColor(self.data, cv2.COLOR_RGBA2BGR)
    self.gray = cv2.cvtColor(self.data, cv2.COLOR_RGBA2GRAY)
    (_, _, _, max_loc) = cv2.minMaxLoc(self.gray)
    cv2.circle(self.image, max_loc, 5, (14, 94, 233), 1)
    cv2.line(self.image, (self.center, 0), (self.center, self.res_x), (14, 94, 233), 1)
    cv2.line(self.image, (0, self.center), (self.res_x, self.center), (14, 94, 233), 1)

    # Save the image to disk
    path = self.path + "\\cv\\"
    filename = path + datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + ".png"
    cv2.imwrite(filename, self.image)

    # Produce the move command according to the sun's location in the image
    if max_loc[0] >= self.center:
        dir_x = "RIGHT"
    else:
        dir_x = "LEFT"
    if max_loc[1] >= self.center:
        dir_y = "DOWN"
    else:
        dir_y = "UP"

    if self.center - self.t < max_loc[0] < self.center + self.t:
        dir_x = "CENTER"

    if self.center - self.t < max_loc[1] < self.center + self.t:
        dir_y = "CENTER"

    print(dir_x, dir_y)
    return (dir_x, dir_y)

* This is a simplified approach. In reality there might be local bright spots in the atmosphere that do not point in the direction of the sun, such as clouds reflecting sunlight back at the camera.

Move the Solar Panels

We can now rotate the array by passing the movement command to our imaginary motors. To do this, I've created four helper functions that increase and decrease rotation and tilt by setting the rotation of the Empties in the scene:

def _increase_rotation(self):
    self.rotation += self.step
    bpy.data.objects['motor_rotate'].rotation_euler[2] = math.radians(self.rotation)
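
The other three helpers mirror this pattern. A sketch, assuming a self.tilt attribute that is tracked like self.rotation and a tilt Empty that rotates on its Y axis (matching the index=1 keyframe insert below):

def _decrease_rotation(self):
    self.rotation -= self.step
    bpy.data.objects['motor_rotate'].rotation_euler[2] = math.radians(self.rotation)

def _increase_tilt(self):
    self.tilt += self.step
    bpy.data.objects['motor_tilt'].rotation_euler[1] = math.radians(self.tilt)

def _decrease_tilt(self):
    self.tilt -= self.step
    bpy.data.objects['motor_tilt'].rotation_euler[1] = math.radians(self.tilt)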

These functions are then called by a driver function, iterate(), which is the entry point to our recursive algorithm:

def iterate(self):
    """
    Main driver function that moves the solar panel. This function gets called
    from the external loop to adjust the solar panel.
    """
    direction = self._find_bright_spot()
    self.dir_x = direction[0]
    self.dir_y = direction[1]

    if self.dir_x == "CENTER" and self.dir_y == "CENTER":
        # Base case - we've reached goal orientation
        return

    if self.dir_x == "RIGHT":
        self._decrease_rotation()
    if self.dir_x == "LEFT":
        self._increase_rotation()
    if self.dir_y == "DOWN":
        self._decrease_tilt()
    if self.dir_y == "UP":
        self._increase_tilt()

    # Generate keyframes
    self.frame_no += self.frame_step
    bpy.data.objects['motor_tilt'].keyframe_insert(
        data_path="rotation_euler",
        index=1,
        frame=self.frame_no
    )
    bpy.data.objects['motor_rotate'].keyframe_insert(
        data_path="rotation_euler",
        index=2,
        frame=self.frame_no
    )

This function repeatedly gets the latest sun direction and then decides whether the panel needs to move.
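
How you drive iterate() from the outside is up to you. A minimal sketch, where SunTracker is a hypothetical name for the class holding the methods above and the iteration cap is arbitrary:

tracker = SunTracker()  # hypothetical class wrapping the methods above
for _ in range(200):  # safety cap so the loop can't run forever
    tracker.iterate()
    if tracker.dir_x == "CENTER" and tracker.dir_y == "CENTER":
        break  # base case reached: the sun is centered in the frame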

Results

The final result looks like this. To create this animation, I generated keyframes via the script and rendered the animation separately. It is also possible to visualize the algorithm in the viewport in real time by using a timer event, which is particularly helpful during debugging. I've included an example of this in the GitHub repo.

A Note on bpy.context

bpy.context allows you to access the properties of the currently selected object. For example, bpy.context.object.rotation_euler[0] = 1.0 rotates the currently selected object around the X axis. This is convenient, but it relies on the object being selected, which can become an issue if you're setting the properties of different objects in a single script. I've found it better to always reference the object explicitly by name: for instance, bpy.data.objects['motor_tilt'] instead of bpy.context.object.
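
For illustration, the two styles side by side:

import bpy

# Context-dependent: acts on whatever object happens to be active.
bpy.context.object.rotation_euler[0] = 1.0

# Explicit: acts on a named object, regardless of selection.
bpy.data.objects['motor_tilt'].rotation_euler[0] = 1.0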

Conclusion

We’ve implemented a basic tracking algorithm. For a real world application, it would require a number of improvements that would allow it to deal with local bright spots, camera noise, and the mechanical limitations of the tracker itself. However, this simple algorithm demonstrates that near-real time computer vision is possible within Blender. The closed-loop nature of this virtual system and Blender’s ability to visualize complex 3D scenes opens up opportunities for research and development and computer vision and even machine learning.
