DEV Community

Yuan Gao


Advent of Code 2020: Day 17 using 3D/4D Convolution in TensorFlow in Python

Running through these quickly! Today's is another short one, because the solution looks a lot like Day 11's.

The Challenge Part 1

Link to challenge on Advent of Code 2020 website

This challenge is another cellular automaton question, asking you to run a cellular automaton for a few cycles. The rules are actually simpler than Day 11's, but you have to apply them in 3D.
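To make the rules concrete before reaching for TensorFlow, here is a minimal pure-NumPy sketch of one generation (a hypothetical `step` helper, not the code used below), which counts each cell's 26 neighbors by summing shifted copies of the grid:

```python
import numpy as np
from itertools import product

def step(grid):
    """One generation of the 3D automaton; grid is a boolean array."""
    # count the 26 neighbors of every cell by summing shifted copies of the grid
    neighbors = np.zeros(grid.shape, dtype=int)
    for offset in product((-1, 0, 1), repeat=3):
        if offset == (0, 0, 0):
            continue  # a cell is not its own neighbor
        neighbors += np.roll(grid, offset, axis=(0, 1, 2)).astype(int)
    # active survives with 2 or 3 neighbors; inactive activates with exactly 3
    return (grid & np.isin(neighbors, (2, 3))) | (~grid & (neighbors == 3))
```

Note that np.roll wraps around at the edges, so this is only valid while the active region stays away from the borders, which is fine for six cycles in a generously sized grid. On the puzzle's 3×3 worked example, six steps yield the stated 112 active cubes.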

Fortunately for us, our Day 11 solution was written in TensorFlow, and extending it to 3D is easy:

3D cellular automata in TensorFlow

I copy-pasted most of the Day 11 solution and adjusted it to add an extra dimension. I'm manually defining a fixed space for the automaton and dumping the initial state near the middle of it; it would be relatively easy to let the space grow outwards instead, but a fixed 40×40×40 volume is plenty for six cycles.

import numpy as np
import tensorflow as tf

data = np.loadtxt("input.txt", 'U', comments=None)
bool_data = data.view('U1') == '#'
bool_data = bool_data.reshape((1, data.size, -1))

zinit, yinit, xinit = bool_data.shape
zdim, ydim, xdim = 40, 40, 40

padding = tf.constant([[1, 1], [1, 1], [1, 1]])
neighbors_np = np.ones([3, 3, 3])
neighbors_np[1, 1, 1] = 0
conv_filt = tf.reshape(tf.cast(neighbors_np, tf.float32), [3, 3, 3, 1, 1])

ystart = (ydim-yinit)//2
yend = ystart+yinit
xstart = (xdim-xinit)//2
xend = xstart+xinit
zstart = zdim//2
zend = zstart+1
init_data_np = np.zeros([zdim, ydim, xdim], dtype=bool)
init_data_np[zstart:zend, ystart:yend, xstart:xend] = bool_data
init_data = tf.convert_to_tensor(init_data_np)

@tf.function
def generate(this_gen):
    padded = tf.pad(this_gen, padding)
    padded = tf.reshape(padded, [1, zdim+2, ydim+2, xdim+2, 1]) # add batch and channel dims needed by tf.nn.convolution

    convolved = tf.nn.convolution(tf.cast(padded, tf.float32), conv_filt)
    neighbors = tf.reshape(convolved, [zdim, ydim, xdim])

    three_neighbors = tf.math.equal(neighbors, 3)
    two_or_three_neighbors = tf.math.logical_or(tf.math.equal(neighbors, 2), three_neighbors)

    next_gen = tf.math.logical_or(this_gen, three_neighbors)
    next_gen = tf.math.logical_and(next_gen, two_or_three_neighbors)
    return next_gen

generation = init_data
for _ in range(6):
    generation = generate(generation)

print("total", tf.math.reduce_sum(tf.cast(generation, tf.int32)))

This is the full solution; the meat of it is the generate() function, which is almost identical to last time's, just implementing this puzzle's specific cellular-automaton rules.
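Those two logical ops encode the rules compactly: writing the update as (alive OR exactly-three) AND (two-or-three) is equivalent to the more literal "active survives on 2 or 3, inactive activates on exactly 3". A quick truth-table check with a hypothetical helper, just to convince ourselves:

```python
def next_state(alive, n):
    # fused form used in generate(): (alive OR exactly-3) AND (2-or-3 neighbors)
    return (alive or n == 3) and n in (2, 3)

# compare against the literal rules for every possible case
for alive in (False, True):
    for n in range(27):  # a cell has up to 26 neighbors in 3D
        literal = (alive and n in (2, 3)) or (not alive and n == 3)
        assert next_state(alive, n) == literal
```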

The Challenge Part 2

The second part of the challenge is exactly the same as the first, but extends the automaton to run in four dimensions.

4D Convolution in TensorFlow

This presents a slight challenge, because TensorFlow doesn't ship a 4D convolution: tf.nn.convolution handles at most three spatial dimensions. Fortunately, someone has implemented a workaround that assembles a 4D convolution from a stack of 3D convolutions, but it was written for TensorFlow 1, while we're on the newer TensorFlow 2, so I took some time to adapt it to our use case.
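The trick in the adapted code is the same one you would use to build a 2D convolution out of 1D convolutions, just one dimension higher: convolve each kernel frame with each input frame and sum the partial results into the matching output frame. A minimal NumPy sketch of the 2D analogue (illustrative only, not part of the solution):

```python
import numpy as np

img = np.arange(25.0).reshape(5, 5)
kern = np.ones((3, 3))

# direct 2D cross-correlation with "valid" padding (what tf.nn.convolution computes)
direct = np.array([[(img[r:r+3, c:c+3] * kern).sum() for c in range(3)]
                   for r in range(3)])

# decomposed: slide each kernel row i over each input row j, summing the
# 1D results into output row j - i (mirroring conv4d's out_frame bookkeeping)
decomposed = np.zeros((3, 3))
for i in range(3):        # kernel row
    for j in range(5):    # input row
        out = j - i
        if 0 <= out < 3:
            # reverse the kernel row so np.convolve performs correlation
            decomposed[out] += np.convolve(img[j], kern[i][::-1], mode="valid")

assert np.allclose(direct, decomposed)
```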

The full code is therefore:

import numpy as np
import tensorflow as tf

data = np.loadtxt("input.txt", 'U', comments=None)
bool_data = data.view('U1') == '#'
bool_data = bool_data.reshape((1, 1, data.size, -1))

winit, zinit, yinit, xinit = bool_data.shape
wdim, zdim, ydim, xdim = 40, 40, 40, 40

padding = tf.constant([[1, 1], [1, 1], [1, 1], [1, 1]])
neighbors_np = np.ones([3, 3, 3, 3])
neighbors_np[1, 1, 1, 1] = 0
conv_filt = tf.reshape(tf.cast(neighbors_np, tf.float32), [3, 3, 3, 3, 1, 1])

init_data_np = np.zeros([wdim, zdim, ydim, xdim], dtype=bool)
ystart = (ydim-yinit)//2
yend = ystart+yinit
xstart = (xdim-xinit)//2
xend = xstart+xinit
wstart = zstart = zdim//2
wend = zend = zstart+1
init_data_np[wstart:wend, zstart:zend, ystart:yend, xstart:xend] = bool_data
init_data = tf.convert_to_tensor(init_data_np)

@tf.function
def conv4d(data, conv_filt):
    # input, kernel, and output sizes
    (b, wi, zi, yi, xi, c) = data.shape.as_list()
    (wk, zk, yk, xk, ik, ok) = conv_filt.shape.as_list()

    # output size and tensor
    wo = wi - wk + 1
    results = [ None ]*wo

    # convolve each kernel frame i with each input frame j
    for i in range(wk):
        for j in range(wi):

            # add results to this output frame
            out_frame = j - (i - wk//2) - (wi - wo)//2
            if out_frame < 0 or out_frame >= wo:
                continue

            # convolve input frame j with kernel frame i
            frame_conv3d = tf.nn.convolution(tf.reshape(data[:,:,j,:,:], (b, zi, yi, xi, c)), conv_filt[:,:,:,i])

            if results[out_frame] is None:
                results[out_frame] = frame_conv3d
            else:
                results[out_frame] += frame_conv3d

    return tf.stack(results, axis=1)  # stack output frames back into the w dimension (batch, w, z, y, x, channels)

@tf.function
def generate(this_gen):
    padded = tf.pad(this_gen, padding)
    padded = tf.reshape(padded, [1, wdim+2, zdim+2, ydim+2, xdim+2, 1]) # add batch and channel dims needed by conv4d

    convolved = conv4d(tf.cast(padded, tf.float32), conv_filt)
    neighbors = tf.reshape(convolved, [wdim, zdim, ydim, xdim])

    three_neighbors = tf.math.equal(neighbors, 3)
    two_or_three_neighbors = tf.math.logical_or(tf.math.equal(neighbors, 2), three_neighbors)

    next_gen = tf.math.logical_or(this_gen, three_neighbors)
    next_gen = tf.math.logical_and(next_gen, two_or_three_neighbors)
    return next_gen

generation = init_data
for _ in range(6):
    generation = generate(generation)

print("total", tf.math.reduce_sum(tf.cast(generation, tf.int32)))

Thanks to the power of TensorFlow, we didn't really need to think too hard about the implementation of this cellular automaton: instead of iterating over every cell across all the dimensions and worrying about off-by-one errors, we can paint in broad strokes with whole-tensor operations.

Onwards!
