# Manipulating speed in motion clouds

An essential dimension of motion is speed. However, this notion is prone to confusion, as the speed to be measured can be relative to different objects: is it the speed of the pixels, or the speed of the visual objects? We try to distinguish these two in this post.

In [ ]:
%matplotlib inline
import numpy as np
np.set_printoptions(precision=3, suppress=True)
import pylab
import matplotlib.pyplot as plt


In standard Motion Clouds, speed is that of the pixels of the textons that produce the texture. It is thus defined as a (normal) distribution of probability around the speed plane that defines the mean speed:
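The idea can be sketched in pure numpy: a Gaussian around the plane $f_t + V_X f_x + V_Y f_y = 0$, with a width proportional to the radial frequency. This is only a minimal sketch of what `mc.envelope_speed` computes (the library also handles the `B_V=0` limit and normalization):

```python
import numpy as np

def speed_envelope(fx, fy, ft, V_X=1., V_Y=0., B_V=.5):
    """Gaussian envelope around the speed plane ft + V_X*fx + V_Y*fy = 0
    (a schematic sketch of what mc.envelope_speed computes)."""
    f_radius = np.sqrt(fx**2 + fy**2)
    f_radius[f_radius == 0.] = 1e-12  # avoid a division by zero at the origin
    return np.exp(-.5 * (ft + V_X*fx + V_Y*fy)**2 / (B_V * f_radius)**2)

# a toy frequency grid, in cycles per pixel (space) and per frame (time)
f = np.fft.fftshift(np.fft.fftfreq(32))
fx, fy, ft = np.meshgrid(f, f, f, indexing='ij')
env = speed_envelope(fx, fy, ft, V_X=1., B_V=.5)
# the envelope equals 1 exactly on the plane ft = -V_X*fx and decays around it
```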

In [ ]:
!rm -fr ../files/physical_speed_parallax*
import os
name = 'physical_speed'
DOWNSCALE = 1

import MotionClouds as mc
N_X, N_Y, N_frame = mc.N_X//DOWNSCALE, mc.N_Y//DOWNSCALE, mc.N_frame//DOWNSCALE  # integer division to keep integer dimensions

fx, fy, ft = mc.get_grids(N_X, N_Y, N_frame)

name_ = name + '_vanilla'
z = mc.envelope_gabor(fx, fy, ft)
mc.figures(z, name_)
mc.in_show_video(name_)


Another way to consider speed is that of visual objects. Imagine looking through the window of a train at a field of similarly sized (Gabor-shaped) bushes on an infinite plane. The bushes that are closer appear bigger on the retina, and their perceived speed is higher than that of the smaller bushes near the horizon. This linear relation between size and speed is called motion parallax (credit: http://www.rhsmpsychology.com/Handouts/monocular_cues_IV.htm).

In [ ]:
from IPython.display import Image
Image('http://www.rhsmpsychology.com/images/monocular_IV.jpg')

Out[ ]:

Such a pattern of motion distribution is highly prototypical in natural settings, and it is easy to generate a texture having such a profile but with textons that are uniformly spaced, similar to the painting "Golconde" by Magritte (By The Shimon Yanowitz Website, Fair use, https://en.wikipedia.org/w/index.php?curid=3773027):

In [ ]:
Image('https://upload.wikimedia.org/wikipedia/en/7/71/Golconde.jpg')

Out[ ]:

More specifically, the relation reads: $$\frac{1}{f_0} \propto \frac{d\alpha}{d\theta}$$

where $f_0$ is the central scale (spatial frequency) of the visual object and $\frac{d\alpha}{d\theta}$ is the resulting retinal speed. See for instance the paper "Modeling depth from motion parallax with the motion/pursuit ratio" by Mark Nawrot et al., Front. Psychol. (06 October 2014, https://doi.org/10.3389/fpsyg.2014.01103):
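A toy computation illustrates this inverse relation (all quantities and the `0.1` scale factor are arbitrary, hypothetical values for illustration): for objects on a ground plane, retinal speed falls off with distance while retinal spatial frequency grows with it, so their product stays constant.

```python
import numpy as np

# hypothetical set of object distances (arbitrary units)
distance = np.array([1., 2., 4., 8.])
V_observer = 1.0  # observer (train) speed, arbitrary units

# the retinal (angular) speed of a ground object falls off as 1/distance...
retinal_speed = V_observer / distance
# ...while its retinal spatial frequency grows with distance
sf_0 = 0.1 * distance  # 0.1 is an arbitrary scale factor

# the product is constant: retinal speed is inversely proportional to sf_0
print(retinal_speed * sf_0)
```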

In [ ]:
Image('http://www.frontiersin.org/files/Articles/104347/fpsyg-05-01103-r2/image_m/fpsyg-05-01103-g001.jpg')

Out[ ]:

For this, we need to modify the way we define the speed plane, somewhat like what we did with gravity waves. It amounts to convolving a vanilla Motion Cloud with the joint relation between $f_0$ and $V$ defined above:

In [ ]:
mc.B_V

Out[ ]:
0.5
In [ ]:
def envelope_physical_speed(fx, fy, ft, K=1., V_X=mc.V_X, V_Y=mc.V_Y,
                            B_V=mc.B_V, sf_0=mc.sf_0, B_sf=mc.B_sf, loggabor=mc.loggabor,
                            theta=mc.theta, B_theta=mc.B_theta, alpha=mc.alpha):
    """
    Physical speed envelope:
    selects a Gaussian perpendicular to the plane corresponding to the speed (V_X, V_Y), with some thickness B_V.

    """
    envelope = mc.envelope_color(fx, fy, ft, alpha=alpha)
    envelope *= mc.envelope_orientation(fx, fy, ft, theta=theta, B_theta=B_theta)
    envelope *= mc.envelope_radial(fx, fy, ft, sf_0=sf_0, B_sf=B_sf, loggabor=loggabor)

    f_radius = mc.frequency_radius(fx, fy, ft)  # radial frequency (was left undefined above)
    envelope *= np.exp(-.5*((ft + sf_0 * V_X - (f_radius - sf_0) / K)**2/(B_V*sf_0)**2))
    # hack to remove the other half of the envelope - use the angle instead to allow for higher B_theta values
    envelope *= fx > 0

    return envelope

name_ = name + '_parallax'
K = 1.
opts = dict(V_X=.5, sf_0=.2, B_V=.2)
z = envelope_physical_speed(fx, fy, ft, K=K, **opts)
mc.figures(z, name_)
mc.in_show_video(name_)


Let's explore what happens for different values of K:

In [ ]:
for K_ in 1./np.linspace(0.6, 1.4, 7)*K:
    name_ = name + '_parallax_K_' + str(K_).replace('.', '_')
    z = envelope_physical_speed(fx, fy, ft, K=K_, **opts)
    mc.figures(z, name_)
    mc.in_show_video(name_)


# Static motion clouds?

Motion Clouds were originally defined to generate parameterized moving textures, but the library is also useful for generating purely spatial textures. Here I describe a solution to generate just one static frame.
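The principle is easy to sketch in pure numpy: fill an isotropic log-Gabor-like envelope in the 2D Fourier plane with random phases and invert the transform. This is a simplified sketch (the function name and parameters are illustrative, not the library's exact implementation):

```python
import numpy as np

def static_cloud(N_X=256, N_Y=256, sf_0=0.125, B_sf=0.1, seed=42):
    """One static random-phase texture frame: an isotropic log-Gabor-like
    envelope in the 2D Fourier plane, filled with uniformly random phases.
    A pure-numpy sketch, not the library's exact implementation."""
    fx, fy = np.meshgrid(np.fft.fftfreq(N_X), np.fft.fftfreq(N_Y), indexing='ij')
    f_radius = np.sqrt(fx**2 + fy**2)
    f_radius[0, 0] = np.inf  # makes the envelope zero at DC
    env = np.exp(-.5 * np.log(f_radius / sf_0)**2 / np.log(1. + B_sf / sf_0)**2)
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random((N_X, N_Y)))
    return np.fft.ifft2(env * phase).real

frame = static_cloud()
# frame has zero mean (no DC component) and the prescribed radial profile
```

With the library itself, one can of course also simply keep a single frame of any generated movie, e.g. `mc.random_cloud(z)[:, :, 0]`.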

# A bit more fun with gravity waves


Motion Clouds were originally defined to provide a simple parameterization for textures; thus we used a simple unimodal, normal distribution (on the log-radial frequency axis, to be precise). But the larger set of Random Phase Textures provides some interesting examples, and some of them can even be fun! This is the case for this simulation of the waves you may observe on the surface of the ocean.

The main features of (surface) gravity waves are:

1. longer waves travel faster (tsunamis are fast and global, ripples are slow and local): in deep water, speed grows as the square root of the wavelength,
2. phase speed (following a wave's crest) is twice the group speed (following a group of waves).
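Both features follow from the deep-water dispersion relation $\omega = \sqrt{g k}$, which gives a phase speed $c = \omega/k = \sqrt{g/k}$ and a group speed $d\omega/dk = \frac 1 2 \sqrt{g/k}$. A quick numerical check:

```python
import numpy as np

g = 9.81  # gravity (m/s^2)
k = np.linspace(0.01, 10., 100)  # wavenumbers (rad/m)

omega = np.sqrt(g * k)           # deep-water dispersion relation
c_phase = omega / k              # phase speed sqrt(g/k): grows with wavelength
c_group = 0.5 * np.sqrt(g / k)   # group speed d(omega)/dk

# phase speed is exactly twice the group speed
print(np.allclose(c_phase, 2 * c_group))
```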

# IRM clouds

A feature of MotionClouds is the ability to precisely tune the precision of information along the principal axes. One tuning which is particularly relevant for the primary visual cortical area of primates (area V1) is that of the orientation mean and bandwidth.
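Schematically, such tuning amounts to a Gaussian envelope on the orientation of the spatial frequency vector. The sketch below is a simplified, pure-numpy stand-in for `mc.envelope_orientation` (the helper name and wrapping are illustrative, assuming a plain angular Gaussian):

```python
import numpy as np

def orientation_envelope(fx, fy, theta=0., B_theta=np.pi/16):
    """Gaussian envelope on the orientation of the spatial frequency vector.
    A simplified sketch of mc.envelope_orientation, for illustration."""
    angle = np.arctan2(fy, fx)
    # orientation is pi-periodic: wrap the angular difference to (-pi/2, pi/2]
    d = np.mod(angle - theta + np.pi/2, np.pi) - np.pi/2
    return np.exp(-.5 * d**2 / B_theta**2)

f = np.fft.fftshift(np.fft.fftfreq(64))
fx, fy = np.meshgrid(f, f, indexing='ij')
env = orientation_envelope(fx, fy, theta=0., B_theta=np.pi/8)
# a small B_theta selects a narrow wedge of orientations;
# a large one approaches an isotropic texture
```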

# Mirror clouds

In [1]:
%matplotlib inline
import numpy as np
np.set_printoptions(precision=3, suppress=True)
import pylab
import matplotlib.pyplot as plt


An essential property of natural images is that they may contain order at large scales, such as symmetries. Let's try to generate some textures with an axial (arbitrarily, vertical) symmetry, taking advantage of the different properties of random phase textures. First, we create a "vanilla" texture:

In [2]:
import os
name = 'mirror'
DOWNSCALE = 1
V_X = .5
#!rm -fr ../files/mirror*

In [3]:
import MotionClouds as mc
N_X, N_Y, N_frame = mc.N_X//DOWNSCALE, mc.N_Y//DOWNSCALE, mc.N_frame//DOWNSCALE  # integer division to keep integer dimensions

fx, fy, ft = mc.get_grids(N_X, N_Y, N_frame)
z = mc.envelope_gabor(fx, fy, ft, V_X=V_X)

name_ = name + '_nomirror'
movie = mc.rectif(mc.random_cloud(z))
mc.anim_save(movie, os.path.join(mc.figpath, name_))
mc.in_show_video(name_)


The first thing to try is to use the periodicity of MotionClouds: since they are generated in Fourier space, they are periodic in space (the width of the display) and in time (no gap is perceived when playing them in a loop, as we do here). So we may take the above texture, mirror it along one axis, and concatenate the two:

In [4]:
name_ = name + '_horizontal'
mirrored_movie = np.vstack((movie, movie[::-1, :, :]))
mc.anim_save(mirrored_movie, os.path.join(mc.figpath, name_))
mc.in_show_video(name_)


Mathematically, taking the sum of these two parts $$I_M(x, y, t) = \frac 1 2 \cdot (I(x, y, t) + I(x, -y, t))$$

generates a symmetric pattern: $$I_M(x, -y, t) = \frac 1 2 \cdot (I(x, -y, t) + I(x, y, t)) = I_M(x, y, t)$$

The vertical axis of symmetry is around $y=0$ and (from the periodicity $I(x, y+N_y, t)=I(x, y, t)$) around $y=N_y/2$: $$I_M(x, -y-N_y/2, t) = \frac 1 2 \cdot (I(x, -y-N_y/2, t) + I(x, -(-y-N_y/2), t)) = \frac 1 2 \cdot (I(x, -N_y/2-y, t) + I(x, N_y/2+y, t)) = I_M(x, y-N_y/2, t)$$

Another advantage of these clouds is that they are random phase textures: the different features are not coherent. While summing these two textures (the original and the mirrored one) would produce an interference pattern had we used gratings, it is different here:

In [5]:
name_ = name + '_fuse'
mirrored_movie = .5*(movie + movie[::-1, :, :])
mc.anim_save(mirrored_movie, os.path.join(mc.figpath, name_))
mc.in_show_video(name_)


What happens now if we add more symmetries?

In general, it is possible to make this axis of symmetry at every coordinate $y=Y$. Let's call this operation $\mathcal{M}^Y$, such that : $$\mathcal{M}^Y(I)(x, y-Y, t) = \frac 1 2 \cdot (I(x, y-Y, t) + I(x, -y-Y, t))$$ It follows: $$\mathcal{M}^Y(I)(x, y, t) = \frac 1 2 \cdot (I(x, y, t) + I(x, -y-2Y, t))$$

In particular, in the example above, $I_M = \mathcal{M}^0(I)$.

Interestingly, the Fourier spectrum of such a transform is dual; in particular: $$\mathcal{F}(\mathcal{M}^0(I))(f_x, f_y, f_t) = \mathcal{F}(\frac 1 2 \cdot (I(x, y, t) + I(x, -y, t)))$$ thus $$\mathcal{F}(\mathcal{M}^0(I))(f_x, f_y, f_t) = \frac 1 2 \cdot (\mathcal{F}(I)(f_x, f_y, f_t) + \mathcal{F}(I)(f_x, -f_y, f_t))$$ This means that these "mirrored" motion clouds are still random phase textures, but with phases bound to a mirror symmetry with respect to the central plane (here, $(f_x, f_t)$).
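This duality is easy to check numerically on a 1D signal. Note that the periodic mirror $s[(-n) \bmod N]$, with its axis at $n=0$, differs by one sample from the `movie[::-1]` flip used in the code here, but the principle is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(64)

# periodic mirror m[n] = s[(-n) % N]: the axis of symmetry sits at n = 0
s_mirror = np.roll(s[::-1], 1)
m = 0.5 * (s + s_mirror)

S, M = np.fft.fft(s), np.fft.fft(m)
# mirroring in space mirrors the spectrum: F(m)[k] = (F(s)[k] + F(s)[-k]) / 2
assert np.allclose(M, 0.5 * (S + np.roll(S[::-1], 1)))
```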

For instance, let's add it at $N_y/4$ by creating $\mathcal{M}^{N_y/4}(I)$:

In [6]:
def mirror(movie, Y=0):
    # translate the axis of symmetry to the origin
    movie = np.roll(movie, -Y, axis=0)
    mirrored_movie = .5*(movie + movie[::-1, :, :])
    # translate back in the other direction
    return np.roll(mirrored_movie, Y, axis=0)

name_ = name + '_fuse_quarter'
mirrored_movie = mirror(movie, Y=int(N_Y/4))
mc.anim_save(mirrored_movie, os.path.join(mc.figpath, name_))
mc.in_show_video(name_)


Note that in the particular case of Motion Clouds (and everything created in the Fourier domain in general), the stimulus is periodic in time and space. Due to this, we have $I(x, y + N_Y, t)=I(x, y, t)$ and thus $$\mathcal{M}^Y(I)(x, y-Y+N_Y/2, t) = \frac 1 2 \cdot (I(x, y-Y+N_Y/2, t) + I(x, -y-Y+N_Y/2- N_Y, t))= \mathcal{M}^Y(I)(x, -y-Y+N_Y/2, t)$$

That is, $Y$ is an axis of symmetry for $\mathcal{M}^Y(I)$, but $Y+N_Y/2$ is too. For $Y=0$, there is an axis of symmetry at $y=0$ and one at $y=N_y/2$.

Now we may pipe both operations: $$I_{MM} = \mathcal{M}^{N_y/4}(\mathcal{M}^0(I))$$

We check that: $$I_{MM}(-y) = \mathcal{M}^{N_y/4}(\mathcal{M}^0(I)(-y)) = \mathcal{M}^{N_y/4}(\mathcal{M}^0(I)(y)) = I_{MM}(y)$$ and $$I_{MM}(-y-N_y/4) = \mathcal{M}^{N_y/4}(\frac 1 2 (I(y-N_y/4)+I(-y+N_y/4))) = \frac 1 4 (I(y-N_y/4)+I(-y-N_y/4)+I(y+N_y/4)+I(-y+N_y/4)) = I_{MM}(y-N_y/4)$$ The stimulus is thus symmetric at $y \in \{ 0, N_y/4, N_y/2, 3N_y/4 \}$.

In [7]:
name_ = name + '_fuse_double'

mirrored_movie = mirror(mirror(movie, Y=0), Y=int(N_Y/4))
mc.anim_save(mirrored_movie, os.path.join(mc.figpath, name_))
mc.in_show_video(name_)


You may have noticed that the contrast has decreased: that is expected, as we are summing textures and making the random phase texture more coherent. It can easily be shown that if we repeat this procedure again and again, so as to be symmetric at every single coordinate $y$, we end up with a stimulus that is constant along the $y$ coordinate (though not along $x$ or $t$).
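This loss of contrast is easy to check on a static 2D array, using a `mirror2d` helper analogous to the `mirror()` function above (hypothetical, for illustration): each mirroring averages more rows together, so the standard deviation shrinks.

```python
import numpy as np

def mirror2d(im, Y=0):
    """Symmetrize a 2D array about row Y (periodic convention; a static
    analogue of the mirror() function used for the movies)."""
    im = np.roll(im, -Y, axis=0)
    im = 0.5 * (im + im[::-1, :])
    return np.roll(im, Y, axis=0)

rng = np.random.default_rng(1)
im = rng.standard_normal((64, 64))

# each mirroring averages more rows together, so the contrast (std) shrinks
im1 = mirror2d(im, Y=0)
im2 = mirror2d(im1, Y=16)
im3 = mirror2d(im2, Y=8)
print(im.std(), im1.std(), im2.std(), im3.std())
```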

# Speed distributions

In [1]:
%matplotlib inline
import numpy as np
np.set_printoptions(precision=3, suppress=True)
import pylab
import matplotlib.pyplot as plt
#!rm -fr ../files/speed*

In [2]:
import MotionClouds as mc
name = 'noisy-speed'
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
print(mc.envelope_speed.__doc__)

Returns the speed envelope:
selects the plane corresponding to the speed (V_X, V_Y) with some bandwidth B_V.

* (V_X, V_Y) = (0, 1) is downward and (V_X, V_Y) = (1, 0) is rightward in the movie.
* A speed of V_X=1 corresponds to an average displacement of 1/N_X per frame.
To achieve one spatial period in one temporal period, you should scale by
V_scale = N_X/float(N_frame)
If N_X=N_Y=N_frame and V=1, then it is one spatial period in one temporal
period. It can be seen along the diagonal in the fx-ft face of the MC cube.

A special case is used when B_V=0, where the fx-ft plane is used as
the speed plane: in that case it is desirable to set (V_X, V_Y) to (0, 0)
to avoid aliasing problems.

Run the 'test_speed' notebook to explore the speed parameters, see
http://motionclouds.invibe.net/posts/testing-speed.html


In [3]:
# explore parameters
for B_V in [0.0, 0.01, 0.1, 1.0, 10.0]:
    name_ = name + '-B_V-' + str(B_V).replace('.', '_')
    z = mc.envelope_gabor(fx, fy, ft, V_X=0, B_V=B_V)
    mc.figures(z, name_)
    mc.in_show_video(name_)


# Smooth transition between MCs

In [1]:
"""
A smooth transition while changing parameters

(c) Laurent Perrinet - INT/CNRS

"""

import MotionClouds as mc
import numpy as np
import os

name = 'smooth'

#initialize
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)

name_ = mc.figpath + name

seed = 123456
B_sf_ = [0.025, 0.05, 0.1, 0.2, 0.4, 0.2, 0.1, 0.05]
im = np.empty(shape=(mc.N_X, mc.N_Y, 0))
name_ = name + '-B_sf'
for i_sf, B_sf in enumerate(B_sf_):
    im_new = mc.random_cloud(mc.envelope_gabor(fx, fy, ft, B_sf=B_sf), seed=seed)
    im = np.concatenate((im, im_new), axis=-1)

mc.anim_save(mc.rectif(im), os.path.join(mc.figpath, name_))
mc.in_show_video(name_)

In [2]:
name_ += '_smooth'
smooth = (ft - ft.min())/(ft.max() - ft.min()) # smoothly progress from 0. to 1.
N = len(B_sf_)
im = np.empty(shape=(mc.N_X, mc.N_Y, 0))
for i_sf, B_sf in enumerate(B_sf_):
    im_old = mc.random_cloud(mc.envelope_gabor(fx, fy, ft, B_sf=B_sf), seed=seed)
    im_new = mc.random_cloud(mc.envelope_gabor(fx, fy, ft, B_sf=B_sf_[(i_sf+1) % N]), seed=seed)
    im = np.concatenate((im, (1.-smooth)*im_old + smooth*im_new), axis=-1)

mc.anim_save(mc.rectif(im), os.path.join(mc.figpath, name_))
mc.in_show_video(name_)


# More is not always better

## MotionClouds

MotionClouds are random dynamic stimuli optimized to study motion perception.

Notably, this method was used in the following paper:

• Claudio Simoncini, Laurent U. Perrinet, Anna Montagnini, Pascal Mamassian, Guillaume S. Masson. More is not always better: dissociation between perception and action explained by adaptive gain control. Nature Neuroscience, 2012.

In this notebook, we describe the scripts used to generate such stimuli.

# The Vancouver set

I have heard that Vancouver can get foggy and cloudy in the winter. Here, I provide some examples of realistic simulations of that...

This set of stimuli was used in the following poster presented at VSS:


@article{Kreyenmeier2016,
    author = {Kreyenmeier, Philipp and Fooken, Jolande and Spering, Miriam},
    doi = {10.1167/16.12.457},
    issn = {1534-7362},
    journal = {Journal of Vision},
    month = {sep},
    number = {12},
    pages = {457},
    publisher = {The Association for Research in Vision and Ophthalmology},
    title = {{Similar effects of visual context dynamics on eye and hand movements}},
    url = {http://jov.arvojournals.org/article.aspx?doi=10.1167/16.12.457},
    volume = {16},
    year = {2016}
}

# An optic flow with Motion Clouds

## Horizon

Script done in collaboration with Jean Spezia.

TODO: describe what we do here...

In [1]:
import os
import numpy as np
import MotionClouds as mc
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
seed = 1234
size = 5
N_X, N_Y, N_frame = 2**size, 2**size, 128

In [2]:
fx, fy, ft = mc.get_grids(N_X, N_Y, N_frame)

In [3]:
N_orient = 9
hzn1 = np.zeros((N_orient*N_X, N_orient*N_X, N_frame))
for i, x_i in enumerate(np.linspace(-1, 1., N_orient)):
    for j, x_j in enumerate(np.linspace(-1, 1., N_orient)):
        V_X = 2 * x_i / (1 + x_i**2)
        V_Y = 2 * x_j / (1 + x_j**2)
        #f_0 = ...
        # theta = np.arctan2(V_Y, V_X)
        env = mc.envelope_gabor(fx, fy, ft, V_X=V_X, V_Y=V_Y, B_theta=np.inf, B_sf=np.inf)
        speed2 = mc.random_cloud(env, seed=seed)
        hzn1[i*N_X:(i+1)*N_X, j*N_Y:(j+1)*N_Y, :] = speed2

name = 'optic-flow'
mc.anim_save(mc.rectif(hzn1, contrast=.99), os.path.join(mc.figpath, name))
mc.in_show_video(name)

In [4]:
N_orient = 9
hzn2 = np.zeros((N_orient*N_X, N_orient*N_X, N_frame))
i = 0
j = 0
while (i != 9):
    while (j != 9):
        hzn2[i*N_X:(i+1)*N_X, j*N_Y:(j+1)*N_Y, :] = speed2
        j += 1
    j = 0
    i += 1

V_X = 0.5
V_Y = 0.0
i = 3
j = 0
while (i != -1):
    env = mc.envelope_gabor(fx, fy, ft, V_X=-V_X, V_Y=V_Y, B_theta=np.inf, B_sf=np.inf)
    speed = mc.random_cloud(env, seed=seed)
    env = mc.envelope_gabor(fx, fy, ft, V_X=V_X, V_Y=V_Y, B_theta=np.inf, B_sf=np.inf)
    speed2 = mc.random_cloud(env, seed=seed)
    while (j != 9):
        hzn2[i*N_X:(i+1)*N_X, j*N_Y:(j+1)*N_Y, :] = speed
        hzn2[(8-i)*N_X:(9-i)*N_X, j*N_Y:(j+1)*N_Y, :] = speed2
        j += 1
    j = 0
    V_X = V_X + V_X*1.5
    V_Y = V_Y + V_Y*1.5
    i += -1

name = 'Horizon'
mc.anim_save(mc.rectif(hzn2, contrast=.99), os.path.join(mc.figpath, name))
mc.in_show_video(name)

In [5]:
hzn = np.zeros((N_orient*N_X, N_orient*N_X, N_frame))
hzn[i*N_X:(i+1)*N_X, j*N_Y:(j+1)*N_Y, :].shape

Out[5]:
(0, 32, 128)
In [6]:
hzn = hzn1 + hzn2
dest = 'flow'
mc.anim_save(mc.rectif(hzn, contrast=.99), os.path.join(mc.figpath, dest))
mc.in_show_video(dest)