Posts about motion-detection

Smooth transition between Motion Clouds

In [1]:
"""
A smooth transition while changing parameters

(c) Laurent Perrinet - INT/CNRS

"""

import MotionClouds as mc
import numpy as np
import os

name = 'smooth'

# initialize the frequency grids
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)

seed = 123456
B_sf_ = [0.025, 0.05, 0.1, 0.2, 0.4, 0.2, 0.1, 0.05]
im = np.empty(shape=(mc.N_X, mc.N_Y, 0))
name_ = name + '-B_sf'
for i_sf, B_sf in enumerate(B_sf_):
    # one cloud per spatial-frequency bandwidth, appended along the time axis
    im_new = mc.random_cloud(mc.envelope_gabor(fx, fy, ft, B_sf=B_sf), seed=seed)
    im = np.concatenate((im, im_new), axis=-1)

mc.anim_save(mc.rectif(im), os.path.join(mc.figpath, name_))
mc.in_show_video(name_)
Concatenating the segments directly produces an abrupt jump at each boundary. To avoid this, we can cross-fade each cloud into the next over the course of its segment:

In [2]:
name_ += '_smooth'
smooth = (ft - ft.min())/(ft.max() - ft.min()) # ramps linearly from 0. to 1. along the time axis
N = len(B_sf_)
im = np.empty(shape=(mc.N_X, mc.N_Y, 0))
for i_sf, B_sf in enumerate(B_sf_):
    # blend each cloud with the next one (wrapping around at the end of the list)
    im_old = mc.random_cloud(mc.envelope_gabor(fx, fy, ft, B_sf=B_sf), seed=seed)
    im_new = mc.random_cloud(mc.envelope_gabor(fx, fy, ft, B_sf=B_sf_[(i_sf+1) % N]), seed=seed)
    im = np.concatenate((im, (1.-smooth)*im_old + smooth*im_new), axis=-1)

mc.anim_save(mc.rectif(im), os.path.join(mc.figpath, name_))
mc.in_show_video(name_)
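The linear ramp blends the clouds, but its slope is discontinuous at each segment boundary. A possible refinement (a sketch on our side, not part of the original script) is a smoothstep ramp, whose derivative vanishes at both ends of each segment:

# sketch: replace the linear ramp by a smoothstep ramp
s = (ft - ft.min()) / (ft.max() - ft.min())  # linear ramp in [0, 1]
smooth = 3.*s**2 - 2.*s**3                   # smoothstep: same endpoints, zero slope
im = np.empty(shape=(mc.N_X, mc.N_Y, 0))
for i_sf, B_sf in enumerate(B_sf_):
    im_old = mc.random_cloud(mc.envelope_gabor(fx, fy, ft, B_sf=B_sf), seed=seed)
    im_new = mc.random_cloud(mc.envelope_gabor(fx, fy, ft, B_sf=B_sf_[(i_sf+1) % N]), seed=seed)
    im = np.concatenate((im, (1.-smooth)*im_old + smooth*im_new), axis=-1)
mc.anim_save(mc.rectif(im), os.path.join(mc.figpath, name_ + '_smoothstep'))
mc.in_show_video(name_ + '_smoothstep')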

More is not always better

MotionClouds

MotionClouds are random dynamic stimuli optimized to study motion perception.

Notably, this method was used in the following paper:

  • Claudio Simoncini, Laurent U. Perrinet, Anna Montagnini, Pascal Mamassian, Guillaume S. Masson (2012). More is not always better: dissociation between perception and action explained by adaptive gain control. Nature Neuroscience.

In this notebook, we describe the scripts used to generate such stimuli.

Read more…

The Vancouver set

I have heard that Vancouver can get foggy and cloudy in the winter. Here, I provide some examples of realistic simulations of such weather...

These stimuli were used in the following poster presented at VSS:


@article{Kreyenmeier2016,
author = {Kreyenmeier, Philipp and Fooken, Jolande and Spering, Miriam},
doi = {10.1167/16.12.457},
issn = {1534-7362},
journal = {Journal of Vision},
month = {sep},
number = {12},
pages = {457},
publisher = {The Association for Research in Vision and Ophthalmology},
title = {{Similar effects of visual context dynamics on eye and hand movements}},
url = {http://jov.arvojournals.org/article.aspx?doi=10.1167/16.12.457},
volume = {16},
year = {2016}
}

Read more…

An optic flow with Motion Clouds

Horizon

Script written in collaboration with Jean Spezia.

The idea: tile a large frame with a mosaic of small Motion Clouds whose local velocities form an optic-flow field; a second mosaic adds a "horizon" pattern, in which clouds drift left and right with a speed that increases with the distance from the middle row. Summing the two mosaics gives the final stimulus.

In [1]:
import os
import numpy as np
import MotionClouds as mc

seed = 1234
size = 5
N_X, N_Y, N_frame = 2**size, 2**size, 128  # small 32x32 tiles, 128 frames
In [2]:
fx, fy, ft = mc.get_grids(N_X, N_Y, N_frame)
In [3]:
N_orient = 9
hzn1 = np.zeros((N_orient*N_X, N_orient*N_Y, N_frame))
for i, x_i in enumerate(np.linspace(-1, 1., N_orient)):
    for j, x_j in enumerate(np.linspace(-1, 1., N_orient)):
        # map the tile's position to a local speed, bounded in [-1, 1]
        V_X = 2 * x_i / (1 + x_i**2)
        V_Y = 2 * x_j / (1 + x_j**2)
        env = mc.envelope_gabor(fx, fy, ft, V_X=V_X, V_Y=V_Y, B_theta=np.inf, B_sf=np.inf)
        speed2 = mc.random_cloud(env, seed=seed)
        hzn1[i*N_X:(i+1)*N_X, j*N_Y:(j+1)*N_Y, :] = speed2

name = 'optic-flow'
mc.anim_save(mc.rectif(hzn1, contrast=.99), os.path.join(mc.figpath, name))
mc.in_show_video(name)
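As an aside, the mapping used above, v(x) = 2x/(1 + x²), keeps each tile's speed bounded: it is odd and reaches its extrema of ±1 exactly at x = ±1. A quick check (our addition, not in the original notebook):

# sanity check of the speed mapping: odd, bounded, extrema at the edges
x = np.linspace(-1, 1., N_orient)
v = 2 * x / (1 + x**2)
print(v.min(), v.max())  # -1.0 1.0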
In [4]:
N_orient = 9
hzn2 = np.zeros((N_orient*N_X, N_orient*N_Y, N_frame))
# first, fill the whole mosaic with the last cloud generated above as a background
i = 0
j = 0
while (i != N_orient):
    while (j != N_orient):
        hzn2[i*N_X:(i+1)*N_X, j*N_Y:(j+1)*N_Y, :] = speed2
        j += 1
    j = 0
    i += 1

# then, above and below the horizon (the middle row), paint rows of clouds
# drifting left and right, with speed increasing away from the horizon
V_X = 0.5
V_Y = 0.0
i = 3
j = 0
while (i != -1):
    env = mc.envelope_gabor(fx, fy, ft, V_X=-V_X, V_Y=V_Y, B_theta=np.inf, B_sf=np.inf)
    speed = mc.random_cloud(env, seed=seed)
    env = mc.envelope_gabor(fx, fy, ft, V_X=V_X, V_Y=V_Y, B_theta=np.inf, B_sf=np.inf)
    speed2 = mc.random_cloud(env, seed=seed)
    while (j != N_orient):
        hzn2[i*N_X:(i+1)*N_X, j*N_Y:(j+1)*N_Y, :] = speed
        hzn2[(N_orient-1-i)*N_X:(N_orient-i)*N_X, j*N_Y:(j+1)*N_Y, :] = speed2
        j += 1
    j = 0
    V_X = V_X + V_X*1.5
    V_Y = V_Y + V_Y*1.5
    i += -1

name = 'Horizon'
mc.anim_save(mc.rectif(hzn2, contrast=.99), os.path.join(mc.figpath, name))
mc.in_show_video(name)
In [5]:
# leftover index check: after the while loops above, i == -1 and j == 0,
# so this slice is empty along the first axis
hzn = np.zeros((N_orient*N_X, N_orient*N_Y, N_frame))
hzn[i*N_X:(i+1)*N_X, j*N_Y:(j+1)*N_Y, :].shape
Out[5]:
(0, 32, 128)
In [6]:
hzn = hzn1 + hzn2
name = 'flow'
mc.anim_save(mc.rectif(hzn, contrast=.99), os.path.join(mc.figpath, name))
mc.in_show_video(name)

Colored Motion Clouds

Exploring colored Motion Clouds

By construction, Motion Clouds are grayscale:

In [1]:
import os
import numpy as np
import MotionClouds as mc
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
name = 'color'
In [2]:
env = mc.envelope_gabor(fx, fy, ft, V_X=0., V_Y=0.)
mc.figures(env, name + '_gray', do_figs=False)
mc.in_show_video(name + '_gray')

However, it is not hard to imagine extending them to color space. A first option is to create a different Motion Cloud for each of the three color channels:

In [3]:
# stack three independent clouds along a new color axis: (N_X, N_Y, 3, N_frame)
colored_MC = np.zeros((env.shape[0], env.shape[1], 3, env.shape[2]))

for i in range(3):
    colored_MC[:, :, i, :] = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, V_X=0., V_Y=0.)))

mc.anim_save(colored_MC, os.path.join(mc.figpath, name + '_color'))
mc.in_show_video(name + '_color')

Note that the average luminance (here, the sum over the three channels, which is equivalent up to renormalization) is also a random cloud:

In [4]:
mc.anim_save(mc.rectif(colored_MC.sum(axis=2)), os.path.join(mc.figpath, name + '_gray2'))
mc.in_show_video(name + '_gray2')

We may also create a strictly isoluminant cloud by dividing each channel by the pointwise luminance:

In [5]:
luminance = colored_MC.sum(axis=2)[:, :, np.newaxis, :]
mc.anim_save(colored_MC/luminance, os.path.join(mc.figpath, name + '_isocolor'))
mc.in_show_video(name + '_isocolor')
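One caveat (our addition; the original cell does not guard against it): since each channel of colored_MC lies in [0, 1], the summed luminance can approach zero at some pixels, making the division unstable. A small epsilon avoids that:

# sketch: guard the division against (near-)zero luminance
eps = 1e-6
mc.anim_save(colored_MC / (luminance + eps), os.path.join(mc.figpath, name + '_isocolor_safe'))
mc.in_show_video(name + '_isocolor_safe')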

There are now many more possibilities, such as:

  • weighting the R, G and B channels differently to better match psychophysical sensitivities (see the sketch below),
  • activating only the predominant channels (for instance the red and green channels, which correspond to the maximal responses of the cones).
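For the first option, a minimal sketch (the weights below are arbitrary placeholders to be calibrated against an observer's sensitivity, not values from the original post):

# sketch: weight the R, G and B channels; w_rgb values are placeholders
w_rgb = np.array([0.6, 0.3, 0.1])
weighted_MC = colored_MC * w_rgb[np.newaxis, np.newaxis, :, np.newaxis]
mc.anim_save(weighted_MC, os.path.join(mc.figpath, name + '_weighted'))
mc.in_show_video(name + '_weighted')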