An optic flow with Motion Clouds

Horizon

Script done in collaboration with Jean Spezia.

In this notebook, we tile a grid of Motion Clouds whose speed varies smoothly from tile to tile, evoking an optic flow; we then build a horizon-like stimulus in which speed increases with the distance from the central line.
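The tiling below maps each tile coordinate $x \in [-1, 1]$ to a velocity through $V(x) = 2x/(1+x^2)$; as a quick numerical check (a standalone sketch, not part of the stimulus code), the mapping stays bounded in $[-1, 1]$:

```python
import numpy as np

x = np.linspace(-1, 1, 9)    # tile coordinates across the 9x9 grid
V = 2 * x / (1 + x**2)       # smooth velocity mapping, bounded in [-1, 1]

print(V.min(), V.max())  # -1.0 1.0
```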

In [1]:
import os
import numpy as np
import MotionClouds as mc
seed = 1234
size = 5
N_X, N_Y, N_frame = 2**size, 2**size, 128
In [2]:
fx, fy, ft = mc.get_grids(N_X, N_Y, N_frame)
In [3]:
N_orient = 9
hzn1 = np.zeros((N_orient*N_X, N_orient*N_Y, N_frame))
for i, x_i in enumerate(np.linspace(-1, 1, N_orient)):
    for j, x_j in enumerate(np.linspace(-1, 1, N_orient)):
        V_X = 2 * x_i / (1 + x_i**2)
        V_Y = 2 * x_j / (1 + x_j**2)
        #f_0 = ...
        # theta = np.arctan2(V_Y, V_X)
        env = mc.envelope_gabor(fx, fy, ft, V_X=V_X, V_Y=V_Y, B_theta=np.inf, B_sf=np.inf)
        speed2 = mc.random_cloud(env, seed=seed)
        hzn1[i*N_X:(i+1)*N_X, j*N_Y:(j+1)*N_Y, :] = speed2

name = 'optic-flow'
mc.anim_save(mc.rectif(hzn1, contrast=.99), os.path.join(mc.figpath, name))
mc.in_show_video(name)
In [4]:
N_orient = 9
hzn2 = np.zeros((N_orient*N_X, N_orient*N_Y, N_frame))
# fill every tile with the last cloud computed above
for i in range(N_orient):
    for j in range(N_orient):
        hzn2[i*N_X:(i+1)*N_X, j*N_Y:(j+1)*N_Y, :] = speed2

V_X = 0.5
V_Y = 0.0
for i in range(3, -1, -1):
    env = mc.envelope_gabor(fx, fy, ft, V_X=-V_X, V_Y=V_Y, B_theta=np.inf, B_sf=np.inf)
    speed = mc.random_cloud(env, seed=seed)
    env = mc.envelope_gabor(fx, fy, ft, V_X=V_X, V_Y=V_Y, B_theta=np.inf, B_sf=np.inf)
    speed2 = mc.random_cloud(env, seed=seed)
    for j in range(N_orient):
        hzn2[i*N_X:(i+1)*N_X, j*N_Y:(j+1)*N_Y, :] = speed
        hzn2[(8-i)*N_X:(9-i)*N_X, j*N_Y:(j+1)*N_Y, :] = speed2
    # speed increases by a factor 2.5 for each row away from the center
    V_X = V_X * 2.5
    V_Y = V_Y * 2.5

name = 'Horizon'
mc.anim_save(mc.rectif(hzn2, contrast=.99), os.path.join(mc.figpath, name))
mc.in_show_video(name)
In [6]:
hzn = hzn1 + hzn2
name = 'flow'
mc.anim_save(mc.rectif(hzn, contrast=.99), os.path.join(mc.figpath, name))
mc.in_show_video(name)

Colored Motion Clouds

Exploring colored Motion Clouds

By construction, Motion Clouds are grayscale:

In [1]:
import os
import numpy as np
import MotionClouds as mc
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
name = 'color'
In [2]:
env = mc.envelope_gabor(fx, fy, ft, V_X=0., V_Y=0.)
mc.figures(env, name + '_gray', do_figs=False)
mc.in_show_video(name + '_gray')

However, it is not hard to imagine extending them to the color space. A first option is to create a different Motion Cloud for each channel:

In [3]:
colored_MC = np.zeros((env.shape[0], env.shape[1], 3, env.shape[2]))

for i in range(3):
    colored_MC[:, :, i, :] = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, V_X=0., V_Y=0.)))

mc.anim_save(colored_MC, os.path.join(mc.figpath, name + '_color'))
mc.in_show_video(name + '_color')

Note that the luminance (summed over channels) is also a random cloud:

In [4]:
mc.anim_save(mc.rectif(colored_MC.sum(axis=2)), os.path.join(mc.figpath, name + '_gray2'))
mc.in_show_video(name + '_gray2')

We may create a strictly isoluminant cloud:

In [5]:
luminance = colored_MC.sum(axis=2)[:, :, np.newaxis, :]
mc.anim_save(colored_MC/luminance, os.path.join(mc.figpath, name + '_isocolor'))
mc.in_show_video(name + '_isocolor')

There are now many more possibilities, such as

  • weighting the R, G and B channels differently to obtain a better tuning to psychophysics,
  • activating only the predominant channels (for instance the red and green channels, which correspond to the maximal responses of the cones).
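As a minimal sketch of the first option, channel weighting could look as follows (the weights are hypothetical, and a random array stands in for the colored cloud computed above):

```python
import numpy as np

# stand-in for a colored Motion Cloud of shape (N_X, N_Y, channel, N_frame)
colored_MC = np.random.rand(32, 32, 3, 16)

# hypothetical weights for the R, G and B channels
weights = np.array([0.6, 0.3, 0.1])

# broadcast the per-channel weights over space and time
weighted_MC = colored_MC * weights[np.newaxis, np.newaxis, :, np.newaxis]
print(weighted_MC.shape)  # (32, 32, 3, 16)
```

The reweighted array can then be saved with `mc.anim_save` just like `colored_MC` above.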

Recruiting different population ratios in V1 using orientation components: defining a protocol

A feature of MotionClouds is the ability to precisely tune the precision of information along the principal axes. One tuning which is particularly relevant for the primary visual cortical area of primates (area V1) is that of the orientation mean and bandwidth.

This is part of a larger study to tune orientation bandwidth.

summary of the electro-physiology protocol

Read more…

Reverse-phi and Asymmetry of ON and OFF responses

In [1]:
import os
import MotionClouds as mc
import scipy.ndimage as nd
import matplotlib.cm as cm
import numpy as np
%pylab inline
Populating the interactive namespace from numpy and matplotlib
In [2]:
import holoviews as hv
#hv.notebook_extension('bokeh')
hv.notebook_extension()
nb_name = 'reverse_'
HoloViewsJS successfully loaded in this cell.

reverse phi motion generator

In [3]:
def toss():
    return (np.random.rand()>.5)*2-1

def mcg(img, wait=12, term=6, shift_range=4, reverse=True, nb_test=1, random=False, verbose=False):
    """
    From a (square, gray-level) image, returns a (3D ndarray) movie with a reverse-phi motion.

    This movie consists of nb_test*(wait + 2*term) gray-scale frames where:

    nb_test: the number of repetitions (in case of random tosses of the shift)
    wait: the number of frames we wait before each presentation
    term: the number of frames during which we present one frame and then the other
    reverse: whether we reverse the contrast

    """
    if nb_test <= 0:
        raise ValueError('nb_test should be a positive integer')
    period = wait + 2*term
            
    movie = np.zeros([img.shape[0], img.shape[1], (nb_test*(wait + 2*term))])
    for i_test in range(nb_test):
        if random:
            contrast = toss()
            shift = toss() * np.random.randint(shift_range)
        else:
            contrast = 2*(reverse==False) - 1
            shift = shift_range
        if verbose: print('contrast=', contrast)
        start = i_test*period + wait
        movie[:, :, start:(start + term)] = img[:, :, np.newaxis]
        img_shifted = np.roll(img, shift, 1) * contrast
        movie[:, :, (start + term):(start + 2*term)] = img_shifted[:, :, np.newaxis]
    return movie

Random Image

Let's first try with a binary random image:

In [4]:
%%opts Image style(cmap='gray')
pictogram = 2. * (np.random.rand(64, 64) > .5) - 1.
hv.Image(pictogram).hist()
Out[4]:

The simplest movement has no reversal ($\phi$-motion): the image goes up:

In [5]:
movie = mcg(img=pictogram, reverse=False)
name = nb_name + 'pictogram-phi'
mc.anim_save(mc.rectif(movie), os.path.join(mc.figpath, name))
mc.in_show_video(name)

Similarly, with no reversal but an opposite shift, the image goes down:

In [6]:
movie = mcg(img=pictogram, reverse=False, shift_range=-4)
name = nb_name + 'pictogram-phidown'
mc.anim_save(mc.rectif(movie), os.path.join(mc.figpath, name))
mc.in_show_video(name)

Flipping the contrast induces a reversal (reverse-$\phi$ motion):

In [7]:
movie = mcg(img=pictogram, reverse=True)
name = nb_name + 'pictogram-reversephi'
mc.anim_save(mc.rectif(movie), os.path.join(mc.figpath, name))
mc.in_show_video(name)

Showing a sequence of such motions with random reversals and shifts:

In [8]:
movie = mcg(img=pictogram, nb_test=10, random=True)
name = nb_name + 'pictogram'
mc.anim_save(mc.rectif(movie), os.path.join(mc.figpath, name))
mc.in_show_video(name)

trying out with a (natural) image

In [9]:
from scipy.misc import face  # note: in recent SciPy versions, this is scipy.datasets.face
color = face()
print('Size of image @ loading ', color.shape)
image = color[:, (1024-768):, :].mean(axis=-1)
image = 2. * (image - image.min()) / (image.max() - image.min()) - 1.
print('Size of image after square reshaping + gray levels ', image.shape)
Size of image @ loading  (768, 1024, 3)
Size of image after square reshaping + gray levels  (768, 768)
In [10]:
%%opts Image style(cmap='gray')
hv.RGB(color) + hv.Image(image).hist()
Out[10]:
In [11]:
movie = mcg(img=np.rot90(image, 3))
name = nb_name + 'natural'
mc.anim_save(mc.rectif(movie), os.path.join(mc.figpath, name))
mc.in_show_video(name)

Showing a sequence of such motions with random reversals and shifts demonstrates that there is a "surprising" effect related to the reversal:

In [12]:
movie = mcg(img=np.rot90(image, 3), nb_test=10, random=True)
name = nb_name + 'natural_random'
mc.anim_save(mc.rectif(movie), os.path.join(mc.figpath, name))
mc.in_show_video(name)

static MotionClouds

trying with just one frame of a motion cloud:

In [13]:
%%opts Image style(cmap='gray')
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, 1)
name = nb_name + 'MC1'
image = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft)))
image = 2. * image.reshape((mc.N_X, mc.N_Y)) - 1.
image = np.rot90(image, 1)
hv.Image(image).hist()
Out[13]:

The simplest movement is to have no reversal ($\phi$-motion) the image goes up:

In [14]:
movie = mcg(img=image, reverse=False)
name = nb_name + 'MC1-phi'
mc.anim_save(mc.rectif(movie), os.path.join(mc.figpath, name))
mc.in_show_video(name)

Flipping the contrast induces a reversal (reverse-$\phi$ motion):

In [15]:
movie = mcg(img=image, reverse=True)
name = nb_name + 'MC1-reversephi'
mc.anim_save(mc.rectif(movie), os.path.join(mc.figpath, name))
mc.in_show_video(name)

Showing a sequence of such motions with random reversals and shifts reveals that the difference between a reversal and no reversal is harder to catch than with a natural image:

In [16]:
movie = mcg(img=image, nb_test=10, random=True)
name = nb_name + 'MC1re-random'
mc.anim_save(mc.rectif(movie), os.path.join(mc.figpath, name))
mc.in_show_video(name)

MotionClouds

trying now with a full motion cloud:

In [17]:
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
name = nb_name + 'MC'
env = mc.envelope_gabor(fx, fy, ft)
env = np.rot90(env)
mc.figures(env, name, seed=12234565)
mc.in_show_video(name)

Now generate a reverse-phi stimulus by showing just two frames, the second one being contrast-flipped:

In [18]:
i_frame = 4
mov = mc.random_cloud(env)
mov_ =  np.zeros_like(mov)
print ('Size of movie @ loading ', mov_[:, :, (mc.N_frame//2-i_frame):(mc.N_frame//2)].shape, '\nSize of frame = ', mov[:, :, mc.N_frame//2-1].shape)
mov_[:, :, (mc.N_frame//2-i_frame):(mc.N_frame//2)] = mov[:, :, mc.N_frame//2-1][:, :, np.newaxis]
mov_[:, :, (mc.N_frame//2):(mc.N_frame//2+i_frame)] = mov[:, :, mc.N_frame//2][:, :, np.newaxis]
mov_ = mc.rectif(mov_, contrast=1.)
mc.anim_save(mov_, os.path.join(mc.figpath, name + '_just_Phi'))
mc.in_show_video(name + '_just_Phi')
Size of movie @ loading  (256, 256, 4) 
Size of frame =  (256, 256)
In [19]:
%%opts Image style(cmap='gray')
key_dims = [hv.Dimension('x',range=(-.5,.5)), hv.Dimension('y',range=(-.5,.5))]
one = hv.Image(mov_[:, :, mc.N_frame//2-i_frame].T, group='one frame', key_dimensions=key_dims)
two = hv.Image(mov_[:, :, mc.N_frame//2].T, group='second frame', key_dimensions=key_dims).hist()
one + two
Out[19]:
In [20]:
i_frame = 4
mov = mc.random_cloud(env)
mov_ =  np.zeros_like(mov)
print ('Size of movie @ loading ', mov_[:, :, (mc.N_frame//2-i_frame):(mc.N_frame//2)].shape, '\nSize of frame = ', mov[:, :, mc.N_frame//2-1].shape)
mov_[:, :, (mc.N_frame//2-i_frame):(mc.N_frame//2)] = mov[:, :, mc.N_frame//2-1][:, :, np.newaxis]
mov_[:, :, (mc.N_frame//2):(mc.N_frame//2+i_frame)] = -mov[:, :, mc.N_frame//2][:, :, np.newaxis]
mov_ = mc.rectif(mov_, contrast=1.)
mc.anim_save(mov_, os.path.join(mc.figpath, name + '_reverse_Phi'))
mc.in_show_video(name + '_reverse_Phi')
Size of movie @ loading  (256, 256, 4) 
Size of frame =  (256, 256)
In [21]:
%%opts Image style(cmap='gray')
one = hv.Image(mov_[:, :, mc.N_frame//2-i_frame].T, group='one frame', key_dimensions=key_dims)
two = hv.Image(mov_[:, :, mc.N_frame//2].T, group='second frame', key_dimensions=key_dims).hist()
one + two
Out[21]:

Recruiting different population ratios in V1 using orientation components: a biphoton study

A feature of MotionClouds is the ability to precisely tune the precision of information along the principal axes. One tuning which is particularly relevant for the primary visual cortical area of primates (area V1) is that of the orientation mean and bandwidth.

To install the necessary libraries, check out the documentation.

summary of the biphoton protocol

For the biphoton experiment:

  • The refresh rate of the screen is 70 Hz and stimulation lasts 5 times 1 s, which makes 350 images.
  • For the spatial frequency, 0.125 cyc/deg is optimal (between 0.01 and 0.16).
  • For the temporal frequency, 2 cyc/sec is optimal (between 0.8 and 4 cyc/sec); we manipulate $B_V$ to get a qualitative estimate.
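A quick sanity check of the frame count implied by these numbers:

```python
refresh_rate = 70                 # screen refresh rate, in Hz
n_repetitions, duration = 5, 1.0  # five stimulations of 1 s each
n_frames = int(refresh_rate * n_repetitions * duration)
print(n_frames)  # 350
```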

Read more…

Eschercube

Eschercube

Motion Clouds are a set of stimuli designed to explore in a systematic way the functional response of a sensory system to a natural-like motion stimulus.

These are optimized for translating, full-field motion and are by construction textures synthesized from randomly placed, similar motion patches with characteristic spatial parameters. The object of such an endeavor is to systematically probe a sensory system by varying the parameters and testing the response to the corresponding series of textures.

We show here for a fixed set of central parameters (mean speed, direction, orientation and spatial frequency) a cube constituted by a family of such Motion Clouds when varying the bandwidth parameters for speed (left panel), frequency (right panel) and orientation (top panel).

Each elementary cube within the larger cube denoting the parameter space represents a Motion Cloud; it is drawn as a cube to represent the corresponding movie, with time flowing from lower left to upper right on the right and top facets. The overlaid hue gives a measure of a typical response of a sensory model (here a motion energy model), which provides a complete characterization of the sensory system at hand.

In [1]:
from IPython.display import HTML
HTML(data='''<video alt="MCartwork" controls autoplay="1" loop="1" width="100%">
                <source src={url} type="video/mp4"/>
             </video>'''.format(url='http://motionclouds.invibe.net/files/MCartwork.mp4'))
Out[1]:

the making of the parameter cube : 1) making a crystal

In [2]:
infile = \
"""ATOM 001 N   ASN A  4 0 0 0
ATOM 002 CA  ASN A  4 0 0 1
ATOM 003 C   ASN A  4 0 0 2
ATOM 004 O   ASN A  4 0 0 3
ATOM 005 CB  ASN A  4 0 0 4
ATOM 006 CG  ASN A  4 0 1 0
ATOM 007 OD1 ASN A  4 0 1 1
ATOM 008 ND2 ASN A  4 0 1 2
ATOM 009 N   CYS A  5 0 1 3
ATOM 010 CA  CYS A  5 0 1 4
ATOM 011 C   CYS A  5 0 2 0
ATOM 012 O   CYS A  5 0 2 1
ATOM 013 CB  CYS A  5 0 2 2
ATOM 014 SG  CYS A  5 0 2 3
ATOM 015 N   GLU A  6 0 2 4
ATOM 016 CA  GLU A  6 0 3 0
ATOM 017 C   GLU A  6 0 3 1
ATOM 018 O   GLU A  6 0 3 2
ATOM 019 CB  GLU A  6 0 3 3
ATOM 020 CG  GLU A  6 0 3 4
ATOM 021 CD  GLU A  6 0 4 0
ATOM 022 OE1 GLU A  6 0 4 1
ATOM 023 OE2 GLU A  6 0 4 2
ATOM 024 N   ARG A  7 0 4 3
ATOM 025 CA  ARG A  7 0 4 4
ATOM 026 C   ARG A  7 1 0 0
ATOM 027 O   ARG A  7 1 0 1
ATOM 028 CB  ARG A  7 1 0 2
ATOM 029 CG  ARG A  7 1 0 3
ATOM 030 CD  ARG A  7 1 0 4
ATOM 031 NE  ARG A  7 1 1 0
ATOM 032 CZ  ARG A  7 1 1 1
ATOM 033 NH1 ARG A  7 1 1 2
ATOM 034 NH2 ARG A  7 1 1 3
ATOM 035 N   VAL A  8 1 1 4
ATOM 036 CA  VAL A  8 1 2 0
ATOM 037 C   VAL A  8 1 2 1
ATOM 038 O   VAL A  8 1 2 2
ATOM 039 CB  VAL A  8 1 2 3
ATOM 040 CG1 VAL A  8 1 2 4
ATOM 041 CG2 VAL A  8 1 3 0
ATOM 042 N   TRP A  9 1 3 1
ATOM 043 CA  TRP A  9 1 3 2
ATOM 044 C   TRP A  9 1 3 3
ATOM 045 O   TRP A  9 1 3 4
ATOM 046 CB  TRP A  9 1 4 0
ATOM 047 CG  TRP A  9 1 4 1
ATOM 048 CD1 TRP A  9 1 4 2
ATOM 049 CD2 TRP A  9 1 4 3
ATOM 050 NE1 TRP A  9 1 4 4
ATOM 051 CE2 TRP A  9 2 0 0
ATOM 052 CE3 TRP A  9 2 0 1
ATOM 053 CZ2 TRP A  9 2 0 2
ATOM 054 CZ3 TRP A  9 2 0 3
ATOM 055 CH2 TRP A  9 2 0 4
ATOM 056 N   LEU A 10 2 1 0
ATOM 057 CA  LEU A 10 2 1 1
ATOM 058 C   LEU A 10 2 1 2
ATOM 059 O   LEU A 10 2 1 3
ATOM 060 CB  LEU A 10 2 1 4
ATOM 061 CG  LEU A 10 2 2 0
ATOM 062 CD1 LEU A 10 2 2 1
ATOM 063 CD2 LEU A 10 2 2 2
ATOM 064 N   ASN A 11 2 2 3
ATOM 065 CA  ASN A 11 2 2 4
ATOM 066 C   ASN A 11 2 3 0
ATOM 067 O   ASN A 11 2 3 1
ATOM 068 CB  ASN A 11 2 3 2
ATOM 069 CG  ASN A 11 2 3 3
ATOM 070 OD1 ASN A 11 2 3 4
ATOM 071 ND2 ASN A 11 2 4 0
ATOM 072 N   VAL A 12 2 4 1
ATOM 073 CA  VAL A 12 2 4 2
ATOM 074 C   VAL A 12 2 4 3
ATOM 075 O   VAL A 12 2 4 4
ATOM 076 CB  VAL A 12 3 0 0
ATOM 077 CG1 VAL A 12 3 0 1
ATOM 078 CG2 VAL A 12 3 0 2
ATOM 079 N   THR A 13 3 0 3
ATOM 080 CA  THR A 13 3 0 4
ATOM 081 C   THR A 13 3 1 0
ATOM 082 O   THR A 13 3 1 1
ATOM 083 CB  THR A 13 3 1 2
ATOM 084 OG1 THR A 13 3 1 3
ATOM 085 CG2 THR A 13 3 1 4
ATOM 086 N   PRO A 14 3 2 0
ATOM 087 CA  PRO A 14 3 2 1
ATOM 088 C   PRO A 14 3 2 2
ATOM 089 O   PRO A 14 3 2 3
ATOM 090 CB  PRO A 14 3 2 4
ATOM 091 CG  PRO A 14 3 3 0
ATOM 092 CD  PRO A 14 3 3 1
ATOM 093 N   ALA A 15 3 3 2
ATOM 094 CA  ALA A 15 3 3 3
ATOM 095 C   ALA A 15 3 3 4
ATOM 096 O   ALA A 15 3 4 0
ATOM 097 CB  ALA A 15 3 4 1
ATOM 098 N   THR A 16 3 4 2
ATOM 099 CA  THR A 16 3 4 3
ATOM 100 C   THR A 16 3 4 4
ATOM 101 O   THR A 16 4 0 0
ATOM 102 CB  THR A 16 4 0 1
ATOM 103 OG1 THR A 16 4 0 2
ATOM 104 CG2 THR A 16 4 0 3
ATOM 105 N   LEU A 17 4 0 4
ATOM 106 CA  LEU A 17 4 1 0
ATOM 107 C   LEU A 17 4 1 1
ATOM 108 O   LEU A 17 4 1 2
ATOM 109 CB  LEU A 17 4 1 3
ATOM 110 CG  LEU A 17 4 1 4
ATOM 111 CD1 LEU A 17 4 2 0
ATOM 112 CD2 LEU A 17 4 2 1
ATOM 113 N   ARG A 18 4 2 2
ATOM 114 CA  ARG A 18 4 2 3
ATOM 115 C   ARG A 18 4 2 4
ATOM 116 O   ARG A 18 4 3 0
ATOM 117 CB  ARG A 18 4 3 1
ATOM 118 CG  ARG A 18 4 3 2
ATOM 119 CD  ARG A 18 4 3 3
ATOM 120 NE  ARG A 18 4 3 4
ATOM 121 CZ  ARG A 18 4 4 0
ATOM 122 NH1 ARG A 18 4 4 1
ATOM 123 NH2 ARG A 18 4 4 2
ATOM 124 N   SER A 19 4 4 3
ATOM 125 CA  SER A 19 4 4 4
CONECT 001 002 003 004 005
CONECT 006 007 008 009 010
CONECT 011 012 013 014 015
CONECT 016 017 018 019 020
CONECT 021 022 023 024 025
CONECT 026 027 028 029 030
CONECT 031 032 033 034 035
CONECT 036 037 038 039 040
CONECT 041 042 043 044 045
CONECT 046 047 048 049 050
CONECT 051 052 053 054 055
CONECT 056 057 058 059 060
CONECT 061 062 063 064 065
CONECT 066 067 068 069 070
CONECT 071 072 073 074 075
CONECT 076 077 078 079 080
CONECT 081 082 083 084 085
CONECT 086 087 088 089 090
CONECT 091 092 093 094 095
CONECT 096 097 098 099 100
CONECT 101 102 103 104 105
CONECT 106 107 108 109 110
CONECT 111 112 113 114 115
CONECT 116 117 118 119 120
CONECT 121 122 123 124 125
CONECT 001 006 011 016 021
CONECT 026 031 036 041 046
CONECT 051 056 061 066 071
CONECT 076 081 086 091 096
CONECT 101 106 111 116 121
CONECT 002 007 012 017 022
CONECT 027 032 037 042 047
CONECT 052 057 062 067 072
CONECT 077 082 087 092 097
CONECT 102 107 112 117 122
CONECT 003 008 013 018 023
CONECT 028 033 038 043 048
CONECT 053 058 063 068 073
CONECT 078 083 088 093 098
CONECT 103 108 113 118 123
CONECT 004 009 014 019 024
CONECT 029 034 039 044 049
CONECT 054 059 064 069 074
CONECT 079 084 089 094 099
CONECT 104 109 114 119 124
CONECT 005 010 015 020 025
CONECT 030 035 040 045 050
CONECT 055 060 065 070 075
CONECT 080 085 090 095 100
CONECT 105 110 115 120 125
CONECT 001 026 051 076 101
CONECT 002 027 052 077 102
CONECT 003 028 053 078 103
CONECT 004 029 054 079 104
CONECT 005 030 055 080 105
CONECT 006 031 056 081 106
CONECT 007 032 057 082 107
CONECT 008 033 058 083 108
CONECT 009 034 059 084 109
CONECT 010 035 060 085 110
CONECT 011 036 061 086 111
CONECT 012 037 062 087 112
CONECT 013 038 063 088 113
CONECT 014 039 064 089 114
CONECT 015 040 065 090 115
CONECT 016 041 066 091 116
CONECT 017 042 067 092 117
CONECT 018 043 068 093 118
CONECT 019 044 069 094 119
CONECT 020 045 070 095 120
CONECT 021 046 071 096 121
CONECT 022 047 072 097 122
CONECT 023 048 073 098 123
CONECT 024 049 074 099 124
CONECT 025 050 075 100 125
"""

the making of the parameter cube : 2) filling the crystal

In [3]:
import os
if not os.path.exists('../galleries/Artwork/MCartwork.mp4'):
    import numpy as np
    from tvtk.api import tvtk
    from mayavi.sources.vtk_data_source import VTKDataSource
    from mayavi import mlab
    import MotionClouds as mc
    import itertools
    import os

    def source(dim, bwd):
        """
        Create motion cloud source
        """
        fx, fy, ft = dim
        z = mc.envelope_gabor(fx, fy, ft, B_sf=bwd[0], B_V=bwd[1], B_theta=bwd[2])
        data = mc.rectif(mc.random_cloud(z))
        return data

    def image_data(dim, sp, orig, bwd):
        """
        Create Image Data for the surface from a data source.
        """
        data = source(dim, bwd)
        i = tvtk.ImageData(spacing=sp, origin=orig)
        i.point_data.scalars = data.ravel()
        i.point_data.scalars.name = 'scalars'
        i.dimensions = data.shape
        return i

    def view(dataset):
        """
        Open up a mayavi scene and display the cubeset in it.
        """
        engine = mlab.get_engine()
        src = VTKDataSource(data=dataset)
        engine.add_source(src)
        mlab.pipeline.surface(src, colormap='gray')
        return

    def main(dim, sp=(1, 1, 1), orig=(0, 0, 0), B=(.01, .1, .4)):
        view(image_data(dim=dim, sp=sp, orig=orig, bwd=B))
        return

    def get_nodes_and_edges():
        nodes = dict()
        edges = list()
        atoms = set()
        # Build the graph from the PDB information
        for line in infile.splitlines():
            line = line.split()
            if line[0] == 'ATOM':
                nodes[line[1]] = (line[2], line[6], line[7], line[8])
                atoms.add(line[2])
            elif line[0] == 'CONECT':
                for start, stop in zip(line[1:-1], line[2:]):
                    edges.append((start, stop))
        atoms = list(atoms)
        atoms.sort()
        atoms = dict(zip(atoms, range(len(atoms))))
        # Turn the graph into 3D positions, and a connection list.
        labels = dict()
        x       = list()
        y       = list()
        z       = list()
        scalars = list()
        for index, label in enumerate(nodes):
            labels[label] = index
            this_scalar, this_x, this_y, this_z= nodes[label]
            scalars.append(atoms[this_scalar])
            x.append(float(this_x))
            y.append(float(this_y))
            z.append(float(this_z))
        connections = list()
        for start, stop in edges:
            connections.append((labels[start], labels[stop]))
        x       = np.array(x)
        y       = np.array(y)
        z       = np.array(z)
        scalars = np.array(scalars)
        return x, y, z, connections, scalars

    size = 2**4 ##
    space = 1.5 * size # space between 2 cubes
    N = 5
    idx = np.arange(N)
    pos = idx * space
    Bf = np.logspace(-2., 0.1, N)
    Bv = [0.01, 0.1, 0.5, 1.0, 10.0]
    sigma_div = [2, 3, 5, 8, 13]
    Bo = np.pi/np.array(sigma_div)
    fx, fy, ft = mc.get_grids(size, size, size)
    downscale = 4
    mlab.figure(1, bgcolor=(.6, .6, .6), fgcolor=(0, 0, 0), size=(1600/downscale,1200/downscale))
    mlab.clf()
    do_grid, do_splatter, do_MCs = True, True, True

    """ Generate the cubical graph structure to connect each individual cube """
    if do_grid:
        x, y, z, connections, scalars = get_nodes_and_edges()
        x = x * space + size / 2.
        y = y * space + size / 2.
        z = z * space + size / 2.
        scalars = 0.8*np.ones(x.shape)
        pts = mlab.points3d(x, y, z, scalars, colormap='Blues', scale_factor=5.0, resolution=10)
        pts.mlab_source.dataset.lines = np.array(connections)
        # Use a tube filter to plot tubes on the link, varying the radius with the scalar value
        tube = mlab.pipeline.tube(pts, tube_sides=15, tube_radius=1.5)
        mlab.pipeline.surface(tube, color=(0.4, 0.4, 0.4) )

    """ Gaussian Splatter """
    if do_splatter:
        # Visualize the local atomic density
        x_, y_, z_ = np.mgrid[0:(N*space), 0:(N*space), 0:(N*space)]
        response = np.exp(-((x_-4*space)**2 +(y_- 1*space)**2 +((z_-3.4*space)/2.)**2)/2/(1.3*space)**2)
        mlab.contour3d(response, opacity=0.5)

    """ Generate the Motion Clouds cubes """
    if do_MCs:
        for i, j, k in list(itertools.product(idx, idx, idx)):
            main(dim=(fx, fy, ft), orig=(pos[i], pos[j], pos[k]), B=(Bf[i], Bv[k], Bo[j]))
    elevation = 90. - 18.
    view = mlab.view(azimuth=290., elevation=elevation, distance='auto', focalpoint='auto')
    distance = view[2]
    mlab.savefig('MCartwork.png')
    if True:
        N_frame = 64 ##
        for i_az, azimuth in enumerate(np.linspace(0, 360, N_frame, endpoint=False)):
            mlab.view(*view)
            mlab.view(azimuth=azimuth)
            mlab.view(distance=distance*(.7 + 0.3*np.cos(azimuth*2*np.pi/360.)))
            mlab.savefig('_MCartwork%03d.png' % i_az, size=(1600/downscale, 1200/downscale))
        os.system('ffmpeg -y -i _MCartwork%03d.png  MCartwork.mpg')
        os.system('ffmpeg -y -i _MCartwork%03d.png  MCartwork.mp4')
        os.system('convert -delay 8 -loop 0 -colors 256  -scale 25% -layers Optimize  _MCartwork*.png  MCartwork.gif')
        os.system('rm _MCartwork*')

the making of the parameter cube : 3) post-production

In [4]:
from IPython.display import display, Image
#display(Image(filename='../galleries/Artwork/MCartwork.png', embed=True))

Image(url='http://motionclouds.invibe.net/galleries/Artwork/MCartwork.png', embed=False)
Out[4]:

A bit of fun with gravity waves

A bit of fun with gravity waves

Motion Clouds were originally defined to provide a simple parameterization for textures. Thus we used a simple unimodal, normal distribution (on the log-radial frequency space, to be more precise). But the larger set of Random Phase Textures provides some interesting examples, some of which can even be fun! This is the case of this simulation of the waves you may observe on the surface of the ocean.

In [1]:
from IPython.display import HTML
HTML('<center><video controls autoplay loop src="../files/waves.mp4" width=61.8%/></center>')
Out[1]:

The main features of gravity waves are:

  1. longer waves travel faster (tsunamis are fast and global, ripples are slow and local): in deep water, the phase speed grows as the square root of the wavelength;
  2. the phase speed (following a wave's crest) is twice the group speed (following a group of waves).
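These two properties follow from the deep-water dispersion relation $\omega = \sqrt{g k}$; a short numerical sketch:

```python
import numpy as np

g = 9.81                         # gravity (m/s^2)
k = np.logspace(-2, 2, 50)       # wavenumbers (rad/m)
omega = np.sqrt(g * k)           # deep-water dispersion relation

c_phase = omega / k              # phase speed sqrt(g/k): larger for longer waves
c_group = 0.5 * np.sqrt(g / k)   # group speed d(omega)/dk

print(np.all(np.diff(c_phase) < 0))       # True: speed decreases with wavenumber
print(np.allclose(c_phase, 2 * c_group))  # True: phase speed is twice group speed
```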

Motion Plaid

MotionPlaids: from MotionClouds components to a plaid-like stimulus

In [1]:
import numpy as np
import MotionClouds as mc
downscale = 2
fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame//downscale)
name = 'MotionPlaid'

Plaids are usually created by adding two moving gratings (the components) to form a new stimulus (the pattern). Such stimuli are crucial to understand how information about motion is represented and processed; their neural mechanisms have been extensively studied, notably by Anthony Movshon at NYU. One shortcoming is that the components create interference patterns when they are added (as in a Moiré pattern). The question remains whether everything we know about component vs. pattern processing stems from these interference patterns or really from the processing of the two components as a whole. Motion Clouds are ideal candidates here because they are generated with random phases: by construction, there should not be any localized interference between components. Let's verify that:

Definition of parameters:

In [2]:
theta1, theta2, B_theta = np.pi/4., -np.pi/4., np.pi/32

This figure shows how one can create MotionCloud stimuli that specifically target component and pattern cells. The rows of this table show respectively: Top) one Motion Cloud component (with a strong selectivity toward the orientation perpendicular to direction) heading in the upper diagonal; Middle) a similar component following the lower diagonal; Bottom) the addition of both components: perceptually, the horizontal direction is predominant.

In [3]:
print( mc.in_show_video.__doc__)

    Columns represent isometric projections of a cube. The left column displays
    iso-surfaces of the spectral envelope by displaying enclosing volumes at 5
    different energy values with respect to the peak amplitude of the Fourier spectrum.
    The middle column shows an isometric view of the faces of the movie cube.
    The first frame of the movie lies on the x-y plane, the x-t plane lies on the
    top face and motion direction is seen as diagonal lines on this face (vertical
    motion is similarly see in the y-t face). The third column displays the actual
    movie as an animation.

    Given a name, displays the figures corresponding to the Fourier spectra, the
    stimulus cubes and movies within the notebook.

    

Component one:

In [4]:
diag1 = mc.envelope_gabor(fx, fy, ft, theta=theta1, V_X=np.cos(theta1), V_Y=np.sin(theta1), B_theta=B_theta)
name_ = name + '_comp1'
mc.figures(diag1, name_, seed=12234565)
mc.in_show_video(name_)

Component two:

In [5]:
diag2 = mc.envelope_gabor(fx, fy, ft, theta=theta2, V_X=np.cos(theta2), V_Y=np.sin(theta2), B_theta=B_theta)
name_ = name + '_comp2'
mc.figures(diag2, name_, seed=12234565)
mc.in_show_video(name_)

The pattern is the sum of the two components:

In [6]:
name_ = name
mc.figures(diag1 + diag2, name, seed=12234565)
mc.in_show_video(name_)

A grid of plaids shows the switch from transparency to pattern motion

Script done in collaboration with Jean Spezia.

In [7]:
import os
import numpy as np
import MotionClouds as mc

name = 'MotionPlaid_9x9'
if mc.check_if_anim_exist(name):
    B_theta = np.pi/32
    N_orient = 9
    downscale = 4
    fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame//downscale)
    mov = np.zeros((N_orient*mc.N_X//downscale, N_orient*mc.N_X//downscale, mc.N_frame//downscale))
    i = 0
    j = 0
    for theta1 in np.linspace(0, np.pi/2, N_orient)[::-1]:
        for theta2 in np.linspace(0, np.pi/2, N_orient):
            diag1 = mc.envelope_gabor(fx, fy, ft, theta=theta1, V_X=np.cos(theta1), V_Y=np.sin(theta1), B_theta=B_theta)
            diag2 = mc.envelope_gabor(fx, fy, ft, theta=theta2, V_X=np.cos(theta2), V_Y=np.sin(theta2), B_theta=B_theta)
            mov[i*mc.N_X//downscale:(i+1)*mc.N_X//downscale, j*mc.N_Y//downscale:(j+1)*mc.N_Y//downscale, :] = mc.random_cloud(diag1 + diag2, seed=1234)
            j += 1
        j = 0
        i += 1
    mc.anim_save(mc.rectif(mov, contrast=.99), os.path.join(mc.figpath, name))
In [8]:
mc.in_show_video(name)

As in (Rust, 06), this table shows the concatenation of 9x9 MotionPlaids where the angles of the components vary along the horizontal and vertical axes respectively.
The diagonal from the bottom-left to the top-right corner shows the addition of two component MotionClouds of similar direction: these are therefore also instances of the same Motion Cloud and thus consist of a single component.
As one gets further away from this diagonal, the angle between both components increases, as can be seen in the figure below. Note that the first and last columns are different instances of similar MotionClouds, just as the first and last lines in the table.

A parametric exploration on MotionPlaids

In [9]:
N_orient = 8
downscale = 2
fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame)
theta = 0
for dtheta in np.linspace(0, np.pi/2, N_orient):
    name_ = name + 'dtheta_' + str(dtheta).replace('.', '_')
    diag1 = mc.envelope_gabor(fx, fy, ft, theta=theta + dtheta, V_X=np.cos(theta + dtheta), V_Y=np.sin(theta + dtheta), B_theta=B_theta)
    diag2 = mc.envelope_gabor(fx, fy, ft, theta=theta - dtheta, V_X=np.cos(theta - dtheta), V_Y=np.sin(theta - dtheta), B_theta=B_theta)
    mc.figures(diag1 + diag2, name_, seed=12234565)
    mc.in_show_video(name_)

For clarity, we display MotionPlaids as the angle between both components increases from 0 to pi/2.
Left column displays iso-surfaces of the spectral envelope by displaying enclosing volumes at 5 different energy values with respect to the peak amplitude of the Fourier spectrum.
Right column of the table displays the actual movie as an animation.

Aperture Problem

SlippingClouds: MotionClouds for exploring the aperture problem

In [1]:
import numpy as np
import MotionClouds as mc

fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
name = 'SlippingClouds'

B_sf = .3
B_theta = np.pi/32
In [2]:
theta1, theta2 = 0., 0.
diag = mc.envelope_gabor(fx, fy, ft, theta=theta2, V_X=np.cos(theta1), V_Y=np.sin(theta1), B_sf=B_sf, B_theta=B_theta)
name_ = name + '_iso'
mc.figures(diag, name_, seed=12234565)
mc.in_show_video(name_)
In [3]:
theta1, theta2 = 0., np.pi/4.
diag = mc.envelope_gabor(fx, fy, ft, theta=theta2, V_X=np.cos(theta1), V_Y=np.sin(theta1), B_sf=B_sf, B_theta=B_theta)
name_ = name + '_diag'
mc.figures(diag, name_, seed=12234565)
mc.in_show_video(name_)
In [4]:
theta1, theta2 = 0., np.pi/2.
diag = mc.envelope_gabor(fx, fy, ft, theta=theta2, V_X=np.cos(theta1), V_Y=np.sin(theta1), B_sf=B_sf, B_theta=B_theta)
name_ = name + '_contra'
mc.figures(diag, name_, seed=12234565)
mc.in_show_video(name_)

This figure shows how one can create MotionCloud stimuli with specific direction and orientation which are not necessarily perpendicular, the components "slipping" relative to the motion.
We generate MotionClouds components with a strong selectivity toward one orientation. The rows of this table show respectively: Top) orientation perpendicular to direction, as in the standard case of a moving line; Middle) a 45° difference between both; Bottom) orientation and direction are parallel.
We created the aperture problem without any aperture! This can be shown by looking at the envelopes in Fourier space. To best infer motion, one looks for the speed plane that best fits each envelope. If an envelope is rather tight, many different speed planes can pass through it: motion is ambiguous.
Columns represent isometric projections of a cube. The left column displays iso-surfaces of the spectral envelope by displaying enclosing volumes at 5 different energy values with respect to the peak amplitude of the Fourier spectrum. The middle column shows an isometric view of the faces of the movie cube.
The first frame of the movie lies on the x-y plane, the x-t plane lies on the top face and motion direction is seen as diagonal lines on this face (vertical motion is similarly seen in the y-t face). The third column displays the actual movie as an animation.
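A toy illustration of this ambiguity (with hypothetical numbers, independent of the stimulus code): for a translating texture the spectral energy lies on the plane $f_t = -(V_X f_x + V_Y f_y)$, so when the energy is concentrated along a single spatial-frequency axis, the velocity component along the orientation axis is left unconstrained:

```python
import numpy as np

# spectral samples of a tight envelope: energy along a single f_x axis
f_x = np.linspace(-0.5, 0.5, 11)
f_y = np.zeros_like(f_x)          # no spread along f_y (narrow orientation)
V_X_true = 0.5
f_t = -V_X_true * f_x             # samples lie on the true speed plane

# every velocity (V_X_true, V_Y) is consistent with the very same samples:
consistent = [np.allclose(f_t, -(V_X_true * f_x + V_Y * f_y))
              for V_Y in (0.0, 0.3, -1.0)]
print(consistent)  # [True, True, True]
```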

exploring different slipping angles

In [5]:
N_orient = 8
#downscale= 2
#fx, fy, ft = mc.get_grids(mc.N_X/downscale, mc.N_Y/downscale, mc.N_frame)
theta = 0
for dtheta in np.linspace(0, np.pi/2, N_orient):
    name_ = name + '_dtheta_' + str(dtheta).replace('.', '_')
    theta1, theta2, B_theta = 0., np.pi/2., np.pi/32
    diag = mc.envelope_gabor(fx, fy, ft, theta=theta+dtheta, V_X=np.cos(theta), V_Y=np.sin(theta), B_sf=B_sf, B_theta=B_theta)
    mc.figures(diag, name_, seed=12234565)
    mc.in_show_video(name_)

We display SlippingClouds with different angle between direction and orientation.
Left column displays iso-surfaces of the spectral envelope by displaying enclosing volumes at 5 different energy values with respect to the peak amplitude of the Fourier spectrum.
Right column of the table displays the actual movie as an animation.

manipulating different ambiguities

In [6]:
N_test = 8
#downscale= 2
#fx, fy, ft = mc.get_grids(mc.N_X/downscale, mc.N_Y/downscale, mc.N_frame)
theta = 0
dtheta = np.pi/4
for B_theta_ in np.pi/8/np.arange(1, N_test+1):#np.linspace(0, np.pi/8, N_test)[1:]:
    name_ = name + '_B_theta_' + str(B_theta_).replace('.', '_')
    theta1, theta2, B_theta = 0., np.pi/2., np.pi/32
    diag = mc.envelope_gabor(fx, fy, ft, theta=theta+dtheta, V_X=np.cos(theta), V_Y=np.sin(theta), B_sf=B_sf, B_theta=B_theta_)
    mc.figures(diag, name_, seed=12234565)
    mc.in_show_video(name_)

The ambiguity in SlippingClouds can be manipulated by changing B_theta for a given B_V.
Left column displays iso-surfaces of the spectral envelope by displaying enclosing volumes at 5 different energy values with respect to the peak amplitude of the Fourier spectrum.
Right column of the table displays the actual movie as an animation.