Manipulating speed in motion clouds

An essential dimension of motion is speed. However, this notion is prone to confusion, as the speed to be measured can be relative to different objects. Is it the speed of pixels? The speed of visual objects? We try to distinguish these two in this post.

In [ ]:
%matplotlib inline
import numpy as np
np.set_printoptions(precision=3, suppress=True)
import pylab
import matplotlib.pyplot as plt

In standard Motion Clouds, speed is that of the pixels of the textons that compose the texture. It is thus defined as a (normal) probability distribution around the speed plane that sets the mean speed:
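Schematically (a sketch, up to the exact normalization used in the code), the envelope is a Gaussian of relative width $B_V$ around the plane defined by the mean speed $(V_X, V_Y)$:

$$\mathcal{E}(f_x, f_y, f_t) \propto \exp\left(-\frac{(f_t + V_X f_x + V_Y f_y)^2}{2\,(B_V \cdot f_r)^2}\right)$$

where $f_r$ is the spatial frequency radius.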

In [ ]:
!rm -fr ../files/physical_speed_parallax*
import os
name = 'physical_speed'
DOWNSCALE = 1

import MotionClouds as mc
N_X, N_Y, N_frame = mc.N_X//DOWNSCALE, mc.N_Y//DOWNSCALE, mc.N_frame//DOWNSCALE  # integer division to keep sizes integer

fx, fy, ft = mc.get_grids(N_X, N_Y, N_frame)

name_ = name + '_vanilla'
z = mc.envelope_gabor(fx, fy, ft)
mc.figures(z, name_)
mc.in_show_video(name_)

Another way to consider speed is that of visual objects. Consider looking through the window of a train at a field of similarly sized (Gabor-shaped) bushes on an infinite plane. The objects that are closer appear bigger on the retina, and their speed is perceived as higher than that of the smaller bushes near the horizon. This relation between size and speed is linear and is called motion parallax (credit: http://www.rhsmpsychology.com/Handouts/monocular_cues_IV.htm).

In [ ]:
from IPython.display import Image
Image('http://www.rhsmpsychology.com/images/monocular_IV.jpg')
Out[ ]:

Such a pattern of motion distribution is highly prototypical in natural settings, and it is easy to generate a texture having such a profile but with textons that are uniformly spaced, similar to the "Golconde" painting by Magritte (By The Shimon Yanowitz Website, Fair use, https://en.wikipedia.org/w/index.php?curid=3773027):

In [ ]:
Image('https://upload.wikimedia.org/wikipedia/en/7/71/Golconde.jpg')
Out[ ]:

More specifically, the relation reads : $$ \frac{1}{f_0} \propto \frac{d\alpha}{d\theta} $$

where $f_0$ is the central scale (spatial frequency) of the visual object and $\frac{d\alpha}{d\theta}$ is the resulting retinal speed. See for instance this paper in Front. Psychol. (06 October 2014 | https://doi.org/10.3389/fpsyg.2014.01103) "Modeling depth from motion parallax with the motion/pursuit ratio" by Mark Nawrot et al:

In [ ]:
Image('http://www.frontiersin.org/files/Articles/104347/fpsyg-05-01103-r2/image_m/fpsyg-05-01103-g001.jpg')
Out[ ]:

For this, we need to modify the way we define the speed plane, somewhat similarly to what we did with gravitational waves. It amounts to convolving a vanilla Motion Cloud with the joint relation between $f_0$ and $V$ defined above:
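Concretely (reading this off the exponential term in the envelope defined below), the speed plane is tilted so that the temporal frequency of each component co-varies with its radial spatial frequency $f_r$, with a slope controlled by the parameter $K$:

$$f_t + f_0 V_X = \frac{f_r - f_0}{K}$$

so that speed varies with scale, as in motion parallax.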

In [ ]:
mc.B_V
Out[ ]:
0.5
In [ ]:
def envelope_physical_speed(fx, fy, ft, K=1., V_X=mc.V_X, V_Y=mc.V_Y,
                        B_V=mc.B_V, sf_0=mc.sf_0, B_sf=mc.B_sf, loggabor=mc.loggabor,
                        theta=mc.theta, B_theta=mc.B_theta, alpha=mc.alpha):
    """
     physical speed envelope:
     selects a gaussian perpendicualr to the plane corresponding to the speed (V_X, V_Y) with some thickness B_V

    """


    envelope = mc.envelope_color(fx, fy, ft, alpha=alpha)
    envelope *= mc.envelope_orientation(fx, fy, ft, theta=theta, B_theta=B_theta)    
    envelope *= mc.envelope_radial(fx, fy, ft, sf_0=sf_0, B_sf=B_sf, loggabor=loggabor)
    
    f_radius = mc.frequency_radius(fx, fy, ft, ft_0=mc.ft_0, clean_division=True)
    envelope *= np.exp(-.5*((ft + sf_0 * V_X - (f_radius - sf_0) / K )**2/(B_V*sf_0)**2))
    # Hack to remove the other part of the envelope - use angle instead to allow for higher B_theta values
    envelope *= fx > 0

    
    return envelope

name_ = name + '_parallax'
K = 1.
opts= dict(V_X=.5, sf_0=.2, B_V=.2)
z = envelope_physical_speed(fx, fy, ft, K=K, **opts)
mc.figures(z, name_)
mc.in_show_video(name_)

Let's explore what happens for different values of K:

In [ ]:
for K_ in 1./np.linspace(0.6, 1.4, 7)*K:
    name_ = name + '_parallax_K_' + str(K_).replace('.', '_')
    z = envelope_physical_speed(fx, fy, ft, K=K_, **opts)
    mc.figures(z, name_)
    mc.in_show_video(name_)

Speed distributions

In [1]:
%matplotlib inline
import numpy as np
np.set_printoptions(precision=3, suppress=True)
import pylab
import matplotlib.pyplot as plt
#!rm -fr ../files/speed*
In [2]:
import MotionClouds as mc
name = 'noisy-speed'
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
print(mc.envelope_speed.__doc__)
    Returns the speed envelope:
    selects the plane corresponding to the speed ``(V_X, V_Y)`` with some bandwidth ``B_V``.

    * (V_X, V_Y) = (0,1) is downward and  (V_X, V_Y) = (1, 0) is rightward in the movie.
    * A speed of V_X=1 corresponds to an average displacement of 1/N_X per frame.
    To achieve one spatial period in one temporal period, you should scale by
    V_scale = N_X/float(N_frame)
    If N_X=N_Y=N_frame and V=1, then it is one spatial period in one temporal
    period. It can be seen along the diagonal in the fx-ft face of the MC cube.

    A special case is used when ``B_V=0``, where the ``fx-ft`` plane is used as
    the speed plane: in that case it is desirable to set ``(V_X, V_Y)`` to ``(0, 0)``
    to avoid aliasing problems.

    Run the 'test_speed' notebook to explore the speed parameters, see
    http://motionclouds.invibe.net/posts/testing-speed.html

    
In [3]:
# explore parameters
for B_V in [0.0, 0.01, 0.1, 1.0, 10.0]:
    name_ = name + '-B_V-' + str(B_V).replace('.', '_')
    z = mc.envelope_gabor(fx, fy, ft, V_X=0, B_V=B_V)
    mc.figures(z, name_)
    mc.in_show_video(name_)

More is not always better

MotionClouds

MotionClouds are random dynamic stimuli optimized to study motion perception.

Notably, this method was used in the following paper:

  • Claudio Simoncini, Laurent U. Perrinet, Anna Montagnini, Pascal Mamassian, Guillaume S. Masson. More is not always better: dissociation between perception and action explained by adaptive gain control. Nature Neuroscience, 2012 URL

In this notebook, we describe the scripts used to generate such stimuli.

Read more…

Colored Motion Clouds

Exploring colored Motion Clouds

By construction, Motion Clouds are grayscale:

In [1]:
import os
import numpy as np
import MotionClouds as mc
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
name = 'color'
In [2]:
env = mc.envelope_gabor(fx, fy, ft, V_X=0., V_Y=0.)
mc.figures(env, name + '_gray', do_figs=False)
mc.in_show_video(name + '_gray')

However, it is not hard to imagine extending them to the color space. A first option is to create a different motion cloud for each channel:

In [3]:
colored_MC = np.zeros((env.shape[0], env.shape[1], 3, env.shape[2]))

for i in range(3):
    colored_MC[:, :, i, :] = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, V_X=0., V_Y=0.)))

mc.anim_save(colored_MC, os.path.join(mc.figpath, name + '_color'))
mc.in_show_video(name + '_color')

Note that the average luminance is also a random cloud:

In [4]:
mc.anim_save(mc.rectif(colored_MC.sum(axis=2)), os.path.join(mc.figpath, name + '_gray2'))
mc.in_show_video(name + '_gray2')

We may create a strictly isoluminant cloud:

In [5]:
luminance = colored_MC.sum(axis=2)[:, :, np.newaxis, :]
mc.anim_save(colored_MC/luminance, os.path.join(mc.figpath, name + '_isocolor'))
mc.in_show_video(name + '_isocolor')

There are now many more possibilities, such as

  • weighting the different R, G and B channels to obtain a better tuning to psychophysics (see the sketch below),
  • activating only the predominant channels (for instance the red and green channels, which correspond to the maximal responses of the L and M cones).
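As a minimal sketch of the first option (the weights below are arbitrary placeholders, not calibrated values), one may simply scale each channel before rectification:

weights = np.array([.6, .3, .1])  # hypothetical R, G, B weights
weighted_MC = colored_MC * weights[np.newaxis, np.newaxis, :, np.newaxis]
mc.anim_save(mc.rectif(weighted_MC), os.path.join(mc.figpath, name + '_weighted'))
mc.in_show_video(name + '_weighted')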

Eschercube

Eschercube

Motion Clouds are a set of stimuli designed to explore in a systematic way the functional response of a sensory system to a natural-like motion stimulus.

These are optimized for translating, full-field motion and are by construction textures synthesized from randomly placed, similar motion patches with characteristic spatial parameters. The object of such an endeavor is to systematically characterize a sensory system by varying these parameters and testing the response to a series of such textures.

We show here for a fixed set of central parameters (mean speed, direction, orientation and spatial frequency) a cube constituted by a family of such Motion Clouds when varying the bandwidth parameters for speed (left panel), frequency (right panel) and orientation (top panel).

Each elementary cube in the larger cube denoting the parameter space represents a Motion Cloud and is shown as a cube to represent the corresponding movie, with time flowing from lower left to upper right in the right and top facets. The overlaid hue gives a measure of a typical response of a sensory system (here, a motion energy model), which gives a complete characterization of the sensory system at hand.

In [1]:
from IPython.display import HTML
HTML(data='''<video alt="MCartwork" controls autoplay="1" loop="1" width="100%">
                <source src={url} type="video/mp4"/>
             </video>'''.format(url='http://motionclouds.invibe.net/files/MCartwork.mp4'))
Out[1]:

the making of the parameter cube : 1) making a crystal

In [2]:
infile = \
"""ATOM 001 N   ASN A  4 0 0 0
ATOM 002 CA  ASN A  4 0 0 1
ATOM 003 C   ASN A  4 0 0 2
ATOM 004 O   ASN A  4 0 0 3
ATOM 005 CB  ASN A  4 0 0 4
ATOM 006 CG  ASN A  4 0 1 0
ATOM 007 OD1 ASN A  4 0 1 1
ATOM 008 ND2 ASN A  4 0 1 2
ATOM 009 N   CYS A  5 0 1 3
ATOM 010 CA  CYS A  5 0 1 4
ATOM 011 C   CYS A  5 0 2 0
ATOM 012 O   CYS A  5 0 2 1
ATOM 013 CB  CYS A  5 0 2 2
ATOM 014 SG  CYS A  5 0 2 3
ATOM 015 N   GLU A  6 0 2 4
ATOM 016 CA  GLU A  6 0 3 0
ATOM 017 C   GLU A  6 0 3 1
ATOM 018 O   GLU A  6 0 3 2
ATOM 019 CB  GLU A  6 0 3 3
ATOM 020 CG  GLU A  6 0 3 4
ATOM 021 CD  GLU A  6 0 4 0
ATOM 022 OE1 GLU A  6 0 4 1
ATOM 023 OE2 GLU A  6 0 4 2
ATOM 024 N   ARG A  7 0 4 3
ATOM 025 CA  ARG A  7 0 4 4
ATOM 026 C   ARG A  7 1 0 0
ATOM 027 O   ARG A  7 1 0 1
ATOM 028 CB  ARG A  7 1 0 2
ATOM 029 CG  ARG A  7 1 0 3
ATOM 030 CD  ARG A  7 1 0 4
ATOM 031 NE  ARG A  7 1 1 0
ATOM 032 CZ  ARG A  7 1 1 1
ATOM 033 NH1 ARG A  7 1 1 2
ATOM 034 NH2 ARG A  7 1 1 3
ATOM 035 N   VAL A  8 1 1 4
ATOM 036 CA  VAL A  8 1 2 0
ATOM 037 C   VAL A  8 1 2 1
ATOM 038 O   VAL A  8 1 2 2
ATOM 039 CB  VAL A  8 1 2 3
ATOM 040 CG1 VAL A  8 1 2 4
ATOM 041 CG2 VAL A  8 1 3 0
ATOM 042 N   TRP A  9 1 3 1
ATOM 043 CA  TRP A  9 1 3 2
ATOM 044 C   TRP A  9 1 3 3
ATOM 045 O   TRP A  9 1 3 4
ATOM 046 CB  TRP A  9 1 4 0
ATOM 047 CG  TRP A  9 1 4 1
ATOM 048 CD1 TRP A  9 1 4 2
ATOM 049 CD2 TRP A  9 1 4 3
ATOM 050 NE1 TRP A  9 1 4 4
ATOM 051 CE2 TRP A  9 2 0 0
ATOM 052 CE3 TRP A  9 2 0 1
ATOM 053 CZ2 TRP A  9 2 0 2
ATOM 054 CZ3 TRP A  9 2 0 3
ATOM 055 CH2 TRP A  9 2 0 4
ATOM 056 N   LEU A 10 2 1 0
ATOM 057 CA  LEU A 10 2 1 1
ATOM 058 C   LEU A 10 2 1 2
ATOM 059 O   LEU A 10 2 1 3
ATOM 060 CB  LEU A 10 2 1 4
ATOM 061 CG  LEU A 10 2 2 0
ATOM 062 CD1 LEU A 10 2 2 1
ATOM 063 CD2 LEU A 10 2 2 2
ATOM 064 N   ASN A 11 2 2 3
ATOM 065 CA  ASN A 11 2 2 4
ATOM 066 C   ASN A 11 2 3 0
ATOM 067 O   ASN A 11 2 3 1
ATOM 068 CB  ASN A 11 2 3 2
ATOM 069 CG  ASN A 11 2 3 3
ATOM 070 OD1 ASN A 11 2 3 4
ATOM 071 ND2 ASN A 11 2 4 0
ATOM 072 N   VAL A 12 2 4 1
ATOM 073 CA  VAL A 12 2 4 2
ATOM 074 C   VAL A 12 2 4 3
ATOM 075 O   VAL A 12 2 4 4
ATOM 076 CB  VAL A 12 3 0 0
ATOM 077 CG1 VAL A 12 3 0 1
ATOM 078 CG2 VAL A 12 3 0 2
ATOM 079 N   THR A 13 3 0 3
ATOM 080 CA  THR A 13 3 0 4
ATOM 081 C   THR A 13 3 1 0
ATOM 082 O   THR A 13 3 1 1
ATOM 083 CB  THR A 13 3 1 2
ATOM 084 OG1 THR A 13 3 1 3
ATOM 085 CG2 THR A 13 3 1 4
ATOM 086 N   PRO A 14 3 2 0
ATOM 087 CA  PRO A 14 3 2 1
ATOM 088 C   PRO A 14 3 2 2
ATOM 089 O   PRO A 14 3 2 3
ATOM 090 CB  PRO A 14 3 2 4
ATOM 091 CG  PRO A 14 3 3 0
ATOM 092 CD  PRO A 14 3 3 1
ATOM 093 N   ALA A 15 3 3 2
ATOM 094 CA  ALA A 15 3 3 3
ATOM 095 C   ALA A 15 3 3 4
ATOM 096 O   ALA A 15 3 4 0
ATOM 097 CB  ALA A 15 3 4 1
ATOM 098 N   THR A 16 3 4 2
ATOM 099 CA  THR A 16 3 4 3
ATOM 100 C   THR A 16 3 4 4
ATOM 101 O   THR A 16 4 0 0
ATOM 102 CB  THR A 16 4 0 1
ATOM 103 OG1 THR A 16 4 0 2
ATOM 104 CG2 THR A 16 4 0 3
ATOM 105 N   LEU A 17 4 0 4
ATOM 106 CA  LEU A 17 4 1 0
ATOM 107 C   LEU A 17 4 1 1
ATOM 108 O   LEU A 17 4 1 2
ATOM 109 CB  LEU A 17 4 1 3
ATOM 110 CG  LEU A 17 4 1 4
ATOM 111 CD1 LEU A 17 4 2 0
ATOM 112 CD2 LEU A 17 4 2 1
ATOM 113 N   ARG A 18 4 2 2
ATOM 114 CA  ARG A 18 4 2 3
ATOM 115 C   ARG A 18 4 2 4
ATOM 116 O   ARG A 18 4 3 0
ATOM 117 CB  ARG A 18 4 3 1
ATOM 118 CG  ARG A 18 4 3 2
ATOM 119 CD  ARG A 18 4 3 3
ATOM 120 NE  ARG A 18 4 3 4
ATOM 121 CZ  ARG A 18 4 4 0
ATOM 122 NH1 ARG A 18 4 4 1
ATOM 123 NH2 ARG A 18 4 4 2
ATOM 124 N   SER A 19 4 4 3
ATOM 125 CA  SER A 19 4 4 4
CONECT 001 002 003 004 005
CONECT 006 007 008 009 010
CONECT 011 012 013 014 015
CONECT 016 017 018 019 020
CONECT 021 022 023 024 025
CONECT 026 027 028 029 030
CONECT 031 032 033 034 035
CONECT 036 037 038 039 040
CONECT 041 042 043 044 045
CONECT 046 047 048 049 050
CONECT 051 052 053 054 055
CONECT 056 057 058 059 060
CONECT 061 062 063 064 065
CONECT 066 067 068 069 070
CONECT 071 072 073 074 075
CONECT 076 077 078 079 080
CONECT 081 082 083 084 085
CONECT 086 087 088 089 090
CONECT 091 092 093 094 095
CONECT 096 097 098 099 100
CONECT 101 102 103 104 105
CONECT 106 107 108 109 110
CONECT 111 112 113 114 115
CONECT 116 117 118 119 120
CONECT 121 122 123 124 125
CONECT 001 006 011 016 021
CONECT 026 031 036 041 046
CONECT 051 056 061 066 071
CONECT 076 081 086 091 096
CONECT 101 106 111 116 121
CONECT 002 007 012 017 022
CONECT 027 032 037 042 047
CONECT 052 057 062 067 072
CONECT 077 082 087 092 097
CONECT 102 107 112 117 122
CONECT 003 008 013 018 023
CONECT 028 033 038 043 048
CONECT 053 058 063 068 073
CONECT 078 083 088 093 098
CONECT 103 108 113 118 123
CONECT 004 009 014 019 024
CONECT 029 034 039 044 049
CONECT 054 059 064 069 074
CONECT 079 084 089 094 099
CONECT 104 109 114 119 124
CONECT 005 010 015 020 025
CONECT 030 035 040 045 050
CONECT 055 060 065 070 075
CONECT 080 085 090 095 100
CONECT 105 110 115 120 125
CONECT 001 026 051 076 101
CONECT 002 027 052 077 102
CONECT 003 028 053 078 103
CONECT 004 029 054 079 104
CONECT 005 030 055 080 105
CONECT 006 031 056 081 106
CONECT 007 032 057 082 107
CONECT 008 033 058 083 108
CONECT 009 034 059 084 109
CONECT 010 035 060 085 110
CONECT 011 036 061 086 111
CONECT 012 037 062 087 112
CONECT 013 038 063 088 113
CONECT 014 039 064 089 114
CONECT 015 040 065 090 115
CONECT 016 041 066 091 116
CONECT 017 042 067 092 117
CONECT 018 043 068 093 118
CONECT 019 044 069 094 119
CONECT 020 045 070 095 120
CONECT 021 046 071 096 121
CONECT 022 047 072 097 122
CONECT 023 048 073 098 123
CONECT 024 049 074 099 124
CONECT 025 050 075 100 125
"""

the making of the parameter cube : 2) filling the crystal

In [3]:
import os
if not os.path.exists('../galleries/Artwork/MCartwork.mp4'):
    import numpy as np
    from tvtk.api import tvtk
    from mayavi.sources.vtk_data_source import VTKDataSource
    from mayavi import mlab
    import MotionClouds as mc
    import itertools
    import os

    def source(dim, bwd):
        """
        Create motion cloud source
        """
        fx, fy, ft = dim
        z = mc.envelope_gabor(fx, fy, ft, B_sf=bwd[0], B_V=bwd[1], B_theta=bwd[2])
        data = mc.rectif(mc.random_cloud(z))
        return data

    def image_data(dim, sp, orig, bwd):
        """
        Create Image Data for the surface from a data source.
        """
        data = source(dim, bwd)
        i = tvtk.ImageData(spacing=sp, origin=orig)
        i.point_data.scalars = data.ravel()
        i.point_data.scalars.name = 'scalars'
        i.dimensions = data.shape
        return i

    def view(dataset):
        """
        Open up a mayavi scene and display the cubeset in it.
        """
        engine = mlab.get_engine()
        src = VTKDataSource(data=dataset)
        engine.add_source(src)
        mlab.pipeline.surface(src, colormap='gray')
        return

    def main(dim, sp=(1, 1, 1), orig=(0, 0, 0), B=(.01, .1, .4)):
        view(image_data(dim=dim, sp=sp, orig=orig, bwd=B))
        return

    def get_nodes_and_edges():
        nodes = dict()
        edges = list()
        atoms = set()
        # Build the graph from the PDB information
        for line in infile.splitlines():
            line = line.split()
            if line[0] == 'ATOM':
                nodes[line[1]] = (line[2], line[6], line[7], line[8])
                atoms.add(line[2])
            elif line[0] == 'CONECT':
                for start, stop in zip(line[1:-1], line[2:]):
                    edges.append((start, stop))
        atoms = list(atoms)
        atoms.sort()
        atoms = dict(zip(atoms, range(len(atoms))))
        # Turn the graph into 3D positions, and a connection list.
        labels = dict()
        x       = list()
        y       = list()
        z       = list()
        scalars = list()
        for index, label in enumerate(nodes):
            labels[label] = index
            this_scalar, this_x, this_y, this_z= nodes[label]
            scalars.append(atoms[this_scalar])
            x.append(float(this_x))
            y.append(float(this_y))
            z.append(float(this_z))
        connections = list()
        for start, stop in edges:
            connections.append((labels[start], labels[stop]))
        x       = np.array(x)
        y       = np.array(y)
        z       = np.array(z)
        scalars = np.array(scalars)
        return x, y, z, connections, scalars

    size = 2**4 ##
    space = 1.5 * size # space between 2 cubes
    N = 5
    idx = np.arange(N)
    pos = idx * space
    Bf = np.logspace(-2., 0.1, N)
    Bv = [0.01, 0.1, 0.5, 1.0, 10.0]
    sigma_div = [2, 3, 5, 8, 13]
    Bo = np.pi/np.array(sigma_div)
    fx, fy, ft = mc.get_grids(size, size, size)
    downscale = 4
    mlab.figure(1, bgcolor=(.6, .6, .6), fgcolor=(0, 0, 0), size=(1600/downscale,1200/downscale))
    mlab.clf()
    do_grid, do_splatter, do_MCs = True, True, True

    """ Generate the cubical graph structure to connect each individual cube """
    if do_grid:
        x, y, z, connections, scalars = get_nodes_and_edges()
        x = x * space + size / 2.
        y = y * space + size / 2.
        z = z * space + size / 2.
        scalars = 0.8*np.ones(x.shape)
        pts = mlab.points3d(x, y, z, scalars, colormap='Blues', scale_factor=5.0, resolution=10)
        pts.mlab_source.dataset.lines = np.array(connections)
        # Use a tube filter to plot tubes on the link, varying the radius with the scalar value
        tube = mlab.pipeline.tube(pts, tube_sides=15, tube_radius=1.5)
        mlab.pipeline.surface(tube, color=(0.4, 0.4, 0.4) )

    """ Gaussian Splatter """
    if do_splatter:
        # Visualize the local atomic density
        x_, y_, z_ = np.mgrid[0:(N*space), 0:(N*space), 0:(N*space)]
        response = np.exp(-((x_-4*space)**2 +(y_- 1*space)**2 +((z_-3.4*space)/2.)**2)/2/(1.3*space)**2)
        mlab.contour3d(response, opacity=0.5)

    """ Generate the Motion Clouds cubes """
    if do_MCs:
        for i, j, k in list(itertools.product(idx, idx, idx)):
            main(dim=(fx, fy, ft), orig=(pos[i], pos[j], pos[k]), B=(Bf[i], Bv[k], Bo[j]))
    elevation = 90. - 18.
    view = mlab.view(azimuth=290., elevation=elevation, distance='auto', focalpoint='auto')
    distance = view[2]
    mlab.savefig('MCartwork.png')
    if True:
        N_frame = 64 ##
        for i_az, azimuth in enumerate(np.linspace(0, 360, N_frame, endpoint=False)):
            mlab.view(*view)
            mlab.view(azimuth=azimuth)
            mlab.view(distance=distance*(.7 + 0.3*np.cos(azimuth*2*np.pi/360.)))
            mlab.savefig('_MCartwork%03d.png' % i_az, size=(1600/downscale, 1200/downscale))
        os.system('ffmpeg -y -i _MCartwork%03d.png  MCartwork.mpg')
        os.system('ffmpeg -y -i _MCartwork%03d.png  MCartwork.mp4')
        os.system('convert -delay 8 -loop 0 -colors 256  -scale 25% -layers Optimize  _MCartwork*.png  MCartwork.gif')
        os.system('rm _MCartwork*')

the making of the parameter cube : 3) post-production

In [4]:
from IPython.display import display, Image
#display(Image(filename='../galleries/Artwork/MCartwork.png', embed=True))

Image(url='http://motionclouds.invibe.net/galleries/Artwork/MCartwork.png', embed=False)
Out[4]:

Motion Plaid

MotionPlaids: from MotionClouds components to a plaid-like stimulus

In [1]:
import numpy as np
import MotionClouds as mc
downscale = 2
fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame//downscale)
name = 'MotionPlaid'

Plaids are usually created by adding two moving gratings (the components) to form a new stimulus (the pattern). Such stimuli are crucial to understand, for instance, how information about motion is represented and processed; their neural mechanisms have been extensively studied, notably by Anthony Movshon at NYU. One shortcoming is that the components create interference patterns when they are added (as in a Moiré pattern). The question remains whether everything we know about component vs. pattern processing stems from these interference patterns or really from the processing of the two components as a whole. As such, Motion Clouds are ideal candidates because they are generated with random phases: by construction, there should not be any localized interference between components. Let's verify that:

Definition of parameters:

In [2]:
theta1, theta2, B_theta = np.pi/4., -np.pi/4., np.pi/32

This figure shows how one can create MotionCloud stimuli that specifically target component and pattern cells. The different lines of this table show respectively:

  • Top: one motion cloud component (with a strong selectivity toward the orientation perpendicular to direction) heading in the upper diagonal,
  • Middle: a similar motion cloud component following the lower diagonal,
  • Bottom: the addition of both components; perceptually, the horizontal direction is predominant.

In [3]:
print( mc.in_show_video.__doc__)

    Columns represent isometric projections of a cube. The left column displays
    iso-surfaces of the spectral envelope by displaying enclosing volumes at 5
    different energy values with respect to the peak amplitude of the Fourier spectrum.
    The middle column shows an isometric view of the faces of the movie cube.
    The first frame of the movie lies on the x-y plane, the x-t plane lies on the
    top face and motion direction is seen as diagonal lines on this face (vertical
    motion is similarly see in the y-t face). The third column displays the actual
    movie as an animation.

    Given a name, displays the figures corresponding to the Fourier spectra, the
    stimulus cubes and movies within the notebook.

    

Component one:

In [4]:
diag1 = mc.envelope_gabor(fx, fy, ft, theta=theta1, V_X=np.cos(theta1), V_Y=np.sin(theta1), B_theta=B_theta)
name_ = name + '_comp1'
mc.figures(diag1, name_, seed=12234565)
mc.in_show_video(name_)

Component two:

In [5]:
diag2 = mc.envelope_gabor(fx, fy, ft, theta=theta2, V_X=np.cos(theta2), V_Y=np.sin(theta2), B_theta=B_theta)
name_ = name + '_comp2'
mc.figures(diag2, name_, seed=12234565)
mc.in_show_video(name_)

The pattern is the sum of the two components:

In [6]:
name_ = name
mc.figures(diag1 + diag2, name, seed=12234565)
mc.in_show_video(name_)

A grid of plaids shows the switch from transparency to pattern motion

Script done in collaboration with Jean Spezia.

In [7]:
import os
import numpy as np
import MotionClouds as mc

name = 'MotionPlaid_9x9'
if mc.check_if_anim_exist(name):
    B_theta = np.pi/32
    N_orient = 9
    downscale = 4
    fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame//downscale)
    mov = mc.np.zeros(((N_orient*mc.N_X//downscale), (N_orient*mc.N_X//downscale), mc.N_frame//downscale))
    i = 0
    j = 0
    for theta1 in np.linspace(0, np.pi/2, N_orient)[::-1]:
        for theta2 in np.linspace(0, np.pi/2, N_orient):
            diag1 = mc.envelope_gabor(fx, fy, ft, theta=theta1, V_X=np.cos(theta1), V_Y=np.sin(theta1), B_theta=B_theta)
            diag2 = mc.envelope_gabor(fx, fy, ft, theta=theta2, V_X=np.cos(theta2), V_Y=np.sin(theta2), B_theta=B_theta)
            mov[i*mc.N_X//downscale:(i+1)*mc.N_X//downscale, j*mc.N_Y//downscale:(j+1)*mc.N_Y//downscale, :] = mc.random_cloud(diag1 + diag2, seed=1234)
            j += 1
        j = 0
        i += 1
    mc.anim_save(mc.rectif(mov, contrast=.99), os.path.join(mc.figpath, name))
In [8]:
mc.in_show_video(name)

As in (Rust, 06), we show in this table the concatenation of 9x9 MotionPlaids where the angles of the components vary respectively along the horizontal and vertical axes.
The diagonal from the bottom-left to the top-right corner shows the addition of two component MotionClouds with the same direction: these are therefore also instances of the same Motion Cloud and thus consist of a single component.
As one gets further away from this diagonal, the angle between both components increases, as can be seen in the figure below. Note that the first and last columns are different instances of similar MotionClouds, just as the first and last lines of the table.

A parametric exploration on MotionPlaids

In [9]:
N_orient = 8
downscale = 2
fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame)
theta = 0
for dtheta in np.linspace(0, np.pi/2, N_orient):
    name_ = name + 'dtheta_' + str(dtheta).replace('.', '_')
    diag1 = mc.envelope_gabor(fx, fy, ft, theta=theta + dtheta, V_X=np.cos(theta + dtheta), V_Y=np.sin(theta + dtheta), B_theta=B_theta)
    diag2 = mc.envelope_gabor(fx, fy, ft, theta=theta - dtheta, V_X=np.cos(theta - dtheta), V_Y=np.sin(theta - dtheta), B_theta=B_theta)
    mc.figures(diag1 + diag2, name_, seed=12234565)
    mc.in_show_video(name_)

For clarity, we display MotionPlaids as the angle between both components increases from 0 to $\pi/2$.
Left column displays iso-surfaces of the spectral envelope by displaying enclosing volumes at 5 different energy values with respect to the peak amplitude of the Fourier spectrum.
Right column of the table displays the actual movie as an animation.

Instructions for Psychophysicists

making experiments using motionclouds

making psychophysics using psychopy

installation (preparation 3min, cooking / downloading 10 min)

  • it is easy to create a simple psychophysical experiment using a combination of PsychoPy and MotionClouds. Simply download the standalone psychopy binary for your computer and try to load the example script psychopy_competing.py.
  • for this, you may:
    1. clone / download the MC repository on github.
    2. download PsychoPy
  • then, open the psychopy_competing.py file in the editor (the so-called "PsychoPy Coder")

running

  • Hit the run button (the green round button with a runner on it); it will then:
  • tell you about some configuration parameters (be sure to enter the correct size of your screen)
  • pre-generate data
  • run the experiment, that is, present a sequence of trials consisting of:
    1. a movie presentation,
    2. a blank screen where the program waits for a key press: UP arrow if you saw the movie going up, DOWN if down, or Q / Esc to exit

results

  • the results are displayed at the end, showing that:
  • the movies consisted of 2 MotionClouds in opposite directions (UP and DOWN) added with different contrasts,
  • the percept was going up or down (on the y-axis) as a function of this contrast (x-axis),
  • it seems I have a slight bias for UP movements, as the contrast of equal probability seems lower than 1/2 (a proper analysis would require a fit).

TODO: Frequently asked questions

Experimental protocol

These instructions were given for a VSG system but should work on similar platforms:

  1. Measure your experimental settings (screen size, frame rate, resolution, viewing distance), as these values will constrain your spatial and temporal sampling frequencies.

  2. Calibrate your screen (gamma / luminance).

  3. Experimental constants (with some default values):

    a. $D$ : Viewing distance = 570 mm

    b. $S$: screen size = 390 mm

    c. $dt$ : Max frame period = 0.01 s

    d. $\text{VA}$ : Stimulus width in degrees of visual angle at viewing distance $D$: 38.0958

    e. $f_{d}$ : Desired spatial frequency in cycles per degree - cpd

    f. $V_{drift}$: Drifting velocity in degrees per second - d/s

    g. $X$, $Y$ : sampling step in degrees.

    h. $F_X$, $F_Y$: spatial sampling rates in cpd

    i. $T$ : Stimulus duration in seconds.

    j. $\alpha$: mean orientation of the frequency component (called $\theta$ in the scripts)

    k. $\theta$: direction of image motion.

  4. Parameters for the stimulus:

    a. ($N_X$, $N_Y$) : Stimuli’s frame size in pixels as generated in Fourier space.

    b. ($n_X$, $n_Y$): Screen resolution = 640 x 480 px (approx 0.06 deg/px)

    c. $N_{frame}$: Number of movie frames as generated in Fourier space.

    d. $f_{pref}$ : Central frequency normalized in Fourier space

    e. $V_X$: leftward / rightward motion

    f. $V_Y$: upward / downward motion

    g. $W_X$, $W_Y$: spatial sampling frequencies in Fourier space

How do I compute a visual angle?

$$VA = 2\arctan\left( \frac{S}{2D} \right)$$

where $S$ is stimulus size on the screen (X or Y) and $D$ is the viewing distance.
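As a quick check (a minimal sketch; the values are the defaults quoted above):

import numpy as np

def visual_angle(S, D):
    # visual angle in degrees subtended by a stimulus of size S at viewing distance D
    return 2 * np.degrees(np.arctan(S / (2 * D)))

print(visual_angle(S=390., D=570.))  # about 37.8 degrees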

How do I set the spatial sampling?

$X = \frac{\text{VA}}{n_{x}}$ is the spatial sampling step along the x-axis; along the y-axis, the sampling step $Y$ is computed in the same way.

In practice, the spatial sampling step ($X$ and/or $Y$) cannot be larger than the semi-period of the finest spatial frequency ($F_X$ and/or $F_Y$) represented in the image: the image cannot contain frequency components higher than the Nyquist frequency along the two axes.
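For instance (a sketch using the screen resolution and visual angle quoted above):

VA, n_x = 38.0958, 640  # visual angle [deg] and horizontal resolution [px]
X = VA / n_x            # spatial sampling step [deg/px]
F_N = 1. / (2 * X)      # Nyquist frequency [cpd]
print(X, F_N)           # about 0.0595 deg/px and 8.4 cpd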

What is the unit for frequencies?

The frequency distribution is periodic with period $F_S$. The normalized frequency ranges from 0.0 to 1.0, which corresponds to a real frequency range of 0 to the sampling frequency $F_S$.

The normalized frequency also wraps around 1.0 so a normalized frequency of 1.1 is equivalent to 0.1.

When the actual frequency, $f_t$, has units of Hz, for example, the normalized frequencies, also denoted by $f_t$, have units of cycles per sample, and the periodicity of the normalized distribution is 1.

What is the maximal frequency?

If a sampled signal is real-valued the periodicity of the frequency distribution is still $F_S$. But due to symmetry, it is completely defined by the content within a span of just $F_S/2$ (Nyquist frequency). Accordingly, some applications use that as the normalization reference (and the resulting units are half-cycles per sample).

Normalization produces a distribution that is independent of the sample-rate, and thus one plot is sufficient for all possible sample-rates.

fx, fy, ft = np.mgrid[(-N_X//2):((N_X-1)//2 + 1),
                      (-N_Y//2):((N_Y-1)//2 + 1),
                      (-N_frame//2):((N_frame-1)//2 + 1)]

fx, fy, ft = fx*1./N_X, fy*1./N_Y, ft*1./N_frame

The range is always $(-0.5, 0.5)$ which corresponds to the maximum frequency we can represent.

Example: for N_X = N_Y = 512 and N_frame = 128,

the Nyquist limit is $2^{-1}\,W\ cyc/width$, where $W$ is the width of the stimulus.

The highest frequency band considered will be centered at $2^{-2}\,W\ cyc/width$.

The blobs change spatial frequency in octave steps; the scale-$k$ band will have a spatial frequency pass band centered at $2^{-(k + 2)}\,W\ cyc/width$.

Why define the bandwidth with B_sf and B_V rather than B_tf?

Easy: it's just a design choice. We could have used B_tf, but defining B_sf and then slicing the envelope with a B_V envelope was more intuitive.

Convention used in DFT

From numpy.fft.ifftn(a, s=None, axes=None)

From http://docs.scipy.org/doc/numpy/reference/routines.fft.html:

The values in the result follow so-called “standard” order: If A = fft(a,n), then A[0] contains the zero-frequency term (the mean of the signal), which is always purely real for real inputs. Then A[1:n/2] contains the positive-frequency terms, and A[n/2+1:] contains the negative-frequency terms, in order of decreasingly negative frequency.

For an even number of input points, A[n/2] represents both positive and negative Nyquist frequency (+/-0.5), and is also purely real for real input.

For an odd number of input points, A[(n-1)/2] contains the largest positive frequency, while A[(n+1)/2] contains the largest negative frequency. The routine np.fft.fftfreq(A) returns an array giving the frequencies of corresponding elements in the output.

The routine np.fft.fftshift(A) shifts transforms and their frequencies to put the zero-frequency components in the middle, and np.fft.ifftshift(A) undoes that shift. (...)

When the input is purely real, its transform is Hermitian, i.e., the component at frequency $f$ is the complex conjugate of the component at frequency $-f$, which means that for real inputs there is no information in the negative frequency components that is not already available from the positive frequency components. The family of rfft functions is designed to operate on real inputs, and exploits this symmetry by computing only the positive frequency components, up to and including the Nyquist frequency. Thus, n input points produce n/2+1 complex output points. The inverses of this family assume the same symmetry of their input, and for an output of n points use n/2+1 input points.

Theoretical background: Nyquist–Shannon sampling theorem

Our function is band-limited to $[-F_B, F_B]$, where $B$ stands for bandwidth.

The Nyquist sampling theorem provides a prescription for the nominal sampling interval required to avoid aliasing and to perfectly reconstruct the signal:

The sampling frequency should be at least twice the highest frequency contained in the signal.

$$F_S \geq 2 F_B$$

$F_N = \frac{F_S}{2}$ is called the Nyquist frequency.

Why should we use polar coordinates for frequencies?

For the $f_x$ and $f_y$ axes, the unit is cycles/pixel; for the $f_t$ axis, it is cycles/frame.

$f_s = \sqrt{f_x^2 + f_y^2}$ is the magnitude of the spatial frequency in polar coordinates. The direction is $$ \theta = \arctan(f_x/f_y) $$ (w.r.t. Cartesian coordinates); it can be seen as the rotation of the Gaussian envelope. Conversely:

$f_{x_{\text{norm}}} = f_s \cos\theta$

$f_{y_{\text{norm}}} = f_s \sin\theta$

This is not a deformation of the envelope: projections (marginalizations, I would say) of Gaussians are still Gaussians. In particular, the temporal frequency bandwidth relates to the spatial frequency bandwidth as $B_{tf} = B_{sf} \cdot V$.

To convert a desired spatial frequency from cycles per degree to normalized units (cycles per pixel):

$$f_{\text{spatial}}\ [cyc/px] = f_{\text{pref}}\ [cpd] \cdot VA / N_X$$

Example: $f_x = 0.25\ cyc/px$ (a vertical grating).

For speed, the maximal displacement that can be represented corresponds to crossing the stimulus width within the stimulus duration, $V_{\max} = VA / (N_f \cdot dt)$ in deg/s, so that the normalized speed is

$$V_{x_{\text{norm}}} = \frac{V_{\text{drift}} \cdot T}{VA_x}$$

(using $VA_x$, or $N_X$ in pixel units, for a horizontal displacement).

Example: $V_{\max} = 30\ deg / (64\ frames \times 0.02\ s/frame) \approx 23\ deg/s$; for $V_{\text{drift}} = 16\ deg/s$, this gives $V_{x_{\text{norm}}} = 0.68$.

For a drifting grating (no rotation in the x-y plane), the temporal frequency is then

$$f_t\ [cyc/frame] = V_{x_{\text{norm}}} \cdot f_{s_{\text{norm}}}$$

e.g. $f_t = 0.68 \times 0.0315 \approx 0.02$.
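Putting these conversions together (a minimal sketch with assumed values loosely based on the examples above, not taken from an actual experiment):

VA, N_X = 30., 640          # stimulus width [deg] and [px]
N_f, dt = 64, 0.02          # number of frames and frame period [s]
f_pref, V_drift = 0.5, 16.  # desired frequency [cpd] and drift speed [deg/s]

f_spatial = f_pref * VA / N_X  # [cyc/px]
T = N_f * dt                   # stimulus duration [s]
V_x_norm = V_drift * T / VA    # normalized speed (about 0.68 here)
f_t = V_x_norm * f_spatial     # [cyc/frame] for a drifting grating
print(f_spatial, V_x_norm, f_t)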

Log-Gabor filter construction process in the Fourier domain (Hanssen and Hess, 2006, Journal of Vision)

an example physiology experiment using VSDI

In [1]:
%%writefile ../files/experiment_VSDI.py
#!/usr/bin/env python
"""
experiment_VSDI.py
Experiment designed for optical imaging, showing a conversion of sizes from Fourier
to screen coordinates and an export to a zipped file containing BMP files (as the
video card has limited memory).

(c) Paula Sanz Leon - INT/CNRS


"""
import os
import scipy
import MotionClouds as mc
import numpy as np

# uncomment to preview movies
# vext, display = None, True

#------------------------- Zipped grating ------------------------------- #

name = 'zipped_grating'

#initialize - 
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
sf_0 = 0.2
B_sf = sf_0 / 4.
alpha = 1.0
V_X = 0.5
B_V = V_X / 10.

name_ = mc.figpath + name
for seed in range(424242, 424242+8):
    name_ = mc.figpath + name + '-seed-' + str(seed)
    mc.figures_MC(fx, fy, ft, name_, B_V=B_V, sf_0=sf_0, B_sf=B_sf, V_X=V_X, theta=np.pi/4., alpha=alpha, seed=seed, vext='.zip')

#-------------------- Narrowband vs Broadband experiment ---------------- #
vext = '.mpg'
#vext = '.mat'
#vext = '.zip'
#display = False

# Initialize frequency cube
N_X = 640
N_Y = 480
N_frame = 30 # a full period in time frames
fx, fy, ft = mc.get_grids(N_X, N_Y, N_frame)

# Experimental constants 
contrast = 0.5
seeds = 1
VA = 38.0958       # VSG constant for a viewing distance of 570 mm.
framerate = 50.    # Refreshing rate in [Hz]
T = 0.6            # Stimulus duration [s] 
f_d = 0.5         # Desired spatial frequency [cpd]

# Clouds parameters      
B_V = 0.2     # BW temporal frequency (speed plane thickness)
B_sf = 0.15   # BW spatial frequency
theta = 0.0   # Central orientation
B_theta = np.pi/12 # BW orientation
verbose = False
alpha = 1.0

# Get normalised units
sf_0=0.1
V_X=0.5

def physicalUnits2discreteUnits(sf0_cpd, Bsf_cpd, v_dps, B_dps,pixelPitch,viewingDistance,frameRate):
    """%   % laptop monitor
    %   pixelPitch      = 0.22/10; % in cm
    %   viewingDistance = 50;
    %   sf0_cpd         = 4 ;
    %   Bsf_cpd         = 1 ;
    %   v_dps           = 4 ;
    %   B_dps           = 1 ;
    %   frameRate       = 20 ;
    % convert to machine units
    """
    cmPerDegree    = 2*viewingDistance*np.tan(np.deg2rad(1/2.))  # cm per degree (tand in the original MATLAB)
    pxPerDegree    = cmPerDegree/pixelPitch
    sf0            = sf0_cpd/pxPerDegree
    Bsf            = Bsf_cpd/pxPerDegree

    v              = v_dps/frameRate*pxPerDegree
    Bv             = B_dps/frameRate*pxPerDegree
    return sf0, Bsf, v, Bv


# take monitor parameters
# planar display
monitorRefresh = 60
pixelPitch     = 0.2865/10  # pixel size in cm

# experiment parameters
viewingDistance = 70

frameRefresh   = monitorRefresh/3
stimModRate    = 1
numberOfReps   = 12
framesPerMod   = frameRefresh/stimModRate/2


# Masks
# gaussian mask
sigma_mask_x = 0.15
sigma_mask_y = 0.2
x, y, t = mc.get_grids(N_X, N_Y, N_frame)
n_x, n_y = N_X, N_Y
gauss = np.exp(-(((x-172./n_x)**2/(2*sigma_mask_x**2)) + (((y-108./n_y)**2)/(2*sigma_mask_y**2))))


def tukey(n, r=0.5):
    '''The Tukey window, also known as the tapered cosine window, can be regarded as a 
    cosine lobe of width r * N / 2 that is convolved with a rectangle window of width 
    (1 - r / 2). At r = 1 it becomes rectangular, and at r = 0 it becomes a Hann window.
    http://www.mathworks.com/access/helpdesk/help/toolbox/signal/tukeywin.html
    '''
    # Special cases
    if r <= 0:
        return np.ones(n)  # rectangular window
    elif r >= 1:
        return np.hanning(n)

    # Normal case
    x = np.linspace(0, 1, n)
    w = np.ones(x.shape)

    # first condition 0 <= x < r/2
    first_condition = x<r/2
    w[first_condition] = 0.5 * (1 + np.cos(2*np.pi/r * (x[first_condition] - r/2) ))

    # second condition already taken care of

    # third condition 1 - r / 2 <= x <= 1
    third_condition = x>=(1 - r/2)
    w[third_condition] = 0.5 * (1 + np.cos(2*np.pi/r * (x[third_condition] - 1 + r/2))) 

    return w

# Tukey mask - fading effect
tw_x = tukey(n=n_x, r=0.15)
tw_y = tukey(n=n_y, r=0.15)
w = np.tile(((np.outer(tw_y,tw_x))), (N_frame,1,1))
tukey_mask = w.T


# Get Random Clouds
name_ = mc.figpath + name

for seed in [123456 + step for step in range(seeds)]:
    name__ = mc.figpath + name + '-seed-' + str(seed) + '-sf0-' + str(sf_0).replace('.', '_') + '-V_X-' + str(V_X).replace('.', '_')
    # broadband
    z = mc.envelope_gabor(fx, fy, ft, B_sf=B_sf, sf_0=sf_0, theta=theta, B_V=B_V, B_theta=B_theta, alpha=alpha)
    movie = mc.random_cloud(z, seed=seed)
    for label, mask in zip(['_mask', '_tukey_mask'], [gauss, tukey_mask]):
        name_ = name__ + '-cloud-' + label
        if mc.check_if_anim_exist(name_):
            mc.anim_save(mc.rectif(movie*mask), name_, display=False, vext=vext)

    # narrowband
    z = mc.envelope_gabor(fx, fy, ft, B_sf=B_sf/10., sf_0=sf_0, theta=theta, B_V=B_V, B_theta=B_theta, alpha=alpha)
    movie = mc.random_cloud(z, seed=seed)
    for label, mask in zip(['_mask', '_tukey_mask'], [gauss, tukey_mask]):
        name_ = name__ + '-blob-' + label
        if mc.check_if_anim_exist(name_):
            mc.anim_save(mc.rectif(movie*mask), name_, display=False, vext=vext)
Overwriting ../files/experiment_VSDI.py

Testing Color

Testing the Color

In [1]:
import numpy as np
np.set_printoptions(precision=3, suppress=True)
import pylab
import matplotlib.pyplot as plt
%matplotlib inline
In [2]:
import MotionClouds as mc
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
print(mc.envelope_color.__doc__)
    Returns the color envelope.

    In some circonstances, it is desirable to create a texture with a different "color"
    than that of natural images (that is where the envelope is in 1/f).

    Run 'test_color' notebook to see the effect of alpha
    alpha = 0 white
    alpha = 1 pink
    alpha = 2 red/brownian
    (see http://en.wikipedia.org/wiki/1/f_noise )
    
In [3]:
import os
name = 'color'
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
z = mc.envelope_color(fx, fy, ft, alpha=1.0, ft_0=1.)
mc.figures(z, name)
mc.in_show_video(name)
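Schematically (a sketch of the formula implemented by envelope_color, up to the handling of the zero-frequency term), the envelope decays as a power law of the frequency radius:

$$\mathcal{E}_{color}(f_x, f_y, f_t) \propto \frac{1}{f_r^{\alpha}}, \quad f_r = \sqrt{f_x^2 + f_y^2 + (f_t/f_{t_0})^2}$$

so that $\alpha=0$ gives white noise, $\alpha=1$ pink noise and $\alpha=2$ red/brownian noise.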
In [4]:
# explore parameters
for alpha in [0.001, 0.5, 1.0, 1.5, 2.0]:
    # resp. white(0), pink(1), red(2) or brownian noise (see http://en.wikipedia.org/wiki/1/f_noise
    name_ = name + '-alpha-' + str(alpha).replace('.', '_')
    z = mc.envelope_color(fx, fy, ft, alpha)
    mc.figures(z, name_)
    mc.in_show_video(name_)
In [5]:
for ft_0 in [0.125, 0.25, 0.5, 1., 2., 4., np.inf]:# time space scaling
    name_ = name + '-ft_0-' + str(ft_0).replace('.', '_')
    z = mc.envelope_color(fx, fy, ft, alpha=1., ft_0=ft_0)
    mc.figures(z, name_)
    mc.in_show_video(name_)
In [6]:
for size in range(5, 7):
    N_X, N_Y, N_frame = 2**size, 2**size, 2**size
    fx, fy, ft = mc.get_grids(N_X, N_Y, N_frame)
    ft_0 = N_X/float(N_frame)
    name_ = name + '-size-' + str(size).replace('.', '_')
    z = mc.envelope_color(fx, fy, ft, alpha=1., ft_0=ft_0)
    mc.figures(z, name_)
    mc.in_show_video(name_)

Testing Radial

Testing spatial frequency component

In [1]:
import numpy as np
np.set_printoptions(precision=5, suppress=True)
import pylab
import matplotlib.pyplot as plt
%matplotlib inline
#!rm -fr ../files/radial*
In [2]:
import MotionClouds as mc
import os
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
help(mc.envelope_radial)
Help on function envelope_radial in module MotionClouds:

envelope_radial(fx, fy, ft, sf_0=0.125, B_sf=0.1, ft_0=inf, loggabor=True)
    Returns the radial frequency envelope:
    
    Selects a preferred spatial frequency ``sf_0`` and a bandwidth ``B_sf``.
    
    Run the 'test_radial' notebook to see the explore the effect of ``sf_0`` and ``B_sf``, see
    http://motionclouds.invibe.net/posts/testing-radial.html

In [3]:
name = 'radial'
#initialize
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)

mc.figures_MC(fx, fy, ft, name)
verbose = False
mc.in_show_video(name)
In [4]:
N = 5
np.logspace(-5, 0., 7, base=2)
Out[4]:
array([ 0.03125,  0.05568,  0.09921,  0.17678,  0.31498,  0.56123,  1.     ])

exploring different frequency bandwidths

In [5]:
for B_sf in np.logspace(-4, 0., N, base=2):
    name_ = name + '-B_sf-' + str(B_sf).replace('.', '_')
    mc.figures_MC(fx, fy, ft, name_, B_sf=B_sf, verbose=verbose)
    mc.in_show_video(name_)

exploring different (median) central frequencies

In [6]:
for sf_0 in [0.01, 0.05, 0.1, 0.2, 0.4, 0.8]:
    name_ = name + '-sf_0-' + str(sf_0).replace('.', '_')
    mc.figures_MC(fx, fy, ft, name_, sf_0=sf_0, verbose=verbose)
    mc.in_show_video(name_)

Note that some information is lost in the last case as $f_0$ exceeds the Nyquist frequency.

exploring different speed bandwidths

In [7]:
for B_V in [0, 0.001, 0.01, 0.05, 0.1, 0.5, 1.0, 10.0]:
    name_ = name + '-B_V-' + str(B_V).replace('.', '_')
    mc.figures_MC(fx, fy, ft, name_, B_V=B_V, verbose=verbose)
    mc.in_show_video(name_)

Testing without using a log-normal distribution

In [8]:
for B_sf in np.logspace(-4, 0., N, base=2):
    name_ = name + '-B_sf_nologgabor-' + str(B_sf).replace('.', '_')
    mc.figures_MC(fx, fy, ft, name_, B_sf=B_sf, loggabor=False, verbose=verbose)
    mc.in_show_video(name_)    

checking that sf_0 is consistent with the number of cycles per period

By design, MotionClouds are created to be characterized by a given frequency range. Let's check on the raw image that this is correct.

In [9]:
downscale = 4
fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame//downscale)
N_X, N_Y, N_frame = fx.shape

def spatial_xcorr(im):
    import scipy.ndimage as nd
    N_X, N_Y, N_frame = im.shape
    Xcorr = np.zeros((N_X, N_Y))
    for t in range(N_frame):
        Xcorr += nd.correlate(im[:, :, t], im[:, :, t], mode='wrap')
    return Xcorr

mc_i = mc.envelope_gabor(fx, fy, ft, 
                         V_X=0., V_Y=0., 
                         sf_0=0.15, B_sf=0.03)
im = mc.random_cloud(mc_i)
Xcorr = spatial_xcorr(im)
plt.imshow(Xcorr)
Out[9]:
<matplotlib.image.AxesImage at 0x115ce2128>
In [10]:
xcorr = Xcorr.sum(axis=1)
xcorr /= xcorr.max()
plt.plot(np.arange(-N_X/2, N_X/2), xcorr)
print('First maximum reached at index ' + str(np.argmax(xcorr)) + ' - that is normal as we plot the autocorrelation')
print('Max = ', xcorr[np.argmax(xcorr)])
First maximum reached at index 32 - that is normal as we plot the autocorrelation
Max =  1.0

To find the period, intuitively, one could just find the zero of the gradient:

In [11]:
half_xcorr = xcorr[N_X//2:]
half_xcorr_gradient = np.gradient(xcorr)[N_X//2:]
plt.plot(half_xcorr)
plt.plot(half_xcorr_gradient, 'g')
idx = np.argmax(half_xcorr_gradient>0)
print('First zero gradient reached at index ' + str(idx) + ' - at the minimum of the xcorr')
print (half_xcorr_gradient[idx-1], half_xcorr_gradient[idx])
First zero gradient reached at index 4 - at the minimum of the xcorr
-0.210252703916 0.435683947279

We thus get, in pixels, half the width of a period.

More precisely, let's fit with a sinusoid (that's yet another FFT...):

In [12]:
xcorr_MC = np.absolute(mc_i**2).sum(axis=-1).sum(axis=-1)
half_xcorr_MC = xcorr_MC[N_X//2:]
half_xcorr_MC /= half_xcorr_MC.max()
plt.plot(half_xcorr_MC, 'r--')

half_xcorr_FT = np.absolute(np.fft.fft(xcorr))[:N_X//2]
half_xcorr_FT /= half_xcorr_FT.max()
plt.plot(half_xcorr_FT)
idx = np.argmax(half_xcorr_FT)
print('Max auto correlation reached at frequency ' + str(idx))
print('Max auto correlation reached with a period ' + str(1.*N_X/idx))
Max auto correlation reached at frequency 9
Max auto correlation reached with a period 7.11111111111

Wrapping things up:

In [13]:
sf_0_ = [0.01, 0.05, 0.1, 0.2, 0.4, 0.8]
sf_0_ = np.linspace(0.01, .5, 12)
downscale = 4
fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame//downscale)

def period_px(mc_i):
    N_X, N_Y, N_frame = mc_i.shape  # use the envelope's own shape, not the global im
    xcorr_MC = np.absolute(mc_i**2).sum(axis=-1).sum(axis=-1)
    half_xcorr_MC = xcorr_MC[N_X//2:]
    idx = np.argmax(half_xcorr_MC)
    return np.fft.fftfreq(N_X)[idx]

ppx = np.zeros_like(sf_0_)
for i, sf_0 in enumerate(sf_0_):
    mc_i = mc.envelope_gabor(fx, fy, ft, 
                             V_X=0., V_Y=0., 
                             sf_0=sf_0, B_sf=0.03)
    ppx[i] = period_px(mc_i)
    
plt.plot(sf_0_, ppx)
plt.xlabel('sf_0 [a. u.]')
plt.ylabel('observed freq [a. u.]')
Out[13]:
<matplotlib.text.Text at 0x11607a320>

Note that the range of frequencies is accessible using:

In [14]:
print(np.fft.fftfreq(fx.shape[0]))
[ 0.       0.01562  0.03125  0.04688  0.0625   0.07812  0.09375  0.10938
  0.125    0.14062  0.15625  0.17188  0.1875   0.20312  0.21875  0.23438
  0.25     0.26562  0.28125  0.29688  0.3125   0.32812  0.34375  0.35938
  0.375    0.39062  0.40625  0.42188  0.4375   0.45312  0.46875  0.48438
 -0.5     -0.48438 -0.46875 -0.45312 -0.4375  -0.42188 -0.40625 -0.39062
 -0.375   -0.35938 -0.34375 -0.32812 -0.3125  -0.29688 -0.28125 -0.26562
 -0.25    -0.23438 -0.21875 -0.20312 -0.1875  -0.17188 -0.15625 -0.14062
 -0.125   -0.10938 -0.09375 -0.07812 -0.0625  -0.04688 -0.03125 -0.01562]

Testing Grating

testing the orientation component

On this page, we concentrate on the orientation variable and test the effect of changing the average orientation as well as the bandwidth.

In [1]:
import numpy as np
np.set_printoptions(precision=3, suppress=True)
import pylab
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
In [2]:
import MotionClouds as mc
import os
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
help(mc.envelope_orientation)
Help on function envelope_orientation in module MotionClouds:

envelope_orientation(fx, fy, ft, theta=0.0, B_theta=0.19634954084936207)
    Returns the orientation envelope:
    We use a von-Mises distribution on the orientation:
    - mean orientation is ``theta`` (in radians),
    - ``B_theta`` is the bandwidth (in radians). It is equal to the standard deviation of the Gaussian
    envelope which approximate the distribution for low bandwidths. The Half-Width at Half Height is
    given by approximately np.sqrt(2*B_theta_**2*np.log(2)).
    
    Run 'testing-grating.py' notebook to see the effect of changing theta and B_theta, see
    http://motionclouds.invibe.net/posts/testing-grating.html

In [3]:
name = 'grating'
#initialize
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)

z = mc.envelope_gabor(fx, fy, ft)
mc.figures(z, name, vext='.gif')
mc.figures(z, name)
mc.in_show_video(name)
In [4]:
# explore parameters
for sigma_div in [1, 2, 3, 5, 8, 13 ]:
    name_ = name + '-largeband-B_theta-pi-over-' + str(sigma_div).replace('.', '_')
    z = mc.envelope_gabor(fx, fy, ft, B_theta=np.pi/sigma_div)
    mc.figures(z, name_)
    mc.in_show_video(name_)
In [5]:
for div in [1, 2, 4, 3, 5, 8, 13, 20, 30]:
    name_ = name + '-theta-pi-over-' + str(div).replace('.', '_')
    z = mc.envelope_gabor(fx, fy, ft, theta=np.pi/div)
    mc.figures(z, name_)
    mc.in_show_video(name_)
In [6]:
V_X = 1.0
for sigma_div in [1, 2, 3, 5, 8, 13 ]:
    name_ = name + '-B_theta-pi-over-' + str(sigma_div).replace('.', '_') + '-V_X-' + str(V_X).replace('.', '_')
    z = mc.envelope_gabor(fx, fy, ft, V_X=V_X, B_theta=np.pi/sigma_div)
    mc.figures(z, name_)
    mc.in_show_video(name_)
In [7]:
for B_sf in [0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.8]:
    name_ = name + '-B_sf-' + str(B_sf).replace('.', '_')
    z = mc.envelope_gabor(fx, fy, ft, B_sf=B_sf)
    mc.figures(z, name_)
    mc.in_show_video(name_)

tuning the bandwidth

In the MotionClouds script, we define the orientation envelope based on a von Mises distribution:

In [8]:
def vonmises(th, theta, kappa, norm=True):
    if kappa==0:
        env = np.ones_like(th) 
    elif kappa==np.inf:
        env = np.zeros_like(th)
        env[np.argmin(th < theta)] = 1.
    else:
        env = np.exp(kappa*(np.cos(2*(th-theta))-1))
    if norm:
        return env/env.sum()
    else:
        return env

This definition also handles the extreme cases where the bandwidth is infinite (in which case we return a flat distribution) or null (so that we get a Dirac). Note also that the von Mises distribution is defined on a continuous variable (orientation), while we use a discrete Fourier transform; in particular, some care should be taken for very narrow bandwidths.
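The concentration parameter kappa used below is tied to the bandwidth B_theta through the Gaussian approximation of the von Mises distribution at small bandwidths (this is how kappa is set in the cells below):

$$\exp\left(\kappa(\cos(2\Delta\theta) - 1)\right) \approx \exp\left(-2\kappa\,\Delta\theta^2\right) = \exp\left(-\frac{\Delta\theta^2}{2 B_\theta^2}\right) \Rightarrow \kappa = \frac{1}{4 B_\theta^2}$$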

In [9]:
N_theta = 16
theta, B_sf, B_V = np.pi/2, .1, .3
bins = 60
th = np.linspace(0, np.pi, bins, endpoint=False)
fig, ax = plt.subplots(1, 1, figsize=(13, 8))
B_theta_ = [np.pi/640, np.pi/32, np.pi/16, np.pi/8, np.pi/4, np.pi/2, np.inf]
for B_theta, color in zip(B_theta_, [plt.cm.hsv(h) for h in np.linspace(0, 1, len(B_theta_))]):
    kappa = 1./4/B_theta**2
    ax.plot(th, vonmises(th, theta, kappa), alpha=.6, color=color, lw=3)
    ax.fill_between(th, 0, vonmises(th, theta, kappa), alpha=.1, color=color)  # use kappa, as in the line above
    ax.set_xlabel('orientation (radians)')
_ = ax.set_xlim([0, np.pi])

Getting everything peaking at one:

In [10]:
N_theta = 16
theta, B_sf, B_V = np.pi/2, .1, .3
bins = 360
th = np.linspace(0, np.pi, bins, endpoint=False)
fig, ax = plt.subplots(1, 1, figsize=(13, 8))
for B_theta, color in zip(B_theta_, [plt.cm.hsv(h) for h in np.linspace(0, 1, len(B_theta_))]):
    kappa = 1./4/B_theta**2
    ax.plot(th, vonmises(th, theta, kappa, norm=False), alpha=.6, color=color, lw=3)
    ax.fill_between(th, 0, vonmises(th, theta, kappa, norm=False), alpha=.1, color=color)
    ax.set_xlabel('orientation (radians)')
_ = ax.set_xlim([0, np.pi])

Let's check that the HWHH (half-width at half-height) is given by the bandwidth parameter B_theta. Note that for high values of B_theta, the minimum of the distribution may be higher than 1/2, so that this descriptive value may be ill-defined (see for instance Swindale, N. V. (1998). Orientation tuning curves: empirical description and estimation of parameters. Biological Cybernetics, 78(1):45-56.). We will use the broader definition based on the fully normalized von Mises distribution, which peaks at one and has a minimum at 0: $$M(\theta) = \frac{\exp(\kappa \cdot \cos(2\theta)) - \exp(-\kappa)}{\exp(\kappa) - \exp(-\kappa)}$$ The different expressions for the HWHH are, for the Gaussian approximation: $$\theta_{HWHH} = \sqrt{2\ln(2)} \cdot B_\theta$$ for the non-normalized von Mises pdf (as in Swindale): $$\theta_{HWHH} = \frac{1}{2} \cdot \arccos\left(\frac{\kappa - \ln(2)}{\kappa}\right)$$ and for the normalized (exact) solution: $$\theta_{HWHH} = \frac{1}{2} \cdot \arccos\left(1 + \frac{\ln\left(\frac{1 + e^{-2\kappa}}{2}\right)}{\kappa}\right)$$

In [11]:
N_theta = 160
theta, B_sf, B_V = 0, .1, .3
B_theta_ = np.linspace(0, np.pi/2, N_theta, endpoint=False)
bins = 3600
th = np.linspace(0, np.pi, bins, endpoint=False)
HWHH = np.zeros(N_theta)
for _, B_theta in enumerate(B_theta_):
    kappa = 1./4/B_theta**2
    dist = vonmises(th, theta, kappa)
    dist = (dist - dist.min())/(dist.max() - dist.min())
    ind_HWHH = np.argmax(dist<.5)
    HWHH[_] = th[ind_HWHH]
fig, ax = plt.subplots(1, 1, figsize=(13, 8))
ax.plot(B_theta_, HWHH, 'k+', label='observed')
ax.plot(B_theta_, B_theta_*np.sqrt(2*np.log(2)), 'r', label='gaussian')
k= 1./4/B_theta_**2
ax.plot(B_theta_, .5*np.arccos((-np.log(2)+k)/k), 'b', label='swindale')
ax.plot(B_theta_, .5*np.arccos(1+ np.log((1+np.exp(-2*k))/2)/k), 'g', label='exact')
ax.set_xlabel('B_theta (radians)')
ax.set_ylabel('HWHH (radians)')
ax.legend()
_ = ax.set_xlim([0, np.pi/2])
/usr/local/lib/python3.6/site-packages/ipykernel/__main__.py:8: RuntimeWarning: divide by zero encountered in double_scalars
/usr/local/lib/python3.6/site-packages/ipykernel/__main__.py:16: RuntimeWarning: divide by zero encountered in true_divide
/usr/local/lib/python3.6/site-packages/ipykernel/__main__.py:17: RuntimeWarning: invalid value encountered in true_divide
/usr/local/lib/python3.6/site-packages/ipykernel/__main__.py:17: RuntimeWarning: invalid value encountered in arccos

This shows that in the range of relevant values of B_theta (that is, below approximately 60 degrees) we can control the bandwidth as for a Gaussian or a von Mises curve, while for higher values the distribution can be considered flat.

Testing Speed

Testing the Speed

A moving pattern (stimulus/image) can be thought of as a set of pixels whose intensities are translated from one frame to the next (the intensities move, not the pixels):

$$I(\mathbf{x}, t) = I(\mathbf{x} + \mathbf{u}, t+1)$$

where space is a vector with coordinates $\mathbf{x} = (x_1, x_2)'$, time is $t$ and $\mathbf{u} = (u_1, u_2)$ is the 2D velocity. Motion Clouds represent 2D + t images: each pixel follows a 2D path as a function of time, so we obtain a space-time function $f(\mathbf{x}, t)$ by translating a 2D signal $f_0(\mathbf{x})$ with velocity $\mathbf{u}$, i.e. $f(\mathbf{x}, t) = f_0(\mathbf{x} - \mathbf{u}t)$.

The Fourier transform of such a function is

$$F(\omega_x, \omega_y, \omega_t) = F_0(\omega_x, \omega_y)\, \delta(u_1 \omega_x + u_2 \omega_y + \omega_t)$$

where $F_0$ is the 2D transform of $f_0$ and $\delta$ is the Dirac delta. The energy spectrum is nonzero only on a plane whose orientation gives the velocity.
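As a quick numerical check (a minimal sketch, not part of the original notebook), we can translate a random 1D signal over time and verify that all the energy of its 2D Fourier transform lies on the line $\omega_t = -u\,\omega_x$:

import numpy as np

N = 64                                  # pixels = frames, for simplicity
u = 3                                   # speed, in pixels per frame
f0 = np.random.randn(N)                 # a random 1D texture
movie = np.array([np.roll(f0, u * t) for t in range(N)])  # translate over time
F = np.abs(np.fft.fft2(movie))          # 2D spectrum over (t, x)
k_x = np.arange(N)
on_plane = F[(-u * k_x) % N, k_x]       # samples on the line w_t = -u * w_x
print(F.sum(), on_plane.sum())          # (almost) all energy lies on this line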

In [1]:
%matplotlib inline
import numpy as np
np.set_printoptions(precision=3, suppress=True)
import pylab
import matplotlib.pyplot as plt
#!rm -fr ../files/speed*
In [2]:
import MotionClouds as mc
name = 'speed'
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
help(mc.envelope_speed)
Help on function envelope_speed in module MotionClouds:

envelope_speed(fx, fy, ft, V_X=1.0, V_Y=0.0, B_V=0.5)
    Returns the speed envelope:
    selects the plane corresponding to the speed ``(V_X, V_Y)`` with some bandwidth ``B_V``.
    
    * (V_X, V_Y) = (0,1) is downward and  (V_X, V_Y) = (1, 0) is rightward in the movie.
    * A speed of V_X=1 corresponds to an average displacement of 1/N_X per frame.
    To achieve one spatial period in one temporal period, you should scale by
    V_scale = N_X/float(N_frame)
    If N_X=N_Y=N_frame and V=1, then it is one spatial period in one temporal
    period. It can be seen along the diagonal in the fx-ft face of the MC cube.
    
    A special case is used when ``B_V=0``, where the ``fx-ft`` plane is used as
    the speed plane: in that case it is desirable to set ``(V_X, V_Y)`` to ``(0, 0)``
    to avoid aliasing problems.
    
    Run the 'test_speed' notebook to explore the speed parameters, see
    http://motionclouds.invibe.net/posts/testing-speed.html
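
Following the note in the docstring, a speed of V_X=1 corresponds to an average displacement of 1/N_X per frame; to obtain one spatial period per temporal period, one may scale the speed accordingly (a quick sketch using the V_scale relation quoted above):

In [ ]:
# one spatial period per temporal period, using the V_scale relation from the docstring
V_scale = mc.N_X / float(mc.N_frame)
name_ = name + '-V_scale'
z = mc.envelope_gabor(fx, fy, ft, V_X=V_scale)
mc.figures(z, name_)
mc.in_show_video(name_)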

In [3]:
# explore parameters
for V_X in [-1.0, -0.5, 0.0, 0.1, 0.5, 1.0, 4.0]:
    name_ = name + '-V_X-' + str(V_X).replace('.', '_')
    z = mc.envelope_gabor(fx, fy, ft, V_X=V_X)
    mc.figures(z, name_)
    mc.in_show_video(name_)
In [4]:
for V_Y in [-1.0, -0.5, 0.5, 1.0, 2.0]:
    name_ = name + '-V_Y-' + str(V_Y).replace('.', '_')
    z = mc.envelope_gabor(fx, fy, ft, V_X=0.0, V_Y=V_Y)
    mc.figures(z, name_)
    mc.in_show_video(name_)
In [5]:
for B_V in [0, 0.001, 0.01, 0.05, 0.1, 0.5, 1.0, 10.0]:
    name_ = name + '-B_V-' + str(B_V).replace('.', '_')
    z = mc.envelope_gabor(fx, fy, ft, B_V=B_V)
    mc.figures(z, name_)
    mc.in_show_video(name_)
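
Note the special case B_V=0 from the docstring: the fx-ft plane itself is used as the speed plane, and it is then preferable to set the speed to (0, 0) to avoid aliasing. A sketch of that configuration:

In [ ]:
# special case: B_V=0 with V=(0, 0), as recommended in the docstring to avoid aliasing
name_ = name + '-B_V-0-V-0'
z = mc.envelope_gabor(fx, fy, ft, V_X=0.0, V_Y=0.0, B_V=0)
mc.figures(z, name_)
mc.in_show_video(name_)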

Testing the utility functions of Motion Clouds

Motion Clouds utilities

Here, we test some of the utilities that are delivered with the MotionClouds package.

In [1]:
%load_ext autoreload
%autoreload 2
In [2]:
import os
import MotionClouds as mc
In [3]:
mc.N_X, mc.N_Y, mc.N_frame = 30, 40, 50
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)

generating figures

As they are visual stimuli, the main outcome of these scripts is a set of figures. Utility functions allow you to generate all of them, each identified by a name:

In [4]:
name = 'testing_utilities'
In [5]:
help(mc.figures_MC)
Help on function figures_MC in module MotionClouds:

figures_MC(fx, fy, ft, name, V_X=1.0, V_Y=0.0, do_figs=True, do_movie=True, B_V=0.5, sf_0=0.125, B_sf=0.1, loggabor=True, recompute=False, theta=0.0, B_theta=0.19634954084936207, alpha=0.0, vext='.mp4', seed=None, impulse=False, do_amp=False, verbose=False, figpath='../files/', return_envelope=False, **kwargs)
    Generates the figures corresponding to the Fourier spectra and the stimulus cubes and
    movies directly from the parameters.
    
    The figures names are automatically generated.

In [6]:
mc.figures_MC(fx, fy, ft, name, recompute=True)
In [7]:
help(mc.in_show_video)
Help on function in_show_video in module MotionClouds:

in_show_video(name, vext='.mp4', loop=True, autoplay=True, controls=True, embed=False, figpath='../files/', **kwargs)
    Columns represent isometric projections of a cube. The left column displays
    iso-surfaces of the spectral envelope by displaying enclosing volumes at 5
    different energy values with respect to the peak amplitude of the Fourier spectrum.
    The middle column shows an isometric view of the faces of the movie cube.
    The first frame of the movie lies on the x-y plane, the x-t plane lies on the
    top face and motion direction is seen as diagonal lines on this face (vertical
    motion is similarly see in the y-t face). The third column displays the actual
    movie as an animation.
    
    Given a name, displays the figures corresponding to the Fourier spectra, the
    stimulus cubes and movies within the notebook.

In [8]:
mc.in_show_video(name)

This function embeds the images and video within the notebook. Sometimes you want to avoid that:

In [9]:
mc.in_show_video(name, embed=False)

Sometimes you may have already computed an envelope, or you may just want to distort one; in that case, you can use mc.figures:

In [10]:
env = mc.envelope_gabor(fx, fy, ft)
In [11]:
help(mc.figures)
Help on function figures in module MotionClouds:

figures(z=None, name='MC', vext='.mp4', do_movie=True, do_figs=True, recompute=False, seed=None, impulse=False, verbose=False, masking=False, do_amp=False, figpath='../files/', **kwargs)
    Given an envelope, generates the figures corresponding to the Fourier spectra
    and the stimulus cubes and movies.
    
    The figures names are automatically generated.

In [12]:
import numpy as np
mc.figures(np.sqrt(env), name + '_0')
In [13]:
mc.in_show_video(name + '_0')

low-level figures: 3D visualizations

In [14]:
help(mc.cube)
Help on function cube in module MotionClouds:

cube(im_in, azimuth=30.0, elevation=45.0, name=None, ext='.png', do_axis=True, show_label=True, cube_label={'x': 'x', 'y': 'y', 't': 't'}, colormap='gray', roll=-180.0, vmin=0.0, vmax=1.0, figsize=(800, 800), figpath='../files/', **kwargs)
    Visualization of the stimulus as a cube

In [15]:
help(mc.visualize)
Help on function visualize in module MotionClouds:

visualize(z_in, azimuth=25.0, elevation=30.0, thresholds=[0.94, 0.89, 0.75, 0.5, 0.25, 0.1], opacities=[0.9, 0.8, 0.7, 0.5, 0.2, 0.1], fourier_label={'f_x': 'f_x', 'f_y': 'f_y', 'f_t': 'f_t'}, name=None, ext='.png', do_axis=True, do_grids=False, draw_projections=True, colorbar=False, f_N=2.0, f_tN=2.0, figsize=(800, 800), figpath='../files/', **kwargs)
    Visualization of the Fourier spectrum by showing 3D contour plots at different thresholds
    
    parameters
    ----------
    z : envelope of the cloud

Handling filenames

By default, the folder for generating figures or data is mc.figpath:

In [16]:
print(mc.figpath)
../files/
print(os.listdir(mc.figpath))

To generate figures, we assign file names, such as:

In [17]:
filename = os.path.join(mc.figpath, name)

It is then possible to check whether these figures exist:

In [18]:
print('filename=', filename, ', exists? : ', mc.check_if_anim_exist(filename))
filename= ../files/testing_utilities , exists? :  False

Note that the file won't be recomputed if it exists:

In [19]:
mc.figures(env, name)

This behavior can be overridden using the recompute option:

In [20]:
mc.figures(env, name, recompute=True)

Warning: when you display a given file, make sure that it corresponds to the parameters you have set for your stimulus.

low-level figures: exporting to various formats

It is possible to export motion clouds to many different formats. Here are some examples:

In [21]:
!rm -fr ../files/export
In [22]:
name = 'export'
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
z = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft)))
mc.PROGRESS = False
for vext in mc.SUPPORTED_FORMATS:
    print ('Exporting to format: ', vext)
    mc.anim_save(z, os.path.join(mc.figpath, name), display=False, vext=vext, verbose=False)
Exporting to format:  .h5
Exporting to format:  .mpg
Exporting to format:  .mp4
Exporting to format:  .gif
Exporting to format:  .webm
Exporting to format:  .zip
Exporting to format:  .mat
Exporting to format:  .png

showing a video

To show a video in a notebook, issue:

In [23]:
mc.notebook = True # True by default
mc.in_show_video('export')

Rectifying the contrast

The mc.rectif function rescales the luminance values of the whole generated texture to lie between $0$ and $1$:

In [24]:
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
envelope = mc.envelope_gabor(fx, fy, ft)
image = mc.random_cloud(envelope)
print('Min :', image.min(), ', mean: ', image.mean(), ', max: ', image.max())
Min : -2.21666797481 , mean:  -1.70234197109e-19 , max:  2.09408478901
In [25]:
image = mc.rectif(image)
print('Min :', image.min(), ', mean: ', image.mean(), ', max: ', image.max())
Min : 0.0 , mean:  0.5 , max:  0.972349673655
In [26]:
import pylab
import numpy as np
import matplotlib.pyplot as plt
import math
%matplotlib inline
#%config InlineBackend.figure_format='retina' # high-def PNGs, quite bad when using file versioning
%config InlineBackend.figure_format='svg'
In [27]:
name = 'contrast_methods-'
# initialize the grids and parameters
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
ext = '.zip'
contrast = 0.25
B_sf = 0.3

for method in ['Michelson', 'Energy']:
    z = mc.envelope_gabor(fx, fy, ft, B_sf=B_sf)
    im_norm = mc.rectif(mc.random_cloud(z), contrast, method=method, verbose=True)
    plt.figure()
    plt.title(method + ' Histogram Ctr: ' + str(contrast))
    plt.ylabel('pixel counts')
    plt.xlabel('grayscale')
    # one bin per gray level actually spanned by the rectified texture
    bins = int((np.max(im_norm[:]) - np.min(im_norm[:])) * 256)
    plt.xlim([0, 1])
    plt.hist(np.ravel(im_norm), bins=bins, facecolor='blue', alpha=0.75)
    #plt.savefig(name_)

def image_entropy(img):
    """Calculate the entropy of an image (expects a PIL image, which provides the .histogram() method)."""
    histogram = img.histogram()
    histogram_length = np.sum(histogram)
    samples_probability = [float(h) / histogram_length for h in histogram]
    return -np.sum([p * math.log(p, 2) for p in samples_probability if p != 0])
Before Rectification of the frames
Mean= 1.06581410364e-18 , std= 0.84527878442 , Min= -3.07067785094 , Max= 3.47591337022  Abs(Max)= 3.47591337022
After Rectification of the frames
Mean= 0.5 , std= 0.0303977219219 , Min= 0.389572986872 , Max= 0.625
percentage pixels clipped= 0.0
Before Rectification of the frames
Mean= -1.18423789293e-19 , std= 0.817871986507 , Min= -3.42450006883 , Max= 3.40014935729  Abs(Max)= 3.42450006883
After Rectification of the frames
Mean= 0.5 , std= 0.125 , Min= -0.0233857078689 , Max= 1.01966405094
percentage pixels clipped= 0.005
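
The verbose output above illustrates the difference between the two methods: 'Michelson' scales the texture so that its most extreme value lands exactly on the edge of the admissible range (no pixel is clipped, at the cost of a lower effective standard deviation), while 'Energy' sets the standard deviation to contrast/2, so that a few extreme values may fall outside $[0, 1]$ and get reported as clipped. A schematic sketch of the two normalizations on a surrogate luminance distribution (our own illustration; see the source of mc.rectif for the exact definitions):

In [ ]:
import numpy as np

x = np.random.randn(100000)  # surrogate zero-mean luminance values
contrast = 0.25
# 'Michelson'-like: the most extreme value lands exactly on the range edge, no pixel lost
x_michelson = .5 + contrast / 2 * x / np.abs(x).max()
# 'Energy'-like: the standard deviation is set to contrast/2, extreme values may overflow
x_energy = .5 + contrast / 2 * x / x.std()
print('Michelson: std =', x_michelson.std())
print('Energy: std =', x_energy.std(), ', fraction outside [0, 1] =', np.mean((x_energy < 0) | (x_energy > 1)))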

If we normalize the histogram, then the entropy based on gray levels is going to be almost the same.
TODO: review the idea of entropy between narrowband and broadband stimuli (see the sketch below).
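
As a first step towards that TODO, here is a numpy-only variant of the entropy computation that works directly on the arrays produced by mc.random_cloud (the helper array_entropy and the 256-level binning are our own choices):

In [ ]:
import numpy as np

def array_entropy(im, bins=256):
    """Entropy (in bits) of the gray-level histogram of a numpy array rectified to [0, 1]."""
    counts, _ = np.histogram(im.ravel(), bins=bins, range=(0, 1))
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# compare a narrowband and a broadband cloud
for B_sf in [0.05, 0.5]:
    im = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, B_sf=B_sf)))
    print('B_sf =', B_sf, ', entropy =', array_entropy(im), 'bits')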

Testing basic functions of Motion Clouds

Motion Clouds: raw principles

Motion Clouds are synthesized textures which aim to have characteristics similar to natural images, but with controlled parameters. There are many ways to achieve such results, and this notebook aims at showing that different procedures from different communities (neuroscience, modelling, computer vision, ...) may produce similar results.

In [1]:
import numpy as np
np.set_printoptions(precision=3, suppress=True)
import pylab
import matplotlib.pyplot as plt
%matplotlib inline

Using Fourier ("official Motion Clouds")

In [2]:
import MotionClouds as mc
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)

Using mixtures of images

In [3]:
from scipy.misc import face  # in recent versions of scipy, use scipy.datasets.face instead
lena = face()  # historical variable name: this is scipy's raccoon-face test image
print(lena.shape)
lena = lena[:, (1024-768):, :].mean(axis=-1)  # crop to a square and average the color channels to grayscale
print(lena.shape)
lena -= lena.mean()  # normalize to zero mean and unit variance
lena /= lena.std()
print(lena.shape)
(768, 1024, 3)
(768, 768)
(768, 768)
In [4]:
plt.imshow(lena, cmap=plt.cm.gray)
Out[4]:
<matplotlib.image.AxesImage at 0x103200d68>