The increasing availability of computational resources is driving a shift towards increasingly heterogeneous models of neural circuits and brain regions, and towards increasingly complex stimulation and experimental protocols, in an effort to bridge the gap between simulations and biological experiments. This poses a challenge for existing tool-chains, as the set of tools involved in the typical modeler's workflow is expanding, with a growing amount and complexity of meta-data, describing the experimental context, flowing between them. A plethora of tools is currently available covering different parts of the workflow; however, numerous areas lack dedicated tools, while the integration and interoperability of existing tools are limited. This forces modelers either to handle the workflow manually, which leads to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases hindering their productivity.
To address these issues, we have developed mozaik: an integrated workflow system for spiking neuronal network simulations written in Python. mozaik integrates the model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant meta-data are available to each of the workflow components. It builds on several widely used Python tools, including PyNN (Davison et al., 2009), Neo (Davison et al., 2011) and Matplotlib (Hunter, 2007).
Here we show a minimal example demonstrating the most important steps in setting up a simple randomly connected network of excitatory and inhibitory neurons. The model is specified via a hierarchy of configuration files. The top-level file specifies references to the files containing the configuration of the individual layers, together with several high-level parameters, such as which input space and input layer the model uses (in our case none) or where to store results.
{
    'exc_layer': url("param/exc_layer"),
    'inh_layer': url("param/inh_layer"),
    'results_dir': '',
    'name': 'Vogels&Abbott',
    'reset': False,
    'null_stimulus_period': 0.0,
    'input_space': None,
    'input_space_type': 'None',
}
The configuration file containing information about the excitatory layer and the connections that it sends out is as follows:
# CorticalUniformSheet parameters
'component': 'mozaik.sheets.vision.VisualCorticalUniformSheet',
'params': {
    'name': 'Exc_Layer',
    'sx': 1000.0,
    'sy': 1000.0,
    'density': 3200.0,
    'mpi_safe': False,
    'magnification_factor': 1.0,
    'cell': {
        'model': 'IF_cond_exp',
        'params': {
            'v_thresh': -50.0,
            'v_rest': -60.0,
            'v_reset': -60.0,
            'tau_refrac': 5.0,
            'tau_m': 20.0,
            'cm': 0.2,
            'e_rev_E': 0.0,
            'e_rev_I': -80.0,
            'tau_syn_E': 5.0,
            'tau_syn_I': 10.0,
        },
        'initial_values': {
            'v': PyNNDistribution(name='uniform', params=(-60, -50)),
        },
    },
    'artificial_stimulators': {},
    'recorders': url("param/exc_rec"),
},
'ExcExcConnection': {
    'target_synapses': 'excitatory',
    'short_term_plasticity': None,
    'connection_probability': 0.02,
    'weights': 0.004,  # microS, the synapse strength
    'delay': 0.2,      # ms, delay of the connections
},
'ExcInhConnection': ref('exc_layer.ExcExcConnection'),
The recorder configuration file (param/exc_rec), referenced from the layer definition above, looks as follows:
"2" : {
'component' : 'mozaik.sheets.population_selector.RCRandomN',
'variables' : ("spikes","v","gsyn_exc" , "gsyn_inh"),
'params' : {'num_of_cells' : 21}
}
This tells mozaik to record the spikes, membrane potential, and excitatory and inhibitory synaptic conductances of 21 randomly picked neurons in Exc_Layer.
The list of experiments is declared in a separate method, as follows. In our case, we initially give an external input kick to the network and then let it run while recording its activity.
return [PoissonNetworkKick(model, duration=210,
                           sheet_list=["V1_Exc_L4", "V1_Inh_L4"],
                           recording_configuration={
                               'component': 'population_selector.RCRandomPercentage',
                               'params': {'percentage': 20.0}},
                           lambda_list=[100.0, 100.0],
                           weight_list=[0.1, 0.1]),
        NoStimulation(model, duration=1400)]
Once configured, the whole workflow is executed with a single command, to which the model class and a method returning the list of experiments are passed.
data_store, model = run_workflow('VogelsAbbott2005', VogelsAbbott, create_experiments)
During execution the recorded data are stored in the central data-store, which is saved for future access by the analysis and visualization components.
An important concept in mozaik is the data-store view (DSV): a proxy instance of a data-store that allows expressing any subset of the data-store without copying data in memory. Together with the accompanying query module, this allows for powerful manipulation of the data in the data-store.
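The DSV idea can be illustrated with a self-contained toy sketch (this is not mozaik's actual implementation; the class and attribute names here are invented for illustration): a view shares the record objects owned by the central store, so filtering produces a new view without copying the underlying data.

```python
# Toy illustration of the data-store view (DSV) idea -- NOT mozaik's
# actual implementation.  A view holds references to records owned by
# the central store, so filtering never copies the underlying data.

class DataStoreView:
    def __init__(self, records):
        self.records = records  # shared references, no deep copy

    def filter(self, **criteria):
        # Return a new view exposing only the matching records.
        matching = [r for r in self.records
                    if all(r.get(k) == v for k, v in criteria.items())]
        return DataStoreView(matching)

store = DataStoreView([
    {'sheet_name': 'V1_Exc_L4', 'value_name': 'AfferentOrientation'},
    {'sheet_name': 'V1_Inh_L4', 'value_name': 'AfferentOrientation'},
    {'sheet_name': 'V1_Exc_L4', 'value_name': 'Firing rate'},
])

dsv = store.filter(sheet_name='V1_Exc_L4')
assert len(dsv.records) == 2
# The view shares the original record objects -- no data were copied.
assert dsv.records[0] is store.records[0]
```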
The following query returns a DSV containing only the data (recordings or analysis data structures) associated with the sheet V1_Exc_L4:
param_filter_query(data_store, sheet_name=['V1_Exc_L4'])
while the following one filters data structures that declare a parameter named value_name with the value AfferentOrientation, belonging to any of the listed sheets:
param_filter_query(data_store,
sheet_name=['V1_Exc_L4','V1_Inh_L4','V1_Exc_L2/3','V1_Inh_L2/3'],
value_name='AfferentOrientation')
Finally, the following query filters all recordings or analysis data structures associated with the stimulus FullfieldSinusoidalGrating of horizontal orientation (0):
param_filter_query(data_store,st_name='FullfieldSinusoidalGrating',st_orientation=0)
mozaik offers a number of other query methods providing more complex data manipulation such as collation of data with respect to selected parameters.
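As a rough illustration of what collation means here, the following self-contained sketch (not mozaik's API; the collate_by helper is invented for illustration) groups records by a chosen stimulus parameter:

```python
# Toy sketch of collation: grouping records by a chosen stimulus
# parameter.  Illustrative only; mozaik's actual query API differs.
from collections import defaultdict

def collate_by(records, parameter):
    # Partition records into groups keyed by the value of `parameter`.
    groups = defaultdict(list)
    for r in records:
        groups[r[parameter]].append(r)
    return dict(groups)

records = [
    {'st_orientation': 0.0, 'rate': 12.0},
    {'st_orientation': 1.57, 'rate': 5.0},
    {'st_orientation': 0.0, 'rate': 9.0},
]
by_orientation = collate_by(records, 'st_orientation')
assert sorted(by_orientation) == [0.0, 1.57]
assert len(by_orientation[0.0]) == 2
```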
Unlike in most other libraries, analysis and plotting methods in mozaik do not accept as input the specific data structures to be processed, but instead a DSV. It is the responsibility of the analysis or visualization class to apply itself to as wide a range of the data in the DSV as possible. In conjunction with the query system, this provides a powerful, unified and flexible way to alter the analysis and visualization process, and increases the degree of automation.
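The following self-contained toy sketch (not mozaik code; the record layout and the trial_averaged_rate helper are invented for illustration) shows the pattern: an analysis step consumes a filtered view, processes every record it can handle, and writes results annotated with meta-data back into the store so that later queries can find them.

```python
# Toy sketch of "apply yourself to whatever is in the view" -- not
# mozaik's actual API.  The record layout and helper are invented.

def trial_averaged_rate(store, view):
    # Process every record in the view that carries per-trial rates,
    # and append annotated results back into the shared store.
    for rec in view:
        result = {
            'analysis_algorithm': 'TrialAveragedFiringRate',  # meta-data tag
            'sheet_name': rec['sheet_name'],
            'value': sum(rec['trial_rates']) / len(rec['trial_rates']),
        }
        store.append(result)

store = [
    {'sheet_name': 'V1_Exc_L4', 'trial_rates': [10.0, 14.0]},
    {'sheet_name': 'V1_Inh_L4', 'trial_rates': [20.0, 22.0]},
]
view = [r for r in store if 'trial_rates' in r]  # stand-in for a DSV
trial_averaged_rate(store, view)

# The results are now discoverable via their meta-data annotation:
results = [r for r in store
           if r.get('analysis_algorithm') == 'TrialAveragedFiringRate']
assert [r['value'] for r in results] == [12.0, 21.0]
```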
Let us now assume a model with a visual input space, which was presented with various stimuli to measure typical visual cortex properties. The following code visualizes the raw data recorded during the presentation of the stimulus FullfieldDriftingSinusoidalGrating at horizontal and vertical orientations and 100% contrast:
dsv = param_filter_query(data_store,
                         st_orientation=[0, numpy.pi/2],
                         st_name=['FullfieldDriftingSinusoidalGrating'],
                         st_contrast=100)
OverviewPlot(dsv, ParameterSet({'sheet_name': 'V1_Exc_L4',
                                'neuron': l4_exc,
                                'sheet_activity': {}})).plot()
while this code does the same, but displays the data for any orientation of the grating stimulus that was presented and recorded:
dsv = param_filter_query(data_store,
                         st_name=['FullfieldDriftingSinusoidalGrating'],
                         st_contrast=100)
OverviewPlot(dsv, ParameterSet({'sheet_name': 'V1_Exc_L4',
                                'neuron': l4_exc,
                                'sheet_activity': {}})).plot()
The same system works for analysis. This is how a user requests the computation of trial-averaged firing rates for all the presented stimuli:
TrialAveragedFiringRate(data_store,ParameterSet({'stimulus_type':"DriftingSinusoidalGratingCenterSurroundStimulus"})).analyse()
Behind the scenes, this code creates a number of analysis data structures holding the average firing rates and adds them back into the data-store. These analysis data structures are annotated with meta-data that allow their identification via the query system. Thus, the following single command fits tuning curves to the average firing rates across a stimulus parameter (here orientation) that was varied during the experiment:
dsv = param_filter_query(data_store,st_name=['FullfieldDriftingSinusoidalGrating'])
GaussianTuningCurveFit(dsv,ParameterSet({'parameter_name' : 'orientation'})).analyse()
Similarly, we can plot the same 'raw' tuning curves that we have just fitted with Gaussian curves:
dsv = param_filter_query(data_store, st_name='FullfieldDriftingSinusoidalGrating', analysis_algorithm=['TrialAveragedFiringRate'])
PlotTuningCurve(dsv,ParameterSet({'parameter_name' : 'orientation',
'neurons': list_of_4_neurons,
'sheet_name' : 'V1_Exc_L4'})).plot()