Software and services

Active projects

PyNN

A simulator-independent Python API for specifying and running spiking neural network models on multiple backends.

[repository] [documentation]

PyNN (pronounced 'pine') lets researchers write a neuronal network model once, using a common Python API, and then run it without modification on any supported simulator — currently NEURON, NEST, and Brian 2 — as well as on neuromorphic hardware systems such as SpiNNaker and BrainScaleS. The API operates at a high level of abstraction (populations of neurons, layers, columns, and the connections between them) while still permitting access to individual neuron and synapse parameters when needed.

PyNN ships with a library of standard neuron, synapse, and synaptic plasticity models that have been verified to produce consistent results across backends, together with a suite of built-in connectivity algorithms (all-to-all, random, distance-dependent, small-world, and others). It has been adopted as the common interface for the neuromorphic hardware systems of the EBRAINS research infrastructure.
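The simulator-independence described above can be sketched as follows. This is an illustrative stub, not real PyNN code: a minimal fake backend stands in for an actual backend module, and the cell type is given as a string, whereas in real PyNN one would write `import pyNN.nest as sim` and use cell-type classes such as `sim.IF_cond_exp`.

```python
import types

# A model written against a common PyNN-style API: it only touches
# the `sim` module it is given, so any conforming backend can run it.
def build_network(sim, n_neurons=100):
    sim.setup(timestep=0.1)
    cells = sim.Population(n_neurons, "IF_cond_exp")
    return cells

# Stub backend implementing just the two calls used above; with PyNN
# installed one would pass a real backend, e.g. `import pyNN.nest as sim`.
stub_backend = types.SimpleNamespace(
    setup=lambda **kwargs: None,
    Population=lambda n, celltype: list(range(n)),
)

cells = build_network(stub_backend, n_neurons=5)
print(len(cells))  # 5
```

The key design point is that the model function never imports a simulator itself; the choice of backend is made once, at the top of the script.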

lazyarray

A Python package providing a lazily-evaluated numerical array class compatible with NumPy, supporting deferred computation and partial evaluation.

[repository] [documentation]

lazyarray provides larray, a drop-in NumPy-compatible array type whose construction and element-wise operations are deferred until the values are actually needed. Arrays may be created from scalars, sequences, NumPy arrays, iterators, generators, or functions of the form f(i) or f(i, j); in the function case, individual elements are only computed when accessed. All operations on an larray are queued and executed lazily on first access.

The primary motivation for lazyarray is memory and runtime efficiency in distributed or conditional computation: when only a subset of elements will ever be used — for example, because an MPI parallel simulation assigns different subsets of neurons to different processes — the full array never needs to be materialised. It was originally developed as an internal component of PyNN to handle the large parameter arrays (synaptic weights, delays, neuron parameters) that arise in large-scale neural network models, but is packaged and distributed independently so it can be used in other contexts.
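The core idea can be sketched in a few lines of plain Python (this is not the real larray class, which also supports operator queueing and NumPy interoperability): elements defined by a function f(i, j) are computed only when accessed, so a huge array costs nothing up front.

```python
# Minimal sketch of lazyarray's deferred-evaluation idea.
class LazyGrid:
    def __init__(self, f, shape):
        self.f = f
        self.shape = shape
        self.evaluations = 0   # count how many elements were computed

    def __getitem__(self, idx):
        i, j = idx
        self.evaluations += 1
        return self.f(i, j)

# A 10,000 x 10,000 "distance-dependent weight matrix" — never materialised.
weights = LazyGrid(lambda i, j: abs(i - j), shape=(10_000, 10_000))
print(weights[3, 7])        # 4, computed on demand
print(weights.evaluations)  # 1 — no other element was ever touched
```

In an MPI setting, each process would index only its own subset of (i, j) pairs, so the full matrix is never held in memory anywhere.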

Neo

A Python package providing a common object model for representing and reading electrophysiology data across dozens of file formats.

[repository] [documentation]

Neo solves the interoperability problem that arises because electrophysiology data is stored in a large number of incompatible proprietary and open formats (Spike2, Plexon, Blackrock, Axon, NeuroExplorer, AlphaOmega, Neurodata Without Borders, MATLAB, HDF5, and many others). It defines a hierarchical object model — Block, Segment, AnalogSignal, SpikeTrain, Event, and related classes — that can faithfully represent continuous signals, spike trains, and trial structure, with full physical units carried through all objects via the quantities package.

Deliberately limited to data representation rather than analysis or visualisation, Neo is designed to be a lightweight dependency that other tools can build on. It is used by Elephant (analysis), NeoViewer (browser-based visualisation), PyNN (simulation output), tridesclous and SpikeInterface (spike sorting), ephyviewer, SpykeViewer, and the EBRAINS G-Node data infrastructure, making it a central piece of the Python neuroscience ecosystem. IO modules covering dozens of formats are included, with both a high-level and a raw-IO layer to simplify adding new formats.
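The shape of the container hierarchy can be sketched with plain dataclasses (an illustration only — the real Neo classes subclass NumPy arrays, carry physical units via the quantities package, and support annotations and cross-references):

```python
from dataclasses import dataclass, field

@dataclass
class AnalogSignal:
    samples: list          # continuous data, e.g. membrane potential
    sampling_rate: float   # Hz
    units: str = "mV"

@dataclass
class SpikeTrain:
    times: list            # spike times in seconds
    t_stop: float = 10.0

@dataclass
class Segment:             # one trial / recording episode
    analogsignals: list = field(default_factory=list)
    spiketrains: list = field(default_factory=list)

@dataclass
class Block:               # top-level container, e.g. one session
    segments: list = field(default_factory=list)

trial = Segment(
    analogsignals=[AnalogSignal(samples=[-65.0, -65.2, -64.8],
                                sampling_rate=10_000.0)],
    spiketrains=[SpikeTrain(times=[0.013, 0.045])],
)
session = Block(segments=[trial])
print(len(session.segments[0].spiketrains[0].times))  # 2
```

Because every IO module returns this same hierarchy, analysis code written against it is automatically format-independent.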

Sumatra

Automated electronic lab notebook for tracking and reproducing numerical simulations and data analyses.

[repository] [documentation]

Reproducibility is a central challenge in computational science: when a simulation is run weeks or months later, the exact code version, parameters, and environment may be forgotten or changed. Sumatra solves this by automatically recording all information needed to reproduce a computation — the script and executable versions (via Git, Mercurial, Subversion, or Bazaar), parameter values, execution time, console output, and output data file locations.

Sumatra provides three interfaces: a command-line tool (smt) that wraps the execution of any script or program; a built-in web interface (smtweb) for browsing, searching, filtering, and annotating records; and a Python API for deeper integration into custom workflows. It also supports re-running a previous computation and automatically verifying that the results are unchanged, making it a practical tool for validating reproducibility.
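A typical command-line session might look like this (illustrative transcript; it assumes smt is installed, the project directory is under version control, and `<label>` stands for the identifier of a previously captured record):

```
$ smt init MyProject                                # create a record store here
$ smt configure --executable=python --main=main.py
$ smt run --reason="baseline run" default.param
$ smt list --long                                   # inspect captured records
$ smt repeat <label>                                # re-run and verify results match
$ smtweb                                            # browse records in the browser
```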

quantities

NumPy-based library for arithmetic and unit conversion of physical quantities with optional uncertainty support.

[repository] [documentation]

Working with physical measurements in scientific code requires tracking not just numerical values but also their units and dimensions. The quantities library extends NumPy arrays so that every value carries explicit unit information, enabling automatic unit tracking through arithmetic operations: dividing a distance by a time yields a velocity in the correct units, and attempting to add incompatible quantities (e.g. metres and seconds) raises an error rather than silently producing nonsense.

Built on top of NumPy and compatible with its universal functions, quantities supports a comprehensive set of physical units, unit conversion, physical constants, and optional propagation of measurement uncertainties. It is used across scientific Python workflows wherever dimensional correctness needs to be enforced programmatically.
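The principle of dimensional bookkeeping can be sketched in plain Python (a toy illustration, not the quantities API itself, which attaches units to NumPy arrays and handles conversion between compatible units):

```python
# Values carry dimensions; arithmetic combines them, and adding
# incompatible quantities raises an error instead of returning nonsense.
class Q:
    def __init__(self, value, dims):      # dims: dict like {"m": 1, "s": -1}
        self.value, self.dims = value, dims

    def __add__(self, other):
        if self.dims != other.dims:
            raise ValueError("incompatible dimensions")
        return Q(self.value + other.value, self.dims)

    def __truediv__(self, other):
        dims = dict(self.dims)
        for d, p in other.dims.items():
            dims[d] = dims.get(d, 0) - p
            if dims[d] == 0:
                del dims[d]
        return Q(self.value / other.value, dims)

distance = Q(100.0, {"m": 1})
time = Q(9.58, {"s": 1})
speed = distance / time
print(speed.dims)            # {'m': 1, 's': -1} — a velocity
try:
    distance + time
except ValueError as e:
    print(e)                 # incompatible dimensions
```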

quantities was originally developed by Darren Dale; I am the current maintainer.

fairgraph

Python API for the EBRAINS Knowledge Graph, providing an object-oriented interface to neuroscience research metadata.

[repository] [documentation]

fairgraph allows researchers to interact with the EBRAINS Knowledge Graph using Python objects rather than raw API calls. Each metadata type defined in the openMINDS schema — datasets, models, subjects, brain atlases, and more — is represented as a Python class, so users can query, filter, and traverse relationships between research objects with natural syntax.

The library is used both by data curators registering new datasets and models in the Knowledge Graph and by scientists discovering and reusing existing research products. It supports reading and writing metadata and resolving linked entities, and works seamlessly within the EBRAINS JupyterLab environment. fairgraph is widely used within the EBRAINS data-sharing infrastructure and is the primary Python interface for working programmatically with brain research metadata deposited in EBRAINS.

NMPI client

Python client for submitting and managing simulation jobs on the EBRAINS Neuromorphic Computing Platform.

[repository] [documentation]

The NMPI (Neuromorphic Platform Interface) client provides programmatic access to the EBRAINS Neuromorphic Computing Platform, which offers remote access to two large-scale neuromorphic hardware systems: the BrainScaleS system developed in Heidelberg and the SpiNNaker system developed in Manchester. Users write experiment scripts using the PyNN API, which can target either hardware backend or a software simulator interchangeably.

The client handles authentication, job submission, status polling, and retrieval of results and output data. Experiment code can be provided as a local file path, a Git repository URL, or a compressed archive containing a run.py entry point. This abstraction lets computational neuroscientists run spiking neural network experiments on dedicated neuromorphic hardware without needing to manage the underlying platform infrastructure directly.

VF client

Python client for the EBRAINS Model Validation Framework, enabling systematic quantitative validation of neuroscience models.

[repository] [documentation]

The ebrains-validation-client provides a Python interface to the EBRAINS Model Validation Framework web service, which stores metadata about computational neuroscience models, validation tests, and test results. It allows researchers to register models and tests, run validations programmatically, and record results in a shared, searchable database — supporting reproducible model development workflows.

Validation tests are formalised using the SciUnit framework, which defines a standard interface for comparing model output against experimental data. Specialised testing libraries built on SciUnit — such as HippoUnit for hippocampal cell models and MorphoUnit for neuronal morphology — integrate directly with this framework.

EBRAINS Storage client

Python client for reading and writing files on EBRAINS Collaboratory Drive and Bucket (object) storage.

[repository] [documentation]

ebrains-drive provides a unified Python interface to the two main storage systems in the EBRAINS research infrastructure: the Collaboratory Drive, a Seafile-based file storage system for collaborative working, and Buckets, an object storage service backed by the EBRAINS data-proxy.

The library also supports accessing public buckets without authentication. It is used by researchers and platform services within the EBRAINS ecosystem who need programmatic, scriptable access to shared data storage from Python code or Jupyter notebooks.

HippoUnit

A SciUnit library for automated, data-driven validation testing of hippocampal CA1 pyramidal cell and interneuron models.

[repository] [documentation]

HippoUnit provides a suite of quantitative validation tests for detailed compartmental models of hippocampal neurons, built on the SciUnit framework. It compares model output against experimental electrophysiological data using z-scores as a standardised measure of discrepancy.

The library implements six tests covering somatic excitability, depolarisation block, backpropagating action potentials in the apical trunk, postsynaptic potential attenuation from dendrite to soma, oblique dendrite integration with synchronous and asynchronous synaptic inputs, and pathway interaction under theta-rhythmic stimulation. It was developed within the Human Brain Project to support reproducible model validation, led by Sára Sáray in the lab of Szabolcs Káli, and is aimed at neuroscientists building or evaluating detailed biophysical models of hippocampal circuits. For more information, see Sáray et al., 2021.
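The z-score comparison at the heart of these tests is simple to state: a feature measured from the model is scored against the experimental mean and standard deviation, with |z| near zero indicating good agreement. A sketch (the feature name and numbers below are invented for illustration):

```python
# Standardised discrepancy between a model feature and experimental data.
def z_score(model_value, exp_mean, exp_sd):
    return (model_value - exp_mean) / exp_sd

# Hypothetical example: somatic action potential amplitude in mV.
print(z_score(model_value=78.0, exp_mean=80.0, exp_sd=4.0))  # -0.5
```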

basalunit

A SciUnit library for data-driven validation testing of basal ganglia computational models using curve-similarity scoring.

[repository]

BasalUnit applies the SciUnit testing framework to the validation of basal ganglia models, measuring how closely model output traces match experimental recordings.

The library was developed as part of the European Human Brain Project (lead developer: Shailesh Appukuttan) and is intended to provide objective, repeatable quality metrics for models of the basal ganglia — a set of subcortical nuclei involved in action selection, reinforcement learning, and motor control. Like the other HBP validation libraries, it integrates with the EBRAINS model validation infrastructure.

CerebUnit

A SciUnit library for running data-driven validation tests on computational models of the cerebellum.

[repository] [documentation]

CerebUnit provides a collection of validation tests for cerebellar computational models, built on the SciUnit framework. By comparing simulation output against experimental data, it generates standardised scores that quantify how well a model reproduces observed cerebellar physiology.

The package was developed within the European Human Brain Project (lead developer: Lungsi Sharma) as part of a broader effort to establish rigorous, community-shared validation workflows for brain region models. It is closely related to the other HBP-developed SciUnit libraries (HippoUnit, BasalUnit, MorphoUnit) and is intended to integrate with the EBRAINS model validation platform.

MorphoUnit

A SciUnit library for data-driven testing and validation of neuronal morphologies against experimental measurements.

[repository]

MorphoUnit extends the SciUnit framework to the validation of neuronal morphology, allowing researchers to compare geometric and structural properties of reconstructed or synthesised cell models against experimental reference data. It provides scoring metrics that quantify the similarity between model morphologies and empirical measurements, and has been applied to cell types including fast-spiking interneurons.

The library was developed by Shailesh Appukuttan and Pedro Garcia-Rodriguez as part of the Human Brain Project, complementing electrophysiology-focused tools such as HippoUnit and BasalUnit. It targets a gap in the model-validation ecosystem by treating morphological fidelity — soma size, dendritic arborisation, axonal geometry — as a testable, quantifiable property rather than a visual judgement.

NeoViewer

A React component and REST API for interactive browser-based visualisation of electrophysiology data in any format supported by Neo.

[repository] [documentation]

NeoViewer JS (npm package neural-activity-visualizer-react) is a React component that renders analog signals, spike trains, LFPs, and other neurophysiological recordings directly in a web browser. It communicates with a backend REST API — implemented with FastAPI and deployable as a Docker container — which uses Neo to read data from any of the dozens of file formats Neo supports and returns the data as JSON. An AngularJS variant of the front-end component is also available.

The goal is to allow neuroscientists to inspect and share electrophysiology datasets without requiring local software installation. The architecture — a thin embeddable JavaScript component backed by a containerised Neo-based API — means it can be embedded in any web page or data portal, making it straightforward to add interactive previews to data repositories or electronic publications. For more information see Ates et al., 2024.

openMINDS Python

Python package providing all openMINDS metadata schemas as Python classes for creating and manipulating neuroscience metadata.

[repository] [documentation]

openMINDS is the open Metadata Initiative for Neuroscience Data Structures, a set of metadata models developed to facilitate access to and interoperability of neuroscience research products. The openMINDS Python package auto-generates Python classes from the openMINDS JSON-Schema definitions, covering schemas for datasets, models, software, specimens, and computational workflows. It supports import and export in JSON-LD format, making it straightforward to create openMINDS-compliant metadata records from Python code and to serialize them for upload to the EBRAINS Knowledge Graph.
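For illustration, a minimal openMINDS-style JSON-LD record might look roughly like this (a sketch only: the field names follow the openMINDS core Person schema, the vocabulary URL is indicative, and the @id is a placeholder blank node):

```json
{
  "@context": {"@vocab": "https://openminds.ebrains.eu/vocab/"},
  "@type": "https://openminds.ebrains.eu/core/Person",
  "@id": "_:jane-doe",
  "givenName": "Jane",
  "familyName": "Doe"
}
```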

The package is the foundational tooling layer of the openMINDS ecosystem. It is used directly by researchers curating data for EBRAINS, and is depended upon by higher-level tools such as fairgraph and bids2openminds. The build pipeline is automated via GitHub Actions, regenerating the package whenever the upstream schema definitions change and publishing updated releases to PyPI.

bids2openminds

Converts BIDS-organised neuroimaging datasets into openMINDS metadata format for EBRAINS data registration.

[repository] [documentation]

bids2openminds automates the extraction of both implicit and explicit metadata from datasets structured according to the Brain Imaging Data Structure (BIDS) standard, and their conversion into the openMINDS schemas used by the EBRAINS Knowledge Graph. Rather than manually authoring JSON-LD metadata files for each dataset submission, researchers point the tool at an existing BIDS dataset and receive ready-to-submit openMINDS metadata as output.

It is invoked from the command line or via its Python API, and is aimed at neuroimaging researchers who wish to share their BIDS datasets through the EBRAINS infrastructure. The project is part of the Open Metadata Initiative and is under active early development.

EBRAINS Workflow Library

A collection of reusable CWL tool and workflow definitions for running neuroscience analysis pipelines on EBRAINS infrastructure.

[repository]

The EBRAINS Workflow Library (EWL) provides a collection of reusable Common Workflow Language (CWL) descriptions wrapping individual EBRAINS tools and services, allowing them to be composed into larger analysis pipelines that are portable across local, cloud, and HPC computing environments.

Each component wraps a command-line tool or REST API call with a CWL document that declares its inputs, outputs, and resource requirements. Input and output data can be loaded from and written back to the EBRAINS Knowledge Graph using openMINDS-compliant metadata, enabling sharing and reproducibility. The repository supports the broader EBRAINS goal of making scientific workflows FAIR (Findable, Accessible, Interoperable, Reusable) by providing well-described, version-controlled building blocks that can be assembled by individual research groups for their own simulation and analysis needs.
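A component in this style might be described roughly as follows (a sketch: the tool name, file names, and resource figures are invented for illustration and do not correspond to an actual EWL component):

```yaml
cwlVersion: v1.2
class: CommandLineTool
baseCommand: [python, analyse_spikes.py]
inputs:
  recording:
    type: File
    inputBinding: {position: 1}
  bin_size_ms:
    type: float
    default: 5.0
    inputBinding: {prefix: --bin-size}
outputs:
  firing_rates:
    type: File
    outputBinding: {glob: "rates.json"}
requirements:
  ResourceRequirement:
    coresMin: 1
    ramMin: 1024
```

Declaring inputs, outputs, and resource needs in this machine-readable form is what lets a CWL runner schedule the same step unchanged on a laptop, a cloud VM, or an HPC cluster.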

Live Papers

An EBRAINS web service for creating interactive, resource-rich companion documents for neuroscience publications.

[Live Papers Viewer app] [Live Papers Builder app] [Live Papers API] [Collab File Cloner]

Live Papers are structured online documents that complement published scientific articles by presenting the underlying data, code, model files, and interactive visualisations within a narrative structure. Authors use the web-based Live Papers Builder to assemble a paper, linking to resources stored on EBRAINS, ModelDB, NeuroMorpho.org, the Allen Brain Atlas, and other community repositories. Readers can then download or directly run the associated resources, view embedded interactive plots of electrophysiology recordings, or launch neuronal simulations, all without leaving the browser.

Interactive tools run on the EBRAINS Research Infrastructure. The platform consists of a viewer app, a builder app, and a REST API backed by the EBRAINS Knowledge Graph. Researchers in the Human Brain Project have used it to publish interactive supplementary material accompanying articles in journals including Nature Communications, PLOS Computational Biology, and eLife. Over 30 live papers have been published to date.

Model Validation Service

An EBRAINS service providing a web API and catalog for registering computational neuroscience models and recording structured validation test results.

[Model Validation API] [Model Catalog]

The Model Validation Service addresses the challenge of assessing the scientific quality of computational brain models in a systematic and reproducible way. It consists of a REST API (model-validation-api.apps.ebrains.eu) backed by the EBRAINS Knowledge Graph, and the Model Catalog web interface where users can browse and curate model entries, validation test definitions, and the results of running those tests.

Validation tests are written using the SciUnit framework, which decouples validation test implementations from model implementation details, and several domain-specific validation libraries (HippoUnit, MorphoUnit, BasalUnit, CerebUnit, NetworkUnit, CerebTests) have been developed, providing pre-built test suites. The Python client (ebrains-validation-framework) allows tests to be run and results submitted to the service programmatically.

NeoViewer service

A browser-based service for visualising electrophysiology recordings from any file format supported by the Neo library.

[Neo Viewer API] [Neo Viewer Demo]

Neo Viewer (also called the Neural Activity Visualizer) allows neurophysiology data to be visualised and explored without requiring local software installation or format conversion. The service exposes a REST API that accepts a URL pointing to an electrophysiology data file, reads it using the Neo Python library, and returns the signal and spike-train data as JSON. The backend supports all file formats that Neo can read, including proprietary formats such as Plexon, NeuroExplorer, and AlphaOmega, as well as open formats like Neurodata Without Borders and HDF5.

On the frontend, the project provides React and AngularJS JavaScript components (published on npm as neural-activity-visualizer-react) that can be embedded in any web page to render interactive plots using the Plotly library. This makes it straightforward for data portals and scientific papers — including EBRAINS Live Papers — to embed live, zoomable views of recorded signals directly in the browser. A public demonstration instance is hosted on EBRAINS at neoviewer.apps.ebrains.eu, and the backend is easily self-hosted via Docker.

EBRAINS Provenance

An EBRAINS REST API for recording and querying the provenance of neuroscience simulations and data analysis workflows.

[Provenance API] [Provenance Visualiser]

Computational provenance is the structured record of all steps in a scientific workflow: the code run, input datasets, the computing environment, the person responsible, and the resulting outputs. The EBRAINS Provenance API (prov-api.apps.ebrains.eu) provides a lightweight REST interface for recording this information in the EBRAINS Knowledge Graph using the openMINDS computation schemas, which are compatible with the W3C PROV data model and define types for Simulation, DataAnalysis, Visualization, and related activities.

The API is designed to be called programmatically at the end of a simulation or analysis script, registering inputs, outputs, parameter sets, software versions, and hardware environment in a single request. Stored provenance records can then be queried and visualised through a companion Provenance Visualiser web app. The service complements other EBRAINS tools: provenance records link directly to datasets in the Knowledge Graph and to entries in the Model Validation Service, giving researchers a traceable audit trail from raw data through analysis to published results.

EBRAINS Neuromorphic Computing

An EBRAINS service providing remote access to the SpiNNaker and BrainScaleS neuromorphic hardware systems via a job queue and web interface.

[Neuromorphic Job Manager] [Neuromorphic Remote Access API] [Demo Neuromorphic Provider]

The neuromorphic computing systems SpiNNaker (University of Manchester) and BrainScaleS (Heidelberg University) implement spiking neural network models in specialised low-power hardware, with BrainScaleS emulating neurons at up to 1000x biological real time. The EBRAINS Neuromorphic Computing service provides a centralised job queue that allows researchers to submit simulations to these systems without needing direct hardware access. Jobs are written as Python scripts using the PyNN simulator-independent API, submitted via the hbp-neuromorphic-platform Python client or the Neuromorphic Job Manager web app, queued on the server, executed on the chosen hardware system, and results made available for download.

The service consists of a REST job queue API (nmpi-v3.hbpneuromorphic.eu), the Job Manager web application (neuromorphic-job-manager.apps.ebrains.eu), and the Python/command-line client. Job scripts can reference input data stored locally, in Git repositories, or in EBRAINS Collaboratory storage. The platform is integrated with EBRAINS authentication and the Knowledge Graph for provenance tracking of neuromorphic simulation runs.

EBRAINS Metadata Utilities

A collection of EBRAINS web applications for generating and validating structured metadata for software tools, web services, and neural activity datasets.

[Codemeta Generator] [Servicemeta Generator] [Neural Activity Resource] [Curation tools]

Registering software and services in the EBRAINS ecosystem requires structured metadata in community-standard formats. The EBRAINS Metadata Utilities are a suite of small web applications that lower the barrier to producing that metadata. The Codemeta Generator (codemeta.apps.ebrains.eu) provides a web form for creating codemeta.json files — JSON-LD documents describing software tools — with validation rules aligned to EBRAINS registration requirements. The Servicemeta Generator produces analogous structured metadata for web services. The Neural Activity Resource app supports the visualisation of detailed metadata for electrophysiology datasets. Curation tools provide internal support for EBRAINS data managers reviewing and improving submitted metadata.

All the applications feed into the EBRAINS Knowledge Graph: the metadata they produce is used to create or update Knowledge Graph entries, making research outputs findable through the EBRAINS search interface. The tools are lightweight and self-contained, designed to be usable without deep familiarity with JSON-LD or the Knowledge Graph data model.

Arkheia

A web-based data management and communication platform for storing, exploring, and sharing computational neuroscience simulation results.

[Arkheia]

Arkheia addresses the practical problem of managing the large numbers of simulation runs generated during computational neuroscience modelling work, particularly during parameter searches. It is a client-server application — built on a REST API with a web-based front end — that provides an automatic, interactive, graphical presentation of simulation results, experimental protocols, and model specifications within a browser.

Unlike centralised platforms, Arkheia is designed for self-hosting: a researcher or group can deploy their own instance either locally (as a private daily-use store for simulation results, with GUI-based exploration of parameter searches) or publicly (as a publishing platform to accompany a paper or project). The API is openly specified and separated from the storage layer, so any simulation framework can write a backend adapter to push data into an Arkheia instance. The project was developed primarily by Ján Antolík and described in a Frontiers in Neuroinformatics article.

Projects my team no longer contributes to

Elephant

An open-source Python toolkit for the statistical analysis of electrophysiology data, built on Neo data structures.

[repository] [documentation]

Elephant (Electrophysiology Analysis Toolkit) provides a modular, community-maintained library of analysis methods for neurophysiology data, operating directly on Neo objects so that data read from any Neo-supported file format can be analysed without format-specific preprocessing. Its core modules cover spike train statistics (firing rates, inter-spike interval distributions, Fano factor, CV), spike train correlation (cross-correlograms, covariance, SPADE), spectral analysis of analog signals and LFPs, spike train generation, and dissimilarity measures.
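Two of the simplest statistics in this family — mean firing rate and the coefficient of variation (CV) of inter-spike intervals — can be computed in a few lines of plain Python (shown here for illustration; Elephant itself operates on Neo SpikeTrain objects with units, not raw lists):

```python
import statistics

def firing_rate(spike_times, t_start, t_stop):
    """Mean firing rate: spike count divided by observation window (s)."""
    return len(spike_times) / (t_stop - t_start)

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals (irregularity)."""
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return statistics.stdev(isis) / statistics.mean(isis)

spikes = [0.1, 0.3, 0.4, 0.8, 0.9]        # spike times in seconds
print(firing_rate(spikes, 0.0, 1.0))      # 5.0 spikes per second
print(round(isi_cv(spikes), 3))           # ~0.707; CV = 1 for a Poisson train
```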

Elephant grew out of earlier NeuralEnsemble efforts (including NeuroTools) and is now primarily maintained by the Institute of Neuroscience and Medicine at Forschungszentrum Jülich, though Andrew Davison's team contributed to its foundation and early development. It is a core analysis component in the EBRAINS research infrastructure and is widely used alongside Neo and PyNN in computational and experimental neuroscience workflows.

mozaik

An integrated workflow framework for specifying, running, analysing, and visualising large-scale spiking neural network simulations.

[repository]

Mozaik addresses the growing complexity of large-scale, heterogeneous spiking network simulations by integrating model specification, experimental protocol definition, simulation execution, data storage, analysis, and visualisation into a single automated pipeline. Researchers define a model and a set of virtual experiments in a high-level Python API; Mozaik then orchestrates the entire workflow through to publication-quality figures, reducing the fragmentation that typically arises when each stage uses a different tool.

The framework is built on top of PyNN and the NEST simulator, and uses MPI for distributed execution on multi-core and cluster systems. Data is stored using the Neo data model, ensuring interoperability with the broader neuroscience data ecosystem. Mozaik was originally developed in the Davison team at UNIC (CNRS, Gif-sur-Yvette), primarily by Ján Antolík, and is now developed primarily in Antolík's Computational Systems Neuroscience Group at Charles University (Prague). It is particularly well suited to modelling visual cortical circuits with complex, stimulus-driven experimental protocols (see Antolík et al., 2024).

NineML

Python library implementing the NineML specification for describing and exchanging neuronal network models.

[repository] [documentation]

NineML (Network Interchange for Neuroscience Markup Language, also written 9ML) is a declarative language for specifying the dynamics and connectivity of spiking neuronal networks. The Python library maps the NineML object model onto Python classes, providing a programmatic interface for creating, reading, validating, and manipulating NineML descriptions without hand-editing XML.

Models can be serialised and deserialised in XML, JSON, YAML, and HDF5, making it straightforward to exchange model descriptions between simulators and tools. The library is used to implement the NineML Catalog, a curated collection of reusable model components. The NineML specification was originally developed by a working group of the International Neuroinformatics Coordinating Facility (INCF) and is intended to support simulator-agnostic model sharing across the computational neuroscience community, complementing tools such as PyNN and NeuroML.

parameters

Python library for defining, validating, and managing hierarchical parameter sets for computational models.

[repository] [documentation]

A key software engineering practice in computational modelling is keeping parameter sets cleanly separated from model code, so that configurations can be versioned, stored in databases, and explored systematically. The parameters package provides Python classes for building deeply hierarchical parameter sets, drawing values from random distributions, defining ranges for sensitivity analysis, specifying physical dimensions and allowed value ranges, and validating sets against schemas.

The library is particularly aimed at computational neuroscience workflows where models may have many interacting subsystems, each with their own configuration. It integrates naturally with tools such as PyNN and Sumatra. Further information is at parameters.readthedocs.io.
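The flavour of dotted hierarchical access can be sketched as follows (a minimal illustration in the spirit of the package's ParameterSet class, not the real implementation, which additionally handles validation, units, ranges, and random distributions):

```python
# Dict subclass giving attribute-style access to nested parameters.
class ParamSet(dict):
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

p = ParamSet({
    "tau_m": 10.0,                                  # membrane time constant (ms)
    "exc": ParamSet({"rate": 5.0, "weight": 0.1}),  # excitatory input subsystem
})
print(p.exc.rate)   # 5.0
```

Keeping such a structure in a separate file, rather than hard-coding values, is what makes configurations versionable and systematically explorable.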

Inactive projects

NeuroTools

Collection of Python utilities for representing, managing, and analysing data from neural network simulations.

[repository] [documentation]

NeuroTools grew out of the NeuralEnsemble community's need to avoid duplicating analysis and data-handling code across different simulator codebases. It provides modules for simulation setup, parameterisation, spike train and membrane potential analysis, data management, and visualisation, and is designed to work with simulation engines that expose a Python interface, including NEURON, NEST, and Brian.

The project is now in maintenance-only mode; active development of electrophysiology analysis functionality has migrated to Elephant. NeuroTools remains useful as a source of established analysis routines and as part of older workflows built around PyNN.

hbp-archive

High-level Python API for accessing archival storage from the Human Brain Project at CSCS.

[repository] [documentation]

The Human Brain Project generated large volumes of simulation and experimental data stored in the CSCS archival system. hbp-archive provided a simple Python interface — wrapping the underlying storage API — that allowed researchers to list, download, read, and manage files in HBP project containers without writing low-level storage calls directly.

The library offered classes for public and authenticated containers, project-level organisation, and bulk operations. The repository is now archived (January 2026) and the underlying storage is no longer accessible via this package; users working with EBRAINS data should use the successor ebrains-drive library instead.

Helmholtz

An inactive Django-based web framework for managing electrophysiology experiment metadata and recordings in neuroscience laboratories.

[Helmholtz]

Helmholtz was a modular, customisable web application for neurophysiology data management, designed to serve as an in-laboratory database for electrophysiology experiments (both in vivo and in vitro). It allowed labs to define their own metadata schemas for stimulation and recording protocols, store results, and expose the data through a web interface and a web-services API, making experimental records findable and accessible within a research group.

The project was presented at the Neuroinformatics 2010 conference as an early attempt to bring structured data management to experimental neuroscience labs, predating the Human Brain Project and the EBRAINS infrastructure. It is now inactive; the data management and provenance-tracking goals it pursued have been taken over by the EBRAINS Knowledge Graph, the Provenance API, and the openMINDS metadata framework. The source code is archived at github.com/apdavison/helmholtz.