A communication infrastructure for a million processor machine

Andrew Brown, Steve Furber, Jeff Reeve, Peter Wilson, Mark Zwolinski, John Chad, Luis Plana, David Lester

Research output: Contribution to conference › Paper

3 Citations (Scopus)

Abstract

The SpiNNaker machine is a massively parallel computing system, consisting of 1,000,000 cores. From one perspective, it has a place in Flynn's taxonomy: it is a straightforward MIMD machine. However, there is no interconnecting bus structure, and there is no attempt to maintain coherency between any of the memory banks. Inter-core communication is implemented by means of chip-to-chip packet transfer. Unlike conventional parallel machines, where packet-based communication is supported by a software layer, in SpiNNaker the packet communication fabric is built in at the hardware level, and is the only mechanism whereby an arbitrary pair of cores can communicate. There is no unifying synchronisation system, and the packet delivery infrastructure is non-deterministic. In a number of application arenas - most notably neural simulation - this architecture is remarkably powerful, and supports techniques that can only be clumsily realised in conventional machines. To realise these advantages, a software layer - the routing system - is necessary to facilitate and choreograph the movement of packets throughout the machine. The sheer size of the SpiNNaker machine makes conventional techniques difficult or impossible; the machine has to be largely self-organising. The routing tables that underpin the communication infrastructure can therefore be derived dynamically, as a prologue processing phase. This paper describes the low-level software packet management system embodied in SpiNNaker.
Original language: Undefined/Unknown
Publication status: Published - 1 May 2010
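
The table-driven multicast routing the abstract describes can be pictured with a small sketch. The C fragment below is an illustrative sketch only, not the paper's implementation: it shows ternary key/mask matching of the kind a SpiNNaker-style router performs, where an incoming packet's routing key is compared against each table entry under that entry's mask, and an unmatched packet falls through to a default route. The entry count, field names, and table contents are all hypothetical.

/*
 * Minimal sketch of ternary key/mask multicast routing of the kind a
 * SpiNNaker-style router performs in hardware. NOT the paper's code:
 * sizes, names, and table entries are hypothetical, chosen only to
 * illustrate the matching rule.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_ENTRIES 3

typedef struct {
    uint32_t key;    /* value the packet key must equal, after masking   */
    uint32_t mask;   /* 1-bits select the key bits that take part in the match */
    uint32_t route;  /* bit-vector naming output links/cores to copy the packet to */
} mc_entry_t;

/* Hypothetical table. In SpiNNaker the entries are ordinary writable
 * state loaded by software, which is what allows routes to be derived
 * dynamically rather than fixed in the wiring. */
static const mc_entry_t table[NUM_ENTRIES] = {
    { 0x00010000u, 0xFFFF0000u, 0x01u },
    { 0x00020000u, 0xFFFF0000u, 0x02u },
    { 0x00030000u, 0xFFFF0000u, 0x06u },
};

/* Return the route word for a packet key; an unmatched packet takes a
 * default route instead. */
static uint32_t mc_route(uint32_t packet_key, uint32_t default_route)
{
    for (int i = 0; i < NUM_ENTRIES; i++) {
        if ((packet_key & table[i].mask) == table[i].key) {
            return table[i].route;  /* first matching entry wins */
        }
    }
    return default_route;
}

int main(void)
{
    /* Key 0x00020042 matches entry 1 under its mask, so route = 0x02. */
    printf("route = 0x%02x\n", (unsigned)mc_route(0x00020042u, 0x10u));
    return 0;
}

Because the tables are writable state rather than wired topology, they can be computed and installed before the application runs, which corresponds to the dynamically derived, self-organising routing tables the abstract refers to.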

Cite this

Brown, A., Furber, S., Reeve, J., Wilson, P., Zwolinski, M., Chad, J., ... Lester, D. (2010). A communication infrastructure for a million processor machine.


URL: http://eprints.soton.ac.uk/270988/