Welcome to P K Kelkar Library, Online Public Access Catalogue (OPAC)

Space-time computing with temporal neural networks / James E. Smith.

By: Smith, James E. (James Edward), 1950- [author].
Material type: Book.
Series: Synthesis digital library of engineering and computer science; Synthesis lectures on computer architecture, # 39.
Publisher: [San Rafael, California] : Morgan & Claypool, 2017.
Description: 1 PDF (xxv, 215 pages) : illustrations.
Content type: text. Media type: electronic. Carrier type: online resource.
ISBN: 9781627058902.
Subject(s): Neural networks (Computer science) | Computational neuroscience | Space and time -- Data processing | Temporal databases | spiking neural networks | temporal models | unsupervised learning | classification | neuron models | computing theory.
Genre/Form: Electronic books.
DDC classification: 006.32.
Online resources: Abstract with links to resource.
Contents:
Part I. Introduction to space-time computing and temporal neural networks -- 1. Introduction -- 1.1 Basics of neuron operation -- 1.2 Space-time communication and computation -- 1.2.1 Communication -- 1.2.2 Computation -- 1.2.3 Discussion -- 1.3 Background: neural network models -- 1.3.1 Rate coding -- 1.3.2 Temporal coding -- 1.3.3 Rate processing -- 1.3.4 Spike processing -- 1.3.5 Summary and taxonomy -- 1.4 Background: machine learning -- 1.5 Approach: interaction of computer engineering and neuroscience -- 1.6 Bottom-up analysis: a guiding analogy -- 1.7 Overview --
2. Space-time computing -- 2.1 Definition of terms -- 2.2 Feedforward computing networks -- 2.3 General TNN model -- 2.4 Space-time computing systems -- 2.5 Implications of invariance -- 2.6 TNN system architecture -- 2.6.1 Training -- 2.6.2 Computation (evaluation) -- 2.6.3 Encoding -- 2.6.4 Decoding -- 2.7 Summary: meta-architecture -- 2.7.1 Simulation -- 2.7.2 Implied functions -- 2.8 Special case: feedforward McCulloch-Pitts networks -- 2.9 Race logic --
3. Biological overview -- 3.1 Overall brain structure (very brief) -- 3.2 Neurons -- 3.2.1 Synapses -- 3.2.2 Synaptic plasticity -- 3.2.3 Frequency-current relationship -- 3.2.4 Inhibition -- 3.3 Hierarchy and columnar organization -- 3.3.1 Neurons -- 3.3.2 Columns (micro-columns) -- 3.3.3 Macro-columns -- 3.3.4 Regions -- 3.3.5 Lobes -- 3.3.6 Uniformity -- 3.4 Inter-neuron connections -- 3.4.1 Path distances -- 3.4.2 Propagation velocities -- 3.4.3 Transmission delays -- 3.4.4 Numbers of connections -- 3.4.5 Attenuation of excitatory responses -- 3.4.6 Connections summary -- 3.5 Sensory processing -- 3.5.1 Receptive fields -- 3.5.2 Saccades and whisks -- 3.5.3 Vision pathway -- 3.5.4 Waves of spikes -- 3.5.5 Feedforward processing path -- 3.5.6 Precision -- 3.5.7 Information content -- 3.5.8 Neural processing -- 3.6 Oscillations -- 3.6.1 Theta oscillations -- 3.6.2 Gamma oscillations --
Part II. Modeling temporal neural networks -- 4. Connecting TNNs with biology -- 4.1 Communication via voltage spikes -- 4.2 Columns and spike bundles -- 4.3 Spike synchronization -- 4.3.1 Aperiodic synchronization: saccades, whisks, and sniffs -- 4.3.2 Periodic synchronization -- 4.4 First spikes carry information -- 4.5 Feedforward processing -- 4.6 Simplifications summary -- 4.7 Plasticity and training -- 4.8 Fault tolerance and temporal stability -- 4.8.1 Interwoven fault tolerance -- 4.8.2 Temporal stability -- 4.8.3 Noise (or lack thereof) -- 4.9 Discussion: reconciling biological complexity with model simplicity -- 4.10 Prototype architecture overview --
5. Neuron modeling -- 5.1 Basic models -- 5.1.1 Hodgkin-Huxley neuron model -- 5.1.2 Derivation of the leaky integrate and fire (LIF) model -- 5.1.3 Spike response model (SRM0) -- 5.2 Modeling synaptic connections -- 5.3 Excitatory neuron implementation -- 5.4 The menagerie of LIF neurons -- 5.4.1 Synaptic conductance model -- 5.4.2 Biexponential SRM0 model -- 5.4.3 Single stage SRM0 -- 5.4.4 Linear leak integrate and fire (LLIF) -- 5.5 Other neuron models -- 5.5.1 Alpha function -- 5.5.2 Quadratic integrate-and-fire -- 5.6 Synaptic plasticity and training --
6. Computing with excitatory neurons -- 6.1 Single neuron clustering -- 6.1.1 Definitions -- 6.1.2 Excitatory neuron function, approximate description -- 6.1.3 Looking ahead -- 6.2 Spike coding -- 6.2.1 Volleys -- 6.2.2 Nonlinear mappings -- 6.2.3 Distance functions -- 6.3 Prior work: radial basis function (RBF) neurons -- 6.4 Excitatory neuron I: training mode -- 6.4.1 Modeling excitatory response functions -- 6.4.2 Training set -- 6.4.3 STDP update rule -- 6.4.4 Weight stabilization -- 6.5 Excitatory neuron I: compound response functions -- 6.6 Excitatory neuron model II -- 6.6.1 Neuron model derivation -- 6.6.2 Training mode -- 6.6.3 Evaluation mode -- 6.7 Attenuation of excitatory responses -- 6.8 Threshold detection -- 6.9 Excitatory neuron model II summary --
7. System architecture -- 7.1 Overview -- 7.2 Interconnection structure -- 7.3 Input encoding -- 7.4 Excitatory column operation -- 7.4.1 Evaluation -- 7.4.2 Training -- 7.4.3 Unsupervised synaptic weight training -- 7.4.4 Supervised weight training -- 7.5 Inhibition -- 7.5.1 Feedback inhibition -- 7.5.2 Lateral inhibition -- 7.5.3 Feedforward inhibition -- 7.6 Volley decoding and analysis -- 7.6.1 Temporal flattening -- 7.6.2 Decoding to estimate clustering quality -- 7.6.3 Decoding for classification -- 7.7 Training inhibition -- 7.7.1 FFI: establishing tF and kF -- 7.7.2 LI: establishing tL and kL -- 7.7.3 Excitatory neuron training in the presence of inhibition -- Part III: extended design study: clustering the MNIST dataset --
8. Simulator implementation -- 8.1 Simulator overview -- 8.2 Inter-unit communication -- 8.3 Simulating time -- 8.4 Synaptic weight training -- 8.5 Evaluation -- 8.5.1 EC block -- 8.5.2 IC block -- 8.5.3 VA block -- 8.6 Design methodology --
9. Clustering the MNIST dataset -- 9.1 MNIST workload -- 9.2 Prototype clustering architecture -- 9.3 OnOff encoding -- 9.4 Intra-CC network -- 9.5 Excitatory column (EC) -- 9.6 Lateral inhibition -- 9.7 144 RFs -- 9.8 Feedforward inhibition -- 9.9 Layer 1 result summary -- 9.10 Related work -- 9.11 Considering layer 2 --
10. Summary and conclusions -- References -- Author biography.
Abstract: Understanding and implementing the brain's computational paradigm is the one true grand challenge facing computer researchers. Not only are the brain's computational capabilities far beyond those of conventional computers, but its energy efficiency is also truly remarkable. This book, written from the perspective of a computer designer and targeted at computer researchers, is intended both to give background and to lay out a course of action for studying the brain's computational paradigm. It contains a mix of concepts and ideas drawn from computational neuroscience, combined with those of the author. As background, relevant biological features are described in terms of their computational and communication properties. The brain's neocortex is constructed of massively interconnected neurons that compute and communicate via voltage spikes, and a strong argument can be made that precise spike timing is an essential element of the paradigm. Drawing from these biological features, a mathematics-based computational paradigm is constructed. The key feature is spiking neurons that perform communication and processing in space-time, with emphasis on time. In this paradigm, time is used as a freely available resource for both communication and computation. Neuron models are first discussed in general, and one is chosen for detailed development. Using the model, single-neuron computation is first explored. Neuron inputs are encoded as spike patterns, and the neuron is trained to identify input pattern similarities. Individual neurons are building blocks for constructing larger ensembles, referred to as "columns". These columns are trained in an unsupervised manner and operate collectively to perform the basic cognitive function of pattern clustering. Similar input patterns are mapped to a much smaller set of similar output patterns, thereby dividing the input patterns into identifiable clusters. Larger cognitive systems are formed by combining columns into a hierarchical architecture. These higher-level architectures are the subject of ongoing study, and progress to date is described in detail in later chapters. Simulation plays a major role in model development, and the simulation infrastructure developed by the author is described.
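
The abstract's central notion, computing with the timing of individual spikes rather than with firing rates, can be illustrated with a minimal sketch. The Python fragment below is not taken from the book; the simple leaky integrate-and-fire dynamics, the parameter values, and the first-spike read-out are illustrative assumptions only.

# Illustrative sketch only (not code from the book): a minimal leaky
# integrate-and-fire (LIF) neuron driven by temporally coded inputs.
# Parameter values, the exponential leak, and the first-spike read-out
# are assumptions chosen to show the idea of computing with spike times.
import math

def lif_first_spike(spike_times, weights, tau=10.0, threshold=1.0,
                    dt=0.1, t_max=50.0):
    """Return the time (ms) of the neuron's first output spike, or None.

    spike_times -- input spike time per synapse (earlier = more salient)
    weights     -- synaptic weight per synapse
    """
    v = 0.0                                   # membrane potential
    t = 0.0
    while t < t_max:
        v *= math.exp(-dt / tau)              # leak toward rest each step
        for ts, w in zip(spike_times, weights):
            if ts <= t < ts + dt:             # input spike arrives this step
                v += w                        # instantaneous synaptic kick
        if v >= threshold:
            return t                          # time of first output spike
        t += dt
    return None                               # inputs too weak or too late

# A tight, early input volley crosses threshold and fires early;
# a late, weak volley may never fire at all.
print(lif_first_spike([1.0, 2.0, 3.0], [0.5, 0.5, 0.5]))      # ~3.1 ms
print(lif_first_spike([20.0, 30.0, 40.0], [0.2, 0.2, 0.2]))   # None

In a temporal code of this kind, the neuron's first output spike time reflects how well the timing of the input volley matches its weights, which is the single-neuron similarity computation the abstract refers to; the book develops far more careful neuron models (LIF variants, SRM0) and STDP-based training on top of this basic idea.
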
Holdings:
Item type: E books
Current location: PK Kelkar Library, IIT Kanpur
Status: Available
Barcode: EBKE766
Total holds: 0

Mode of access: World Wide Web.

System requirements: Adobe Acrobat Reader.

Part of: Synthesis digital library of engineering and computer science.

Includes bibliographical references (pages 205-214).

Abstract freely available; full-text restricted to subscribers or individual document purchasers.

Indexed by: Compendex; INSPEC; Google Scholar; Google Book Search.

Also available in print.

Title from PDF title page (viewed on June 26, 2017).
