Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience (Blackwell/Maryland Lectures in Language and Cognition)

E-book price: ¥9,839
  • E-book available

  • Not in stock. This title will be ordered from the publisher via an overseas book distributor.
    Shipping is usually expected within 6–9 weeks, though some items may take longer.
    Important notes:
    1. Delivery may be delayed, and the item may become unavailable.
    2. Orders for multiple copies are shipped together once the full quantity has arrived.
    3. Requests for copies in mint condition cannot be accommodated.

  • Binding: Hardcover / 319 pp.
  • Language: ENG
  • Product code: 9781405122870
  • DDC classification: 573

Brief Description

Offers a provocative argument that goes to the heart of neuroscience, proposing that the field can and should benefit from the recent advances of cognitive science and the development of information theory over the course of the last several decades.

Full Description

Memory and the Computational Brain offers a provocative argument that goes to the heart of neuroscience, proposing that the field can and should benefit from the recent advances of cognitive science and the development of information theory over the course of the last several decades. 

  • A provocative argument that impacts across the fields of linguistics, cognitive science, and neuroscience, suggesting new perspectives on learning mechanisms in the brain
  • Proposes that the field of neuroscience can and should benefit from the recent advances of cognitive science and the development of information theory
  • Suggests that the architecture of the brain is structured precisely for learning and for memory, and integrates the concept of an addressable read/write memory mechanism into the foundations of neuroscience
  • Based on lectures in the prestigious Blackwell-Maryland Lectures in Language and Cognition, and now significantly reworked and expanded to make it ideal for students and faculty

Contents

Preface.

1. Information.

Shannon's Theory of Communication.

Measuring Information.

Efficient Coding.

Information and the Brain.

Digital and Analog Signals.

Appendix: The Information Content of Rare Versus Common Events and Signals.

2. Bayesian Updating.

Bayes' Theorem and Our Intuitions About Evidence.

Using Bayes' Rule.

Summary.

3. Functions.

Functions of One Argument.

Composition and Decomposition of Functions.

Functions of More than One Argument.

The Limits to Functional Decomposition.

Functions Can Map to Multi-Part Outputs.

Mapping to Multiple-Element Outputs Does Not Increase Expressive Power.

Defining Particular Functions.

Summary: Physical/Neurobiological Implications of Facts about Functions.

4. Representations.

Some Simple Examples.

Notation.

The Algebraic Representation of Geometry.

5. Symbols.

Physical Properties of Good Symbols.

Symbol Taxonomy.

Summary.

6. Procedures.

Algorithms.

Procedures, Computation, and Symbols.

Coding and Procedures.

Two Senses of Knowing.

A Geometric Example.

7. Computation.

Formalizing Procedures.

The Turing Machine.

Turing Machine for the Successor Function.

Turing Machines for f_is_even.

Turing Machines for f_+.

Minimal Memory Structure.

General Purpose Computer.

Summary.

8. Architectures.

One-Dimensional Look-Up Tables (If-Then Implementation).

Adding State Memory: Finite-State Machines.

Adding Register Memory.

Summary.

9. Data Structures.

Finding Information in Memory.

An Illustrative Example.

Procedures and the Coding of Data Structures.

The Structure of the Read-Only Biological Memory.

10. Computing with Neurons.

Transducers and Conductors.

Synapses and the Logic Gates.

The Slowness of It All.

The Time-Scale Problem.

Synaptic Plasticity.

Recurrent Loops in Which Activity Reverberates.

11. The Nature of Learning.

Learning As Rewiring.

Synaptic Plasticity and the Associative Theory of Learning.

Why Associations Are Not Symbols.

Distributed Coding.

Learning As the Extraction and Preservation of Useful Information.

Updating an Estimate of One's Location.

12. Learning Time and Space.

Computational Accessibility.

Learning the Time of Day.

Learning Durations.

Episodic Memory.

13. The Modularity of Learning.

Example 1: Path Integration.

Example 2: Learning the Solar Ephemeris.

Example 3: "Associative" Learning.

Summary.

14. Dead Reckoning in a Neural Network.

Reverberating Circuits as Read/Write Memory Mechanisms.

Implementing Combinatorial Operations by Table-Look-Up.

The Full Model.

The Ontogeny of the Connections?

How Realistic is the Model?

Lessons to be Drawn.

Summary.

15. Neural Models of Interval Timing.

Timing an Interval on First Encounter.

Dworkin's Paradox.

Neurally Inspired Models.

The Deeper Problems.

16. The Molecular Basis of Memory.

The Need to Separate Theory of Memory from Theory of Learning.

The Coding Question.

A Cautionary Tale.

Why Not Synaptic Conductance?

A Molecular or Sub-Molecular Mechanism?

Bringing the Data to the Computational Machinery.

Is It Universal?

References.

Glossary.

Index.
