pbwm

package v1.2.10

Published: Mar 7, 2024 License: BSD-3-Clause Imports: 12 Imported by: 3

README

PBWM

See the sir2 example for a working model.

In the cemer (C++ emergent) versioning system, this is version 5 of PBWM, reflecting a number of intermediate unpublished versions.

As of 9/2020, this implementation has been separated from the deep code, and there is just a PFCDeepLayer that manages all of the PFC-specific functionality, grabbing values from corresponding neurons in the Super layer, which can be of any type. Also, the rl dopamine component is now independent and any source of dopamine can be used. Current development in this space of ideas is taking place in agate and pcore packages, so this pbwm package represents its own separate stable branch going forward.

PBWM is the prefrontal-cortex basal-ganglia working memory model (O'Reilly & Frank, 2006), where the basal ganglia (BG) drives gating of PFC working memory maintenance, switching it dynamically between updating of new information vs. maintenance of existing information. It was originally inspired by existing data, biology, and theory about the role of the BG in motor action selection, and by the LSTM (long short-term memory) computational model of Hochreiter & Schmidhuber (1997), which solved limitations in existing recurrent backpropagation networks by adding dynamic input and output gates. These LSTM models have experienced a significant resurgence, along with backpropagation neural networks in general.

The simple computational idea is that the BG gating signals fire phasically to disinhibit corticothalamic loops through the PFC, enabling the robust maintenance of new information there. In the absence of such gating signals, the PFC will continue to maintain existing information. The output of the BG through the GPi (globus pallidus internal segment) and homologous SNr (substantia nigra pars reticulata) represents a major bottleneck with a relatively small number of neurons, meaning that each BG gating output affects a relatively large group of PFC neurons. One idea is that these BG gating signals target different PFC hypercolumns or stripes -- these correspond to Pools of neurons within the layers in the current implementation.

There are two PFC laminae, PFCSuper and PFCDeep, which are inspired by the broader DeepLeabra framework (in the deep directory) that incorporates the separation between superficial and deep layers in cortex and their connections with the thalamus: the thalamocortical loops run principally through the deep layers. Thus, within a given PFC area, the superficial layers can be more sensitive to current inputs, while the deep layers more robustly maintain information through the thalamocortical loops. The deep layer neurons mirror those in the superficial layer (as in a microcolumn), and receive activation from the superficial layer only when gating occurs -- after that point their dynamics can be controlled by specific profiles (stable, rising, falling, etc.), with multiple possible such trajectories per superficial neuron.

The super vs. deep distinction allows a unification of maintenance and output gating, both of which effectively open a gate from superficial to deep (via the thalamocortical loops) -- deep layers drive the principal output of frontal areas (e.g., in M1, deep layers directly drive motor actions through subcortical projections). In PFC, deep layers are a major source of top-down activation to other cortical areas, in keeping with the idea of executing "cognitive actions" that influence processing elsewhere in the brain. The only real difference is whether the neurons exhibit significant sustained maintenance, or are only phasically activated by gating. Both profiles are widely observed (e.g., Sommer & Wurtz, 2000).

The key, more complex computational challenges are:

  • How to actually sequence the updating of PFC from maintaining prior information to now encoding new information, which requires some mechanism for clearing out the prior information.

  • How maintenance and output gating within a given area are organized and related to each other.

  • Learning when the BG should drive update vs. maintain signals, which is particularly challenging because of the temporally-delayed nature of the consequence of an earlier gating action -- you only know if it was useful to maintain something later when you need it. This is the temporal credit assignment problem.

Updating

For the updating question, we compute a BG gating signal in the middle of the 1st quarter (set by GPiThal.Timing.GateQtr) of the overall AlphaCycle of processing (cycle 18 of 25, per the GPiThal.Timing.Cycle parameter). This has the immediate effect of clearing out the existing PFC activations (see PFCLayer.Maint.Clear), such that by the end of the next quarter (2), the new information is sufficiently represented in the superficial PFC neurons. At the end of the 2nd quarter (per PFCLayer.Gate.GateQtr, which is typically 1 quarter after the GPiThal version), the superficial activations drive updating of the deep layers, which then maintain the new information. In keeping with the beta-frequency cycle of the BG / PFC circuit (20 Hz, 50 msec cycle time), we support a second round of gating in the middle of the 3rd quarter (again per GPiThal.Timing.GateQtr), followed by maintenance updating in the deep layers at the end of the 4th quarter.

For PFCout layers (with PFCLayer.Gate.OutGate set), there is an OutQ1Only option (default true) which, together with PFCLayer.Gate.GateQtr set to Q1, causes output gating to update at the end of the 1st quarter, giving it more time to drive output responding. The 2nd beta-frequency gating comes too late in a standard AlphaCycle-based update sequence to drive output, so it is not useful there. However, supporting two phases of maintenance updating does allow stripes cleared by output gating (see the next subsection) to update in the 2nd half of the alpha cycle, which is useful.

In summary, for PFCmnt maintenance gating:

  • Q1, cycle 18: BG gating, PFC clearing of any existing act
  • Q2, end: Super -> Deep
  • Q3, cycle 18: BG gating, PFC clearing
  • Q4, end: Super -> Deep

And PFCout output gating:

  • Q1, cycle 18: BG gating -- triggers clearing of corresponding Maint stripe
  • Q1, end: Super -> Deep, so Deep can drive network output layers
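
These gating quarters and cycles are ordinary layer parameters. As a minimal sketch (assuming the default "GPiThal" layer name produced by the wizard functions documented below; this is illustrative, not verbatim configuration code):

// setGateTiming is an illustrative helper, not part of the package, showing
// how one might adjust GPiThal gating timing after building the network.
func setGateTiming(net *leabra.Network, cycle int) {
	gpi := net.LayerByName("GPiThal").(*pbwm.GPiThalLayer)
	gpi.Timing.Cycle = cycle // cycle within the gating quarter to check the threshold (default 18)
	// Timing.GateQtr is a leabra.Quarters bitflag (typically Q1 and Q3); per the
	// GPiTimingParams docs it must be set with the 32-bit bitflag routines.
}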

Maint & Output Organization

For the organization of Maint and Out gating, we make the simplifying assumption that each hypercolumn ("stripe") of maintenance PFC has a corresponding output stripe, so you can separately decide to maintain something for an arbitrary amount of time, and subsequently use that information via output gating. A key question then becomes: what happens to the maintained information? Empirically, many studies show a sudden termination of active maintenance at the point of an action using the maintained information (Sommer & Wurtz, 2000), which makes computational sense: "use it and lose it". In addition, it is difficult to come up with a good positive signal to independently drive clearing: it is much easier to know when you do need information than to know the point at which you no longer need it. Thus, we have output gating clear the corresponding maintenance gating (there is an option to turn this off, if you want to experiment). The availability of "open" stripes for subsequent maintenance after this clearing appears to be computationally beneficial in our tests.

Learning

Finally, for the learning question, we adopt a computationally powerful form of trace-based dopamine-modulated learning (in MatrixTracePrjn), where each BG gating action leaves a synaptic trace, which is finally converted into a weight change as a function of the next phasic dopamine signal, providing a summary "outcome" evaluation of the net value of the recent gating actions. This directly solves the temporal credit assignment problem, by allowing the synapses to bridge the temporal gap between action and outcome, over a reasonable time window, with multiple such gating actions separately encodable.
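
As a minimal sketch of this logic (the variable names and the exact accumulation rule are illustrative assumptions, not the verbatim MatrixTracePrjn code, which stores its trace in the NTr and Tr synapse variables):

// traceDWt sketches the trace-based, dopamine-gated learning described above.
// It is an illustrative assumption of the rule, not the MatrixTracePrjn source.
func traceDWt(gateAct, sendAct, recvAct, da, lrate float32, tr *float32) float32 {
	ntr := gateAct * sendAct * recvAct // new trace left by this gating event
	*tr += ntr                         // trace bridges the gap until a DA outcome arrives
	if da == 0 {
		return 0 // no outcome signal yet: no weight change
	}
	dwt := lrate * da * *tr // phasic DA converts the accumulated trace into learning
	*tr = 0                 // the outcome consumes the trace
	return dwt
}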

Biologically, we suggest that widely-studied synaptic tagging mechanisms have the appropriate properties for this trace mechanism. Extensive research has shown that these synaptic tags, based on actin fiber networks in the synapse, can persist for up to 90 minutes, and when a subsequent strong learning event occurs, the tagged synapses are also strongly potentiated (Redondo & Morris, 2011, Rudy, 2015, Bosch & Hayashi, 2012).

This form of trace-based learning is very effective computationally, because it does not require any other mechanisms to enable learning about the reward implications of earlier gating events. In earlier versions of the PBWM model, we relied on CS (conditioned stimulus) based phasic dopamine to reinforce gating, but this scheme requires that the PFC maintained activations function as a kind of internal CS signal, and that the amygdala learn to decode these PFC activation states to determine if a useful item had been gated into memory.

The CS-driven DA under the trace-based framework effectively serves to reinforce sub-goal actions that lead to the activation of a CS, which in turn predicts final reward outcomes. Thus, the CS DA provides an intermediate, bridging kind of reinforcement, evaluating the set of actions leading up to that point -- a kind of "checkpoint" of success prior to obtaining the real thing.

Layers

Here are the details about each different layer type in PBWM (a construction sketch using the wizard functions follows the list):

  • MatrixLayer: this is the dynamic gating system representing the matrix units within the dorsal striatum of the basal ganglia. The MatrixGo layer contains the "Go" (direct pathway) units (DaR = D1R), while the MatrixNoGo layer contains the "NoGo" (indirect pathway) units (DaR = D2R). The Go units, expressing more D1 receptors, increase their weights from dopamine bursts and decrease weights from dopamine dips, and vice-versa for the NoGo units with more D2 receptors. In a change from earlier versions of this model that is more consistent with the BG biology, most of the competition to select the final gating action happens in the GPe and GPi (with the hyperdirect pathway to the subthalamic nucleus also playing a critical role, but not included in this more abstracted model), with only a relatively weak level of competition within the Matrix layers. We also combine the maintenance and output gating stripes in the same Matrix layer, which allows them to all compete with each other here, and more importantly in the subsequent GPi and GPe stripes. This competitive interaction is critical for allowing the system to learn to properly coordinate gating: when it is appropriate to update / store new information for maintenance vs. when it is important to select from currently stored representations via output gating.

  • GPeNoGo: This is a standard pbwm.Layer that provides a first round of competition between all the NoGo stripes, which critically prevents the model from driving NoGo to all of the stripes at once. Indeed, there is physiological and anatomical evidence for NoGo unit collateral inhibition onto other NoGo units. Without this NoGo-level competition, models frequently ended up in a state where all stripes were inhibited by NoGo, and when nothing happens, nothing can be learned, so the model essentially fails at that point!

  • GPiThalLayer: Implements a strong competition for selecting which stripe gets to gate, based on projections from the MatrixGo units, and the NoGo influence from GPeNoGo, which can effectively veto some of the possible stripes to prevent gating. We have combined the functions of the GPi (or SNr) and the Thalamus into a single abstracted layer, which has the excitatory kinds of outputs that we would expect from the thalamus, but also implements the stripe-level competition mediated by the GPi/SNr. If there is more overall Go than NoGo activity, then the GPiThal unit gets activated, which then effectively establishes an excitatory loop through the corresponding deep layers of the PFC, with which the thalamus neurons are bidirectionally interconnected. This layer uses the GateLayer framework to update GateState, which is broadcast to the Matrix and PFC layers so they have current gating state information.

  • PFCDeepLayer: Uses DeepLeabra-like super vs. deep dynamics, with gating (via GateState values broadcast from GPiThal) determining when deep updates from super. Actual maintenance in the deep layer can be shaped using PFCDyn fixed dynamics, which provide a simple way of specifying a temporally-evolving activation pattern over the layer, with the minimal case being just stable fixed maintenance. The different dyn patterns occupy extra Y-axis rows, with the inner loop being the super Y axis and the outer loop being the different dyn types. Whereas there was previously some ambiguity about whether the super layer reflected the maintenance in deep to some extent, or just the current input state, the current version makes a stronger distinction: super only represents the current sensory inputs. This also avoids any need for explicit clearing of super prior to gating in new information. Furthermore, the super layer can be any type of layer -- all of the PFC-specific functionality is now in the PFCDeepLayer type.
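
The sketch below shows how these layers are typically assembled with the AddPBWM wizard function documented in the API reference below; the pool sizes are arbitrary examples, and the import paths assume the v1 leabra module layout:

package main

import (
	"fmt"

	"github.com/emer/leabra/leabra"
	"github.com/emer/leabra/pbwm"
)

func main() {
	net := &leabra.Network{}
	net.InitName(net, "PBWMDemo")

	// 1 pool row, 2 maintenance + 2 output stripes, 7x7 neurons per BG pool,
	// 4x4 neurons per PFC pool (sizes chosen only for illustration).
	mtxGo, _, _, gpi, _, pfcMnt, _, _, _ := pbwm.AddPBWM(net, "", 1, 2, 2, 7, 7, 4, 4)
	fmt.Println(mtxGo.Name(), gpi.Name(), pfcMnt.Name())

	net.Defaults()
	if err := net.Build(); err != nil {
		panic(err)
	}
}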

Dopamine Layers

See the rl package for the DALayer interface and simpler forms of phasic dopamine algorithms, including Rescorla-Wagner and Temporal Differences (TD).

Given that PBWM minimally requires an RW-level "primary value" dopamine signal, basic models can use this as follows:

  • Rew, RWPred, SNc: The Rew layer represents the reward activation driven on Recall trials based on whether the model gets the problem correct or not, with either a 0 (error, no reward) or 1 (correct, reward) activation. RWPred is the prediction layer that learns, based on dopamine signals, to predict how much reward will be obtained on this trial. The SNc is the final dopamine unit activation, reflecting reward prediction errors: when outcomes are better (worse) than expected, or states are predictive of reward (no reward), this unit will increase (decrease) its activity. For convenience, tonic (baseline) states are represented here with zero values, so that phasic deviations above and below this value are observable as positive or negative activations. (In the real system negative activations are not possible, but negative prediction errors are observed as a pause in dopamine unit activity, such that firing rate drops from baseline tonic levels.) Biologically the SNc actually projects dopamine to the dorsal striatum, while the VTA projects to the ventral striatum, but there is no functional difference at this level of the model.
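
A one-line sketch of the prediction-error computation implied here (illustrative only; the actual Rescorla-Wagner computation lives in the rl package):

// rwDA sketches the reward prediction error carried by the SNc unit: positive
// when the outcome is better than predicted, negative when worse.
func rwDA(rew, pred float32) float32 {
	return rew - pred
}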

Implementation Details

Network

The pbwm.Network provides "wizard" methods for constructing and configuring standard PBWM and RL components.

It extends the core Cycle method called every cycle of updating as follows:

func (nt *Network) Cycle(ltime *leabra.Time) {
	nt.Network.CycleImpl(ltime) // basic version from leabra.Network
	nt.GateSend(ltime)          // GateLayer (GPiThal) computes gating, sends to other layers
	nt.RecGateAct(ltime)        // Record activation state at time of gating (in ActG neuron var)

	nt.EmerNet.(leabra.LeabraNetwork).CyclePostImpl(ltime) // always call this after std cycle..
}

which determines the additional steps of computation after the activations have been updated in the current cycle, supporting the extra gating and DA modulation functions.

There is also a key addition to the QuarterFinal method, which calls DeepMaint, where deep layers get their updated maintenance inputs from corresponding superficial layers (mediated through layer 5IB neurons in the biology, which burst periodically). This is when the PFC layers update deep from super.
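
A hypothetical sketch of that addition, following the naming in the prose above (not verified against the source):

// Hypothetical sketch of the QuarterFinal extension described above -- the
// method and call names follow the prose, not the verbatim source.
func (nt *Network) QuarterFinal(ltime *leabra.Time) {
	nt.Network.QuarterFinal(ltime) // standard leabra end-of-quarter updating
	nt.DeepMaint(ltime)            // PFC deep layers grab maintenance input from super
}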

GPiThal and GateState

GPiThalLayer is the source of the key GateState:

GateState.Cnt provides the key tracker of gating state. It is updated separately in each layer -- GPiThal only broadcasts the basic Act and Now signals. For PFC, Cnt is (see the helper sketch after this list):

  • -1 = initialized to this value, not maintaining.
  • 0 = just gated -- any time the GPiThal activity exceeds the gating threshold (at the specified Timing.Cycle) we reset the counter (re-gate)
  • >= 1: maintaining -- first gating goes to 1 in QuarterFinal of the BurstQtr gating quarter, and counts up thereafter.
  • <= -1: not maintaining -- when cleared, reset to -1 in Quarter_Init just following the clearing quarter, and counts down thereafter.
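
Illustrative helpers (not part of the pbwm API) that simply restate these conventions:

// These helpers are purely illustrative of the Cnt conventions listed above.
func justGated(cnt int) bool      { return cnt == 0 }
func isMaintaining(cnt int) bool  { return cnt >= 1 }
func notMaintaining(cnt int) bool { return cnt <= -1 }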

All gated PBWM layers are of type GateLayer which just has infrastructure to maintain GateState values and synchronize across layers.

PFCLayer

PFCDeepLayer supports mnt and out types, and handles all the PFC-specific gating logic.

  • Cycle: ActFmG calls Gating, which only does something when GateState.Now = true: it resets Cnt = 0 indicating just gated.

  • QuarterFinal: calls DeepMaint which grabs the corresponding superficial activation, if gating has happened, and otherwise it just updates the MaintGe conductance based on dynamics.

In summary, for PFCmnt maintenance gating:

  • Q1, cycle 18: BG gating, PFC clearing of any existing act: Gating call
  • Q2, end: Super -> Deep (DeepMaint)
  • ...

And PFCout output gating:

  • Q1, cycle 18: BG gating -- triggers clearing of corresponding Maint stripe
  • Q1, end: Super -> Deep so Deep can drive network output layers

TODO

  • Matrix uses net_gain = 0.5 -- why?? important for SIR2?, but not SIR1

  • patch -- not essential for SIR1, test in SIR2

  • CIN -- did not work well at all surprisingly -- works much better in pcore..

  • del_inhib -- delta inhibition -- SIR1 MUCH faster learning without! test for SIR2

  • slow_wts -- not important for SIR1, test for SIR2

References

Bosch, M., & Hayashi, Y. (2012). Structural plasticity of dendritic spines. Current Opinion in Neurobiology, 22(3), 383–388. https://doi.org/10.1016/j.conb.2011.09.002

Hochreiter, S., & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9, 1735–1780.

O'Reilly, R. C., & Frank, M. J. (2006). Making Working Memory Work: A Computational Model of Learning in the Frontal Cortex and Basal Ganglia. Neural Computation, 18, 283–328.

Redondo, R. L., & Morris, R. G. M. (2011). Making memories last: The synaptic tagging and capture hypothesis. Nature Reviews Neuroscience, 12(1), 17–30. https://doi.org/10.1038/nrn2963

Rudy, J. W. (2015). Variation in the persistence of memory: An interplay between actin dynamics and AMPA receptors. Brain Research, 1621, 29–37. https://doi.org/10.1016/j.brainres.2014.12.009

Sommer, M. A., & Wurtz, R. H. (2000). Composition and topographic organization of signals sent from the frontal eye field to the superior colliculus. Journal of Neurophysiology, 83(4), 1979–2001.

Documentation

Overview

Package pbwm provides the prefrontal cortex basal ganglia working memory (PBWM) model of the basal ganglia (BG) and prefrontal cortex (PFC) circuitry that supports dynamic BG gating of PFC robust active maintenance.

In the Go framework, it is version 1 (was version 5 in cemer).

This package builds on the deep package for defining thalamocortical circuits involved in predictive learning -- the BG basically acts to gate these circuits.

It provides a basis for dopamine-modulated processing of all types, and is the base package for the PVLV model package built on top of it.

There are multiple levels of functionality to allow for flexibility in exploring new variants.

Each different Layer type defines and manages its own Neuron type, despite some redundancy, so that each layer has exactly the variables it needs. However, a Network must have a single consistent set of Neuron variables, which is given by NeuronVars and the NeurVars enum. In many cases, those "neuron" variables are actually stored in the layer itself rather than at the per-neuron level.

Naming rule: DA when a singleton, DaMod (lowercase a) when CamelCased with something else

# Basic Level

  • pbwm.Layer has DA, ACh, SE -- can be modulated

  • ModLayer adds DA-modulated learning on top of basic Leabra learning

  • GateLayer has GateStates in 1-to-1 correspondence with Pools, to keep track of gating state -- source gating layers can send updates to other layers.

# PBWM specific

  • MatrixLayer for dorsal striatum gating of DLPFC areas, with separate D1R = Go and D2R = NoGo layers. Each layer contains Maint and Out GateTypes, as a function of the outer 4D Pool X dimension (Maint on the left, Out on the right).

  • GPiThalLayer receives from Matrix Go and GPe NoGo to compute final WTA gating, and broadcasts GateState info to its SendTo layers. See Timing params for timing.

  • PFCLayer for active maintenance -- reproduces a DeepLeabra-like framework, with update timing according to BurstQtr. Gating is computed in the quarter *before* updating in BurstQtr. At the *end* of BurstQtr, Super Burst -> Deep Ctxt to drive maintenance via Ctxt in Deep.

Index

Constants

This section is empty.

Variables

var (
	// NeuronVars are the pbwm neurons plus some custom variables that sub-types use for their
	// algo-specific cases -- need a consistent set of overall network-level vars for display / generic
	// interface.
	NeuronVars    = []string{"DA", "DALrn", "ACh", "SE", "GateAct", "GateNow", "GateCnt", "ActG", "Maint", "MaintGe"}
	NeuronVarsMap map[string]int
	NeuronVarsAll []string
)
var KiT_CINLayer = kit.Types.AddType(&CINLayer{}, leabra.LayerProps)
var KiT_DaHebbPrjn = kit.Types.AddType(&DaHebbPrjn{}, leabra.PrjnProps)
var KiT_DaReceptors = kit.Enums.AddEnum(DaReceptorsN, kit.NotBitFlag, nil)
var KiT_GPiThalLayer = kit.Types.AddType(&GPiThalLayer{}, leabra.LayerProps)
var KiT_GPiThalPrjn = kit.Types.AddType(&GPiThalPrjn{}, leabra.PrjnProps)
var KiT_GateLayer = kit.Types.AddType(&GateLayer{}, leabra.LayerProps)
var KiT_GateTypes = kit.Enums.AddEnum(GateTypesN, kit.NotBitFlag, nil)
var KiT_Layer = kit.Types.AddType(&Layer{}, leabra.LayerProps)
var KiT_MatrixLayer = kit.Types.AddType(&MatrixLayer{}, leabra.LayerProps)
var KiT_ModLayer = kit.Types.AddType(&ModLayer{}, leabra.LayerProps)
var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
var KiT_PFCDeepLayer = kit.Types.AddType(&PFCDeepLayer{}, leabra.LayerProps)
var KiT_Valences = kit.Enums.AddEnum(ValencesN, kit.NotBitFlag, nil)
var NetworkProps = leabra.NetworkProps
var SynVarsAll []string

SynVarsAll is the pbwm collection of all synapse-level vars (includes TraceSynVars)

var TraceSynVars = []string{"NTr", "Tr"}

Functions

func AddDorsalBG added in v1.1.11

func AddDorsalBG(nt *leabra.Network, prefix string, nY, nMaint, nOut, nNeurY, nNeurX int) (mtxGo, mtxNoGo, gpe, gpi, cin leabra.LeabraLayer)

AddDorsalBG adds MatrixGo, NoGo, GPe, GPiThal, and CIN layers, with given optional prefix. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. Appropriate PoolOneToOne connections are made to drive GPiThal, with BgFixed class name set so they can be styled appropriately (no learning, WtRnd.Mean=0.8, Var=0)

func AddDorsalBGPy added in v1.1.15

func AddDorsalBGPy(nt *leabra.Network, prefix string, nY, nMaint, nOut, nNeurY, nNeurX int) []leabra.LeabraLayer

AddDorsalBGPy adds MatrixGo, NoGo, GPe, GPiThal, and CIN layers, with given optional prefix. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. Appropriate PoolOneToOne connections are made to drive GPiThal, with BgFixed class name set so they can be styled appropriately (no learning, WtRnd.Mean=0.8, Var=0) Py is Python version, returns layers as a slice

func AddPBWM added in v1.1.11

func AddPBWM(nt *leabra.Network, prefix string, nY, nMaint, nOut, nNeurBgY, nNeurBgX, nNeurPfcY, nNeurPfcX int) (mtxGo, mtxNoGo, gpe, gpi, cin, pfcMnt, pfcMntD, pfcOut, pfcOutD leabra.LeabraLayer)

AddPBWM adds a DorsalBG and PFC with given params Defaults to simple case of basic maint dynamics in Deep

func AddPBWMPy added in v1.1.15

func AddPBWMPy(nt *leabra.Network, prefix string, nY, nMaint, nOut, nNeurBgY, nNeurBgX, nNeurPfcY, nNeurPfcX int) []leabra.LeabraLayer

AddPBWMPy adds a DorsalBG and PFC with given params Defaults to simple case of basic maint dynamics in Deep Py is Python version, returns layers as a slice

func AddPFC added in v1.1.11

func AddPFC(nt *leabra.Network, prefix string, nY, nMaint, nOut, nNeurY, nNeurX int, dynMaint bool) (pfcMnt, pfcMntD, pfcOut, pfcOutD leabra.LeabraLayer)

AddPFC adds paired PFCmnt, PFCout and associated Deep layers, with given optional prefix. nY = number of pools in Y dimension, nMaint, nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. dynMaint is true for maintenance-only dyn, else full set of 5 dynamic maintenance types. Appropriate OneToOne connections are made between PFCmntD -> PFCout.

func AddPFCLayer added in v1.1.11

func AddPFCLayer(nt *leabra.Network, name string, nY, nX, nNeurY, nNeurX int, out, dynMaint bool) (sp, dp leabra.LeabraLayer)

AddPFCLayer adds a PFCLayer, super and deep, of given size, with given name. nY, nX = number of pools in Y, X dimensions, and each pool has nNeurY, nNeurX neurons. out is true for output-gating layer, and dynmaint is true for maintenance-only dyn, else Full set of 5 dynamic maintenance types. Both have the class "PFC" set. deep is positioned behind super.

func AddPFCPy added in v1.1.15

func AddPFCPy(nt *leabra.Network, prefix string, nY, nMaint, nOut, nNeurY, nNeurX int, dynMaint bool) []leabra.LeabraLayer

AddPFCPy adds paired PFCmnt, PFCout and associated Deep layers, with given optional prefix. nY = number of pools in Y dimension, nMaint, nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. dynMaint is true for maintenance-only dyn, else full set of 5 dynamic maintenance types. Appropriate OneToOne connections are made between PFCmntD -> PFCout. Py is Python version, returns layers as a slice

Types

type CINLayer added in v1.1.11

type CINLayer struct {
	leabra.Layer

	// threshold on reward values from RewLays, to count as a significant reward event, which then drives maximal ACh -- set to 0 to disable this nonlinear behavior
	RewThr float32 `` /* 164-byte string literal not displayed */

	// Reward-representing layer(s) from which this computes ACh as Max absolute value
	RewLays emer.LayNames `desc:"Reward-representing layer(s) from which this computes ACh as Max absolute value"`

	// list of layers to send acetylcholine to
	SendACh rl.SendACh `desc:"list of layers to send acetylcholine to"`

	// acetylcholine value for this layer
	ACh float32 `desc:"acetylcholine value for this layer"`
}

CINLayer (cholinergic interneuron) reads reward signals from named source layer(s) and sends the Max absolute value of that activity as the positively rectified, non-prediction-discounted reward signal computed by CINs, sent as an acetylcholine (ACh) signal. To handle positive-only reward signals, you need to include both a reward prediction and a reward outcome layer.

func AddCINLayer added in v1.1.11

func AddCINLayer(nt *leabra.Network, name string) *CINLayer

AddCINLayer adds a CINLayer, with a single neuron.

func (*CINLayer) ActFmG added in v1.1.11

func (ly *CINLayer) ActFmG(ltime *leabra.Time)

func (*CINLayer) Build added in v1.1.11

func (ly *CINLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*CINLayer) CyclePost added in v1.1.11

func (ly *CINLayer) CyclePost(ltime *leabra.Time)

CyclePost is called at end of Cycle We use it to send ACh, which will then be active for the next cycle of processing.

func (*CINLayer) Defaults added in v1.1.11

func (ly *CINLayer) Defaults()

func (*CINLayer) GetACh added in v1.1.11

func (ly *CINLayer) GetACh() float32

func (*CINLayer) MaxAbsRew added in v1.1.11

func (ly *CINLayer) MaxAbsRew() float32

MaxAbsRew returns the maximum absolute value of reward layer activations

func (*CINLayer) SetACh added in v1.1.11

func (ly *CINLayer) SetACh(ach float32)

func (*CINLayer) UnitVal1D added in v1.1.11

func (ly *CINLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*CINLayer) UnitVarIdx added in v1.1.11

func (ly *CINLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*CINLayer) UnitVarNum added in v1.1.11

func (ly *CINLayer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

type DaHebbPrjn

type DaHebbPrjn struct {
	leabra.Prjn
}

DaHebbPrjn does dopamine-modulated Hebbian learning -- i.e., the 3-factor learning rule: Da * Recv.Act * Send.Act

func (*DaHebbPrjn) DWt

func (pj *DaHebbPrjn) DWt()

DWt computes the weight change (learning) -- on sending projections.
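
A sketch of the 3-factor rule named above (lrate is an assumed learning-rate parameter; the real DWt iterates over the sending projection's synapses):

// daHebbDWt sketches the 3-factor dopamine-modulated Hebbian rule described
// for DaHebbPrjn: Da * Recv.Act * Send.Act, scaled by an assumed learning rate.
func daHebbDWt(da, recvAct, sendAct, lrate float32) float32 {
	return lrate * da * recvAct * sendAct
}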

func (*DaHebbPrjn) Defaults

func (pj *DaHebbPrjn) Defaults()

type DaModParams

type DaModParams struct {

	// whether to use dopamine modulation
	On bool `desc:"whether to use dopamine modulation"`

	// [viewif: On] modulate gain instead of Ge excitatory synaptic input
	ModGain bool `viewif:"On" desc:"modulate gain instead of Ge excitatory synaptic input"`

	// [viewif: On] how much to multiply Da in the minus phase to add to Ge input -- use negative values for NoGo/indirect pathway/D2 type neurons
	Minus float32 `` /* 145-byte string literal not displayed */

	// [viewif: On] how much to multiply Da in the plus phase to add to Ge input -- use negative values for NoGo/indirect pathway/D2 type neurons
	Plus float32 `` /* 144-byte string literal not displayed */

	// [viewif: On&&ModGain] for negative dopamine, how much to change the default gain value as a function of dopamine: gain = gain * (1 + da * NegNain) -- da is multiplied by minus or plus depending on phase
	NegGain float32 `` /* 208-byte string literal not displayed */

	// [viewif: On&&ModGain] for positive dopamine, how much to change the default gain value as a function of dopamine: gain = gain * (1 + da * PosGain) -- da is multiplied by minus or plus depending on phase
	PosGain float32 `` /* 208-byte string literal not displayed */
}

Params for effects of dopamine (Da) based modulation, typically adding a Da-based term to the Ge excitatory synaptic input. Plus-phase = learning effects relative to minus-phase "performance" dopamine effects

func (*DaModParams) Defaults

func (dm *DaModParams) Defaults()

func (*DaModParams) Gain

func (dm *DaModParams) Gain(da, gain float32, plusPhase bool) float32

Gain returns da-modulated gain value

func (*DaModParams) GainModOn

func (dm *DaModParams) GainModOn() bool

GainModOn returns true if modulating Gain

func (*DaModParams) Ge

func (dm *DaModParams) Ge(da, ge float32, plusPhase bool) float32

Ge returns da-modulated ge value
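
A sketch of the phase-dependent Ge modulation implied by the Minus/Plus field docs above (an illustrative assumption, not the verified implementation):

// daModGe sketches how the Minus/Plus factors might add a DA-based term to Ge
// in each phase -- an assumption for illustration only.
func daModGe(dm *DaModParams, da, ge float32, plusPhase bool) float32 {
	if !dm.On {
		return ge
	}
	if plusPhase {
		return ge + dm.Plus*da
	}
	return ge + dm.Minus*da
}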

func (*DaModParams) GeModOn

func (dm *DaModParams) GeModOn() bool

GeModOn returns true if modulating Ge

type DaReceptors

type DaReceptors int

DaReceptors for D1R and D2R dopamine receptors

const (
	// D1R primarily expresses Dopamine D1 Receptors -- dopamine is excitatory and bursts of dopamine lead to increases in synaptic weight, while dips lead to decreases -- direct pathway in dorsal striatum
	D1R DaReceptors = iota

	// D2R primarily expresses Dopamine D2 Receptors -- dopamine is inhibitory and bursts of dopamine lead to decreases in synaptic weight, while dips lead to increases -- indirect pathway in dorsal striatum
	D2R

	DaReceptorsN
)

func (*DaReceptors) FromString

func (i *DaReceptors) FromString(s string) error

func (DaReceptors) MarshalJSON

func (ev DaReceptors) MarshalJSON() ([]byte, error)

func (DaReceptors) String

func (i DaReceptors) String() string

func (*DaReceptors) UnmarshalJSON

func (ev *DaReceptors) UnmarshalJSON(b []byte) error

type GPiGateParams

type GPiGateParams struct {

	// [def: 3] extra netinput gain factor to compensate for reduction in Ge from subtracting away NoGo -- this is *IN ADDITION* to adding the NoGo factor as an extra gain: Ge = (GeGain + NoGo) * (GoIn - NoGo * NoGoIn)
	GeGain float32 `` /* 217-byte string literal not displayed */

	// [def: 1,0.1] [min: 0] how much to weight NoGo inputs relative to Go inputs (which have an implied weight of 1 -- this also up-scales overall Ge to compensate for subtraction
	NoGo float32 `` /* 178-byte string literal not displayed */

	// [def: 0.2] threshold for gating, applied to activation -- when any GPiThal unit activation gets above this threshold, it counts as having gated, driving updating of GateState which is broadcast to other layers that use the gating signal
	Thr float32 `` /* 242-byte string literal not displayed */

	// [def: true] Act value of GPiThal unit reflects gating threshold: if below threshold, it is zeroed -- see ActLrn for underlying non-thresholded activation
	ThrAct bool `` /* 159-byte string literal not displayed */
}

GPiGateParams has gating parameters for gating in GPiThal layer, including threshold

func (*GPiGateParams) Defaults

func (gp *GPiGateParams) Defaults()

func (*GPiGateParams) GeRaw

func (gp *GPiGateParams) GeRaw(goRaw, nogoRaw float32) float32

GeRaw returns the net GeRaw from go, nogo specific values
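
A sketch of that formula, as given in the GeGain / NoGo field docs above (illustrative):

// geRawSketch illustrates the net input formula from the GeGain/NoGo docs:
// Ge = (GeGain + NoGo) * (goRaw - NoGo*nogoRaw). Treat as an assumption, not
// the verbatim implementation.
func geRawSketch(gp *GPiGateParams, goRaw, nogoRaw float32) float32 {
	return (gp.GeGain + gp.NoGo) * (goRaw - gp.NoGo*nogoRaw)
}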

type GPiNeuron

type GPiNeuron struct {

	// gating activation -- the activity value when gating occurred in this pool
	ActG float32 `desc:"gating activation -- the activity value when gating occurred in this pool"`
}

GPiNeuron contains extra variables for GPiThalLayer neurons -- stored separately

type GPiThalLayer

type GPiThalLayer struct {
	GateLayer

	// [view: inline] timing parameters determining when gating happens
	Timing GPiTimingParams `view:"inline" desc:"timing parameters determining when gating happens"`

	// [view: inline] gating parameters determining threshold for gating etc
	Gate GPiGateParams `view:"inline" desc:"gating parameters determining threshold for gating etc"`

	// list of layers to send GateState to
	SendTo []string `desc:"list of layers to send GateState to"`

	// slice of GPiNeuron state for this layer -- flat list of len = Shape.Len().  You must iterate over index and use pointer to modify values.
	GPiNeurs []GPiNeuron `` /* 144-byte string literal not displayed */
}

GPiThalLayer represents the combined Winner-Take-All dynamic of GPi (SNr) and Thalamus. It is the final arbiter of gating in the BG, weighing Go (direct) and NoGo (indirect) inputs from MatrixLayers (indirectly via the GPe layer in the case of NoGo). It uses a 4D structure so that it matches the 4D structure in the Matrix layers.

func AddGPiThalLayer added in v1.1.11

func AddGPiThalLayer(nt *leabra.Network, name string, nY, nMaint, nOut int) *GPiThalLayer

AddGPiThalLayer adds a GPiThalLayer of given size, with given name. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has 1x1 neurons.

func (*GPiThalLayer) AddSendTo

func (ly *GPiThalLayer) AddSendTo(laynm string)

AddSendTo adds given layer name to the list of layers to send GateState to

func (*GPiThalLayer) AlphaCycInit

func (ly *GPiThalLayer) AlphaCycInit(updtActAvg bool)

func (*GPiThalLayer) Build

func (ly *GPiThalLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*GPiThalLayer) Defaults

func (ly *GPiThalLayer) Defaults()

func (*GPiThalLayer) GFmInc

func (ly *GPiThalLayer) GFmInc(ltime *leabra.Time)

GFmInc integrates new synaptic conductances from increments sent during last SendGDelta.

func (*GPiThalLayer) GateFmAct

func (ly *GPiThalLayer) GateFmAct(ltime *leabra.Time)

GateFmAct updates GateState from current activations, at time of gating

func (*GPiThalLayer) GateSend

func (ly *GPiThalLayer) GateSend(ltime *leabra.Time)

GateSend updates gating state and sends it along to other layers

func (*GPiThalLayer) GateType

func (ly *GPiThalLayer) GateType() GateTypes

func (*GPiThalLayer) InitActs

func (ly *GPiThalLayer) InitActs()

func (*GPiThalLayer) MatrixPrjns

func (ly *GPiThalLayer) MatrixPrjns() (goPrjn, nogoPrjn *GPiThalPrjn, err error)

MatrixPrjns returns the recv prjns from Go and NoGo MatrixLayer pathways -- error if not found or if prjns are not of the GPiThalPrjn type

func (*GPiThalLayer) RecGateAct

func (ly *GPiThalLayer) RecGateAct(ltime *leabra.Time)

RecGateAct records the gating activation from current activation, when gating occurs based on GateState.Now

func (*GPiThalLayer) SendGateShape

func (ly *GPiThalLayer) SendGateShape() error

SendGateShape sends GateShape info to all SendTo layers -- a convenient config-time way to ensure all are consistent -- also checks validity of SendTo's

func (*GPiThalLayer) SendGateStates

func (ly *GPiThalLayer) SendGateStates()

SendGateStates sends GateStates to other layers

func (*GPiThalLayer) SendToCheck

func (ly *GPiThalLayer) SendToCheck() error

SendToCheck is called during Build to ensure that SendTo layers are valid

func (*GPiThalLayer) SendToMatrixPFC

func (ly *GPiThalLayer) SendToMatrixPFC(prefix string)

SendToMatrixPFC adds standard SendTo layers for PBWM: MatrixGo, NoGo, PFCmntD, PFCoutD with optional prefix -- excludes mnt, out cases if corresp shape = 0

func (*GPiThalLayer) UnitValByIdx

func (ly *GPiThalLayer) UnitValByIdx(vidx NeurVars, idx int) float32

UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).

type GPiThalPrjn

type GPiThalPrjn struct {
	leabra.Prjn // access as .Prjn

	// per-recv, per-prjn raw excitatory input
	GeRaw []float32 `desc:"per-recv, per-prjn raw excitatory input"`
}

GPiThalPrjn accumulates per-prjn raw conductance that is needed for separately weighting NoGo vs. Go inputs

func (*GPiThalPrjn) Build

func (pj *GPiThalPrjn) Build() error

func (*GPiThalPrjn) InitGInc

func (pj *GPiThalPrjn) InitGInc()

func (*GPiThalPrjn) RecvGInc

func (pj *GPiThalPrjn) RecvGInc()

RecvGInc increments the receiver's GeInc or GiInc from that of all the projections.

type GPiTimingParams

type GPiTimingParams struct {

	// Quarter(s) when gating takes place, typically Q1 and Q3, which is the quarter prior to the PFC GateQtr when deep layer updating takes place. Note: this is a bitflag and must be accessed using bitflag.Set / Has etc routines, 32 bit versions.
	GateQtr leabra.Quarters `` /* 247-byte string literal not displayed */

	// [def: 18] cycle within Qtr to determine if activation over threshold for gating.  We send GateState updates on this cycle either way.
	Cycle int `` /* 139-byte string literal not displayed */
}

GPiTimingParams has timing parameters for gating in the GPiThal layer

func (*GPiTimingParams) Defaults

func (gt *GPiTimingParams) Defaults()

type GateLayer

type GateLayer struct {
	Layer

	// shape of overall Maint + Out gating system that this layer is part of
	GateShp GateShape `desc:"shape of overall Maint + Out gating system that this layer is part of"`

	// slice of gating state values for this layer, one for each separate gating pool, according to its GateType.  For MaintOut, it is ordered such that 0:MaintN are Maint and MaintN:n are Out
	GateStates []GateState `` /* 192-byte string literal not displayed */
}

GateLayer is a layer that cares about thalamic (BG) gating signals, and has slice of GateState fields that a gating layer will update.

func (*GateLayer) AsGate

func (ly *GateLayer) AsGate() *GateLayer

func (*GateLayer) Build

func (ly *GateLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*GateLayer) GateShape

func (ly *GateLayer) GateShape() *GateShape

func (*GateLayer) GateState

func (ly *GateLayer) GateState(poolIdx int) *GateState

GateState returns the GateState for given pool index (0 based) on this layer

func (*GateLayer) InitActs

func (ly *GateLayer) InitActs()

func (*GateLayer) SetGateState

func (ly *GateLayer) SetGateState(poolIdx int, state *GateState)

SetGateState sets the GateState for given pool index (individual pools start at 1) on this layer

func (*GateLayer) SetGateStates

func (ly *GateLayer) SetGateStates(states []GateState, typ GateTypes)

SetGateStates sets the GateStates from given source states, of given gating type

func (*GateLayer) UnitValByIdx

func (ly *GateLayer) UnitValByIdx(vidx NeurVars, idx int) float32

UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).

type GateLayerer

type GateLayerer interface {
	// AsGate returns the layer as a GateLayer layer, for direct access to fields
	AsGate() *GateLayer

	// GateType returns the type of gating supported by this layer
	GateType() GateTypes

	// GateShape returns the shape of gating system that this layer is part of
	GateShape() *GateShape

	// GateState returns the GateState for given pool index (0-based) on this layer
	GateState(poolIdx int) *GateState

	// SetGateState sets the GateState for given pool index (0-based) on this layer
	SetGateState(poolIdx int, state *GateState)

	// SetGateStates sets the GateStates from given source states, of given gating type
	SetGateStates(states []GateState, typ GateTypes)
}

GateLayerer is an interface for GateLayer layers

type GateShape

type GateShape struct {

	// overall shape dimensions for the full set of gating pools, e.g., as present in the Matrix and GPiThal levels
	Y int `desc:"overall shape dimensions for the full set of gating pools, e.g., as present in the Matrix and GPiThal levels"`

	// how many pools in the X dimension are Maint gating pools -- rest are Out
	MaintX int `desc:"how many pools in the X dimension are Maint gating pools -- rest are Out"`

	// how many pools in the X dimension are Out gating pools -- comes after Maint
	OutX int `desc:"how many pools in the X dimension are Out gating pools -- comes after Maint"`
}

GateShape defines the shape of the outer pool dimensions of gating layers, organized into Maint and Out subsets which are arrayed along the X axis with Maint first (to the left) then Out. Individual layers may only represent Maint or Out subsets of this overall shape, but all need to have this coordinated shape information to be able to share gating state information. Each layer represents gate state information in their native geometry -- FullIndex1D provides access from a subset to full set.

func (*GateShape) FullIndex1D

func (gs *GateShape) FullIndex1D(idx int, fmTyp GateTypes) int

FullIndex1D returns the index into full MaintOut GateStates for given 1D pool idx (0-based) *from given GateType*.

func (*GateShape) Index

func (gs *GateShape) Index(pY, pX int, typ GateTypes) int

Index returns the index into GateStates for given 2D pool coords for given GateType. Each type stores gate info in its "native" 2D format.

func (*GateShape) Set

func (gs *GateShape) Set(nY, maintX, outX int)

Set sets the shape parameters: number of Y dimension pools, and numbers of maint and out pools along X axis

func (*GateShape) TotX

func (gs *GateShape) TotX() int

TotX returns the total number of X-axis pools (Maint + Out)
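
An assumed usage example of Set and TotX:

// exampleGateShape configures a GateShape for 1 Y row with 2 maint and 2 out
// stripes, then queries its total X extent -- illustrative usage only.
func exampleGateShape() int {
	var gs pbwm.GateShape
	gs.Set(1, 2, 2)  // nY=1, maintX=2, outX=2
	return gs.TotX() // 4 total X-axis pools (Maint + Out)
}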

type GateState

type GateState struct {

	// gating activation value, reflecting thalamic gating layer activation at time of gating (when Now = true) -- will be 0 if gating below threshold for this pool, and prior to first Now for AlphaCycle
	Act float32 `` /* 203-byte string literal not displayed */

	// gating timing signal -- true if this is the moment when gating takes place
	Now bool `desc:"gating timing signal -- true if this is the moment when gating takes place"`

	// unique to each layer -- not copied.  Generally is a counter for interval between gating signals -- starts at -1, goes to 0 at first gating, counts up from there for subsequent gating.  Can be reset back to -1 when gate is reset (e.g., output gating) and counts down from -1 while not gating.
	Cnt int `` /* 307-byte string literal not displayed */
}

GateState is gating state values stored in layers that receive thalamic gating signals including MatrixLayer, PFCLayer, GPiThal layer, etc -- use GateLayer as base layer to include.

func (*GateState) CopyFrom

func (gs *GateState) CopyFrom(fm *GateState)

CopyFrom copies from another GateState -- only the Act and Now signals are copied

func (*GateState) Init

func (gs *GateState) Init()

Init initializes the values -- call during InitActs()

type GateTypes

type GateTypes int

GateTypes for region of striatum

const (
	// Maint is maintenance gating -- toggles active maintenance in PFC
	Maint GateTypes = iota

	// Out is output gating -- drives deep layer activation
	Out

	// MaintOut for maint and output gating
	MaintOut

	GateTypesN
)

func (*GateTypes) FromString

func (i *GateTypes) FromString(s string) error

func (GateTypes) MarshalJSON

func (ev GateTypes) MarshalJSON() ([]byte, error)

func (GateTypes) String

func (i GateTypes) String() string

func (*GateTypes) UnmarshalJSON

func (ev *GateTypes) UnmarshalJSON(b []byte) error

type Layer

type Layer struct {
	leabra.Layer

	// current dopamine level for this layer
	DA float32 `inactive:"+" desc:"current dopamine level for this layer"`

	// current acetylcholine level for this layer
	ACh float32 `inactive:"+" desc:"current acetylcholine level for this layer"`

	// current serotonin level for this layer
	SE float32 `inactive:"+" desc:"current serotonin level for this layer"`
}

pbwm.Layer is the base layer type for the PBWM framework -- has variables for the layer-level neuromodulatory variables: dopamine, ACh, serotonin. See ModLayer for a version that includes DA-modulated learning parameters.

func AddGPeLayer added in v1.1.11

func AddGPeLayer(nt *leabra.Network, name string, nY, nMaint, nOut int) *Layer

AddGPeLayer adds a pbwm.Layer to serve as a GPe layer, with given name. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has 1x1 neurons.

func (*Layer) AsGate added in v1.1.10

func (ly *Layer) AsGate() *GateLayer

AsGate returns this layer as a pbwm.GateLayer -- nil for Layer

func (*Layer) AsPBWM added in v1.1.10

func (ly *Layer) AsPBWM() *Layer

AsPBWM returns this layer as a pbwm.Layer

func (*Layer) Defaults added in v1.1.10

func (ly *Layer) Defaults()

func (*Layer) DoQuarter2DWt added in v1.1.10

func (ly *Layer) DoQuarter2DWt() bool

DoQuarter2DWt indicates whether to do optional Q2 DWt

func (*Layer) GateSend added in v1.1.10

func (ly *Layer) GateSend(ltime *leabra.Time)

GateSend updates gating state and sends it along to other layers. most layers don't implement -- only gating layers

func (*Layer) GetACh added in v1.1.11

func (ly *Layer) GetACh() float32

func (*Layer) GetDA added in v1.1.10

func (ly *Layer) GetDA() float32

func (*Layer) GetSE added in v1.1.11

func (ly *Layer) GetSE() float32

func (*Layer) InitActs added in v1.1.10

func (ly *Layer) InitActs()

func (*Layer) Quarter2DWt added in v1.1.10

func (ly *Layer) Quarter2DWt()

Quarter2DWt is optional Q2 DWt -- define where relevant

func (*Layer) QuarterFinal added in v1.1.10

func (ly *Layer) QuarterFinal(ltime *leabra.Time)

QuarterFinal does updating after end of a quarter

func (*Layer) RecGateAct added in v1.1.10

func (ly *Layer) RecGateAct(ltime *leabra.Time)

RecGateAct records the gating activation from current activation, when gating occurs based on GateState.Now -- only for gating layers

func (*Layer) SendMods added in v1.1.10

func (ly *Layer) SendMods(ltime *leabra.Time)

SendMods is called at end of Cycle to send modulator signals (DA, etc) which will then be active for the next cycle of processing

func (*Layer) SetACh added in v1.1.11

func (ly *Layer) SetACh(ach float32)

func (*Layer) SetDA added in v1.1.10

func (ly *Layer) SetDA(da float32)

func (*Layer) SetSE added in v1.1.11

func (ly *Layer) SetSE(se float32)

func (*Layer) UnitVal1D added in v1.1.10

func (ly *Layer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*Layer) UnitValByIdx added in v1.1.10

func (ly *Layer) UnitValByIdx(vidx NeurVars, idx int) float32

UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one). This must be updated for specialized PBWM layer types to return correct variables!

func (*Layer) UnitVarIdx added in v1.1.10

func (ly *Layer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*Layer) UnitVarNames added in v1.1.10

func (ly *Layer) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this layer.

func (*Layer) UnitVarNum added in v1.1.11

func (ly *Layer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

func (*Layer) UpdateParams added in v1.1.10

func (ly *Layer) UpdateParams()

UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving projections of this layer

type MatrixLayer

type MatrixLayer struct {
	GateLayer

	// number of Maint Pools in X outer dimension of 4D shape -- Out gating after that
	MaintN int `desc:"number of Maint Pools in X outer dimension of 4D shape -- Out gating after that"`

	// dominant type of dopamine receptor -- D1R for Go pathway, D2R for NoGo
	DaR DaReceptors `desc:"dominant type of dopamine receptor -- D1R for Go pathway, D2R for NoGo"`

	// [view: inline] matrix parameters
	Matrix MatrixParams `view:"inline" desc:"matrix parameters"`

	// slice of MatrixNeuron state for this layer -- flat list of len = Shape.Len().  You must iterate over index and use pointer to modify values.
	MatrixNeurs []MatrixNeuron `` /* 147-byte string literal not displayed */
}

MatrixLayer represents the dorsal matrisome MSNs that are the main Go / NoGo gating units in the BG driving updating of PFC WM in PBWM. D1R = Go, D2R = NoGo, and the outer 4D Pool X dimension determines GateTypes per MaintN (Maint on the left up to MaintN, Out on the right after).

func AddMatrixLayer added in v1.1.11

func AddMatrixLayer(nt *leabra.Network, name string, nY, nMaint, nOut, nNeurY, nNeurX int, da DaReceptors) *MatrixLayer

AddMatrixLayer adds a MatrixLayer of given size, with given name. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. da gives the DaReceptor type (D1R = Go, D2R = NoGo)

func (*MatrixLayer) ActFmG

func (ly *MatrixLayer) ActFmG(ltime *leabra.Time)

ActFmG computes rate-code activation from Ge, Gi, Gl conductances and updates learning running-average activations from that Act. The Matrix version extends this to call DaAChFmLay.

func (*MatrixLayer) Build

func (ly *MatrixLayer) Build() error

Build constructs the layer state, including calling Build on the projections. You MUST have properly configured the Inhib.Pool.On setting by this point to properly allocate Pools for the unit groups if necessary.

func (*MatrixLayer) DALrnFmDA

func (ly *MatrixLayer) DALrnFmDA(da float32) float32

DALrnFmDA returns effective learning dopamine value from given raw DA value applying Burst and Dip Gain factors, and then reversing sign for D2R.
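
A sketch of that computation (an assumed reconstruction from the description, not the verbatim implementation):

// daLrnSketch illustrates DALrnFmDA per its description: gain factors are
// applied based on the sign of raw DA, then the sign is reversed for D2R.
func daLrnSketch(da, burstGain, dipGain float32, d2r bool) float32 {
	if da > 0 {
		da *= burstGain
	} else {
		da *= dipGain
	}
	if d2r {
		da = -da // D2 (NoGo) sign reversal happens after the gain factors
	}
	return da
}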

func (*MatrixLayer) DaAChFmLay

func (ly *MatrixLayer) DaAChFmLay(ltime *leabra.Time)

DaAChFmLay computes Da and ACh from layer and Shunt received from PatchLayer units

func (*MatrixLayer) Defaults

func (ly *MatrixLayer) Defaults()

func (*MatrixLayer) DoQuarter2DWt

func (ly *MatrixLayer) DoQuarter2DWt() bool

DoQuarter2DWt indicates whether to do optional Q2 DWt

func (*MatrixLayer) GateType

func (ly *MatrixLayer) GateType() GateTypes

func (*MatrixLayer) InhibFmGeAct

func (ly *MatrixLayer) InhibFmGeAct(ltime *leabra.Time)

InhibFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools. The Matrix version applies OutAChInhib to bias output gating on reward trials.

func (*MatrixLayer) InitActs

func (ly *MatrixLayer) InitActs()

func (*MatrixLayer) RecGateAct

func (ly *MatrixLayer) RecGateAct(ltime *leabra.Time)

RecGateAct records the gating activation from current activation, when gating occurs based on GateState.Now

func (*MatrixLayer) UnitValByIdx

func (ly *MatrixLayer) UnitValByIdx(vidx NeurVars, idx int) float32

UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).

type MatrixNeuron

type MatrixNeuron struct {

	// per-neuron modulated dopamine level, derived from layer DA and Shunt
	DA float32 `desc:"per-neuron modulated dopamine level, derived from layer DA and Shunt"`

	// per-neuron effective learning dopamine value -- gain modulated and sign reversed for D2R
	DALrn float32 `desc:"per-neuron effective learning dopamine value -- gain modulated and sign reversed for D2R"`

	// per-neuron modulated ACh level, derived from layer ACh and Shunt
	ACh float32 `desc:"per-neuron modulated ACh level, derived from layer ACh and Shunt"`

	// shunting input received from Patch neurons (in reality flows through SNc DA pathways)
	Shunt float32 `desc:"shunting input received from Patch neurons (in reality flows through SNc DA pathways)"`

	// gating activation -- the activity value when gating occurred in this pool
	ActG float32 `desc:"gating activation -- the activity value when gating occurred in this pool"`
}

MatrixNeuron contains extra variables for MatrixLayer neurons -- stored separately

type MatrixParams

type MatrixParams struct {

	// Quarter(s) when learning takes place, typically Q2 and Q4, corresponding to the PFC GateQtr. Note: this is a bitflag and must be accessed using bitflag.Set / Has etc routines, 32 bit versions.
	LearnQtr leabra.Quarters `` /* 199-byte string literal not displayed */

	// [def: 0.2,0.5] [min: 0] [max: 1] how much the patch shunt activation multiplies the dopamine values -- 0 = complete shunting, 1 = no shunting -- should be a factor < 1.0
	PatchShunt float32 `` /* 173-byte string literal not displayed */

	// [def: true] also shunt the ACh value driven from CIN units -- this prevents clearing of MSNConSpec traces -- more plausibly the patch units directly interfere with the effects of CIN's rather than through ach, but it is easier to implement with ach shunting here.
	ShuntACh bool `` /* 269-byte string literal not displayed */

	// [def: 0,0.3] how much does the LACK of ACh from the CIN units drive extra inhibition to output-gating Matrix units -- gi += out_ach_inhib * (1-ach) -- provides a bias for output gating on reward trials -- do NOT apply to NoGo, only Go -- this is a key param -- between 0.1-0.3 usu good -- see how much output gating happening and change accordingly
	OutAChInhib float32 `` /* 354-byte string literal not displayed */

	// [def: 1] multiplicative gain factor applied to positive (burst) dopamine signals in computing DALrn effect learning dopamine value based on raw DA that we receive (D2R reversal occurs *after* applying Burst based on sign of raw DA)
	BurstGain float32 `` /* 237-byte string literal not displayed */

	// [def: 1] multiplicative gain factor applied to negative (dip) dopamine signals in computing DALrn effect learning dopamine value based on raw DA that we receive (D2R reversal occurs *after* applying Dip based on sign of raw DA)
	DipGain float32 `` /* 237-byte string literal not displayed */
}

MatrixParams has parameters for Dorsal Striatum Matrix computation These are the main Go / NoGo gating units in BG driving updating of PFC WM in PBWM

func (*MatrixParams) Defaults

func (mp *MatrixParams) Defaults()

type MatrixTracePrjn

type MatrixTracePrjn struct {
	leabra.Prjn

	// [view: inline] special parameters for matrix trace learning
	Trace TraceParams `view:"inline" desc:"special parameters for matrix trace learning"`

	// trace synaptic state values, ordered by the sending layer units which owns them -- one-to-one with SConIdx array
	TrSyns []TraceSyn `desc:"trace synaptic state values, ordered by the sending layer units which owns them -- one-to-one with SConIdx array"`
}

MatrixTracePrjn does dopamine-modulated, gated trace learning, for Matrix learning in PBWM context

func (*MatrixTracePrjn) Build

func (pj *MatrixTracePrjn) Build() error

func (*MatrixTracePrjn) ClearTrace added in v1.0.4

func (pj *MatrixTracePrjn) ClearTrace()

func (*MatrixTracePrjn) DWt

func (pj *MatrixTracePrjn) DWt()

DWt computes the weight change (learning) -- on sending projections.

func (*MatrixTracePrjn) Defaults

func (pj *MatrixTracePrjn) Defaults()

func (*MatrixTracePrjn) InitWts

func (pj *MatrixTracePrjn) InitWts()

func (*MatrixTracePrjn) SynVal1D added in v1.1.10

func (pj *MatrixTracePrjn) SynVal1D(varIdx int, synIdx int) float32

SynVal1D returns the value of the given variable index (from SynVarIdx) on the given SynIdx. Returns NaN on an invalid index. This is the core synapse var access method used by the other methods, so it is the only one that needs to be updated for derived prjn types.

func (*MatrixTracePrjn) SynVarIdx added in v1.1.10

func (pj *MatrixTracePrjn) SynVarIdx(varNm string) (int, error)

SynVarIdx returns the index of given variable within the synapse, according to *this prjn's* SynVarNames() list (using a map to lookup index), or -1 and error message if not found.

type ModLayer

type ModLayer struct {
	Layer

	// dopamine modulation effects, typically affecting Ge or gain -- a phase-based difference in modulation will result in learning effects through standard error-driven learning.
	DaMod DaModParams `` /* 180-byte string literal not displayed */
}

ModLayer provides DA modulated learning to basic Leabra layers.

func (*ModLayer) ActFmG added in v1.1.10

func (ly *ModLayer) ActFmG(ltime *leabra.Time)

ActFmG computes rate-code activation from Ge, Gi, Gl conductances and updates learning running-average activations from that Act

func (*ModLayer) GFmInc added in v1.1.10

func (ly *ModLayer) GFmInc(ltime *leabra.Time)

GFmInc integrates new synaptic conductances from increments sent during last SendGDelta.

type Network

type Network struct {
	leabra.Network
}

pbwm.Network has methods for configuring specialized PBWM network components

func (*Network) AddCINLayer added in v1.1.11

func (nt *Network) AddCINLayer(name string) *CINLayer

AddCINLayer adds a CINLayer, with a single neuron.

func (*Network) AddDorsalBG

func (nt *Network) AddDorsalBG(prefix string, nY, nMaint, nOut, nNeurY, nNeurX int) (mtxGo, mtxNoGo, gpe, gpi, cin leabra.LeabraLayer)

AddDorsalBG adds MatrixGo, NoGo, GPe, GPiThal, and CIN layers, with given optional prefix. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. Appropriate PoolOneToOne connections are made to drive GPiThal, with BgFixed class name set so they can be styled appropriately (no learning, WtRnd.Mean=0.8, Var=0)

func (*Network) AddGPeLayer

func (nt *Network) AddGPeLayer(name string, nY, nMaint, nOut int) *Layer

AddGPeLayer adds a pbwm.Layer to serve as a GPe layer, with given name. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has 1x1 neurons.

func (*Network) AddGPiThalLayer

func (nt *Network) AddGPiThalLayer(name string, nY, nMaint, nOut int) *GPiThalLayer

AddGPiThalLayer adds a GPiThalLayer of given size, with given name. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has 1x1 neurons.

func (*Network) AddMatrixLayer

func (nt *Network) AddMatrixLayer(name string, nY, nMaint, nOut, nNeurY, nNeurX int, da DaReceptors) *MatrixLayer

AddMatrixLayer adds a MatrixLayer of given size, with given name. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. da gives the DaReceptor type (D1R = Go, D2R = NoGo)

func (*Network) AddPBWM

func (nt *Network) AddPBWM(prefix string, nY, nMaint, nOut, nNeurBgY, nNeurBgX, nNeurPfcY, nNeurPfcX int) (mtxGo, mtxNoGo, gpe, gpi, cin, pfcMnt, pfcMntD, pfcOut, pfcOutD leabra.LeabraLayer)

AddPBWM adds a DorsalBG and PFC with the given params. Defaults to the simple case of basic maint dynamics in Deep.
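For orientation, a typical configuration might look like the following sketch. The layer sizes are illustrative only, the import path assumes the v1 leabra module, and net.InitName is the usual leabra naming idiom (see the sir2 example for a working model):

package main

import (
	"github.com/emer/leabra/leabra"
	"github.com/emer/leabra/pbwm"
)

// configNet builds a minimal PBWM network: 1 pool row, 1 maintenance and
// 1 output stripe, with 7x7 neurons per BG pool and per PFC pool.
func configNet() *pbwm.Network {
	net := &pbwm.Network{}
	net.InitName(net, "PBWMDemo")

	mtxGo, mtxNoGo, gpe, gpi, cin, pfcMnt, pfcMntD, pfcOut, pfcOutD :=
		net.AddPBWM("", 1, 1, 1, 7, 7, 7, 7)
	_ = []leabra.LeabraLayer{mtxGo, mtxNoGo, gpe, gpi, cin, pfcMnt, pfcMntD, pfcOut, pfcOutD}

	net.Defaults()
	return net
}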

func (*Network) AddPFC

func (nt *Network) AddPFC(prefix string, nY, nMaint, nOut, nNeurY, nNeurX int, dynMaint bool) (pfcMnt, pfcMntD, pfcOut, pfcOutD leabra.LeabraLayer)

AddPFC adds paired PFCmnt, PFCout and associated Deep layers, with given optional prefix. nY = number of pools in Y dimension, nMaint, nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. dynMaint is true for maintenance-only dyn, else the full set of 5 dynamic maintenance types. Appropriate OneToOne connections are made between PFCmntD -> PFCout.

func (*Network) AddPFCLayer

func (nt *Network) AddPFCLayer(name string, nY, nX, nNeurY, nNeurX int, out, dynMaint bool) (sp, dp leabra.LeabraLayer)

AddPFCLayer adds a PFCLayer, super and deep, of given size, with given name. nY, nX = number of pools in Y, X dimensions, and each pool has nNeurY, nNeurX neurons. out is true for an output-gating layer, and dynMaint is true for maintenance-only dyn, else the full set of 5 dynamic maintenance types. Both have the class "PFC" set. deep is positioned behind super.

func (*Network) CycleImpl added in v1.1.10

func (nt *Network) CycleImpl(ltime *leabra.Time)

CycleImpl runs one cycle of activation updating. PBWM calls GateSend after Cycle and before DeepBurst.

func (*Network) Defaults

func (nt *Network) Defaults()

Defaults sets all the default parameters for all layers and projections

func (*Network) GateSend

func (nt *Network) GateSend(ltime *leabra.Time)

GateSend is called at end of Cycle, computes Gating and sends to other layers

func (*Network) NewLayer

func (nt *Network) NewLayer() emer.Layer

NewLayer returns new layer of default pbwm.Layer type

func (*Network) NewPrjn

func (nt *Network) NewPrjn() emer.Prjn

NewPrjn returns new prjn of default type

func (*Network) RecGateAct

func (nt *Network) RecGateAct(ltime *leabra.Time)

RecGateAct is called after GateSend, to record gating activations at time of gating

func (*Network) SendMods

func (nt *Network) SendMods(ltime *leabra.Time)

SendMods is called at end of Cycle to send modulator signals (DA, etc) which will then be active for the next cycle of processing
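Taken together, CycleImpl, GateSend, RecGateAct, and SendMods imply the per-cycle ordering sketched below; this is an outline assembled from the method docs above, not the literal CycleImpl body:

// cycleOrder outlines one PBWM cycle per the method docs above.
func cycleOrder(nt *pbwm.Network, ltime *leabra.Time) {
	// ... standard leabra cycle: conductance integration, inhibition, ActFmG ...
	nt.GateSend(ltime)   // after Cycle: compute gating, send to other layers
	nt.RecGateAct(ltime) // record activations at the time of gating
	nt.SendMods(ltime)   // send modulators (DA, ACh) for the next cycle
	// ... DeepBurst and related deep-layer updating follow gating ...
}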

func (*Network) SynVarNames added in v1.1.10

func (nt *Network) SynVarNames() []string

SynVarNames returns the names of all the variables on the synapses in this network.

func (*Network) UnitVarNames added in v1.1.10

func (nt *Network) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this layer

func (*Network) UpdateParams

func (nt *Network) UpdateParams()

UpdateParams updates all the derived parameters if any have changed, for all layers and projections

type NeurVars added in v1.1.10

type NeurVars int

NeurVars are indexes into extra PBWM neuron-level variables

const (
	DA NeurVars = iota
	DALrn
	ACh
	SE
	GateAct
	GateNow
	GateCnt
	ActG
	NrnMaint
	MaintGe
	NeurVarsN
)

type PBWMLayer

type PBWMLayer interface {
	leabra.LeabraLayer

	// AsPBWM returns this layer as a pbwm.Layer (base Layer in PBWM)
	AsPBWM() *Layer

	// AsGate returns this layer as a pbwm.GateLayer (gated layer type) -- nil if not impl
	AsGate() *GateLayer

	// UnitValByIdx returns value of given PBWM-specific variable by variable index
	// and flat neuron index (from layer or neuron-specific one).
	UnitValByIdx(vidx NeurVars, idx int) float32

	// GateSend updates gating state and sends it along to other layers.
	// Called after std Cycle methods.
	// Only implemented for gating layers.
	GateSend(ltime *leabra.Time)

	// RecGateAct records the gating activation from current activation, when gating occurs
	// based on GateState.Now
	RecGateAct(ltime *leabra.Time)

	// SendMods is called at end of Cycle to send modulator signals (DA, etc)
	// which will then be active for the next cycle of processing
	SendMods(ltime *leabra.Time)

	// Quarter2DWt is optional Q2 DWt -- PFC and matrix layers can do this as appropriate
	Quarter2DWt()

	// DoQuarter2DWt returns true if this recv layer should have its weights updated
	DoQuarter2DWt() bool
}

PBWMLayer defines the essential algorithmic API for PBWM at the layer level. Builds upon the leabra.LeabraLayer API

type PFCDeepLayer added in v1.1.11

type PFCDeepLayer struct {
	GateLayer

	// [view: inline] PFC Gating parameters
	Gate PFCGateParams `view:"inline" desc:"PFC Gating parameters"`

	// [view: inline] PFC Maintenance parameters
	Maint PFCMaintParams `view:"inline" desc:"PFC Maintenance parameters"`

	// PFC dynamic behavior parameters -- provides deterministic control over PFC maintenance dynamics -- the rows of PFC units (along Y axis) behave according to corresponding index of Dyns (inner loop is Super Y axis, outer is Dyn types) -- ensure Y dim has even multiple of len(Dyns)
	Dyns PFCDyns `` /* 286-byte string literal not displayed */

	// slice of PFCNeuron state for this layer -- flat list of len = Shape.Len().  You must iterate over index and use pointer to modify values.
	PFCNeurs []PFCNeuron `` /* 144-byte string literal not displayed */
}

PFCDeepLayer is a Prefrontal Cortex BG-gated deep working memory layer. This handles all of the PFC-specific functionality, looking for a corresponding Super layer with the same name except no final D. If Dyns are used, they are represented in extra Y-axis neurons, with the inner-loop being the basic Super Y axis values for each Dyn type, and outer-loop the Dyn types.
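A small sketch of the Dyn layout described above, under the assumption that the deep layer's Y dimension is superY * len(Dyns) with the Super Y index as the inner loop:

// dynRow maps a deep-layer Y row onto (dyn index, super Y index),
// assuming the inner-loop = Super Y, outer-loop = Dyn types layout.
func dynRow(deepY, superY int) (dynIdx, superYIdx int) {
	return deepY / superY, deepY % superY
}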

func (*PFCDeepLayer) ActFmG added in v1.1.11

func (ly *PFCDeepLayer) ActFmG(ltime *leabra.Time)

ActFmG computes rate-code activation from Ge, Gi, Gl conductances and updates learning running-average activations from that Act. PFC extends to call Gating.

func (*PFCDeepLayer) Build added in v1.1.11

func (ly *PFCDeepLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*PFCDeepLayer) ClearMaint added in v1.1.11

func (ly *PFCDeepLayer) ClearMaint(pool int)

ClearMaint resets maintenance in corresponding pool (0 based) in maintenance layer

func (*PFCDeepLayer) DeepMaint added in v1.1.11

func (ly *PFCDeepLayer) DeepMaint(ltime *leabra.Time)

DeepMaint updates deep maintenance activations

func (*PFCDeepLayer) Defaults added in v1.1.11

func (ly *PFCDeepLayer) Defaults()

func (*PFCDeepLayer) DoQuarter2DWt added in v1.1.11

func (ly *PFCDeepLayer) DoQuarter2DWt() bool

DoQuarter2DWt indicates whether to do optional Q2 DWt

func (*PFCDeepLayer) GFmInc added in v1.1.11

func (ly *PFCDeepLayer) GFmInc(ltime *leabra.Time)

GFmInc integrates new synaptic conductances from increments sent during last SendGDelta.

func (*PFCDeepLayer) GateType added in v1.1.11

func (ly *PFCDeepLayer) GateType() GateTypes

func (*PFCDeepLayer) Gating added in v1.1.11

func (ly *PFCDeepLayer) Gating(ltime *leabra.Time)

Gating updates PFC Gating state

func (*PFCDeepLayer) InitActs added in v1.1.11

func (ly *PFCDeepLayer) InitActs()

func (*PFCDeepLayer) MaintPFC added in v1.1.11

func (ly *PFCDeepLayer) MaintPFC() *PFCDeepLayer

MaintPFC returns the corresponding PFCDeep maintenance layer with the same name but with outD replaced by mntD -- may be nil.

func (*PFCDeepLayer) QuarterFinal added in v1.1.11

func (ly *PFCDeepLayer) QuarterFinal(ltime *leabra.Time)

QuarterFinal does updating after end of a quarter

func (*PFCDeepLayer) RecGateAct added in v1.1.11

func (ly *PFCDeepLayer) RecGateAct(ltime *leabra.Time)

RecGateAct records the gating activation from current activation, when gating occurs based on GateState.Now

func (*PFCDeepLayer) SuperPFC added in v1.1.11

func (ly *PFCDeepLayer) SuperPFC() leabra.LeabraLayer

SuperPFC returns the corresponding PFC super layer with the same name without the final D -- should not be nil. Super can be any layer type.

func (*PFCDeepLayer) UnitValByIdx added in v1.1.11

func (ly *PFCDeepLayer) UnitValByIdx(vidx NeurVars, idx int) float32

UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).
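As a usage note, the NeurVars indexes can be used with UnitValByIdx to read PBWM-specific values from any layer implementing the PBWMLayer interface; a minimal sketch:

// readDALrn returns the effective learning dopamine for a given flat
// neuron index ni on any PBWM layer (a usage sketch).
func readDALrn(ly pbwm.PBWMLayer, ni int) float32 {
	return ly.UnitValByIdx(pbwm.DALrn, ni)
}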

func (*PFCDeepLayer) UpdtGateCnt added in v1.1.11

func (ly *PFCDeepLayer) UpdtGateCnt(ltime *leabra.Time)

UpdtGateCnt updates the gate counter

type PFCDyn

type PFCDyn struct {

	// initial value at point when gating starts -- MUST be > 0 when used.
	Init float32 `desc:"initial value at point when gating starts -- MUST be > 0 when used."`

	// time constant for linear rise in maintenance activation (per quarter when deep is updated) -- use integers -- if both rise and decay then rise comes first
	RiseTau float32 `` /* 161-byte string literal not displayed */

	// time constant for linear decay in maintenance activation (per quarter when deep is updated) -- use integers -- if both rise and decay then rise comes first
	DecayTau float32 `` /* 162-byte string literal not displayed */

	// description of this factor
	Desc string `desc:"description of this factor"`
}

PFCDyn is a PFC dynamic behavior element -- it defines the dynamic behavior of deep-layer PFC units.

func (*PFCDyn) Defaults

func (pd *PFCDyn) Defaults()

func (*PFCDyn) Set

func (pd *PFCDyn) Set(init, rise, decay float32, desc string)

func (*PFCDyn) Value

func (pd *PFCDyn) Value(time float32) float32

Value returns dynamic value at given time point
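Based on the field descriptions (Init as the starting value, a linear rise over RiseTau updates, then a linear decay over DecayTau updates, rise first), the trajectory can be approximated by a piecewise-linear function. This is a sketch of the shape only and is not the actual Value implementation:

// dynValue approximates a maintenance trajectory: start at init, rise
// linearly toward 1 for riseTau updates, then decay linearly toward 0
// for decayTau updates (assumed targets; rise precedes decay).
func dynValue(init, riseTau, decayTau, t float32) float32 {
	v := init
	if riseTau > 0 {
		if t <= riseTau {
			return init + (1-init)*(t/riseTau)
		}
		v = 1
		t -= riseTau
	}
	if decayTau > 0 {
		v -= v * (t / decayTau)
		if v < 0 {
			v = 0
		}
	}
	return v
}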

type PFCDyns

type PFCDyns []*PFCDyn

PFCDyns is a slice of dyns that provides deterministic control over PFC maintenance dynamics -- the rows of PFC units (along the Y axis) behave according to the corresponding index of Dyns. Ensure the layer Y dim is an even multiple of len(Dyns).

func (*PFCDyns) FullDyn

func (pd *PFCDyns) FullDyn(tau float32)

FullDyn creates full dynamic Dyn configuration, with 5 different dynamic profiles: stable maint, phasic, rising maint, decaying maint, and up / down maint. tau is the rise / decay base time constant.
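If you want to configure the same five profiles by hand via SetDyn, something along the following lines is plausible; the specific Init / RiseTau / DecayTau values are illustrative assumptions, not the values FullDyn actually uses:

// fullDynSketch hand-rolls five profiles analogous to FullDyn's
// (stable maint, phasic, rising, decaying, up / down), with made-up values.
func fullDynSketch(tau float32) pbwm.PFCDyns {
	pd := make(pbwm.PFCDyns, 5)
	pd.SetDyn(0, 1, 0, 0, "stable maintenance")
	pd.SetDyn(1, 1, 0, 1, "phasic: immediate decay")
	pd.SetDyn(2, 0.1, tau, 0, "rising maintenance")
	pd.SetDyn(3, 1, 0, tau, "decaying maintenance")
	pd.SetDyn(4, 0.1, tau, tau, "up / down maintenance")
	return pd
}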

func (*PFCDyns) MaintOnly

func (pd *PFCDyns) MaintOnly()

MaintOnly creates the basic default maintenance dynamic configuration -- every unit just maintains over time. This should be used for the Output gating layer.

func (*PFCDyns) SetDyn

func (pd *PFCDyns) SetDyn(dyn int, init, rise, decay float32, desc string) *PFCDyn

SetDyn sets given dynamic maint element to given parameters (must be allocated in list first)

func (*PFCDyns) Value

func (pd *PFCDyns) Value(dyn int, time float32) float32

Value returns value for given dyn item at given time step

type PFCGateParams

type PFCGateParams struct {

	// Quarter(s) that the effect of gating on updating Deep from Super occurs -- this is typically 1 quarter after the GPiThal GateQtr
	GateQtr leabra.Quarters `` /* 135-byte string literal not displayed */

	// if true, this PFC layer is an output gate layer, which means that it only has transient activation during gating
	OutGate bool `desc:"if true, this PFC layer is an output gate layer, which means that it only has transient activation during gating"`

	// [def: true] [viewif: OutGate] for output gating, only compute gating in first quarter -- do not compute in 3rd quarter -- this is typically true, and GateQtr is typically set to only Q1 as well -- does Burst updating immediately after first quarter gating signal -- allows gating signals time to influence performance within a single trial
	OutQ1Only bool `` /* 344-byte string literal not displayed */
}

PFCGateParams has parameters for PFC gating

func (*PFCGateParams) Defaults

func (gp *PFCGateParams) Defaults()

type PFCMaintParams

type PFCMaintParams struct {

	// use fixed dynamics for updating deep_ctxt activations -- defined in dyn_table -- this also preserves the initial gating deep_ctxt value in Maint neuron val (view as Cust1) -- otherwise it is up to the recurrent loops between super and deep for maintenance
	UseDyn bool `` /* 262-byte string literal not displayed */

	// [def: 0.8] [min: 0] multiplier on maint current
	MaintGain float32 `min:"0" def:"0.8" desc:"multiplier on maint current"`

	// [def: false] on output gating, clear corresponding maint pool.  theoretically this should be on, but actually it works better off in most cases..
	OutClearMaint bool `` /* 151-byte string literal not displayed */

	// [def: 0] [min: 0] [max: 1] how much to clear out (decay) super activations when the stripe itself gates and was previously maintaining something, or for maint pfc stripes, when output go fires and clears.
	Clear    float32 `` /* 210-byte string literal not displayed */
	MaxMaint int     `` /* 200-byte string literal not displayed */
}

PFCMaintParams for PFC maintenance functions

func (*PFCMaintParams) Defaults

func (mp *PFCMaintParams) Defaults()

type PFCNeuron

type PFCNeuron struct {

	// gating activation -- the activity value when gating occurred in this pool
	ActG float32 `desc:"gating activation -- the activity value when gating occurred in this pool"`

	// maintenance value for Deep layers = sending act at time of gating
	Maint float32 `desc:"maintenance value for Deep layers = sending act at time of gating"`

	// maintenance excitatory conductance value for Deep layers
	MaintGe float32 `desc:"maintenance excitatory conductance value for Deep layers"`
}

PFCNeuron contains extra variables for PFCLayer neurons -- stored separately

type TraceParams

type TraceParams struct {

	// [def: 0.7] [min: 0] learning rate for all not-gated stripes, which learn in the opposite direction to the gated stripes, and typically with a slightly lower learning rate -- although there are different learning logics associated with each of these different not-gated cases, in practice the same learning rate for all works best, and is simplest
	NotGatedLR float32 `` /* 351-byte string literal not displayed */

	// [def: 0.1] [min: 0] learning rate for gated, NoGo (D2), positive dopamine (weights decrease) -- this is the single most important learning parameter here -- by making this relatively small (but non-zero), an asymmetry in the role of Go vs. NoGo is established, whereby the NoGo pathway focuses largely on punishing and preventing actions associated with negative outcomes, while those associated with positive outcomes only very slowly get relief from this NoGo pressure -- this is critical for causing the model to explore other possible actions even when a given action SOMETIMES produces good results -- NoGo demands a very high, consistent level of good outcomes in order to have a net decrease in these avoidance weights.  Note that the gating signal applies to both Go and NoGo MSN's for gated stripes, ensuring learning is about the action that was actually selected (see not_ cases for logic for actions that were close but not taken)
	GateNoGoPosLR float32 `` /* 947-byte string literal not displayed */

	// [def: 0] [min: 0] decay driven by receiving unit ACh value, sent by CIN units, for resetting the trace
	AChDecay float32 `min:"0" def:"0" desc:"decay driven by receiving unit ACh value, sent by CIN units, for resetting the trace"`

	// [def: 1] [min: 0] multiplier on trace activation for decaying prior traces -- new trace magnitude drives decay of prior trace -- if gating activation is low, then new trace can be low and decay is slow, so increasing this factor causes learning to be more targeted on recent gating changes
	Decay float32 `` /* 294-byte string literal not displayed */

	// [def: true] use the sigmoid derivative factor 2 * act * (1-act) in modulating learning -- otherwise just multiply by msn activation directly -- this is generally beneficial for learning to prevent weights from continuing to increase when activations are already strong (and vice-versa for decreases)
	Deriv bool `` /* 305-byte string literal not displayed */
}

TraceParams has the parameters for trace-based learning in the MatrixTracePrjn.

func (*TraceParams) Defaults

func (tp *TraceParams) Defaults()

func (*TraceParams) LrateMod

func (tp *TraceParams) LrateMod(gated, d2r, posDa bool) float32

LrateMod returns the learning rate modulator based on gating, d2r, and posDa factors

func (*TraceParams) LrnFactor

func (tp *TraceParams) LrnFactor(act float32) float32

LrnFactor returns the multiplicative factor for the level of MSN activation. If Deriv is true, returns 2 * act * (1-act) -- the factor of 2 compensates for the otherwise reduced learning from these factors. Otherwise returns act directly.
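A minimal sketch of the described LrnFactor behavior:

// lrnFactor mirrors the documented behavior: with deriv, use the
// sigmoid-derivative factor 2*act*(1-act); otherwise use act directly.
func lrnFactor(deriv bool, act float32) float32 {
	if deriv {
		return 2 * act * (1 - act)
	}
	return act
}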

type TraceSyn

type TraceSyn struct {

	// new trace -- drives updates to trace value -- su * (1-ru_msn) for gated, or su * ru_msn for not-gated (or for non-thalamic cases)
	NTr float32 `` /* 136-byte string literal not displayed */

	//  current ongoing trace of activations, which drive learning -- adds ntr and clears after learning on current values -- includes both thal gated (+ and other nongated, - inputs)
	Tr float32 `` /* 183-byte string literal not displayed */
}

TraceSyn holds extra synaptic state for trace projections
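Putting NTr and Tr together with the dopamine-modulated trace learning described for MatrixTracePrjn, a per-synapse update can be sketched as follows. This is assembled from the variable descriptions above and is only an approximation -- the actual DWt method also handles gating state, shunting, and learning-rate modulation:

// traceDWt sketches a trace-based weight change: learn on the prior
// trace modulated by dopamine, then form the new trace from sender and
// receiver MSN activity and fold it into the ongoing trace.
func traceDWt(sy *pbwm.TraceSyn, da, su, ruMsn, decay float32, gated bool) float32 {
	dwt := da * sy.Tr // dopamine times accumulated trace drives learning

	if gated {
		sy.NTr = su * (1 - ruMsn) // gated stripes
	} else {
		sy.NTr = su * ruMsn // not-gated stripes
	}
	// new trace magnitude drives decay of the prior trace (Decay param),
	// then the new trace is added in (the trace also clears after learning
	// when dopamine is present, omitted here)
	tr := sy.Tr
	tr -= decay * sy.NTr * tr
	sy.Tr = tr + sy.NTr
	return dwt
}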

func (*TraceSyn) VarByIndex added in v1.1.10

func (sy *TraceSyn) VarByIndex(varIdx int) float32

VarByIndex returns synapse variable by index

func (*TraceSyn) VarByName added in v1.1.10

func (sy *TraceSyn) VarByName(varNm string) float32

VarByName returns synapse variable by name

type Valences

type Valences int

Valences for Appetitive and Aversive valence coding

const (
	// Appetitive is a positive valence US (food, water, etc)
	Appetitive Valences = iota

	// Aversive is a negative valence US (shock, threat etc)
	Aversive

	ValencesN
)

func (*Valences) FromString

func (i *Valences) FromString(s string) error

func (Valences) MarshalJSON

func (ev Valences) MarshalJSON() ([]byte, error)

func (Valences) String

func (i Valences) String() string

func (*Valences) UnmarshalJSON

func (ev *Valences) UnmarshalJSON(b []byte) error
