
Employing a Multiple Associative Memory Model for Temporal Sequence Reproduction

Aluizio F. R. Araújo and Marcelo Vieira

Universidade de São Paulo - Depto. de Engenharia Elétrica

C.P. 359, São Carlos, SP Brazil

aluizioa@lasi01.sel.eesc.sc.usp.br vieira@lasi01.sel.eesc.sc.usp.br

Abstract - This paper introduces an associative memory model which associates n-tuples of patterns, employs continuous and limited pattern representation, performs both auto- and heteroassociative tasks, and has adaptable correlation matrices. This model, called Temporal Multidirectional Associative Memory (TMAM), is an adaptation of the Multidirectional Associative Memory (MAM) that includes autoassociative links, real activation functions, and supervised learning rules. The experimental results suggest that the model presents fast learning, improves the storage capacity of MAM, reproduces trained temporal sequences, interpolates states within a trained sequence, extrapolates states at both extremities of a given sequence, and accommodates sequences with different numbers of steps.

Keywords: Multidirectional Associative Memory, temporal sequences, Widrow-Hoff rule.

1. Introduction

An associative neural memory model [1] is understood as a class of artificial neural networks which stores information as stable attractors. These states are often recorded by Hebbian rules. When a perfect, partial, or noisy version of a trained piece of information is presented to the neural network, it responds to the stimulus with the closest of the stored memories. This response is generally reached through a self-relaxation process.

The associative memory models are classified depending on their retrieval mode, nature of the stored associations, way to memorize information, and architecture.

These models can update the activation states of their processing units simultaneously, sequentially, or randomly. The first way is called synchronous update, whereas the last two alternatives are called asynchronous update. A dynamic network recalls information in several steps through a self-relaxation process, whereas a static model takes a single step to yield a response.

An associative memory model is classified, according to the nature of the memorized associations, as autoassociative or heteroassociative. The former encodes the relation from a pattern to itself, whereas the latter associates different patterns. The associative memories which are able to change their initial knowledge through learning are called adaptive models, while the non-adaptive ones do not present such a feature.

The associative memory models fall into five architecture classes [2]. A cross-associative architecture is a class of networks with a single processing layer to associate pairs of patterns. A cascade associative network consists of cascade connections of cross-associative networks; these feedforward links exist exclusively between neighboring layers. A cyclic associative topology is a cascade associative network in which the output of the last layer is fed back into the input of the first layer. An autoassociative network is a cyclic associative network with a single layer. Finally, an associative sequence generator is a cyclic associative network in which the connections are established from activations in different steps of a given sequence.

The first associative neural memory models were autoassociative and static [3], [4]. The linear associators [5], [6], were the next proposed models. After that step, a recursive and autoassociative network was proposed [7]. This approach was followed by dynamic heteroassociative models with two layers [8], [9]. More recently, these models were modified to a multiple associative discrete model [10] with more than two layers.

Initially, this paper discusses topologies sharing the following features: a set of layers densely interconnected, knowledge stored through correlation matrices, discrete representation, and self-relaxation dynamics. The next step is to introduce a modified version of the Multidirectional Associative Memory characterized by the addition of autoassociative connections, the use of real and limited representation, and simple and fast training strategies in order to improve the storage capacity of the model. The new model allows storing and recalling temporal sequences without employing hidden layers and backpropagation to train the network, as most works dealing with temporal sequences do [11], [12].

This paper surveys briefly the main associative memory models in Section 2. The features of MAM are discussed in Section 3. TMAM, the adaptation of MAM, is introduced in Section 4. Section 5 reports on tests to evaluate the capacity of TMAM to reproduce temporal sequences. The main results are summarized in Section 6.

2. The Associative Memory Modeled by Neural Networks

This section surveys briefly the main associative memory models in the literature.

2.1. The Historical View

Anderson et al. [13] point out that one of the first published outer-product associators was introduced by Nakano [4]: the Associatron. In this model, large state vectors are associated with themselves. These vectors represent entities which are formed by a number of patterns. Hence, the Associatron associates different patterns within an entity, and associates distinct entities through patterns which are common to them.


Figure 1: The topology of Associatron with eight units.

The architecture of this model comprises processing units in which every single unit is connected to all the others (Figure 1). Each entity is formed by a number of two-dimensional patterns that are mapped into a row vector. Thus, a particular entity is represented by the vector:

$\mathbf{x}^k = (x_1^k, x_2^k, \ldots, x_n^k)$    (1)

where $x_i^k \in \{+1, 0, -1\}$ are the components of $\mathbf{x}^k$.

The connections between the units are established by the correlation matrix $M$, constructed as follows:

$m_{ij} = \sum_{k=1}^{p} x_i^k x_j^k$    (2)

where $\mathbf{x}^1, \ldots, \mathbf{x}^p$ are the entities to be stored.

The activation rule of each processing unit is given by the following function:

$y_i = \phi\left( \sum_{j=1}^{n} m_{ij}\, x_j \right)$    (3)

where $\phi(u) = +1$ if $u > 0$, $\phi(u) = 0$ if $u = 0$, and $\phi(u) = -1$ if $u < 0$.

Associatron recalls a whole entity given some of its patterns, or different entities given patterns which are common to these entities. Associatron retrieves sequences of entities if there is at least one common pattern between a particular entity and the next one in the sequence. In sum, this model memorizes the entities distributively and recalls them associatively.
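A minimal numerical sketch of this storage and recall scheme is given below; the ternary entity vectors, their dimension, and the use of NumPy are illustrative assumptions rather than details taken from [4].

```python
import numpy as np

def quantize(u):
    """Ternary quantizer of equation (3): +1, 0, or -1 componentwise."""
    return np.sign(u).astype(int)

# Two hypothetical entities, each a ternary vector (+1 excited, -1 inhibited, 0 unknown).
entities = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
])

# Correlation matrix of equation (2): sum of outer products of the stored entities.
M = sum(np.outer(e, e) for e in entities)

# Recall: present a partial entity (unknown components set to 0) and let the
# quantized activation rule of equation (3) fill in the missing components.
partial = np.array([1, -1, 1, -1, 0, 0, 0, 0])
recalled = quantize(M @ partial)
print(recalled)   # ideally reproduces the first stored entity
```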

Amari [3] proposed a self-organizing associative memory which can recall single patterns and sequences of patterns. In this approach, the stimuli presented to the network determine stable states or stable cycles which can be retrieved from any pattern which generated it. This model has an autoassociative architecture.

The activation function of each unit is:

$x_i(t+1) = \mathrm{sgn}\left( \sum_{j=1}^{n} w_{ij}\, x_j(t) - \theta_i \right)$    (4)

where $\mathrm{sgn}(u) = +1$ if $u \geq 0$ and $\mathrm{sgn}(u) = -1$ otherwise, $w_{ij}$ is the connection weight from unit $j$ to unit $i$, and $\theta_i$ is the threshold of unit $i$.

The connection weights are updated according to the following strategy: the weight is increased by one unit if the i-th input element and the j-th output element coincide in activation; otherwise, it is decreased by one unit. This network reaches stability if its initial state is within a determined distance from the attractor. Amari discussed the conditions under which different patterns or pattern sequences can be correctly retrieved at the same time.

Hopfield [7] widened the horizons of the associative memory models by introducing a Liapunov function to establish the network stability. This model can store and recall trained patterns; moreover, it can generalize, categorize, and perform error correction.

This is an autoassociative neural network with a single layer in which each unit is connected to all other units except itself (Figure 2).


Figure 2: The topology of Hopfield model.

In the original model, there are two possible activation states for the processing units, 0 and 1, updated according to:

$V_i \leftarrow \begin{cases} 1, & \text{if } \sum_{j \ne i} T_{ij} V_j > U_i \\ 0, & \text{otherwise} \end{cases}$    (5)

where $V_i$ is the state of unit $i$, $T_{ij}$ is the connection weight between units $i$ and $j$, and $U_i$ is the threshold of unit $i$.

An element of the correlation matrix is obtained as follows:

$T_{ij} = \sum_{k=1}^{p} (2V_i^k - 1)(2V_j^k - 1), \qquad T_{ii} = 0$    (6)

where $V^1, \ldots, V^p$ are the patterns to be stored.

The Liapunov function was determined by Hopfield, for $T_{ij} = T_{ji}$, as follows:

$E = -\frac{1}{2} \sum_{i} \sum_{j \ne i} T_{ij} V_i V_j$    (7)

The storage capacity of the Hopfield model is defined as the maximum number of stored patterns that can be perfectly retrieved. Such a capacity was proved [14] to be:

$C = \frac{n}{4 \ln n}$    (8)

where $C$ is the storage capacity of the model for randomly generated patterns and $n$ is the number of units.
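As an illustration of the dynamics and of the capacity bound above, the sketch below stores a few random bipolar patterns (bipolar encoding replaces the original 0/1 states for convenience), relaxes asynchronously from a noisy probe, and evaluates the energy of equation (7); sizes, the random seed, and the number of sweeps are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 4                       # 4 random patterns, below n / (4 ln n) ≈ 5.4
patterns = rng.choice([-1, 1], size=(p, n))

# Hebbian correlation matrix (bipolar counterpart of equation (6)), zero diagonal.
T = patterns.T @ patterns
np.fill_diagonal(T, 0)

def energy(v):
    """Liapunov function of equation (7)."""
    return -0.5 * v @ T @ v

# Asynchronous relaxation from a noisy version of the first stored pattern.
v = patterns[0].copy()
v[rng.choice(n, 10, replace=False)] *= -1       # flip 10 of the 100 bits
for _ in range(5):                               # a few sweeps over all units
    for i in rng.permutation(n):
        v[i] = 1 if T[i] @ v > 0 else -1
print("overlap with stored pattern:", (v @ patterns[0]) / n)
print("final energy:", energy(v))
```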

Later, Hopfield extended his model to the continuous case [15] and many extensions and generalizations of the model were proposed by other authors [11]. However, this topic is out of the scope of this paper.

The same linear associator model for associative memory was independently and simultaneously proposed by Kohonen [6] and Anderson [5]. The former is mostly concerned with the mathematical modeling whilst the latter focuses on the physiological plausibility. This model is characterized by a set of input elements, the receptors, which send connections to a cluster of output components, the associators. A linear model of neuron is adopted as processing unit; hence, the continuous-valued output is proportional to the summation of every single input multiplied by its associated ‘synaptic weight’. This memory model linearly associates pairs of patterns through the activation rule:

$\mathbf{y} = M \mathbf{x}$    (9)

where $\mathbf{x}$ is the input (receptor) vector, $\mathbf{y}$ is the output (associator) vector, and $M$ is the matrix of connection weights.

The ‘synaptic influence’ is proportional to the product of the pre- and postsynaptic activity. This Hebbian learning rule is stated as:

$M = \sum_{k=1}^{p} \mathbf{y}^k (\mathbf{x}^k)^T$    (10)

Kohonen proved that a sufficient condition for perfect recall is to have orthonormal input vectors, independently of the encoding of the output vectors $\mathbf{y}^k$. This condition eliminates the ‘cross-talk’ between each pair of input patterns; however, it severely limits the use of this model.
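The sketch below illustrates equations (9) and (10) and the orthonormality condition: with orthonormal input vectors the cross-talk terms vanish and recall is exact. The dimensions and random output vectors are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 8, 5, 4

# Orthonormal input vectors (columns of X); the output vectors may be arbitrary.
X, _ = np.linalg.qr(rng.normal(size=(n, p)))     # X is n x p with orthonormal columns
Y = rng.normal(size=(m, p))

# Hebbian correlation matrix of equation (10): sum over k of y^k (x^k)^T.
M = Y @ X.T

# Recall through the linear activation rule of equation (9).
k = 2
print(np.allclose(M @ X[:, k], Y[:, k]))         # True: no cross-talk for orthonormal inputs
```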

Kohonen and Ruohonen [16] introduced the generalized inverse matrix to replace the correlation matrix. They proposed a technique to ensure perfect retrieval of a stored pair of patterns $(\mathbf{x}^k, \mathbf{y}^k)$ if $\mathbf{x}^k$ is linearly independent from all vectors $\mathbf{x}^l$ for $l \ne k$. The activation rule is given by the equation:

$\mathbf{y} = Y X^T \left[ (X X^T)^{+} \mathbf{x} \right]$    (11)

In equation (11), $X$ and $Y$ are the matrices whose columns are the stored input and output vectors, and the term inside the brackets is the n-dimensional preprocessed input vector. That is, the operator $(XX^T)^{+}$ transforms the input vector $\mathbf{x}^k$ into a vector which is orthogonal to the input vectors $\mathbf{x}^l$, $l \ne k$, and has an inner product with $\mathbf{x}^k$ equal to one. Moreover, a noisy version of $\mathbf{x}^k$ generates a preprocessed vector approximately in the same direction as the one generated by $\mathbf{x}^k$ and more orthogonal to the other input vectors.

The necessity of linear independence between input vectors is a step forward when compared with the former restriction (orthonormal input vectors). Even so, such a constraint limits the storage capacity of the model to the dimension of the input vectors.
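A sketch of the generalized-inverse scheme of equation (11) follows: the inputs are linearly independent but not orthonormal, and the operator (XX^T)^+ produces the preprocessed vector described above. Matrix names and sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 8, 5, 4

X = rng.normal(size=(n, p))     # linearly independent (but not orthonormal) input vectors
Y = rng.normal(size=(m, p))     # associated output vectors

# Preprocessing operator of equation (11): (X X^T)^+ maps x^k to a vector that is
# orthogonal to the other inputs and has unit inner product with x^k itself.
P = np.linalg.pinv(X @ X.T)
M = Y @ X.T @ P                 # equivalently, M = Y @ np.linalg.pinv(X)

k = 1
print(np.allclose(X.T @ (P @ X[:, k]), np.eye(p)[k]))   # inner products with all inputs
print(np.allclose(M @ X[:, k], Y[:, k]))                 # perfect recall of the stored pair
```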

The Hopfield model was extended with the introduction of two heteroassociative dynamic associative memory models [8], [9], simultaneously proposed. The Bidirectional Associative Memory (BAM) is the most studied among them.

BAM associates pairs of patterns. The correlation matrix in BAM is constructed through a Hebbian rule, the network always converges to stable attractors, and it is relatively easy to be implemented in hardware.

BAM is composed of two layers which are totally and bidirectionally connected. The input layer has $n$ processing units whereas the output layer has $m$ processing units (Figure 3).


Figure 3: The topology of BAM.

The correlation matrix $M$ is constructed as follows:

$M = \sum_{k=1}^{p} \mathbf{y}^k (\mathbf{x}^k)^T$    (12)

where $(\mathbf{x}^k, \mathbf{y}^k)$, $k = 1, \ldots, p$, are the pairs of patterns to be stored.

The state of activation of the output unit $y_j$ follows the rule below:

$y_j(t+1) = \begin{cases} +1, & \text{if } \sum_{i} m_{ji}\, x_i(t) > \theta_j \\ y_j(t), & \text{if } \sum_{i} m_{ji}\, x_i(t) = \theta_j \\ -1, & \text{if } \sum_{i} m_{ji}\, x_i(t) < \theta_j \end{cases}$    (13)

where $m_{ji}$ is the element of $M$ connecting input unit $i$ to output unit $j$.

The state of activation of the input unit $x_i$ follows the rule below:

$x_i(t+1) = \begin{cases} +1, & \text{if } \sum_{j} m_{ji}\, y_j(t) > \lambda_i \\ x_i(t), & \text{if } \sum_{j} m_{ji}\, y_j(t) = \lambda_i \\ -1, & \text{if } \sum_{j} m_{ji}\, y_j(t) < \lambda_i \end{cases}$    (14)

where $\theta_j$ and $\lambda_i$ are individual thresholds of the units.

In the testing stage, BAM should recall the pair $(\mathbf{x}^k, \mathbf{y}^k)$ after a relaxation process. Let an input pattern $\mathbf{x}'$, a perfect or noisy version of $\mathbf{x}^k$, be presented to the network. The input pattern $\mathbf{x}'$ becomes the activation state of the input layer. Such a state is propagated towards the output layer through the connections between the layers, and the state of activation of the output layer is updated. This state is propagated backwards, through $M^T$ (the transpose of $M$), producing a new input-layer state. Eventually this dynamics converges to the pair $(\mathbf{x}^k, \mathbf{y}^k)$. This algorithm always converges; however, very often it converges to an undesired pair: a spurious attractor. New local minima, different from the desired ones, are formed following the presentation of new pairs of patterns to the network. BAM's storage capacity is reduced as a consequence of the formation of spurious attractors.
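A minimal sketch of this bidirectional relaxation is shown below, assuming bipolar patterns, zero thresholds, and the keep-previous-state convention of equations (13) and (14); the layer sizes, the number of stored pairs, and the amount of noise are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, p = 20, 12, 3

X = rng.choice([-1, 1], size=(p, n))        # input-layer patterns
Y = rng.choice([-1, 1], size=(p, m))        # output-layer patterns

# Correlation matrix of equation (12): sum over pairs of outer products y^k (x^k)^T.
M = sum(np.outer(y, x) for x, y in zip(X, Y))

def threshold(s, previous):
    """Bipolar threshold; keeps the previous state where the weighted sum is zero."""
    return np.where(s > 0, 1, np.where(s < 0, -1, previous))

# Relaxation: present a noisy version of X[0] and bounce activations between layers.
x = X[0].copy()
x[rng.choice(n, 3, replace=False)] *= -1
y = np.zeros(m, dtype=int)
for _ in range(10):
    y = threshold(M @ x, y)        # forward pass through M
    x = threshold(M.T @ y, x)      # backward pass through the transpose of M
print("recovered stored pair:", np.array_equal(x, X[0]), np.array_equal(y, Y[0]))
```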

2.2. Summarizing the characteristics of the models

The autoassociative models are formed by a single layer in which all processing units are totally connected by feedback links. These models store discrete patterns and operate synchronously or sequentially. Their convergence process may end at a desired point, a spurious attractor, or a limit cycle. The main range of application of these models involves the retrieval of noisy, distorted, or incomplete versions of stored patterns. This is reached through pattern completion.

The heteroassociative models are taken as an extension of the former models. They are formed by more than one layer and each layer is fully connected with all the others. In their original versions, the models construct mappings between pairs of layers, store discrete patterns, and operate synchronously. Moreover, they are tolerant to noise and incomplete stimuli.

In sum, the associative memory models deal mostly with discrete representations of information, may require preprocessed input vectors, have a very limited storage capacity, and associate patterns with themselves or with a single paired pattern.

3. The Multidirectional Associative Memory

This last limitation encouraged the introduction of a generalization of BAM: the Multidirectional Associative Memory (MAM) [10]. This model correlates the mutual existence of n-tuples of patterns. The associations are embodied in correlation matrices defined between each pair of layers, where $L$ denotes the number of layers of the network. The correlation matrix $M_{uv}$ between layers $u$ and $v$ is formed by the summation of the outer products of each pair of patterns presented to those layers. The matrix $M_{uv}$ is involved in the information flow from layer $u$ to layer $v$, whereas its transpose $M_{uv}^T$ is concerned with the inverse flow.

The activation states of two different layers $u$ and $v$ due to a pattern $k$ are represented by the vectors $\mathbf{x}_u^k$ and $\mathbf{x}_v^k$. For the pattern $k$, the weighted sum of layer $v$ at time step $t$ is denoted by $\mathbf{s}_v^k(t)$.

MAM is described by the following equations:

The correlation matrix between layers $u$ and $v$ is:

$M_{uv} = \sum_{k=1}^{p} \mathbf{x}_v^k (\mathbf{x}_u^k)^T$    (15)

The propagation rule for layer $v$ due to the signals from the other layers is:

$\mathbf{s}_v(t) = \sum_{u \ne v} M_{uv}\, \mathbf{x}_u(t)$    (16)

The activation rule for layer $v$ is:

$x_{vi}(t+1) = \begin{cases} +1, & \text{if } s_{vi}(t) > 0 \\ x_{vi}(t), & \text{if } s_{vi}(t) = 0 \\ -1, & \text{if } s_{vi}(t) < 0 \end{cases}$    (17)

where $x_{vi}$ and $s_{vi}$ denote the i-th components of $\mathbf{x}_v$ and $\mathbf{s}_v$, respectively.

Let a particular network be a version of MAM with five layers ($L = 5$) in which bipolar 5-tuples will be stored (Figure 4).

Consider an input pattern represented by a 5-tuple which is a degraded version of a desired pattern. Such a pattern is presented to the network to generate its next state. Each state is fed back into the network and originates another one. This dynamic process ends when the state of the network stops changing. Ideally, the final state is equal to the desired one; however, very often this does not occur.
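The sketch below implements this relaxation for a five-layer MAM using equations (15)-(17); the number of units per layer, the stored 5-tuples, and the corruption of one layer are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
L, n, p = 5, 10, 2                 # five layers of n units each, p stored 5-tuples

# patterns[k, u] is the bipolar part of tuple k assigned to layer u.
patterns = rng.choice([-1, 1], size=(p, L, n))

# Correlation matrices of equation (15), one for each ordered pair of distinct layers.
M = {(u, v): sum(np.outer(patterns[k, v], patterns[k, u]) for k in range(p))
     for u in range(L) for v in range(L) if u != v}

def mam_step(state):
    """One synchronous update: propagation rule (16) followed by activation rule (17)."""
    new_state = []
    for v in range(L):
        s = sum(M[(u, v)] @ state[u] for u in range(L) if u != v)
        new_state.append(np.where(s > 0, 1, np.where(s < 0, -1, state[v])))
    return new_state

# Start from a degraded version of the first stored 5-tuple and relax.
state = [patterns[0, u].copy() for u in range(L)]
state[2][:4] *= -1                                   # corrupt part of layer 2
for _ in range(10):
    state = mam_step(state)
print(all(np.array_equal(state[u], patterns[0, u]) for u in range(L)))
```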

In sum, the characteristics of this model are:

  • The equilibrium state corresponds to a local minimum point;

  • The training stage is simply the construction of the correlation matrices;

  • The correct recall of a particular pattern is not guaranteed;

  • The storage capacity of the model is very low;

  • The discrete representation of the model limits its range of application;

A modified version of MAM is described below to deal with the last three undesired features.


Figure 4: The topology of a five-layer MAM.

4. The Temporal Multidirectional Associative Memory

This proposed model is treated as a dynamic system in a high-dimensional and continuous state space. Such a model preserves the original architecture of MAM in terms of connections between layers; however, it adds autoassociative connections in each layer. Hence, the network associates a particular state of activation of each processing unit with both the activation states of the units in the other layers and the activations of the units in its own layer, all of them delayed by one time step. The correlation matrices $M_{uv}$ between layers $u$ and $v$ are built employing equation (15), whereas the correlation matrix $M_{vv}$ for the connections between the units of layer $v$ is constructed as follows:

$M_{vv} = \sum_{k=1}^{p} \mathbf{x}_v^k (\mathbf{x}_v^k)^T$    (18)

As a consequence, the new propagation rule is:

$\mathbf{s}_v(t) = \sum_{u=1}^{L} M_{uv}\, \mathbf{x}_u(t)$    (19)

The second modification of MAM concerns the continuous representation, thus the activation value of each processing unit ranges from $-1$ to $+1$. The activation rule becomes:

$x_{vi}(t+1) = f\bigl(s_{vi}(t)\bigr)$    (20)

where $f$ is a real-valued activation function limited to the interval $[-1, +1]$.

This model aims at storing the associations between continuous patterns and recalling these patterns when required. The first experiments with this model showed poor performance in retrieving the stored patterns, that is, the differences between the desired and the obtained outputs were significant. Also, spurious attractors were often detected. Thus, a training phase was introduced to diminish this error.
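A sketch of the modified propagation and activation is given below. The intra-layer matrices follow equation (18), the propagation rule follows equation (19), and the bounded activation is taken to be tanh, which is only an assumed instance of the real activation function $f$ of equation (20). Sizes and patterns are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
L, n, p = 5, 4, 3

# Continuous patterns in [-1, +1]; patterns[k, u] is the activation of layer u for tuple k.
patterns = rng.uniform(-1, 1, size=(p, L, n))

# Inter-layer matrices (equation (15)) plus intra-layer matrices (equation (18)).
M = {(u, v): sum(np.outer(patterns[k, v], patterns[k, u]) for k in range(p))
     for u in range(L) for v in range(L)}

def tmam_step(state):
    """Propagation rule (19), now including the autoassociative link u = v, followed
    by a bounded activation; tanh is an assumed choice for f in equation (20)."""
    return [np.tanh(sum(M[(u, v)] @ state[u] for u in range(L))) for v in range(L)]

state = [patterns[0, u].copy() for u in range(L)]
for _ in range(5):
    state = tmam_step(state)
# Without the supervised stage of Section 4.1 this recall error tends to be large,
# which is what motivates the second learning phase.
print(max(np.abs(state[v] - patterns[0, v]).max() for v in range(L)))
```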

4.1 Two Learning Stages

The Temporal MAM has two learning stages. In the first one, derived from the original MAM, all correlation matrices are built. In the second learning phase, a supervised training occurs.

The endeavors to improve the storage capacity of MAM follow those used for BAM. These efforts may be divided into three schemes: (i) variations of BAM's architecture and correlation rules [17], [18], [19], [20]; (ii) introduction of dynamic thresholds [21], [22]; and (iii) addition of a supervised learning stage [23]. The last two strategies provided the best results, thus they are the 'natural candidates' to do the same for MAM.

The training stage occurs following the construction of the correlation matrices. A previous attempt to train MAM was proposed in [24] and [25]. Such a proposal was based on PRLAB [22], an algorithm that successfully increases the storage capacity of BAM through an adaptation of the relaxation method for solving a system of linear equations. Nevertheless, the Widrow-Hoff rule [26] and the unlearning technique [27] were chosen to train the correlation matrices, since such options work better than PRLAB for BAM [23]. Initial results for MAM suggest that the Widrow-Hoff rule [28] is adequate. Thus, a combination of the Widrow-Hoff rule and unlearning will be employed; this option produced the best results for BAM [23], so it might do the same here. Hence, the second training stage consists of adapting the correlation matrices following the learning rules described below.

The Widrow-Hoff rule consists of learning the correct unit activation value as a function of the difference between the obtained and the desired activation values. Thus, the modification of the correlation matrix between layers $u$ and $v$ is:

$\Delta M_{uv} = \alpha \left( \mathbf{x}_v^{des} - \mathbf{x}_v^{obt} \right) (\mathbf{x}_u)^T$    (21)

where $\mathbf{x}_v^{des}$ and $\mathbf{x}_v^{obt}$ are the desired and the obtained activation states of layer $v$, $\mathbf{x}_u$ is the activation state of layer $u$, and $\alpha$ is the learning rate.

The second strategy unlearns the spurious attractors by subtracting their influence from the correlation matrices. Hence, the correlation matrix change between layers $u$ and $v$ is:

$\Delta M_{uv} = -\beta\, \mathbf{x}_v^{sp} (\mathbf{x}_u^{sp})^T$    (22)

where $\mathbf{x}_u^{sp}$ and $\mathbf{x}_v^{sp}$ are the activation states of layers $u$ and $v$ in the spurious attractor and $\beta$ is the unlearning rate.
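The two corrections can be written as simple outer-product updates, as sketched below; the learning rates alpha and beta, and the example dimensions, are assumed hyperparameters not specified in the text.

```python
import numpy as np

def widrow_hoff_update(M_uv, x_u, x_v_desired, x_v_obtained, alpha=0.1):
    """Equation (21): correct M_uv in proportion to the output error of layer v."""
    return M_uv + alpha * np.outer(x_v_desired - x_v_obtained, x_u)

def unlearning_update(M_uv, x_u_spurious, x_v_spurious, beta=0.01):
    """Equation (22): subtract the influence of a spurious attractor from M_uv."""
    return M_uv - beta * np.outer(x_v_spurious, x_u_spurious)

# Example usage with arbitrary 4-unit layers.
rng = np.random.default_rng(6)
M_uv = rng.normal(size=(4, 4))
x_u, x_v_des, x_v_obt = rng.uniform(-1, 1, size=(3, 4))
M_uv = widrow_hoff_update(M_uv, x_u, x_v_des, x_v_obt)
M_uv = unlearning_update(M_uv, x_u, x_v_obt)    # treating the obtained state as spurious
```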

In order to store temporal sequences the patterns are presented to the network as follows:

  • Let $S$ be the number of states of a given temporal sequence;

  • Let $\mathbf{s}_t$ be a state of a given sequence;

  • Let $L$ be the number of layers in the network, in which all layers have the same number of units $n$;

  • The number of patterns $P$ to be stored in the network is given by:

$P = S - L + 1$    (23)

  • Generate $P$ patterns $\mathbf{p}^k$ varying $k$ as follows:

$\mathbf{p}^k = (\mathbf{s}_k, \mathbf{s}_{k+1}, \ldots, \mathbf{s}_{k+L-1})$    (24)

where $k = 1, 2, \ldots, P$.

The correlation matrices are constructed using equations (15) and (18). The training phase, in a 5-layer modified MAM, works as follows (a code sketch combining the pattern generation above with this training procedure is given after the list):

  • Let $\mathbf{p}^k$ be an initial pattern presented to the network;

  • The network produces a spurious attractor $\tilde{\mathbf{p}}^k$ as output;

  • Each correlation matrix is modified by the Widrow-Hoff rule, followed by a change through the unlearning rule.

The training stage stops when the error reaches a value smaller than the maximum allowed error.
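The sketch below combines the sliding-window pattern generation of equations (23)-(24) with the two-stage learning described above. It makes several simplifying assumptions: tanh stands in for the activation function, the 'spurious' state used in the unlearning step is simply the one-step output of the network, and the learning rates, epoch limit, and error threshold are arbitrary.

```python
import numpy as np

def make_patterns(sequence, L=5):
    """Equations (23)-(24): slide a window of L consecutive states over the sequence."""
    S = len(sequence)
    P = S - L + 1
    return [np.array(sequence[k:k + L]) for k in range(P)]    # each pattern is L x n

def train_tmam(patterns, epochs=3000, alpha=0.05, beta=0.005, max_error=0.05):
    L, n = patterns[0].shape
    # First stage: correlation matrices of equations (15) and (18).
    M = {(u, v): sum(np.outer(pat[v], pat[u]) for pat in patterns)
         for u in range(L) for v in range(L)}
    # Second stage: Widrow-Hoff plus unlearning corrections, equations (21) and (22).
    for _ in range(epochs):
        worst_error = 0.0
        for pat in patterns:
            out = [np.tanh(sum(M[(u, v)] @ pat[u] for u in range(L))) for v in range(L)]
            worst_error = max(worst_error,
                              max(np.abs(out[v] - pat[v]).max() for v in range(L)))
            for v in range(L):
                for u in range(L):
                    M[(u, v)] += alpha * np.outer(pat[v] - out[v], pat[u])   # Widrow-Hoff
                    M[(u, v)] -= beta * np.outer(out[v], out[u])             # unlearning
        if worst_error < max_error:
            break
    return M

# Hypothetical sequence of S = 8 states, each a 4-component tuple (x, y, z, t).
rng = np.random.default_rng(7)
sequence = rng.uniform(-1, 1, size=(8, 4))
M = train_tmam(make_patterns(sequence))
```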

5. Experimental Results

This model reproduces temporal sequences. In this case, part of a trained sequence is presented to the network, which yields all remaining states through pattern completion.

In these tests the task is to store and recall tuples representing three-dimensional positions and the instant of time associated with each spatial position. A cluster of these tuples forms the spatial movement of a point in 3-D space. Each spatial coordinate may vary from -1 to +1 and is represented by the activation state of one processing unit. The model has five layers of four processing units each.

The three testing sequences are formed by six, eight, and eight states, respectively. The level of complexity of a particular sequence increases with the variations of the trajectory directions; thus, such a level increases from the first to the last sequence. The training patterns, derived from the sequences, are generated following equations (23) and (24). The correlation matrices are constructed according to equations (15) and (18), and adapted according to the learning rules.

There are two main sets of experiments. The first set tests the capacity of the network to reproduce the trained sequences. The second group of tests appraises the capacity of the model to interpolate and extrapolate points in the temporal sequences.

In the current experiments, the training stage lasted 3000, 849, and 484 epochs for the first, second, and third trajectories respectively. The final error in each case was 0.06, 0.033, and 0.039. In previous experiments [28], trained simply by the Widrow-Hoff rule, the model took 2421, 736, and 212 epochs to be trained, reaching final errors equal to 0.0606, 0.0528, and 0.1446. The error was diminished at the expense of a longer training stage.

From this point onwards, in all 3-D figures, the dotted lines and the circles represent the obtained trajectory and states respectively. The solid lines and the stars represent the desired trajectory and points.

5.1. Retrieval of Trained Sequences

For the following tests, the network input is a sequence composed of 5 time steps and 5 spatial positions. Initially, it is supposed only that the initial position and the time variations are known. Thus, in the first input, the 5 desired time steps are given and the value of the initial point of the trajectory is assigned to all the ‘slots’ of spatial positions. The network output provides a first approximation of the desired temporal sequence. In the second input the first network output is fed back; however, the first state is replaced by a repetition of the last output spatial position, associated with the last time step plus a predetermined time interval. The last tuple generated in the second output is considered as another point of the sequence. This strategy is repeated for the subsequent states until the trajectory is retrieved.
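The sketch below follows one reading of this feedback strategy, reusing the matrices trained in the previous sketch: every spatial slot is seeded with the initial position, the five desired time steps are supplied, and at each iteration the output is fed back with its first slot replaced by the last output position paired with the last time step plus a fixed interval dt. The state layout (three coordinates plus one time value per layer), the single-step tanh update, and dt are assumptions.

```python
import numpy as np

def recall_step(M, state, L=5):
    """One pass through the trained TMAM: propagation rule (19) plus bounded activation."""
    return [np.tanh(sum(M[(u, v)] @ state[u] for u in range(L))) for v in range(L)]

def reproduce_sequence(M, start_position, times, dt, n_extra, L=5):
    """Recall strategy of Section 5.1 under the assumptions stated above."""
    # Each layer state is [x, y, z, t]: a 3-D position plus its instant of time.
    state = [np.append(start_position, t) for t in times]     # first input to the network
    recovered = []
    for _ in range(n_extra):
        out = recall_step(M, state, L)
        recovered.append([s.copy() for s in out])
        # Feed the output back, replacing the first slot by the last output position
        # associated with the last time step plus the predetermined interval dt.
        new_first = np.append(out[-1][:3], out[-1][3] + dt)
        state = [new_first] + out[1:]
    return recovered

# Example call using M and sequence from the training sketch above (hypothetical values):
# trajectory = reproduce_sequence(M, start_position=sequence[0, :3],
#                                 times=sequence[:5, 3], dt=0.2, n_extra=3)
```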

The results sketched in Figure 5a illustrate that the trained and obtained trajectories are about the same; however, there are perceivable discrepancies between them in terms of the instant of time of each retrieved state.

The second trajectory is more complex than the first one (Figure 5b). In this case, the obtained trajectory follows the trained one quite well for the first steps. However, the differences between the paths increase when the trained trajectory suddenly changes its direction. Even so, the network output tries to follow the desired path.

This behavior is also verified for the third trajectory (Figure 5c), which is more irregular than the second one.


Figure 5: The recall of the trained points in the temporal sequences.

The results above suggest that points where the trajectory changes direction abruptly are recalled more poorly than those with smooth direction variations. Even so, the obtained trajectories are quite close to the trained ones.

5.2. Interpolation and Extrapolation of States

The second set of experiments aims at evaluating the capacity of the network to interpolate and extrapolate points in the trained sequences. The state interpolation is achieved by presenting four trained points to the network. The point to be interpolated, in any position within the sequence, is initially set to the desired time step and to the value of either its preceding or its succeeding state. Point extrapolation is achieved as follows: given an initial state, represented by a sequence of trained and/or interpolated states, and a desired instant of time which lies outside the trained time interval, the model is required to recall the sequence with the new point.
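As an illustration of how an interpolation query could be assembled for the recall step sketched earlier, the snippet below builds a five-slot input from four trained tuples plus one slot whose time is the desired untrained instant and whose position is copied from the preceding state; all numerical values are hypothetical.

```python
import numpy as np

# Four hypothetical trained tuples (x, y, z, t).
trained = [np.array(s) for s in [(0.1, 0.2, 0.0, 0.0), (0.2, 0.3, 0.1, 0.2),
                                 (0.3, 0.5, 0.1, 0.4), (0.5, 0.6, 0.2, 0.8)]]
query_time = 0.6                                   # desired instant, not in the trained set
seed = np.append(trained[2][:3], query_time)       # position copied from the preceding state
state = trained[:3] + [seed] + trained[3:]         # 5-slot input, e.g. for recall_step(M, state)
```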


Figure 6: The obtained trajectories with interpolated and extrapolated points.

The results sketched in Figure 6 confirm the first set of experiments: the retrieval capacity of the model diminishes as the complexity of the task increases. The results also suggest that the model can insert points within a trained sequence. The non-trained spatial positions are coherent with the trained sequences.

Finally, the results suggest that the network extrapolates the trained states in order to infer new states for the trajectories. Note that the model may extrapolate points which lie beyond either extremity of the trained sequence. Moreover, the model can interpolate and extrapolate points simultaneously.

5.2.1. A Bidimensional View

The third trajectory, the most complex one, is plotted in 2-D graphics. Thus, Figure 7 shows each spatial coordinate of a point as a function of time.


Figure 7: A 2-D view of the third trajectory with interpolated and extrapolated points.

In all 2-D figures, the solid lines and the stars represent the desired trajectory and points. The dotted lines and the circles represent the obtained trajectory and states respectively. Finally, the dotted lines and the small crosses represent the obtained trajectory and states with interpolated and extrapolated points.

The model's capacity to follow the trained trajectories is emphasized in Figure 7. Note that the discrepancies between the obtained trajectories and the trained ones are quite small considering the range of possible spatial positions.

6. Conclusions

This paper proposes a modified version of MAM: the Temporal MAM. This is an adaptable model which does not require long training processes, hidden layers, or backpropagation training. The model is tested on the reproduction of temporal sequences.

TMAM is characterized by inter- and intralayer connections, capacity to perform auto- and heteroassociative tasks, capacity to realize multiple associations, use of continuous and limited representation, adaptability of the correlation matrices, and dismissal of input preprocessing.

The results suggest that the model can reproduce a trained temporal sequence, interpolate points within the sequence, extrapolate points beyond the sequence, and accommodate sequences of different sizes. Furthermore, the initially poor recollection of more complex trajectories seems to be ameliorated by the interpolation and extrapolation of points in the trained trajectories. The model presented limitations in tracking abrupt changes of trajectory.

Acknowledgments

I would like to thank FAPESP for providing financial support for this research.

  • [1] M. H. Hassoun (Ed.), Associative Neural Memories: Theory and Implementations, Oxford University Press, 1993.
  • [2] S. Amari and H. F. Yanai, "Statistical neurodynamics of various types of associative memories", in M. H. Hassoun (Ed.), Associative Neural Memories: Theory and Implementations, Oxford University Press, 1993.
  • [3] S. Amari, "Learning patterns and pattern sequences by self-organizing nets of threshold elements", IEEE Transactions on Computers, vol. C-21, no.11, pp. 1197-1206, 1972.
  • [4] K. Nakano, "Associatron - a model on associative memory", IEEE Transactions on Systems, Man and Cybernetics, vol. 2, no. 3, pp. 380 - 388, 1972.
  • [5] J. A. Anderson, "A simple neural network generating an interactive memory", Mathematical Biosciences, vol. 14, pp. 197-220, 1972.
  • [6] T. Kohonen, "Correlation matrix memories", IEEE Transactions on Computers, vol. C-21, pp. 353-359, 1972.
  • [7] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities", Proceedings of the National Academy of Sciences, vol. 79, pp. 2554-2558, 1982.
  • [8] K. Okajima, S. Tanaka, and S. Fujiwara, "A heteroassociative memory network with feedback connection", IEEE First International Conference on Neural Networks, vol. 2, pp. 711-718, 1987.
  • [9] B. Kosko, "Bidirectional associative memories", IEEE Transactions on Systems Man and Cybernetics, vol. 18, pp. 49-60, 1988.
  • [10] M. Hagiwara, "Multidirectional associative memory", International Joint Conference on Neural Networks, vol. 1, pp. 3-6, 1990.
  • [11] J. Hertz, A. Krogh, and R. G. Palmer, Introduction to the Theory of Neural Computing, Addison-Wesley, 1991.
  • [12] M. H. Hassoun, Fundamentals of Artificial Neural Networks, MIT Press, 1995.
  • [13] J. A. Anderson, A. Pellionisz, and E. Rosenfeld, "Associatron - a model of associative memory: Introduction", in Neurocomputing 2: Directions for Research, pp. 87-89, MIT Press, 1990.
  • [14] R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh, "The Capacity of Hopfield Associative Memory", IEEE Transactions on Information Theory, IT-33, pp. 461-482, 1987.
  • [15] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons", Proceedings of the National Academy of Sciences, vol. 81, pp. 3088-3092, 1984.
  • [16] T. Kohonen and M. Ruohonen, "Representation of associated data by matrix operators", IEEE Transactions on Computers, vol. C-22, pp. 701-702, 1973.
  • [17] P. K. Simpson, "High-Ordered and Intraconnected Bidirectional Associative Memories", IEEE Transactions on Systems, Man and Cybernetics, vol. 21, pp. 637-653, 1990.
  • [18] C. H. Wu, H. M. Tai, C. J. Wang, and T. L. Jong, "High-Order Bidirectional Associative Memory and its Application to Frequency Classification", in Proceedings of the International Joint Conference on Neural Networks, vol. 1, pp. 31-34, 1990.
  • [19] Y. F. Wang, J. B. Cruz, and J. H. Mulligan Jr., "Guaranteed Recall of All Training Pairs for Bidirectional Associative Memory", IEEE Transactions on Neural Networks, vol. 2, no. 4, pp. 559-567, 1991.
  • [20] X. Zhuang, Y. Huang, and S. S. Chen, "Better Learning for Bidirectional Associative Memory", Neural Networks, vol. 6, pp. 1131-1146, 1993.
  • [21] K. Haines and R. Hecht-Nielsen, "A Bidirectional Associative Memory with Increased Information Storage Capacity", IEEE International Conference on Neural Networks, vol. 1, pp. 181-190, 1988.
  • [22] H. Oh and S. C. Kothari, "Adaptation of the relaxation method for learning in Bidirectional Associative Memory", IEEE Transactions on Neural Networks, vol. 5, no. 4, pp. 576-583, 1994.
  • [23] A. F. R. Araújo and G. M. Haga, "Two Simple Strategies to Improve Bidirectional Associative Memory's Performance: Unlearning and Delta Rule", III Brazilian Symposium on Neural Networks, pp. 39-46, 1996.
  • [24] M. Hattori, M. Hagiwara, and M. Nakagawa, "Improved Multidirectional Associative Memories for training sets including common terms", IEICE Japan, vol. J77-D-II, no. 3, pp. 591-599, 1994 (in Japanese).
  • [25] M. Hattori, and M. Hagiwara, "Quick learning for multidirectional associative memories", IEEE International Conference on Neural Networks, pp.1949-1954, 1995.
  • [26] B. Widrow and M. E. Hoff, "Adaptive switching circuits", IRE WESCON Convention Record, pp. 96-104, 1960.
  • [27] J. J. Hopfield, D. I. Feinstein, and R. G. Palmer, "Unlearning has a Stabilizing Effect in Collective Memories", Nature, vol. 304, 1983.
  • [28] A. F. R. Araújo and M. Vieira, "Temporal Multidirectional Associative Memory Generating Spatial Trajectories", International Conference on Computational Intelligence and Multimedia Applications (ICCIMA’97), pp. 301-305, 1997.
