Hybrid AI & Machine Learning Systems Using Neural Networks and Subsumption Architecture By Logan Kearsley


Page 1:

Hybrid AI & Machine Learning Systems

Using Neural Networks and Subsumption Architecture

By Logan Kearsley

Page 2:

Purpose

1. To design a hybrid system combining the capabilities of neural networks and subsumption architectures, and demonstrate that it affords increased performance.

2. To produce C libraries to allow others to make use of these algorithms as 'black boxes' in other AI/ML projects.

Page 3:

Similar Projects

The Reactive Accompanist: generalized the subsumption architecture beyond robotic control.

“Evolution of the layers in a subsumption architecture robot controller”: combined subsumption architecture with genetic algorithms.

My project: primarily focuses on neural networks; the major test problem is character recognition.

Page 4:

Design & Programming

Modular / black-box design: the end user should be able to put together a working AI system with minimal knowledge of how the internals work (a sketch of such an interface appears below).

Extensibility: data structures and the test program are designed to be scalable and to make use of the modularity of the AI libraries.

Programming done in C.
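As an illustration of the black-box design goal, a minimal C interface might hide the network behind an opaque handle. Everything below is a hypothetical sketch; the type and function names (nn_net, nn_create, nn_forward, nn_train, nn_destroy) are assumptions for illustration, not the actual library's API.

```c
/* hypothetical_nn.h -- a sketch of a black-box neural-net interface.
   All names are illustrative assumptions, not the actual library. */
#ifndef HYPOTHETICAL_NN_H
#define HYPOTHETICAL_NN_H

#include <stddef.h>

typedef struct nn_net nn_net;  /* opaque handle: internals hidden */

/* Create a feed-forward net with the given layer sizes. */
nn_net *nn_create(const size_t *layer_sizes, size_t n_layers);

/* Run one input vector through the net, writing the output vector. */
void nn_forward(nn_net *net, const double *in, double *out);

/* One back-propagation pass toward a target; returns the error. */
double nn_train(nn_net *net, const double *in, const double *target);

void nn_destroy(nn_net *net);

#endif
```

An opaque struct plus a small create/run/train/destroy surface is the usual way a C library keeps its internals invisible to the end user.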

Page 5:

Testing

Forced learning: make sure the system will learn arbitrary noiseless input-output mappings after a certain number of exposures (very successful, if the datasets aren't too large).

Scalability: try different input-output sets with different dimensions and different numbers of associations to check learning times; optimal network dimensions found through trial and error.

Extensibility: feed a previously-trained system new data to see how quickly and accurately it can be assimilated.

Page 6:

Algorithms: Neural Nets

Back-propagation learning: weights are adjusted based on the distance between the net's current output and the optimal output; errors are calculated based on changes made to lower layers
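A minimal sketch of this update for a single sigmoid output layer (the layer sizes, names, and learning-rate parameter are illustrative, not taken from the actual library):

```c
#define N_IN  4   /* illustrative layer sizes */
#define N_OUT 2

/* One gradient step: delta = (target - out) * out * (1 - out) is the
   error term for a sigmoid unit, and each weight moves in proportion
   to delta times its input. */
static void backprop_step(double w[N_IN][N_OUT],
                          const double in[N_IN],
                          const double out[N_OUT],
                          const double target[N_OUT],
                          double rate)
{
    for (int o = 0; o < N_OUT; o++) {
        double delta = (target[o] - out[o]) * out[o] * (1.0 - out[o]);
        for (int i = 0; i < N_IN; i++)
            w[i][o] += rate * delta * in[i];
    }
}
```

For a hidden layer the update has the same shape, but delta is computed from the deltas of the layer above rather than from a target vector.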

Hebbian learning: weights are adjusted to strengthen connections between co-firing neurons
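The Hebbian rule itself is a one-line update; a sketch reusing the illustrative layer layout from the previous example:

```c
/* Hebbian reinforcement: each weight grows in proportion to how
   strongly its input and output fire together. */
static void hebbian_step(double w[N_IN][N_OUT],
                         const double in[N_IN],
                         const double out[N_OUT],
                         double rate)
{
    for (int i = 0; i < N_IN; i++)
        for (int o = 0; o < N_OUT; o++)
            w[i][o] += rate * in[i] * out[o];
}
```

In practice a pure Hebbian update needs some normalization or decay term, or the weights grow without bound.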

Training: one back-propagation run is done for every association to be learned, and the cycle repeats until accumulated errors fall below a threshold; Hebbian reinforcement should prevent corruption of old associations when adding new data (not highly successful so far)
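In outline, that training cycle might look like the following (nn_train and nn_net are carried over from the hypothetical interface sketched earlier; the dataset layout is likewise an assumption):

```c
#include <stddef.h>
#include "hypothetical_nn.h"  /* the assumed black-box API above */

/* Repeat one backprop run per association until the error
   accumulated over a full pass drops below the threshold. */
static void train_until_converged(nn_net *net,
                                  const double *const *inputs,
                                  const double *const *targets,
                                  size_t n_assocs, double threshold)
{
    double total_err;
    do {
        total_err = 0.0;
        for (size_t a = 0; a < n_assocs; a++)
            total_err += nn_train(net, inputs[a], targets[a]);
    } while (total_err > threshold);
}
```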

Matrix simulation: weights are stored in an I (# of inputs) by O (# of outputs) matrix for each layer, rather than simulating each neuron individually; outputs for each layer are calculated separately and used as inputs for the next layer
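A sketch of the matrix-based forward pass for one layer (sigmoid activation and row-major weight storage are assumptions; the library's actual conventions may differ):

```c
#include <math.h>
#include <stddef.h>

/* Forward pass for one layer: weights live in a single n_in x n_out
   row-major matrix instead of per-neuron records;
   out[o] = sigmoid(sum over i of w[i][o] * in[i]). */
static void layer_forward(const double *w, size_t n_in, size_t n_out,
                          const double *in, double *out)
{
    for (size_t o = 0; o < n_out; o++) {
        double sum = 0.0;
        for (size_t i = 0; i < n_in; i++)
            sum += w[i * n_out + o] * in[i];
        out[o] = 1.0 / (1.0 + exp(-sum));  /* sigmoid activation */
    }
}
```

Each layer's output vector then serves as the input vector of the next layer, as described above.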

Page 7:

Algorithms: Neural Nets

Network structure: individual neurons (allow more complex network topologies) vs. a weight matrix (allows for simpler, faster learning algorithms and more efficient use of memory, given a known simple network topology).

Only feed-forward networks are used.
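The trade-off shows up directly in the data structures; a hedged sketch of the two representations (both structs are illustrative, not the library's actual types):

```c
#include <stddef.h>

/* Per-neuron representation: any topology is expressible, but every
   connection carries pointer and bookkeeping overhead. */
struct neuron {
    double output;
    size_t n_inputs;
    struct neuron **inputs;  /* arbitrary fan-in from any neuron */
    double *weights;         /* one weight per incoming connection */
};

/* Matrix representation: topology fixed to layered feed-forward,
   but each layer's weights occupy one contiguous block. */
struct layer {
    size_t n_in, n_out;
    double *weights;  /* row-major n_in x n_out matrix */
};
```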

Page 8:

Algorithms: Subsumption Architecture

A scheduler calls a list of task-specific functions; here, queries to character-specific neural networks.

Task functions return an output or null.

On each iteration, the highest-priority non-null task has its output returned.
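A minimal sketch of such a scheduler (the task signature, returning a recognized string or NULL, is an assumption made for illustration):

```c
#include <stddef.h>

/* A task inspects the input and either claims it (returns an output)
   or passes (returns NULL). */
typedef const char *(*task_fn)(const double *input);

/* One scheduler iteration: tasks are ordered highest-priority first,
   so the first non-null output subsumes everything below it. */
static const char *subsumption_step(task_fn *tasks, size_t n_tasks,
                                    const double *input)
{
    for (size_t i = 0; i < n_tasks; i++) {
        const char *out = tasks[i](input);
        if (out != NULL)
            return out;
    }
    return NULL;  /* no task fired this iteration */
}
```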

Page 9:

Problems

Have to compromise on network dimensions.

Training new networks from scratch seems to take an unusually long time.

It is *very* difficult to write a generic subsumption architecture library.

Page 10:

Results & Conclusions

Moderately large datasets require an extremely long time to train a single network.

Splitting datasets up among many different networks allows for rapid training and sufficient variance in outputs to be useful in a subsumption architecture.

Conclusion: for complex or multi-purpose AIs, it is highly beneficial to split up sub-tasks among many specialized sub-AIs (in this case, many differently-trained neural networks).

However, it is not very practical to write a completely generic subsumption wrapper; the I/O requirements are too variable.