Catastrophic forgetting remains a prevalent issue for any neural-network-based approach to learning. Without a better understanding of, and solutions to, catastrophic forgetting, the learning performance and efficacy of autonomous agents, particularly deep learning-based agents, will be drastically hindered. The current literature contains approaches that reduce the effects of catastrophic forgetting via task decomposition, rehearsal, and layered learning. Even with these state-of-the-art techniques, however, critical agent knowledge is still susceptible to being driven out of the agent. This research explores how prioritized experience replay buffers affect an agent's ability to retain critical knowledge and combat catastrophic forgetting, using the logic problem Leading Ones * Trailing Zeros. Additionally, this work introduces two new prioritization schemes that are compared against baseline, non-buffered approaches. In the experiments conducted, one type of prioritized buffer achieved optimality, beating the baseline approaches. Furthermore, the results reveal a tradeoff between the inclusion of experience replay buffers and the amount of rehearsal that is most beneficial to an agent's performance.
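The two ideas named above can be sketched concretely. The following is a minimal illustration, not the thesis's actual implementation: it assumes the product reading of the Leading Ones * Trailing Zeros objective (fitness of a bitstring is the count of leading ones multiplied by the count of trailing zeros), and it uses simple priority-proportional sampling as a generic stand-in for a prioritized experience replay buffer; the two prioritization schemes introduced in this work are not reproduced here.

```python
import random


def lotz_fitness(bits):
    """Leading Ones * Trailing Zeros: (# of leading 1s) * (# of trailing 0s).

    Assumes the product form of the objective; e.g. [1, 1, 0, 1, 0, 0]
    has 2 leading ones and 2 trailing zeros, so fitness 2 * 2 = 4.
    """
    leading = 0
    for b in bits:
        if b != 1:
            break
        leading += 1
    trailing = 0
    for b in reversed(bits):
        if b != 0:
            break
        trailing += 1
    return leading * trailing


def sample_prioritized(buffer, priorities, k):
    """Draw k experiences with probability proportional to their priority.

    Generic prioritized-replay sampling sketch; real schemes typically
    also anneal priorities and correct for sampling bias.
    """
    return random.choices(buffer, weights=priorities, k=k)
```

A usage sketch: past solutions could be stored in `buffer` with `priorities` derived from their fitness, and `sample_prioritized` would then bias rehearsal toward high-priority experiences so they are less likely to be driven out of the agent.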
Sean Mondesire, Committee Chair.