Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures
Devdhar Patel, Terrence Sejnowski, Hava Siegelmann

Abstract
The current reinforcement learning framework focuses exclusively on performance, often at the expense of efficiency. In contrast, biological control achieves remarkable performance while also optimizing computational energy expenditure and decision frequency. We propose the Decision Bounded Markov Decision Process (DB-MDP), which constrains the number of decisions and the computational energy available to agents in reinforcement learning environments. Our experiments demonstrate that existing reinforcement learning algorithms struggle within this framework, leading to either failure or suboptimal performance. To address this, we introduce a biologically inspired Temporally-Layered Architecture (TLA), enabling agents to manage computational costs through two layers with distinct time scales and energy requirements. TLA achieves optimal performance in decision-bounded environments; in continuous control environments, it matches state-of-the-art performance at a fraction of the compute cost. Compared to current reinforcement learning algorithms that prioritize performance alone, our approach significantly lowers computational energy expenditure while maintaining performance. These findings establish a benchmark and pave the way for future research on energy- and time-aware control.
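The core DB-MDP idea from the abstract, a fixed budget on the number of decisions per episode, can be illustrated with a minimal episode loop. This is a hypothetical sketch, not the paper's implementation: the names `env_step`, `policy`, and the convention that the agent repeats its previous action once the budget is spent are illustrative assumptions.

```python
def run_episode(env_step, policy, horizon, decision_budget):
    """Roll out `horizon` timesteps with at most `decision_budget` decisions.

    Hypothetical interfaces (not from the paper):
      env_step(action) -> (obs, reward)
      policy(obs)      -> (action, decide), where `decide` is True when
                          the agent chooses to spend one decision.
    Once the budget is exhausted, the previous action is repeated for free.
    """
    obs, total_reward, decisions = None, 0.0, 0
    action = 0  # default action before the first decision
    for _ in range(horizon):
        new_action, decide = policy(obs)
        if decide and decisions < decision_budget:
            action = new_action  # spend one decision
            decisions += 1
        # otherwise: repeat the previous action at no decision cost
        obs, reward = env_step(action)
        total_reward += reward
    return total_reward, decisions
```

Under this framing, a standard agent that re-decides at every timestep exhausts the budget early, which is one way to read the abstract's claim that existing algorithms struggle in decision-bounded settings.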
Benchmarks
| Benchmark | Methodology | Action Repetition | Average Decisions | Mean Reward |
|---|---|---|---|---|
| openai-gym-on-ant-v2 | TLA | 0.1268 | 860.21 | 5163.54 |
| openai-gym-on-halfcheetah-v2 | TLA | 0.1805 | 831.42 | 9571.99 |
| openai-gym-on-hopper-v2 | TLA | 0.5722 | 423.91 | 3458.22 |
| openai-gym-on-inverteddoublependulum-v2 | TLA | 0.7522 | 247.76 | 9356.67 |
| openai-gym-on-invertedpendulum-v2 | TLA | 0.8882 | 111.79 | 1000 |
| openai-gym-on-mountaincarcontinuous-v0 | TLA | 0.914 | 10.6 | 93.88 |
| openai-gym-on-pendulum-v1 | TLA | 0.7032 | 62.31 | -154.92 |
| openai-gym-on-walker2d-v2 | TLA | 0.4745 | 513.12 | 3878.41 |
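The "Action Repetition" and "Average Decisions" columns can be recovered from an episode's action trace, assuming repetition is counted over consecutive timesteps and a distinct decision is any step whose action differs from the previous one. A minimal sketch under those assumptions (scalar actions, illustrative function name):

```python
def action_repetition_stats(actions, atol=1e-8):
    """Return (repetition_rate, decision_count) for a per-timestep
    action trace, counting a timestep as a 'repeat' when its action
    matches the previous timestep's action within `atol`."""
    if not actions:
        return 0.0, 0
    repeats = sum(
        1 for prev, cur in zip(actions, actions[1:])
        if abs(cur - prev) <= atol
    )
    repetition = repeats / len(actions)
    decisions = len(actions) - repeats  # first step always counts
    return repetition, decisions
```

As a sanity check against the table, InvertedPendulum-v2 episodes run 1000 timesteps, and 1000 × (1 − 0.8882) ≈ 111.8 matches the reported 111.79 average decisions.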