Agent SPE Phase 2: Taking AI Integration to New Heights

We are delighted to announce that the Agent SPE has secured Phase 2 funding! Thanks to overwhelming support from the Livepeer Community Treasury, we are embarking on the next stage of our journey to revolutionize AI agent technology!
What’s New in Phase 2
Our talented team is growing with two key additions:
Dane – Expert 3D Animator bringing visual elements to life
Jodzilla – Community Manager dedicated to fostering user engagement
Together we are building autonomous, live-streaming AI agents that look, talk and think like humans. Each agent is a MetaHuman avatar rendered in Unreal Engine and driven by our NeuroSync Core pipeline, which fuses:
→ LLM reasoning
→ low-latency TTS
→ real-time facial blend-shapes
The result is lip-synced, emotive 60 fps animation that can react on-chain. The whole system plugs into the ElizaOS v2 agent OS for memory, multi-modal perception and on-chain actions, and is production-ready for any brand that wants a revenue-sharing virtual personality. It is modular, cloud-native and fully open-source.
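To make that data flow concrete, here is a minimal sketch of the LLM → TTS → blend-shape loop in Python. Every name in it (generate_reply, synthesize_speech, audio_to_blendshapes, stream_to_avatar) is a hypothetical placeholder standing in for the corresponding stage, not the actual NeuroSync Core or ElizaOS API; the point is only to show how a text reply becomes audio, then per-frame blend-shape weights paced out at 60 fps for the renderer.

```python
# Illustrative sketch only: placeholder functions stand in for the
# LLM, TTS, and facial-animation stages described in the post.
import asyncio
import time
from dataclasses import dataclass

FRAME_RATE = 60  # target animation rate (fps)


@dataclass
class BlendshapeFrame:
    timestamp: float           # seconds since the start of the utterance
    weights: dict[str, float]  # e.g. {"jawOpen": 0.4, "mouthSmile": 0.2}


def generate_reply(prompt: str) -> str:
    """Placeholder for the LLM reasoning step."""
    return f"Echoing: {prompt}"


def synthesize_speech(text: str) -> bytes:
    """Placeholder for low-latency TTS; returns raw PCM audio."""
    return b"\x00" * 16_000  # one second of silent 16 kHz mono audio


def audio_to_blendshapes(audio: bytes) -> list[BlendshapeFrame]:
    """Placeholder for audio-driven lip sync: one frame per 1/60 s of audio."""
    return [
        BlendshapeFrame(timestamp=i / FRAME_RATE, weights={"jawOpen": 0.5})
        for i in range(FRAME_RATE)
    ]


async def stream_to_avatar(frames: list[BlendshapeFrame]) -> None:
    """Pace frames out at 60 fps, as the avatar renderer would consume them."""
    start = time.perf_counter()
    for frame in frames:
        delay = frame.timestamp - (time.perf_counter() - start)
        if delay > 0:
            await asyncio.sleep(delay)
        print(f"t={frame.timestamp:.3f}s weights={frame.weights}")


async def main() -> None:
    reply = generate_reply("Hello from chat")  # LLM reasoning
    audio = synthesize_speech(reply)           # low-latency TTS
    frames = audio_to_blendshapes(audio)       # facial blend-shapes
    await stream_to_avatar(frames)             # 60 fps animation stream


if __name__ == "__main__":
    asyncio.run(main())
```

In the real system the placeholder stages are served by the NeuroSync Core pipeline and the renderer is Unreal Engine, with ElizaOS v2 supplying memory, perception and on-chain actions around this loop.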
Looking Ahead
Phase 2 represents our commitment to pushing technological boundaries while delivering practical solutions that matter. We have set ambitious goals to establish new industry standards for AI agent development.
Stay tuned for regular updates as we share our progress and innovations!
We are deeply grateful to the Livepeer Community for their continued belief in our vision. Thank you, Livepeer!