AI systems struggle with edge cases and local context despite ever-larger models. From our research at UC Berkeley into how intelligence evolved from simple organisms, we've come to see the missing link as continuous learning (deep learning is pre-trained by design). Models built with our framework learn through customizable parameters akin to animal instincts, yielding AI grounded with built-in memory and reasoning. We're a community of 160+ developers and researchers building general intelligence from the bottom up, from places like Berkeley, NYU, Imperial College, and Google.
We're building way outside the current paradigm, and we're looking for collaborators at all levels -- hackers, contributors, the curious -- as we'll be making our first hires soon. Email ali at aolabs.ai with "HN Hiring" in the subject line, or chat with us in our Discord: https://discord.gg/Zg9bHPYss5
This post is nearly identical to mine from last month; if you reached out then, please know that I'll respond to you soon (I've been busy wrapping up a fundraise).