I built Ivy to solve this. It's an educational co-pilot designed to run entirely on-device (edge inference).
How it works:
Offline Inference: Optimized local LLMs running on $150 entry-level Android hardware (a rough illustrative sketch follows below).
Native Support: Integrated logic for local languages (Amharic) to bridge the cultural gap.
Sovereignty: The system is designed to function when the grid or the gateway is down.
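To make the offline-inference claim concrete, here is a minimal sketch of the kind of quantized, CPU-only local inference this implies. It assumes a llama.cpp-style runtime via the llama-cpp-python bindings; the model file, quantization level, prompt format, and thread count are illustrative assumptions, not Ivy's actual stack.

```python
# Hypothetical sketch: small quantized model served fully offline with llama-cpp-python.
# Model path, context size, and thread count are assumptions, not Ivy's real configuration.
from llama_cpp import Llama

llm = Llama(
    model_path="models/tutor-1.5b-q4_k_m.gguf",  # assumed: a ~1-2B model quantized to 4-bit
    n_ctx=2048,     # small context window so it fits in ~2 GB of RAM
    n_threads=4,    # entry-level phones typically expose ~4 usable cores
    verbose=False,
)

def tutor_reply(question: str) -> str:
    """Generate a short tutoring answer with no network access."""
    out = llm.create_completion(
        prompt=f"You are a patient tutor. Answer briefly.\n\nStudent: {question}\nTutor:",
        max_tokens=160,
        temperature=0.3,
        stop=["Student:"],
    )
    return out["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(tutor_reply("What is photosynthesis?"))
```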
I’ve documented the architecture and a 4-minute demo of the system running in a real-world environment here: https://builder.aws.com/content/39w2EpJsgvWLg1yI3DNXfdX24tt/aideas-ivy-the-worlds-first-offline-capable-proactive-ai-tutoring-agent
I’m currently in the quarter-finals of a global AWS challenge to get this the resources it needs to scale. I’m looking for technical feedback on the edge-inference logic and the sync protocols for when these devices do hit a 2G signal.
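For anyone weighing in on the sync question: the shape I have in mind is a store-and-forward outbox, where learning events are queued locally and flushed in small compressed batches whenever a connection (even 2G) briefly appears. The sketch below is only an illustration of that pattern; the endpoint URL, payload schema, and batch size are placeholders, not Ivy's actual protocol.

```python
# Hypothetical store-and-forward sync loop: events persist in SQLite on-device
# and are uploaded opportunistically in small gzip-compressed batches.
# SYNC_URL and the payload schema are placeholders for illustration.
import gzip
import json
import sqlite3
import time

import requests

SYNC_URL = "https://example.org/ivy/sync"   # placeholder endpoint
BATCH_SIZE = 50                              # keep payloads tiny for 2G uplinks

db = sqlite3.connect("ivy_outbox.db")
db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")

def queue_event(event: dict) -> None:
    """Persist an event locally; never blocks on the network."""
    db.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(event),))
    db.commit()

def try_flush() -> bool:
    """Attempt one batched upload; return True if the batch was accepted (or empty)."""
    rows = db.execute(
        "SELECT id, payload FROM outbox ORDER BY id LIMIT ?", (BATCH_SIZE,)
    ).fetchall()
    if not rows:
        return True
    body = gzip.compress(json.dumps([json.loads(p) for _, p in rows]).encode())
    try:
        resp = requests.post(SYNC_URL, data=body,
                             headers={"Content-Encoding": "gzip"}, timeout=30)
        resp.raise_for_status()
    except requests.RequestException:
        return False  # offline-first: leave the queue intact and retry later
    db.execute("DELETE FROM outbox WHERE id <= ?", (rows[-1][0],))
    db.commit()
    return True

def sync_loop(poll_seconds: int = 300) -> None:
    """Opportunistically drain the outbox whenever connectivity appears."""
    while True:
        while try_flush() and db.execute("SELECT COUNT(*) FROM outbox").fetchone()[0]:
            pass
        time.sleep(poll_seconds)
```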
Happy to answer questions about the reality of building LLM infrastructure in a zero-connectivity environment.