So I fixed it... Or Claude did...
Running autoresearch on Google Colab GPUs, the model improved by 43% across 89 automated experiments in about 8 hours. The old model? Honestly, most of its output was nonsense. The new one almost always translates to something reasonable:
Our zero customers are thrilled.
The technical challenge: getting autoresearch to run on Colab
This was the real puzzle. Colab's default security settings block Claude's code tool from running properly. And when the GPU session shuts down (which it will, because Colab sessions are temporary), Claude loses its own research history.
I fixed this with a Google Drive rsync loop that saves learnings from both good and bad experiments, plus an extra step to commit the good code to GitHub.
One funny moment: I interrupted the research flow to check on progress and Claude seemed genuinely annoyed, then started right back up again without asking for confirmation. Relatable.
The whole thing is open source. So if you want to run your own autoresearch experiments — maybe to start writing in Ugaritic cuneiform, or Wingdings — go for it: github.com/JLansey/EgyptGPT
Here are a couple more good ones that might come in handy:
𓐝 𓂝 𓎡 𓃹 𓈖 𓀜 𓄣 𓏤 𓎡
Behold, your heart is tired.
𓊪 𓏏 𓂋 𓊪 𓂝 𓄂 𓂝 𓅓 𓂋 𓊹 𓍛 𓌸 𓂋 𓂋 𓂋 𓎟 𓆑 𓇳 𓎟 𓆓 𓏏 𓇿
Hereditary noble and local prince, overseer of priests, one of his Lord's lovers every day forever.
𓊃 𓅭 𓄿 𓏲 𓀸 𓀜 𓏲 𓀀
Be careful!
𓂜 𓈖 𓈙 𓈖 𓃩 𓀜
There is no shame.
𓏏 𓅱 𓏏 𓀚 𓏥 𓆑 𓅓 𓋴 𓎛 𓍃 𓅓 𓅬 𓅪 𓅓 𓂝 𓋴 𓐠 𓄿 𓀁 𓆑 𓅓 𓂝 𓂝 𓈖 𓀀
His officers were in trouble, his worshippers were in my hand.