Here’s a fascinating interview with OpenAI co-founder and chief scientist Ilya Sutskever, one of the key minds behind GPT-4.
Some key takeaways:
- It sounds like Ilya believes that AI might be more than just a collection of statistical predictions, and that eventually, given enough time, data, scale, and further improvements, it might actually develop understanding.
- we can’t know the limits of a model; we might assume with high confidence that a given model has certain limitations, only for those assumptions to turn out false
- prediction is compression: a model that predicts the next token well is, in effect, compressing the data it was trained on (see the sketch after this list)
- predicting the next word or the next pixel well enough could lead to things we’re not even considering yet
- hallucinations can be reduced through reinforcement learning from human feedback; interacting with humans helps the model improve its outputs
- large language models learn compressed representations of the real-world processes that produce their training data
- it’s possible to learn more from less data
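
As a rough illustration of the prediction-compression point (my own sketch, not from the interview): an ideal arithmetic coder spends about -log2(p) bits on a token the model assigns probability p, so a better next-token predictor encodes the same text in fewer bits. The token probabilities below are made up purely for illustration.

```python
import math

# Minimal sketch of "prediction = compression" (illustrative numbers only):
# an ideal arithmetic coder spends about -log2(p) bits on a token the model
# assigns probability p, so better predictions mean a shorter encoding.

tokens = ["the", "cat", "sat", "on", "the", "mat"]

# Hypothetical probabilities each model assigned to the actual next token.
weak_predictions   = [0.10, 0.05, 0.08, 0.20, 0.15, 0.10]
strong_predictions = [0.40, 0.30, 0.35, 0.60, 0.50, 0.45]

def code_length_bits(probs):
    """Total bits an ideal coder needs when the model assigns these probabilities."""
    return sum(-math.log2(p) for p in probs)

print(f"weak predictor:   {code_length_bits(weak_predictions):.1f} bits")
print(f"strong predictor: {code_length_bits(strong_predictions):.1f} bits")
```

Running this shows the stronger predictor needs far fewer bits for the same text, which is the sense in which predicting well and compressing well are the same problem.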
 