In this talk, the speaker addresses the challenges and implications of extreme-scale artificial intelligence models. They highlight three main challenges:
1. Concentration of Power: Large AI models are expensive to train, limiting access to only a few tech companies. This concentration of power raises concerns about control and transparency.
2. Lack of Common Sense: Despite their vast capabilities, these models often lack basic common sense, as demonstrated by their inability to answer simple questions correctly.
3. The Need for Innovation: The speaker argues for a shift in AI research towards teaching AI models common sense, human norms, and values through innovative approaches, rather than relying solely on scaling up.
They emphasize the importance of transparency, open data, and ethical considerations in AI development. The speaker suggests that AI should learn more like humans do, through forming hypotheses and experimenting with the world, rather than through scaling alone. They also acknowledge that their approach could potentially be combined with large AI models in a future synthesis.
**Key Facts:**
1. Voltaire, an 18th-century philosopher, said: "common sense is not so common."
2. AI is powerful: it has beaten a world-class Go champion, aced the Turing test, and passed the bar exam.
3. The speaker has 20 years of experience as a computer scientist in artificial intelligence.
4. Modern AI models are extremely large, with some trained on tens of thousands of GPUs and a trillion words.
5. Large language models have been described as showing "sparks" of Artificial General Intelligence (AGI).
6. Despite their massive capabilities, these models still make small, basic mistakes.
7. Extreme-scale AI models are expensive to train, making them accessible to only a few tech companies.
8. These few tech companies hold a concentration of power, posing challenges at the societal level.
9. AI models have a significant carbon footprint and environmental impact.
10. Current AI lacks robust common sense.
11. The speaker works at a university and a non-profit research institute.
12. There's a need to make AI smaller to democratize it and teach it human norms and values.
13. The speaker invokes The Art of War: know your enemy, choose your battles, and innovate your weapons.
14. AI has inconsistent performance, being both highly intelligent and prone to basic errors.
15. The current approach to AI learning relies on the brute-force method of ever-increasing scale.
16. Children do not require extensive data to acquire basic common sense.
17. Common sense remains a long-standing challenge in AI.
18. The universe consists of roughly 5% normal matter; the remaining 95% is dark matter and dark energy.
19. For language, visible text represents the observable part, while unspoken world rules represent the "dark matter."
20. Without basic human understanding, AI might make decisions that are harmful to humans.
21. Common sense in AI has often been dismissed as a research topic of the '70s and '80s.
22. Modern AI systems are trained on raw web data, crafted examples, and human feedback.
23. This training data should be transparent and publicly available.
24. The speaker's teams at UW and AI2 work on creating open common sense knowledge graphs and moral repositories.
25. There is a need to explore new algorithms to develop AI models with better knowledge representation.
26. Large language models acquire knowledge as a byproduct rather than a direct learning objective.
27. Human learning focuses on understanding the world, whereas current AI models predict the next word.
28. The speaker raises concerns about AI's inefficiency and the risks of over-relying on scale.
29. The quest is to teach AI common sense, norms, and values.