The speaker, Yejin Choi, a computer scientist with 20 years of experience in artificial intelligence, delivers a thought-provoking talk on the current state of AI and its future prospects. She draws parallels between AI and Goliath, highlighting the vastness and power of today's AI models, which are often trained on a massive scale using tens of thousands of GPUs and a trillion words of data. These models, often referred to as "large language models," demonstrate sparks of artificial general intelligence (AGI) but often make small, silly mistakes.
Choi discusses the societal challenges posed by these extreme-scale AI models. They are expensive to train and can only be afforded by a few tech companies, leading to a concentration of power. Moreover, these models are so complex that researchers in the broader community cannot truly inspect and dissect them, posing a threat to AI safety. The speaker also raises environmental concerns about the massive carbon footprint of these models.
Choi argues that AI, without robust common sense, cannot be truly safe for humanity. She presents a dilemma: while extreme-scale AI models are powerful, they are also shockingly stupid, making basic commonsense errors. She illustrates this with examples such as an AI system that miscalculates how long it would take to dry 30 clothes, or that fails to measure six liters of water when given a six-liter jug and a twelve-liter jug.
The speaker concludes by asserting that AI needs to be made safer and more humanistic. She proposes three steps: knowing your enemy (evaluating AI with scrutiny), choosing your battles (tackling fundamental questions such as common sense), and innovating your weapons (innovating data and algorithms). She emphasizes the importance of transparency and openness in AI research, and discusses her team's work on commonsense knowledge graphs and moral norm repositories.
Choi acknowledges the remarkable learning enhancement provided by the scale of compute and data in AI but argues that a certain quality of learning is still not quite there. She questions whether wisdom and knowledge can be fully achieved just by scaling things up, and raises concerns about extreme-scale AI models being owned by only a few tech companies. She envisions a future where AI learning will be more like human learning: forming hypotheses, running experiments, and interacting with the world.
1. The speaker is excited to share thoughts on artificial intelligence, using a quote by Voltaire to emphasize the relevance of common sense to AI today [Source: Document(page_content='00:00:03.71: So I\'m excited to share a few spicy\nthoughts on artificial intelligence.\n00:00:10.80: But first, let\'s get philosophical\n00:00:13.84: by starting with this quote by Voltaire,\n00:00:16.39: an 18th century Enlightenment philosopher,\n00:00:18.68: who said, "Common sense is not so common."\n00:00:21.68: Turns out this quote\ncouldn\'t be more relevant\n00:00:24.85: to artificial intelligence today.')].
2. Despite common sense not being so common, AI is an undeniably powerful tool, beating the world-class "Go" champion, acing college admission tests, and passing the bar exam; today's extreme-scale models, trained on tens of thousands of GPUs and a trillion words, are often referred to as "large language models" [Source: Document(page_content='00:00:27.07: Despite that, AI\nis an undeniably powerful tool,\n00:00:31.03: beating the world-class "Go" champion,\n00:00:33.61: acing college admission tests\nand even passing the bar exam.\n00:00:38.12: I\'m a computer scientist of 20 years,\n00:00:40.58: and I work on artificial intelligence.\n00:00:43.04: I am here to demystify AI.\n00:00:46.63: So AI today is like a Goliath.\n00:00:50.13: It is literally very, very large.\n00:00:53.51: It is speculated that the recent ones\nare trained on tens of thousands of GPUs\n00:00:59.39: and a trillion words.\n00:01:02.48: Such extreme-scale AI models,\n00:01:04.60: often referred to as "large\nlanguage models,"')].
3. AI often makes mistakes, which some believe can be fixed with more resources [Source: Document(page_content='00:01:14.28: Except when it makes\nsmall, silly mistakes,\n00:01:18.16: which it often does.\n00:01:20.37: Many believe that whatever\nmistakes AI makes today\n00:01:24.08: can be easily fixed with brute force,\n00:01:26.12: bigger scale and more resources.')].
4. AI models are so expensive to train that only a few tech companies can afford them, leading to a concentration of power [Source: Document(page_content='00:01:32.17: So there are three immediate challenges\nwe face already at the societal level.\n00:01:37.89: First, extreme-scale AI models\nare so expensive to train,\n00:01:44.06: and only a few tech companies\ncan afford to do so.\n00:01:48.10: So we already see\nthe concentration of power.')].
5. Researchers in the broader community lack the means to truly inspect and dissect these models, leaving AI safety at the mercy of a few tech companies; the models also have a massive carbon footprint and environmental impact [Source: Document(page_content='00:01:52.82: But what\'s worse for AI safety,\n00:01:55.32: we are now at the mercy\nof those few tech companies\n00:01:59.11: because researchers\nin the larger community\n00:02:02.95: do not have the means to truly inspect\nand dissect these models.\n00:02:08.42: And let\'s not forget\ntheir massive carbon footprint\n00:02:12.29: and the environmental impact.')].
6. AI lacks robust common sense, and therefore cannot be truly safe for humanity.