Bitter lesson

The bitter lesson is a claim in artificial intelligence that, in the long run, simpler systems that can scale with available computational power will outperform more complex systems that integrate domain-specific human knowledge, because they take better advantage of Moore's law. The principle was proposed and named in a 2019 essay by Richard Sutton[1] and is now widely accepted.[2][3][4][5][6][7][8]

The essay

Sutton gives several examples that illustrate the lesson:

- In computer chess, the search-heavy methods behind Deep Blue's 1997 defeat of world champion Garry Kasparov were initially dismissed by researchers who had built systems around human chess knowledge, but the knowledge-based approaches ultimately lost out.
- In computer Go, large-scale search and self-play learning likewise overtook decades of effort to encode human Go expertise.
- In speech recognition, statistical methods such as hidden Markov models, and later deep learning, outperformed systems built on linguistic knowledge of words and phonemes.
- In computer vision, learned features displaced handcrafted pipelines based on edges, generalized cylinders, and SIFT-like features.

Sutton concludes that time is better invested in finding simple, scalable solutions that can take advantage of Moore's law than in introducing ever-more-complex human insights, and calls this the "bitter lesson". He also identifies two general-purpose techniques that have been shown to scale effectively: search and learning. The lesson is considered "bitter" because it is less anthropocentric than many researchers expected, and they have therefore been slow to accept it.
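
The search half of this claim can be made concrete with a minimal sketch (a toy illustration constructed for this article, not taken from Sutton's essay): on tic-tac-toe, a player with no built-in game knowledge that scores moves by random rollouts, a rudimentary form of Monte Carlo search, gets stronger as its simulation budget grows, while a handcrafted win-block-center heuristic stays fixed no matter how much computation is available. The function names and the particular heuristic are invented for the example.

    import random

    # The eight winning lines of a 3x3 board, indexed 0..8 row by row.
    WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
            (0, 3, 6), (1, 4, 7), (2, 5, 8),
            (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in WINS:
            if board[a] and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def moves(board):
        return [i for i, v in enumerate(board) if v is None]

    def rollout(board, player):
        # Play uniformly random moves to the end; return 'X', 'O', or None (draw).
        board = board[:]
        while moves(board) and not winner(board):
            board[random.choice(moves(board))] = player
            player = 'O' if player == 'X' else 'X'
        return winner(board)

    def monte_carlo_move(board, player, budget):
        # Knowledge-free search: try each move, estimate its value with
        # `budget` random rollouts, and keep the best-scoring move.
        opponent = 'O' if player == 'X' else 'X'
        best, best_score = None, -2.0
        for m in moves(board):
            board[m] = player
            score = 0
            for _ in range(budget):
                w = rollout(board, opponent)
                score += (w == player) - (w == opponent)
            board[m] = None
            if score / budget > best_score:
                best, best_score = m, score / budget
        return best

    def heuristic_move(board, player):
        # Handcrafted human knowledge: take a winning move, otherwise block
        # the opponent's winning move, otherwise prefer center, corners, edges.
        opponent = 'O' if player == 'X' else 'X'
        for who in (player, opponent):
            for m in moves(board):
                board[m] = who
                won = winner(board) == who
                board[m] = None
                if won:
                    return m
        for m in (4, 0, 2, 6, 8, 1, 3, 5, 7):
            if board[m] is None:
                return m

    def play(budget):
        # One game: the Monte Carlo player is 'X' and moves first.
        board, player = [None] * 9, 'X'
        while moves(board) and not winner(board):
            if player == 'X':
                m = monte_carlo_move(board, player, budget)
            else:
                m = heuristic_move(board, player)
            board[m] = player
            player = 'O' if player == 'X' else 'X'
        return winner(board)

    for budget in (1, 10, 100):  # more rollouts per move = more computation
        results = [play(budget) for _ in range(100)]
        print(f"budget={budget:3d}: "
              f"search wins {results.count('X')}, "
              f"draws {results.count(None)}, "
              f"heuristic wins {results.count('O')}")

As the budget grows, the generic searcher's results typically improve markedly without any game-specific knowledge being added, while the heuristic cannot benefit from the extra computation; the same scaling behaviour, at vastly larger scale, is what Sutton describes in chess, Go, speech, and vision.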

Impact

The essay was published on Sutton's website incompleteideas.net in 2019, and has received hundreds of formal citations according to Google Scholar. Some of these provide alternative statements of the principle; for example, the 2022 paper "A Generalist Agent" from Google DeepMind summarized the lesson as:[2]

Historically, generic models that are better at leveraging computation have also tended to overtake more specialized domain-specific approaches, eventually.

Another phrasing of the principle appears in Google's Switch Transformers paper, coauthored by Noam Shazeer:[3]

Simple architectures—backed by a generous computational budget, data set size and parameter count—surpass more complicated algorithms.

The principle is further referenced in many other works on artificial intelligence. For example, From Deep Learning to Rational Machines draws a connection to long-standing debates in the field, such as Moravec's paradox and the contrast between neats and scruffies.[9] In "Engineering a Less Artificial Intelligence", the authors concur that "flexible methods so far have always outperformed handcrafted domain knowledge in the long run", while noting that "[w]ithout the right (implicit) assumptions, generalization is impossible".[5] More recently, "The Brain's Bitter Lesson: Scaling Speech Decoding With Self-Supervised Learning" continues Sutton's argument, contending that (as of 2025) the lesson has not yet been fully absorbed in the field of speech decoding from brain data.[6]

Other work has sought to apply the principle and validate it in new domains. For example, the 2022 paper "Beyond the Imitation Game" applies the principle to large language models, concluding that "it is vitally important that we understand their capabilities and limitations" in order to "avoid devoting research resources to problems that are likely to be solved by scale alone".[7] In 2024, "Learning the Bitter Lesson: Empirical Evidence from 20 Years of CVPR Proceedings" examined further evidence from the field of computer vision and pattern recognition, concluding that the previous twenty years of work in the field show "a strong adherence to the core principles of the 'bitter lesson'".[4] In "Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning", the authors study the generalization of actor-critic algorithms and find that "general methods that are motivated by stabilization of gradient-based learning significantly outperform RL-specific algorithmic improvements across a variety of environments", noting that this is consistent with the bitter lesson.[8]

References

  1. Sutton, Rich (March 13, 2019). "The Bitter Lesson". www.incompleteideas.net. Retrieved September 7, 2025.
  2. Reed, Scott; Zolna, Konrad; Parisotto, Emilio; et al. (2022). "A Generalist Agent". Transactions on Machine Learning Research. ISSN 2834-8856. arXiv:2205.06175. Retrieved September 7, 2025.
  3. Fedus, William; Zoph, Barret; Shazeer, Noam (2022). "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity". Journal of Machine Learning Research. 23 (120): 1–39. Retrieved September 14, 2025.
  4. Yousefi, Mojtaba; Collins, Jack (2024). "Learning the Bitter Lesson: Empirical Evidence from 20 Years of CVPR Proceedings". Proceedings of the 1st Workshop on NLP for Science (NLP4Science). Association for Computational Linguistics. pp. 175–187. Retrieved September 7, 2025.
  5. Sinz, Fabian H.; Pitkow, Xaq; Reimer, Jacob; et al. (2019). "Engineering a Less Artificial Intelligence". Neuron. 103 (6). Elsevier: 967–979. doi:10.1016/j.neuron.2019.08.034. Retrieved September 13, 2025.
  6. Jayalath, Dulhan; Landau, Gilad; Shillingford, Brendan; Woolrich, Mark; Parker Jones, ʻŌiwi (2025). "The Brain's Bitter Lesson: Scaling Speech Decoding With Self-Supervised Learning". Forty-second International Conference on Machine Learning. Proceedings of Machine Learning Research. Retrieved September 13, 2025.
  7. Srivastava, Aarohi; Rastogi, Abhinav; Rao, Abhishek; Awal, Abu; Abid, Abubakar; et al. (2023). "Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models". Transactions on Machine Learning Research.
  8. Nauman, Michal; Bortkiewicz, Michał; Miłoś, Piotr; Trzciński, Tomasz; Ostaszewski, Mateusz; et al. (2024). "Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning". Proceedings of the 41st International Conference on Machine Learning. Proceedings of Machine Learning Research. Retrieved September 13, 2025.
  9. Buckner, Cameron J. (December 11, 2023). From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence. Oxford University Press. doi:10.1093/oso/9780197653302.001.0001. ISBN 9780197653302.