From Turing Models to Large Language Models: Evolution and Convergence of Symbolic and Connectionist Approaches in Artificial Intelligence

Authors

  • Shijie Ye

DOI:

https://doi.org/10.54097/m92y8a35

Keywords:

Turing model, neural network model, large language models, artificial intelligence.

Abstract

This paper investigates the evolution of artificial intelligence (AI), focusing on the long-standing scholarly debate between symbolism and connectionism, with particular emphasis on the Turing model and the neural network model. The study situates large language models (LLMs) within this theoretical framework, emphasizing their connectionist foundations while critically examining their historical interactions with symbolic approaches. Key issues addressed include the academic controversies surrounding symbolic and connectionist methodologies, the distinctive attributes of each paradigm, and the future development trajectory of LLMs, specifically whether their advancement should prioritize algorithmic innovation or data-driven scalability. The primary contribution of this paper is a comparative analysis of the Turing model and the neural network model, offering a nuanced perspective on the respective strengths and limitations of each approach. By clarifying this research landscape, the comparative framework seeks to foster the convergence of the two paradigms and thereby advance the development of LLMs. The findings suggest that integrating symbolic and connectionist paradigms holds significant promise for enhancing LLM capabilities, with implications for both academic research and technological innovation. This paper contributes to a deeper understanding of AI, offering insights that may expedite the development of more resilient and adaptable AI systems, ultimately benefiting human welfare and societal advancement.

Published

25-02-2025

How to Cite

Ye, S. (2025). From Turing Models to Large Language Models: Evolution and Convergence of Symbolic and Connectionist Approaches in Artificial Intelligence. Highlights in Science, Engineering and Technology, 128, 264-272. https://doi.org/10.54097/m92y8a35