AI Definitions: Neural Networks

Neural networks (or artificial neural networks, ANNs) are mathematical systems that can identify patterns in text, images, and sounds. In this type of machine learning, computers learn a task by analyzing training examples. The approach is modeled loosely on the human brain, with its interwoven tangle of neurons that process data and find complex associations. While symbolic artificial intelligence dominated research for most of AI’s history, most recent developments in artificial intelligence have centered on neural networks. First proposed in 1943 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department, neural nets remained a major research area in both neuroscience and computer science until 1969. The technique enjoyed a resurgence in the 1980s, fell into disfavor in the first decade of the new century, and has returned stronger in the second decade, fueled largely by the increased processing power of graphics chips. Also, see “Transformers.”
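
To make the definition concrete, here is a minimal sketch of a neural network that learns a task by analyzing training examples, in this case the XOR function. The network size, learning rate, and iteration count are illustrative assumptions, not details from the definition above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Training examples: inputs and target outputs for XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 "neurons", randomly initialized (illustrative sizes).
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

lr = 1.0
for _ in range(10000):
    # Forward pass: compute predictions with the current weights.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of squared error, used to adjust each weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
# If training converged, preds recovers XOR: [0 1 1 0].
print(preds.ravel())
```

The key idea is the one in the definition: no rule for XOR is ever written down; the weights are adjusted repeatedly until the network's outputs match the training examples.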

More AI definitions here.