
Deep learning is explained well in this primer from DataRobot, which is also where the image on this page was found.

Deep Learning

There are several good articles on Deep Learning that you should read.

The following paragraphs from a New Yorker article by Gary Marcus provide some of the history of deep learning.

[Geoff Hinton, a professor at the University of Toronto, made] an important advance in 2006, with a new technique that he dubbed deep learning, which itself extends important earlier work by my N.Y.U. colleague, Yann LeCun, and is still in use at Google, Microsoft, and elsewhere. A typical setup is this: a computer is confronted with a large set of data, and on its own asked to sort the elements of that data into categories, a bit like a child who is asked to sort a set of toys, with no specific instructions. The child might sort them by color, by shape, or by function, or by something else. Machine learners try to do this on a grander scale, seeing, for example, millions of handwritten digits, and making guesses about which digits look more like one another, "clustering" them together based on similarity. Deep learning's important innovation is to have models learn categories incrementally, attempting to nail down lower-level categories (like letters) before attempting to acquire higher-level categories (like words).

Deep learning excels at this sort of problem, known as unsupervised learning. In some cases it performs far better than its predecessors. It can, for example, learn to identify syllables in a new language better than earlier systems. But it's still not good enough to reliably recognize or sort objects when the set of possibilities is large. The much-publicized Google system that learned to recognize cats, for example, works about seventy per cent better than its predecessors. But it still recognizes less than a sixth of the objects on which it was trained, and it did worse when the objects were rotated or moved to the left or right of an image.

Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like “sibling” or “identical to.” They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems, like Watson, the machine that beat humans in "Jeopardy," use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.
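
The clustering idea in the quoted passage is easy to make concrete. Below is a minimal sketch, not from the article, that groups handwritten digits by similarity using k-means, a simple clustering method, on scikit-learn's bundled digits dataset; the labels are never shown to the algorithm and are used only to inspect the result afterwards.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

digits = load_digits()   # 1,797 small 8x8 grayscale images of handwritten digits
X = digits.data          # each image flattened into 64 pixel values

# Ask for 10 clusters -- one per digit -- with no labels provided.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(X)

# Peek at the true labels only afterwards, to see whether
# "similar-looking" digits landed in the same cluster.
for c in range(10):
    members = digits.target[cluster_ids == c]
    print(f"cluster {c}: most common true digit = {np.bincount(members).argmax()}")

In a typical run most clusters line up with a single digit, but visually confusable digits get mixed together, echoing the imperfect, similarity-driven sorting the passage describes.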

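The passage also mentions Bayesian inference as one ingredient in ensembles like Watson's. A minimal sketch of the idea, with probabilities invented purely for illustration, is Bayes' rule relating a disease to a symptom, the kind of evidential link noted above:

# Bayes' rule: P(disease | symptom). The numbers below are made up
# for illustration; they do not come from the article or from Watson.
def posterior(prior, true_positive_rate, false_positive_rate):
    evidence = (true_positive_rate * prior
                + false_positive_rate * (1 - prior))
    return true_positive_rate * prior / evidence

# Suppose 1% of patients have the disease, 90% of the sick show the
# symptom, and 5% of the healthy show it anyway.
print(posterior(prior=0.01, true_positive_rate=0.90, false_positive_rate=0.05))
# ~0.154: even a strong symptom raises the probability to only about 15%.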