Theoretical results suggest that in order to learn the kind of complicated
functions that can represent high-level abstractions (e.g., in
vision, language, and other AI-level tasks), one may need deep architectures.
Deep architectures are composed of multiple levels of non-linear
operations, such as in neural nets with many hidden layers or in complicated
propositional formulae re-using many sub-formulae. Searching
the parameter space of deep architectures is a difficult task, but learning
algorithms such as those for Deep Belief Networks have recently been
proposed to tackle this problem with notable success, beating the
state-of-the-art in certain areas. This monograph discusses the motivations
and principles regarding learning algorithms for deep architectures, in
particular those exploiting as building blocks unsupervised learning of
single-layer models such as Restricted Boltzmann Machines, used to
construct deeper models such as Deep Belief Networks.
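
To make the building-block idea concrete, the following is a minimal illustrative sketch (not the monograph's reference code) of greedy layer-wise pretraining: each layer is a Restricted Boltzmann Machine trained with one step of contrastive divergence (CD-1), and each trained layer's hidden activations become the input to the next layer, yielding the stack underlying a Deep Belief Network. All names, layer sizes, and hyperparameters here (e.g., pretrain_stack, cd1_update, the learning rate) are assumptions chosen for illustration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """A binary-binary Restricted Boltzmann Machine (illustrative sketch)."""
    def __init__(self, n_visible, n_hidden, rng):
        self.rng = rng
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0, lr=0.05):
        # Positive phase: sample hidden units given the data.
        ph0 = self.hidden_probs(v0)
        h0 = (self.rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step down to the visible units and back up.
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # CD-1 approximation to the log-likelihood gradient.
        batch = v0.shape[0]
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
        self.b_v += lr * (v0 - pv1).mean(axis=0)
        self.b_h += lr * (ph0 - ph1).mean(axis=0)

def pretrain_stack(data, layer_sizes, epochs=10, seed=0):
    """Train RBMs one layer at a time; each layer's hidden activations
    serve as the 'data' for the next layer (greedy layer-wise pretraining)."""
    rng = np.random.default_rng(seed)
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden, rng)
        for _ in range(epochs):
            rbm.cd1_update(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)  # propagate the representation upward
    return rbms

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy = (rng.random((256, 64)) < 0.3).astype(float)  # toy binary data
    stack = pretrain_stack(toy, layer_sizes=[32, 16])
    print("trained", len(stack), "RBM layers")

In a full Deep Belief Network, such a pretrained stack would typically be followed by supervised fine-tuning of the whole deep model; the sketch stops at the unsupervised stage the abstract refers to.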