Title
Making sense of Kernel Spaces in Neural Learning
Abstract
Kernel-based and Deep Learning methods are two of the most popular approaches in Computational Natural Language Learning. Although these models are rather different and characterized by distinct strengths and weaknesses, both have had an impressive impact on the accuracy of complex Natural Language Processing tasks. An advantage of kernel-based methods is their ability to exploit structured information induced from examples. For instance, Sequence or Tree kernels operate over structures reflecting linguistic evidence, such as the syntactic information encoded in parse trees. Deep Learning approaches are very effective because they can learn non-linear decision functions; however, general models require input instances to be explicitly represented as vectors or tensors, and operating on structured data is possible only through ad-hoc architectures.
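The keywords below name Nyström embeddings as the bridge between the two paradigms: a kernel space is approximated by an explicit low-rank feature map whose dot products mimic the kernel, so the resulting vectors can be fed to a standard neural network. The following is a minimal sketch of the generic Nyström technique under illustrative assumptions, not the paper's implementation: the RBF kernel, the landmark count, and the names `rbf_kernel` and `NystromEmbedding` are all hypothetical, whereas the paper targets structural kernels such as Tree kernels.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

class NystromEmbedding:
    """Low-rank Nystrom approximation: maps inputs to an explicit
    m-dimensional space whose dot products approximate the kernel."""

    def __init__(self, kernel, n_landmarks=100, seed=0):
        self.kernel = kernel
        self.m = n_landmarks
        self.rng = np.random.default_rng(seed)

    def fit(self, X):
        # Sample m landmark examples and eigendecompose the m x m kernel block W.
        idx = self.rng.choice(len(X), size=self.m, replace=False)
        self.landmarks = X[idx]
        W = self.kernel(self.landmarks, self.landmarks)
        eigvals, eigvecs = np.linalg.eigh(W)
        # Drop near-zero eigenvalues for numerical stability.
        keep = eigvals > 1e-10
        # Projection matrix U @ Lambda^{-1/2}, column-wise division.
        self.proj = eigvecs[:, keep] / np.sqrt(eigvals[keep])
        return self

    def transform(self, X):
        # phi(x) = Lambda^{-1/2} U^T k(x, landmarks); one embedding per row.
        return self.kernel(X, self.landmarks) @ self.proj

# Toy usage (random data, illustrative only):
X = np.random.default_rng(1).normal(size=(500, 20))
emb = NystromEmbedding(rbf_kernel, n_landmarks=50).fit(X)
Phi = emb.transform(X)       # explicit feature vectors, at most 50-dimensional
K_approx = Phi @ Phi.T       # approximates rbf_kernel(X, X)
```

Since `Phi @ Phi.T` approximates the full kernel matrix, `Phi` can serve as a fixed input representation for any downstream neural architecture, which is the general idea the keywords point to.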
Year
2019
DOI
10.1016/j.csl.2019.03.006
Venue
Computer Speech & Language
Keywords
Kernel-based learning, Neural methods, Semantic spaces, Nyström embeddings
Field
Kernel (linear algebra), Computer science, Support vector machine, Tree kernel, Theoretical computer science, Feature engineering, Artificial intelligence, Deep learning, Kernel method, Artificial neural network, Machine learning, Kernel (statistics)
DocType
Journal
Volume
58
ISSN
0885-2308
Citations
0
PageRank
0.34
References
0
Authors
3
Name             Order  Citations  PageRank
Danilo Croce     1      314        39.05
Simone Filice    2      89         8.75
Roberto Basili   3      1308       155.68