Recurrent Convolutional Neural Network with Sparse Stochastic Contexts for Adversarial Prediction – In this paper, we propose a new method for training stochastic recurrent neural networks with sparse features. A sparse embedding, represented as a sparse vector, encodes the model-related features, and the hidden structure of the network is likewise represented as a sparse vector. In the supervised setting, the sparsity of this representation is all that is needed to train the stochastic network for the classification task, which makes both learning and prediction more natural. The proposed method is built on this sparse embedding of the network. We observe that the sparse representation performs well in the supervised learning setting while also being more robust.
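The abstract above leaves the architecture unspecified, so the following is only a minimal sketch of what a recurrent classifier with a sparse embedding and a stochastic sparse hidden context could look like; the GRU backbone, the random masking scheme, and all dimensions are illustrative assumptions rather than the authors' method.

```python
# Illustrative sketch only: the sparse embedding, the stochastic context
# masking, and every hyperparameter below are assumptions for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseStochasticRNN(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128,
                 num_classes=5, sparsity=0.9):
        super().__init__()
        # Sparse embedding: gradients come back as sparse tensors
        # (requires an optimizer that supports them, e.g. SGD or SparseAdam).
        self.embed = nn.Embedding(vocab_size, embed_dim, sparse=True)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)
        self.sparsity = sparsity  # fraction of hidden units dropped per step

    def forward(self, tokens):
        x = self.embed(tokens)              # (batch, seq, embed_dim)
        h, _ = self.rnn(x)                  # (batch, seq, hidden_dim)
        # Stochastic sparse context: randomly mask most hidden units so the
        # classifier sees a sparse, noisy summary of the sequence.
        if self.training:
            mask = (torch.rand_like(h) > self.sparsity).float()
            h = h * mask / (1.0 - self.sparsity)  # inverted-dropout rescaling
        return self.classifier(h[:, -1])    # logits from the last time step

model = SparseStochasticRNN()
tokens = torch.randint(0, 1000, (8, 20))    # toy batch of token ids
labels = torch.randint(0, 5, (8,))
loss = F.cross_entropy(model(tokens), labels)
loss.backward()                             # supervised training signal
```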
Constrained Two-Stage Multiple Kernel Learning for Graph Signals
A Unified Deep Learning Framework for Multi-object Tracking
A Random Fourier Transform Based Scheme for Bayesian Nonconvex Optimization
Multi-view Deep Reinforcement Learning with Dynamic Coding – We present a novel approach for learning deep neural networks (DNNs) on the fly. The approach addresses two distinct challenges: (1) training and optimizing the DNN over all inputs at each time step, with every layer learning to discriminate between inputs within a coherent representation; and (2) training the DNN on the learned representations of the input. Training uses a deep architecture that exploits the structure of the data to capture a discriminative representation of the input, which is then used to train a second DNN on that representation. Experiments on several challenging datasets demonstrate that our approach outperforms state-of-the-art deep neural network architectures.
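The abstract gives no concrete training procedure, so the two-stage setup below (jointly learning a discriminative representation, then training a second DNN on the frozen learned features) is an assumed illustration of "training a DNN on the learned representations of the input", not the paper's algorithm.

```python
# Minimal sketch under assumed architecture and data shapes; all layer
# sizes and the toy data are placeholders for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                        nn.Linear(256, 64))   # learns the representation
head = nn.Linear(64, 10)                      # makes it discriminative

x = torch.randn(32, 784)                      # toy batch
y = torch.randint(0, 10, (32,))

# Stage 1: train encoder and head jointly so the representation
# separates the classes.
opt1 = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()),
                        lr=1e-3)
for _ in range(100):
    opt1.zero_grad()
    F.cross_entropy(head(encoder(x)), y).backward()
    opt1.step()

# Stage 2: freeze the encoder and train a fresh DNN on its learned features.
for p in encoder.parameters():
    p.requires_grad_(False)
classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
opt2 = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for _ in range(100):
    opt2.zero_grad()
    with torch.no_grad():
        feats = encoder(x)                    # fixed learned representation
    F.cross_entropy(classifier(feats), y).backward()
    opt2.step()
```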