-
Article
Continual variational dropout: a view of auxiliary local variables in continual learning
The regularization/prior-based approach is one of the critical strategies in continual learning, given its mechanism for preserving learned knowledge and preventing forgetting. Without any ret...
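The title above refers to variational dropout; as background, the core building block of that family is multiplicative Gaussian noise applied during training only. Below is a minimal NumPy sketch of Gaussian (variational-style) dropout, an illustrative baseline rather than the paper's continual-learning method; the function name and `alpha` parameter are our own.

```python
import numpy as np

def gaussian_dropout(x, alpha=0.5, rng=None, train=True):
    """Multiplicative Gaussian noise with mean 1 and variance alpha.
    At test time (train=False) it is the identity, so the expected
    activation is preserved."""
    if not train:
        return x
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(loc=1.0, scale=np.sqrt(alpha), size=x.shape)
    return x * noise

x = np.ones((4, 3))
y = gaussian_dropout(x, alpha=0.25)  # noisy activations during training
```

Variational dropout additionally learns a per-weight noise rate `alpha`; this sketch keeps it as a fixed hyperparameter.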
-
Article
Adaptive infinite dropout for noisy and sparse data streams
The ability to analyze data streams, which arrive sequentially and possibly infinitely, is increasingly vital in various online applications. However, data streams pose various challenges, including sparse and no...
-
Chapter and Conference Paper
Reducing Catastrophic Forgetting in Neural Networks via Gaussian Mixture Approximation
Our paper studies continual learning (CL) problems, in which data comes in sequence and the trained models are expected to be capable of utilizing existing knowledge to solve new tasks without losing perfor...
-
Chapter and Conference Paper
Auxiliary Local Variables for Improving Regularization/Prior Approach in Continual Learning
The regularization/prior approach has emerged as one of the major directions in continual learning for helping a neural network reduce forgetting of learned knowledge. This approach measures the importance of weights for...
-
Article (Open Access)
Predicting miRNA–disease associations using improved random walk with restart and integrating multiple similarities
Identifying beneficial and valuable miRNA–disease associations (MDAs) through biological laboratory experiments is costly and time-consuming. Proposing a powerful and meaningful computational method for predic...
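The title above names random walk with restart (RWR), a standard network-propagation technique. A minimal NumPy sketch of plain RWR on a small similarity graph follows; it illustrates the generic algorithm, not this paper's improved variant, and the graph and parameter values are made up for illustration.

```python
import numpy as np

def random_walk_with_restart(W, seed, restart=0.3, tol=1e-10, max_iter=1000):
    """Iterate p <- (1-r) * P @ p + r * e until convergence, where P is the
    column-normalized adjacency matrix and e is the indicator of the seed
    node. Returns a proximity distribution over all nodes."""
    P = W / W.sum(axis=0, keepdims=True)   # column-stochastic transitions
    e = np.zeros(W.shape[0])
    e[seed] = 1.0
    p = e.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * (P @ p) + restart * e
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# toy undirected similarity graph over 4 nodes
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
p = random_walk_with_restart(W, seed=0)
```

In MDA-prediction settings the walk typically runs on a heterogeneous miRNA–disease network; here a single small graph keeps the sketch self-contained.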
-
Article
Bag of biterms modeling for short texts
Analyzing texts from social media encounters many challenges due to their unique characteristics of shortness, massiveness, and dynamics. Short texts do not provide enough context information, causing the failu...
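For context on the "biterm" unit the title refers to: a biterm is an unordered pair of words co-occurring in the same short text, and biterm-style topic models count these pairs corpus-wide instead of per-document word counts. A minimal sketch, with an illustrative toy corpus:

```python
from itertools import combinations
from collections import Counter

def extract_biterms(doc_tokens):
    """All unordered word pairs co-occurring in one short text."""
    return [tuple(sorted(pair)) for pair in combinations(doc_tokens, 2)]

corpus = [["apple", "fruit", "sweet"], ["apple", "phone"]]
biterm_counts = Counter(b for doc in corpus for b in extract_biterms(doc))
```

Pooling pairs across the corpus is what lets these models sidestep the sparse per-document statistics of short texts.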
-
Chapter and Conference Paper
Evaluating Named-Entity Recognition Approaches in Plant Molecular Biology
Text mining research is becoming an important topic in biology, with the aim of extracting biological entities from scientific papers in order to extend biological knowledge. However, few thorough studies are ...
-
Chapter and Conference Paper
A Fast Algorithm for Posterior Inference with Latent Dirichlet Allocation
Latent Dirichlet Allocation (LDA) [1], among various forms of topic models, is an important probabilistic generative model for analyzing large collections of text corpora. The problem of posterior inference for i...
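As background for the posterior-inference problem the abstract names: given a fixed topic-word matrix, inferring one document's topic proportions can be done with simple EM-style fixed-point updates. The sketch below shows that generic folding-in procedure, not the fast algorithm proposed in the paper; the function name and toy topics are illustrative.

```python
import numpy as np

def infer_doc_topics(counts, beta, n_iters=100):
    """EM-style updates for a document's topic proportions theta, holding
    the topic-word matrix beta (K x V, rows sum to 1) fixed. Each step
    does not decrease the document log-likelihood."""
    K = beta.shape[0]
    theta = np.full(K, 1.0 / K)
    N = counts.sum()
    for _ in range(n_iters):
        # responsibility of topic k for word j: theta_k * beta_kj, normalized
        phi = theta[:, None] * beta            # K x V
        phi /= phi.sum(axis=0, keepdims=True)
        theta = (phi @ counts) / N             # re-estimate proportions
    return theta

beta = np.array([[0.50, 0.40, 0.05, 0.05],    # topic 0: words 0,1
                 [0.05, 0.05, 0.40, 0.50]])   # topic 1: words 2,3
counts = np.array([3.0, 2.0, 0.0, 1.0])       # word counts for one document
theta = infer_doc_topics(counts, beta)
```

Full LDA inference also places a Dirichlet prior on theta and learns beta; the sketch isolates only the per-document step that fast inference algorithms aim to accelerate.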
-
Article
An effective and interpretable method for document classification
As the number of documents has been increasing rapidly in recent years, automatic text categorization is becoming a more important and fundamental task in information retrieval and text mining. Accuracy and int...
-
Chapter and Conference Paper
Stochastic Bounds for Inference in Topic Models
Topic models are popular for modeling discrete data (e.g., texts, images, videos, links), and provide an efficient way to discover hidden structures/semantics in massive data. The problem of posterior inferenc...
-
Chapter and Conference Paper
Keeping Priors in Streaming Bayesian Learning
Exploiting prior knowledge in the Bayesian learning process is one way to improve the quality of a Bayesian model. To the best of our knowledge, however, there is no formal research on the influence of prior ...
-
Chapter and Conference Paper
Sparse Stochastic Inference with Regularization
The massive amount of digital text and its delivery in a streaming manner pose challenges for traditional inference algorithms. Recently, advances in stochastic inference algorithms have made it f...
-
Chapter and Conference Paper
Enabling Hierarchical Dirichlet Processes to Work Better for Short Texts at Large Scale
Analyzing texts from social media often encounters many challenges, including shortness, dynamics, and huge size. Short texts do not provide enough information, so statistical models often fail to work. In ...
-
Chapter and Conference Paper
An Effective NMF-Based Method for Supervised Dimension Reduction
Sparse topic modeling is a promising approach to learning meaningful hidden topics from large datasets with high dimension and complex distribution. We propose a sparse NMF-based method for supervised dimensio...
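For readers unfamiliar with the NMF building block the title refers to: non-negative matrix factorization approximates X ≈ W·H with both factors non-negative. Below is a plain multiplicative-update sketch (the classic Lee–Seung scheme), a generic unsupervised baseline rather than the supervised, sparse variant this paper proposes.

```python
import numpy as np

def nmf(X, k, n_iters=200, eps=1e-9, seed=0):
    """Multiplicative-update NMF: alternately scale H and W so the
    Frobenius reconstruction error ||X - W @ H|| is non-increasing,
    while both factors stay non-negative."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(n_iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H

X = np.random.default_rng(1).random((6, 5))    # toy non-negative data
W, H = nmf(X, k=2)
err = np.linalg.norm(X - W @ H)
```

Supervised variants additionally couple the factorization to class labels (e.g., via a label-fit term in the objective); the sketch omits that to stay minimal.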
-
Chapter and Conference Paper
Effective and Interpretable Document Classification Using Distinctly Labeled Dirichlet Process Mixture Models of von Mises-Fisher Distributions
Document classification is essential to information retrieval and text mining. Accuracy and interpretability are two important aspects of text classifiers. This paper proposes an interpretable classification m...
-
Chapter and Conference Paper
Fully Sparse Topic Models
In this paper, we propose Fully Sparse Topic Model (FSTM) for modeling large collections of documents. Three key properties of the model are: (1) the inference algorithm converges in linear time, (2) learning ...