-
Article
Moreau-Yoshida variational transport: a general framework for solving regularized distributional optimization problems
We address a general optimization problem: the minimization of a composite objective functional defined over a class of probability distributions. The objective functional consists of two components: on...
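The abstract above is truncated; as a hedged illustration of the Moreau-Yoshida smoothing named in the title, one standard form of the envelope over probability measures is sketched below. Pairing the proximity term with the 2-Wasserstein metric is an assumption for illustration, not a detail taken from the abstract.

```latex
% Moreau--Yosida envelope of a functional F over probability measures,
% with smoothing parameter \lambda > 0. Using the 2-Wasserstein metric
% W_2 as the proximity term is an assumption for illustration.
F_\lambda(\mu) \;=\; \inf_{\nu \in \mathcal{P}_2(\mathbb{R}^d)}
  \left\{ F(\nu) \;+\; \frac{1}{2\lambda}\, W_2^2(\mu, \nu) \right\}
```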
-
Article
Mirror variational transport: a particle-based algorithm for distributional optimization on constrained domains
We consider the optimization problem of minimizing an objective functional that admits a variational form and is defined over probability distributions on a constrained domain, which poses challenges to both th...
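A minimal sketch of the mirror-map idea behind such constrained methods follows, assuming an entropic mirror map on the probability simplex (exponentiated gradient). The objective and step size are illustrative assumptions; the paper's particle-based transport algorithm is more general.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, steps=100, eta=0.1):
    """Mirror descent on the probability simplex with the entropic
    mirror map. Illustrative sketch only, not the paper's algorithm."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        # Gradient step in the dual space, then map back through the
        # mirror map's inverse: for negative entropy this is a
        # multiplicative (softmax-like) update that stays feasible.
        x = x * np.exp(-eta * grad(x))
        x /= x.sum()  # renormalize onto the simplex
    return x

# Example: minimize <x, Qx> over the simplex; mass should concentrate
# on the coordinate with the smallest diagonal cost.
Q = np.diag([3.0, 1.0, 2.0])
print(entropic_mirror_descent(lambda x: 2.0 * Q @ x, np.ones(3) / 3))
```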
-
Chapter and Conference Paper
Image Data Recoverability Against Data Collaboration and Its Countermeasure
The development of machine learning and related techniques has accelerated the use of data in a variety of fields, including medicine, finance, and advertising. Because the amount of data is increasing extremely ...
-
Chapter and Conference Paper
Accelerating the Backpropagation Algorithm by Using NMF-Based Method on Deep Neural Networks
Backpropagation (BP) is the most widely used algorithm for training deep neural networks (DNNs) and is considered the de facto standard. However, the BP algorithm often requires a lot of com...
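The truncated abstract does not detail the acceleration scheme, so as background only, here is a minimal sketch of the classical Lee–Seung multiplicative updates for NMF, the building block that NMF-based methods of this kind rest on. The rank, iteration count, and random data are illustrative.

```python
import numpy as np

def nmf_multiplicative(A, rank, iters=200, eps=1e-9):
    """Lee--Seung multiplicative updates for A ~= W @ H with W, H >= 0,
    minimizing the Frobenius error. Background illustration only; not
    the paper's acceleration scheme."""
    rng = np.random.default_rng(0)
    W = rng.random((A.shape[0], rank))
    H = rng.random((rank, A.shape[1]))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)  # update H, nonnegativity preserved
        W *= (A @ H.T) / (W @ H @ H.T + eps)  # update W, nonnegativity preserved
    return W, H

A = np.abs(np.random.default_rng(1).standard_normal((20, 15)))
W, H = nmf_multiplicative(A, rank=5)
print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))  # relative error
```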
-
Chapter and Conference Paper
Collaborative Data Analysis: Non-model Sharing-Type Machine Learning for Distributed Data
This paper proposes a novel non-model sharing-type collaborative learning method for distributed data analysis, in which data are partitioned in both samples and features. Analyzing these types of distributed ...
-
Chapter and Conference Paper
Parameter Evolution Self-Adaptive Strategy and Its Application for Cuckoo Search
Cuckoo Search (CS) is a simple yet efficient swarm intelligence algorithm based on Lévy flights. However, its performance depends heavily on the parameter settings. Though many studies have designed control ...
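A compact sketch of the baseline Cuckoo Search the paper builds on follows, with Lévy steps drawn via Mantegna's algorithm. The fixed parameters alpha (step scale) and pa (abandonment fraction) are exactly the kind of settings a self-adaptive strategy would evolve; their values here, and the search bounds, are illustrative assumptions.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(beta=1.5, size=1):
    """Mantegna's algorithm for approximately Levy-distributed steps."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, size)
    v = np.random.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n=15, iters=200, alpha=0.01, pa=0.25):
    """Baseline CS; alpha and pa are the parameters a self-adaptive
    strategy such as the paper's would tune during the run."""
    nests = np.random.uniform(-5.0, 5.0, (n, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[fit.argmin()].copy()
        for i in range(n):
            # Levy-flight move biased by the distance to the best nest.
            trial = nests[i] + alpha * levy_step(size=dim) * (nests[i] - best)
            if f(trial) < fit[i]:
                nests[i], fit[i] = trial, f(trial)
        # Abandon a fraction pa of the worst nests and rebuild them.
        worst = fit.argsort()[-max(1, int(pa * n)):]
        nests[worst] = np.random.uniform(-5.0, 5.0, (len(worst), dim))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[fit.argmin()], fit.min()

x, fx = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=5)
print(x, fx)  # should approach the origin on this sphere function
```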
-
Article
Parallel Implementation of the Nonlinear Semi-NMF Based Alternating Optimization Method for Deep Neural Networks
For computing the weights of deep neural networks (DNNs), the backpropagation (BP) method has been widely used as a de facto standard algorithm. Since the BP method is based on a stochastic gradient descent method...
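A rough sketch of the alternating idea follows, assuming a two-layer model Y ≈ W2·f(W1·X) with f = ReLU and least-squares half-steps solved via the pseudo-inverse. This is a hedged stand-in; the paper's nonlinear semi-NMF updates and their parallel implementation are more refined than this heuristic.

```python
import numpy as np

def relu(Z):
    return np.maximum(Z, 0.0)

def alternating_fit(X, Y, hidden, iters=50):
    """Alternating least-squares sketch for Y ~= W2 @ relu(W1 @ X).
    Each half-step fixes one factor and solves a linear problem with
    the pseudo-inverse; a rough stand-in for the paper's nonlinear
    semi-NMF based updates, not their actual algorithm."""
    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((hidden, X.shape[0]))
    for _ in range(iters):
        H = relu(W1 @ X)               # nonnegative hidden activations
        W2 = Y @ np.linalg.pinv(H)     # least-squares output weights
        # Back out a nonnegative target for the activations, then
        # refit the input weights through another least-squares solve.
        H_target = np.maximum(np.linalg.pinv(W2) @ Y, 0.0)
        W1 = H_target @ np.linalg.pinv(X)
    return W1, W2

X = np.random.default_rng(1).standard_normal((8, 200))
Y = np.abs(np.random.default_rng(2).standard_normal((4, 200)))
W1, W2 = alternating_fit(X, Y, hidden=16)
print(np.linalg.norm(Y - W2 @ relu(W1 @ X)) / np.linalg.norm(Y))
```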
-
Chapter and Conference Paper
Structure-Preserving Technique in the Block SS–Hankel Method for Solving Hermitian Generalized Eigenvalue Problems
The block SS–Hankel method is one of the most efficient methods for solving interior generalized eigenvalue problems (GEPs) when only the eigenvalues are required. However, even if the target GEP is Hermitian,...
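For context, a hedged sketch of the contour-integral moments the block SS–Hankel method is built from, in the notation commonly used for the Sakurai–Sugiura family; the probing matrices and moment count below are standing assumptions of that presentation rather than details from the abstract.

```latex
% Block moments of the pencil zB - A on a contour \Gamma that encloses
% the target eigenvalues, with probing matrices V and \widetilde{V}:
\mu_k \;=\; \frac{1}{2\pi\mathrm{i}} \oint_{\Gamma}
  z^k\, \widetilde{V}^{\mathsf{H}} (zB - A)^{-1} B V \, dz,
  \qquad k = 0, 1, \dots, 2m-1.
% The block Hankel matrices H_m = [\mu_{i+j-2}]_{i,j=1}^{m} and
% H_m^{<} = [\mu_{i+j-1}]_{i,j=1}^{m} are assembled from these moments;
% the eigenvalues inside \Gamma are recovered from the small pencil
% H_m^{<} - \lambda H_m.
```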
-
Chapter and Conference Paper
Alternating Optimization Method Based on Nonnegative Matrix Factorizations for Deep Neural Networks
The backpropagation algorithm for calculating gradients has been widely used in the computation of weights for deep neural networks (DNNs). This method requires the derivatives of the objective function and has some diff...