-
Chapter and Conference Paper
Do Minimal Complexity Least Squares Support Vector Machines Work?
The minimal complexity support vector machine is a fusion of the support vector machine (SVM) and the minimal complexity machine (MCM), and results in maximizing the minimum margin and minimizing the maximum m...
-
Chapter and Conference Paper
Minimal Complexity Support Vector Machines
Minimal complexity machines (MCMs) minimize the VC (Vapnik-Chervonenkis) dimension to obtain high generalization abilities. However, because the regularization term is not included in the objective function, t...
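For readers unfamiliar with the MCM, the linear hard-margin formulation is commonly written as follows; this is a background sketch of the original problem and may differ in detail from the formulation used in this chapter:

$$\min_{\mathbf{w},\,b,\,h}\; h \quad \text{subject to} \quad h \;\ge\; y_i\,(\mathbf{w}^{\top}\mathbf{x}_i + b) \;\ge\; 1, \qquad i = 1,\dots,M,$$

where minimizing $h$ minimizes an upper bound on the VC dimension of the resulting hyperplane classifier.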
-
Chapter and Conference Paper
Effect of Equality Constraints to Unconstrained Large Margin Distribution Machines
Unconstrained large margin distribution machines (ULDMs) maximize the margin mean and minimize the margin variance without constraints. In this paper, we first reformulate ULDMs as a special case of least squa...
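As orientation for this abstract, a generic unconstrained margin-distribution objective (a hedged illustration, not necessarily the authors' exact formulation) with margins $\gamma_i = y_i(\mathbf{w}^{\top}\boldsymbol{\phi}(\mathbf{x}_i) + b)$ and margin mean $\bar{\gamma} = \tfrac{1}{M}\sum_i \gamma_i$ is

$$\min_{\mathbf{w},\,b}\;\; \frac{\lambda}{2}\,\lVert\mathbf{w}\rVert^{2} \;-\; \bar{\gamma} \;+\; \frac{\mu}{M}\sum_{i=1}^{M}\bigl(\gamma_i - \bar{\gamma}\bigr)^{2},$$

where the second term rewards a large margin mean and the third penalizes the margin variance.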
-
Article
Impact of routine recurrent laryngeal nerve monitoring in prone esophagectomy with mediastinal lymph node dissection
The problem of recurrent laryngeal nerve (RLN) paralysis (RLNP) after radical esophagectomy remains unresolved. Several studies have confirmed that intraoperative nerve monitoring (IONM) of the RLN during thyr...
-
Article
Fusing sequential minimal optimization and Newton’s method for support vector training
Sequential minimal optimization (SMO) is widely used for training support vector machines (SVMs) because of its fast training. However, training slows down when a large value of the margin parameter is used. Training by N...
-
Chapter and Conference Paper
Improving Generalization Abilities of Maximal Average Margin Classifiers
Maximal average margin classifiers (MAMCs) maximize the average margin without constraints. Although training is fast, the generalization abilities are usually inferior to those of support vector machines (SVMs). To im...
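The average-margin criterion referred to here can be sketched, in its simplest kernel form (a reconstruction for context, not necessarily the chapter's exact objective), as

$$\max_{\mathbf{w}}\;\frac{1}{M}\sum_{i=1}^{M} y_i\,\mathbf{w}^{\top}\boldsymbol{\phi}(\mathbf{x}_i) \quad \text{subject to} \quad \lVert\mathbf{w}\rVert = 1,$$

whose maximizer is proportional to $\sum_i y_i\,\boldsymbol{\phi}(\mathbf{x}_i)$. This closed-form solution is why training is fast, and also why, without constraints on individual margins, generalization can lag behind SVMs.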
-
Article
Comparison of short-term outcomes between prone and lateral decubitus positions for thoracoscopic esophagectomy
Prone thoracoscopic esophagectomy was introduced at our institution in 2012. This study describes our experience of the main differences between thoracoscopic esophagectomy in the prone and traditional lef...
-
Article (Open Access)
Murine double minute 2 predicts response of advanced esophageal squamous cell carcinoma to definitive chemoradiotherapy
Definitive chemoradiotherapy (dCRT) has recently become one of the most effective therapies for the treatment of esophageal squamous cell carcinoma (ESCC). However, it is also true that this treatment has not been ...
-
Article
Comments on: Support vector machines maximizing geometric margins for multi-class classification
-
Chapter and Conference Paper
Incremental Input Variable Selection by Block Addition and Block Deletion
In selecting input variables by block addition and block deletion (BABD), multiple input variables are added and then deleted, keeping the cross-validation error below that using all the input variables. The m...
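The block addition and block deletion idea summarized above can be illustrated roughly as follows; the sequential block order, block size, and the use of scikit-learn's SVC below are illustrative assumptions, not the authors' implementation.

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def cv_error(X, y, cols, cv=5):
    """Cross-validation error using only the feature columns in `cols`."""
    return 1.0 - cross_val_score(SVC(), X[:, cols], y, cv=cv).mean()

def babd_select(X, y, block_size=2, cv=5):
    """Rough sketch of block addition and block deletion (BABD).

    Blocks of variables are added until the cross-validation error is no
    worse than with all variables, then blocks are tentatively deleted as
    long as the error stays at or below that reference level.
    """
    n_vars = X.shape[1]
    ref_error = cv_error(X, y, list(range(n_vars)), cv)  # error with all variables

    # Block addition: add candidate variables a block at a time.
    selected, remaining = [], list(range(n_vars))
    while remaining and (not selected or cv_error(X, y, selected, cv) > ref_error):
        selected += remaining[:block_size]
        remaining = remaining[block_size:]

    # Block deletion: drop blocks whose removal does not raise the error.
    blocks = [selected[i:i + block_size] for i in range(0, len(selected), block_size)]
    for block in blocks:
        trial = [c for c in selected if c not in block]
        if trial and cv_error(X, y, trial, cv) <= ref_error:
            selected = trial
    return selected
```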
-
Chapter and Conference Paper
Incremental Feature Selection by Block Addition and Block Deletion Using Least Squares SVRs
For a small sample problem with a large number of features, feature selection by cross-validation frequently resorts to random tie-breaking because of the discrete recognition rate. This leads to inferior featu...
-
Article
Three cases of esophageal cancer with aberrant right subclavian artery treated by thoracoscopic esophagectomy
An aberrant right subclavian artery (ARSA) is an anatomical abnormality that occurs at a frequency of 0.4–2 %. It is important to be aware of this abnormality when performing radical esophagectomy for esophage...
-
Chapter and Conference Paper
Training Mahalanobis Kernels by Linear Programming
The covariance matrix in the Mahalanobis distance can be trained by semi-definite programming, but training is inefficient for large data sets. In this paper, we constrain the covariance matrix to be dia...
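For reference, a Mahalanobis kernel with a diagonal metric, of the kind this abstract restricts itself to, can be evaluated as below. The diagonal weights correspond to the quantities that would be trained (in the chapter, by linear programming); here they are simply given as inputs.

```python
import numpy as np

def diagonal_mahalanobis_kernel(X, Z, a):
    """K(x, z) = exp(-sum_k a_k * (x_k - z_k)^2) with a diagonal metric a.

    X: (n, d) array, Z: (m, d) array, a: (d,) array of non-negative weights.
    Returns the (n, m) kernel matrix.
    """
    diff = X[:, None, :] - Z[None, :, :]   # pairwise coordinate differences, (n, m, d)
    sq = (diff ** 2) * a                    # weight each coordinate by its metric entry
    return np.exp(-sq.sum(axis=-1))

# Example: weights emphasizing the first coordinate (illustrative values only)
X = np.random.randn(5, 3)
a = np.array([2.0, 0.5, 0.1])
K = diagonal_mahalanobis_kernel(X, X, a)
```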
-
Chapter and Conference Paper
Feature Selection by Block Addition and Block Deletion
In our previous work, we have developed methods for selecting input variables for function approximation based on block addition and block deletion. In this paper, we extend these methods to feature selection....
-
Chapter and Conference Paper
Fast Support Vector Training by Newton’s Method
We discuss a fast training method of support vector machines using Newton’s method combined with fixed-size chunking. To speed up training, we limit the number of upper or lower bounded variables in the workin...
-
Book
-
Chapter and Conference Paper
Evaluation of Feature Selection by Multiclass Kernel Discriminant Analysis
In this paper, we propose and evaluate a feature selection criterion based on kernel discriminant analysis (KDA) for multiclass problems, which finds one fewer eigenvectors than the number of classes. The selecti...
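The "number of classes minus one" follows from a standard fact about discriminant analysis, sketched here as background: for $c$ classes, (kernel) discriminant analysis solves a generalized eigenproblem of the form

$$S_B\,\mathbf{z} \;=\; \lambda\,S_W\,\mathbf{z},$$

where $S_B$ and $S_W$ are the between-class and within-class scatter matrices in the (kernel-induced) feature space; since $\operatorname{rank}(S_B) \le c - 1$, at most $c - 1$ eigenvectors with nonzero eigenvalues exist.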
-
Chapter and Conference Paper
Feature Extraction Using Support Vector Machines
We discuss feature extraction by support vector machines (SVMs). Because the coefficient vector of the hyperplane is orthogonal to the hyperplane, the vector works as a projection vector. To obtain more projec...
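One plausible reading of the idea sketched in this abstract is to take the hyperplane normal of a trained linear SVM as a projection vector, deflate the data, and repeat to obtain further directions. The deflation step and the use of scikit-learn's LinearSVC below are illustrative assumptions for a binary problem, not the chapter's procedure.

```python
import numpy as np
from sklearn.svm import LinearSVC

def svm_projection_directions(X, y, n_dirs=2, C=1.0):
    """Use linear-SVM hyperplane normals as projection vectors (two-class case).

    Trains a linear SVM, takes the normalized weight vector as a projection
    direction, removes that direction from the data, and repeats.
    """
    X = np.asarray(X, dtype=float)
    X_work = X.copy()
    directions = []
    for _ in range(n_dirs):
        clf = LinearSVC(C=C, dual=False).fit(X_work, y)
        w = clf.coef_.ravel()
        w /= np.linalg.norm(w)
        directions.append(w)
        # Deflate: remove the component along w before finding the next direction.
        X_work = X_work - np.outer(X_work @ w, w)
    W = np.vstack(directions)        # (n_dirs, n_features)
    return W, X @ W.T                # projection vectors and projected features
```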
-
Chapter and Conference Paper
Convergence Improvement of Active Set Training for Support Vector Regressors
In our previous work, we discussed training a support vector regressor (SVR) by active set training based on Newton's method. In this paper, we discuss convergence improvement by modifying th...
-
Chapter and Conference Paper
A Fast Incremental Kernel Principal Component Analysis for Online Feature Extraction
In this paper, we present a modified version of Incremental Kernel Principal Component Analysis (IKPCA), which was originally proposed by Takeuchi et al. as an online nonlinear feature extraction method. The pr...