
  1. No Access

    Chapter and Conference Paper

    Do Minimal Complexity Least Squares Support Vector Machines Work?

    The minimal complexity support vector machine is a fusion of the support vector machine (SVM) and the minimal complexity machine (MCM), and results in maximizing the minimum margin and minimizing the maximum m...

    Shigeo Abe in Artificial Neural Networks in Pattern Recognition (2023)

  2. No Access

    Chapter and Conference Paper

    Minimal Complexity Support Vector Machines

    Minimal complexity machines (MCMs) minimize the VC (Vapnik-Chervonenkis) dimension to obtain high generalization abilities. However, because the regularization term is not included in the objective function, t...

    Shigeo Abe in Artificial Neural Networks in Pattern Recognition (2020)

  3. Chapter and Conference Paper

    Effect of Equality Constraints to Unconstrained Large Margin Distribution Machines

    Unconstrained large margin distribution machines (ULDMs) maximize the margin mean and minimize the margin variance without constraints. In this paper, we first reformulate ULDMs as a special case of least squa...

    Shigeo Abe in Artificial Neural Networks in Pattern Recognition (2018)

  4. No Access

    Article

    Impact of routine recurrent laryngeal nerve monitoring in prone esophagectomy with mediastinal lymph node dissection

    The problem of recurrent laryngeal nerve (RLN) paralysis (RLNP) after radical esophagectomy remains unresolved. Several studies have confirmed that intraoperative nerve monitoring (IONM) of the RLN during thyr...

    Makoto Hikage, Takashi Kamei, Toru Nakano, Shigeo Abe in Surgical Endoscopy (2017)

  5. No Access

    Article

    Fusing sequential minimal optimization and Newton’s method for support vector training

    Sequential minimal optimization (SMO) is widely used for training support vector machines (SVMs) because of its fast training. But the training slows down when a large margin parameter value is used. Training by N...

    Shigeo Abe in International Journal of Machine Learning and Cybernetics (2016)

  6. Chapter and Conference Paper

    Improving Generalization Abilities of Maximal Average Margin Classifiers

    Maximal average margin classifiers (MAMCs) maximize the average margin without constraints. Although training is fast, the generalization abilities are usually inferior to support vector machines (SVMs). To im...

    Shigeo Abe in Artificial Neural Networks in Pattern Recognition (2016)

  7. No Access

    Article

    Comparison of short-term outcomes between prone and lateral decubitus positions for thoracoscopic esophagectomy

    Prone thoracoscopic esophagectomy was introduced at our institution from 2012. This study describes our experiences of the main differences between thoracoscopic esophagectomy in the prone and traditional lef...

    ** Teshima, Go Miyata, Takashi Kamei, Toru Nakano, Shigeo Abe in Surgical Endoscopy (2015)

  8. Article

    Open Access

    Murine double minute 2 predicts response of advanced esophageal squamous cell carcinoma to definitive chemoradiotherapy

    Definitive chemoradiotherapy (dCRT) has recently become one of the most effective therapies for the treatment of esophageal squamous cell carcinoma (ESCC). However, it is also true this treatment has not been ...

    Hiroshi Okamoto, Fumiyoshi Fujishima, Takashi Kamei, Yasuhiro Nakamura in BMC Cancer (2015)

  9. Article

    Comments on: Support vector machines maximizing geometric margins for multi-class classification

    Shigeo Abe in TOP (2014)

  10. No Access

    Chapter and Conference Paper

    Incremental Input Variable Selection by Block Addition and Block Deletion

    In selecting input variables by block addition and block deletion (BABD), multiple input variables are added and then deleted, keeping the cross-validation error below that using all the input variables. The m...

    Shigeo Abe in Artificial Neural Networks and Machine Learning – ICANN 2014 (2014)

  11. Chapter and Conference Paper

    Incremental Feature Selection by Block Addition and Block Deletion Using Least Squares SVRs

    For a small sample problem with a large number of features, feature selection by cross-validation frequently goes into random tie breaking because of the discrete recognition rate. This leads to inferior featu...

    Shigeo Abe in Artificial Neural Networks in Pattern Recognition (2014)

  12. No Access

    Article

    Three cases of esophageal cancer with aberrant right subclavian artery treated by thoracoscopic esophagectomy

    An aberrant right subclavian artery (ARSA) is an anatomical abnormality that occurs at a frequency of 0.4–2 %. It is important to be aware of this abnormality when performing radical esophagectomy for esophage...

    ** Teshima, Go Miyata, Takashi Kamei, Toru Nakano, Shigeo Abe in Esophagus (2013)

  13. No Access

    Chapter and Conference Paper

    Training Mahalanobis Kernels by Linear Programming

    The covariance matrix in the Mahalanobis distance can be trained by semi-definite programming, but training for a large size data set is inefficient. In this paper, we constrain the covariance matrix to be dia...

    Shigeo Abe in Artificial Neural Networks and Machine Learning – ICANN 2012 (2012)

  14. Chapter and Conference Paper

    Feature Selection by Block Addition and Block Deletion

    In our previous work, we have developed methods for selecting input variables for function approximation based on block addition and block deletion. In this paper, we extend these methods to feature selection....

    Takashi Nagatani, Shigeo Abe in Artificial Neural Networks in Pattern Recognition (2012)

  15. No Access

    Chapter and Conference Paper

    Fast Support Vector Training by Newton’s Method

    We discuss a fast training method of support vector machines using Newton’s method combined with fixed-size chunking. To speed up training, we limit the number of upper or lower bounded variables in the workin...

    Shigeo Abe in Artificial Neural Networks and Machine Learning – ICANN 2011 (2011)

  16. No Access

    Book

  17. Chapter and Conference Paper

    Evaluation of Feature Selection by Multiclass Kernel Discriminant Analysis

    In this paper, we propose and evaluate the feature selection criterion based on kernel discriminant analysis (KDA) for multiclass problems, which finds the number of classes minus one eigenvectors. The selecti...

    Tsuneyoshi Ishii, Shigeo Abe in Artificial Neural Networks in Pattern Recognition (2010)

  18. No Access

    Chapter and Conference Paper

    Feature Extraction Using Support Vector Machines

    We discuss feature extraction by support vector machines (SVMs). Because the coefficient vector of the hyperplane is orthogonal to the hyperplane, the vector works as a projection vector. To obtain more projec...

    Yasuyuki Tajiri, Ryosuke Yabuwaki in Neural Information Processing. Models and … (2010)

  19. No Access

    Chapter and Conference Paper

    Convergence Improvement of Active Set Training for Support Vector Regressors

    In our previous work we have discussed the training method of a support vector regressor (SVR) by active set training based on Newton’s method. In this paper, we discuss convergence improvement by modifying th...

    Shigeo Abe, Ryousuke Yabuwaki in Artificial Neural Networks – ICANN 2010 (2010)

  20. No Access

    Chapter and Conference Paper

    A Fast Incremental Kernel Principal Component Analysis for Online Feature Extraction

    In this paper, we present a modified version of Incremental Kernel Principal Component Analysis (IKPCA) which was originally proposed by Takeuchi et al. as an online nonlinear feature extraction method. The pr...

    Seiichi Ozawa, Yohei Takeuchi, Shigeo Abe in PRICAI 2010: Trends in Artificial Intelligence (2010)
