
  1. No Access

    Chapter and Conference Paper

    A Distributed kWTA for Decentralized Auctions

    A distributed k-Winner-Take-All (kWTA) is presented in this paper. Its state-space model is given by $\frac{d}{dt}x_i(t) = \ldots$

    Gary Sum, John Sum, Andrew Chi-Sing Leung in Neural Information Processing (2024)

  2. No Access

    Article

    Two noise tolerant incremental learning algorithms for single layer feed-forward neural networks

    This paper focuses on noise resistant incremental learning algorithms for single layer feed-forward neural networks (SLFNNs). In a physical implementation of a well trained neural network, faults or noise are ...

    Muideen Adegoke, Hiu Tung Wong in Journal of Ambient Intelligence and Humani… (2023)

  3. No Access

    Chapter and Conference Paper

    Effect of Logistic Activation Function and Multiplicative Input Noise on DNN-kWTA Model

    The dual neural network-based (DNN) k-winner-take-all (kWTA) model is one of the simplest analog neural network models for the kWTA process. This paper analyzes the behaviors of the DNN-kWTA model under these two...

    Wenhao Lu, Chi-Sing Leung, John Sum in Neural Information Processing (2023)
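
    As a rough illustration of the entry above, the sketch below simulates the usual single-state DNN-kWTA dynamics, $\frac{d}{dt}y \propto \sum_i g(u_i - y) - k$ with outputs $x_i = g(u_i - y)$, using a logistic $g$. The gain, step size, and inputs are illustrative assumptions, not values from the paper, and the noise analysis itself is not reproduced.

    ```python
    import numpy as np

    def dnn_kwta(u, k, gain=50.0, dt=0.02, steps=10000):
        """Euler integration of the single-state DNN-kWTA dynamics (illustrative parameters)."""
        y = 0.0
        for _ in range(steps):
            x = 1.0 / (1.0 + np.exp(-gain * (u - y)))  # logistic outputs x_i = g(u_i - y)
            y += dt * (x.sum() - k)                    # drive y until about k outputs are "on"
        return x

    u = np.array([0.3, 0.9, 0.1, 0.7, 0.5])
    print(np.round(dnn_kwta(u, k=2), 3))  # close to 1 for the two largest inputs, close to 0 otherwise
    ```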

  4. No Access

    Chapter and Conference Paper

    Analysis on the Boltzmann Machine with Random Input Drifts in Activation Function

    The Boltzmann machine (BM) model is able to learn the probability distribution of input patterns. However, in analog realization, there are thermal noise and random offset voltages of amplifiers. Those realiza...

    Wenhao Lu, Chi-Sing Leung, John Sum in Neural Information Processing (2020)

  5. No Access

    Chapter and Conference Paper

    Constrained Center Loss for Image Classification

    In feature representation learning, robust features are expected to have intra-class compactness and inter-class separability. The traditional softmax loss concept ignores the intra-class compactness. Hence th...

    Zhanglei Shi, Hao Wang, Chi-Sing Leung, John Sum in Neural Information Processing (2020)
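
    For context on the loss being modified above, here is a minimal sketch of the plain center loss term (Wen et al., 2016), which penalizes the distance between each feature vector and its class center. The constrained variant proposed in the paper is not reproduced, and all shapes and values below are illustrative.

    ```python
    import numpy as np

    def center_loss(features, labels, centers):
        """0.5 * mean squared distance between features and their class centers."""
        diffs = features - centers[labels]
        return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

    rng = np.random.default_rng(4)
    features = rng.standard_normal((16, 32))   # a batch of 16 feature vectors
    labels = rng.integers(0, 10, size=16)      # class labels
    centers = rng.standard_normal((10, 32))    # one learnable center per class
    print(center_loss(features, labels, centers))
    ```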

  6. No Access

    Chapter and Conference Paper

    Analysis on Dropout Regularization

    Dropout, including Bernoulli dropout (equivalently random node fault) and multiplicative Gaussian noise (MGN) dropout (equivalently multiplicative node noise), has been a technique in training a neural networ...

    John Sum, Chi-Sing Leung in Neural Information Processing (2019)
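
    The two dropout variants named above can be sketched as multiplicative masks on hidden activations. The variance convention for the Gaussian mask below is the common p/(1-p) choice and is an assumption, not something taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    h = rng.standard_normal((4, 8))   # a batch of hidden-layer activations

    # Bernoulli dropout: each node is kept with probability (1 - p),
    # i.e. a random node fault; inverted-dropout scaling keeps the mean.
    p = 0.3
    bernoulli_mask = rng.binomial(1, 1.0 - p, size=h.shape) / (1.0 - p)
    h_bernoulli = h * bernoulli_mask

    # Multiplicative Gaussian noise (MGN) dropout: each node is scaled by a
    # mean-one Gaussian factor, i.e. multiplicative node noise.
    sigma2 = p / (1.0 - p)            # assumed variance convention
    mgn_mask = 1.0 + np.sqrt(sigma2) * rng.standard_normal(h.shape)
    h_mgn = h * mgn_mask
    ```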

  7. No Access

    Chapter and Conference Paper

    Fault Tolerant Broad Learning System

    The broad learning system (BLS) approach provides low computational complexity solutions for training flat structure feedforward networks. However, many BLS algorithms deal with the faultless situation only. ...

    Muideen Adegoke, Chi-Sing Leung, John Sum in Neural Information Processing (2019)

  8. No Access

    Chapter and Conference Paper

    Explicit Center Selection and Training for Fault Tolerant RBF Networks

    Although some noise tolerant center selection training algorithms for RBF networks have been developed, they usually have some disadvantages. For example, some of them cannot select the RBF centers and train t...

    Hiu Tung Wong, Zhenni Wang, Chi-Sing Leung, John Sum in Neural Information Processing (2019)

  9. No Access

    Chapter and Conference Paper

    Fault-Resistant Algorithms for Single Layer Neural Networks

    Incremental extreme learning machine (IELM), convex incremental extreme learning machine (C-IELM) and other variants of extreme learning machine (ELM) algorithms provide low computational complexity techniques...

    Muideen Adegoke, Andrew Chi Sing Leung, John Sum in Neural Information Processing (2018)
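
    As background for the entry above, the sketch below follows the textbook I-ELM recipe: hidden nodes with random parameters are appended one at a time, and each new output weight is fitted to the current residual. The fault-resistant modifications studied in the paper are not included, and the toy data are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]   # toy regression target

    residual = y.copy()
    nodes = []                                 # (input weights, bias, output weight)
    for _ in range(50):
        a = rng.standard_normal(2)             # random input weights of the new node
        b = rng.standard_normal()              # random bias
        h = np.tanh(X @ a + b)                 # new hidden node's outputs
        beta = (residual @ h) / (h @ h)        # least-squares output weight for this node
        residual -= beta * h                   # update the residual error
        nodes.append((a, b, beta))

    print("training RMSE:", np.sqrt(np.mean(residual ** 2)))
    ```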

  10. No Access

    Chapter and Conference Paper

    A Robust LPNN Technique for Target Localization Under Hybrid TOA/AOA Measurements

    This paper presents an approach based on the Lagrange programming neural network (LPNN) framework for target localization in the presence of outliers. The problem is formulated as a minimization problem of the...

    Muideen Adegoke, Andrew Chi Sing Leung, John Sum in Neural Information Processing (2018)

  11. No Access

    Chapter and Conference Paper

    MCP Based Noise Resistant Algorithm for Training RBF Networks and Selecting Centers

    In the implementation of a neural network, imperfections such as precision error and thermal noise always exist. They can be modeled as multiplicative noise. This paper studies the problem of trainin...

    Hao Wang, Andrew Chi Sing Leung, John Sum in Neural Information Processing (2018)
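
    For reference, the minimax concave penalty (MCP) named above has the standard closed form sketched below (Zhang, 2010). How it is combined with the multiplicative-noise-aware training objective and the center-selection rule is specific to the paper and not shown; the parameter values are illustrative.

    ```python
    import numpy as np

    def mcp(w, lam=0.1, gamma=3.0):
        """MCP: lam*|w| - w^2/(2*gamma) for |w| <= gamma*lam, else gamma*lam^2/2."""
        w = np.abs(np.asarray(w, dtype=float))
        small = w <= gamma * lam
        return np.where(small, lam * w - w ** 2 / (2 * gamma), gamma * lam ** 2 / 2)

    print(mcp([0.0, 0.05, 0.2, 1.0]))   # penalty saturates once |w| exceeds gamma*lam
    ```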

  12. Article

    Editorial for Special Issue on ICONIP 2014

    John Sum, Andrew C. S. Leung in Neural Processing Letters (2017)

  13. No Access

    Chapter and Conference Paper

    Analysis of the DNN-kWTA Network Model with Drifts in the Offset Voltages of Threshold Logic Units

    The structure of the dual neural network-based (DNN) k-winner-take-all (kWTA) model is much simpler than that of other kWTA models. Its convergence time and capability under the perfect condition were reported. H...

    Ruibin Feng, Chi-Sing Leung, John Sum in Neural Information Processing (2016)

  14. No Access

    Chapter and Conference Paper

    Noise on Gradient Systems with Forgetting

    In this paper, we study the effect of noise on a gradient system with forgetting. The noise types considered include multiplicative noise, additive noise, and chaotic noise. For multiplicative or additive noise, the noise is a ...

    Chang Su, John Sum, Chi-Sing Leung, Kevin I.-J. Ho in Neural Information Processing (2015)
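
    A minimal sketch of a discretized gradient system with forgetting (a weight-decay term) for least squares, perturbed by multiplicative and additive weight noise. The decay rate, step size, and noise levels are illustrative assumptions, and chaotic noise is not modeled.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.standard_normal((100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)

    w = np.zeros(3)
    eta, lam = 0.01, 0.05                                  # step size and forgetting rate
    for _ in range(2000):
        grad = X.T @ (X @ w - y) / len(y) + lam * w        # gradient plus forgetting term
        w -= eta * grad
        w *= 1.0 + 0.01 * rng.standard_normal(3)           # multiplicative weight noise
        w += 0.001 * rng.standard_normal(3)                # additive weight noise
    print(w)                                               # hovers near the ridge solution
    ```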

  15. No Access

    Chapter and Conference Paper

    Non-Line-of-Sight Mitigation via Lagrange Programming Neural Networks in TOA-Based Localization

    A common measurement model for locating a mobile source is time-of-arrival (TOA). However, when non-line-of-sight (NLOS) bias error exists, the error can seriously degrade the estimation accuracy. This paper f...

    Zi-Fa Han, Chi-Sing Leung, Hing Cheung So, John Sum in Neural Information Processing (2015)

  16. No Access

    Article

    An Improved Fault-Tolerant Objective Function and Learning Algorithm for Training the Radial Basis Function Neural Network

    As the concept of artificial neural networks is based on the mechanism of the human brain, it is essential that a trained artificial neural network should exhibit a certain amount of fault-tolerant ability. In t...

    Ruibin Feng, Yi Xiao, Chi Sing Leung, Peter W. M. Tsang, John Sum in Cognitive Computation (2014)

  17. No Access

    Chapter and Conference Paper

    The Performance of the Stochastic DNN-kWTA Network

    Recently, the dual neural network (DNN) model has been used to synthesize the k-winners-take-all (kWTA) process. The advantage of this DNN-kWTA model is that its structure is very simple. It contains 2n + 1 conne...

    Ruibin Feng, Chi-Sing Leung, Kai-Tat Ng, John Sum in Neural Information Processing (2014)

  18. No Access

    Article

    Lagrange programming neural networks for time-of-arrival-based source localization

    Finding the location of a mobile source from a number of separated sensors is an important problem in global positioning systems and wireless sensor networks. This problem can be solved by making use of the...

    Chi Sing Leung, John Sum, Hing Cheung So in Neural Computing and Applications (2014)
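
    The underlying estimation problem above can be sketched as nonlinear least squares on the range residuals. The LPNN in the paper solves a constrained analog-dynamics formulation instead; the sensor layout and noise level below are made up, and a generic solver stands in for the network.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    source = np.array([3.0, 7.0])
    rng = np.random.default_rng(2)
    # TOA measurements give noisy ranges r_i = ||x - s_i|| + noise
    ranges = np.linalg.norm(sensors - source, axis=1) + 0.05 * rng.standard_normal(4)

    def residuals(x):
        return np.linalg.norm(sensors - x, axis=1) - ranges

    estimate = least_squares(residuals, x0=np.array([5.0, 5.0])).x
    print(estimate)   # close to (3, 7)
    ```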

  19. No Access

    Article

    HEALPIX DCT technique for compressing PCA-based illumination adjustable images

    An illumination adjustable image (IAI), containing a set of pre-captured reference images under various light directions, represents the appearance of a scene with adjustable illumination. One of the drawbacks of ...

    John Sum, Chi-Sing Leung, Ray C. C. Cheung, Tze-Yiu Ho in Neural Computing and Applications (2013)
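
    A minimal sketch of the PCA stage mentioned above: the reference images are stacked as columns and a truncated SVD keeps a few eigen-images plus per-light-direction coefficients. The HEALPix DCT coding of those coefficient maps, which is the paper's contribution, is not shown, and the sizes and data below are arbitrary stand-ins.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_pixels, n_lights, k = 64 * 64, 100, 8
    images = rng.random((n_pixels, n_lights))     # stand-in for the captured reference images

    mean = images.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(images - mean, full_matrices=False)
    eigen_images = U[:, :k]                       # k basis (eigen) images
    coeffs = S[:k, None] * Vt[:k, :]              # k coefficients per light direction

    reconstruction = mean + eigen_images @ coeffs
    print("relative error:", np.linalg.norm(images - reconstruction) / np.linalg.norm(images))
    ```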

  20. Article

    Guest editorial: special issue on the emerging applications of neural networks

    Tommy W. S. Chow, John Sum in Neural Computing and Applications (2011)
