-
Chapter and Conference Paper
A Distributed kWTA for Decentralized Auctions
A distributed k-Winner-Take-All (kWTA) model is presented in this paper. Its state-space model is given by $\frac{d}{dt}x_i(t) = \ldots$
-
Article
Two noise tolerant incremental learning algorithms for single layer feed-forward neural networks
This paper focuses on noise-resistant incremental learning algorithms for single layer feed-forward neural networks (SLFNNs). In a physical implementation of a well-trained neural network, faults or noise are ...
-
Chapter and Conference Paper
Effect of Logistic Activation Function and Multiplicative Input Noise on DNN-kWTA Model
The dual neural network-based (DNN) k-winner-take-all (kWTA) model is one of the simplest analog neural network models for the kWTA process. This paper analyzes the behaviors of the DNN-kWTA model under these two...
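As background for the entries above: in the classic DNN-kWTA formulation, a single global state $y(t)$ evolves as $\epsilon\,\dot{y} = \sum_i g(u_i - y) - k$ with a threshold activation $g$, so that at equilibrium exactly $k$ inputs exceed $y$. The following is a minimal forward-Euler sketch of that standard model (not the specific noisy or logistic-activation variants analyzed in these papers); the function name and parameter values are illustrative.

```python
# Forward-Euler simulation of the standard DNN-kWTA dynamics:
#   eps * dy/dt = sum_i step(u_i - y) - k
# At equilibrium, exactly k inputs satisfy u_i > y (for distinct inputs).
# Illustrative sketch only; not the paper's noisy/logistic variant.

def dnn_kwta(u, k, eps=0.1, dt=0.001, steps=20000):
    y = 0.0
    for _ in range(steps):
        winners = sum(1 for ui in u if ui > y)  # threshold outputs
        y += (dt / eps) * (winners - k)         # Euler step on the state
    return [i for i, ui in enumerate(u) if ui > y]

print(dnn_kwta([0.3, 0.9, 0.1, 0.7], k=2))  # prints [1, 3]
```

The state `y` rises while more than `k` inputs are above it and settles between the k-th and (k+1)-th largest inputs, which is why the model needs only one state neuron regardless of `n`.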
-
Chapter and Conference Paper
Analysis on the Boltzmann Machine with Random Input Drifts in Activation Function
The Boltzmann machine (BM) model is able to learn the probability distribution of input patterns. However, in analog realization, thermal noise and random offset voltages of amplifiers are present. Those realiza...
-
Chapter and Conference Paper
Constrained Center Loss for Image Classification
In feature representation learning, robust features are expected to have intra-class compactness and inter-class separability. The traditional softmax loss ignores intra-class compactness. Hence th...
-
Chapter and Conference Paper
Analysis on Dropout Regularization
Dropout, including Bernoulli dropout (equivalently random node fault) and multiplicative Gaussian noise (MGN) dropout (equivalently multiplicative node noise), has been a technique in training a neural networ...
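The two dropout variants named in this entry are both standard techniques and can be sketched in a few lines: Bernoulli dropout multiplies each activation by a 0/1 mask (with inverted-dropout rescaling by 1/(1-p)), while multiplicative Gaussian noise (MGN) dropout multiplies by noise with mean 1 and variance p/(1-p). Both preserve the expected activation. The function names below are illustrative, not from the paper.

```python
import random

def bernoulli_dropout(h, p=0.5):
    # Inverted Bernoulli dropout: zero each unit with probability p,
    # scale survivors by 1/(1-p) so the expected output equals h.
    return [0.0 if random.random() < p else x / (1.0 - p) for x in h]

def mgn_dropout(h, p=0.5):
    # Multiplicative Gaussian noise dropout: multiply each unit by
    # N(1, p/(1-p)) noise; the expected output again equals h.
    sigma = (p / (1.0 - p)) ** 0.5
    return [x * random.gauss(1.0, sigma) for x in h]
```

Matching the noise variance p/(1-p) to the dropout rate p is the usual way to make the two variants comparable, which is also why Bernoulli dropout can be analyzed as "random node fault" and MGN dropout as "multiplicative node noise".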
-
Chapter and Conference Paper
Fault Tolerant Broad Learning System
The broad learning system (BLS) approach provides low computational complexity solutions for training flat structure feedforward networks. However, many BLS algorithms deal with the faultless situation only. ...
-
Chapter and Conference Paper
Explicit Center Selection and Training for Fault Tolerant RBF Networks
Although some noise tolerant center selection training algorithms for RBF networks have been developed, they usually have some disadvantages. For example, some of them cannot select the RBF centers and train t...
-
Chapter and Conference Paper
Fault-Resistant Algorithms for Single Layer Neural Networks
Incremental extreme learning machine (IELM), convex incremental extreme learning machine (C-IELM) and other variants of extreme learning machine (ELM) algorithms provide low computational complexity techniques...
-
Chapter and Conference Paper
A Robust LPNN Technique for Target Localization Under Hybrid TOA/AOA Measurements
This paper presents an approach based on the Lagrange programming neural network (LPNN) framework for target localization under the outlier situation. The problem is formulated as a minimization problem of the...
-
Chapter and Conference Paper
MCP Based Noise Resistant Algorithm for Training RBF Networks and Selecting Centers
In the implementation of a neural network, some imperfect issues, such as precision error and thermal noise, always exist. They can be modeled as multiplicative noise. This paper studies the problem of trainin...
-
Article
Editorial for Special Issue on ICONIP 2014
-
Chapter and Conference Paper
Analysis of the DNN-kWTA Network Model with Drifts in the Offset Voltages of Threshold Logic Units
The structure of the dual neural network-based (DNN) k-winner-take-all (kWTA) model is much simpler than that of other kWTA models. Its convergence time and capability under ideal conditions have been reported. H...
-
Chapter and Conference Paper
Noise on Gradient Systems with Forgetting
In this paper, we study the effect of noise on a gradient system with forgetting. The noise types considered are multiplicative noise, additive noise, and chaotic noise. For multiplicative or additive noise, the noise is a ...
-
Chapter and Conference Paper
Non-Line-of-Sight Mitigation via Lagrange Programming Neural Networks in TOA-Based Localization
A common measurement model for locating a mobile source is time-of-arrival (TOA). However, when non-line-of-sight (NLOS) bias error exists, the error can seriously degrade the estimation accuracy. This paper f...
-
Article
An Improved Fault-Tolerant Objective Function and Learning Algorithm for Training the Radial Basis Function Neural Network
As the concept of artificial neural networks is based on the mechanism of the human brain, it is essential that a trained artificial neural network exhibit a certain degree of fault tolerance. In t...
-
Chapter and Conference Paper
The Performance of the Stochastic DNN-kWTA Network
Recently, the dual neural network (DNN) model has been used to synthesize the k-winners-take-all (kWTA) process. The advantage of this DNN-kWTA model is that its structure is very simple. It contains 2n + 1 conne...
-
Article
Lagrange programming neural networks for time-of-arrival-based source localization
Finding the location of a mobile source from a number of separated sensors is an important problem in global positioning systems and wireless sensor networks. This task can be accomplished by making use of the...
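The underlying TOA problem in this entry is nonlinear least squares: given sensor positions and measured ranges, minimize the sum of squared range residuals. The sketch below solves a noiseless toy instance by plain gradient descent; it only illustrates the problem formulation, not the paper's LPNN method, and the sensor and source coordinates are made-up example values.

```python
import math

# TOA localization as nonlinear least squares:
#   minimize f(x, y) = sum_i (||(x, y) - s_i|| - d_i)^2
# solved here by plain gradient descent (illustration only; the paper
# uses a Lagrange programming neural network instead).

def toa_localize(sensors, d, x0, lr=0.02, iters=5000):
    x, y = x0
    for _ in range(iters):
        gx = gy = 0.0
        for (sx, sy), di in zip(sensors, d):
            ri = math.hypot(x - sx, y - sy)       # predicted range
            gx += 2.0 * (ri - di) * (x - sx) / ri  # d f / d x
            gy += 2.0 * (ri - di) * (y - sy) / ri  # d f / d y
        x -= lr * gx
        y -= lr * gy
    return x, y

# Example values (assumed, not from the paper): four corner sensors,
# a source at (3, 4), and exact (noise-free) range measurements.
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
source = (3.0, 4.0)
d = [math.hypot(source[0] - sx, source[1] - sy) for sx, sy in sensors]
est = toa_localize(sensors, d, x0=(5.0, 5.0))
```

With noise-free ranges the residuals vanish at the true source, so the estimate converges to (3, 4); the LPNN and NLOS-mitigation papers listed here address what happens when the measurements contain outliers or bias errors.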
-
Article
HEALPIX DCT technique for compressing PCA-based illumination adjustable images
An illumination adjustable image (IAI), containing a set of pre-captured reference images under various light directions, represents the appearance of a scene with adjustable illumination. One of the drawbacks of ...
-
Article
Guest editorial: special issue on the emerging applications of neural networks