
To: IEEE copyrights@ieee.org
IEEE SMC Exco Members
Hi,
We feel that this case should be investigated by the IEEE SMC Society because the SMC Society is a technical co-sponsor of ICARCV 2014 (possibly a co-sponsor of ICARCV 2004 as well) and the SMC Society published a journal article in the IEEE T. on SMC-Part B in 2012. We believe that IEEE must also consider non-IEEE publications in this investigation, as non-IEEE publications are cited in IEEE publications to reinforce misleading materials published in non-IEEE sources.
We are requesting the IEEE to obtain views from the following researchers whose works have been abused by the inventor of the ELM (Dr Guangbin Huang):
Dr Dave Lowe d.lowe@aston.ac.uk
Dr Dave Broomhead david.broomhead@manchester.ac.uk
Dr CLP Chen philipchen@umac.mo
Dr J Suykens Johan.Suykens@esat.kuleuven.be
Dr R Duin r.duin@ieee.org
In addition, Profs L Wang (ELPWANG@NTU.EDU.SG) and DH Wang (dh.wang@latrobe.edu.au) are also very familiar with the depth of the ethical violations committed by Dr GB Huang.
We firmly believe that strong evidence is provided in this email to take appropriate actions against this systematic violation of publishing ethics. Further, this document must be made available to the IEEE Fellows committees, in case Dr GB Huang is nominated for elevation in March 2015.
If you require further clarifications, please feel free to contact us.
Thank you, and we look forward to hearing from you in the near future.
Dave Chen Pao

A Case Against the Creation & Creator of ELM: The ELM Scandal
ELM was created in 2004. It has certainly been the most controversial name in the field of artificial intelligence. This document presents numerous reasons for taking appropriate actions against ELM and its creator in order to ensure that we will not face a similar situation in the future.
1. The ELM-RBF Case
In order to get the first ELM-RBF paper accepted, the first author (and creator, Guang-Bin Huang) knowingly excluded closely related publications. A closely related publication [6] (which was known to Huang, but excluded from the reference lists of [11,12]) states in its abstract that random selection of RBF centers from the training data performs poorly compared to other intelligent selection methods. In [11,12], Huang proposed selecting RBF centers and widths totally randomly from the feature space, independent of the training data. It is certain that proper citation of the relevant references and/or proper experimental comparisons would have resulted in the rejection of the paper for two important reasons: (1) the primary concept of completely randomly selecting center points (or selecting them from the training data) had been presented by the inventors of the RBF network in 1988 [3], on page 325 in footnote 1 (Footnote 1 from [3]: “In particular, we do not necessarily require that the radial basis function centers correspond to any of the data points.”), together with a closed-form pseudo-inverse solution; and (2) this randomization of the RBF center points is known to generalize poorly [6], as the center points may not be located according to the density and class labels of the training data. Hence, it is apparent that in order to publish the first two ELM-RBF publications, the first author knowingly excluded the almost identical concepts in [6] and excluded the known results stating that the randomization concept would perform worse, as stated in the very abstract of [6] (please also note that this article [6] is cited in Huang et al. [13] as reference number [4]):
“A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular value decomposition to solve for the weights of the network. Such a procedure has several drawbacks and, in particular, an arbitrary selection of centers is clearly unsatisfactory.”
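The construction at issue is small enough to state in code. Below is a minimal sketch (the toy data, Gaussian basis, and all parameter ranges are our own illustrative assumptions) of the random-center RBF discussed above: centers and widths are drawn at random from the feature space, and the output weights are then obtained in closed form via the Moore-Penrose pseudo-inverse, the solution procedure already given by Broomhead & Lowe [3].

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative only): noisy samples of sin(x).
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)

# Draw RBF centers and widths at random from the feature space,
# independent of the training data -- the randomization discussed above.
n_hidden = 30
centers = rng.uniform(-3.0, 3.0, size=(n_hidden, 1))
widths = rng.uniform(0.5, 2.0, size=n_hidden)

# Gaussian hidden-layer activations, shape (n_samples, n_hidden).
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
H = np.exp(-d2 / (2.0 * widths ** 2))

# Closed-form output weights via the Moore-Penrose pseudo-inverse.
beta = np.linalg.pinv(H) @ y
y_hat = H @ beta
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```

Whether such randomly placed centers generalize well is exactly the question that [6] answered negatively; the sketch only shows that the mechanism itself predates the ELM-RBF papers.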
It is also well known that the first author had published widely in the field of RBF, with several journal publications during 2003-2007 (as listed in the Appendix on page 8), in the same period in which his two ELM-RBF papers were published in 2004-2005. Hence, the first author cannot claim that he had never seen the original paper by Broomhead et al. [3] while preparing these two papers [11,12]. Even if the first author makes this claim, the article [6], which is listed as reference #4 in his publication [13], restates the same facts in its abstract.
These two papers were published in a conference and a poor-quality, out-of-the-field journal, which may use general reviewers who are not specialized in this domain. Hence, the author selected these venues to publish these unethical works by knowingly suppressing the almost identical works of superior quality. It is apparent that Huang’s ELM-RBF papers would certainly have been rejected if all the relevant facts had been presented in his first two publications [11,12].
Based on all the evidence above, stern action is needed in this case.
2. The ELM-Sigmoidal Case
When authors publish articles genuinely unaware of closely related work, they discontinue such activities once the facts are made known to them. In the domain of ELM-Sigmoidal, articles such as [9,30,32] are examples of such cases, in which the authors stopped pursuing their work in this domain. However, Huang has been unethically promoting ELM after initially publishing it through the fast review process of a conference without citing the relevant literature (either knowingly or unknowingly).
The ELM [14] differs from the RVFL [4,5,18,21,23] only by not having direct links from the input to the output. The ELM [14] differs from the RNN [27] only by not having a bias at the output node. An inspection of the references in [14] reveals the absence of all these very closely related publications [4,5,18,21,23,27]. Reviewers would certainly not have accepted ELM [14] if the relevant articles published 5 years [5], 8 years [4], 10 years [23], 12 years [27] and 16 years [3] earlier had been cited, the exact differences pointed out, and experimental comparisons presented. We must take into consideration that terminologies such as hidden neurons, weights, biases, connections, activations and so on had already been defined in the mid-1980s [26]. Therefore, we are certain that if all relevant previous articles had been cited and experimentally compared correctly in 2004, this article would not have been accepted. Among these closely related works, [4,5,18,23] had been published in top journals and were well captured by numerous databases by 2004 (when the first ELM papers were published). In this internet era, we must not tolerate independent invention of previously published concepts that are well captured in several publications and databases. Further, we can safely assume that the RNN [27] and the RVFL [4,5,18,21,23] are likely to perform either better than or as well as ELM, because ELM is a simplified version of these much older methods. Hence, based on all the justification presented in this section, appropriate action should be taken on the IJCNN 2004 publication by editing the PDF of [14] to highlight that ELM is a variant of the RNN and the RVFL.
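The structural difference at issue can be made concrete in a few lines. The sketch below (hypothetical toy data; the tanh activation and weight ranges are our own assumptions) builds a random hidden layer and solves for the output weights by least squares; the only change between the ELM-style and RVFL-style variants is whether the direct input-to-output links are appended to the hidden-layer matrix before the same solve.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data (illustrative only).
X = rng.normal(size=(300, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

# A random hidden layer, common to both variants.
n_hidden = 50
W = rng.uniform(-1.0, 1.0, size=(2, n_hidden))  # random input weights
b = rng.uniform(-1.0, 1.0, size=n_hidden)       # random hidden biases
H = np.tanh(X @ W + b)

# ELM-style: least-squares output weights computed on H alone.
beta_elm = np.linalg.pinv(H) @ y

# RVFL-style: identical, except the direct input-to-output links
# are appended to the hidden-layer matrix before the same solve.
H_rvfl = np.hstack([H, X])
beta_rvfl = np.linalg.pinv(H_rvfl) @ y
```

The extra direct links give the RVFL strictly more flexibility than the stripped-down variant, which is the basis of the performance assumption stated above.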
3. The ELM-Kernel Case (which is LS-SVM with b = 0)
The naming of algorithms is also an ethical issue. For instance, by converting the inequality condition in the SVM [7] to an equality condition, a least-squares formulation was obtained. The variant was appropriately named LS-SVM [28] to acknowledge the inventors of the SVM. If completely new names are given to minor variants with substantial conceptual similarities to existing works, the inventors of the existing works will not be acknowledged in the future. Avoiding confusion is another reason for naming algorithms so as to show their connection to past work. If all variants of the SVM had been named without showing any relationship to the SVM, there would have been thousands of unrelated names for variants of the SVM; it is easy to imagine the consequences. Fortunately, researchers name their variants giving due credit to the original inventors. The ELM-Kernel version is an exception to this ethical tradition adopted by all researchers. Hence, the PDF of the IEEE SMC-B 2012 article [15] should be edited to show the new name as “LS-SVM without Bias” in order to correct the ethical violation committed by the ELM-Kernel inventor, GB Huang.
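To make the b = 0 claim concrete, here is a minimal sketch (hypothetical data; the RBF kernel and the value of C are our own illustrative choices) of the LS-SVM dual solution with the bias term dropped, which reduces to the regularized linear system (K + I/C) α = y of the kind solved by the kernel version of ELM [15].

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary labels in {-1, +1} (illustrative only).
X = rng.normal(size=(100, 2))
y = np.sign(X[:, 0] + X[:, 1])

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix; gamma is an illustrative choice."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

C = 10.0  # regularization constant (illustrative choice)
K = rbf_kernel(X, X)

# LS-SVM dual system with the bias b dropped: (K + I/C) alpha = y.
alpha = np.linalg.solve(K + np.eye(len(X)) / C, y)

# Decision values on the training points: f(x) = sum_i alpha_i k(x_i, x).
f = K @ alpha
train_acc = np.mean(np.sign(f) == y)
```

With the bias retained, the LS-SVM dual instead has one extra equality constraint; dropping b removes that single row and column, which is the entirety of the difference being discussed.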
4. Continued Exclusion of Closely Related Works
Even though the comment published by Wang & Wan [29] highlighted the exclusion of the closely related works of Broomhead et al. [3] and Chen et al. [4,5], Huang has excluded them in his recent overview publication [17] as well. The reason is very simple: these excluded works are almost identical to ELM (RBF & Sigmoidal). We are also reasonably certain that the performances of these excluded works are likely to be either similar or superior to the ELM versions, as the minor variations introduced by Huang simplified them by reducing their degrees of freedom in terms of parameter-tuning flexibility.
5. Making Incorrect Statements
The inventor of ELM frequently makes incorrect statements. In his reply [16], he claims the following:
“Lowe’s work will not have the universal approximation capability if it moves one more step towards ELM direction. The same impact factor b is selected heuristically for all the hidden nodes in Lowe’s network”
The above is incorrect according to the proof in [25] (assuming that this proof has not been discredited yet). The same or different impact factors can be used, as stated on pages 248 and 255 of [25]. The inventor must be asked to present solid evidence for making such a wrong claim in [16].
(Please insert Fig. 4 from [17] here)
Fig. 4 in [17] (Cognitive Computation, 2014).
This figure wrongly identifies the RVFL [23] as an iterative method, even after this was correctly pointed out in [29] on page 1494, right column.
This figure wrongly identifies reference [31] as Quicknet with a closed-form method.
This figure excludes the works of Broomhead et al. [3] and Chen et al. [4,5], which are almost identical to ELM-RBF and ELM-Sigmoidal, respectively.
The other cases are found in Fig. 4 (which appears above) of his recent publication [17]. The author claims the RVFL is an iterative method. This is wrong: Pao [23] and Chen [4,5] presented closed-form solutions too. Wang & Wan [29] informed the ELM inventor of these details in 2008. However, the ELM inventor chose to ignore these facts and instead to convey a wrong message to the followers of ELM. On page 383, left column, of [17], Huang states: “RVFL uses the conventional gradient descent method which ELM tries to avoid.” On the other hand, [23] states on page 167: “This means that the unique minimum can be found in no more than N + d iterations of a learning procedure such as the conjugate gradient (CG) approach [11, 12], if the explicit matrix inversion needs to be avoided. If matrix inversion with use of a pseudoinverse is feasible, then a single step learning would suffice.”
The reason for giving preference to an iterative method in [23] was that in the early 1990s, iterative solutions were preferred over matrix inversion, as the Intel 287 and 387 (math coprocessors of that era) would struggle to perform matrix inversions beyond 10×10.
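The equivalence of the two routes described in [23] is easy to verify. The sketch below (random data as a stand-in for a hidden-layer design matrix; the dimensions are our own assumptions) computes the least-squares output weights both by a single-step pseudo-inverse and by conjugate-gradient iterations on the normal equations, and the two agree.

```python
import numpy as np

rng = np.random.default_rng(3)

# Random stand-in for a hidden-layer design matrix (sizes illustrative).
H = rng.normal(size=(120, 40))
y = rng.normal(size=120)

# Route 1: direct single-step solution via the pseudo-inverse.
beta_direct = np.linalg.pinv(H) @ y

# Route 2: conjugate-gradient iterations on the normal equations
# H^T H beta = H^T y, the option [23] recommends when explicit
# matrix inversion is to be avoided.
A, b = H.T @ H, H.T @ y
beta = np.zeros(40)
r = b - A @ beta
p = r.copy()
for _ in range(200):
    Ap = A @ p
    step = (r @ r) / (p @ Ap)
    beta = beta + step * p
    r_new = r - step * Ap
    if np.linalg.norm(r_new) < 1e-10:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
```

Both routes recover the same least-squares weights; the choice between them was a hardware constraint of the early 1990s, not a conceptual difference.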
Further, ELM’s inventor identifies reference [31] as Quicknet (1989) with a closed-form solution. This is wrong: the Quicknet name with a closed-form solution was first published in 2006 [32].
Identifying the RVFL [23] as an iterative method in [17] (while excluding the almost identical works of Chen et al. on the RVFL [4,5]), even after Wang & Wan [29] had correctly identified the RVFL [23] as a closed-form method, shows clearly that Huang is intentionally and knowingly making wrong statements to mislead researchers. Hence, appropriate actions must be taken to stop these activities.
6. Making Experimentally Unsubstantiated Claims
The inventor of ELM frequently makes claims without solid experimental results. For example, the texts below are copied from [17] to demonstrate the unscientific publishing activities of the ELM inventor.
Page 385: “If same kernels are used in ELM and SVM/LSSVM, SVM and LSSVM naturally lead to suboptimal solutions.”
Page 385: “This dilemma may have existed to other random methods with biases in the output nodes [40] if the structure risks were considered in order to improve the generalization performance. In this case, Schmidt et al. [40] would provide suboptimal solutions too” (Ref [40] in [17] is the same as ref [27] in this document)
Huang, the author of [17], must be asked to show extensive simulation results to support these claims. Proofs can sometimes be wrong or insignificant due to their assumptions, etc.; hence, experimental results are more important. A recent extensive comparison [8] using 120 datasets and 179 classifiers does not support the above claims by the inventor of ELM.
7. Reinventing the Wheel: Now and Then
Reinventing the wheel has taken place on numerous occasions in the past, and the credit for the invention was shared among different inventors under one condition: the earlier inventors had not published their inventions in widely circulated public-domain materials. There are several such examples from the previous century, when the internet and digital databases were not available at all. Artificial neural network researchers are aware of the reinvention of the backpropagation learning method in the early 1980s by the PDP research group [26], whereas very similar work had been done by Paul Werbos in the early 1970s. Another example is the Kuhn-Tucker (KT) condition becoming the Karush-Kuhn-Tucker (KKT) condition [20] when the same unpublished, half-a-century-older invention [19] was located. An important issue to consider is whether we can approve independent invention of the same old concepts in this century, when the old concepts were published in top journals and very well captured by numerous databases. The obvious answer is no. If we permit independent invention of old concepts (which are widely available in public-domain databases), junior researchers would feel that it is a good idea NOT to conduct a thorough literature search by all means available, so that they can declare themselves independent inventors of existing concepts which are well captured by several databases. ELM certainly falls into this category, as the first ELM papers were published in 2004 [11,12,14] even though all these highly similar papers [4,5,6,18,23] had been available for several years prior to 2004 in many databases, with very similar or superior solution procedures.
Therefore, stern actions must be taken on ELM and its creator in order to convey a clear message to junior researchers: “You cannot become an independent inventor of existing concepts widely available in digital databases by not doing a thorough literature search, or by pretending not to know the literature even when it is clearly known to you.”
Hence, the following message must be conveyed to junior researchers without any ambiguity:
“It is your responsibility to conduct a thorough literature search. It is possible to reinvent the wheel and get the work published in a conference or a journal. But when you are told by others about highly similar work in the literature that is well captured by databases and search engines, there is no ground for you to claim that you are an independent inventor of the same concept. You must simply accept the fact that you did not search the literature sufficiently using all possible synonyms, and your so-called novel work must not be regarded as novel any further.”
Another strange observation in relation to ELM is that the inventor labels older methods (published in the 1990s) as variants of ELM, which was first published in 2004. This is a truly strange phenomenon, never before heard of in the scientific community; we have only heard of newer algorithms being identified as variants of older methods. This atrocious practice must be stopped with a strong message. The correct way to state the facts would be: ELM-Sigmoidal is the RVFL [4,5,18,23] without direct links from the input to the output, OR the RNN [27] without a bias in the output neuron. Therefore, all researchers must be asked to use RVFL or RNN instead of ELM-Sigmoidal in their future publications.
8. Unscientifically Promoting the ELM Name
The inventor organizes numerous journal special issues, the ELM conference series, etc. to publish ELM-related works. Even though the inventor has been closely involved in the review process on many occasions, citations to the relevant publications and experimental comparisons with the relevant works have been mostly non-existent. Further, by offering an SCI-indexed publication option, the ELM inventor has multiplied ELM publications, even though ELM is very likely to be either worse than or as good as the older methods [3,4,5,18,21,23]. These activities have misguided far too many junior researchers and resulted in a waste of resources. By taking appropriate action against GB Huang, junior researchers will be informed that excluding closely related works from the literature review and experimental comparisons is not at all tolerated.
9. What Does ELM Mean?
ELM was presented to researchers as a method with a non-iterative solution and randomization. However, the ELM-Kernel [15] is the LS-SVM [28] with b = 0 and without any randomization. Hence, identifying these ELM versions [11,14,15] as a method in the domain of randomization with a closed-form solution procedure is not justified, due to the existence of the ELM-Kernel without randomization. If ELM does not represent randomization, it would just be a linguistic invention, a synonym for the Moore-Penrose pseudo-inverse, as ELM did not move the state of the art beyond what existed when it was proposed in 2004 without citing the relevant literature. But ELM has certainly taken research-publishing ethics and principles beyond an acceptable extreme.
10. Correct Time to Act?
Reinventing the same wheel again and again has taken place too many times in the recent literature. Some of these cases were identified as plagiarism, as there was sufficient evidence of obvious duplication of materials. In other cases, the authors and reviewers genuinely failed to locate the previous works. In the latter case, there is no need to take any action, provided the authors realize their unsatisfactory literature reviewing and subsequently acknowledge the original works.
In the case of ELM, it was published by intentionally suppressing the closely related works and without offering experimental comparisons with such works. In fact, the significance of these minor variations remains to be experimentally determined (by the inventor and the followers of ELM); the variations are too insignificant to have been published as conference or journal articles had such tiny differences been explicitly stated in the first submissions themselves.
These tiny differences were highlighted by Wang & Wan [29]. Unfortunately, the inventor of ELM continues to exclude the closely related references [17] while comparing ELM with fairly unrelated methods, or comparing incorrectly. The inventor has not offered any experimental comparisons with these closely related methods in over a decade. All of this now clearly shows the unethical behavior of the inventor.
Due to the special circumstances of ELM, we indeed required a much longer time to arrive at this conclusion. Hence, this is the correct time to act on the unethical publishing activities around ELM.
11. Summary
Based on the evidence presented in Sections 1-9, it is clear that the inventor of ELM, GB Huang, has been violating ethical publishing principles in numerous ways. This evidence also clearly shows that the inventor of ELM has conducted these unethical activities in a systematic manner, as follows:
1. Preparing articles [11,12,14] with very minor variations of previously published methods.
2. Submitting them to conferences or a poor-quality journal without citing any of the related previous works. (Obviously, if all references had been correctly cited and described, even a poor-quality journal would not have accepted these works.)
3. After publishing such a work, claiming not to know the literature (this is certainly not true in the ELM-RBF case, as the inventor knew all the relevant works) while pointing to a minor variation that the inventor introduced in the first place.
4. Instead of accepting that the work was almost the same as the literature and reverting to the older names (the ethically correct step taken by almost all researchers in this situation), promoting his own name by: (1) repeating the tiny variations in an unethical manner; (2) excluding the almost identical works; (3) excluding thorough experimental comparisons with nearly identical works in the literature; (4) making negative statements about other methods without solid experimental results; and (5) comparing with apparently different methods [1,31].
5. Copying theories and proofs derived for other neural network models (such as universal approximation [10], wavelets [33], trigonometric networks [22], etc.) and applying them to ELM to claim that ELM has a large body of theory while the RVFL and RNN do not. (In fact, as ELM is a tiny variant of the RVFL and RNN, all these theories hold trivially for the RVFL and RNN as well.)
6. Along the way, making either false claims, or claims without any experimental support from extensive datasets, that ELM is superior to others for this or that reason.
7. Organizing special issues, conferences, etc. to promote these unethical research practices among junior researchers who would not bother to read the original works published during the 1988-1996 period.
The above observations are well supported by the evidence presented in Sections 1-9. Hence, it is the correct time to act on this elaborate “ELM Scandal”, which is comparable to the scandal discussed in [2].
12. References
1. E. Baum, “On the capabilities of multilayer perceptrons,” Journal of Complexity, vol. 4, 193–215, 1988.
2. P. K. Bondyopadhyay, “Sir JC Bose diode detector received Marconi’s first transatlantic wireless signal of December 1901 (the Italian Navy Coherer Scandal Revisited),” Proc. of IEEE, Vol. 86, No. 1, 259–285, 1998.
3. D. S. Broomhead and D. Lowe, “Multivariable functional interpolation and adaptive networks,” Complex Systems, vol. 2, 321–355, 1988.
4. C. L. P. Chen, “A rapid supervised learning neural network for function interpolation and approximation,” IEEE Trans. on Neural Networks, 7(5):1220–1230, 1996.
5. C. L. P. Chen and J. Z. Wan, “A rapid learning and dynamic stepwise updating algorithm for flat neural networks and the application to timeseries prediction,” IEEE Trans on Systems, Man, and Cybernetics, Part B, 29(1):62–72, 1999.
6. S. Chen, C. F. Cowan, P. M. Grant, “Orthogonal least squares learning algorithm for radial basis function networks,” IEEE Trans. Neural Networks, 2(2):302–309, 1991.
7. C. Cortes, V. Vapnik, “Support-vector networks,” Machine Learning, Vol. 20, No. 3, 273–297, 1995.
8. M. FernandezDelgado, E. Cernadas, S. Barro, and D. Amorim, “Do we need hundreds of classifiers to solve real world classification problems?” Journal of Machine Learning Research, vol. 15, No. 1, 3133–3181, 2014.
9. S. Ferrari, R. F. Stengel, “Smooth function approximation using neural networks,” IEEE Trans on Neural Networks, Vol. 16, no. 1, 24–38, 2005.
10. K. Hornik, “Approximation capabilities of multilayer feedforward networks,” Neural Networks, vol. 4, pp. 251–257, 1991.
11. G.B. Huang, C.K. Siew, “Extreme learning machine: RBF network case,” Proc. ICARCV 2004, pp. 1029–1036 (Int. Conf. on Control, Automation, Robotics and Vision).
12. G. B. Huang, C. K. Siew, “Extreme learning machine with randomly assigned RBF Kernels,” Int. J of Information Technology, 11(1):16–24, 2005.
13. G.B. Huang, P. Saratchandran, N. Sundararajan, “A generalized growing and pruning RBF (GGAPRBF) neural network for function approximation,” IEEE Trans on Neural Networks, 16(1):57–67, 2005.
14. G.B. Huang, Q.Y. Zhu, and C.K. Siew, “Extreme learning machine: A new learning scheme of feedforward neural networks,” Proc. of IEEE Int. Joint Conf. on Neural Networks, Vol. 2, 2004, pp. 985–990.
15. G.B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 42, no. 2, 513–529, 2012.
16. G.B. Huang, “Reply to comments on ‘the extreme learning machine’,” IEEE Trans. on Neural Networks, vol. 19, no. 8, pp. 1495–1496, Aug. 2008.
17. G.B. Huang, “An insight into extreme learning machines: Random neurons, random features and kernels,” Cognitive Computation, Vol. 6, 376–390, 2014.
18. B. Igelnik and Y.H. Pao, “Stochastic choice of basis functions in adaptive function approximation and the functionallink net,” IEEE Trans on Neural Networks, 6(6):1320–1329, 1995.
19. W. Karush, “Minima of functions of several variables with inequalities as side constraints,” MS thesis, Department of Mathematics, U of Chicago, 1939.
20. T. H. Kjeldsen, “A contextualized historical analysis of the Kuhn–Tucker theorem in nonlinear programming: The impact of World War II,” Historia Mathematica, 27(4):331–361, 2000.
21. H. Li, C. L. P. Chen, and H.P. Huang, Fuzzy neural intelligent systems: Mathematical foundation and the applications in engineering. CRC Press, 2000.
22. N. Y. Nikolaev, H. Iba, “Polynomial harmonic GMDH learning networks for time series modeling,” Neural Networks 16 (2003) 1527–1540.
23. Y.H. Pao, G.H. Park, and D. J. Sobajic, “Learning and generalization characteristics of the random vector functionallink net,” Neurocomputing, 6(2):163–180, 1994.
24. Y.H. Pao and Y. Takefuji, “Functionallink net computing,” IEEE Computer, 25(5):76–79, 1992.
25. J. Park and I. W. Sandberg, “Universal approximation using radialbasis function networks,” Neural Comput., vol. 3, no. 2, pp. 246–257, June 1991.
26. D. E. Rumelhart, J. L. McClelland, and PDP Research Group, Eds., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, MIT Press, 1986.
27. W. F. Schmidt, M. A. Kraaijveld, and R. P. W. Duin, “Feedforward neural networks with random weights,” Proc. of 11th IAPR Int. Conf. on Pattern Recog., Conf. B: Pattern Recognition Methodology and Systems, Vol. 2, 1992, pp. 1–4.
28. J. A. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” Neural Processing Letters, vol. 9, No. 3, 293–300, 1999.
29. L. P. Wang and C. R. Wan, “Comments on ‘The extreme learning machine’,” IEEE Trans. on Neural Networks, Vol. 19, No. 8, 1494–1495, 2008.
30. B. Widrow, A. Greenblatt, Y. Kim, and D. Park, “The noprop algorithm: A new learning algorithm for multilayer neural networks,” Neural Netw., vol. 37, 182–188, Jan. 2013.
31. H. White, “An additional hidden unit test for neglected nonlinearity in multilayer feedforward networks,” Proc. of Int. Conf. on Neural Networks, 1989, pp. 451–455.
32. H. White, “Approximate nonlinear forecasting methods,” in Handbook of Economic Forecasting. Elsevier, 2006, pp. 460–512.
33. Q. Zhang and A. Benveniste, “Wavelet networks,” IEEE Trans. Neural Networks, vol. 3, 889–898, 1992.
13. Appendix
This appendix lists the RBF journal papers co-authored by the ELM inventor during the same period as, or just after, publishing ELM-RBF without citing any of these relevant papers. As these are journal articles, the ELM inventor must have been working on RBF from 2002 or earlier. The ELM-RBF concepts have been known since 1988 and have been repeated several times in RBF-related papers as the simplest (but not the most accurate) RBF solution.
An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks
By: Huang, GB; Saratchandran, P; Sundararajan, N
IEEE TRANSACTIONS ON SYSTEMS, MAN AND CYBERNETICS, PART B: CYBERNETICS Volume: 34 Issue: 6 Pages: 2284-2292 Published: DEC 2004
A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation
By: Huang, GB; Saratchandran, P; Sundararajan, N
IEEE TRANSACTIONS ON NEURAL NETWORKS Volume: 16 Issue: 1 Pages: 57-67 Published: JAN 2005
Performance evaluation of GAP-RBF network in channel equalization, By: Li, MB; Huang, GB; Saratchandran, P; et al., NEURAL PROCESSING LETTERS Volume: 22 Issue: 2 Pages: 223-233 Published: OCT 2005
Neuron selection for RBF neural network classifier based on data structure preserving criterion
By: Mao, KZ; Huang, GB, IEEE TRANS. ON NEURAL NETWORKS Vol: 16 Issue: 6 Pages: 1531-1540 Published: NOV 2005
Complex-valued growing and pruning RBF neural networks for communication channel equalisation
By: Li, M. B.; Huang, G. B.; Saratchandran, P.; et al.
IEE PROC.-VISION, IMAGE AND SIGNAL PROCESSING Vol.: 153 No: 4 pp. 411-418, AUG 2006
Improved GAP-RBF network for classification problems
By: Zhang, Runxuan; Huang, Guang-Bin; Sundararajan, N.; et al., NEUROCOMPUTING Volume: 70 Issue: 16-18 Pages: 3011-3018 Published: OCT 2007
