Технология извлечения знаний из нейронных сетей: апробация, проектирование ПО, использование в психо...

  • …ей // Труды VI Международной конференции "Математика. Компьютер. Образование". - М.: Прогресс-традиция, 1999. - Ч.I. - С.110-116.

  • Царегородцев В.Г. Извлечение явных знаний из таблиц данных при помощи обучаемых и упрощаемых искусственных нейронных сетей // Материалы XII Международной конференции по нейрокибернетике. - Ростов-на-Дону. Изд-во СКНЦ ВШ. 1999.- 323с. - С.245-249.
  • Reed R. Pruning Algorithms - a Survey / IEEE Trans. on Neural Networks, 1993, Vol.4, №5. - pp.740-747.
  • Depenau J., Moller M. Aspects of Generalization and Pruning / Proc. WCNN94, 1994, Vol.3. - pp.504-509.
  • Гилев С.Е., Коченов Д.А., Миркес Е.М., Россиев Д.А. Контрастирование, оценка значимости параметров, оптимизация их значений и их интерпретация в нейронных сетях // Доклады III Всероссийского семинара “Нейроинформатика и ее приложения”. Красноярск, 1995.- С.66-78.
  • Weigend A.S., Rumelhart D.E., Huberman B.A. Generalization by Weight-Elimination with Application to Forecasting / Advances in Neural Information Processing Systems. Morgan Kaufmann, 1991. Vol.3. - pp.875-882.
  • Yasui S. Convergence Suppression and Divergence Facilitation for Pruning Multi-Output Backpropagation Networks / Proc. 3rd Int. Conf. on Fuzzy Logic, Neural Nets and Soft Computing, Iizuka, Japan, 1994. - pp.137-139.
  • Yasui S. A New Method to Remove Redundant Connections in Backpropagation Neural Networks: Introduction of Parametric Lateral Inhibition Fields / Proc. IEEE INNS Int. Joint Conf. on Neural Networks, Beijing, Vol.2. - pp.360-367.
  • Yasui S., Malinowski A., Zurada J.M. Convergence Suppression and Divergence Facilitation: New Approach to Prune Hidden Layer and Weights in Feedforward Neural Networks / Proc. IEEE Int. Symposium on Circuits and Systems 1995, Seattle, WA, USA. Vol.1. - pp.121-124.
  • Malinowski A., Miller D.A., Zurada J.M. Reconciling Training and Weight Suppression: New Guidelines for Pruning-efficient Training / Proc. WCNN 1995, Washington, DC, USA. Vol.1. - pp.724-728.
  • Krogh A., Hertz J. A Simple Weight Decay can Improve Generalization / Advances in Neural Information Processing Systems 4, 1992. - pp.950-957.
  • Kamimura R., Nakanishi S. Weight-decay as a Process of Redundancy Reduction / Proc. WCNN, 1994, Vol.3. - pp.486-489.
  • Karnin E.D. A Simple Procedure for Pruning Back-propagation Trained Network / IEEE Trans. on Neural Networks, June 1990. Vol. 1, No.2. - pp.239-242.
  • Le Cun Y., Denker J.S., Solla S.A. Optimal Brain Damage / Advances in Neural Information Processing Systems 2. - Morgan Kaufmann, 1990. - pp.598-605.
  • Hassibi B., Stork D.G., Wolff G. Optimal Brain Surgeon: Extensions and Performance Comparisons / Advances in Neural Information Processing Systems 6, 1994. - pp.263-270.
  • Гилев С.Е. Алгоритм сокращения нейронных сетей, основанный на разностной оценке вторых производных целевой функции // Нейроинформатика и ее приложения : Тезисы докладов V Всеросс. семинара, 1997. Красноярск. КГТУ. 1997. - 190с. - C.45-46.
  • Tanprasert C., Tanprasert T., Lursinsap C. Neuron and Dendrite Pruning by Synaptic Weight Shifting in Polynomial Time / Proc. IEEE ICNN 1996, Washington, DC, USA. Vol.2. - pp.822-827.
  • Kamimura R. Principal Hidden Unit Analysis: Generation of Simple Networks by Minimum Entropy Method / Proc. IJCNN 1993, Nagoya, Japan. - Vol.1. - pp.317-320.
  • Mozer M.C., Smolensky P. Using Relevance to Reduce Network Size Automatically / Connection Science. 1989. Vol.1. - pp.3-16.
  • Mozer M.C., Smolensky P. Skeletonization: A Technique for Trimming the Fat from a Network via Relevance Assessment / Advances in Neural Network Information Processing Systems 1, Morgan Kaufmann, 1989. - pp.107-115.
  • Watanabe E., Shimizu H. Algorithm for Pruning Hidden Units in Multi Layered Neural Network for Binary Pattern Classification Problem / Proc. IJCNN 1993, Nagoya, Japan. - Vol.1. - pp.327-330.
  • Yoshimura A., Nagano T. A New Measure for the Estimation of the Effectiveness of Hidden Units / Proc. Annual Conf. JNNS, 1992. - pp.82-83.
  • Murase K., Matsunaga Y., Nakade Y. A Back-propagation Algorithm which Automatically Determines the Number of Association Units / Proc. IJCNN, Singapore, 1991. - Vol.1. - pp.783-788.
  • Matsunaga Y., Nakade Y., Yamakawa O., Murase K. A Back-propagation Algorithm with Automatic Reduction of Association Units in Multi-layered Neural Network / Trans. on IEICE, 1991. Vol. J74-DII, №8. - pp.1118-1121.
  • Hagiwara M. Removal of Hidden Units and Weights for Back Propagation Networks / Proc. IJCNN 1993, Nagoya, Japan. - Vol.1. - pp.351-354.
  • Majima N., Watanabe A., Yoshimura A., Nagano T. A New Criterion "Effectiveness Factor" for Pruning Hidden Units / Proc. ICNN 1994, Seoul, Korea. - Vol.1. - pp. 382-385.
  • Царегородцев В.Г. Производство полуэмпирических знаний из таблиц данных с помощью обучаемых искусственных нейронных сетей // Методы нейроинформатики. Красноярск: Изд-во КГТУ, 1998. - 205c. - C.176-198.
  • Sietsma J., Dow R.J.F. Neural Net Pruning - Why and How / Proc. IEEE IJCNN 1988, San Diego, CA. Vol.1. - pp. 325-333.
  • Sietsma J., Dow R.J.F. Creating Artificial Neural Networks that Generalize / Neural Networks, 1991. Vol.4, No.1. - pp.67-79.
  • Yamamoto S., Oshino T., Mori T., Hashizume A., Motoike J. Gradual Reduction of Hidden Units in the Back Propagation Algorithm, and its Application to Blood Cell Classification / Proc. IJCNN 1993, Nagoya, Japan. - Vol.3. - pp.2085-2088.
  • Sarle W.S. How to measure importance of inputs? SAS Institute Inc., Cary, NC, USA, 1999. ftp://ftp.sas.com/pub/neural/importance.html
  • Goh T.-H. Semantic Extraction Using Neural Network Modelling and Sensitivity Analysis / Proc. IJCNN 1993, Nagoya, Japan. - Vol.1. - pp.1031-1034.
  • Nowlan S.J., Hinton G.E. Simplifying Neural Networks by Soft Weight Sharing / Neural Computation, 1992. Vol.4, №4. - pp.473-493.
  • Keegstra H., Jansen W.J., Nijhuis J.A.G., Spaanenburg L., Stevens H., Udding J.T. Exploiting Network Redundancy for Low-Cost Neural Network Realizations / Proc. IEEE ICNN 1996, Washington, DC, USA. Vol.2. - pp.951-955.
  • Chen A.M., Lu H.-M., Hecht-Nielsen R. On the Geometry of Feedforward Neural Network Error Surfaces / Neural Computation, 1993. Vol.5. - pp.910-927.
  • Гордиенко П. Стратегии контрастирования // Нейроинформатика и ее приложения : Тезисы докладов V Всероссийского семинара, 1997 / Под ред. А.Н.Горбаня. Красноярск. КГТУ. 1997. - 190с. - C.69.
  • Gorban A.N., Mirkes Ye.M., Tsaregorodtsev V.G. Generation of explicit knowledge from empirical data through pruning of trainable neural networks / Int. Joint Conf. on Neural Networks, Washington, DC, USA, 1999.
  • Ishibuchi H., Nii M. Generating Fuzzy If-Then Rules from Trained Neural Networks: Linguistic Analysis of Neural Networks / Proc. 1996 IEEE ICNN, Washington, DC, USA. Vol.2. - pp.1133-1138.
  • Lozowski A., Cholewo T.J., Zurada J.M. Crisp Rule Extraction from Perceptron Network Classifiers / Proc. 1996 IEEE ICNN, Washington, DC, USA. Plenary, Panel and Special Sessions Volume. - pp.94-99.
  • Lu H., Setiono R., Liu H. Effective Data Mining Using Neural Networks / IEEE Trans. on Knowledge and Data Engineering, 1996, Vol.8, №6. pp.957-961.
  • Duch W., Adamczak R., Grabczewski K. Optimization of Logical Rules Derived by Neural Procedures / Proc. 1999 IJCNN, Washington, DC, USA, 1999.
  • Duch W., Adamczak R., Grabczewski K. Neural Optimization of Linguistic Variables and Membership Functions / Proc. 1999 ICONIP, Perth, Australia.
  • Ishikawa M. Rule Extraction by Successive Regularization / Proc. 1996 IEEE ICNN, Washington, DC, USA. Vol.2. - pp.1139-1143.
  • Sun R., Peterson T. Learning in Reactive Sequential Decision Tasks: the CLARION Model / Proc. 1996 IEEE ICNN, Washington, DC, USA. Plenary, Panel and Special Sessions Volume. - pp.70-75.
  • Gallant S.I. Connectionist Expert Systems / Communications of the ACM, 1988. Vol.31, №2. - pp.152-169.
  • Saito K., Nakano R. Medical Diagnostic Expert System Based on PDP Model / Proc. IEEE ICNN, 1988. pp.255-262.
  • Fu L.M. Rule Learning by Searching on Adapted Nets / Proc. AAAI, 1991. - pp.590-595.
  • Towell G., Shavlik J.W. Interpretation of Artificial Neural Networks: Mapping Knowledge-based Neural Networks into Rules / Advances in Neural Information Processing Systems 4 (Moody J.E., Hanson S.J., Lippmann R.P. eds.). Morgan Kaufmann, 1992. - pp. 977-984.
  • Fu L.M. Rule Generation From Neural Networks / IEEE Trans. on Systems, Man, and Cybernetics, 1994. Vol.24, №8. - pp.1114-1124.
  • Yi L., Hongbao S. The N-R Method of Acquiring Multi-step Reasoning Production Rules Based on NN / Proc. 1996 IEEE ICNN, Washington, DC, USA. Vol.2. - pp.1150-1155.
  • Towell G., Shavlik J.W., Noordewier M.O. Refinemen