Exploratory Analysis of Metallurgical Process Data with Neural Networks and Related Methods
Elsevier, 19 April 2002 - 386 pages

This volume is concerned with the analysis and interpretation of multivariate measurements commonly found in the mineral and metallurgical industries, with the emphasis on the use of neural networks. The book is primarily aimed at the practicing metallurgist or process engineer, and a considerable part of it is of necessity devoted to basic theory, which is introduced as briefly as possible within the large scope of the field. Also, although the book focuses on neural networks, they cannot be divorced from their statistical framework, and this is discussed at length. The book is therefore a blend of basic theory and some of the most recent advances in the practical application of neural networks.
From inside the book
Results 1-5 of 51
Page 8
... (equations 1.3-1.4). The parameter $\lambda$ is proportional to the gain of the neuron, and determines the steepness of the continuous activation function. These functions are depicted graphically in Figure 1.12. For obvious reasons, the sign ...
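The effect of the gain parameter on the steepness of a continuous activation function can be sketched numerically. The following is a minimal illustration (not from the book) using a logistic sigmoid whose argument is scaled by a gain factor:

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Continuous sigmoidal activation; a larger gain makes the
    transition around x = 0 steeper (approaching a hard threshold)."""
    return 1.0 / (1.0 + np.exp(-gain * x))

# Same input, increasing gain: the output saturates more sharply.
print(sigmoid(0.5, gain=1.0))    # gentle slope, output near 0.62
print(sigmoid(0.5, gain=10.0))   # steep slope, output near 0.99
```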
Page 9
... equations 1.6-1.7. Once the structure of the network (number of layers, number of nodes per layer, types of nodes, etc.) is fixed, the parameters (weights) of the network have to be determined. This is done by training (optimization) of ...
Page 12
... equation, with respect to $w_i$, gives $\nabla E = -[d - f(\mathbf{w}^T\mathbf{x})]\,f'(\mathbf{w}^T\mathbf{x})\,\mathbf{x}$ (1.19). The adjustment of the weights in this supervised procedure takes place as follows: $\Delta\mathbf{w}_i = -\beta\nabla E$ (1.20), or $\Delta\mathbf{w}_i = \beta[d - f(\mathbf{w}^T\mathbf{x})]\,f'(\mathbf{w}^T\mathbf{x})\,\mathbf{x}$ (1.21), for a single ...
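Equations (1.19)-(1.21) amount to gradient descent on the squared output error of a single neuron. A minimal sketch, assuming a logistic sigmoid for $f$ (so that $f' = f(1-f)$); the learning rate and data below are illustrative:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def delta_rule_update(w, x, d, beta=0.1):
    """One supervised step: Δw = β [d − f(wᵀx)] f′(wᵀx) x,
    i.e. a move down the error gradient ∇E = −[d − f(wᵀx)] f′(wᵀx) x."""
    y = sigmoid(w @ x)
    fprime = y * (1.0 - y)          # derivative of the logistic sigmoid
    return w + beta * (d - y) * fprime * x

w = np.zeros(3)
x = np.array([1.0, 0.5, -0.2])      # exemplar (first entry is the bias input)
d = 1.0                             # target value
for _ in range(1000):
    w = delta_rule_update(w, x, d, beta=0.5)
print(sigmoid(w @ x))               # output moves toward the target of 1.0
```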
Page 14
... Equation (1.23) is known as the generalized delta rule, since it includes the delta rule ($\alpha = 0$) as a special case. The inclusion of a momentum term has the following benefits. • When the partial derivative $\partial E(k)/\partial w_i(k)$ has the same algebraic ...
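The benefit described in the bullet point (same-sign gradients let the momentum term accumulate, so the effective step grows along shallow error valleys) can be illustrated with a minimal sketch of a momentum-augmented step, $\Delta w(k) = -\beta\,\partial E/\partial w + \alpha\,\Delta w(k-1)$; the values of $\beta$ and $\alpha$ below are hypothetical:

```python
import numpy as np

def momentum_step(grad, prev_delta, beta=0.1, alpha=0.9):
    """Generalized delta rule step: Δw(k) = −β ∂E/∂w + α Δw(k−1).
    With momentum coefficient α = 0 this reduces to the plain delta rule."""
    return -beta * grad + alpha * prev_delta

# A constant gradient of the same sign at every step: the accumulated
# update grows toward the limiting step size −β/(1−α) = −1.0 here.
delta = np.zeros(1)
for _ in range(20):
    delta = momentum_step(np.array([1.0]), delta, beta=0.1, alpha=0.9)
print(delta)   # close to −0.88 after 20 steps, approaching −1.0
```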
Page 16
... (equation 1.12), so that $\Delta\mathbf{w}_i = \beta d_i \mathbf{x}$ (1.27), or $\Delta w_{ij} = \beta d_i x_j$, for $j = 0, 1, 2, \ldots, n$ (1.28). The increment in the weight vector is directly proportional to the product of the target value of a particular exemplar, and the exemplar (input) ...
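Equations (1.27)-(1.28) can be written out directly; the helper name below is illustrative, not from the book:

```python
import numpy as np

def weight_increment(x, d, beta=0.1):
    """Δw = β d x: the increment is proportional to the product of the
    target value d and the exemplar (input) vector x, per equation (1.27)."""
    return beta * d * x

x = np.array([1.0, 2.0, -1.0])   # exemplar, with x[0] as the bias input
print(weight_increment(x, d=1.0, beta=0.1))   # [ 0.1  0.2 -0.1]
```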
Contents
... | 1
... | 50
CHAPTER 3 LATENT VARIABLE METHODS | 74
CHAPTER 4 REGRESSION MODELS | 112
CHAPTER 5 TOPOGRAPHICAL MAPPINGS WITH NEURAL NETWORKS | 172
CHAPTER 6 CLUSTER ANALYSIS | 199
CHAPTER 7 EXTRACTION OF RULES FROM DATA WITH NEURAL NETWORKS | 228
CHAPTER 8 INTRODUCTION TO THE MODELLING OF DYNAMIC SYSTEMS | 262
CHAPTER 9 DYNAMIC SYSTEMS ANALYSIS AND MODELLING | 285
CHAPTER 10 EMBEDDING OF MULTIVARIATE DYNAMIC PROCESS SYSTEMS | 299
CHAPTER 11 FROM EXPLORATORY DATA ANALYSIS TO DECISION SUPPORT AND PROCESS CONTROL | 313
REFERENCES | 333
INDEX | 366
DATA FILES | 370
Other editions - View all
Exploratory Analysis of Metallurgical Process Data with Neural Networks and ... C. Aldrich Limited preview - 2002
Exploratory Analysis of Metallurgical Process Data with Neural ..., Volume 1 Chris Aldrich No preview available - 2002
Common words and phrases
activation addition algorithm analysis application approach approximately associated attractor attribute calculated classification cluster coefficients complexity computational considered consists constructed containing continuous correlation curve data set decision defined dependent derived determined dimension direction distance distribution dynamic embedding equation error estimated example exemplars extracted Figure fitted follows fuzzy rules Gaussian given hidden layer indicated individual initial input learning least linear matrix means measure methods mill minimize multivariate neural network nodes noise nonlinear objects observations obtained operator optimal original output parameters pattern performance plant points possible prediction principal component principal component analysis problem projection radial basis function reconstructed region regression represented respectively rules sample scale selected separation shown in Figure similar single space squares statistical step structure Table techniques tree values variables variance vector weight
Popular passages
Page 335 - The sample complexity of pattern classification with neural networks: The size of the weights is more important than the size of the network.
Page 360 - Differential evolution: A simple and efficient adaptive scheme for global optimization over continuous spaces.
Page 338 - Shavlik, J.W. (1994). Using sampling and queries to extract rules from trained neural networks.
Page 341 - A growing neural gas network learns topologies. In: Tesauro, G., Touretzky, D.S., Leen, T.K. (eds.) Advances in Neural Information Processing Systems, vol.
Page 363 - The GENITOR Algorithm and Selective Pressure: Why Rank-Based Allocation of Reproductive Trials is Best, in Proc.
Page 127 - [Fragment of a regression coefficient table: Model 1, Unstandardized Coefficients (B, Std. Error), Standardized Coefficients (Beta), t, Sig., (Constant ...]
Page 348 - Kramer, M.A. (1991). Nonlinear Principal Component Analysis Using Autoassociative Neural Networks.
Page 190 - $\sum_i \sum_j \left[(i - \mu_x)(j - \mu_y)\,f(i, j, d, \sigma)/(\sigma_x \sigma_y)\right]$ (5.14) where $\mu_x$ and $\sigma_x$ are respectively the mean and standard deviation of the row sums of the matrix, and $\mu_y$ and $\sigma_y$ are the mean and standard deviation of the column sums.
Page 80 - The sum of the variances of all n principal components is equal to the sum of the variances of the original variables.
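This variance-preservation property (PCA is a rotation onto the principal axes, which leaves the total variance unchanged) is easy to verify numerically; a small sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated synthetic data: 200 observations of 4 variables.
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))

cov = np.cov(X, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)   # variances of the principal components

# Sum of principal-component variances equals the total variance
# of the original variables (the trace of the covariance matrix).
print(np.isclose(eigvals.sum(), np.var(X, axis=0, ddof=1).sum()))   # True
```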
Page 21 - Each process element in the Kohonen layer measures the Euclidean distance of its weights to the input values (exemplars) fed to the layer. For example, if the input data consist of $M$-dimensional vectors of the form $\mathbf{x} = \{x_1, x_2, \ldots, x_M\}$, then each Kohonen element will have $M$ weight values, which can be denoted by $\mathbf{w}_i = \{w_{i1}, w_{i2}, \ldots, w_{iM}\}$.
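The distance computation described in this passage can be sketched as follows; the three-element layer and its weights below are hypothetical:

```python
import numpy as np

def best_matching_unit(weights, x):
    """Each Kohonen element measures the Euclidean distance between its
    weight vector and the input exemplar; the closest element wins."""
    distances = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(distances)), distances

# A hypothetical layer of 3 elements over M = 2 dimensional inputs:
# row i holds the weight vector w_i = {w_i1, w_i2}.
W = np.array([[0.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
winner, dists = best_matching_unit(W, np.array([0.9, 1.1]))
print(winner)   # element 1 lies closest to this exemplar
```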