Exploratory Analysis of Metallurgical Process Data with Neural Networks and Related Methods
Elsevier, Apr 19, 2002 - 386 pages

This volume is concerned with the analysis and interpretation of multivariate measurements commonly found in the mineral and metallurgical industries, with the emphasis on the use of neural networks. The book is primarily aimed at the practicing metallurgist or process engineer, and a considerable part of it is of necessity devoted to basic theory, which is introduced as briefly as the large scope of the field allows. Also, although the book focuses on neural networks, these cannot be divorced from their statistical framework, and this is discussed at length. The book is therefore a blend of basic theory and some of the most recent advances in the practical application of neural networks.
From inside the book
Results 6–10 of 75
Page 20
... error of the network be apportioned to each node in the network. b) Back propagation algorithm The back propagation algorithm can be summarized as follows, for a network with a single hidden layer with q nodes and an output layer with p ...
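The snippet refers to the familiar two-phase scheme: a forward cycle followed by error apportionment. As a minimal illustration (not the book's code), the Python sketch below implements one backpropagation step for a network with a single hidden layer of q nodes and an output layer of p nodes, assuming sigmoid activations and a squared-error cost; biases are omitted and the learning rate is illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, t, W1, W2, lr=0.1):
    """One forward/backward pass for a single-hidden-layer network.

    x  : input vector, shape (d,)
    t  : target vector, shape (p,)
    W1 : hidden-layer weights, shape (q, d)
    W2 : output-layer weights, shape (p, q)
    """
    # Forward cycle
    h = sigmoid(W1 @ x)                 # hidden activations, shape (q,)
    y = sigmoid(W2 @ h)                 # network outputs, shape (p,)

    # Backward cycle: apportion the network error to each node
    delta_out = (y - t) * y * (1 - y)               # output-node error terms
    delta_hid = (W2.T @ delta_out) * h * (1 - h)    # hidden-node error terms

    # Gradient-descent weight updates (modifies W1, W2 in place)
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return y

# Toy usage with illustrative sizes: d=3 inputs, q=4 hidden, p=2 outputs
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 3))
W2 = rng.normal(scale=0.5, size=(2, 4))
y = backprop_step(rng.uniform(size=3), np.array([0.0, 1.0]), W1, W2)
```

Each node's delta is the share of the network error apportioned to it, which is exactly the bookkeeping the passage describes.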
Page 21
... error function scales with O(N_W), for a sufficiently large number of weights N_W. Since the number of weights is usually much larger than the number of nodes in the neural network, most of the computational effort in the forward cycle ...
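The O(N_W) point can be made concrete by counting weights versus nodes: each forward pass costs roughly one multiply-add per weight, so weights dominate. A quick illustrative count with hypothetical layer sizes:

```python
# Illustrative weight/node count for a d -> q -> p network with biases.
d, q, p = 50, 100, 10                  # hypothetical layer sizes
n_weights = (d + 1) * q + (q + 1) * p  # one bias per hidden/output node
n_nodes = q + p
print(n_weights, n_nodes)              # 6110 weights vs 110 nodes
```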
Page 32
... error techniques. Each hidden unit of a radial basis function network can be seen as having its own receptive field, which is used to cover the input space. The output weights leading from the hidden units to the output nodes ...
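A minimal sketch of this construction in Python, assuming Gaussian receptive fields with centres drawn from the training inputs and a fixed width, and with the output weights fitted by linear least squares; the toy data and parameter values are illustrative only.

```python
import numpy as np

def rbf_design(X, centres, width):
    """Hidden-layer activations: one Gaussian receptive field per centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))                # training inputs
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2               # toy target
centres = X[rng.choice(len(X), 20, replace=False)]   # receptive-field centres

H = rbf_design(X, centres, width=0.5)
# Output weights solved by linear least squares on the hidden activations
w, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ w                                        # fitted outputs
```

The receptive fields tile the input space; only the linear output weights are fitted by error minimization, which is what keeps training cheap.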
Page 39
... errors. In this case, the hyperplane is constructed so as to minimize the probability of classification errors, averaged over the training data set. The margin of separation is referred to as soft if the following condition is violated ...
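A compact way to see the soft margin in code is the primal hinge-loss formulation, in which margin violations (slack) are penalized rather than forbidden. The sketch below trains a linear classifier by subgradient descent; it is an illustrative stand-in, not the book's algorithm, and the penalty parameter C and learning rate are arbitrary.

```python
import numpy as np

def train_soft_margin(X, d, C=1.0, lr=0.01, epochs=200):
    """Linear soft-margin classifier via hinge-loss subgradient descent.

    Minimizes 0.5*||w||^2 + C * sum(max(0, 1 - d_i*(w.x_i + b))),
    so points inside the margin incur a penalty instead of being forbidden.
    Labels d_i must be +1 or -1.
    """
    n, m = X.shape
    w, b = np.zeros(m), 0.0
    for _ in range(epochs):
        for i in range(n):
            if d[i] * (X[i] @ w + b) < 1:       # inside or beyond the soft margin
                w -= lr * (w - C * d[i] * X[i])
                b += lr * C * d[i]
            else:
                w -= lr * w                      # regularization term only
    return w, b

# Toy usage: two overlapping clusters with labels +/-1
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1, 1, (50, 2)), rng.normal(-1, 1, (50, 2))])
d = np.hstack([np.ones(50), -np.ones(50)])
w, b = train_soft_margin(X, d)
```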
Page 40
... error for the training data set is minimized, the functional $\Phi(\xi) = \sum_{i=1}^{N} I(\xi_i - 1)$ (1.89) is minimized with respect to the weight vector w, subject to the constraint in equation (1.79) and the constraint on $\|\mathbf{w}\|^2$, i.e. $\Phi(\mathbf{w}) = \tfrac{1}{2}\|\mathbf{w}\|^2$. The ...
Contents
CHAPTER 1 | 1
CHAPTER 2 | 50
CHAPTER 3 LATENT VARIABLE METHODS | 74
CHAPTER 4 REGRESSION MODELS | 112
CHAPTER 5 TOPOGRAPHICAL MAPPINGS WITH NEURAL NETWORKS | 172
CHAPTER 6 CLUSTER ANALYSIS | 199
CHAPTER 7 EXTRACTION OF RULES FROM DATA WITH NEURAL NETWORKS | 228
CHAPTER 8 INTRODUCTION TO THE MODELLING OF DYNAMIC SYSTEMS | 262
CHAPTER 9 DYNAMIC SYSTEMS ANALYSIS AND MODELLING | 285
CHAPTER 10 EMBEDDING OF MULTIVARIATE DYNAMIC PROCESS SYSTEMS | 299
CHAPTER 11 FROM EXPLORATORY DATA ANALYSIS TO DECISION SUPPORT AND PROCESS CONTROL | 313
REFERENCES | 333
INDEX | 366
DATA FILES | 370
Other editions - View all
Exploratory Analysis of Metallurgical Process Data with Neural Networks and ... | C. Aldrich | Limited preview - 2002
Exploratory Analysis of Metallurgical Process Data with Neural ..., Volume 1 | Chris Aldrich | No preview available - 2002
Common terms and phrases
activation addition algorithm analysis application approach approximately associated attractor attribute calculated classification cluster coefficients complexity computational considered consists constructed containing continuous correlation curve data set decision defined dependent derived determined dimension direction distance distribution dynamic embedding equation error estimated example exemplars extracted Figure fitted follows fuzzy rules Gaussian given hidden layer indicated individual initial input learning least linear matrix means measure methods mill minimize multivariate neural network nodes noise nonlinear objects observations obtained operator optimal original output parameters pattern performance plant points possible prediction principal component principal component analysis problem projection radial basis function reconstructed region regression represented respectively rules sample scale selected separation shown in Figure similar single space squares statistical step structure Table techniques tree values variables variance vector weight
Popular passages
Page 335 - The sample complexity of pattern classification with neural networks: The size of the weights is more important than the size of the network.
Page 360 - Differential evolution: A simple and efficient adaptive scheme for global optimization over continuous spaces.
Page 338 - Shavlik, J.W. (1994). Using sampling and queries to extract rules from trained neural networks.
Page 341 - A growing neural gas network learns topologies. In: Tesauro, G., Touretzky, D.S., Leen, T.K. (eds.) Advances in Neural Information Processing Systems, vol.
Page 363 - The GENITOR Algorithm and Selective Pressure: Why Rank-Based Allocation of Reproductive Trials is Best, in Proc.
Page 127 - [Truncated regression output table with columns: Model, Unstandardized Coefficients (B, Std. Error), Standardized Coefficients (Beta), t, Sig.; first row: 1 (Constant ...]
Page 348 - Kramer, M.A. (1991). Nonlinear Principal Component Analysis Using Autoassociative Neural Networks.
Page 190 - $\sum_i \sum_j \left[ (i - \mu_x)(j - \mu_y) \, f(i, j, d, a) / (\sigma_x \sigma_y) \right]$ (5.14), where $\mu_x$ and $\sigma_x$ are respectively the mean and standard deviation of the row sums of the matrix, and $\mu_y$ and $\sigma_y$ are the mean and standard deviation of the column sums.
Page 80 - The sum of the variances of all n principal components is equal to the sum of the variances of the original variables.
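This invariance holds because the component variances are the eigenvalues of the covariance matrix, whose sum equals its trace, i.e. the sum of the original variances. A quick numerical check with illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))       # 500 observations, 4 variables
S = np.cov(X, rowvar=False)         # sample covariance matrix
eigvals = np.linalg.eigvalsh(S)     # principal-component variances

# Sum of component variances equals sum of original variances (trace of S)
assert np.isclose(eigvals.sum(), np.trace(S))
```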
Page 21 - Each process element in the Kohonen layer measures the Euclidean distance of its weights to the input values (exemplars) fed to the layer. For example, if the input data consist of M-dimensional vectors of the form $x = \{x_1, x_2, \ldots, x_M\}$, then each Kohonen element will have M weight values, which can be denoted by $w_i = \{w_{i1}, w_{i2}, \ldots, w_{iM}\}$.
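A minimal sketch of that distance computation, assuming the layer's weights are stored as an array with one M-dimensional weight vector per element; the winning (best-matching) element is the one whose weights lie closest to the input exemplar (all names and sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
M, n_units = 5, 12
W = rng.uniform(size=(n_units, M))  # w_i: one M-dimensional weight vector per element
x = rng.uniform(size=M)             # input exemplar x = {x_1, ..., x_M}

dists = np.linalg.norm(W - x, axis=1)  # Euclidean distance of each element's weights to x
winner = int(np.argmin(dists))         # best-matching Kohonen element
```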