Exploratory Analysis of Metallurgical Process Data with Neural Networks and Related Methods
Elsevier, 19 Apr 2002 - 386 pages

This volume is concerned with the analysis and interpretation of multivariate measurements commonly found in the mineral and metallurgical industries, with the emphasis on the use of neural networks. The book is aimed primarily at the practising metallurgist or process engineer, and a considerable part of it is necessarily devoted to basic theory, which is introduced as briefly as possible within the large scope of the field. Although the book focuses on neural networks, these cannot be divorced from their statistical framework, which is therefore discussed at length. The book is thus a blend of basic theory and some of the most recent advances in the practical application of neural networks.
From inside the book

Results 1-5 of 89

Page 7
... indication of the strength of the connection. The flow of information through the node is unidirectional, as indicated by the arrows in this figure. [Figure 1.1. Model of a single neuron: inputs $x_1, \dots, x_m$ with weights $w_1, \dots, w_m$ feeding the net input $z$.] The output of the ...
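As a minimal sketch of the neuron in Figure 1.1, assuming a sigmoid activation and a bias term (neither of which the snippet specifies), the output is the activation applied to the weighted sum of the inputs:

```python
import numpy as np

def neuron_output(x, w, b=0.0):
    # Net input z = sum_i w_i * x_i + b, as in Figure 1.1.
    z = np.dot(w, x) + b
    # Sigmoid activation squashes z into (0, 1); the choice of
    # activation here is an illustrative assumption.
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 0.3])   # inputs x1, x2, x3
w = np.array([0.8, 0.1, -0.4])   # connection strengths w1, w2, w3
print(neuron_output(x, w))
```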
Page 9
... indicated in Figure 1.3. The ability of the neural network to do so is typically assessed by means of cross-validation, where the performance of the network is evaluated against a novel set of test data, not used during training. Modes ...
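A minimal sketch of such a held-out evaluation, with ordinary least squares standing in for the trained network (the data, split ratio, and model are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))    # 100 samples, 3 process variables
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Hold out novel test data that play no part in training; repeated
# cross-validation folds would generalise this simple single split.
X_train, y_train, X_test, y_test = X[:80], y[:80], X[80:], y[80:]

# Fit on the training set only, then assess on the unseen test set.
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
print("test MSE:", np.mean((X_test @ coef - y_test) ** 2))
```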
Page 15
... indicated above, it is referred to as per sample training or pattern training. An alternative is to train per epoch, by accumulating weight changes prior to adjustment, i.e. $\Delta w' = \sum_i \Delta w_i$. The weights in the network are thus only ...
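A sketch of the two schemes for a linear neuron trained with squared error (learning rate, model, and data are illustrative assumptions); per-epoch training accumulates the per-sample changes $\Delta w_i$ and applies their sum once:

```python
import numpy as np

def epoch_update(X, y, w, eta=0.01):
    # Per-epoch (batch) training: accumulate Delta w over all samples,
    # then apply the summed change in a single adjustment.
    dw_total = np.zeros_like(w)
    for x_i, y_i in zip(X, y):
        err = y_i - w @ x_i
        dw_total += eta * err * x_i   # accumulate, do not apply yet
    return w + dw_total

def sample_update(X, y, w, eta=0.01):
    # Per-sample (pattern) training: adjust after every sample.
    for x_i, y_i in zip(X, y):
        err = y_i - w @ x_i
        w = w + eta * err * x_i
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([0.5, -1.0, 2.0])
w = np.zeros(3)
for _ in range(100):
    w = epoch_update(X, y, w)
print(w)   # approaches the true weights (0.5, -1.0, 2.0)
```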
Page 24
... indicated in Figure 1.7, for p = 2 and m = 3. The matrix of parameters determines the mapping (which represents the weights and biases in the case of a neural network model). [Figure 1.7: a 2D latent variable space embedded in a 3D data space.] ...
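A sketch of this latent variable idea in its linear special case, principal component analysis, with a 2D latent space embedded in a 3D data space as in the figure (the data-generating setup is an illustrative assumption):

```python
import numpy as np

# Toy data: 3D observations lying close to a 2D plane.
rng = np.random.default_rng(1)
T = rng.normal(size=(200, 2))                        # latent variables
A = np.array([[1.0, 0.0], [0.5, 1.0], [0.2, -0.3]])  # parameter matrix
X = T @ A.T + 0.05 * rng.normal(size=(200, 3))       # noisy 3D data

# The leading two eigenvectors of the covariance matrix span the
# latent plane and define the (linear) mapping from data to scores.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
P = eigvecs[:, -2:]       # top-2 directions (eigh sorts ascending)
scores = Xc @ P           # 2D latent coordinates of each sample
print(scores.shape)       # (200, 2)
```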
Page 25
... indicated in Figure 1.9. Learning vector quantization networks differ from supervised neural networks in that they construct their own representations of categories among input data. A learning vector quantization network contains an ...
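A sketch of one common learning vector quantization update (the LVQ1 rule; the book's exact formulation is not shown in this snippet), where each codebook vector acts as a prototype for a category:

```python
import numpy as np

def lvq1_step(codebooks, labels, x, y, eta=0.05):
    # Find the codebook vector nearest (Euclidean) to the input x.
    d = np.linalg.norm(codebooks - x, axis=1)
    k = int(np.argmin(d))
    # Move the winner toward x if its category matches, away otherwise.
    sign = 1.0 if labels[k] == y else -1.0
    codebooks[k] += sign * eta * (x - codebooks[k])
    return codebooks

rng = np.random.default_rng(2)
codebooks = rng.normal(size=(4, 2))   # 4 prototypes in 2D
labels = np.array([0, 0, 1, 1])       # their category assignments
codebooks = lvq1_step(codebooks, labels, np.array([1.0, 1.0]), 1)
```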
Contents
CHAPTER 1 | 1 |
CHAPTER 2 | 50 |
CHAPTER 3 LATENT VARIABLE METHODS | 74 |
CHAPTER 4 REGRESSION MODELS | 112 |
CHAPTER 5 TOPOGRAPHICAL MAPPINGS WITH NEURAL NETWORKS | 172 |
CHAPTER 6 CLUSTER ANALYSIS | 199 |
CHAPTER 7 EXTRACTION OF RULES FROM DATA WITH NEURAL NETWORKS | 228 |
CHAPTER 8 INTRODUCTION TO THE MODELLING OF DYNAMIC SYSTEMS | 262 |
CHAPTER 9 DYNAMIC SYSTEMS ANALYSIS AND MODELLING | 285 |
CHAPTER 10 EMBEDDING OF MULTIVARIATE DYNAMIC PROCESS SYSTEMS | 299 |
CHAPTER 11 FROM EXPLORATORY DATA ANALYSIS TO DECISION SUPPORT AND PROCESS CONTROL | 313 |
REFERENCES | 333 |
INDEX | 366 |
DATA FILES | 370 |
Other editions - View all

Exploratory Analysis of Metallurgical Process Data with Neural ..., Volume 1
Chris Aldrich
No preview available - 2002
Common terms and phrases
activation addition algorithm analysis application approach approximately associated attractor attribute calculated classification cluster coefficients complexity computational considered consists constructed containing continuous correlation curve data set decision defined dependent derived determined dimension direction distance distribution dynamic embedding equation error estimated example exemplars extracted Figure fitted follows fuzzy rules Gaussian given hidden layer indicated individual initial input learning least linear matrix means measure methods mill minimize multivariate neural network nodes noise nonlinear objects observations obtained operator optimal original output parameters pattern performance plant points possible prediction principal component principal component analysis problem projection radial basis function reconstructed region regression represented respectively rules sample scale selected separation shown in Figure similar single space squares statistical step structure Table techniques tree values variables variance vector weight
Popular passages
Page 335 - The sample complexity of pattern classification with neural networks: The size of the weights is more important than the size of the network.
Page 360 - Differential evolution: A simple and efficient adaptive scheme for global optimization over continuous spaces.
Page 338 - Shavlik, J.W. (1994). Using sampling and queries to extract rules from trained neural networks.
Page 341 - A growing neural gas network learns topologies. In: Tesauro, G., Touretzky, D.S., Leen, T.K. (eds.) Advances in Neural Information Processing Systems, vol.
Page 363 - The GENITOR Algorithm and Selective Pressure: Why Rank-Based Allocation of Reproductive Trials is Best, in Proc.
Page 127 - [Regression output table: columns Model, Unstandardized Coefficients (B, Std. Error), Standardized Coefficients (Beta), t, Sig.; row 1 begins (Constant...]
Page 348 - Kramer, M.A. (1991). Nonlinear Principal Component Analysis Using Autoassociative Neural Networks.
Page 190 - $\sum_i \sum_j \left[ (i - \mu_x)(j - \mu_y)\, f(i,j,d,a) / (\sigma_x \sigma_y) \right]$ (5.14), where $\mu_x$ and $\sigma_x$ are respectively the mean and standard deviation of the row sums of the matrix, and $\mu_y$ and $\sigma_y$ are the mean and standard deviation of the column sums.
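Read as a co-occurrence-matrix correlation statistic, equation (5.14) can be sketched as follows; treating $f(i,j,d,a)$ as a matrix F for one fixed distance and angle, normalising it to probabilities, and indexing from zero are all assumptions of this sketch:

```python
import numpy as np

def correlation_feature(F):
    F = F / F.sum()                        # normalise to probabilities
    i = np.arange(F.shape[0], dtype=float) # row indices
    j = np.arange(F.shape[1], dtype=float) # column indices
    px, py = F.sum(axis=1), F.sum(axis=0)  # row and column sums
    mu_x, mu_y = i @ px, j @ py            # marginal means
    sig_x = np.sqrt(((i - mu_x) ** 2) @ px)
    sig_y = np.sqrt(((j - mu_y) ** 2) @ py)
    # sum_ij (i - mu_x)(j - mu_y) F_ij / (sigma_x * sigma_y)
    return float(((i - mu_x)[:, None] * (j - mu_y)[None, :] * F).sum()
                 / (sig_x * sig_y))
```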
Page 80 - The sum of the variances of all n principal components is equal to the sum of the variances of the original variables.
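This identity follows because the variances of the principal components are the eigenvalues of the covariance matrix, whose sum equals its trace (the sum of the original variances); it can be checked numerically on arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # correlated data
C = np.cov(X, rowvar=False)
eigvals = np.linalg.eigvalsh(C)      # variances of the components
print(np.isclose(eigvals.sum(), np.trace(C)))  # True
```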
Page 21 - Each process element in the Kohonen layer measures the Euclidean distance of its weights to the input values (exemplars) fed to the layer. For example, if the input data consist of M-dimensional vectors of the form $x = \{x_1, x_2, \dots, x_M\}$, then each Kohonen element will have M weight values, which can be denoted by $w_i = \{w_{i1}, w_{i2}, \dots, w_{iM}\}$.
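A sketch of that distance computation, with the Kohonen layer's weight vectors $w_i$ stored as the rows of a matrix W (the shapes and data are illustrative):

```python
import numpy as np

def kohonen_distances(W, x):
    # Each element i holds weights w_i = (w_i1, ..., w_iM) and measures
    # the Euclidean distance between w_i and the exemplar x.
    d = np.linalg.norm(W - x, axis=1)
    return d, int(np.argmin(d))       # distances and the winning element

W = np.random.default_rng(4).normal(size=(5, 4))  # 5 elements, M = 4
d, winner = kohonen_distances(W, np.zeros(4))
print(winner, d[winner])
```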