====== Keras input explanation: input_shape, units, batch_size, dim, etc ======
<WRAP center round box 60% centeralign>
**{{tagpage>keras}}**
</WRAP>
<WRAP center round box 60% centeralign>
**[[les_pages_intelligence_artificielle_en_details|The Artificial Intelligence pages in detail]]**
</WRAP>

A cleaned-up version of a **stackoverflow.com** post, not translated into French because it is very technical and only really makes sense in English.
**[[https://

=====Resources=====
  * [[https://
  * [[https://
  * [[https://
=====Question=====
{{ :
=====Definitions=====
====Units====
The amount of "neurons", or "cells", or whatever the layer has inside it.
It's a property of each layer, and yes, it's related to the output shape (as we will see later). In your picture, except for the input layer, which is conceptually different from other layers, you have:
  * Hidden layer 1: 4 units (4 neurons)
  * Hidden layer 2: 4 units
  * Last layer: 1 unit
====Shapes====
Shapes are consequences of the model's configuration. Shapes are tuples representing how many elements an array or tensor has in each dimension. \\
Ex: a shape (30,4,10) means an array or tensor with 3 dimensions, containing 30 elements in the first dimension, 4 in the second and 10 in the third, totaling 30*4*10 = 1200 elements or numbers.
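As a plain-Python sketch (not Keras code), the dimension count and element count of a shape can be checked directly:

<code python>
import math

shape = (30, 4, 10)            # 3 dimensions: 30, 4 and 10 elements
n_dims = len(shape)            # number of dimensions in the shape
n_elements = math.prod(shape)  # total number of values: 30*4*10

print(n_dims, n_elements)      # 3 1200
</code>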
====The input shape====
What flows between layers are tensors. Tensors can be seen as matrices, with shapes. \\
In Keras, the input layer itself is not a layer, but a tensor. It's the starting tensor you send to the first hidden layer. This tensor must have the same shape as your training data. \\
Example: if you have 30 images of 50x50 pixels in RGB (3 channels), the shape of your input data is (30,50,50,3). Then your input layer tensor must have this shape (see details in the "Shapes in Keras" section).
Each type of layer requires the input with a certain number of dimensions:
  * Dense layers require inputs as (batch_size, input_size) or (batch_size, optional, ..., optional, input_size)
  * if using channels_first: (batch_size, channels, image_side1, image_side2)
  * 1D convolutions and recurrent layers use (batch_size, sequence_length, features)
  * Details on how to prepare data for recurrent layers
Now, the input shape is the only one you must define, because your model cannot know it. Only you know that, based on your training data. \\
All the other shapes are calculated automatically based on the units and particularities of each layer.
====Relation between shapes and units - The output shape====
Given the input shape, all other shapes are results of layer calculations. \\
The "units" of each layer will define the output shape (the shape of the tensor that is produced by the layer and that will be the input of the next layer). \\
Each type of layer works in a particular way. Dense layers have output shape based on "units", convolutional layers have output shape based on "filters". But it's always based on some layer property. \\
Let's show what happens with "Dense" layers, which is the type shown in your graph.
A dense layer has an output shape of (batch_size, units). So, yes, units, the property of the layer, also defines the output shape.
  * Hidden layer 1: 4 units, output shape: (batch_size,4)
  * Hidden layer 2: 4 units, output shape: (batch_size,4)
  * Last layer: 1 unit, output shape: (batch_size,1)
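The shape propagation above can be sketched in plain Python with a hypothetical helper (not Keras code): each Dense layer keeps every dimension except the last, which becomes its number of units.

<code python>
def dense_output_shape(input_shape, units):
    # A Dense layer replaces the last dimension with its number of units.
    return input_shape[:-1] + (units,)

# The batch size is unknown at definition time, shown as None
shape = (None, 3)            # input: 3 elements per sample
for units in (4, 4, 1):      # hidden layer 1, hidden layer 2, output layer
    shape = dense_output_shape(shape, units)
    print(shape)
# (None, 4)
# (None, 4)
# (None, 1)
</code>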
====Weights====
Weights will be entirely automatically calculated based on the input and the output shapes. Again, each type of layer works in a certain way. But the weights will be a matrix capable of transforming the input shape into the output shape by some mathematical operation. \\
In a dense layer, weights multiply all inputs. It's a matrix with one column per input and one row per unit, but this is often not important for basic work. \\
In the image, if each arrow had a multiplication number on it, all numbers together would form the weight matrix.
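As a sketch of what the arrows compute (plain Python, omitting bias and activation for simplicity): the weight matrix has one row per unit and one column per input, and multiplying it by the input vector yields one value per unit.

<code python>
def dense_forward(weights, inputs):
    # One output per unit: the dot product of that unit's row with the inputs.
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

# 4 units, 3 inputs -> a 4x3 weight matrix (one number per arrow in the picture)
W = [[0.1, 0.2, 0.3],
     [0.0, 1.0, 0.0],
     [1.0, 1.0, 1.0],
     [0.5, 0.0, 0.5]]
x = [1.0, 2.0, 3.0]   # one sample with 3 input elements

out = dense_forward(W, x)
print(len(out))       # 4 -- one value per unit
</code>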
====Shapes in Keras====
Earlier, I gave an example of 30 images, 50x50 pixels and 3 channels, having an input shape of (30,50,50,3). \\
Since the input shape is the only one you need to define, Keras will demand it in the first layer. \\
But in this definition, Keras ignores the first dimension, which is the batch size. Your model should be able to deal with any batch size, so you define only the other dimensions:
<code python>
input_shape = (50,50,3)  # regardless of how many images I have, each image has this shape
</code>

Optionally, or when it's required by certain kinds of models, you can pass the shape containing the batch size via batch_input_shape=(30,50,50,3). This limits your training possibilities to this unique batch size, so it should be used only when really required. \\
Either way you choose, tensors in the model will have the batch dimension. \\
So, even if you used input_shape=(50,50,3), when keras sends you messages, or when you print the model summary, it will show (None,50,50,3). \\
The first dimension is the batch size; it's None because it can vary depending on how many examples you give for training. (If you defined the batch size explicitly, then the number you defined will appear instead of None.) \\
Also, in advanced work, when you actually operate directly on the tensors (inside Lambda layers or in the loss function, for instance), the batch size dimension will be there.
  * So, when defining the input shape, you ignore the batch size: input_shape=(50,50,3)
  * When doing operations directly on tensors, the shape will be again (30,50,50,3)
  * When keras sends you a message, the shape will be (None,50,50,3) or (30,50,50,3), depending on what type of message you send
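These three views of the same data can be sketched with plain tuples (illustrative only, not Keras code):

<code python>
input_shape = (50, 50, 3)   # what you pass to the first layer (no batch size)

batch_size = 30
data_shape = (batch_size,) + input_shape   # shape of the actual training data
summary_shape = (None,) + input_shape      # what Keras reports when batch size is free

print(data_shape)      # (30, 50, 50, 3)
print(summary_shape)   # (None, 50, 50, 3)
</code>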
====Dim====
And in the end, what is dim? \\
If your input shape has only one dimension, you don't need to give it as a tuple, you give input_dim as a scalar number. \\
So, in your model, where your input layer has 3 elements, you can use either of these two:
  * input_shape=(3,) -- the comma is necessary when you have only one dimension
  * input_dim = 3
But when dealing directly with the tensors, often dim will refer to how many dimensions a tensor has. For instance a tensor with shape (25,10909) has 2 dimensions.
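The two meanings of "dim" can be sketched in plain Python: the number of dimensions is just the length of the shape tuple, while input_dim is the element count of a one-dimensional input.

<code python>
vector_shape = (3,)          # 1 dimension, 3 elements -> input_dim = 3
matrix_shape = (25, 10909)   # 2 dimensions, 25 x 10909 elements

input_dim = vector_shape[0]  # the scalar you would give to input_dim

print(len(vector_shape), len(matrix_shape), input_dim)  # 1 2 3
</code>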
=====Defining your image in Keras=====
Keras has two ways of doing it: Sequential models, or the functional API Model. I don't like using the sequential model; later you will have to forget it anyway, because you will want models with branches. \\
PS: here I ignored other aspects, such as activation functions.

**With the Sequential model:**

<code python>
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()

# Start from the first hidden layer, since the input is not actually a layer,
# but inform the shape of the input, with 3 elements.
model.add(Dense(units=4, input_shape=(3,)))  # hidden layer 1 with input

# Further layers:
model.add(Dense(units=4))  # hidden layer 2
model.add(Dense(units=1))  # output layer
</code>

**With the functional API Model:**

<code python>
from keras.models import Model
from keras.layers import *

# Start defining the input tensor:
inpTensor = Input((3,))

# Create the layers and pass them the input tensor to get the output tensor:
hidden1Out = Dense(units=4)(inpTensor)
hidden2Out = Dense(units=4)(hidden1Out)
finalOut = Dense(units=1)(hidden2Out)

# Define the model's start and end points:
model = Model(inpTensor, finalOut)
</code>
====Shapes of the tensors====
Remember that you ignore batch sizes when defining layers; the batch dimension appears as None:
  * inpTensor: (None,3)
  * hidden1Out: (None,4)
  * hidden2Out: (None,4)
  * finalOut: (None,1)

====Comments 1====
===Q===
To which dimension does the first value of the input_shape argument refer? I see things like input_shape=(728,) -- does the comma create a second dimension?
  * That comma does not create a second dimension. It's just Python notation for creating a tuple that contains only one element. input_shape=(728,) means that each sample has 728 values.

===Q===
Could you please elaborate a little on what the difference between "input elements" and "dimensions" is?
  * A vector has one dimension, but many elements. It has shape (n,). A matrix has two dimensions: dimension 0 has m elements, dimension 1 has n elements, totaling m x n elements, shape (m,n). If you imagine a "cube" divided in little cubes, each little cube with data, this would be 3D, with m x n x o elements, shape (m,n,o).

===Q===
But when dealing directly with the tensors, dim often refers to how many dimensions a tensor has. So for a 1-dimensional input of length 3, input_dim=3; what about a 2-dimensional input?
  * input_shape=(25,10909) -- the shape of one sample, without the batch size.
  * For those who have input and output tensors of different dimensions, see Daniel's answer.

===Q===
You said if you have 30 images of 50x50 pixels in RGB (3 channels), the shape of your input data is (30,50,50,3). Why isn't it (30,3,50,50)?
  * The order matters. You can configure Keras to use data_format = 'channels_first' or data_format = 'channels_last'; the default is channels_last.

===Q===
"or (batch_size, optional, ..., optional, input_size)" -- what are these optional dimensions?
  * They're whatever you want, if your data has this format. The Dense layer will only work on the last dimension, leaving the other dimensions untouched.
  * Examples: video frames that should be treated equally; image pixels when you want to process only the color channels for each pixel instead of mixing pixels, etc.
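The "Dense acts only on the last dimension" behaviour can be sketched in plain Python with a hypothetical helper (not Keras code): a (batch, frames, features) input keeps its batch and frame dimensions, and only the feature dimension changes.

<code python>
def dense_last_dim(weights, tensor):
    # Recurse down to the innermost (last) dimension, then apply the units there.
    if not isinstance(tensor[0], list):
        return [sum(w * x for w, x in zip(row, tensor)) for row in weights]
    return [dense_last_dim(weights, sub) for sub in tensor]

# 2 videos x 5 frames x 3 features, a Dense layer with 4 units
W = [[1.0, 0.0, 0.0]] * 4            # 4 units, 3 inputs each
data = [[[1.0, 2.0, 3.0]] * 5] * 2   # nested lists with shape (2, 5, 3)

out = dense_last_dim(W, data)
# Only the last dimension changed: (2, 5, 3) -> (2, 5, 4)
print(len(out), len(out[0]), len(out[0][0]))  # 2 5 4
</code>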

=====Input Dimension Clarified=====
The word "dimension" alone can refer to:
  - **The dimension of the input data (or stream)**, such as the number of sensor axes beaming the time-series signal, or the RGB color channels (3): suggested term => "input stream dimension"
  - **The total number / length of input features** (or of the input layer): 28 x 28 = 784 for the MNIST image, or 3000 FFT-transformed spectrum values: suggested term => "input layer / input feature dimension"
  - **The dimensionality** (number of dimensions) of the input, typically 3D as expected by a Keras LSTM: suggested term => "dimensionality of input"

In Keras, input_dim refers to the dimension of the input layer / the number of input features:

<code python>
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(32, input_dim=784))  # or 3 in the current posted example above
model.add(Activation('relu'))
</code>

In Keras LSTM, it refers to the total time steps. \\
The term has been very confusing, and we live in a very confusing world! \\
I find that one of the challenges in Machine Learning is dealing with different languages, dialects and terminologies.
{{tag>sb ia keras}}
keras_input_explanation.1601723244.txt.gz · Last modified: 2020/10/03 11:07 by serge