1D Convolutional LSTM.
Inherits From: RNN, Layer, Operation
tf.keras.layers.ConvLSTM1D(
    filters,
    kernel_size,
    strides=1,
    padding='valid',
    data_format=None,
    dilation_rate=1,
    activation='tanh',
    recurrent_activation='sigmoid',
    use_bias=True,
    kernel_initializer='glorot_uniform',
    recurrent_initializer='orthogonal',
    bias_initializer='zeros',
    unit_forget_bias=True,
    kernel_regularizer=None,
    recurrent_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    recurrent_constraint=None,
    bias_constraint=None,
    dropout=0.0,
    recurrent_dropout=0.0,
    seed=None,
    return_sequences=False,
    return_state=False,
    go_backwards=False,
    stateful=False,
    **kwargs
)
Similar to an LSTM layer, but the input transformations and recurrent transformations are both convolutional.
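A minimal usage sketch: the layer consumes a 4D batch of sequences of 1D feature maps and, with the default settings, returns a single 3D feature map per sample. The sample counts, timesteps, and channel sizes below are illustrative assumptions, not values from this page.

```python
import numpy as np
import tensorflow as tf

# 8 samples, 10 timesteps, 32 spatial steps ("rows"), 3 input channels
# (the default "channels_last" layout).
inputs = np.random.rand(8, 10, 32, 3).astype("float32")

layer = tf.keras.layers.ConvLSTM1D(filters=16, kernel_size=3, padding="same")
outputs = layer(inputs)

# With return_sequences=False (the default), the time dimension is
# collapsed: (samples, new_rows, filters).
print(outputs.shape)  # (8, 32, 16)
```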
| Args | |
|---|---|
| `filters` | int, the dimension of the output space (the number of filters in the convolution). |
| `kernel_size` | int or tuple/list of 1 integer, specifying the size of the convolution window. |
| `strides` | int or tuple/list of 1 integer, specifying the stride length of the convolution. `strides > 1` is incompatible with `dilation_rate > 1`. |
| `padding` | string, `"valid"` or `"same"` (case-insensitive). `"valid"` means no padding. `"same"` results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. |
| `data_format` | string, either `"channels_last"` or `"channels_first"`. The ordering of the dimensions in the inputs. `"channels_last"` corresponds to inputs with shape `(batch, steps, features)` while `"channels_first"` corresponds to inputs with shape `(batch, features, steps)`. It defaults to the `image_data_format` value found in your Keras config file at `~/.keras/keras.json`. If you never set it, then it will be `"channels_last"`. |
| `dilation_rate` | int or tuple/list of 1 integer, specifying the dilation rate to use for dilated convolution. |
| `activation` | Activation function to use. By default hyperbolic tangent activation function is applied (`tanh(x)`). |
| `recurrent_activation` | Activation function to use for the recurrent step. |
| `use_bias` | Boolean, whether the layer uses a bias vector. |
| `kernel_initializer` | Initializer for the `kernel` weights matrix, used for the linear transformation of the inputs. |
| `recurrent_initializer` | Initializer for the `recurrent_kernel` weights matrix, used for the linear transformation of the recurrent state. |
| `bias_initializer` | Initializer for the bias vector. |
| `unit_forget_bias` | Boolean. If `True`, add 1 to the bias of the forget gate at initialization. Use in combination with `bias_initializer="zeros"`. This is recommended in Jozefowicz et al., 2015. |
| `kernel_regularizer` | Regularizer function applied to the `kernel` weights matrix. |
| `recurrent_regularizer` | Regularizer function applied to the `recurrent_kernel` weights matrix. |
| `bias_regularizer` | Regularizer function applied to the bias vector. |
| `activity_regularizer` | Regularizer function applied to the output of the layer. |
| `kernel_constraint` | Constraint function applied to the `kernel` weights matrix. |
| `recurrent_constraint` | Constraint function applied to the `recurrent_kernel` weights matrix. |
| `bias_constraint` | Constraint function applied to the bias vector. |
| `dropout` | Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. |
| `recurrent_dropout` | Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. |
| `seed` | Random seed for dropout. |
| `return_sequences` | Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: `False`. |
| `return_state` | Boolean. Whether to return the last state in addition to the output. Default: `False`. |
| `go_backwards` | Boolean (default: `False`). If `True`, process the input sequence backwards and return the reversed sequence. |
| `stateful` | Boolean (default: `False`). If `True`, the last state for each sample at index `i` in a batch will be used as initial state for the sample of index `i` in the following batch. |
| `unroll` | Boolean (default: `False`). If `True`, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences. |
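A hedged sketch of how `return_sequences` and `return_state` change what the layer returns; the input shape `(4, 5, 20, 2)` is an illustrative assumption.

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(4, 5, 20, 2).astype("float32")

# return_sequences=True keeps one output per timestep.
seq_layer = tf.keras.layers.ConvLSTM1D(8, 3, padding="same",
                                       return_sequences=True)
print(seq_layer(x).shape)  # (4, 5, 20, 8)

# return_state=True additionally returns the final hidden and cell states.
state_layer = tf.keras.layers.ConvLSTM1D(8, 3, padding="same",
                                         return_state=True)
output, h, c = state_layer(x)
print(output.shape, h.shape, c.shape)  # each (4, 20, 8)
```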
| Call arguments | |
|---|---|
| `inputs` | A 4D tensor. |
| `initial_state` | List of initial state tensors to be passed to the first call of the cell. |
| `mask` | Binary tensor of shape `(samples, timesteps)` indicating whether a given timestep should be masked. |
| `training` | Python boolean indicating whether the layer should behave in training mode or in inference mode. This is only relevant if `dropout` or `recurrent_dropout` are set. |
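A short sketch of the `training` call argument: when `dropout` is set, dropout is applied only for `training=True` calls. The concrete shapes and rates here are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(2, 4, 16, 1).astype("float32")
layer = tf.keras.layers.ConvLSTM1D(8, 3, padding="same", dropout=0.5)

train_out = layer(x, training=True)    # dropout applied
infer_out = layer(x, training=False)   # dropout disabled
print(train_out.shape == infer_out.shape)  # shapes match either way
```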
Input shape:
- If `data_format="channels_first"`: 4D tensor with shape: `(samples, time, channels, rows)`
- If `data_format="channels_last"`: 4D tensor with shape: `(samples, time, rows, channels)`
Output shape:
- If `return_state`: a list of tensors. The first tensor is the output. The remaining tensors are the last states, each 3D tensor with shape `(samples, filters, new_rows)` if `data_format='channels_first'` or shape `(samples, new_rows, filters)` if `data_format='channels_last'`. `rows` values might have changed due to padding.
- If `return_sequences`: 4D tensor with shape `(samples, timesteps, filters, new_rows)` if `data_format='channels_first'` or shape `(samples, timesteps, new_rows, filters)` if `data_format='channels_last'`.
- Else, 3D tensor with shape `(samples, filters, new_rows)` if `data_format='channels_first'` or shape `(samples, new_rows, filters)` if `data_format='channels_last'`.
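The `new_rows` value in the output shapes follows the usual 1D convolution arithmetic. The helper below is not part of the API; it is a hypothetical illustration of how `rows`, `kernel_size`, `strides`, `dilation_rate`, and `padding` determine the output length.

```python
import math

def conv_output_length(rows, kernel_size, strides=1,
                       padding="valid", dilation_rate=1):
    """Illustrative helper: spatial output length of a 1D convolution."""
    # Dilation spreads the kernel taps apart, enlarging its footprint.
    effective_kernel = (kernel_size - 1) * dilation_rate + 1
    if padding == "same":
        # "same" pads so the output length depends only on the stride.
        return math.ceil(rows / strides)
    # "valid": no padding, the window must fit entirely inside the input.
    return (rows - effective_kernel) // strides + 1

print(conv_output_length(32, 3, padding="valid"))  # 30
print(conv_output_length(32, 3, padding="same"))   # 32
```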
References:
- Shi et al., 2015 (the current implementation does not include the feedback loop on the cells output).
| Attributes | |
|---|---|
| `activation` | |
| `bias_constraint` | |
| `bias_initializer` | |
| `bias_regularizer` | |
| `data_format` | |
| `dilation_rate` | |
| `dropout` | |
| `filters` | |
| `input` | Only returns the tensor(s) corresponding to the first time the operation was called. |
| `kernel_constraint` | |
| `kernel_initializer` | |
| `kernel_regularizer` | |
| `kernel_size` | |
| `output` | Only returns the tensor(s) corresponding to the first time the operation was called. |
| `padding` | |
| `recurrent_activation` | |
| `recurrent_constraint` | |
| `recurrent_dropout` | |
| `recurrent_initializer` | |
| `recurrent_regularizer` | |
| `strides` | |
| `unit_forget_bias` | |
| `use_bias` | |
Methods
from_config
@classmethod
from_config(
    config
)
Creates a layer from its config.
This method is the reverse of get_config,
capable of instantiating the same layer from the config
dictionary. It does not handle layer connectivity
(handled by Network), nor weights (handled by set_weights).
| Args | |
|---|---|
| `config` | A Python dictionary, typically the output of `get_config`. |

| Returns | |
|---|---|
| A layer instance. | |
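A sketch of the `get_config` / `from_config` round trip described above: the config restores hyperparameters, but not weights (handled by `set_weights`) or layer connectivity. The concrete argument values are illustrative assumptions.

```python
import tensorflow as tf

layer = tf.keras.layers.ConvLSTM1D(filters=4, kernel_size=3, padding="same")
config = layer.get_config()

# from_config rebuilds an equivalent, freshly initialized layer.
clone = tf.keras.layers.ConvLSTM1D.from_config(config)
print(clone.filters, clone.padding)  # 4 same
```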
get_initial_state
get_initial_state(
batch_size
)
inner_loop
inner_loop(
sequences, initial_state, mask, training=False
)
reset_state
reset_state()
reset_states
reset_states()
symbolic_call
symbolic_call(
*args, **kwargs
)