LSTM problem

Hello,
I am using an LSTM in a neural network, but I have problems with tensor dimensions and run time, and I don't see where the problem is.
LSTM works with 3D tensors, so I reshaped my training data.
We initially have a matrix of shape (150000, 20), which I reshape into torch.Size([1, 150000, 20]).
Ytrain is a tensor of shape (150000) (I also tested (1, 150000) and (150000, 1)).
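To be concrete, the reshaping looks roughly like this (random placeholder data here, only to show the shapes I am working with):

import torch

Xtrain = torch.randn(150000, 20)        # placeholder for my real (150000, 20) feature matrix
Ytrain = torch.randn(150000)            # placeholder for my real targets
trainX = Xtrain.reshape(1, 150000, 20)  # 3D input for the LSTM: (batch, seq_len, input_size)
print(trainX.shape)                     # torch.Size([1, 150000, 20])
print(Ytrain.shape)                     # torch.Size([150000])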
I get a warning during the run:
UserWarning: Using a target size (torch.Size([1, 150000])) that is different to the input size (torch.Size([1, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  return F.mse_loss(input, target, reduction=self.reduction)
That's why I guess it's linked to the dimensions.
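The warning itself can be reproduced from just the two shapes in the message, independently of my model (minimal sketch):

import torch
import torch.nn.functional as F

out = torch.zeros(1, 1)          # same shape as the model output reported in the warning
target = torch.zeros(1, 150000)  # same shape as my target
loss = F.mse_loss(out, target)   # emits the UserWarning, then broadcasts out against target
print(loss.shape)                # torch.Size([]) - a single scalar loss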
Here is the code to define and train the model:
######### definition of the lstm
class LSTM(nn.Module):
    # ... (class body omitted here)

# TRAINING
num_epochs = 5          ### add a zero
learning_rate = 0.01
input_size = 20
hidden_size = 2
num_layers = 1
seq_length = 1
num_classes = 1

lstm = LSTM(num_classes, input_size, hidden_size, num_layers)

criterion = torch.nn.MSELoss()  # mean-squared error for regression
optimizer = torch.optim.Adam(lstm.parameters(), lr=learning_rate)
optimizer = torch.optim.SGD(lstm.parameters(), lr=learning_rate)  # note: this overrides the Adam optimizer above
# Train the model
for epoch in range(num_epochs):
    outputs = lstm(trainX)
    optimizer.zero_grad()
    # compute the loss
    loss = criterion(outputs, trainY)
    loss.backward()
    optimizer.step()
    if epoch % 100 == 0:
        print("Epoch: %d, loss: %1.5f" % (epoch, loss.item()))
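For context, the LSTM class itself follows the usual nn.LSTM plus final nn.Linear pattern; this is only a simplified sketch of that pattern (not my exact code), with the constructor arguments matching the call above:

import torch
import torch.nn as nn

class LSTM(nn.Module):
    def __init__(self, num_classes, input_size, hidden_size, num_layers):
        super().__init__()
        self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, seq_len, input_size)
        out, (h_n, c_n) = self.lstm(x)
        # last hidden state: (batch, hidden_size) -> (batch, num_classes)
        return self.fc(h_n[-1])

With this kind of forward pass, a (1, 150000, 20) input produces a (1, 1) output, which seems to match the input size reported in the warning.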