Wrong shape at fully connected layer: mat1 and mat2 shapes cannot be multiplied

Thread starter: Carlos Vega (Guest)
I have the following model. It is training well. The shapes of my splits are:

  • X_train (98, 1, 40, 844)
  • X_val (21, 1, 40, 844)
  • X_test (21, 1, 40, 844)

However, when I attempt to interpret the model on the validation set, I get the following error at x = F.relu(self.fc1(x)) in forward.

Code:
# Create a DataLoader for the validation set
valid_dl = learn.dls.test_dl(X_val, y_val)

# Get predictions and interpret them on the validation set
interp = ClassificationInterpretation.from_learner(learn, dl=valid_dl)

RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x2110 and 67520x128)
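A detail worth noticing in the error message (my own observation, not something the traceback states): 32 × 2110 = 67,520, which is exactly fc1's in_features. That pattern typically appears when a single unbatched sample of shape (1, 40, 844) reaches the model: Conv2d accepts 3-D input, so after conv2 the tensor is (32, 10, 211), and x.view(x.size(0), -1) then mistakes the 32 channels for the batch dimension. The arithmetic can be checked without touching torch:

```python
# Shape bookkeeping for the model above (pure-Python sketch, no torch needed).
# conv1/conv2 use kernel_size=3, stride=1, padding=1, so they preserve H and W;
# each MaxPool2d(kernel_size=2, stride=2) halves H and W (floor division).

def after_pool(h, w):
    return h // 2, w // 2

h, w = 40, 844
h, w = after_pool(h, w)  # after conv1 + pool -> (20, 422)
h, w = after_pool(h, w)  # after conv2 + pool -> (10, 211)

channels = 32
flattened = channels * h * w
print(flattened)    # 67520 -> fc1's expected in_features

# If a sample arrives UNBATCHED as (1, 40, 844), the conv output is
# (32, 10, 211) and x.view(x.size(0), -1) yields (32, 2110):
print(channels, h * w)  # 32 2110 -> matches "mat1" (32x2110) in the error
```

So the mismatch is consistent with the validation DataLoader feeding items without a batch dimension, rather than with the layer sizes being wrong.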

I have checked dozens of similar questions but I am unable to find a solution.

Code:
class DraftCNN(nn.Module):
    def __init__(self):
        super().__init__()  # was super(AudioCNN, self).__init__(): AudioCNN is undefined in this class
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1)

        # Calculate flattened size based on input dimensions
        with torch.no_grad():
            dummy_input = torch.zeros(1, 1, 40, 844)  # shape of one input sample
            dummy_output = self.pool(self.conv2(self.pool(F.relu(self.conv1(dummy_input)))))
            self.flattened_size = dummy_output.view(dummy_output.size(0), -1).size(1)

        self.fc1 = nn.Linear(self.flattened_size, 128)
        self.fc2 = nn.Linear(128, 4)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(x.size(0), -1)  # Flatten the output of convolutions
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

I tried changing the forward function and the shapes of the layers but I keep getting the same error.
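One defensive change worth trying (a sketch of my own, not a confirmed fix): make forward tolerant of unbatched 3-D input before flattening, so a stray (1, 40, 844) item from the DataLoader is promoted to a batch of one instead of having its channels treated as the batch dimension. The hard-coded flattened size below (32 * 10 * 211 = 67520) just replaces the dummy-input calculation for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DraftCNN(nn.Module):
    """Same architecture as above, plus a guard for unbatched input."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1)
        self.flattened_size = 32 * 10 * 211  # 67520, same value the dummy pass computes
        self.fc1 = nn.Linear(self.flattened_size, 128)
        self.fc2 = nn.Linear(128, 4)

    def forward(self, x):
        if x.dim() == 3:       # (C, H, W) slipped through -> add the batch dim
            x = x.unsqueeze(0)
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)

model = DraftCNN().eval()
with torch.no_grad():
    out = model(torch.zeros(1, 40, 844))  # an unbatched sample no longer crashes
print(out.shape)  # torch.Size([1, 4])
```

If this guard makes the error go away, the real fix is upstream: inspect one batch from valid_dl and make sure it has the same (N, 1, 40, 844) shape the training DataLoader produces.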