Signal processing Conv1D Keras

Asked by stevGates (Guest)
I am learning Keras on a signal-classification task where all values are binary (0 or 1). The input is a vector of shape (N, 32) and the output has shape (N, 16). The dataset is large, with more than 400K samples.
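As a side note on the shapes involved: Keras `Conv1D` layers expect 3D input of shape `(samples, timesteps, channels)`, while a DataFrame slice is 2D. A minimal sketch with synthetic stand-in data (the real CSV is assumed to have 32 input columns followed by 16 output columns):

```python
import numpy as np

# Synthetic stand-in for the real CSV: 1000 samples, 32 binary inputs, 16 binary outputs
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 32)).astype("float32")
Y = rng.integers(0, 2, size=(1000, 16)).astype("float32")

# Conv1D expects 3D input: (samples, timesteps, channels)
X_conv = X[..., np.newaxis]   # shape (1000, 32, 1)
print(X_conv.shape, Y.shape)  # (1000, 32, 1) (1000, 16)
```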

This is the dataset: LINK

I want to build a CNN model to predict Y from X. I built the following code:

Code:
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split



LinkOfDataset = 'NewNew.csv'
Data = pd.read_csv(LinkOfDataset, encoding='utf-8')

ChallengeLen = 32
ResponseLen = 16

# Intended fractions; note that the two splits below actually yield
# 80% train, 16% test, and 4% validation, not 60/20/20.
NumberOfTraining = 0.6
NumberOfValidation = 0.2
NumberOfTesting = 0.2
AllData = Data

TrainData, TestData = train_test_split(AllData, train_size=0.8)
TestData, ValidationData = train_test_split(TestData, train_size=0.8)

XTrainData = TrainData.iloc[:, :ChallengeLen]
YTrainData = TrainData.iloc[:, ChallengeLen:]

XValidationData= ValidationData.iloc[:, :ChallengeLen]
YValidationData= ValidationData.iloc[:, ChallengeLen:]

XTestData = TestData.iloc[:, :ChallengeLen]
YTestData = TestData.iloc[:, ChallengeLen:]







n_inputs = 32 #input size (num of columns)
n_outputs = 16 #output size (num of columns)



# cnn model
from keras.models import Sequential
from keras.layers import Dense, Conv1D, Dropout, Flatten, MaxPooling1D




# Define model parameters
n_timesteps = 32  # length of the input sequence
n_features = 1    # number of features per timestep
n_outputs = 16    # number of output classes

# Define the model
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(n_timesteps, n_features)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))


model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))


model.add(Conv1D(filters=128, kernel_size=3, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))


model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))



optimizer = tf.keras.optimizers.Adam(0.0001)
model.compile(loss='categorical_crossentropy',
                optimizer=optimizer,
                metrics=['accuracy'])


history = model.fit(XTrainData, YTrainData, epochs=10, batch_size=100,
                   validation_data=(XValidationData, YValidationData))




model.save("PAutoencoder.h5")

# Plotting the training and validation loss
plt.figure(figsize=(12, 5))

plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss') # Now this should work
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()


plt.subplot(1, 2, 2)
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()

plt.tight_layout()
plt.show()

I don't understand why my model did not learn. The output is:

Code:
Epoch 1/10
: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
  super().__init__(activity_regularizer=activity_regularizer, **kwargs)
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 19s 7ms/step - accuracy: 0.0527 - loss: 18883880.0000 - val_accuracy: 0.0116 - val_loss: 433219648.0000
Epoch 2/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 24s 10ms/step - accuracy: 0.0362 - loss: 1131962368.0000 - val_accuracy: 0.0116 - val_loss: 5247283200.0000
Epoch 3/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 22s 9ms/step - accuracy: 0.0307 - loss: 8491829760.0000 - val_accuracy: 0.0116 - val_loss: 21394169856.0000
Epoch 4/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 19s 7ms/step - accuracy: 0.0265 - loss: 30088540160.0000 - val_accuracy: 3.1636e-04 - val_loss: 60751552512.0000
Epoch 5/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 18s 7ms/step - accuracy: 0.0255 - loss: 77107576832.0000 - val_accuracy: 3.1636e-04 - val_loss: 132881571840.0000
Epoch 6/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 17s 7ms/step - accuracy: 0.0243 - loss: 168126316544.0000 - val_accuracy: 0.0116 - val_loss: 278413082624.0000
Epoch 7/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 19s 7ms/step - accuracy: 0.0248 - loss: 339567247360.0000 - val_accuracy: 0.0226 - val_loss: 518753681408.0000
Epoch 8/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 19s 8ms/step - accuracy: 0.0261 - loss: 616867364864.0000 - val_accuracy: 3.1636e-04 - val_loss: 891050786816.0000
Epoch 9/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 20s 8ms/step - accuracy: 0.0248 - loss: 1045693661184.0000 - val_accuracy: 0.0116 - val_loss: 1515715559424.0000
Epoch 10/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 19s 8ms/step - accuracy: 0.0264 - loss: 1706346020864.0000 - val_accuracy: 3.1636e-04 - val_loss: 2313546366976.0000
WARNING:absl:You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
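One quick diagnostic, given the logs above: `categorical_crossentropy` with a `softmax` output assumes each label row is one-hot (exactly one 1 per row). If the 16 output bits are independent, rows will often contain several 1s and the loss is mismatched. A sketch of the check, using a toy array in place of `YTrainData.to_numpy()`:

```python
import numpy as np

# Toy stand-in for the real label matrix YTrainData
Y = np.array([[1, 0, 1, 0],   # two active bits -> multi-label row
              [0, 1, 0, 0]])  # one active bit  -> one-hot row

row_sums = Y.sum(axis=1)
one_hot = bool((row_sums == 1).all())
print(row_sums, one_hot)  # [2 1] False -> categorical_crossentropy is the wrong loss
```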

[Training-curves screenshot: https://i.sstatic.net/JO3tzs2C.png]

How can I get the model to predict Y correctly with a lower loss?
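For reference, a minimal sketch of the likely fix, under the assumption that the 16 outputs are independent binary bits rather than mutually exclusive classes: a `sigmoid` output with `binary_crossentropy`, an explicit `keras.Input` layer (as the warning in the logs suggests), input reshaped to `(N, 32, 1)`, and the native `.keras` save format. Synthetic data stands in for the CSV here:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic binary data standing in for the CSV (shapes match the question)
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 32)).astype("float32")[..., None]  # (N, 32, 1)
Y = rng.integers(0, 2, size=(2000, 16)).astype("float32")             # (N, 16)

model = keras.Sequential([
    keras.Input(shape=(32, 1)),               # explicit Input layer, per the warning
    layers.Conv1D(32, 3, activation="relu"),
    layers.Conv1D(64, 3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(16, activation="sigmoid"),   # independent per-bit probabilities
])

# binary_crossentropy treats each of the 16 outputs as its own binary label
model.compile(loss="binary_crossentropy",
              optimizer=keras.optimizers.Adam(1e-4),
              metrics=["binary_accuracy"])

hist = model.fit(X, Y, epochs=1, batch_size=100, verbose=0)
preds = model.predict(X[:5], verbose=0)

model.save("PAutoencoder.keras")  # native Keras format instead of legacy HDF5
```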