Rock-Paper-Scissors Classifier
A deep learning image classifier that identifies hand gestures for Rock, Paper, and Scissors with over 99% validation accuracy.
Description
Built using TensorFlow and Keras, this CNN-based model classifies images of hand gestures into rock, paper, or scissors.
The dataset is sourced from a public GitHub repository and processed using custom data generators with augmentation.
Model architecture includes multiple Conv2D and MaxPooling layers, followed by dropout and dense layers for classification.
Training over 75 epochs reaches 99.22% validation accuracy, with the training history visualized for performance tracking.
Includes an image prediction module that classifies uploaded gesture images with the trained model.
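The architecture described above can be sketched in Keras as follows; the specific layer counts, filter sizes, input resolution, and dropout rate are illustrative assumptions, not the project's exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(150, 150, 3), num_classes=3):
    """Stacked Conv2D/MaxPooling blocks, dropout, and a softmax head."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D(2, 2),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D(2, 2),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D(2, 2),
        layers.Flatten(),
        layers.Dropout(0.5),  # regularization before the dense head
        layers.Dense(512, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # rock / paper / scissors
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Softmax over three units yields a probability per gesture class, which pairs with the categorical cross-entropy loss used during training.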
Features
Dataset Processor
Downloads and prepares the Rock-Paper-Scissors dataset with validation split and augmentation.
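A minimal sketch of the generator setup, assuming a Keras `ImageDataGenerator` with a class-per-subdirectory layout; the directory name, image size, and augmentation parameters are illustrative:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def make_generators(data_dir, img_size=(150, 150), batch_size=32):
    """Training/validation generators with augmentation and a 20% hold-out.

    `data_dir` is assumed to contain one subdirectory per class
    (rock/, paper/, scissors/).
    """
    datagen = ImageDataGenerator(
        rescale=1.0 / 255,       # normalize pixel values to [0, 1]
        rotation_range=20,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True,
        validation_split=0.2,    # reserve 20% of each class for validation
    )
    train_gen = datagen.flow_from_directory(
        data_dir, target_size=img_size, batch_size=batch_size,
        class_mode="categorical", subset="training")
    val_gen = datagen.flow_from_directory(
        data_dir, target_size=img_size, batch_size=batch_size,
        class_mode="categorical", subset="validation")
    return train_gen, val_gen
```

Note that the augmentation transforms are applied only on the fly during training; the validation subset still passes through the same rescaling.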
Model Builder
Defines and trains a CNN model with dropout and softmax output for gesture classification.
Training History Plot
Visualizes accuracy and loss trends across 75 epochs.
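The history plot might look like the sketch below, which reads the metric lists Keras records in `History.history` (the output filename is an assumption):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

def plot_history(history, out_path="training_history.png"):
    """Plot accuracy and loss curves from a Keras History.history dict."""
    epochs = range(1, len(history["accuracy"]) + 1)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
    ax1.plot(epochs, history["accuracy"], label="train")
    ax1.plot(epochs, history["val_accuracy"], label="validation")
    ax1.set_title("Accuracy")
    ax1.set_xlabel("Epoch")
    ax1.legend()

    ax2.plot(epochs, history["loss"], label="train")
    ax2.plot(epochs, history["val_loss"], label="validation")
    ax2.set_title("Loss")
    ax2.set_xlabel("Epoch")
    ax2.legend()

    fig.savefig(out_path)
    return out_path
```

Called as `plot_history(model.fit(...).history)`, this writes both curves side by side for a quick check that validation metrics track the training metrics.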
Image Predictor
Predicts the class of an uploaded image with the trained model and displays the result using matplotlib.
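A prediction helper along these lines would load one image, apply the same rescaling used in training, and map the argmax of the softmax output to a label (the class ordering below assumes the alphabetical order `flow_from_directory` uses):

```python
import numpy as np
from tensorflow.keras.preprocessing import image as keras_image

# Alphabetical, matching flow_from_directory's default class indexing (assumption).
CLASS_NAMES = ["paper", "rock", "scissors"]

def predict_gesture(model, img_path, img_size=(150, 150)):
    """Return (label, confidence) for a single image file."""
    img = keras_image.load_img(img_path, target_size=img_size)
    arr = keras_image.img_to_array(img) / 255.0      # match the training rescale
    probs = model.predict(arr[np.newaxis, ...])[0]   # add a batch dimension
    idx = int(np.argmax(probs))
    return CLASS_NAMES[idx], float(probs[idx])
```

Preprocessing at inference must mirror training exactly (same resize and rescale), otherwise the reported accuracy will not transfer to uploaded images.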
Performance Metrics
Achieves 99.22% validation accuracy with low validation loss, indicating robust classification on held-out data.