I implemented the Vision Transformer (ViT) architecture from scratch, visualizing the embedding creation process and training the model on the Food-101 dataset for image classification. To further enhance performance, I incorporated transfer learning by leveraging pretrained weights from PyTorch models, which significantly improved the overall accuracy.
Domain: Paper implementation, Computer Vision, Deep Learning, PyTorch.
Code: Link
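The embedding-creation step the first project visualizes is usually implemented as a strided convolution that cuts the image into patches and projects each one to a token. As a minimal sketch (not the project's exact code; patch size 16 and embedding dimension 768 are assumed here, the ViT-Base defaults):

```python
import torch
from torch import nn

class PatchEmbedding(nn.Module):
    """Turn an image into a sequence of patch embeddings via a strided conv."""
    def __init__(self, in_channels=3, patch_size=16, embed_dim=768):
        super().__init__()
        # A conv with kernel_size == stride == patch_size extracts
        # non-overlapping patches and projects them in one operation.
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        self.flatten = nn.Flatten(start_dim=2)  # flatten the spatial grid

    def forward(self, x):
        x = self.proj(x)                        # (B, C, H, W) -> (B, D, H/P, W/P)
        return self.flatten(x).permute(0, 2, 1)  # -> (B, num_patches, D)

img = torch.randn(1, 3, 224, 224)      # one dummy 224x224 RGB image
tokens = PatchEmbedding()(img)
print(tokens.shape)                     # torch.Size([1, 196, 768])
```

A 224×224 image with 16×16 patches yields 14×14 = 196 tokens, each a 768-dimensional vector, before the class token and position embeddings are added.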
I utilized pretrained EfficientNet-B0 weights to classify specific subcategories of the Food-101 dataset, testing the model’s generalization on real-world data beyond the test set. To streamline the workflow, I developed custom wrappers that simplified data preparation, training, evaluation, and visualization, making the experimentation process more efficient and reproducible.
Domain: Transfer learning, Computer Vision, Deep Learning, PyTorch.
Code: Link
I built and compared three classification models on the FashionMNIST dataset, modularizing the code so the same training and testing loops could be reused across all of them. The implementation was device-agnostic, ensuring compatibility across CPUs and GPUs, and included timing analysis for both the training and testing phases.
Domain: Deep Learning, PyTorch.
Code: Link
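The device-agnostic pattern mentioned above usually means resolving the device once and moving both model and data to it inside the loop. A minimal sketch of that setup, with a tiny stand-in model and the timing measurement folded in (not the project's exact code):

```python
import torch
from torch import nn
from timeit import default_timer as timer

# Device-agnostic setup: use a GPU if one is available, otherwise the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(            # tiny stand-in; the project compares 3 models
    nn.Flatten(),
    nn.Linear(28 * 28, 10),       # FashionMNIST: 28x28 grayscale, 10 classes
).to(device)

def train_step(model, X, y, loss_fn, optimizer):
    """One training step that works unchanged on CPU or GPU."""
    X, y = X.to(device), y.to(device)   # move the batch to the same device
    loss = loss_fn(model(X), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Time a single step on dummy data, mirroring the train/test timing analysis.
X = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
start = timer()
loss = train_step(model, X, y, nn.CrossEntropyLoss(),
                  torch.optim.SGD(model.parameters(), lr=0.1))
elapsed = timer() - start
```

Because the device is resolved in one place, the same script runs on either backend without edits, which is what makes cross-model timing comparisons fair.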
An adversarial tic-tac-toe implementation in C++ using the minimax algorithm, where the computer plays optimally against a human opponent.
Domain: Artificial intelligence, Algorithm.
Code: Link
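The project itself is written in C++; purely as an illustration of the minimax recursion it uses, here is a compact Python sketch that scores tic-tac-toe positions by exhaustively exploring the game tree:

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position: +1 if X can force a win, -1 if O can, 0 for a draw.
    X is the maximizing player, O the minimizing player."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0                                 # board full: draw
    scores = []
    for i, cell in enumerate(board):
        if cell == ' ':
            board[i] = player                    # try the move
            scores.append(minimax(board, 'O' if player == 'X' else 'X'))
            board[i] = ' '                       # undo it
    return max(scores) if player == 'X' else min(scores)

# With perfect play from both sides, tic-tac-toe is a draw.
print(minimax([' '] * 9, 'X'))  # 0
```

The adversarial part is the alternation between `max` and `min`: each player assumes the opponent will respond optimally, so the root score reflects the game's value under perfect play.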
A Star Wars-inspired short film made in Blender. The film shows a futuristic interplanetary vehicle navigating through space, passing near a planet.
Domain: Short Film, 3D graphics, Blender, Animation.
Source: Link
A small cannon model made in Blender. The model demonstrates basic use of the Graph Editor to animate the cannon firing.
Domain: 3D graphics, Graph editor, Blender, Animation.
Source: Link