MSCS · George Washington University · May 2026

Chang Li

Machine Learning Engineer & Deep Learning Researcher

MSCS student specializing in computer vision, transfer learning, and reinforcement learning. I build production-grade ML pipelines — from data preprocessing and controlled experimentation through rigorous evaluation and failure-mode analysis. Seeking full-time MLE roles where models ship to production.

91%
Best Test F1 Plant Disease Multi-label Classification
+29pp
OOD Accuracy Gain via Test-Time Training on blur covariates
18K
Training Images ResNet50 apple-leaf disease pipeline
3+
End-to-End ML Projects CV · Reinforcement Learning · TTT
01 /

Technical Skills

ML Frameworks
PyTorch TensorFlow Keras scikit-learn timm
ML Concepts
Transfer Learning Multi-label Classification Deep Q-Network Test-Time Training Data Augmentation
Languages
Python Java JavaScript R SQL
Data & Vision
NumPy Pandas Stable Diffusion 2 OpenAI API Hugging Face
Developer Tools
GitHub Jupyter Notebook Google Colab VS Code MySQL
Compute & Infra
CUDA / GPU Colab A100 Pygame HTML / CSS
02 /

Project Experience

Plant Disease Multi-Label Classification

Nov – Dec 2025

Multi-label deep learning classifier for apple leaf diseases on the Plant Pathology 2021 dataset. Tackled class imbalance, multi-symptom co-occurrence, and uncontrolled field conditions.

  • Built ResNet50 backbone (sigmoid + BCEWithLogitsLoss) on 18K orchard images with binary label encoding for 6 disease categories
  • Systematic augmentation sweep (none → light → medium → heavy) identified medium augmentation as optimal — lifted Test F1 from 0.8808 → 0.9167
  • Hyperparameter tuning across LR (5e-5–5e-4), batch size (32–128), and optimizers (Adam/AdamW/SGD) — Adam + LR 5e-4 + BS 32 won
  • Per-class FN/FP + confusion-pair analysis identified "complex" (F1=0.669) as the primary generalization bottleneck
  • 87.7% of 2,795 test samples predicted perfectly; 0% completely wrong
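The sigmoid + BCE setup above can be sketched numerically. This is an illustrative NumPy-only version (the real pipeline uses PyTorch's `BCEWithLogitsLoss` on ResNet50 logits; the example labels here are hypothetical):

```python
import numpy as np

def bce_with_logits(logits, targets):
    """Numerically stable binary cross-entropy on raw logits,
    averaged over all (sample, class) pairs: the multi-label loss."""
    # max(x,0) - x*t + log(1 + exp(-|x|)) avoids overflow for large |x|
    loss = np.maximum(logits, 0) - logits * targets + np.log1p(np.exp(-np.abs(logits)))
    return loss.mean()

def predict(logits, threshold=0.5):
    """Independent sigmoid per class; one leaf may carry several diseases."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs >= threshold).astype(int)

# One image, 6 disease categories with binary label encoding
labels = np.array([[1, 0, 1, 0, 0, 0]], dtype=float)   # two symptoms co-occur
logits = np.array([[2.1, -1.5, 0.8, -2.0, -1.0, -0.5]])

loss = bce_with_logits(logits, labels)   # scalar training loss
preds = predict(logits)                  # -> [[1 0 1 0 0 0]]
```

Unlike softmax classification, each class is an independent binary decision, which is what lets multi-symptom images receive several positive labels at once.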
PyTorch ResNet50 Transfer Learning Multi-label Data Augmentation Google Colab
0.9167
Test F1
96.5%
Accuracy

Adaptive Image Classifier via Test-Time Training

Mar – May 2025

Addressed performance degradation under covariate shift by implementing Dynamic TTT — enabling a trained model to self-adapt per test sample without full retraining or ground-truth labels.

  • Implemented TTT in PyTorch with self-supervised per-sample gradient updates targeting OOD blur and cartoon-style covariates
  • Improved OOD accuracy from 51.17% → 80.83% (blur) and 45.67% → 71.50% (cartoon) — gains of ~29pp and ~26pp respectively
  • Extended model with auxiliary transformation heads (rotation, blur, brightness) to guide structured feature realignment under covariate shift
  • Automated large-scale synthetic covariate generation via Stable Diffusion 2 (Hugging Face) — benchmarked on Colab A100
  • Designed full adaptive inference pipeline: generation → mini-training set construction → per-sample adaptation → evaluation
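The per-sample adaptation loop can be sketched on a toy linear model with manual gradients. Everything here (shapes, `np.roll` standing in for rotation, learning rate) is a hypothetical stand-in for the actual PyTorch implementation; it only shows the TTT control flow: update the shared extractor on a self-supervised task, then classify.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_FEAT, N_ROT, N_CLS = 8, 4, 4, 3

# Toy stand-ins for the trained network (hypothetical shapes)
W = rng.normal(size=(D_FEAT, D_IN)) * 0.1   # shared feature extractor
U = rng.normal(size=(N_ROT, D_FEAT)) * 0.1  # auxiliary rotation head
V = rng.normal(size=(N_CLS, D_FEAT)) * 0.1  # main classification head

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def aux_loss_and_grads(W, U, x, rot_label):
    """Cross-entropy on the rotation head, with manual gradients
    w.r.t. the shared extractor W and the aux head U."""
    z = W @ x
    p = softmax(U @ z)
    y = np.zeros(N_ROT); y[rot_label] = 1.0
    loss = float(-np.log(p[rot_label] + 1e-12))
    dlogits = p - y                     # d(loss)/d(U @ z)
    dU = np.outer(dlogits, z)
    dW = np.outer(U.T @ dlogits, x)
    return loss, dW, dU

def adapt_and_predict(W, U, V, x, views, lr=0.5, steps=5):
    """Per-sample TTT: refine shared features on the self-supervised
    rotation task, then classify with the adapted extractor."""
    W, U = W.copy(), U.copy()           # adaptation is per-sample, not global
    for _ in range(steps):
        for xv, rot in views:           # (transformed view, known transform label)
            _, dW, dU = aux_loss_and_grads(W, U, xv, rot)
            W -= lr * dW; U -= lr * dU
    return int(np.argmax(V @ (W @ x)))

x = rng.normal(size=D_IN)
views = [(np.roll(x, k), k) for k in range(N_ROT)]  # stand-in for rotated views
pred = adapt_and_predict(W, U, V, x, views)
```

The key property is that no ground-truth class label is needed at test time: the supervision comes entirely from transformations whose labels are known by construction.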
PyTorch Test-Time Training Stable Diffusion 2 Covariate Shift Computer Vision A100 GPU
+29pp
OOD Gain
80.8%
OOD Acc.

Goal-Driven Autonomous Agent via Deep Q-Network

Apr – May 2025

Built a DQN-powered autonomous agent to play Battle City Tanks — a stochastic, dynamic combat environment requiring coordinated attack-defense decision-making.

  • Designed state vector encoding player/enemy positions and health; normalized for stable training across dynamic game states
  • Implemented DQN with experience replay, epsilon-greedy exploration (ε decay), and target network stabilization (γ=0.95)
  • Engineered multi-condition reward function: game-over penalties, enemy-kill bonuses, proximity rewards for strategic behavior
  • Hybrid rule-based fallback (shooting heuristics + nearest-enemy targeting) supplements RL policy during early exploration
  • Achieved consistent single-enemy victories; diagnosed state-space expansion and reward sparsity as multi-enemy scaling bottleneck
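The replay, exploration, and target-network pieces above fit together as follows; a minimal sketch of the standard mechanics (names and capacity are illustrative, not the project's exact code):

```python
import random
from collections import deque
import numpy as np

GAMMA = 0.95

class ReplayBuffer:
    """Fixed-size experience replay: decorrelate updates by sampling
    random minibatches of past transitions."""
    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)

    def push(self, s, a, r, s_next, done):
        self.buf.append((s, a, r, s_next, done))

    def sample(self, k):
        return random.sample(self.buf, k)

def epsilon_greedy(q_values, epsilon, rng=random):
    """Explore with probability epsilon, otherwise exploit argmax Q."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return int(np.argmax(q_values))

def td_targets(batch, q_target_fn):
    """y = r + gamma * max_a' Q_target(s', a'), with no bootstrap at
    terminal states (the game-over penalty is the final reward)."""
    ys = []
    for s, a, r, s_next, done in batch:
        bootstrap = 0.0 if done else GAMMA * float(np.max(q_target_fn(s_next)))
        ys.append(r + bootstrap)
    return np.array(ys)
```

Evaluating the bootstrap term with a slowly updated target network, rather than the online network, is what stabilizes training against the moving-target problem.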
TensorFlow/Keras Deep Q-Network Reinforcement Learning Reward Shaping Pygame Python
DQN
Architecture
ε-greedy
Exploration
03 /

Blog Posts

Nov 9, 2025 · Computer Vision
Camera Calibration: When Math Meets Real-World Uncertainty

Explored camera calibration stability through leave-one-out cross-validation and Monte Carlo noise testing. Tested on two real Boston webcams — one yielding a 6px reprojection error (good), the other 48px (needs work). Key takeaway: OpenCV converges, but convergence is only as good as the inputs.

w/ Ziang Chen
OpenCV Camera Calibration Monte Carlo Python
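The Monte Carlo idea from the post can be sketched in a few lines: re-measure reprojection error under synthetic corner-detection noise and look at the spread. This is an illustrative NumPy version, not the post's actual code; the noise level and corner coordinates are made up.

```python
import numpy as np

def rms_reprojection_error(projected, detected):
    """RMS pixel distance between reprojected and detected corners."""
    d = projected - detected
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

def monte_carlo_error(projected, detected, sigma_px=1.0, trials=1000, seed=0):
    """Perturb detections with Gaussian pixel noise many times: a
    stable calibration shows a tight error distribution."""
    rng = np.random.default_rng(seed)
    errs = [rms_reprojection_error(
                projected, detected + rng.normal(0, sigma_px, detected.shape))
            for _ in range(trials)]
    return float(np.mean(errs)), float(np.std(errs))

# Hypothetical corner coordinates (pixels)
corners = np.array([[100.0, 50.0], [220.0, 48.0], [101.0, 160.0]])
mean_err, std_err = monte_carlo_error(corners, corners, sigma_px=1.0)
```

A wide error distribution under small input noise is exactly the "convergence is only as good as the inputs" failure mode: the optimizer still returns an answer, but it is not a trustworthy one.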
Oct 9, 2025 · Computer Vision
CLIP Text-Image Alignment: How Language and Vision Meet

Investigated how OpenAI's CLIP (ViT-B/32) aligns text and images using cosine similarity. Single-image matching showed that structured prompts ("A photo of a …") consistently outperform bare keywords. Multi-image experiments confirmed: richer, descriptive captions boost similarity scores by up to +0.045.

w/ Ziang Chen
CLIP ViT-B/32 Text-Image Alignment Python
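The scoring mechanics behind the post reduce to cosine similarity over normalized embeddings. A minimal sketch with toy vectors standing in for ViT-B/32 embeddings (illustrative only; real CLIP embeddings are 512-dimensional and come from the model's text and image encoders):

```python
import numpy as np

def cosine_similarity(a, b):
    """CLIP scores a text-image pair by the cosine of the angle
    between their L2-normalized embeddings."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(a @ b)

def rank_prompts(image_emb, prompt_embs):
    """Return prompt indices sorted best-match first, plus raw scores."""
    sims = [cosine_similarity(image_emb, p) for p in prompt_embs]
    order = sorted(range(len(sims)), key=lambda i: -sims[i])
    return order, sims

# Toy stand-ins: the structured prompt's embedding lies closer to the image's
image = np.array([0.9, 0.1, 0.3])
prompts = [np.array([0.8, 0.2, 0.3]),   # e.g. "A photo of a dog"
           np.array([0.1, 0.9, 0.2])]   # e.g. "dog"
order, sims = rank_prompts(image, prompts)
```

Because both encoders map into the same space, prompt wording shifts the text embedding, which is why structured prompts can move the score even when the image is fixed.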
Dec 6, 2025 · Deep Learning
Plant Disease Classification with Deep Learning

Classified apple leaf diseases on the Plant Pathology 2021 dataset using a ResNet50 backbone, achieving a 0.9167 F1 score through systematic augmentation sweeps and hyperparameter tuning. Multi-disease cases with overlapping symptoms remained the primary generalization bottleneck.

PyTorch ResNet50 Transfer Learning Multi-label
Sep 11, 2025 · AI Research
Can AI See Colors Better Than Me?

Tested ChatGPT-5's vision on 11 Ishihara color-blindness plates — it correctly identified numbers, animals, and two-digit patterns, but hallucinated content on "trick" plates that contain nothing. Surprisingly good pattern recognition, but it struggles to confidently say "nothing is there."

ChatGPT-5 Vision Ishihara Test AI Evaluation
04 /

Education

George Washington University
M.S. Computer Science
Aug 2024 – May 2026 (Expected) · Washington, D.C.
Machine Learning
Neural Networks & Deep Learning
Introduction to Computer Vision
Artificial Intelligence
Design & Analysis of Algorithms
Database Management Systems
University of Pittsburgh
B.S. Computer Science, Minor in Mathematics
Aug 2019 – May 2024 · Pittsburgh, PA
🏆 Dean's List
Computer Science Core Curriculum
Mathematics Minor
Dean's List Honors
05 /

Experience

Undergraduate Teaching Assistant
University of Pittsburgh
Aug 2023 – Dec 2023
  • Led weekly lab sessions teaching Python programming and core computational problem-solving for Introduction to Computing for Scientists
  • Guided students through algorithm design, debugging workflows, and end-to-end solution development for scientific applications
  • Developed supplemental instructional materials and held office hours to reinforce programming fundamentals and analytical thinking
06 /

Let's Connect

Looking for a full-time MLE opportunity.

Actively seeking full-time Machine Learning Engineer positions. I bring hands-on experience building CV pipelines, RL agents, and robust training workflows — eager to contribute to teams shipping real ML systems.

Email
cl885@gwmail.gwu.edu
Phone
+1 (917) 794-8865
GitHub
github.com/chl0817
Currently Open To
  • Machine Learning Engineer (Full-Time)
  • ML Research Engineer
  • Computer Vision Engineer
  • Applied Scientist / ML Scientist
  • Open to Relocate
  • Start date: May – June 2026
# Research areas I'm excited about
interests = {
    "vision": ["OOD robustness", "SSL"],
    "rl": ["multi-agent", "sim2real"],
    "infra": ["scalable training", "eval"],
}