Sam Stites

Clipping, that extra bump for loss criteria

For the past couple of days I’ve been writing a transfer-learning workflow for the dog breed identification Kaggle competition. This was done solely in PyTorch and torchvision, heavily referencing the beginner PyTorch tutorials, and you can find the code at stites/circus. This code, which doesn’t do anything fancier than a fastai notebook, was able to produce results that placed in the 50th percentile of the competition. That isn’t enough to qualify for anything like a bronze medal on Kaggle (there are ~1k participants), but as a tool for familiarizing myself with a CV pipeline from scratch, I would call it a success.
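The pipeline code itself lives in the repo rather than in this post, but the shape of a torchvision transfer-learning setup like this one is roughly the following. This is a minimal sketch: the ResNet-50 backbone, the frozen feature extractor, and the 120-class head are illustrative assumptions, not details pulled from stites/circus.

```python
import torch.nn as nn
from torchvision import models

# Minimal transfer-learning sketch: a pretrained ImageNet backbone with a
# fresh classification head for the 120 dog breeds in the competition.
model = models.resnet50(pretrained=True)

# Freeze the pretrained feature extractor so only the new head trains.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a 120-way classifier.
model.fc = nn.Linear(model.fc.in_features, 120)
```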

There are four reasons why this workflow was successful, and a plethora of improvements still to be made. First, the reasons it was successful:

That said, there are a number of places where this could be improved:

I think the most impressive thing was that, as alluded to in the title, clipping brought the Kaggle loss down from ~1.0 to the current 0.5843.
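The post doesn’t show the clipping code, but the usual trick for a multiclass log-loss leaderboard is to clip the predicted probabilities away from 0 and 1 before writing the submission, since a single confident wrong answer otherwise contributes an unbounded penalty. Here is a minimal sketch of that idea; the epsilon value is an assumption, not the one used in stites/circus.

```python
import numpy as np

def clip_predictions(probs: np.ndarray, eps: float = 0.01) -> np.ndarray:
    """Clip softmax probabilities into [eps, 1 - eps] before submission.

    Log loss penalizes confident wrong predictions without bound
    (log(0) -> -inf), so bounding every probability caps the worst-case
    per-sample penalty at -log(eps).
    """
    clipped = np.clip(probs, eps, 1 - eps)
    # Renormalize each row so the class probabilities still sum to 1.
    return clipped / clipped.sum(axis=1, keepdims=True)
```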