Sam Stites

ML Notes: autoencoders and dropout

Some quick reminders:

Question: Is there a notion of things being “approximate” in CT (category theory)? For instance, a neural network starts from an initial point in parameter space and moves towards an ideal, terminal point; that said, it will never actually get there. The same holds for reinforcement learning and for approximate optimization, which is probably the most formal of these three research areas.
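
As a concrete illustration of that asymptotic behavior, here is a minimal sketch (plain Python; the quadratic loss, starting point, and learning rate are all assumptions chosen for illustration) of gradient descent: each step closes a fixed fraction of the remaining gap to the optimum, so the iterate approaches the minimizer but never equals it.

```python
# Minimal sketch: gradient descent on f(theta) = (theta - 3)^2.
# The optimum theta* = 3 is a limit of the iterates, not a state
# the loop ever actually reaches.

def grad(theta):
    # derivative of (theta - 3)^2
    return 2.0 * (theta - 3.0)

theta = 0.0  # initial point in parameter space (assumed)
lr = 0.1     # learning rate (assumed)

for step in range(50):
    theta -= lr * grad(theta)

print(theta)         # ~2.99996: close to 3, but not 3
print(theta == 3.0)  # False: the "terminal" point is approached, never attained
```

Each update here is theta ← 0.8 * theta + 0.6, so the distance to the optimum shrinks by a factor of 0.8 per step; after 50 steps the gap is about 3 * 0.8^50 ≈ 4e-5, small but nonzero, which is exactly the “approximate, never terminal” behavior the question points at.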

Also: