Setting goals and mapping the space

My friend Arindam Biswas and I set ourselves a goal: submit a paper to NeurIPS 2024, with a deadline of the 17th of May. It was the 10th of February and we didn't even have a topic yet.

So naturally, we started looking for areas of common interest to tackle. I really wanted to do something in Neural Architecture Search, Meta-Learning, Transfer Learning, or Reinforcement Learning. Arindam is very interested in the mathematical properties of neural networks, as well as pruning and compression methods for network representation and information extraction.

What is Neural Architecture Search?

To really understand where to start, I stumbled upon a great survey co-authored by Frank Hutter, one of the leading experts in the field. It sums up very well what has been tried and gives a clear map of the problem space.

How about changing the representation of the models?

Using only the signs of the weights instead of their full values is an interesting proposal for transfer learning, and it enables knowledge transfer between sparse neural networks.
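To make the idea concrete, here is a minimal sketch (my own toy example, not the method from any specific paper) of compressing a layer's weights down to their signs plus a single per-layer scale, in the spirit of binary-weight approximations:

```python
import numpy as np

# Toy example: one dense layer with random float32 weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)).astype(np.float32)

# Keep only the signs (values in {-1, 0, +1}), storable in 1-2 bits each,
# plus one scalar scale per layer (the mean absolute weight value).
signs = np.sign(W)
scale = np.abs(W).mean()

# Reconstructed weights to transfer: storage drops from 32 bits per weight
# to roughly 1 bit per weight plus a single float.
W_approx = scale * signs
```

The reconstruction loses magnitude information but preserves the connectivity pattern and the direction of each weight, which is the part the transfer-learning proposal bets on.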

Frameworks as a sane code base

I’m thinking of using the Neural Network Intelligence (NNI) framework, by Microsoft, to set up the experiments.
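For reference, an NNI experiment can be driven by a YAML configuration along these lines (a sketch assuming NNI's v2 config format; `trial.py` and `search_space.json` are placeholder file names, and the trial script itself would report metrics back to NNI):

```yaml
# Hypothetical NNI experiment config (v2 YAML format).
searchSpaceFile: search_space.json   # placeholder: hyperparameter search space
trialCommand: python3 trial.py       # placeholder: script that trains one candidate
trialConcurrency: 2                  # number of trials to run in parallel
maxTrialNumber: 20                   # stop after this many trials
tuner:
  name: TPE                          # one of NNI's built-in tuners
trainingService:
  platform: local                    # run trials on the local machine
```

The appeal of a framework like this is that the search strategy (tuner), the trial code, and the execution platform are decoupled, so we could swap in different search algorithms without rewriting the experiment.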