
Achille et al., "Task2Vec: Task Embedding for Meta-Learning," 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019.

We introduce a method to generate vectorial representations of visual classification tasks which can be used to reason about the nature of those tasks and their relations. Given a dataset with ground-truth labels and a loss function, we process images through a "probe network" and compute an embedding based on estimates of the Fisher information matrix associated with the probe network parameters. This technique, called Task2Vec, provides a fixed-dimensional embedding of the task that is independent of details such as the number of classes and requires no understanding of the class label semantics. We demonstrate that this embedding is capable of predicting task similarities that match our intuition about semantic and taxonomic relations between different visual tasks. We also present a simple meta-learning framework for learning a metric on embeddings that is capable of predicting which feature extractors will perform well on which task, and demonstrate the practical value of this framework for the meta-task of selecting a pre-trained feature extractor for a novel task. Selecting a feature extractor with task embedding yields performance close to the best available feature extractor, with substantially less computational effort than exhaustively training and evaluating all available models.

Follow-up work builds on these embeddings: Task2Sim trains a model that learns to map Task2Vec representations of downstream tasks to simulation parameters for optimal pre-training, and can generalize to novel tasks not encountered during training, a feature of significant practical value.
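The Fisher-information embedding described above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the two-layer "probe network" (`W1` as the shared feature layer, `W2` as a task-specific head) and the toy data are hypothetical. The key point it shows is that the diagonal Fisher estimate is taken only over the shared probe weights, so the embedding dimension is fixed by the probe and does not depend on the number of classes.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def task2vec_diag_fisher(X, y, W1, W2):
    """Sketch of a diagonal-Fisher task embedding.

    Averages the squared gradient of the log-likelihood over the
    dataset, restricted to the shared probe weights W1, so the
    embedding size is fixed regardless of the label set.
    """
    fisher = np.zeros_like(W1)
    for x, label in zip(X, y):
        a = W1 @ x               # pre-activation of the probe layer
        h = np.maximum(a, 0.0)   # ReLU features
        p = softmax(W2 @ h)      # class probabilities from the head
        dz = p.copy()
        dz[label] -= 1.0         # dL/dz for cross-entropy loss
        dh = W2.T @ dz
        dW1 = np.outer(dh * (a > 0), x)  # backprop through the ReLU
        fisher += dW1 ** 2       # squared score ~ Fisher diagonal
    return (fisher / len(X)).ravel()

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
y = rng.integers(0, 3, size=32)            # 3-class toy task
W1 = rng.normal(scale=0.5, size=(8, 5))    # shared probe layer
W2 = rng.normal(scale=0.5, size=(3, 8))    # task-specific head
emb = task2vec_diag_fisher(X, y, W1, W2)
print(emb.shape)  # (40,): fixed by the probe, not by the label set
```

Swapping in a 10-class head (a different `W2`) would leave `emb` the same length, which is the property that makes these embeddings comparable across tasks.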

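The model-selection step can be sketched as a nearest-neighbor search in embedding space. The sketch below uses plain cosine similarity as a stand-in for the learned metric; the extractor names and embedding vectors are invented for illustration.

```python
import numpy as np

def select_extractor(task_emb, model_embs):
    """Pick the candidate feature extractor whose embedding is most
    similar to the task embedding under cosine similarity.

    A simplified stand-in for a learned metric on embeddings: here the
    distance is fixed rather than trained.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    scores = {name: cos(task_emb, e) for name, e in model_embs.items()}
    return max(scores, key=scores.get)

# Hypothetical toy embeddings, for illustration only.
task = np.array([1.0, 0.2, 0.0, 0.5])
models = {
    "resnet_flowers": np.array([0.9, 0.3, 0.1, 0.4]),
    "resnet_textures": np.array([0.0, 1.0, 0.8, 0.0]),
}
print(select_extractor(task, models))  # → resnet_flowers
```

The appeal of this scheme is cost: ranking candidates by embedding distance replaces fine-tuning and evaluating every available model on the new task.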
