Large deep-learning foundation models have become the dominant paradigm across many fields. Their ability to extract useful representations allows for efficient fine-tuning on target tasks. Even more interestingly, empirical scaling laws let us predict, with surprising accuracy, how a model's performance will improve as we increase the model size or the amount of training data, without training the larger models themselves; this predictability motivated the building of ever-larger language and vision models. I am interested in whether similar scaling laws can be found for immunology task modelling. My work aims to establish the data requirements for reliable immune modelling and to guide more efficient therapeutic antibody discovery.
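To make the scaling-law idea concrete, the sketch below fits the commonly used saturating power law L(D) = a·D^(-b) + c to losses measured at small dataset sizes, then extrapolates to a larger size. All numbers here are synthetic and the parameters a, b, c are assumed for illustration, not taken from any real immunology experiment; the irreducible-loss term c is treated as known so the fit reduces to a least-squares line in log-log space.

```python
import numpy as np

# Synthetic "observed" losses generated from an assumed power law
# L(D) = a * D^(-b) + c, standing in for results of small-scale runs.
a_true, b_true, c_true = 5.0, 0.3, 0.5
sizes = np.array([1e3, 3e3, 1e4, 3e4, 1e5])  # hypothetical dataset sizes
losses = a_true * sizes ** (-b_true) + c_true

# With the irreducible loss c assumed known, the law is linear in log-log:
# log(L - c) = log(a) - b * log(D), so a least-squares line recovers a, b.
slope, intercept = np.polyfit(np.log(sizes), np.log(losses - c_true), 1)
b_fit, a_fit = -slope, np.exp(intercept)

# Extrapolate to a dataset 10x larger than any point used in the fit --
# this is the "predict without training" step that scaling laws enable.
predicted_loss = a_fit * 1e6 ** (-b_fit) + c_true
```

In practice one would fit c jointly (e.g. with `scipy.optimize.curve_fit`) and validate the extrapolation against held-out large-scale runs; the point here is only that a handful of cheap small-scale measurements can, under the power-law assumption, forecast performance at scales not yet trained.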