How to calculate GPU memory requirements for ultra-high-resolution image classification?

I am working on deep CNN image classification for images that are natively about 3200×3200 pixels at full resolution. Although our research will explore downsampling and windowing methods, we also want to compare against results from using the full-resolution images natively as one of the options. We intend to evaluate various SOTA models, including InceptionResNetV2 and similar.

  1. Is it possible to anticipate the GPU memory requirements for ultra-high-res images?

  2. Can the supported batch size be estimated in advance from the model and the image resolution?

  3. Are there publicly available ultra-high-resolution image datasets for testing?

  4. Is an RTX 8000 (48 GB VRAM) likely to be up to the task? An RTX 6000 (24 GB VRAM)?
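Regarding questions 1 and 2, a rough back-of-envelope answer is possible: training memory is approximately parameter memory (weights, gradients, optimizer state) plus activation memory, which scales with batch size and input resolution. Below is a minimal sketch under stated assumptions: float32 tensors (4 bytes each), Adam-style optimizer (~2× parameter memory for moment estimates), and a hypothetical activations-to-input ratio of 40×, which varies widely per architecture and is only an illustrative guess. The 55M parameter count is roughly InceptionResNetV2's size.

```python
def estimate_training_memory_gb(n_params, activation_elems_per_image,
                                batch_size, bytes_per_elem=4):
    """Very rough lower bound on training memory in GiB.

    Ignores framework overhead, workspace buffers (e.g. cuDNN),
    and memory fragmentation, all of which add on top.
    """
    param_mem = n_params * bytes_per_elem          # weights
    grad_mem = n_params * bytes_per_elem           # gradients
    optim_mem = 2 * n_params * bytes_per_elem      # Adam moment estimates
    # Activations are kept alive for the backward pass, so they
    # scale linearly with batch size.
    act_mem = activation_elems_per_image * batch_size * bytes_per_elem
    return (param_mem + grad_mem + optim_mem + act_mem) / 2**30

# Illustrative numbers (assumptions, not measurements):
input_elems = 3200 * 3200 * 3            # one full-resolution RGB image
act_per_image = 40 * input_elems         # hypothetical 40x activation blowup
mem = estimate_training_memory_gb(55_000_000, act_per_image, batch_size=4)
print(f"~{mem:.1f} GiB")
```

Under these assumptions a batch of 4 already approaches 20 GiB, dominated by activations rather than weights, which is why ultra-high-resolution inputs force tiny batch sizes (or tricks like gradient checkpointing and mixed precision) even on a 48 GB card. To answer question 2, invert the formula: solve for the largest batch size whose estimate fits under your card's VRAM, then verify empirically.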
