	iter.start = __vmalloc(file_size, GFP_KERNEL | __GFP_NOWARN |
			       __GFP_NORETRY);
	if (!iter.start) {
		dev_warn(pfdev->dev, "failed to allocate devcoredump file\n");
		return;
	}
...
	memset(iter.hdr, 0, iter.data - iter.start);
Why are we using __GFP_NOWARN and __GFP_NORETRY? Why not plain vmalloc?
Also, why vmalloc instead of vzalloc? (Or adding __GFP_ZERO to the list of __vmalloc flags if __GFP_NOWARN/__GFP_NORETRY are really needed?) Are there relevant performance or security considerations?
I borrowed this code from Etnaviv a while ago, and the same doubt struck me then. My understanding of the intended behaviour is that because the dump file might be huge, we don't want the memory manager to trigger the OOM killer and disrupt a number of running processes just for the sake of a debug feature. Also, since the code already handles allocation failure by refusing to generate a dump, there's no need for the allocator to print its own warning.
So I guess it boils down to: if there's enough memory to allocate a huge dump file, go ahead; otherwise, don't reclaim other processes' pages for something that isn't essential.
I don't see much use for __GFP_ZERO in this case, because the dump file gets memcpy'd with the contents of every single bo so whatever the original contents of the memory were at the time of the allocation, they're overwritten immediately.
I think that's a reasonable explanation, bearing in mind I'm firmly a userspace person ;-)
Please add a comment explaining the assumptions here, though, because the code will live longer than this ML thread.
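For reference, a rough sketch of what such a comment might look like on the allocation quoted above; the wording is only a suggestion, summarising the reasoning from this thread rather than anything already merged:

	/*
	 * The dump file can be very large, so don't let a failed allocation
	 * trigger the OOM killer or print its own warning just for a debug
	 * feature: __GFP_NORETRY makes the allocator give up quickly and
	 * __GFP_NOWARN keeps it quiet, since the failure is handled below by
	 * skipping the dump. No __GFP_ZERO either: the header is memset()
	 * and the rest of the buffer is overwritten with the BOs' contents,
	 * so pre-zeroing would be wasted work.
	 */
	iter.start = __vmalloc(file_size, GFP_KERNEL | __GFP_NOWARN |
			       __GFP_NORETRY);
	if (!iter.start) {
		dev_warn(pfdev->dev, "failed to allocate devcoredump file\n");
		return;
	}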
I've also rebased v3 on top of drm-misc-next, and the compiler error caused by the removed panfrost_job structure member is gone.
Excellent