I was going through the lecture videos and I had a question about one of the proposed “pros” of paging.
The slides mention that there's negligible internal fragmentation due to the small allocation sizes. But say a process starts by allocating a billion-byte array. Presumably this would result in a request for hundreds of thousands of pages (roughly 244,000 with 4 KiB pages) which, for simplicity, let's assume the kernel is able to grant. Now if the process just goes to sleep, all these allocated pages would sit there, unavailable to other processes that could've made better use of them. Would this not be an example of (significant) internal fragmentation? A minimal sketch of the scenario I mean is below.
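For concreteness, here's a small C sketch of the situation (the 1 GB size and 4 KiB page size are just assumptions for illustration; `memset` is there because, with demand paging, `malloc` alone might not make the kernel back the allocation with physical frames):

```c
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    size_t n = 1000000000;  /* one billion bytes */
    char *big = malloc(n);
    if (big == NULL)
        return 1;
    memset(big, 1, n);      /* touch every page so each one is actually faulted in
                               (~244,000 pages at 4 KiB) */
    pause();                /* process "goes to sleep" while still holding the memory */
    return 0;
}
```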
P.S. - Couldn’t find a good category for this; no one seems to have questions on the lectures.