Hi Thomas,
> > >  /* this is to find and return the vmalloc-ed fb pages */
> > >  static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf)
> > >  {
> > > @@ -59,7 +113,7 @@ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf)
> > >  		printk(KERN_ERR "no mapping available\n");
> > >
> > >  	BUG_ON(!page->mapping);
> > > -	page->index = vmf->pgoff;
> > > +	page->index = vmf->pgoff; /* for page_mkclean() */
> > >
> > >  	vmf->page = page;
> > >  	return 0;
> > >
> > > @@ -95,7 +149,11 @@ static vm_fault_t fb_deferred_io_mkwrite(struct vm_fault *vmf)
> > >  	struct page *page = vmf->page;
> > >  	struct fb_info *info = vmf->vma->vm_private_data;
> > >  	struct fb_deferred_io *fbdefio = info->fbdefio;
> > > -	struct list_head *pos = &fbdefio->pagelist;
> > > +	struct fb_deferred_io_pageref *pageref;
> > > +	unsigned long offset;
> > > +	vm_fault_t ret;
> > > +
> > > +	offset = (vmf->address - vmf->vma->vm_start);
> > >
> > >  	/* this is a callback we get when userspace first tries to
> > >  	write to the page. we schedule a workqueue. that workqueue
> > >
> > > @@ -112,6 +170,12 @@ static vm_fault_t fb_deferred_io_mkwrite(struct vm_fault *vmf)
> > >  	if (fbdefio->first_io && list_empty(&fbdefio->pagelist))
> > >  		fbdefio->first_io(info);
> > >
> > > +	pageref = fb_deferred_io_pageref_get(info, offset, page);

> > Compared to the old code we now do all the sorting and stuff without the page locked, which seems like a big change.

> We never touch any of the page's fields in fb_deferred_io_pageref_get(). It's only used to initialize the pageref's page pointer. The pagerefs are all protected by fbdev-internal locking. Is there a reason why we should further hold the page lock?

I only commented because it was a change in the scope of the lock. I did not see anything wrong with the locking, but then I do not understand locking, so that does not say much.
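
For my own understanding, here is a minimal sketch of how I read the lookup. The pagerefs array, npagerefs and the list handling below are my assumptions from this thread, not code copied from your patch:

/*
 * Sketch only: one pageref per framebuffer page, looked up by the
 * faulting offset. Field names are guesses.
 */
static struct fb_deferred_io_pageref *fb_deferred_io_pageref_get(struct fb_info *info,
								  unsigned long offset,
								  struct page *page)
{
	struct fb_deferred_io *fbdefio = info->fbdefio;
	unsigned long pgoff = offset >> PAGE_SHIFT;
	struct fb_deferred_io_pageref *pageref;

	if (WARN_ON_ONCE(pgoff >= info->npagerefs))
		return NULL;
	pageref = &info->pagerefs[pgoff];

	/*
	 * Only the pageref is written here; the page itself is never
	 * touched, so no page lock is required. The pageref and the
	 * pagelist are protected by fbdefio->lock, which the caller
	 * (fb_deferred_io_mkwrite) already holds.
	 */
	pageref->page = page;
	pageref->offset = pgoff << PAGE_SHIFT;
	if (list_empty(&pageref->list)) /* assumes the list head is initialized at setup */
		list_add_tail(&pageref->list, &fbdefio->pagelist);

	return pageref;
}

If that is roughly what the patch does, then I agree the page lock buys us nothing here.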

> All sorting is done by the pageref addresses, which implicitly correspond to 'offset'. After looking at the new function again, I'll change it to sort directly by offset. It's clearer in its intent.
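
Just to make sure we mean the same thing, here is a sketch of an offset-ordered insert. The helper name is made up, and I am assuming the pageref carries its byte offset and that fbdefio->lock is held by the caller:

/* Sketch only: keep fbdefio->pagelist ordered by each pageref's byte offset. */
static void fb_deferred_io_pagelist_add_sorted(struct fb_deferred_io *fbdefio,
					       struct fb_deferred_io_pageref *pageref)
{
	struct fb_deferred_io_pageref *cur;

	/* find the first entry with a larger offset ... */
	list_for_each_entry(cur, &fbdefio->pagelist, list) {
		if (cur->offset > pageref->offset)
			break;
	}
	/* ... and insert in front of it; appends at the tail if no entry is larger */
	list_add_tail(&pageref->list, &cur->list);
}

Comparing offsets directly like this reads much clearer than comparing pageref pointers, even if both end up producing the same order.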

Looking forward to the re-spin.
Sam