On Mon, Jul 01, 2013 at 09:54:17PM +0200, David Herrmann wrote:
On Mon, Jul 1, 2013 at 9:42 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
On Mon, Jul 01, 2013 at 08:33:00PM +0200, David Herrmann wrote:
/** drm_vma_offset_manager_destroy()
 *
 * Destroy an object manager which was previously created via
 * drm_vma_offset_manager_init(). The caller must remove all allocated nodes
 * before destroying the manager. Otherwise, drm_mm will refuse to free the
 * requested resources.
 *
 * The manager must not be accessed after this function is called.
 */
void drm_vma_offset_manager_destroy(struct drm_vma_offset_manager *mgr)
{
        BUG_ON(!drm_mm_clean(&mgr->vm_addr_space_mm));
Please convert this to a WARN_ON - drm_mm has code to cope with buggy drivers, and killing the machine with a BUG_ON where we could keep going with just a WARN_ON is a real pain for development.
Ok.
Actually I think the check in the takedown function should be good enough already. I've just dumped a patch onto dri-devel which will convert that to a WARN, so I think adding a second one here is redundant.
        /* take the lock to protect against buggy drivers */
        write_lock(&mgr->vm_lock);
        drm_mm_takedown(&mgr->vm_addr_space_mm);
        write_unlock(&mgr->vm_lock);
}
EXPORT_SYMBOL(drm_vma_offset_manager_destroy);
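For illustration only (not part of the patch): the WARN_ON variant suggested above is a one-line change; a minimal sketch, assuming the check is kept here at all rather than relying on the WARN in drm_mm_takedown():

void drm_vma_offset_manager_destroy(struct drm_vma_offset_manager *mgr)
{
        /* warn instead of oopsing if nodes are still allocated */
        WARN_ON(!drm_mm_clean(&mgr->vm_addr_space_mm));

        /* take the lock to protect against buggy drivers */
        write_lock(&mgr->vm_lock);
        drm_mm_takedown(&mgr->vm_addr_space_mm);
        write_unlock(&mgr->vm_lock);
}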
/** drm_vma_offset_lookup()
 *
 * Find a node given a start address and object size. This returns the _best_
 * match for the given node. That is, @start may point somewhere into a valid
 * region and the given node will be returned, as long as the node spans the
 * whole requested area (given the size in number of pages as @pages).
 *
 * Returns NULL if no suitable node can be found. Otherwise, the best match
 * is returned. It's the caller's responsibility to make sure the node doesn't
 * get destroyed before the caller can access it.
 */
struct drm_vma_offset_node *drm_vma_offset_lookup(struct drm_vma_offset_manager *mgr,
                                                  unsigned long start,
                                                  unsigned long pages)
{
        struct drm_vma_offset_node *node, *best;
        struct rb_node *iter;
        unsigned long offset;

        read_lock(&mgr->vm_lock);
        iter = mgr->vm_addr_space_rb.rb_node;
        best = NULL;

        while (likely(iter)) {
                node = rb_entry(iter, struct drm_vma_offset_node, vm_rb);
                offset = node->vm_node.start;
                if (start >= offset) {
                        iter = iter->rb_right;
                        best = node;
                        if (start == offset)
                                break;
                } else {
                        iter = iter->rb_left;
                }
        }
        /* verify that the node spans the requested area */
        if (likely(best)) {
                offset = best->vm_node.start + best->vm_pages;
                /* the node must cover [start, start + pages); reject it if
                 * it ends before the requested range does */
                if (offset < start + pages)
                        best = NULL;
        }

        read_unlock(&mgr->vm_lock);

        return best;
}
EXPORT_SYMBOL(drm_vma_offset_lookup);
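As a usage illustration (hypothetical driver code, not part of the patch; my_obj and my_obj_from_offset are made-up names): a driver would typically embed a drm_vma_offset_node in its buffer object and resolve a faulting mmap offset back to the object like this:

struct my_obj {
        struct drm_vma_offset_node vma_node;
        /* ... driver-specific fields ... */
};

static struct my_obj *my_obj_from_offset(struct drm_vma_offset_manager *mgr,
                                         unsigned long start,
                                         unsigned long pages)
{
        struct drm_vma_offset_node *node;

        node = drm_vma_offset_lookup(mgr, start, pages);
        if (!node)
                return NULL;

        /* per the kerneldoc above, the caller must keep the node alive;
         * a real driver would take a reference on the object here */
        return container_of(node, struct my_obj, vma_node);
}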
/* internal helper to link @node into the rb-tree */
static void _drm_vma_offset_add_rb(struct drm_vma_offset_manager *mgr,
                                   struct drm_vma_offset_node *node)
{
        struct rb_node **iter = &mgr->vm_addr_space_rb.rb_node;
        struct rb_node *parent = NULL;
        struct drm_vma_offset_node *iter_node;
        while (likely(*iter)) {
                parent = *iter;
                iter_node = rb_entry(*iter, struct drm_vma_offset_node, vm_rb);

                if (node->vm_node.start < iter_node->vm_node.start)
                        iter = &(*iter)->rb_left;
                else if (node->vm_node.start > iter_node->vm_node.start)
                        iter = &(*iter)->rb_right;
                else
                        BUG();
        }

        rb_link_node(&node->vm_rb, parent, iter);
        rb_insert_color(&node->vm_rb, &mgr->vm_addr_space_rb);
}
/** drm_vma_offset_add()
 *
 * Add a node to the offset-manager. If the node was already added, this does
 * nothing and returns 0. @pages is the size of the object given in number of
 * pages.
 *
 * After this call succeeds, you can access the offset of the node until it
 * is removed again.
 *
 * If this call fails, it is safe to retry the operation or to call
 * drm_vma_offset_remove() anyway; no cleanup is required in that case.
 */
int drm_vma_offset_add(struct drm_vma_offset_manager *mgr,
                       struct drm_vma_offset_node *node, unsigned long pages)
{
        int ret;

        write_lock(&mgr->vm_lock);
        if (unlikely(drm_mm_node_linked(&node->vm_node))) {
                ret = 0;
                goto out_unlock;
        }

        ret = drm_mm_insert_node_generic(&mgr->vm_addr_space_mm,
                                         &node->vm_node, pages, 0, 0);
This allocates memory, so it potentially blocks. That doesn't mesh well with the spinning rwlock (lockdep should seriously complain about this, btw). Please use an rw-semaphore instead.
Ugh? This shouldn't allocate any memory. I use pre-allocated nodes, and looking at drm_mm.c this just links the node at the correct position. As long as the color_adjust() callbacks are atomic, this should be fine here. Did you mix this up with get_block()?
Oops, indeed I've mixed stuff up ;-)
I'd still vote for an rwsem, though, since these codepaths shouldn't be performance-critical, and in pathological corner cases the O(n) list walking drm_mm does could result in awfully long spinlock hold times.
-Daniel
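For concreteness, a minimal sketch of the rwsem variant suggested above (hypothetical, assuming mgr->vm_lock is changed from rwlock_t to struct rw_semaphore and initialized with init_rwsem() in drm_vma_offset_manager_init()):

#include <linux/rwsem.h>

void drm_vma_offset_manager_destroy(struct drm_vma_offset_manager *mgr)
{
        /* sleeping lock: long O(n) walks in drm_mm no longer spin */
        down_write(&mgr->vm_lock);
        drm_mm_takedown(&mgr->vm_addr_space_mm);
        up_write(&mgr->vm_lock);
}

The lookup path would use down_read()/up_read() in the same way; since these paths aren't expected to be performance-critical (per the comment above), the sleeping lock should be an acceptable trade.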