Hi,
If I do not start this discussion now, it might already be too late. Given the plan to converge the open source and closed source drivers onto a single common kernel driver, and that this would be a new kernel driver, this is an opportunity to fix some of the radeon design issues (at least things that I would have done differently if only I could get some gas for my DeLorean).
Among the things I would not do again is the chunk mechanism associated with the cs ioctl. I find it ugly; if my memory serves me well, I was trying to be future proof and allow the cs ioctl to be extended. While this original aim has been somewhat successful, I think it was the wrong way to do it.
My latest idea (and what I still believe to be a good one until proven wrong) is to change the way we do ioctls, using a little trick. This idea was also sparked by the continuous additions we make to the info ioctl, which is getting ugly.
The idea is simple: each ioctl would use some struct like:
struct radeon_ioctl { u32 version; u32 size; };
The version field is the key here: think of it as an index into an array of ioctl dispatch tables. So something like:

struct radeon_ioctls {
    int (*ioctl[MAX_IOCTL_NUM])(void *data, ...);
};

struct radeon_ioctls rdispatch_ioctls[N];

And now all ioctls go through this single entry point:

int radeon_ioctl_stub(int ioctl, void *data, ...)
{
    struct radeon_ioctl *rio = data;

    return rdispatch_ioctls[rio->version].ioctl[ioctl](data, ...);
}
This is a rough idea, but the point is that we can do proper ioctl versioning, have separate functions for each new version, and avoid ioctl cruft; or at least that is the theory.
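To make the theory a bit more concrete, here is a fuller version of that stub. It is only a sketch (N and MAX_IOCTL_NUM are the hypothetical bounds of the table above, and the handler signature is simplified), but it shows how the version and size fields would be validated so that newer userspace fails cleanly instead of indexing past the dispatch table:

int radeon_ioctl_stub(unsigned int nr, void *data)
{
    struct radeon_ioctl *rio = data;   /* the header sits at the start of every argument struct */

    if (nr >= MAX_IOCTL_NUM)
        return -EINVAL;
    /* version unknown to this kernel: userspace is newer than we are */
    if (rio->version >= N || !rdispatch_ioctls[rio->version].ioctl[nr])
        return -EINVAL;
    /* size lets each handler sanity check the argument block it was handed */
    if (rio->size < sizeof(*rio))
        return -EINVAL;

    return rdispatch_ioctls[rio->version].ioctl[nr](data);
}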
This gives us two things. One is feedback from userspace: through the version field the kernel knows which version of userspace it is dealing with. The other is that it lets you introduce a completely new API, either for a new generation of hardware or just as an evolution. As a small bonus, it also lets us slowly phase out APIs that we consider broken, ioctl by ioctl.
So this is the main design change I would make. I should probably rough up something that goes deeper for the cs ioctl.
Cheers, Jérôme
Hi Jerome,
Yeah, that's what we had already planned for the IOCTLs anyway.
The main question currently is rather how we do the fence representation and synchronization between different engines.
For fences I think we can agree to use 64-bit values (maybe plus an engine index) for the internal representation, and Android-style fence fds for sharing them between applications/drivers (assuming the Android fences ever leave staging).
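For illustration, such an internal representation could be as small as this; amd_fence and the field names are made up, with kernel-style u32/u64 types assumed:

struct amd_fence {
    u64 seq;      /* per-engine, monotonically increasing sequence number */
    u32 engine;   /* index of the ring/engine that will signal it */
};

/* signaled once the engine has completed at least this sequence number */
static bool amd_fence_signaled(const struct amd_fence *fence, u64 last_completed)
{
    return last_completed >= fence->seq;
}

With 64-bit sequence numbers, wraparound is a non-issue in practice, which is part of the appeal.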
But what should we do for inter-engine synchronization? One requirement from our closed source teams is that we have semaphore/mutex style synchronization, i.e. fences are only used for GPU-to-CPU sync, while GPU-to-GPU sync uses a different sync object type.
Semaphores work like having a mutex that protects resources from concurrent access, e.g. we can do two command submissions that need access to buffer A and we don't care which command submission runs first as long as they don't run at the same time.
The obvious downside is that you inherit problems like lock inversion, so you can build deadlocks with them, e.g. command submission A needs to wait for B and B needs to wait for A. These are hard to detect and resolve in the kernel, except by using timeouts.
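The usual way to keep such semaphores deadlock-free is to only ever acquire them in a fixed global order; a hedged sketch with a completely hypothetical gpu_semaphore API:

struct gpu_semaphore {
    u32 id;   /* globally unique, stable id used purely for ordering */
    /* ... hardware semaphore state ... */
};

void gpu_semaphore_acquire(struct gpu_semaphore *sem);   /* hypothetical */

/* Take both semaphores lowest-id first so two submissions that need the same
 * pair can never end up waiting on each other. */
static void gpu_semaphore_acquire_pair(struct gpu_semaphore *a,
                                       struct gpu_semaphore *b)
{
    if (a->id > b->id)
        swap(a, b);   /* kernel swap() macro */
    gpu_semaphore_acquire(a);
    gpu_semaphore_acquire(b);
}

That only helps if every submission path sticks to the ordering, of course, which is exactly the part that is hard to enforce from the kernel.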
Ideas or thoughts on that?
Regards, Christian.
Hi Jerome,
Do you think your proposed change should also be applied to amdkfd's IOCTLs?
Oded
On Thu, Oct 09, 2014 at 09:54:14AM +0300, Oded Gabbay wrote:
Hi Jerome,
Do you think your proposed change should also be applied to amdkfd's IOCTLs ?
It might make sense; it really depends on the lifespan you expect for amdkfd. Do you expect that you will need substantial API evolution during that timespan, or do you foresee that the currently proposed API is good enough?
The thing I am proposing here is really to address the issue of IOCTLs needing fixes because a new generation of GPU requires new information or does not care about old and now meaningless information. The radeon ioctl set is a testimony to the fact that we cannot predict what kind of broken API we will expose and have to live with.
My proposal is really about allowing each ioctl to evolve in what I believe to be the simplest and cleanest way while still preserving backward compatibility, i.e. I expect that old versions of an ioctl will continue to live in their own functions, untouched and unloved but still working, while new development can move to the updated ioctl.
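For illustration, with a dispatch table like the one sketched in the original mail, the old and new revisions of an ioctl are just two entries in that table; RADEON_CS and the handler names below are made up:

/* version 0 keeps living in its own function, frozen; version 1 is where new
 * development happens */
static int radeon_cs_v0(void *data, ...) { /* old ABI, left untouched */ return 0; }
static int radeon_cs_v1(void *data, ...) { /* new ABI */ return 0; }

static struct radeon_ioctls rdispatch_ioctls[] = {
    [0] = { .ioctl = { [RADEON_CS] = radeon_cs_v0 } },
    [1] = { .ioctl = { [RADEON_CS] = radeon_cs_v1 } },
};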
The other solution would be to just keep adding new ioctls, but I fear we might run out of ioctl numbers.
Cheers, Jérôme
On 09/10/14 11:02, Jerome Glisse wrote:
On Thu, Oct 09, 2014 at 09:54:14AM +0300, Oded Gabbay wrote:
Hi Jerome,
Do you think your proposed change should also be applied to amdkfd's IOCTLs ?
It might make sense; it really depends on the lifespan you expect for amdkfd. Do you expect that you will need substantial API evolution during that timespan, or do you foresee that the currently proposed API is good enough?
Well, I don't expect to reach 100 ioctls anytime soon, but I can tell you that for the features we have in the pipeline, I can see the IOCTL count going up to 20-30, just for the current H/W generation.
The thing I am proposing here is really to address the issue of IOCTLs needing fixes because a new generation of GPU requires new information or does not care about old and now meaningless information. The radeon ioctl set is a testimony to the fact that we cannot predict what kind of broken API we will expose and have to live with.
Understood, and it's really hard for me to predict now whether we will absolutely need to fix IOCTLs for newer hardware generations. However, better safe than sorry.
My proposal is really about allowing each ioctl to evolve in what I believe to be the simplest and cleanest way while still preserving backward compatibility, i.e. I expect that old versions of an ioctl will continue to live in their own functions, untouched and unloved but still working, while new development can move to the updated ioctl.
Yeah, I think this is very good forward thinking and I would like to adopt it as well. This is the perfect time to do it as we still don't have any history to maintain. I will run it by our internal team and hopefully publish a new set with this change for you to review :)
On Thu, Oct 9, 2014 at 12:15 PM, Oded Gabbay oded.gabbay@amd.com wrote:
Well, I don't expect to reach 100 ioctls anytime soon, but I can tell you that for the features we have in the pipeline, I can see the IOCTL count going up to 20-30, just for the current H/W generation.
So our Android folks seem to routinely run out of ioctl numbers in their product/downstream branches. But ime, with some decent review of new interfaces instead of just blindly creating them because that's easier, we have rather slow ioctl number consumption in i915. So presuming you do decent review and don't fork your driver for every generation (which seems to be how every blob gfx driver team works), I don't think you have a problem. Of course that's a massive change in approach and, in my experience from intel-internal discussions, will require a lot of training. -Daniel
Thanks for the input. I do _not_ intend to fork IOCTLs for new H/W generations, if possible; i.e. our driver now supports 2 H/W generations with the exact same set of IOCTLs, and I don't see that changing in the future.
What I'm more worried about is supporting different sets of UMDs, which will require different IOCTLs for the same operation, e.g. CreateQueue for the HSA runtime and for the OpenCL runtime.
However, given the very limited number of UMDs, the "regular" way of adding IOCTLs may be sufficient.
Bottom line, need to think more about it :)
Oded
Hm, generally the ioctls should be modelled on the hw for a generic umd. Of course that's a bit hard in practice since predicting the unknown is difficult ;-). But on intel hw we have about 5+ different umd stacks if you count them all, and they all seem to be more-or-less happy with the same ioctl interface. Like I've said, it does require a bit of a mindset change though, since clean-slate designs should only be done when there are overwhelming reasons that the old interfaces just don't cut it any more. Otoh you also need to make sure that all the different umd teams talk to each other, since ime they also err on the other side and each come up with their own special hack to enable a given new feature. -Daniel
On 11/10/14 21:30, Daniel Vetter wrote:
Hm, generally the ioctls should be modelled on the hw for a generic umd.
Agreed.
Of course that's a bit hard in practice since predicting the unknown is difficult ;-). But on intel hw we have about 5+ different umd stacks if you count them all, and they all seem to be more-or-less happy with the same ioctl interface.
The problems start when you migrate a UMD from one kernel driver to another which already has other UMDs. Then you may need to create new IOCTLs.
Like I've said, it does require a bit of a mindset change though, since clean-slate designs should only be done when there are overwhelming reasons that the old interfaces just don't cut it any more. Otoh you also need to make sure that all the different umd teams talk to each other, since ime they also err on the other side and each come up with their own special hack to enable a given new feature. -Daniel
Sounds familiar ;)
Am 11.10.2014 um 20:30 schrieb Daniel Vetter:
Hm, generally the ioctls should be modelled on the hw for a generic umd. Of course that's a bit hard in practice since predicting the unknown is difficult ;-).
Yeah, exactly what I'm telling the closed source people here at AMD as well :) It took me a while, but I think it's slowly sinking in now that IOCTL interfaces shouldn't be specialized for a certain use case.
The good news is that, from what I've seen of the HSA IOCTL interface, it actually looks quite generic to me.
Regards, Christian.
On Wed, Oct 8, 2014 at 12:00 PM, Jerome Glisse j.glisse@gmail.com wrote:
The idea is simple: each ioctl would use some struct like:
struct radeon_ioctl { u32 version; u32 size; };
Fwiw, drm_ioctl() will do the right thing (zero-pad) for growing ioctls these days.
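(As an aside, the driver-side pattern that relies on that is simply appending new fields at the end and making zero mean the old behaviour; the struct and field names here are invented:)

struct drm_hypothetical_op {
    u32 handle;
    u32 flags;
    /* added in a later kernel; older userspace passes a shorter struct and
     * drm_ioctl() zero-fills the rest, so 0 has to mean "old behaviour" */
    u32 new_feature;
    u32 pad;
};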
BR, -R
On Thu, Oct 09, 2014 at 03:32:26AM -0400, Rob Clark wrote:
On Wed, Oct 8, 2014 at 12:00 PM, Jerome Glisse j.glisse@gmail.com wrote:
The idea is simple: each ioctl would use some struct like:
struct radeon_ioctl { u32 version; u32 size; };
fwiw, drm_ioctl() will do the right thing (zero-pad) for growing ioctls these days..
It's more about userspace knowing about ioctl XY and having a way to ask for a specific version of each ioctl. So if we screw up ioctl XY, we can add a new version of XY and try to work around things if userspace still requests the old version. It is mostly about trying to keep the code clean and avoiding things like the chunk stuff of the cs ioctl.
For instance, say the cs ioctl takes the following struct as its parameter:
struct drm_radeon_cs_ioctl_version_00 { u64 *cs; u32 size; u32 flags; };
And now you have a new generation of hardware, or are just adding a new feature:
struct drm_radeon_cs_ioctl_version_01 { u64 *cs; u32 size; u32 flags; u32 newfeature; };
Of course you can argue that you could use the size of the userspace parameter to handle this simple example. But with versioning you can also move fields around, remove fields, etc.; basically it is just more flexible, with the small overhead of one more indirection, but that will be lost in the ioctl cost anyway (I highly doubt it would turn into a bottleneck).
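To illustrate what that buys us, the old entry point can simply be a thin shim that widens the old layout and calls the new code; the handler names below are made up:

int radeon_cs_v01(struct drm_radeon_cs_ioctl_version_01 *args);   /* new implementation */

static int radeon_cs_v00(void *data)
{
    struct drm_radeon_cs_ioctl_version_00 *old = data;
    struct drm_radeon_cs_ioctl_version_01 args = {
        .cs    = old->cs,
        .size  = old->size,
        .flags = old->flags,
        /* .newfeature stays 0: old userspace keeps the old behaviour */
    };

    return radeon_cs_v01(&args);
}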
Cheers, Jérôme
I think we can live perfectly fine with adding new fields to the end of the IOCTL interface structure, and if we really find a need to completely redesign an IOCTL we can just use a new IOCTL number. IIRC they are 32bit at least, so running out of IOCTL numbers is rather unlikely.
Regards, Christian.
Sure, an explicit version is more flexible (and I don't even think you need an extra layer of indirection for that); just pointing out that we do have a way to grow ioctls now (with the constraint that zero has to be a sane backwards-compat value for the new field(s)).
BR, -R
On Wed, Oct 8, 2014 at 6:00 PM, Jerome Glisse j.glisse@gmail.com wrote:
struct radeon_ioctl { u32 version; u32 size; };
How is this any different from just another ioctl multiplexer, and why can't we just add a new version if an ioctl needs to be revised? E.g. in i915 we just added execbuf2, and within execbuf2 there are tons of flags to enable/disable code. I don't see what a version field buys us on top of having flags fields and just creating a new ioctl if that gets too fuzzy. In the end you still have to keep all the old crap working, and imo that's easier to do long-term if you don't make it too easy to add new interfaces.
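For comparison, that flags approach looks roughly like this on the driver side (made-up names, not actual i915 code): unknown bits are rejected so they stay available, and new behaviour is an explicit opt-in.

#define MY_EXEC_NEW_THING    (1u << 0)
#define MY_EXEC_KNOWN_FLAGS  (MY_EXEC_NEW_THING)

struct my_execbuf {
    u64 buffers_ptr;
    u32 buffer_count;
    u32 flags;
};

static int my_execbuf_ioctl(struct my_execbuf *args)
{
    /* unknown flags are an error, so the bits can safely gain meaning later */
    if (args->flags & ~MY_EXEC_KNOWN_FLAGS)
        return -EINVAL;

    if (args->flags & MY_EXEC_NEW_THING) {
        /* new behaviour, explicitly requested by userspace */
    }

    return 0;
}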
Also, the size is encoded in the ioctl itself and like Rob said the core takes care of properly zero-filling this. -Daniel
On 9 October 2014 19:20, Daniel Vetter daniel@ffwll.ch wrote:
On Wed, Oct 8, 2014 at 6:00 PM, Jerome Glisse j.glisse@gmail.com wrote:
struct radeon_ioctl { u32 version; u32 size; };
I'm kinda with Daniel on this; I don't see the point in adding a multiplexer to a multiplexer interface.
Do we envisage adding so many new ioctls that the driver will run out of ioctl numbers?
Otherwise I don't really see the difference between submit(0, ...), submit(1, ...), submit(2, ...) and just adding submit1, submit2, submit3 ioctls. You still know what userspace is asking for by which entry point it comes in.
Dave.
Hi,
On Wed, Oct 08, 2014 at 12:00:27PM -0400, Jerome Glisse wrote:
[...]
And now all ioctls go through this single entry point:

int radeon_ioctl_stub(int ioctl, void *data, ...)
{
    struct radeon_ioctl *rio = data;

    return rdispatch_ioctls[rio->version].ioctl[ioctl](data, ...);
}
I advise against this: it's essentially implementing IOCTLs on top of an IOCTL...
It doesn't really reduce the maintenance burden: it just moves it from the standard IOCTL dispatching to a custom mechanism, while adding extra complexity and making the interface more opaque. (Think strace, or Valgrind.)
-antrik-