On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
Dear kernel maintainers,
This submission is a kernel driver to support Intel(R) Gaussian & Neural Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor available on multiple Intel platforms. AI developers and users can offload continuous inference workloads to an Intel(R) GNA device in order to free processor resources and save power. Noise reduction and speech recognition are examples of the workloads Intel(R) GNA handles, though its usage is not limited to these two.
How does this compare with the "nnpi" driver being proposed here: https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
Please work with those developers to share code and userspace api and tools. Having the community review two totally different apis and drivers for the same type of functionality from the same company is totally wasteful of our time and energy.
Agreed, but I think we should go further than this and work towards a subsystem across companies for machine learning and neural networks accelerators for both inferencing and training.
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
We have support for Intel habanalabs hardware in drivers/misc, and there are countless hardware solutions out of tree that would hopefully go the same way with an upstream submission and open source user space, including
- Intel/Mobileye EyeQ
- Intel/Movidius Keembay
- Nvidia NVDLA
- Gyrfalcon Lightspeeur
- Apple Neural Engine
- Google TPU
- Arm Ethos
plus many more that are somewhat less likely to gain fully open source driver stacks.
We also had this entire discussion 2 years ago with habanalabs. The hang-up is that drivers/gpu folks require fully open source userspace, including compiler and anything else you need to actually use the chip. Greg doesn't, he's happy if all he has is the runtime library with some tests.
These two drivers here look a lot more like classic gpus than habanalabs did; at least from a quick look they operate with an explicit buffer allocation/registration model. So even more reasons to just reuse all the stuff we have already. But also I don't expect these drivers here to come with open compilers, they never do, at least not initially, before you've started talking with the vendor. Hence I expect there'll be more drivers/totally-not-drm acceleration subsystem nonsense.
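To spell out what I mean by that model, here's a minimal hypothetical sketch of such a uAPI; every name and field below is invented for illustration and is not the actual GNA or nnpi interface:

#include <linux/ioctl.h>
#include <linux/types.h>

/* Hypothetical accel uAPI sketch; not any real driver's interface. */

struct acc_buf_create {
        __u64 size;          /* in: buffer size in bytes */
        __u32 flags;         /* in: placement/caching flags */
        __u32 handle;        /* out: per-open-fd buffer handle */
};

struct acc_job_submit {
        __u64 commands;      /* in: userptr to the command stream */
        __u32 commands_size; /* in: command stream size in bytes */
        __u32 buf_count;     /* in: number of referenced buffer handles */
        __u64 buf_handles;   /* in: userptr to an array of __u32 handles */
        __s32 out_fence_fd;  /* out: fd signalled on job completion */
        __u32 pad;
};

#define ACC_IOCTL_BUF_CREATE _IOWR('A', 0x00, struct acc_buf_create)
#define ACC_IOCTL_JOB_SUBMIT _IOWR('A', 0x01, struct acc_job_submit)

Userspace allocates buffers, gets handles back, and then submits jobs that reference those handles; the kernel validates the references and hands back a completion fence. That's the same shape as most drm render drivers.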
Anyway this horse has been thoroughly beaten to death and more; the agreement is that accel drivers in drivers/misc must not use any gpu stuff, so that drivers/gpu people don't end up in a prickly situation they never signed up for. E.g. I removed some code sharing from habanalabs. This means interop between gpu and nn/ai drivers will be a no-go until this is resolved, but *shrug*.
Cheers, Daniel
On Mon, May 17, 2021 at 09:40:53AM +0200, Daniel Vetter wrote:
On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
Dear kernel maintainers,
This submission is a kernel driver to support Intel(R) Gaussian & Neural Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor available on multiple Intel platforms. AI developers and users can offload continuous inference workloads to an Intel(R) GNA device in order to free processor resources and save power. Noise reduction and speech recognition are examples of the workloads Intel(R) GNA handles, though its usage is not limited to these two.
How does this compare with the "nnpi" driver being proposed here: https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
Please work with those developers to share code and userspace api and tools. Having the community review two totally different apis and drivers for the same type of functionality from the same company is totally wasteful of our time and energy.
Agreed, but I think we should go further than this and work towards a subsystem across companies for machine learning and neural networks accelerators for both inferencing and training.
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
We have support for Intel habanalabs hardware in drivers/misc, and there are countless hardware solutions out of tree that would hopefully go the same way with an upstream submission and open source user space, including
- Intel/Mobileye EyeQ
- Intel/Movidius Keembay
- Nvidia NVDLA
- Gyrfalcon Lightspeeur
- Apple Neural Engine
- Google TPU
- Arm Ethos
plus many more that are somewhat less likely to gain fully open source driver stacks.
We also had this entire discussion 2 years ago with habanalabs. The hang-up is that drivers/gpu folks require fully open source userspace, including compiler and anything else you need to actually use the chip. Greg doesn't, he's happy if all he has is the runtime library with some tests.
All you need is a library, what you write on top of that is always application-specific, so how can I ask for "more"?
These two drivers here look a lot more like classic gpus than habanalabs did; at least from a quick look they operate with an explicit buffer allocation/registration model. So even more reasons to just reuse all the stuff we have already. But also I don't expect these drivers here to come with open compilers, they never do, at least not initially, before you've started talking with the vendor. Hence I expect there'll be more drivers/totally-not-drm acceleration subsystem nonsense.
As these are both from Intel, why aren't they using the same open compiler? Why aren't they using the same userspace api as well? What's preventing them from talking to each other about this, instead of forcing the community (i.e. outsiders) to be the ones to make this happen?
Anyway this horse has been thoroughly beaten to death and more; the agreement is that accel drivers in drivers/misc must not use any gpu stuff, so that drivers/gpu people don't end up in a prickly situation they never signed up for. E.g. I removed some code sharing from habanalabs. This means interop between gpu and nn/ai drivers will be a no-go until this is resolved, but *shrug*.
I'm all for making this unified, but these are not really devices doing graphics, so putting it all into DRM always feels wrong to me. The fact that people abuse GPU devices for non-graphics usages would indicate to me that that code should be moving _out_ of the drm subsystem :)
thanks,
greg k-h
On Mon, May 17, 2021 at 10:00 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Mon, May 17, 2021 at 09:40:53AM +0200, Daniel Vetter wrote:
On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
Dear kernel maintainers,
This submission is a kernel driver to support Intel(R) Gaussian & Neural Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor available on multiple Intel platforms. AI developers and users can offload continuous inference workloads to an Intel(R) GNA device in order to free processor resources and save power. Noise reduction and speech recognition are examples of the workloads Intel(R) GNA handles, though its usage is not limited to these two.
How does this compare with the "nnpi" driver being proposed here: https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
Please work with those developers to share code and userspace api and tools. Having the community review two totally different apis and drivers for the same type of functionality from the same company is totally wasteful of our time and energy.
Agreed, but I think we should go further than this and work towards a subsystem across companies for machine learning and neural networks accelerators for both inferencing and training.
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
We have support for Intel habanalabs hardware in drivers/misc, and there are countless hardware solutions out of tree that would hopefully go the same way with an upstream submission and open source user space, including
- Intel/Mobileye EyeQ
- Intel/Movidius Keembay
- Nvidia NVDLA
- Gyrfalcon Lightspeeur
- Apple Neural Engine
- Google TPU
- Arm Ethos
plus many more that are somewhat less likely to gain fully open source driver stacks.
We also had this entire discussion 2 years ago with habanalabs. The hang-up is that drivers/gpu folks require fully open source userspace, including compiler and anything else you need to actually use the chip. Greg doesn't, he's happy if all he has is the runtime library with some tests.
I guess we're really going to beat this horse into pulp ... oh well.
All you need is a library, what you write on top of that is always application-specific, so how can I ask for "more"?
This is like accepting a new cpu port, where all you require is that the libc port is open source, but the cpu compiler is totally fine as a blob (doable with llvm now being supported). It makes no sense at all, at least to people who have worked with accelerators like this before.
We are not requiring that applications are open. We're only requiring that at least one of the compilers you need (no need to open the fully optimized one with all the magic sauce) to create any kind of applications is open, because without that you can't use the device, you can't analyze the stack, and you have no idea at all about what exactly it is you're merging. With these devices, the uapi visible in include/uapi is the smallest part of the interface exposed to userspace.
These two drivers here look a lot more like classic gpus than habanalabs did; at least from a quick look they operate with an explicit buffer allocation/registration model. So even more reasons to just reuse all the stuff we have already. But also I don't expect these drivers here to come with open compilers, they never do, at least not initially, before you've started talking with the vendor. Hence I expect there'll be more drivers/totally-not-drm acceleration subsystem nonsense.
As these are both from Intel, why aren't they using the same open compiler? Why aren't they using the same userspace api as well? What's preventing them from talking to each other about this, instead of forcing the community (i.e. outsiders) to be the ones to make this happen?
I'm unfortunately not the CEO of this company. Also, you're the one who keeps accepting drivers that the accel folks (aka the dri-devel community) said shouldn't be merged, so my internal bargaining power to force something reasonable here is zero. So please don't blame me for this mess, this is yours entirely.
Anyway this horse has been thoroughly beaten to death and more; the agreement is that accel drivers in drivers/misc must not use any gpu stuff, so that drivers/gpu people don't end up in a prickly situation they never signed up for. E.g. I removed some code sharing from habanalabs. This means interop between gpu and nn/ai drivers will be a no-go until this is resolved, but *shrug*.
I'm all for making this unified, but these are not really devices doing graphics, so putting it all into DRM always feels wrong to me. The fact that people abuse GPU devices for non-graphics usages would indicate to me that that code should be moving _out_ of the drm subsystem :)
Like I said, if the 'g' really annoys you that much, feel free to send in a patch to rename drivers/gpu to drivers/xpu. -Daniel
On Mon, May 17, 2021 at 10:49:09AM +0200, Daniel Vetter wrote:
On Mon, May 17, 2021 at 10:00 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Mon, May 17, 2021 at 09:40:53AM +0200, Daniel Vetter wrote:
On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
Dear kernel maintainers,
This submission is a kernel driver to support Intel(R) Gaussian & Neural Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor available on multiple Intel platforms. AI developers and users can offload continuous inference workloads to an Intel(R) GNA device in order to free processor resources and save power. Noise reduction and speech recognition are examples of the workloads Intel(R) GNA handles, though its usage is not limited to these two.
How does this compare with the "nnpi" driver being proposed here: https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
Please work with those developers to share code and userspace api and tools. Having the community review two totally different apis and drivers for the same type of functionality from the same company is totally wasteful of our time and energy.
Agreed, but I think we should go further than this and work towards a subsystem across companies for machine learning and neural networks accelerators for both inferencing and training.
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
We have support for Intel habanalabs hardware in drivers/misc, and there are countless hardware solutions out of tree that would hopefully go the same way with an upstream submission and open source user space, including
- Intel/Mobileye EyeQ
- Intel/Movidius Keembay
- Nvidia NVDLA
- Gyrfalcon Lightspeeur
- Apple Neural Engine
- Google TPU
- Arm Ethos
plus many more that are somewhat less likely to gain fully open source driver stacks.
We also had this entire discussion 2 years ago with habanalabs. The hang-up is that drivers/gpu folks require fully open source userspace, including compiler and anything else you need to actually use the chip. Greg doesn't, he's happy if all he has is the runtime library with some tests.
I guess we're really going to beat this horse into pulp ... oh well.
All you need is a library, what you write on top of that is always application-specific, so how can I ask for "more"?
This is like accepting a new cpu port, where all you require is that the libc port is open source, but the cpu compiler is totally fine as a blob (doable with llvm now being supported). It makes no sense at all, at least to people who have worked with accelerators like this before.
We are not requiring that applications are open. We're only requiring that at least one of the compilers you need (no need to open the fully optimized one with all the magic sauce) to create any kind of applications is open, because without that you can't use the device, you can't analyze the stack, and you have no idea at all about what exactly it is you're merging. With these devices, the uapi visible in include/uapi is the smallest part of the interface exposed to userspace.
Ok, sorry, I was not aware that the habanalabs compiler was not available to all under an open source license. All I was trying to enforce was that the library to use the kernel api was open so that anyone could use it. Trying to enforce compiler requirements like this feels like a bit of a reach, as the CPU on the hardware really doesn't fall under the license of the operating system running on this CPU over here :)
thanks,
greg k-h
On Mon, May 17, 2021 at 10:55 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Mon, May 17, 2021 at 10:49:09AM +0200, Daniel Vetter wrote:
On Mon, May 17, 2021 at 10:00 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Mon, May 17, 2021 at 09:40:53AM +0200, Daniel Vetter wrote:
On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
Dear kernel maintainers,
This submission is a kernel driver to support Intel(R) Gaussian & Neural Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor available on multiple Intel platforms. AI developers and users can offload continuous inference workloads to an Intel(R) GNA device in order to free processor resources and save power. Noise reduction and speech recognition are examples of the workloads Intel(R) GNA handles, though its usage is not limited to these two.
How does this compare with the "nnpi" driver being proposed here: https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
Please work with those developers to share code and userspace api and tools. Having the community review two totally different apis and drivers for the same type of functionality from the same company is totally wasteful of our time and energy.
Agreed, but I think we should go further than this and work towards a subsystem across companies for machine learning and neural networks accelerators for both inferencing and training.
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
We have support for Intel habanalabs hardware in drivers/misc, and there are countless hardware solutions out of tree that would hopefully go the same way with an upstream submission and open source user space, including
- Intel/Mobileye EyeQ
- Intel/Movidius Keembay
- Nvidia NVDLA
- Gyrfalcon Lightspeeur
- Apple Neural Engine
- Google TPU
- Arm Ethos
plus many more that are somewhat less likely to gain fully open source driver stacks.
We also had this entire discussion 2 years ago with habanalabs. The hang-up is that drivers/gpu folks require fully open source userspace, including compiler and anything else you need to actually use the chip. Greg doesn't, he's happy if all he has is the runtime library with some tests.
I guess we're really going to beat this horse into pulp ... oh well.
All you need is a library, what you write on top of that is always application-specific, so how can I ask for "more"?
This is like accepting a new cpu port, where all you require is that the libc port is open source, but the cpu compiler is totally fine as a blob (doable with llvm now being supported). It makes no sense at all, at least to people who have worked with accelerators like this before.
We are not requiring that applications are open. We're only requiring that at least one of the compilers you need (no need to open the fully optimized one with all the magic sauce) to create any kind of applications is open, because without that you can't use the device, you can't analyze the stack, and you have no idea at all about what exactly it is you're merging. With these devices, the uapi visible in include/uapi is the smallest part of the interface exposed to userspace.
Ok, sorry, I was not aware that the habanalabs compiler was not available to all under an open source license. All I was trying to enforce was that the library to use the kernel api was open so that anyone could use it. Trying to enforce compiler requirements like this feels like a bit of a reach, as the CPU on the hardware really doesn't fall under the license of the operating system running on this CPU over here :)
Experience says if you don't, forget about supporting your drivers/subsystem long-term. At best you're stuck with a per-device fragmented mess that vendors might or might not support. This has nothing to do with GPL licensing or not, but is about making sure you can do proper engineering/support/review of the driver stack. At least in the GPU world we're already making it rather clear that running blobby userspace is fine with us (as long as it's using the exact same uapi as the truly open stack; no exceptions/hacks/abuse are supported).
Also yes, vendors don't like it. But they also don't like that they have to open source their kernel drivers, or the runtime library. Lots of background chats over the years and a very clear line in the sand help to get there, and also make sure that the vendors who got here don't return to the old closed-source ways they love so much.
Anyway we've had all these discussions 2 years ago; nothing has changed (well, on the gpu side we've meanwhile managed to get ARM officially on board with a fully open stack paid by them; other discussions still ongoing). I just wanted to re-iterate that if we really care about having a proper accel subsystem, there are people who've been doing this for decades.
-Daniel
On Mon, 17 May 2021 at 19:12, Daniel Vetter daniel@ffwll.ch wrote:
On Mon, May 17, 2021 at 10:55 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Mon, May 17, 2021 at 10:49:09AM +0200, Daniel Vetter wrote:
On Mon, May 17, 2021 at 10:00 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Mon, May 17, 2021 at 09:40:53AM +0200, Daniel Vetter wrote:
On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
Dear kernel maintainers,
This submission is a kernel driver to support Intel(R) Gaussian & Neural Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor available on multiple Intel platforms. AI developers and users can offload continuous inference workloads to an Intel(R) GNA device in order to free processor resources and save power. Noise reduction and speech recognition are examples of the workloads Intel(R) GNA handles, though its usage is not limited to these two.
How does this compare with the "nnpi" driver being proposed here: https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
Please work with those developers to share code and userspace api and tools. Having the community review two totally different apis and drivers for the same type of functionality from the same company is totally wasteful of our time and energy.
Agreed, but I think we should go further than this and work towards a subsystem across companies for machine learning and neural networks accelerators for both inferencing and training.
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
We have support for Intel habanalabs hardware in drivers/misc, and there are countless hardware solutions out of tree that would hopefully go the same way with an upstream submission and open source user space, including
- Intel/Mobileye EyeQ
- Intel/Movidius Keembay
- Nvidia NVDLA
- Gyrfalcon Lightspeeur
- Apple Neural Engine
- Google TPU
- Arm Ethos
plus many more that are somewhat less likely to gain fully open source driver stacks.
We also had this entire discussion 2 years ago with habanalabs. The hang-up is that drivers/gpu folks require fully open source userspace, including compiler and anything else you need to actually use the chip. Greg doesn't, he's happy if all he has is the runtime library with some tests.
I guess we're really going to beat this horse into pulp ... oh well.
All you need is a library, what you write on top of that is always application-specific, so how can I ask for "more"?
This is like accepting a new cpu port, where all you require is that the libc port is open source, but the cpu compiler is totally fine as a blob (doable with llvm now being supported). It makes no sense at all, at least to people who have worked with accelerators like this before.
We are not requiring that applications are open. We're only requiring that at least one of the compilers you need (no need to open the fully optimized one with all the magic sauce) to create any kind of applications is open, because without that you can't use the device, you can't analyze the stack, and you have no idea at all about what exactly it is you're merging. With these devices, the uapi visible in include/uapi is the smallest part of the interface exposed to userspace.
Ok, sorry, I was not aware that the habanalabs compiler was not available to all under an open source license. All I was trying to enforce was that the library to use the kernel api was open so that anyone could use it. Trying to enforce compiler requirements like this feels like a bit of a reach, as the CPU on the hardware really doesn't fall under the license of the operating system running on this CPU over here :)
Experience says if you don't, forget about supporting your drivers/subsystem long-term. At best you're stuck with a per-device fragmented mess that vendors might or might not support. This has nothing to do with GPL licensing or not, but is about making sure you can do proper engineering/support/review of the driver stack. At least in the GPU world we're already making it rather clear that running blobby userspace is fine with us (as long as it's using the exact same uapi as the truly open stack; no exceptions/hacks/abuse are supported).
Also yes, vendors don't like it. But they also don't like that they have to open source their kernel drivers, or the runtime library. Lots of background chats over the years and a very clear line in the sand help to get there, and also make sure that the vendors who got here don't return to the old closed-source ways they love so much.
Anyway we've had all these discussions 2 years ago; nothing has changed (well, on the gpu side we've meanwhile managed to get ARM officially on board with a fully open stack paid by them; other discussions still ongoing). I just wanted to re-iterate that if we really care about having a proper accel subsystem, there are people who've been doing this for decades.
I think the other point worth reiterating is that most of these devices are unobtanium for your average kernel maintainer. It's hard to create a subsystem standard when you don't have access to a collection of devices + the complete picture of what the stack is doing and how it interoperates with the ecosystem at large, not just the kernel. Kernel maintainers need to help ensure there is a viable ecosystem beyond the kernel before merging stuff that is clearly a large kernel + user stack architecture. I.e. misc USB drivers: merge away; misc small layer drivers for larger vendor-specific ecosystems: we need to tread more carefully, as long-term we do nobody any favours.
Dave.
-Daniel
Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
Hi
Am 17.05.21 um 09:40 schrieb Daniel Vetter:
On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
Dear kernel maintainers,
This submission is a kernel driver to support Intel(R) Gaussian & Neural Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor available on multiple Intel platforms. AI developers and users can offload continuous inference workloads to an Intel(R) GNA device in order to free processor resources and save power. Noise reduction and speech recognition are examples of the workloads Intel(R) GNA handles, though its usage is not limited to these two.
How does this compare with the "nnpi" driver being proposed here: https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
Please work with those developers to share code and userspace api and tools. Having the community review two totally different apis and drivers for the same type of functionality from the same company is totally wasteful of our time and energy.
Agreed, but I think we should go further than this and work towards a subsystem across companies for machine learning and neural networks accelerators for both inferencing and training.
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
I hope this was a joke.
Just some thoughts:
AFAICT AI first came as an application of GPUs, but has now evolved/specialized into something of its own. I can imagine sharing some code among the various subsystems, say GEM/TTM internals for memory management. Besides that there's probably little that can be shared in the userspace interfaces. A GPU is a device that puts an image onto the screen and an AI accelerator isn't. Treating both as the same, even if they share similar chip architectures, seems like a stretch. They might evolve in different directions and fit less and less under the same umbrella.
And as Dave mentioned, these devices are hard to obtain. We don't really know what we sign up for.
Just my 2 cents.
Best regards Thomas
On Mon, May 17, 2021 at 3:12 PM Thomas Zimmermann tzimmermann@suse.de wrote:
Hi
Am 17.05.21 um 09:40 schrieb Daniel Vetter:
On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
Dear kernel maintainers,
This submission is a kernel driver to support Intel(R) Gaussian & Neural Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor available on multiple Intel platforms. AI developers and users can offload continuous inference workloads to an Intel(R) GNA device in order to free processor resources and save power. Noise reduction and speech recognition are examples of the workloads Intel(R) GNA handles, though its usage is not limited to these two.
How does this compare with the "nnpi" driver being proposed here: https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
Please work with those developers to share code and userspace api and tools. Having the community review two totally different apis and drivers for the same type of functionality from the same company is totally wasteful of our time and energy.
Agreed, but I think we should go further than this and work towards a subsystem across companies for machine learning and neural networks accelerators for both inferencing and training.
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
I hope this was a joke.
Just some thoughts:
AFAICT AI first came as an application of GPUs, but has now evolved/specialized into something of its own. I can imagine sharing some code among the various subsystems, say GEM/TTM internals for memory management. Besides that there's probably little that can be shared in the userspace interfaces. A GPU is a device that puts an image onto the screen and an AI accelerator isn't. Treating both as the same, even if they share similar chip architectures, seems like a stretch. They might evolve in different directions and fit less and less under the same umbrella.
Putting something on the screen is just a tiny part of what GPUs do these days. Many GPUs don't even have display hardware anymore. Even with drawing APIs, it's just some operation that you do with memory. The display may be another device entirely. GPUs also do video encode and decode, jpeg acceleration, etc. drivers/gpu seems like a logical place to me. Call it drivers/accelerators if you like. Other than modesetting, most of the shared infrastructure in drivers/gpu is around memory management and synchronization, which are the hard parts. Better to try and share that than to reinvent it in some other subsystem.
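To make the synchronization point concrete: dma_fence is already the shared completion primitive on the drm side, and nothing about it is graphics-specific. Here's a minimal kernel-side sketch of an accelerator driver reusing it; the "acc" driver and its job model are invented for illustration, while the dma_fence API itself is real:

#include <linux/dma-fence.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

static const char *acc_fence_get_driver_name(struct dma_fence *f)
{
        return "acc";
}

static const char *acc_fence_get_timeline_name(struct dma_fence *f)
{
        return "acc-job";
}

static const struct dma_fence_ops acc_fence_ops = {
        .get_driver_name = acc_fence_get_driver_name,
        .get_timeline_name = acc_fence_get_timeline_name,
};

static DEFINE_SPINLOCK(acc_fence_lock);

/* Allocate a fence for a submitted job; the IRQ handler calls
 * dma_fence_signal() on it once the hardware reports completion, and
 * anything in the kernel (or userspace, via a sync_file fd) can wait
 * on it. */
struct dma_fence *acc_job_fence_create(u64 context, u64 seqno)
{
        struct dma_fence *fence = kzalloc(sizeof(*fence), GFP_KERNEL);

        if (!fence)
                return NULL;
        dma_fence_init(fence, &acc_fence_ops, &acc_fence_lock,
                       context, seqno);
        return fence;
}

Any new accel subsystem would need exactly this, plus the cross-driver sharing rules that come with it.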
Alex
And as Dave mentioned, these devices are hard to obtain. We don't really know what we sign up for.
Just my 2 cents.
Best regards Thomas
-- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 Nürnberg, Germany (HRB 36809, AG Nürnberg) Geschäftsführer: Felix Imendörffer
On Mon, May 17, 2021 at 9:23 PM Alex Deucher alexdeucher@gmail.com wrote:
On Mon, May 17, 2021 at 3:12 PM Thomas Zimmermann tzimmermann@suse.de wrote:
Hi
Am 17.05.21 um 09:40 schrieb Daniel Vetter:
On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
Dear kernel maintainers,
This submission is a kernel driver to support Intel(R) Gaussian & Neural Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor available on multiple Intel platforms. AI developers and users can offload continuous inference workloads to an Intel(R) GNA device in order to free processor resources and save power. Noise reduction and speech recognition are examples of the workloads Intel(R) GNA handles, though its usage is not limited to these two.
How does this compare with the "nnpi" driver being proposed here: https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
Please work with those developers to share code and userspace api and tools. Having the community review two totally different apis and drivers for the same type of functionality from the same company is totally wasteful of our time and energy.
Agreed, but I think we should go further than this and work towards a subsystem across companies for machine learning and neural networks accelerators for both inferencing and training.
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
I hope this was a joke.
Just some thoughts:
AFAICT AI first came as an application of GPUs, but has now evolved/specialized into something of its own. I can imagine sharing some code among the various subsystems, say GEM/TTM internals for memory management. Besides that there's probably little that can be shared in the userspace interfaces. A GPU is a device that puts an image onto the screen and an AI accelerator isn't. Treating both as the same, even if they share similar chip architectures, seems like a stretch. They might evolve in different directions and fit less and less under the same umbrella.
Putting something on the screen is just a tiny part of what GPUs do these days. Many GPUs don't even have display hardware anymore. Even with drawing APIs, it's just some operation that you do with memory. The display may be another device entirely. GPUs also do video encode and decode, jpeg acceleration, etc. drivers/gpu seems like a logical place to me. Call it drivers/accelerators if you like. Other than modesetting, most of the shared infrastructure in drivers/gpu is around memory management and synchronization, which are the hard parts. Better to try and share that than to reinvent it in some other subsystem.
Maybe to add: most of our driver stack is in userspace (like for NN/AI chips too), both where high amounts of code sharing are the norm (like with mesa3d) and in areas where the landscape is a lot more fragmented (like compute and media, where the userspace driver APIs are all different for each vendor, or at least highly specialized). That's another thing which I don't think any other kernel subsystem has, at least not as much as we do.
So for everything from the big design questions on how the overall stack is organized down to details like code sharing, drivers/g^Hxpu should be the best place. Aside from the pesky problem that we do actually look at the userspace side and have some expectations on that too, not just on the kernel code alone. -Daniel
Alex
And as Dave mentioned, these devices are hard to obtain. We don't really know what we sign up for.
Just my 2 cents.
Best regards Thomas
Hi
Am 17.05.21 um 21:23 schrieb Alex Deucher:
On Mon, May 17, 2021 at 3:12 PM Thomas Zimmermann tzimmermann@suse.de wrote:
Hi
Am 17.05.21 um 09:40 schrieb Daniel Vetter:
On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
Dear kernel maintainers,
This submission is a kernel driver to support Intel(R) Gaussian & Neural Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor available on multiple Intel platforms. AI developers and users can offload continuous inference workloads to an Intel(R) GNA device in order to free processor resources and save power. Noise reduction and speech recognition are examples of the workloads Intel(R) GNA handles, though its usage is not limited to these two.
How does this compare with the "nnpi" driver being proposed here: https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
Please work with those developers to share code and userspace api and tools. Having the community review two totally different apis and drivers for the same type of functionality from the same company is totally wasteful of our time and energy.
Agreed, but I think we should go further than this and work towards a subsystem across companies for machine learning and neural networks accelerators for both inferencing and training.
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
I hope this was a joke.
Just some thoughts:
AFAICT AI first came as an application of GPUs, but has now evolved/specialized into something of its own. I can imagine sharing some code among the various subsystems, say GEM/TTM internals for memory management. Besides that there's probably little that can be shared in the userspace interfaces. A GPU is a device that puts an image onto the screen and an AI accelerator isn't. Treating both as the same, even if they share similar chip architectures, seems like a stretch. They might evolve in different directions and fit less and less under the same umbrella.
Putting something on the screen is just a tiny part of what GPUs do these days. Many GPUs don't even have display hardware anymore. Even with drawing APIs, it's just some operation that you do with memory. The display may be another device entirely. GPUs also do video encode and decode, jpeg acceleration, etc. drivers/gpu seems like a logical place to me. Call it drivers/accelerators if you like. Other than modesetting, most of the shared infrastructure in drivers/gpu is around memory management and synchronization, which are the hard parts. Better to try and share that than to reinvent it in some other subsystem.
I'm not sure whether we're on the same page or not.
I look at this from the UAPI perspective: the only interfaces that we really standardize among GPUs are modesetting, dumb buffers, and GEM. The sophisticated rendering is done with per-driver interfaces. And modesetting is the thing that AI does not do.
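(Just to illustrate how small that standardized subset is: the snippet below is more or less the whole portable dumb-buffer dance, and it should work against any KMS driver. Built against the drm uapi headers; error handling dropped for brevity.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/drm.h>
#include <drm/drm_mode.h>

int main(void)
{
        int fd = open("/dev/dri/card0", O_RDWR);
        struct drm_mode_create_dumb create = {
                .width = 640, .height = 480, .bpp = 32,
        };
        struct drm_mode_map_dumb map = {0};
        void *fb;

        /* Allocate a driver-backed buffer. */
        ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);

        /* Get an mmap offset for it and map it. */
        map.handle = create.handle;
        ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map);
        fb = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
                  MAP_SHARED, fd, map.offset);

        memset(fb, 0xff, create.size); /* all-white framebuffer */
        printf("dumb buffer: handle %u, pitch %u, size %llu\n",
               create.handle, create.pitch,
               (unsigned long long)create.size);
        return 0;
}

Everything beyond this, i.e. actually rendering into such a buffer efficiently, is per-driver territory.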
Sharing common code among subsystems is not a problem. Many of our more-sophisticated helpers are located in DRM because no other subsystems have the requirements yet. Maybe AI now has them, and we can move the respective shareable code to a common location. But AI is still no GPU. To give a bad analogy: GPUs transmit audio these days. Yet we don't treat them as sound cards.
Best regards Thomas
Alex
And as Dave mentioned, these devices are hard to obtain. We don't really know what we sign up for.
Just my 2 cents.
Best regards Thomas
On Mon, May 17, 2021 at 9:49 PM Thomas Zimmermann tzimmermann@suse.de wrote:
Hi
Am 17.05.21 um 21:23 schrieb Alex Deucher:
On Mon, May 17, 2021 at 3:12 PM Thomas Zimmermann tzimmermann@suse.de wrote:
Hi
Am 17.05.21 um 09:40 schrieb Daniel Vetter:
On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
Dear kernel maintainers,
This submission is a kernel driver to support Intel(R) Gaussian & Neural Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor available on multiple Intel platforms. AI developers and users can offload continuous inference workloads to an Intel(R) GNA device in order to free processor resources and save power. Noise reduction and speech recognition are examples of the workloads Intel(R) GNA handles, though its usage is not limited to these two.
How does this compare with the "nnpi" driver being proposed here: https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
Please work with those developers to share code and userspace api and tools. Having the community review two totally different apis and drivers for the same type of functionality from the same company is totally wasteful of our time and energy.
Agreed, but I think we should go further than this and work towards a subsystem across companies for machine learning and neural networks accelerators for both inferencing and training.
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
I hope this was a joke.
Just some thoughts:
AFAICT AI first came as an application of GPUs, but has now evolved/specialized into something of its own. I can imagine sharing some code among the various subsystems, say GEM/TTM internals for memory management. Besides that there's probably little that can be shared in the userspace interfaces. A GPU is a device that puts an image onto the screen and an AI accelerator isn't. Treating both as the same, even if they share similar chip architectures, seems like a stretch. They might evolve in different directions and fit less and less under the same umbrella.
Putting something on the screen is just a tiny part of what GPUs do these days. Many GPUs don't even have display hardware anymore. Even with drawing APIs, it's just some operation that you do with memory. The display may be another device entirely. GPUs also do video encode and decode, jpeg acceleration, etc. drivers/gpu seems like a logical place to me. Call it drivers/accelerators if you like. Other than modesetting, most of the shared infrastructure in drivers/gpu is around memory management and synchronization, which are the hard parts. Better to try and share that than to reinvent it in some other subsystem.
I'm not sure whether we're on the same page or not.
I look at this from the UAPI perspective: the only interfaces that we really standardize among GPUs are modesetting, dumb buffers, and GEM. The sophisticated rendering is done with per-driver interfaces. And modesetting is the thing that AI does not do.
Yeah, but the people who know what should be standardized and what should not be standardized for accel drivers are here. Because we've done both models in the past, and pretty much everything in between.
Also like Daniel said, we support hw (and know how to drive it) for anything from "kernel bashes register values" (gpus worked like that 20 years ago) to "mostly direct userspace submit" (amdkfd and parts of nouveau work like this).
There isn't any other subsystem with that much knowledge about how to stand up the entire accelerator stack and not make it suck too badly. That is the real value of dri-devel and the community we have here, not the code sharing we occasionally tend to do.
Sharing common code among subsystems is not a problem. Many of our more-sophisticated helpers are located in DRM because no other subsystems have the requirements yet. Maybe AI now has them, and we can move the respective shareable code to a common location. But AI is still no GPU. To give a bad analogy: GPUs transmit audio these days. Yet we don't treat them as sound cards.
We actually do, there are full-blown sound drivers for them over in sound/ (ok I think they're all in sound/hda for pci gpus or in sound/soc actually). There's some glue to tie it together because it requires coordination between the gpu and sound side of things, but that's it.
Also I think it would be extremely silly to remove all the drm_ stuff just because it originated from GPUs and therefore absolutely cannot be used by other accelerators. I'm not seeing the point in that, but if someone has a convincing technical argument for this we could do it. A tree-wide s/drm_/xpu_/ might make some sense, perhaps, if that makes people more comfortable with the idea of reusing code from gpu origins for accelerators in general. -Daniel
Hi
Am 17.05.21 um 22:00 schrieb Daniel Vetter:
Sharing common code among subsystems is not a problem. Many of our more-sophisticated helpers are located in DRM because no other subsystems have the requirements yet. Maybe AI now has them, and we can move the respective shareable code to a common location. But AI is still no GPU. To give a bad analogy: GPUs transmit audio these days. Yet we don't treat them as sound cards.
We actually do, there are full-blown sound drivers for them over in sound/ (ok I think they're all in sound/hda for pci gpus or in sound/soc actually). There's some glue to tie it together because it requires coordination between the gpu and sound side of things, but that's it.
I know. But we don't merge both subsystems just because the devices have some overlap in functionality.
Best regards Thomas
Also I think it would be extremely silly to remove all the drm_ stuff just because it originated from GPUs and therefore absolutely cannot be used by other accelerators. I'm not seeing the point in that, but if someone has a convincing technical argument for this we could do it. A tree-wide s/drm_/xpu_/ might make some sense, perhaps, if that makes people more comfortable with the idea of reusing code from gpu origins for accelerators in general. -Daniel
Hi,
On Mon, 17 May 2021 at 20:12, Thomas Zimmermann tzimmermann@suse.de wrote:
Am 17.05.21 um 09:40 schrieb Daniel Vetter:
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
I hope this was a joke.
Just some thoughts:
AFAICT AI first came as an application of GPUs, but has now evolved/specialized into something of its own. I can imagine sharing some code among the various subsystems, say GEM/TTM internals for memory management. Besides that there's probably little that can be shared in the userspace interfaces. A GPU is a device that puts an image onto the screen and an AI accelerator isn't.
But it isn't. A GPU is a device that has a kernel-arbitrated MMU hosting kernel-managed buffers, executes user-supplied compiled programs with reference to those buffers and other jobs, and informs the kernel about progress.
KMS lies under the same third-level directory, but even when GPU and display are on the same die, they're totally different IP blocks developed on different schedules which are just periodically glued together.
Treating both as the same, even if they share similar chip architectures, seems like a stretch. They might evolve in different directions and fit less and less under the same umbrella.
Why not? All we have in common in GPU land right now is MMU + buffer references + job scheduling + synchronisation. None of this has a common top-level API, or even a common top-level model. It's not just ISA differences: we have very old-school devices where the kernel needs to fill registers for every job, living next to middle-age devices where the kernel and userspace co-operate to fill a ring buffer, living next to modern devices where userspace does some stuff and then the hardware makes it happen with the bare minimum of kernel awareness.
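(For readers who haven't written one of these, the middle-age model looks roughly like the sketch below, with every name invented: the kernel validates a job, copies its commands into a ring shared with the hardware, and rings a doorbell. The old-school and modern models replace this with per-job register writes or with userspace-owned rings respectively.)

#include <linux/io.h>
#include <linux/types.h>

#define ACC_RING_DOORBELL 0x20 /* hypothetical register offset */

struct acc_ring {
        void __iomem *mmio; /* device register mapping */
        u32 *vaddr;         /* kernel mapping of the ring buffer */
        u32 tail;           /* next entry the kernel will write */
        u32 size;           /* entries in the ring, power of two */
};

/* Copy one already-validated command word into the ring. */
static void acc_ring_emit(struct acc_ring *ring, u32 cmd)
{
        ring->vaddr[ring->tail] = cmd;
        ring->tail = (ring->tail + 1) & (ring->size - 1);
}

/* Publish a validated command stream and ring the doorbell so the
 * hardware consumes entries up to the new tail. */
static void acc_ring_submit(struct acc_ring *ring,
                            const u32 *cmds, unsigned int n)
{
        unsigned int i;

        for (i = 0; i < n; i++)
                acc_ring_emit(ring, cmds[i]);

        writel(ring->tail, ring->mmio + ACC_RING_DOORBELL);
}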
Honestly I think there's more difference between lima and amdgpu than there is between amdgpu and current NN/ML devices.
Cheers, Daniel
Hi
Am 17.05.21 um 21:32 schrieb Daniel Stone:
Hi,
On Mon, 17 May 2021 at 20:12, Thomas Zimmermann tzimmermann@suse.de wrote:
Am 17.05.21 um 09:40 schrieb Daniel Vetter:
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
I hope this was a joke.
Just some thoughts:
AFAICT AI first came as an application of GPUs, but has now evolved/specialized into something of its own. I can imagine sharing some code among the various subsystems, say GEM/TTM internals for memory management. Besides that there's probably little that can be shared in the userspace interfaces. A GPU is a device that puts an image onto the screen and an AI accelerator isn't.
But it isn't. A GPU is a device that has a kernel-arbitrated MMU hosting kernel-managed buffers, executes user-supplied compiled programs with reference to those buffers and other jobs, and informs the kernel about progress.
KMS lies under the same third-level directory, but even when GPU and display are on the same die, they're totally different IP blocks developed on different schedules which are just periodically glued together.
I mentioned this elsewhere: it's not about the chip architecture, it's about the UAPI. In the end, the GPU is about displaying things on a screen. Even if the rendering and the scanout engines are on different IP blocks. (Or different devices.)
The fact that one can do general purpose computing on a GPU is a byproduct of the evolution of graphics hardware. It never was the goal.
Treating both as the same, even if they share similar chip architectures, seems like a stretch. They might evolve in different directions and fit less and less under the same umbrella.
Why not? All we have in common in GPU land right now is MMU + buffer references + job scheduling + synchronisation. None of this has a common top-level API, or even a common top-level model. It's not just ISA differences: we have very old-school devices where the kernel needs to fill registers for every job, living next to middle-age devices where the kernel and userspace co-operate to fill a ring buffer, living next to modern devices where userspace does some stuff and then the hardware makes it happen with the bare minimum of kernel awareness.
I see all this as an example of why AI should not live under gpu/. There are already many generations of GPUs with different feature sets supported. Why lump more behind the same abstractions if AI can take a fresh start? Why should we care about AI, and why should AI care about all our legacy?
We can still share all the internal code if AI needs any of it. Meanwhile AI drivers can provide their own UAPIs until a common framework emerges.
Again, just my 2 cents.
Best regards Thomas
Honestly I think there's more difference between lima and amdgpu than there is between amdgpu and current NN/ML devices.
Cheers, Daniel
On Mon, May 17, 2021 at 10:10 PM Thomas Zimmermann tzimmermann@suse.de wrote:
Hi
Am 17.05.21 um 21:32 schrieb Daniel Stone:
Hi,
On Mon, 17 May 2021 at 20:12, Thomas Zimmermann tzimmermann@suse.de wrote:
Am 17.05.21 um 09:40 schrieb Daniel Vetter:
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
I hope this was a joke.
Just some thoughts:
AFAICT AI first came as an application of GPUs, but has now evolved/specialized into something of its own. I can imagine sharing some code among the various subsystems, say GEM/TTM internals for memory management. Besides that there's probably little that can be shared in the userspace interfaces. A GPU is a device that puts an image onto the screen and an AI accelerator isn't.
But it isn't. A GPU is a device that has a kernel-arbitrated MMU hosting kernel-managed buffers, executes user-supplied compiled programs with reference to those buffers and other jobs, and informs the kernel about progress.
KMS lies under the same third-level directory, but even when GPU and display are on the same die, they're totally different IP blocks developed on different schedules which are just periodically glued together.
I mentioned this elsewhere: it's not about the chip architecture, it's about the UAPI. In the end, the GPU is about displaying things on a screen. Even if the rendering and the scanout engines are on different IP blocks. (Or different devices.)
Sure, but that's ignoring the reality that there's an enormous amount of code needed to make this rendering possible. All of which keeps existing if you take away the display, use your gpu to do compute, throw out the raster and texture fetch blocks, and rebalance your compute units to be much faster at the bfloat16 and u8 math (or whatever it is the NN people love today) than fp32, where traditional render gpus are king. At that point you have a NN/AI chip, and like Daniel Stone says, the difference here is often much smaller than the difference between drm/lima and drm/amdgpu. Which at least on the 3d side happen to share large chunks of our stack (more sharing in userspace than the kernel, but still quite some sharing overall in concepts and code).
There's overall substantially more code needed to make this work than in the modeset drivers you think are the cornerstone of a gpu driver.
Also, if you want to do broad-strokes refactoring like pulling the memory management/command submission stuff out of drm, then the right thing would be to pull the modeset stuff out and maybe merge it with v4l. Modesetting was a 10-years-later addition to drm; this entire thing started with memory/command submission management.
And a lot of people got rather mad that the drm folks reinvented their own modeset api and didn't use one of the existing ones. We've eclipsed them by now with atomic support, so it's a somewhat moot point now, but it wasn't when it landed 10 years ago.
The fact that one can do general purpose computing on a GPU is a byproduct of the evolution of graphics hardware. It never was the goal.
I think we've now crossed the point where 50% of gpu sales are displayless. It stopped being a byproduct long ago and became the main goal in many areas and markets.
But also the core of drivers/gpu _is_ the memory management stuff. That's what this subsystem has been doing for 20 years or so by now. The modeset stuff is a comparatively recent addition (but has grown a lot thanks to tons of new drivers that landed and fbdev essentially dying).
Treating both as the same, even if they share similar chip architectures, seems like a stretch. They might evolve in different directions and fit less and less under the same umbrella.
Why not? All we have in common in GPU land right now is MMU + buffer references + job scheduling + synchronisation. None of this has a common top-level API, or even a common top-level model. It's not just ISA differences: we have very old-school devices where the kernel needs to fill registers for every job, living next to middle-age devices where the kernel and userspace co-operate to fill a ring buffer, living next to modern devices where userspace does some stuff and then the hardware makes it happen with the bare minimum of kernel awareness.
I see all this as an example of why AI should not live under gpu/. There are already many generations of GPUs with different feature sets supported. Why lump more behind the same abstractions if AI can take a fresh start? Why should we care about AI, and why should AI care about all our legacy?
"Fresh start" here means "ignore all the lessons learned from 20 years of accelerator driver hacking", I think.
We can still share all the internal code if AI needs any of it. Meanwhile AI drivers can provide their own UAPIs until a common framework emerges.
Again, the no. 1 lesson of writing accel drivers is that you need the fully open userspace stack, or it's game over long-term. No amount of "we'll share code later on" will save you from that, because that's just not going to be an option. There are a few other lessons, like that you don't actually want a standardized uapi for accelerator command submission and memory management, but there are some standardized approaches that make sense (we've probably tried them all).
This has nothing to do with how you organize the kernel subsystem, but all about how you set up the overall driver stack. Of which the userspace side is the important part.
And back to your point that display is the main reason why drivers/gpu exists: none of this has anything to do with display, but is exactly what the render _accelerator_ part of dri-devel has been doing for a rather long time by now. Which is why other accelerators should probably do the same thing instead of going "nah we're different, there's no DP output connected to our accelerator".
Cheers, Daniel
PS: Also there are NN chips with DP/HDMI ports thrown in for the lolz. Turns out that these NN things are pretty useful when building video processing pipelines.
Again, just my 2 cents.
Best regards Thomas
Honestly I think there's more difference between lima and amdgpu than there is between amdgpu and current NN/ML devices.
Cheers, Daniel
On Tue, 18 May 2021 at 06:10, Thomas Zimmermann tzimmermann@suse.de wrote:
Hi
Am 17.05.21 um 21:32 schrieb Daniel Stone:
Hi,
On Mon, 17 May 2021 at 20:12, Thomas Zimmermann tzimmermann@suse.de wrote:
Am 17.05.21 um 09:40 schrieb Daniel Vetter:
We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu, or think of the G as in General, not Graphics.
I hope this was a joke.
Just some thoughts:
AFAICT AI first came as an application of GPUs, but has now evolved/specialized into something of its own. I can imagine sharing some code among the various subsystems, say GEM/TTM internals for memory management. Besides that there's probably little that can be shared in the userspace interfaces. A GPU is a device that puts an image onto the screen and an AI accelerator isn't.
But it isn't. A GPU is a device that has a kernel-arbitrated MMU hosting kernel-managed buffers, executes user-supplied compiled programs with reference to those buffers and other jobs, and informs the kernel about progress.
KMS lies under the same third-level directory, but even when GPU and display are on the same die, they're totally different IP blocks developed on different schedules which are just periodically glued together.
I mentioned this elsewhere: it's not about the chip architecture, it's about the UAPI. In the end, the GPU is about displaying things on a screen. Even if the rendering and the scanout engines are on different IP blocks. (Or different devices.)
The fact that one can do general purpose computing on a GPU is a byproduct of the evolution of graphics hardware. It never was the goal.
But then we would have a subsystem for AI accelerators excluding GPUs; do we then start to layer that subsystem onto drivers/gpu? At which point, why bother?
The thing is, UAPI and stack architecture are important, but what is more important than any of that is that there is a place where the people invested in the area can come together outside of company boundaries, discuss ideas, bounce designs around each other, and come to an agreement without the overheads of company interactions. dri-devel + mesa have managed this for graphics, but it's taken years, and we are still fighting that battle within major companies who, even when they know it produces good results, can't drag themselves to give up control over anything unless given no other choice.
I expect the accel teams in these companies need to step outside their productization timelines and powerpoints and start discussing uAPI designs with the other companies in the area. Until that happens, I expect upstreaming any of these should be a default no.
Dave.