Hi,
Ok, next round. Patches 1+2 are unmodified, driver updates are left out for now.
Patch #3 adds a new drm_mode_legacy_fb_format_he() function instead of changing drm_mode_legacy_fb_format behavior, so we keep things working for now.
Comments?
Suggestions on how to handle the drm_mode_legacy_fb_format() call in drm_mode_addfb()? Add some driver flag?
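To make that last question a bit more concrete, the driver-flag idea could look roughly like this inside drm_mode_addfb() (untested sketch; the DRIVER_PREFER_HOST_BYTE_ORDER flag and the helper name are invented here, the rest mirrors the existing code path):

static u32 addfb_pick_format(struct drm_device *dev,
			     const struct drm_mode_fb_cmd *or)
{
	/* DRIVER_PREFER_HOST_BYTE_ORDER is made up for this sketch */
	if (drm_core_check_feature(dev, DRIVER_PREFER_HOST_BYTE_ORDER))
		return drm_mode_legacy_fb_format_he(or->bpp, or->depth);
	return drm_mode_legacy_fb_format(or->bpp, or->depth);
}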
Gerd Hoffmann (3):
  drm: fourcc byteorder: drop DRM_FORMAT_BIG_ENDIAN
  drm: fourcc byteorder: add DRM_FORMAT_CPU_*
  drm: fourcc byteorder: add drm_mode_legacy_fb_format_he

 include/drm/drm_fourcc.h          | 15 +++++++++++
 include/uapi/drm/drm_fourcc.h     |  2 --
 drivers/gpu/drm/drm_fourcc.c      | 57 ++++++++++++++++++++++++++++++++++++---
 drivers/gpu/drm/drm_framebuffer.c |  2 +-
 4 files changed, 70 insertions(+), 6 deletions(-)
It's unused.
Suggested-by: Daniel Vetter daniel.vetter@intel.com
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
---
 include/uapi/drm/drm_fourcc.h     | 2 --
 drivers/gpu/drm/drm_fourcc.c      | 3 +--
 drivers/gpu/drm/drm_framebuffer.c | 2 +-
 3 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
index 995c8f9c69..305bc34be0 100644
--- a/include/uapi/drm/drm_fourcc.h
+++ b/include/uapi/drm/drm_fourcc.h
@@ -33,8 +33,6 @@ extern "C" {
 #define fourcc_code(a, b, c, d) ((__u32)(a) | ((__u32)(b) << 8) | \
 				 ((__u32)(c) << 16) | ((__u32)(d) << 24))
 
-#define DRM_FORMAT_BIG_ENDIAN (1<<31) /* format is big endian instead of little endian */
-
 /* color index */
 #define DRM_FORMAT_C8		fourcc_code('C', '8', ' ', ' ') /* [7:0] C */
 
diff --git a/drivers/gpu/drm/drm_fourcc.c b/drivers/gpu/drm/drm_fourcc.c
index 9c0152df45..adb3ff59a4 100644
--- a/drivers/gpu/drm/drm_fourcc.c
+++ b/drivers/gpu/drm/drm_fourcc.c
@@ -86,12 +86,11 @@ EXPORT_SYMBOL(drm_mode_legacy_fb_format);
 const char *drm_get_format_name(uint32_t format, struct drm_format_name_buf *buf)
 {
 	snprintf(buf->str, sizeof(buf->str),
-		 "%c%c%c%c %s-endian (0x%08x)",
+		 "%c%c%c%c (0x%08x)",
 		 printable_char(format & 0xff),
 		 printable_char((format >> 8) & 0xff),
 		 printable_char((format >> 16) & 0xff),
 		 printable_char((format >> 24) & 0x7f),
-		 format & DRM_FORMAT_BIG_ENDIAN ? "big" : "little",
 		 format);
 
 	return buf->str;
diff --git a/drivers/gpu/drm/drm_framebuffer.c b/drivers/gpu/drm/drm_framebuffer.c
index fc8ef42203..efe8b5ece5 100644
--- a/drivers/gpu/drm/drm_framebuffer.c
+++ b/drivers/gpu/drm/drm_framebuffer.c
@@ -152,7 +152,7 @@ static int framebuffer_check(struct drm_device *dev,
 	int i;
 
 	/* check if the format is supported at all */
-	info = __drm_format_info(r->pixel_format & ~DRM_FORMAT_BIG_ENDIAN);
+	info = __drm_format_info(r->pixel_format);
 	if (!info) {
 		struct drm_format_name_buf format_name;
 		DRM_DEBUG_KMS("bad framebuffer format %s\n",
Hi Gerd,
I did not have the chance to follow the discussion all the way through. Pardon if my suggestions have already been discussed.
On 2 May 2017 at 14:34, Gerd Hoffmann kraxel@redhat.com wrote:
It's unused.
Suggested-by: Daniel Vetter daniel.vetter@intel.com
Signed-off-by: Gerd Hoffmann kraxel@redhat.com

 include/uapi/drm/drm_fourcc.h     | 2 --
 drivers/gpu/drm/drm_fourcc.c      | 3 +--
 drivers/gpu/drm/drm_framebuffer.c | 2 +-
 3 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
index 995c8f9c69..305bc34be0 100644
--- a/include/uapi/drm/drm_fourcc.h
+++ b/include/uapi/drm/drm_fourcc.h
@@ -33,8 +33,6 @@ extern "C" {
 #define fourcc_code(a, b, c, d) ((__u32)(a) | ((__u32)(b) << 8) | \
 				 ((__u32)(c) << 16) | ((__u32)(d) << 24))
 
-#define DRM_FORMAT_BIG_ENDIAN (1<<31) /* format is big endian instead of little endian */
Weston references DRM_FORMAT_BIG_ENDIAN, thus this patch will lead to build breakage. That is never fun, so please carefully coordinate with the Weston devs to minimise the fireworks.
Personally I would leave the symbol, since it's UAPI, and document that it should not be used. Not my call, so feel free to ignore.
Thanks Emil
diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
index 995c8f9c69..305bc34be0 100644
--- a/include/uapi/drm/drm_fourcc.h
+++ b/include/uapi/drm/drm_fourcc.h
@@ -33,8 +33,6 @@ extern "C" {
 #define fourcc_code(a, b, c, d) ((__u32)(a) | ((__u32)(b) << 8) | \
 				 ((__u32)(c) << 16) | ((__u32)(d) << 24))
 
-#define DRM_FORMAT_BIG_ENDIAN (1<<31) /* format is big endian instead of little endian */
Weston references DRM_FORMAT_BIG_ENDIAN, thus this patch will lead to build breakage.
Personally I would leave the symbol, since it's UAPI, and document that it should not be used.
Fair enough. I can surely add a deprecated comment instead of just dropping it.
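Something along these lines, perhaps (the wording is only a first stab, based on the observation below that no in-kernel driver checks the flag):

/* Deprecated: no in-kernel driver checks this flag, do not use.
 * Kept only so existing userspace keeps building. */
#define DRM_FORMAT_BIG_ENDIAN (1<<31) /* format is big endian instead of little endian */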
I'm wondering how it is used though, given that none of the drivers in the kernel actually support this flag ...
cheers, Gerd
On Tue, 2 May 2017 14:53:43 +0100 Emil Velikov emil.l.velikov@gmail.com wrote:
Hi Gerd,
I did not have the chance to follow the discussion all the way through. Pardon if my suggestions have already been discussed.
On 2 May 2017 at 14:34, Gerd Hoffmann kraxel@redhat.com wrote:
It's unused.
Suggested-by: Daniel Vetter daniel.vetter@intel.com
Signed-off-by: Gerd Hoffmann kraxel@redhat.com

 include/uapi/drm/drm_fourcc.h     | 2 --
 drivers/gpu/drm/drm_fourcc.c      | 3 +--
 drivers/gpu/drm/drm_framebuffer.c | 2 +-
 3 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
index 995c8f9c69..305bc34be0 100644
--- a/include/uapi/drm/drm_fourcc.h
+++ b/include/uapi/drm/drm_fourcc.h
@@ -33,8 +33,6 @@ extern "C" {
 #define fourcc_code(a, b, c, d) ((__u32)(a) | ((__u32)(b) << 8) | \
 				 ((__u32)(c) << 16) | ((__u32)(d) << 24))
 
-#define DRM_FORMAT_BIG_ENDIAN (1<<31) /* format is big endian instead of little endian */
Weston references DRM_FORMAT_BIG_ENDIAN, thus this patch will lead to build breakage. That is never fun, so please carefully coordinate with the Weston devs to minimise the fireworks.
Personally I would leave the symbol, since it's UAPI, and document that it should not be used. Not my call, so feel free to ignore.
Hi,
Indeed, weston does have one occurrence of it. I don't think it has ever actually been needed in practice, but removing it will cause a build failure: https://cgit.freedesktop.org/wayland/weston/tree/libweston/gl-renderer.c?id=... Funnily enough, it's only ever used to get rid of the bit, "just in case".
I also think that this patch needs more explanation than the commit message currently provides.
Removing the definition also removes the possibility to describe a lot of pixel formats, so that should definitely be mentioned. I think it would also be good to have some kind of justified claim that no hardware actually needs the pixel formats this is removing (e.g. RGB565 BE). Maybe this was already in the long discussions, but I feel it should be summarized in the commit message.
Thanks, pq
Hi,
I also think that this patch needs more explanation than the commit message currently provides.
Removing the definition also removes the possibility to describe a lot of pixel formats, so that should definitely be mentioned. I think it would also be good to have some kind of justified claim that no hardware actually needs the pixel formats this is removing (e.g. RGB565 BE).
That and RGB2101010 BE are the only candidates I can think of.
Dealing with those in non-native byte order is a PITA though. Display hardware has been little endian (pci byte order) for quite a while already.
Maybe this was already in the long discussions, but I feel it should be summarized in the commit message.
Radeon and nvidia (nv40) cards were mentioned. I'll try to summarize (feel free to correct me if I'm wrong).
nvidia has support for 8 bit-per-color formats only on bigendian hosts. Not sure whether this is a driver or hardware limitation.
radeon looks different on pre-R600 and R600+ hardware.
pre-R600 can byteswap on cpu access, so the cpu view is bigendian whereas things are actually stored in little endian byte order. Hardware supports both 16bit and 32bit swaps. Used for fbdev emulation (probably 32bit swaps, but not fully sure). The xorg radeon driver doesn't use the byteswapping feature because it is a PITA when bo's are moved between vram and system memory.
R600+ supports bigendian framebuffer formats, so no byteswapping on access is needed. Not sure whether that includes 16bpp formats or whether this is limited to the 8 bit-per-color formats (similar to nvidia). The discussion focused on the pre-R600 hardware and how this on-cpu-access byteswapping is more of a problem than a help ...
cheers, Gerd
On Tue, May 2, 2017 at 11:06 AM, Gerd Hoffmann kraxel@redhat.com wrote:
Radeon and nvidia (nv40) cards were mentioned. I'll try to summarize (feel free to correct me if I'm wrong).
nvidia has support for 8 bit-per-color formats only on bigendian hosts. Not sure whether this is a driver or hardware limitation.
Let me just summarize the NVIDIA situation. First off, pre-nv50 and nv50+ are entirely different and unrelated beasts.
The (pre-nv50) hardware has (a few) "big endian mode" bits. Those bits are kind of unrelated to each other and control their own "domains". One of the domains is reading of the scanout fb. So as a result, the hardware can scan out XRGB8888, RGB565, and XRGB1555 stored in either little or big endian packings, irrespective of the "mode" that other parts of the hardware are in.
However there's the delicate little question of the GPU *generating* the data. These older GPUs don't have quite the format flexibility offered by newer hw. So only XRGB8888 is supported, packed in whatever "mode" the whole PGRAPH unit is in. (I say this because things seem to work when rendering using the XRGB8888 format while scanning out with the BE flag set.)
There are no APIs for controlling the endianness of each engine in nouveau, so it ends up being in "big endian" mode on BE hosts, which means the GPU can only render to big-endian-packed framebuffers.
None of this applies to nv50+ hw. (Although it might in broad strokes.)
Currently the driver is exposing XRGB8888 and ARGB8888 formats as that's what drm_crtc_init does for it. However the ARGB8888 format doesn't work (and shouldn't be exposed, the alpha is meaningless on a single-plane setup), and the XRGB8888 format is assumed to be packed in cpu host endian (and the "BE" bit is set accordingly).
Hope this helps!
-ilia
On 03/05/17 12:06 AM, Gerd Hoffmann wrote:
Removing the definition also removes the possibility to describe a lot of pixel formats, so that should definitely be mentioned. I think it would also be good to have some kind of justified claim that no hardware actually needs the pixel formats this is removing (e.g. RGB565 BE).
That and RGB2101010 BE are the only candidates I can think of.
Dealing with those in non-native byte order is a PITA though. Display hardware has been little endian (pci byte order) for quite a while already.
Maybe by default, but not exclusively.
radeon looks different on pre-R600 and R600+ hardware.
pre-R600 can byteswap on cpu access, so the cpu view is bigendian whereas things are actually stored in little endian byte order. Hardware supports both 16bit and 32bit swaps. Used for fbdev emulation (probably 32bit swaps, but not fully sure).
32-bit swaps for 32 bpp, 16-bit swaps for 16 bpp.
R600+ supports bigendian framebuffer formats, so no byteswapping on access is needed. Not sure whether that includes 16bpp formats or whether this is limited to the 8 bit-per-color formats [...]
It includes 16bpp. Looking at drivers/gpu/drm/radeon/atombios_crtc.c:dce4_crtc_do_set_base(), it sets up byte-swapping for all multi-byte formats, so it effectively treats all those formats as if they had DRM_FORMAT_BIG_ENDIAN set.
If the radeon (and amdgpu) driver were to be changed to use drm_mode_legacy_fb_format_he for >= R600, that must also handle 16 bpp, which requires DRM_FORMAT_BIG_ENDIAN. So I still don't see how that can be removed or even deprecated.
From Ilia's followup it sounds like there's a similar situation with nouveau.
Hi,
R600+ supports bigendian framebuffer formats, so no byteswapping on access is needed. Not sure whether that includes 16bpp formats or whether this is limited to the 8 bit-per-color formats [...]
It includes 16bpp. Looking at drivers/gpu/drm/radeon/atombios_crtc.c:dce4_crtc_do_set_base(), it sets up byte-swapping for all multi-byte formats, so it effectively treats all those formats as if they had DRM_FORMAT_BIG_ENDIAN set.
If the radeon (and amdgpu) driver were to be changed to use drm_mode_legacy_fb_format_he for >= R600, that must also handle 16 bpp, which requires DRM_FORMAT_BIG_ENDIAN. So I still don't see how that can be removed or even deprecated.
Ok.
Dropped patch #1.
Updated patch #2 to include all formats returned by drm_mode_legacy_fb_format, and also renamed them to DRM_FORMAT_HOST_*.
The question is how to go forward with patch #3. I'd prefer not to add drm_mode_legacy_fb_format_he if possible. Is there a chance to adapt the radeon and nvidia drivers to a fixed drm_mode_legacy_fb_format function (returning be formats on be) without invasive changes? Given they both treat formats as if they had DRM_FORMAT_BIG_ENDIAN set, this could (with the help of the extended patch #2) be a simple s/DRM_FORMAT_/DRM_FORMAT_HOST_/ at the right places ...
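For illustration, at each such place the change would just be something like this (the fmt variable is made up, this is not actual radeon/nouveau code):

	/* before (implicitly treated as host byte order by the driver) */
	fmt = DRM_FORMAT_XRGB8888;

	/* after (says explicitly what the driver already assumes) */
	fmt = DRM_FORMAT_HOST_XRGB8888;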
Michel? Ilia?
cheers, Gerd
On 03/05/17 06:24 PM, Gerd Hoffmann wrote:
Hi,
R600+ supports bigendian framebuffer formats, so no byteswapping on access is needed. Not sure whether that includes 16bpp formats or whether this is limited to the 8 bit-per-color formats [...]
It includes 16bpp. Looking at drivers/gpu/drm/radeon/atombios_crtc.c:dce4_crtc_do_set_base(), it sets up byte-swapping for all multi-byte formats, so it effectively treats all those formats as if they had DRM_FORMAT_BIG_ENDIAN set.
If the radeon (and amdgpu) driver were to be changed to use drm_mode_legacy_fb_format_he for >= R600, that must also handle 16 bpp, which requires DRM_FORMAT_BIG_ENDIAN. So I still don't see how that can be removed or even deprecated.
Ok.
Dropped patch #1.
Updated patch #2 to include all formats returned by drm_mode_legacy_fb_format, and also renamed them to DRM_FORMAT_HOST_*.
The question is how to go forward with patch #3. I'd prefer not to add drm_mode_legacy_fb_format_he if possible. Is there a chance to adapt the radeon and nvidia drivers to a fixed drm_mode_legacy_fb_format function (returning be formats on be) without invasive changes? Given they both treat formats as if they had DRM_FORMAT_BIG_ENDIAN set, this could (with the help of the extended patch #2) be a simple s/DRM_FORMAT_/DRM_FORMAT_HOST_/ at the right places ...
For radeon this doesn't work with pre-R600 GPUs, which only support little endian formats for display.
Add fourcc variants in cpu byte order. With these at hand we don't need #ifdefs in drivers that want to support framebuffers in cpu endianness.
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
---
 include/drm/drm_fourcc.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/drm/drm_fourcc.h b/include/drm/drm_fourcc.h
index 6942e84b6e..cae05153e8 100644
--- a/include/drm/drm_fourcc.h
+++ b/include/drm/drm_fourcc.h
@@ -25,6 +25,18 @@
 #include <linux/types.h>
 #include <uapi/drm/drm_fourcc.h>
 
+/*
+ * DRM formats are little endian. define cpu endian variants here, to
+ * reduce the #ifdefs needed in drivers.
+ */
+#ifdef __BIG_ENDIAN
+# define DRM_FORMAT_CPU_XRGB8888 DRM_FORMAT_BGRX8888
+# define DRM_FORMAT_CPU_ARGB8888 DRM_FORMAT_BGRA8888
+#else
+# define DRM_FORMAT_CPU_XRGB8888 DRM_FORMAT_XRGB8888
+# define DRM_FORMAT_CPU_ARGB8888 DRM_FORMAT_ARGB8888
+#endif
+
 struct drm_device;
 struct drm_mode_fb_cmd2;
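(Not part of the patch, just to illustrate the point of the aliases: this is the kind of per-driver #ifdef they replace; fmt is a made-up variable.)

	/* without the aliases, a driver wanting a cpu-endian XRGB8888 fb open-codes the check: */
#ifdef __BIG_ENDIAN
	fmt = DRM_FORMAT_BGRX8888;
#else
	fmt = DRM_FORMAT_XRGB8888;
#endif

	/* with the aliases it is simply: */
	fmt = DRM_FORMAT_CPU_XRGB8888;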
Add drm_mode_legacy_fb_format variant which returns fourcc codes for framebuffer format in host byte order.
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
---
 include/drm/drm_fourcc.h     |  3 +++
 drivers/gpu/drm/drm_fourcc.c | 54 +++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/include/drm/drm_fourcc.h b/include/drm/drm_fourcc.h
index cae05153e8..729e9d1b11 100644
--- a/include/drm/drm_fourcc.h
+++ b/include/drm/drm_fourcc.h
@@ -73,7 +73,10 @@ const struct drm_format_info *drm_format_info(u32 format);
 const struct drm_format_info *
 drm_get_format_info(struct drm_device *dev,
 		    const struct drm_mode_fb_cmd2 *mode_cmd);
+
 uint32_t drm_mode_legacy_fb_format(uint32_t bpp, uint32_t depth);
+uint32_t drm_mode_legacy_fb_format_he(uint32_t bpp, uint32_t depth);
+
 int drm_format_num_planes(uint32_t format);
 int drm_format_plane_cpp(uint32_t format, int plane);
 int drm_format_horz_chroma_subsampling(uint32_t format);
diff --git a/drivers/gpu/drm/drm_fourcc.c b/drivers/gpu/drm/drm_fourcc.c
index adb3ff59a4..97ff2cdf4e 100644
--- a/drivers/gpu/drm/drm_fourcc.c
+++ b/drivers/gpu/drm/drm_fourcc.c
@@ -36,12 +36,24 @@ static char printable_char(int c)
 }
 
 /**
- * drm_mode_legacy_fb_format - compute drm fourcc code from legacy description
+ * drm_mode_legacy_fb_format - compute drm fourcc code from legacy description,
+ * little endian.
  * @bpp: bits per pixels
  * @depth: bit depth per pixel
  *
+ * Deprecated, use drm_mode_legacy_fb_format_he instead.
+ *
  * Computes a drm fourcc pixel format code for the given @bpp/@depth values.
  * Useful in fbdev emulation code, since that deals in those values.
+ *
+ * Note that drm_mode_addfb (DRM_IOCTL_MODE_ADDFB implementation) uses this
+ * too.
+ *
+ * For historical reasons, this function returns fourcc codes for framebuffer
+ * formats in little endian byte order unconditionally, even though fbdev
+ * emulation expects the framebuffer in host byte order (i.e. big endian on
+ * big endian machines). Ideally we would simply fix this function, but that
+ * would break drivers expecting the broken behavior ...
  */
 uint32_t drm_mode_legacy_fb_format(uint32_t bpp, uint32_t depth)
 {
@@ -79,6 +91,46 @@ uint32_t drm_mode_legacy_fb_format(uint32_t bpp, uint32_t depth)
 EXPORT_SYMBOL(drm_mode_legacy_fb_format);
 
 /**
+ * drm_mode_legacy_fb_format_he - compute drm fourcc code from legacy
+ * description, host endian.
+ * @bpp: bits per pixels
+ * @depth: bit depth per pixel
+ *
+ * Computes a drm fourcc pixel format code for the given @bpp/@depth values.
+ * Useful in fbdev emulation code, since that deals in those values.
+ */
+uint32_t drm_mode_legacy_fb_format_he(uint32_t bpp, uint32_t depth)
+{
+#ifdef __BIG_ENDIAN
+	uint32_t fmt;
+
+	switch (bpp) {
+	case 8:
+		fmt = DRM_FORMAT_C8;
+		break;
+	case 24:
+		fmt = DRM_FORMAT_BGR888;
+		break;
+	case 32:
+		if (depth == 24)
+			fmt = DRM_FORMAT_BGRX8888;
+		else
+			fmt = DRM_FORMAT_BGRA8888;
+		break;
+	default:
+		DRM_ERROR("bad bpp, assuming b8g8r8x8 pixel format\n");
+		fmt = DRM_FORMAT_BGRX8888;
+		break;
+	}
+
+	return fmt;
+#else
+	return drm_mode_legacy_fb_format(bpp, depth);
+#endif
+}
+EXPORT_SYMBOL(drm_mode_legacy_fb_format_he);
+
+/**
  * drm_get_format_name - fill a string with a drm fourcc format's name
  * @format: format to compute name of
  * @buf: caller-supplied buffer
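(For context, not part of the patch: a typical caller would be fbdev emulation code, which thinks in bpp/depth terms. Sketch below assumes a drm_fb_helper fb_probe callback, where sizes is the struct drm_fb_helper_surface_size argument.)

	uint32_t fourcc;

	/* e.g. bpp=32, depth=24 -> XRGB8888 on LE hosts, BGRX8888 on BE hosts */
	fourcc = drm_mode_legacy_fb_format_he(sizes->surface_bpp,
					      sizes->surface_depth);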