https://bugs.freedesktop.org/show_bug.cgi?id=33078
Summary: Sauerbraten refuses to start
Product: Mesa
Version: git
Platform: x86-64 (AMD64)
OS/Version: Linux (All)
Status: NEW
Severity: blocker
Priority: medium
Component: Drivers/Gallium/r300
AssignedTo: dri-devel(a)lists.freedesktop.org
ReportedBy: barisurum(a)gmail.com
While setting the video mode (most likely the OpenGL path rather than SDL), the game
crashes and gives this:
init: sdl
init: net
init: game
init: video: mode
X Error of failed request: BadValue (integer parameter out of range for
operation)
Major opcode of failed request: 129 (XFree86-VidModeExtension)
Minor opcode of failed request: 10 (XF86VidModeSwitchToMode)
Value in failed request: 0x156
Serial number of failed request: 125
Current serial number in output stream: 127
--
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.
https://bugs.freedesktop.org/show_bug.cgi?id=32946
Summary: piglit glx-make-current gives X error BadMatch
Product: Mesa
Version: git
Platform: x86 (IA32)
OS/Version: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/r300
AssignedTo: dri-devel(a)lists.freedesktop.org
ReportedBy: bugzi09.fdo.tormod(a)xoxy.net
This is an M26 (RV410) card using gallium.
$ bin/glx-make-current
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 155 (GLX)
Minor opcode of failed request: 11 (X_GLXSwapBuffers)
Serial number of failed request: 44
Current serial number in output stream: 45
This is with xserver 1.9.0.902 and mesa 7.9 (Ubuntu natty). Same with latest
mesa git and xserver git on 1.9 branch 2010-11-29 (Ubuntu 10.10 + xorg-edgers).
When using latest mesa git but older xserver 1.7.6, the xserver itself
segfaults in libglx.so.
There have been similar, fixed bugs like bug 30234 on intel and swrast, and
maybe bug 20291 on intel. The piglit test was written for bug 30457.
https://bugs.freedesktop.org/show_bug.cgi?id=34418
--- Comment #12 from Wiktor Janas <wixorpeek(a)gmail.com> 2011-02-22 08:10:18 PST ---
Hooray, I've nailed it!
Look at setup_interleaved_attribs in st_draw.c. There's a little snippet that
computes the minimum of the arrays' ->Ptr values, and it's wrong: ->Ptr can very
well be NULL. So when there are two arrays, one with offset 0 (and thus a NULL
->Ptr) and the other with a non-zero offset, the non-zero value is taken as the
minimum, which leads to a negative velements[attr].src_offset being assigned
later. The trick is that this negative value is cast to unsigned, so it ends up
being a very large number. Later (in r300g) the src_offset is added to some
pointer. On 32-bit machines the pointer overflows, and the overall result is as
if a subtraction had been performed, yielding the correct result. On 64-bit
machines the pointer gets messed up instead, resulting in a segmentation fault.
Changing the minimum-computing code to

   /* Find the lowest address. */
   const GLubyte *low_addr = NULL;
   if (vpv->num_inputs) {
      low_addr = arrays[vp->index_to_input[0]]->Ptr;
      for (attr = 1; attr < vpv->num_inputs; attr++) {
         const GLubyte *start = arrays[vp->index_to_input[attr]]->Ptr;
         low_addr = MIN2(low_addr, start);
      }
   }
fixes segfaults with blender.
It is also beneficial to add

   assert(velements[attr].src_offset >= 0 &&
          velements[attr].src_offset < 2000000000);

in st_draw.c:369 (that exposes the bug on 32-bit machines).
The (trivial) test code will be attached.
https://bugs.freedesktop.org/show_bug.cgi?id=30048
Summary: [r300g] ColorCube: Too many instructions
Product: Mesa
Version: git
Platform: Other
URL: http://www.colorcubestudio.com/
OS/Version: All
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/DRI/r300
AssignedTo: dri-devel(a)lists.freedesktop.org
ReportedBy: sa(a)whiz.se
Created an attachment (id=38482)
--> (https://bugs.freedesktop.org/attachment.cgi?id=38482)
RADEON_DEBUG=fp log
The game ColorCube isn't working correctly when GLSL is used:
r300 FP: Compiler Error:
r500_fragprog_emit.c::emit_paired(): emit_alu: Too many instructions
Using a dummy shader instead.
r300 FP: Compiler Error:
r500_fragprog_emit.c::emit_paired(): emit_alu: Too many instructions
Using a dummy shader instead.
r300 FP: Compiler Error:
r500_fragprog_emit.c::emit_paired(): emit_alu: Too many instructions
Using a dummy shader instead.
r300 FP: Compiler Error:
r500_fragprog_emit.c::emit_paired(): emit_alu: Too many instructions
Using a dummy shader instead.
r300 FP: Compiler Error:
r500_fragprog_emit.c::emit_paired(): emit_alu: Too many instructions
Using a dummy shader instead.
This is most likely the same as, or similar to, bug 28860, as both games are
built with the Blender Game Engine.
Unlike 28860, I don't get any "rejected CS" in this game.
System environment:
-- system architecture: 32-bit
-- Linux distribution: Debian unstable
-- GPU: RV570
-- Model: Asus EAX1950Pro 256MB
-- Display connector: DVI
-- xf86-video-ati: e9928fe036e9382fd7bc353f3f05531445f08977
-- xserver: 1.8.99.904 (1.9.0 RC 5)
-- mesa: 99f3c9caa39fbe9dfa7561c919202395720e9472
-- drm: 23287f05cf2443ddf9e028e29beb5bd30979c6cf
-- kernel: 2.6.35
https://bugs.freedesktop.org/show_bug.cgi?id=34545
Summary: [gallium] segfault with vertarrays in mixed user/gpu buffers
Product: Mesa
Version: git
Platform: x86-64 (AMD64)
OS/Version: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/r300
AssignedTo: dri-devel(a)lists.freedesktop.org
ReportedBy: wixorpeek(a)gmail.com
CC: maraeo(a)gmail.com
Created an attachment (id=43618)
--> (https://bugs.freedesktop.org/attachment.cgi?id=43618)
the test case
Hello, while researching bug 34418 I encountered a segmentation fault
triggered when the __first__ vertex array is placed in a user buffer and the
second on the GPU. When the first array is on the GPU and the second in a user
buffer, invalid rendering occurs instead. This is gallium-wide (it happens with
both r300g and swrastg), although I cannot extract a meaningful backtrace from
swrastg. My guess is that is_interleaved_arrays in st_draw.c handles these
cases incorrectly (at least that's the code that inspired me to write the
test). Interestingly, the backtraces from r300g changed between HEAD and
before-2a904fd6a0c (before "set vertex arrays state only when necessary").
This is probably correct. The current one is
#0 radeon_add_reloc (rcs=0x7f5b1e508010, buf=0x0, rd=R300_DOMAIN_GTT, wd=0) at radeon_drm_cs.c:230
#1 radeon_drm_cs_add_reloc (rcs=0x7f5b1e508010, buf=0x0, rd=R300_DOMAIN_GTT, wd=0) at radeon_drm_cs.c:297
#2 0x00007f5b1a79f032 in r300_emit_buffer_validate (r300=0x16993a0, do_validate_vertex_buffers=<value optimized out>, index_buffer=<value optimized out>) at r300_emit.c:1192
#3 0x00007f5b1a7a21f1 in r300_emit_states (r300=0x16993a0, flags=<value optimized out>, index_buffer=0x2, buffer_offset=0, index_bias=0) at r300_render.c:252
#4 0x00007f5b1a7a3d95 in r300_draw_arrays (pipe=0x16993a0, info=<value optimized out>) at r300_render.c:710
#5 r300_draw_vbo (pipe=0x16993a0, info=<value optimized out>) at r300_render.c:775
#6 0x00007f5b1a849de8 in st_draw_vbo (ctx=<value optimized out>, arrays=<value optimized out>, prims=<value optimized out>, nr_prims=1, ib=0x0, index_bounds_valid=<value optimized out>, min_index=0, max_index=31) at state_tracker/st_draw.c:717
#7 0x00007f5b1a8e18fb in vbo_draw_arrays (ctx=0x16bb170, mode=6, start=0, count=<value optimized out>, numInstances=1) at vbo/vbo_exec_array.c:615
#8 0x0000000000400e1f in render () at immedcrash.c:15
while the old one is
#0 0x00007f74d66a823a in u_vbuf_mgr_set_vertex_buffers (mgrb=0x1ea5060, count=1, bufs=0x0) at util/u_vbuf_mgr.c:482
#1 0x00007f74d64e50aa in r300_set_vertex_buffers (pipe=0x1eb6360, count=1, buffers=0x7fff228a33d0) at r300_state.c:1491
#2 0x00007f74d6586fcf in st_draw_vbo (ctx=<value optimized out>, arrays=<value optimized out>, prims=<value optimized out>, nr_prims=<value optimized out>, ib=<value optimized out>, index_bounds_valid=<value optimized out>, min_index=0, max_index=31) at state_tracker/st_draw.c:707
#3 0x00007f74d661ee4b in vbo_draw_arrays (ctx=0x1ee3f10, mode=6, start=0, count=<value optimized out>, numInstances=1) at vbo/vbo_exec_array.c:593
#4 0x0000000000400e1f in render () at immedcrash.c:15
https://bugs.freedesktop.org/show_bug.cgi?id=30002
Summary: [regression r300g] Menu problem in Tiny and Big
Product: Mesa
Version: git
Platform: Other
URL: http://www.tinyandbig.com/download/
OS/Version: All
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/DRI/r300
AssignedTo: dri-devel(a)lists.freedesktop.org
ReportedBy: sa(a)whiz.se
As mentioned in bug 28869 the menu in the game "Tiny and Big" isn't working
correctly: all of the buttons only show up as white rectangles on mouseover,
and the following is printed on the terminal:
r300: texture_create: Got invalid texture dimensions: 0x0x0
I tried some earlier git versions and noticed that this used to work; further
digging turned up this change:
commit 5cdedaaf295acae13ac10feeb3143d83bc53d314
Author: Marek Olšák <maraeo(a)gmail.com>
Date: Mon May 3 19:14:31 2010 +0200
r300g: refuse to create a texture with size 0
Unsurprisingly, commenting out this change gets the menu working again. I
haven't had this problem with any of the other drivers (like llvmpipe).
Hi,
On Sun, February 20, 2011 14:21, Daniel Vetter wrote:
> Well, don't start jumping around, yet. These patches are just to rule out
> some theories. Now: Is it fixed with just the 2nd patch alone or do you need
> both patches? This is very important, so please test extensively whether
> there are really no corruptions with just the 2nd patch.
I managed to create some corruption with an xterm above xpdf. It looks different
than the original corruption, so I think it's safe to say it's a different bug:
http://img593.imageshack.us/i/ss1298340823.png
http://img203.imageshack.us/i/ss1298340776.png
This was without xcompmgr running; I don't think I would have seen it otherwise.
Actually, it turns out it's really easy to trigger as well, so I can more easily
test patches now. Unfortunately I wasn't smart enough to store the bisecting kernels,
gah!
I tried with both your patches, as well as the HIC poking patch, but I still get it.
I'll try to pinpoint more exactly when this started to happen.
Greetings,
Indan