https://bugs.freedesktop.org/show_bug.cgi?id=88669
Bug ID: 88669
Summary: clover on radeonsi fails in
radeon_shader_binary_config_start
Product: Mesa
Version: git
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/radeonsi
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: michael(a)kuemmling.de
QA Contact: dri-devel(a)lists.freedesktop.org
Created attachment 112609
--> https://bugs.freedesktop.org/attachment.cgi?id=112609&action=edit
backtrace
ImageMagick 6.9.0 with OpenCL crashes when blurring an image.
I'm using current mesa from git (after commit
3c3e60e050ea0850fcfeb5c4c2aa4f954d54d665) on Radeon HD 7750 (Southern Islands /
Cape Verde).
--
You are receiving this mail because:
You are the assignee for the bug.
https://bugs.freedesktop.org/show_bug.cgi?id=64201
Priority: medium
Bug ID: 64201
Assignee: dri-devel(a)lists.freedesktop.org
Summary: OpenCL usage result segmentation fault on r600g with
HD6850.
Severity: normal
Classification: Unclassified
OS: other
Reporter: spamjunkeater(a)gmail.com
Hardware: x86-64 (AMD64)
Status: NEW
Version: git
Component: Drivers/Gallium/r600
Product: Mesa
I am using openSUSE 12.3 x86_64 with a 3.9 kernel and an ATI HD 6850 GPU.
I am just experimenting with OpenCL, but I cannot get it working with open-source tools.
I compiled LLVM and Clang with: ./configure --libdir=/usr/lib64 --prefix=/usr
--enable-{optimized,pic,shared} --disable-{assertions,docs,timestamps}
--enable-targets="x86_64" --enable-experimental-targets="R600"
(for the LLVM build, I followed http://llvm.org/docs/GettingStarted.html)
Afterwards, I compiled Mesa trunk with ./configure --with-gallium-drivers=r600
--prefix=/usr --libdir=/usr/lib64 --enable-{vdpau,texture-float}
--with-dri-drivers="" --enable-{gallium-llvm,r600-llvm-compiler,opencl}
--enable-glx-tls --enable-shared-{glapi,dricore}
but every utility I tried gives a segmentation fault :-/
[ 3453.462803] python[7401]: segfault at 60 ip 00007f55c16292c0 sp
00007fff191b9138 error 4 in pipe_r600.so[7f55c14c0000+299000]
[ 3465.707476] pyrit[7529]: segfault at 0 ip 00007f53614fb7cb sp
00007f535f2bc5c0 error 6 in libLLVM-3.3svn.so[7f5360eb1000+1004000]
[ 3674.192257] cgminer[8773]: segfault at 20 ip 00007f2b65088710 sp
00007fffc1f24908 error 4 in libLLVM-3.3svn.so[7f2b648af000+1004000]
The most detailed report comes from pyrit, run with the benchmark argument:
> pyrit benchmark
Pyrit 0.4.1-dev (svn r308) (C) 2008-2011 Lukas Lueg http://pyrit.googlecode.com
This code is distributed under the GNU General Public License v3+
Calibrating... 0x7faf109a72e0: i32 = GlobalAddress<i32 (i32, i32, i32)*
@llvm.AMDGPU.bit.extract.u32.> 0
Undefined function
UNREACHABLE executed at
/run/media/death/OldRoot/temp/llvm/lib/Target/R600/AMDGPUISelLowering.h:56!
Stack dump:
0. Running pass 'Function Pass Manager' on module 'radeon'.
1. Running pass 'AMDGPU DAG->DAG Pattern Instruction Selection' on function
'@sha1_process'
Aborted
Regards,
Erdem
https://bugs.freedesktop.org/show_bug.cgi?id=64776
Priority: medium
Bug ID: 64776
Assignee: dri-devel(a)lists.freedesktop.org
Summary: [9.1.2]"GPU fault detected" whit "eclipse juno" crash
system
Severity: normal
Classification: Unclassified
OS: All
Reporter: mombelli.mauro(a)gmail.com
Hardware: Other
Status: NEW
Version: 9.1
Component: Drivers/Gallium/radeonsi
Product: Mesa
Created attachment 79557
--> https://bugs.freedesktop.org/attachment.cgi?id=79557&action=edit
dmesg with a nice error log
hi,
after updating to mesa, ati-dri and mesa-libgl 9.1.2 everything works, but when
launching "eclipse juno" (even a fresh install) the monitor turns off. Sometimes
the system doesn't respond, sometimes the monitor keeps turning on and off; the
GUI is frozen but I can still use a virtual console. There is no problem with
Steam, bzflag, Flash, older versions of Eclipse, or other Java programs.
Extensive GPU tests have also been done on Windows with no faults.
The work-around is falling back to 9.1.1.
my board:
$ lspci | grep -i VGA
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI]
Pitcairn PRO [Radeon HD 7850]
Attached you will find the dmesg and Xorg logs from one of the (rare) times
when the monitor was going on and off. Xorg seems to stop just before the
system goes into this "loop state"; in any case, dmesg seems to catch the
problem.
https://bugs.freedesktop.org/show_bug.cgi?id=84663
Bug ID: 84663
Summary: high cpu usage, poor performance in Borderlands 2 with
radeonsi, PRIME
Product: Mesa
Version: git
Hardware: Other
OS: All
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/radeonsi
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: haagch(a)frickel.club
Created attachment 107328
--> https://bugs.freedesktop.org/attachment.cgi?id=107328&action=edit
sysprof recording from borderlands 2 only
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor
Graphics Controller (rev 09)
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI]
Wimbledon XT [Radeon HD 7970M] (rev ff)
xorg stable, mesa git, linux 3.17-rc7.
I think I have had something similar in some games, but most recently in
Borderlands 2.
Here is a random screenshot with the HUD FPS display from someone with an HD
7870, showing that the game mostly runs at 60 fps:
https://i.imgur.com/qH0sBkl.jpg
And here is a short clip of how it runs for me, at 20-30 fps:
https://www.youtube.com/watch?v=ZeZreRntt3k
radeontop says that the GPU is only ~30% utilized.
While running Borderlands 2, CPU usage is constantly at 100+% on my i7-3632QM.
I was undecided whether to report this here, but the difference is quite
large, so I thought I'd give it a try: the game itself is presumably not
supposed to use this much CPU time, so maybe it has something to do with the
driver.
Theories:
< glennk> guessing from that output that the game engine uses a lot of
occlusion queries and is stalling on them
I haven't really found anything to test that yet.
< agd5f> haagch, hybrid laptops have to do a lot of extra copying to get the
frame from the rendering GPU to the display GPU
I hope the overhead is not *that* large, because losing 70+% of GPU time
would make it kind of useless for the affected games.
Fortunately many (most?) games run much better; for example, Unigine Valley
shows good GPU usage (https://www.youtube.com/watch?v=sLWvYJlfvWM), which
makes me believe there is a specific bottleneck.
Attached is a sysprof profile of Borderlands 2, but I don't know how much of
it is normal (like 25% of total CPU time in glDrawRangeElements?).
https://bugs.freedesktop.org/show_bug.cgi?id=76490
Priority: medium
Bug ID: 76490
Assignee: dri-devel(a)lists.freedesktop.org
Summary: No output after radeon module is loaded (R9 270X)
Severity: normal
Classification: Unclassified
OS: All
Reporter: mail(a)geleia.net
Hardware: Other
Status: NEW
Version: unspecified
Component: DRM/Radeon
Product: DRI
Created attachment 96217
--> https://bugs.freedesktop.org/attachment.cgi?id=96217&action=edit
kernel log
The screen goes black after the radeon module is loaded. The only way I can get
any output is to blacklist the radeon module, load it via modprobe and then
change the resolution with xrandr from another computer via ssh.
I seem to get some sort of lockup if I don't blacklist the module, because then
I get a black screen at startup and I cannot even ssh into the machine. I tried
to enable netconsole from the kernel command line but I can't get it to work
(do I have to compile it statically?).
I tried this with 3.14-rc6. For reference, I'm including the log output I get
after I load the radeon module.
This card is an MSI R9 270X Gaming 4G.
https://bugs.freedesktop.org/show_bug.cgi?id=37724
Summary: occlusion queries are messed up in ut2004 (regression,
bisected)
Product: Mesa
Version: unspecified
Platform: Other
OS/Version: All
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/r300
AssignedTo: dri-devel(a)lists.freedesktop.org
ReportedBy: aaalmosss(a)gmail.com
The majority of the 3D objects have been disappearing periodically since this commit:
f76787b3eae3f0b8af839fabfb24b57715a017f6 is the first bad commit
commit f76787b3eae3f0b8af839fabfb24b57715a017f6
Author: Marek Olšák <maraeo(a)gmail.com>
Date: Sun May 29 04:36:36 2011 +0200
r300g: fix occlusion queries when depth test is disabled or zbuffer is
missing
From now on, depth test is always enabled in hardware.
If depth test is disabled in Gallium, the hardware Z function is set to
ALWAYS.
If there is no zbuffer set, the colorbuffer0 memory is set as a zbuffer
to silence the CS checker.
This fixes piglit:
- occlusion-query-discard
- NV_conditional_render/bitmap
- NV_conditional_render/drawpixels
- NV_conditional_render/vertex_array
:040000 040000 baeff41ffed8952cbb1666d04941c6d5d01ca4fc
cdb64f4b684804b818df4b65c04109eaad568e11 M src
--
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.
https://bugs.freedesktop.org/show_bug.cgi?id=89785
Bug ID: 89785
Summary: GPU Fault 147 and Ring Stalls and Tests Fail in
Pillars of Eternity
Product: Mesa
Version: 10.5
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/radeonsi
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: matt.scheirer(a)gmail.com
QA Contact: dri-devel(a)lists.freedesktop.org
Created attachment 114656
--> https://bugs.freedesktop.org/attachment.cgi?id=114656&action=edit
Kernel log of GPU Hang
On my R9 290 GPU, the newly released Pillars of Eternity causes a GPU fault
and kernel panic on 10.5.1. It works fine with an Intel part, and requires a
REISUB to reset the system every time.
Attached is the kernel log of the crashing behavior; it is reproducible every
time I run the game, as soon as it hits a loading screen. The kernel is 3.19.2.
https://bugs.freedesktop.org/show_bug.cgi?id=65968
Priority: medium
Bug ID: 65968
Assignee: dri-devel(a)lists.freedesktop.org
Summary: Massive memory corruption in Planetary Annihilation
Alpha
Severity: normal
Classification: Unclassified
OS: Linux (All)
Reporter: andreas.ringlstetter(a)gmail.com
Hardware: x86-64 (AMD64)
Status: NEW
Version: git
Component: Drivers/DRI/r300
Product: Mesa
Created attachment 81105
--> https://bugs.freedesktop.org/attachment.cgi?id=81105&action=edit
Example of corruption in PA. The skybox texture has been completely
overwritten, partly with textures from other programs; corruption in other
textures is already starting.
Using the R300 driver (git version from 2013-06-19) on a Mobility Radeon X1400
(128MB dedicated ???), I get massive memory corruption, which can be seen in
the attached screenshot, when running the Planetary Annihilation Alpha.
The game makes use of virtual texturing, that is, a megatexture that cannot
possibly fit in RAM in one piece.
However, it appears that textures which are NOT part of the megatexture have
been mapped into the same address space. I could see other textures, and even
bitmaps from other applications.
In the screenshot there are large grey stripes, for example, yet there is no
such texture in the game; the color does match the color of the window border,
though. Performing further tests, I even managed to get parts of album covers
from Banshee into PA.
This issue is not limited to Planetary Annihilation, though, and the
corruption also works the other way around, with applications overwriting the
bitmaps of other applications.
The effects of the corruption are clearly visible in PA due to the large
textures. They are not deterministic, but appear very reliably, most likely
due to the high memory usage.
Using other applications which frequently allocate new textures (like Banshee
with album covers) speeds up the corruption and makes it visible even in other
applications like Firefox, Cinnamon etc., although not reliably.
Attached are:
Screenshot of corruption
Xorg-log
glxinfo output
https://bugs.freedesktop.org/show_bug.cgi?id=43698
Bug #: 43698
Summary: On PPC, OpenGL programs use incorrect texture colors.
Classification: Unclassified
Product: Mesa
Version: git
Platform: PowerPC
OS/Version: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/r300
AssignedTo: dri-devel(a)lists.freedesktop.org
ReportedBy: ghostlydeath(a)gmail.com
Created attachment 54298
--> https://bugs.freedesktop.org/attachment.cgi?id=54298
PrBoom
With git b1a8b7b0196c73bcfe488cbfc9e9fcd1d7ce7d9b.
Texture colors are incorrect; they appear to be ABGR instead of RGBA, so blue
becomes green, red becomes alpha, etc.
The only thing that is not affected is glxgears, but it does not use any
textures.