https://bugs.freedesktop.org/show_bug.cgi?id=64201
Priority: medium
Bug ID: 64201
Assignee: dri-devel(a)lists.freedesktop.org
Summary: OpenCL usage results in a segmentation fault on r600g with
HD6850.
Severity: normal
Classification: Unclassified
OS: other
Reporter: spamjunkeater(a)gmail.com
Hardware: x86-64 (AMD64)
Status: NEW
Version: git
Component: Drivers/Gallium/r600
Product: Mesa
I am using openSUSE 12.3 x86_64 with a 3.9 kernel and an ATI HD 6850 GPU.
I have just started experimenting with OpenCL, but I cannot get it working with
the open-source tools.
I compiled LLVM and Clang with: ./configure --libdir=/usr/lib64 --prefix=/usr
--enable-{optimized,pic,shared} --disable-{assertions,docs,timestamps}
--enable-targets="x86_64" --enable-experimental-targets="R600"
(for the LLVM build I followed http://llvm.org/docs/GettingStarted.html)
Afterwards I compiled Mesa trunk with ./configure --with-gallium-drivers=r600
--prefix=/usr --libdir=/usr/lib64 --enable-{vdpau,texture-float}
--with-dri-drivers="" --enable-{gallium-llvm,r600-llvm-compiler,opencl}
--enable-glx-tls --enable-shared-{glapi,dricore}
but every utility I tried gives me a segmentation fault :-/
[ 3453.462803] python[7401]: segfault at 60 ip 00007f55c16292c0 sp
00007fff191b9138 error 4 in pipe_r600.so[7f55c14c0000+299000]
[ 3465.707476] pyrit[7529]: segfault at 0 ip 00007f53614fb7cb sp
00007f535f2bc5c0 error 6 in libLLVM-3.3svn.so[7f5360eb1000+1004000]
[ 3674.192257] cgminer[8773]: segfault at 20 ip 00007f2b65088710 sp
00007fffc1f24908 error 4 in libLLVM-3.3svn.so[7f2b648af000+1004000]
The most detailed report comes from pyrit, run with the benchmark argument:
> pyrit benchmark
Pyrit 0.4.1-dev (svn r308) (C) 2008-2011 Lukas Lueg http://pyrit.googlecode.com
This code is distributed under the GNU General Public License v3+
Calibrating... 0x7faf109a72e0: i32 = GlobalAddress<i32 (i32, i32, i32)*
@llvm.AMDGPU.bit.extract.u32.> 0
Undefined function
UNREACHABLE executed at
/run/media/death/OldRoot/temp/llvm/lib/Target/R600/AMDGPUISelLowering.h:56!
Stack dump:
0. Running pass 'Function Pass Manager' on module 'radeon'.
1. Running pass 'AMDGPU DAG->DAG Pattern Instruction Selection' on function
'@sha1_process'
Aborted
Regards,
Erdem
--
You are receiving this mail because:
You are the assignee for the bug.
https://bugs.freedesktop.org/show_bug.cgi?id=64776
Priority: medium
Bug ID: 64776
Assignee: dri-devel(a)lists.freedesktop.org
Summary: [9.1.2] "GPU fault detected" with "eclipse juno" crashes
system
Severity: normal
Classification: Unclassified
OS: All
Reporter: mombelli.mauro(a)gmail.com
Hardware: Other
Status: NEW
Version: 9.1
Component: Drivers/Gallium/radeonsi
Product: Mesa
Created attachment 79557
--> https://bugs.freedesktop.org/attachment.cgi?id=79557&action=edit
dmesg with a nice error log
hi,
after updating mesa, ati-dri and mesa-libgl to 9.1.2 everything works, but when
launching "eclipse juno" (even a fresh install) the monitor turns off. Sometimes
the system stops responding; sometimes the monitor keeps turning on and off, the
GUI is frozen, but I can still use a virtual console. There is no problem with
steam, bzflag, flash, older versions of eclipse or other java programs. Extensive
GPU tests have also been run on a windows system with no faults.
The work-around is falling back to 9.1.1
my board:
$ lspci | grep -i VGA
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI]
Pitcairn PRO [Radeon HD 7850]
Attached are dmesg and the Xorg log from one of the (rare) times when the
monitor was going on and off. Xorg seems to stop just before the system goes
into this "loop state".
In any case, dmesg seems to catch the problem.
https://bugs.freedesktop.org/show_bug.cgi?id=84663
Bug ID: 84663
Summary: high cpu usage, poor performance in Borderlands 2 with
radeonsi, PRIME
Product: Mesa
Version: git
Hardware: Other
OS: All
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/radeonsi
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: haagch(a)frickel.club
Created attachment 107328
--> https://bugs.freedesktop.org/attachment.cgi?id=107328&action=edit
sysprof recording from borderlands 2 only
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor
Graphics Controller (rev 09)
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI]
Wimbledon XT [Radeon HD 7970M] (rev ff)
xorg stable, mesa git, linux 3.17-rc7.
I think I have seen something similar in some games, but most recently in
Borderlands 2.
Here is a random screenshot with the HUD fps display from someone with an HD
7870, which shows the game mostly running at 60 fps:
https://i.imgur.com/qH0sBkl.jpg
And here is a short clip of how it runs for me, which shows it running at 20-30
fps: https://www.youtube.com/watch?v=ZeZreRntt3k
Radeontop says the GPU is only ~30% utilized.
While running Borderlands 2, CPU usage is constantly at 100+% on my i7-3632QM.
I was undecided whether to report this here, but the difference is quite large,
so I thought I'd give it a try: I don't think the game itself is supposed to use
this much CPU time, so it may have something to do with the driver.
Theories:
< glennk> guessing from that output that the game engine uses a lot of
occlusion queries and is stalling on them
I haven't really found anything to test that yet.
< agd5f> haagch, hybrid laptops have to do a lot of extra copying to get the
frame from the rendering GPU to the display GPU
I hope the overhead is not *that* large, because losing 70+% of GPU time would
make it kind of useless for the affected games.
Fortunately many (most?) games run much better; for example, Unigine Valley
shows good GPU usage: https://www.youtube.com/watch?v=sLWvYJlfvWM
which makes me believe there is a specific bottleneck.
Attached is a sysprof profile of Borderlands 2, but I don't know which parts of
it are normal (like 25% of total CPU time in glDrawRangeElements?).
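One way to probe glennk's occlusion-query theory without modifying the game might be apitrace, assuming it is installed (a sketch only; the binary name and trace file below are placeholders, not values from this report):

```shell
# Record a GL trace of a short play session (binary name is a placeholder).
apitrace trace -o bl2.trace ./Borderlands2

# Count query-result readbacks in the trace: a large number of
# glGetQueryObject* calls relative to the number of frames would support
# the theory that the engine stalls waiting on occlusion query results.
apitrace dump bl2.trace | grep -c 'glGetQueryObject'
```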
https://bugs.freedesktop.org/show_bug.cgi?id=76490
Priority: medium
Bug ID: 76490
Assignee: dri-devel(a)lists.freedesktop.org
Summary: No output after radeon module is loaded (R9 270X)
Severity: normal
Classification: Unclassified
OS: All
Reporter: mail(a)geleia.net
Hardware: Other
Status: NEW
Version: unspecified
Component: DRM/Radeon
Product: DRI
Created attachment 96217
--> https://bugs.freedesktop.org/attachment.cgi?id=96217&action=edit
kernel log
The screen goes black after the radeon module is loaded. The only way I can get
any output is to blacklist the radeon module, load it via modprobe and then
change the resolution with xrandr from another computer via ssh.
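For completeness, that remote recovery step would look something like the following (the hostname, user, and output name are placeholders, not values from this report):

```shell
# From another machine, once the radeon module has been modprobed on the
# affected box and X is running there. If this is run as a different user
# than the X session, XAUTHORITY may also need to point at the session's
# auth file.
ssh user@affected-box 'DISPLAY=:0 xrandr --output DVI-0 --auto'
```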
I seem to get some sort of lockup if I don't blacklist the module, because then
I get a black screen at startup and I cannot even ssh into the machine. I tried
to enable netconsole from the kernel command line but I can't get it to work
(do I have to compile it statically?).
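On the netconsole question: it can be loaded as a module once the network is up, but to capture messages from a crash that happens while radeon loads at boot, it does have to be built in (CONFIG_NETCONSOLE=y) and configured on the kernel command line. A sketch with placeholder addresses:

```shell
# Module form, usable after boot: local port@IP/interface, then remote
# port@IP/MAC of the machine that will receive the log.
modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.2/aa:bb:cc:dd:ee:ff

# Built-in form: append the same netconsole=... string to the kernel
# command line in the bootloader to catch early crashes.

# On the receiving machine (192.168.1.2), listen for the UDP stream:
nc -u -l 6666
```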
I tried this with 3.14-rc6. For reference, I'm including the log output I get
after I load the radeon module.
This card is an MSI R9 270X Gaming 4G.
https://bugs.freedesktop.org/show_bug.cgi?id=37724
Summary: occlusion queries are messed up in ut2004 (regression,
bisected)
Product: Mesa
Version: unspecified
Platform: Other
OS/Version: All
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/r300
AssignedTo: dri-devel(a)lists.freedesktop.org
ReportedBy: aaalmosss(a)gmail.com
The majority of the 3d objects disappear periodically since this commit:
f76787b3eae3f0b8af839fabfb24b57715a017f6 is the first bad commit
commit f76787b3eae3f0b8af839fabfb24b57715a017f6
Author: Marek Olšák <maraeo(a)gmail.com>
Date: Sun May 29 04:36:36 2011 +0200
r300g: fix occlusion queries when depth test is disabled or zbuffer is
missing
From now on, depth test is always enabled in hardware.
If depth test is disabled in Gallium, the hardware Z function is set to
ALWAYS.
If there is no zbuffer set, the colorbuffer0 memory is set as a zbuffer
to silence the CS checker.
This fixes piglit:
- occlusion-query-discard
- NV_conditional_render/bitmap
- NV_conditional_render/drawpixels
- NV_conditional_render/vertex_array
:040000 040000 baeff41ffed8952cbb1666d04941c6d5d01ca4fc
cdb64f4b684804b818df4b65c04109eaad568e11 M src
https://bugzilla.kernel.org/show_bug.cgi?id=105581
Bug ID: 105581
Summary: amdgpu: r9 380 Dual Monitor Setup: rotation not
working
Product: Drivers
Version: 2.5
Kernel Version: 4.2.2, 4.3-rc3, 4.3-rc4
Hardware: Intel
OS: Linux
Tree: Mainline
Status: NEW
Severity: normal
Priority: P1
Component: Video(DRI - non Intel)
Assignee: drivers_video-dri(a)kernel-bugs.osdl.org
Reporter: f.otti(a)gmx.at
Regression: No
Created attachment 189561
--> https://bugzilla.kernel.org/attachment.cgi?id=189561&action=edit
Xorg log of 4.3.rc3
Lengthy explanation posted here:
https://forums.gentoo.org/viewtopic-p-7824374.html
Dual Monitor setup doesn't seem to work at all when both are connected via DVI.
When one is connected to HDMI and the other to the first DVI, dual monitor only
works when one of them is connected after boot.
Both only work when one is connected to HDMI and the other to the second DVI.
With neither configuration does rotating any screen work; it results in a crash
of the X server (see attachment).
Everything works as intended under Windows.
What I mean by "dual monitor is not working" is that both screens are
recognized as connected, but one of them shows only a black picture; I can
still move applications to it (though I cannot see them).
Steps to reproduce: Boot computer and try to rotate one of the screens with the
KDE system settings or xrandr.
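For reference, the xrandr form of that reproduction step would be something like the following (the output name is a placeholder; `xrandr -q` lists the real ones on this machine):

```shell
# List connected outputs and their current modes.
xrandr -q

# Rotate one screen; on the affected setup this crashes the X server.
xrandr --output DVI-0 --rotate left

# Restore the normal orientation (after restarting X, if needed).
xrandr --output DVI-0 --rotate normal
```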
https://bugs.freedesktop.org/show_bug.cgi?id=89785
Bug ID: 89785
Summary: GPU Fault 147 and Ring Stalls and Tests Fail in
Pillars of Eternity
Product: Mesa
Version: 10.5
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/radeonsi
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: matt.scheirer(a)gmail.com
QA Contact: dri-devel(a)lists.freedesktop.org
Created attachment 114656
--> https://bugs.freedesktop.org/attachment.cgi?id=114656&action=edit
Kernel log of GPU Hang
On my R9 290 GPU, the newly released Pillars of Eternity causes a GPU fault and
a kernel panic on 10.5.1. It works fine with an Intel part. A REISUB is required
to reset the system every time.
Attached is the kernel log of the crashing behavior - it is reproducible every
time I run the game as soon as it hits a loading screen. Kernel is 3.19.2.
https://bugs.freedesktop.org/show_bug.cgi?id=93247
Bug ID: 93247
Summary: Bound by Flame game crashes GPU
Product: Mesa
Version: 11.0
Hardware: Other
OS: All
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/radeonsi
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: l.gambetta(a)alice.it
QA Contact: dri-devel(a)lists.freedesktop.org
Created attachment 120332
--> https://bugs.freedesktop.org/attachment.cgi?id=120332&action=edit
Messages printed into the journalctl log about this crash
I'm trying to run this newly released game (
http://store.steampowered.com/app/243930/ ) on this PC:
CPU AMD A10-7850K, 8 GB RAM
ATI Radeon 7850, 1 GB
Mesa 11.0.6
xf86-video-ati 7.5.0
linux kernel 4.2.6
Distro is Manjaro Linux.
The game is 32 bit and I'm using native 32 bit libraries instead of the ones
supplied by the Steam SDK.
The game crashes on the loading screen after the "start game" and intro
sequence.
I had to reset my PC.