https://bugs.freedesktop.org/show_bug.cgi?id=89971
Bug ID: 89971
Summary: HDMI out *not* working with radeon (mobile 8550g)
Product: DRI
Version: DRI git
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: DRM/Radeon
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: adrianovidiugabor(a)gmail.com
Created attachment 114999
--> https://bugs.freedesktop.org/attachment.cgi?id=114999&action=edit
screen's content
Hi all.
When I connect my laptop's HDMI output to my TV, I get a black screen with
garbage on top (see attached picture). This is with mirror mode on. If I make
the second screen primary, the PC locks up and I have to restart it; it won't
recover even if I disconnect the cable.
If I switch to one of the virtual consoles, everything works correctly, so the
problem seems to be related to the display server.
The TV is recognized correctly by xrandr.
Specs: Asus laptop with an AMD A8-5550M, Radeon 8550G (Aruba) / 8670M (Hainan),
Arch Linux (x64), xorg-server 1.17, kernel 3.19.3, mesa-git, xf86-ati-git,
llvm-svn.
This doesn't seem to be a regression (experienced it with older versions of the
kernel and userspace components) and it's reproducible on another machine.
HDMI out works on Windows and ChromeOS.
Didn't try Catalyst on Linux.
--
You are receiving this mail because:
You are the assignee for the bug.
https://bugs.freedesktop.org/show_bug.cgi?id=80531
Bug ID: 80531
Summary: 3.16-rc2 hdmi output resolution out of range Cape Verde PRO [Radeon HD 7750]
Product: DRI
Version: XOrg CVS
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Classification: Unclassified
Component: DRM/Radeon
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: fredrik(a)obra.se
Created attachment 101759
--> https://bugs.freedesktop.org/attachment.cgi?id=101759&action=edit
dmesg with drm.debug=1
With 3.16-rc2 my triple-screen setup breaks.
DisplayPort-0 connected 1920x1200 <- ok
HDMI-0 connected 1920x1080 <- (tv) resolution out of range
DVI-0 connected 1680x1050 <- ok
All screens work during bootup before X starts.
ddx driver 7.3.0.
https://bugs.freedesktop.org/show_bug.cgi?id=75992
Bug ID: 75992
Summary: Display freezes & corruption with an r7 260x on 3.14-rc6
Product: DRI
Version: unspecified
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: major
Priority: medium
Classification: Unclassified
Component: DRM/Radeon
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: edt(a)aei.ca
Created attachment 95520
--> https://bugs.freedesktop.org/attachment.cgi?id=95520&action=edit
log of boot
I recently added an R7 260X to my system. While the card works with 3.13, it's
supposed to work much better with 3.14-rc. This is not the case: my system is
unstable without radeon.dpm=0, which was the default in 3.13.
Linux 3.14-rc6 (with an up-to-date Arch, stable X, and mesa-git 10.2); Mesa
10.1 and 10.0 also show very similar problems.
When X started I noticed some corruption: sets of two rectangles, each about
2-3 mm high and 25 mm or so wide, with the second about a centimeter below the
first. This often occurs in Chromium, especially when scrolling. Running the
unigine-sanctuary or unigine-tropics demo/benchmark programs also produces the
above problems and eventually stalls.
https://bugs.freedesktop.org/show_bug.cgi?id=97084
Bug ID: 97084
Summary: BUG: scheduling while atomic
Product: DRI
Version: unspecified
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: DRM/Radeon
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: amarildosjr(a)riseup.net
Created attachment 125330
--> https://bugs.freedesktop.org/attachment.cgi?id=125330&action=edit
dmesg
Overview: After simulating with X-Plane for a few minutes (can take hours
sometimes), the whole OS freezes upon exiting the simulator.
Steps to Reproduce:
1) Simulate with X-Plane (currently 10.45) with the "AS350 B3+" helicopter. It
could take a few minutes, or it could take several hours (which is normal for a
flight sim).
2) Exit the simulator.
Actual Results: The entire OS froze, requiring a hard reboot.
Expected Results: The simulator should have just closed.
Build Date & Hardware: Don't remember, but started happening almost a year ago.
Additional Builds and Platforms: Any Linux distro with Kernel 4.1 and onwards
(not being precise on Kernel versioning). Arch, Ubuntu, Mint, Debian
Testing/Sid, OpenSUSE, etc.
Additional Info: Seems related to
https://bugzilla.kernel.org/show_bug.cgi?id=110121
https://bugs.freedesktop.org/show_bug.cgi?id=96964
Bug ID: 96964
Summary: R290X stuck at 100% GPU load / full core clock on
non-x86 machines
Product: DRI
Version: XOrg git
Hardware: Other
OS: All
Status: NEW
Severity: normal
Priority: medium
Component: DRM/Radeon
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: kb9vqf(a)pearsoncomputing.net
Our twin Radeon 290X cards are stuck at 100% GPU load (according to radeontop
and Gallium) and full core clock (according to radeon_pm_info) on non-x86
machines such as our POWER8 compute server. The identical card does not show
this behaviour on a test x86 machine.
Forcibly crashing the GPU (causing a soft reset) fixes the issue. Relevant
dmesg output starts at line 4 in this pastebin:
https://bugzilla.kernel.org/show_bug.cgi?id=70651 It is unknown if simply
triggering a soft reset without the GPU crash would also resolve the issue.
I suspect this is related to the atombios x86-specific oprom code only
executing on x86 machines, and related setup therefore not being finalized by
the radeon driver itself on non-x86 machines. However, this is just an
educated guess.
radeontop output of stuck card:
gpu 100.00%, ee 0.00%, vgt 0.00%, ta 0.00%, sx 0.00%, sh 0.00%, spi 0.00%, sc
0.00%, pa 0.00%, db 0.00%, cb 0.00%
radeontop output of "fixed" card after GPU crash / reset, running 3D app:
gpu 4.17%, ee 0.00%, vgt 0.00%, ta 3.33%, sx 3.33%, sh 0.00%, spi 3.33%, sc
3.33%, pa 0.00%, db 3.33%, cb 3.33%, vram 11.72% 479.87mb
Despite the "100% GPU load" indication, there is no sign of actual load being
placed on the GPU. 3D-intensive applications function 100% correctly with no
apparent performance degradation, so it seems the reading is (a) spurious and
(b) causing the core clock to throttle up needlessly.
https://bugs.freedesktop.org/show_bug.cgi?id=96789
Bug ID: 96789
Summary: Black screen with R9 290 and Apple Cinema Display 23"
Product: DRI
Version: unspecified
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: DRM/Radeon
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: benh(a)kernel.crashing.org
With radeon (any recent distro version, including upstream 4.7-rc5), my Apple
Cinema Display 23" gives a black screen when radeon loads.
The monitor is a fairly old DVI LCD, 1920x1200@60. It has fixed timing, i.e. no
scaler. It used to work with earlier versions of radeon, but I haven't used it
on a Linux machine for a couple of years at least.
It works using agd5f's current amdgpu, but not upstream amdgpu, which doesn't
support Hawaii. I isolated the difference that makes it work to the PPLL
setting. This one-liner fixes it:
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -953,7 +953,7 @@ static void avivo_get_fb_ref_div(unsigned nom, unsigned den, unsigned post_div,
 				 unsigned *fb_div, unsigned *ref_div)
 {
 	/* limit reference * post divider to a maximum */
-	ref_div_max = max(min(100 / post_div, ref_div_max), 1u);
+	ref_div_max = max(min(128 / post_div, ref_div_max), 1u);
 
 	/* get matching reference and feedback divider */
 	*ref_div = min(max(DIV_ROUND_CLOSEST(den, post_div), 1u), ref_div_max);
Here's a log of the working vs. non-working versions of the calculations in
radeon_compute_pll_avivo(). Note that when I did these, I also changed
radeon's max_feedback_div to match amdgpu's value of 0xfff instead of 0x7ff in
current radeon, though that didn't impact the results.
[ 3.471131] fb_div_min/max=4/4095 pll_flags=400
[ 3.471132] by 10 ! fb_div_min/max=40/40950
[ 3.471133] ref_div_min=2 (from 0/2)
[ 3.471133] ref_div_max=1023 (from 0/1023)
[ 3.471134] vco_min/max=600000/1200000
[ 3.471134] post_div_min/max=4/7
[ 3.471135] initial nom=153970, den=2700
[ 3.471136] reduced nom=15397, den=270
[ 3.471136] - trying post_div 4, ref_div_max=32
[ 3.471137] tentative ref_div=32m, fb_div=7299
[ 3.471137] adjusted ref_div=32m, fb_div=7299
[ 3.471138] diff=7, diff_best=-1
[ 3.471138] - trying post_div 5, ref_div_max=25
[ 3.471139] tentative ref_div=25m, fb_div=7128
[ 3.471139] adjusted ref_div=25m, fb_div=7128
[ 3.471139] diff=6, diff_best=7
[ 3.471140] - trying post_div 6, ref_div_max=21
[ 3.471140] tentative ref_div=21m, fb_div=7185
[ 3.471141] adjusted ref_div=21m, fb_div=7185
[ 3.471141] diff=6, diff_best=6
[ 3.471141] - trying post_div 7, ref_div_max=18
[ 3.471142] tentative ref_div=18m, fb_div=7185
[ 3.471142] adjusted ref_div=18m, fb_div=7185
[ 3.471150] diff=6, diff_best=6
[ 3.471150] post_div_best=7
[ 3.471151] - trying post_div 7, ref_div_max=18
[ 3.471151] tentative ref_div=18m, fb_div=7185
[ 3.471152] adjusted ref_div=18m, fb_div=7185
[ 3.471153] [drm:amdgpu_pll_compute] 153970 - 153960, pll dividers - fb:
239.5 ref: 6, post 7
Now this is with radeon (note: I had bumped the max fb div to the same value as
amdgpu when taking this trace, but that had no effect):
[ 4.718126] fb_div_min/max=4/4095 pll_flags=410
[ 4.718126] by 10 ! fb_div_min/max=40/40950
[ 4.718127] ref_div_min=2 (from 0/2)
[ 4.718128] ref_div_max=1023 (from 0/1023)
[ 4.718128] vco_min/max=600000/1200000
[ 4.718129] post_div_min/max=4/7
[ 4.718129] initial nom=153970, den=2700
[ 4.718130] reduced nom=15397, den=270
[ 4.718130] - trying post_div 4, ref_div_max=25
[ 4.718131] tentative ref_div=25m, fb_div=5703
[ 4.718131] adjusted ref_div=25m, fb_div=5703
[ 4.718132] diff=11, diff_best=-1
[ 4.718133] - trying post_div 5, ref_div_max=20
[ 4.718133] tentative ref_div=20m, fb_div=5703
[ 4.718133] adjusted ref_div=20m, fb_div=5703
[ 4.718134] diff=11, diff_best=11
[ 4.718134] - trying post_div 6, ref_div_max=16
[ 4.718135] tentative ref_div=16m, fb_div=5474
[ 4.718135] adjusted ref_div=16m, fb_div=5474
[ 4.718136] diff=14, diff_best=11
[ 4.718136] - trying post_div 7, ref_div_max=14
[ 4.718136] tentative ref_div=14m, fb_div=5589
[ 4.718137] adjusted ref_div=14m, fb_div=5589
[ 4.718137] diff=12, diff_best=11
[ 4.718138] post_div_best=5
[ 4.718138] - trying post_div 5, ref_div_max=20
[ 4.718139] tentative ref_div=20m, fb_div=5703
[ 4.718139] adjusted ref_div=20m, fb_div=5703
[ 4.718141] [drm:radeon_compute_pll_avivo] 153970 - 153980, pll dividers -
fb: 570.3 ref: 20, post 5
The modeline is:
Modeline 55:"1920x1200" 60 153970 1920 1968 2000 2080 1200 1203 1209 1235 0x48 0x9
and is consistent between the two drivers (different mode number but same
values).
https://bugs.freedesktop.org/show_bug.cgi?id=96712
Bug ID: 96712
Summary: Kernel hard LOCKUP related to radeon driver
Product: DRI
Version: unspecified
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: DRM/Radeon
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: lebedev.ri(a)gmail.com
Created attachment 124765
--> https://bugs.freedesktop.org/attachment.cgi?id=124765&action=edit
dmesg of all that boot
This is Debian testing, fully updated as of 28 Jun 2016 (today).
I believe the lockup happened while rendering video via kdenlive (all
multithreading options set to 6, GPU acceleration on, though I'm not sure it
works) and then trying to change a page in Google Chrome.
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 16
Model: 10
Model name: AMD Phenom(tm) II X6 1075T Processor
Stepping: 0
CPU MHz: 3000.000
CPU max MHz: 3000.0000
CPU min MHz: 800.0000
BogoMIPS: 6019.74
Virtualization: AMD-V
L1d cache: 64K
L1i cache: 64K
L2 cache: 512K
L3 cache: 6144K
NUMA node0 CPU(s): 0-5
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb
rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc extd_apicid
aperfmperf eagerfpu pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic
cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt nodeid_msr
cpb hw_pstate vmmcall npt lbrv svm_lock nrip_save pausefilter
https://bugs.freedesktop.org/show_bug.cgi?id=96487
Bug ID: 96487
Summary: Cannot force power_dpm_force_performance_level to high
Product: DRI
Version: XOrg git
Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: DRM/Radeon
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: ltjbour(a)gmail.com
Created attachment 124464
--> https://bugs.freedesktop.org/attachment.cgi?id=124464&action=edit
Information about the supported gpu states
Linux 4.6.2-1-ARCH #1 SMP PREEMPT Wed Jun 8 08:40:59 CEST 2016 x86_64 GNU/Linux
I thought my problem was perhaps a duplicate of
https://bugs.freedesktop.org/show_bug.cgi?id=70654, but I'm pretty sure it
isn't the same problem. The problem I'm having is that I can't set the highest
frequency state of my GPU (AMD A8-4500M APU + HD 7640G) in a consistent and
persistent way. This is regardless of whether UVD is enabled or not (I assume
it is enabled automatically, and I haven't been running any application that
requires it).
In other words, I would like to set the dpm level to 'high' so that the GPU
stays in the higher frequency state, but I can't set
power_dpm_force_performance_level to anything other than 'auto' and 'low'. I
get the following output:
λ echo high | sudo tee /sys/class/drm/card0/device/power_dpm_force_performance_level
high
tee: /sys/class/drm/card0/device/power_dpm_force_performance_level: Invalid argument
After some testing, I've noticed that it's more stable to set the 'performance'
dpm state and force the level to 'low', which leaves the GPU frequency at
~335 MHz, than to set it to 'auto', which makes the frequency jump between
modes [335/490/655] MHz depending on the load. I would obviously like to be
able to set a single mode and have a constant frequency; if I could set 655 MHz
permanently, that would be ideal.
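The workaround above can be sketched as a small script (assuming the standard
radeon dpm sysfs nodes and card0; the write is skipped when a node is missing
or not writable, and a rejected value is reported the same way tee reported
"Invalid argument" above):

```shell
#!/bin/sh
# Sketch: write a dpm value to a sysfs node, tolerating absent nodes
# (no radeon GPU, wrong card index) and values the kernel rejects.
set_dpm() {
    node=$1 value=$2
    if [ ! -w "$node" ]; then
        echo "skipped: $node not writable"
        return 0
    fi
    if echo "$value" > "$node" 2>/dev/null; then
        echo "set $(basename "$node") = $value"
    else
        echo "rejected: '$value' for $(basename "$node")"
    fi
}

# The reporter's most stable combination: 'performance' state, 'low' level.
set_dpm /sys/class/drm/card0/device/power_dpm_state performance
set_dpm /sys/class/drm/card0/device/power_dpm_force_performance_level low
```

Run as root (or via sudo); writes to these nodes require root, which is why the
original command used sudo tee.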
I've already tried using the 'dynpm' and 'profile' methods, but they don't
work. I tried a bunch of times, and even enabled/disabled some radeon
parameters to see if they were somehow conflicting, but I wasn't able to
successfully change states a single time; the GPU would permanently stay at
200 MHz with either of these two methods set. That left me with the single
choice of 'dpm', as it was the only method that could at least change states.
I was told on IRC that the reason I couldn't set the 'high' value permanently
was a hardware limitation of Trinity chips. I don't see how this can possibly
be true, as I can get this same GPU to stay consistently at its highest
frequency state in Windows 8. So if anything, this is a limitation of the
driver?
I've attached the kernel info about the available states, the list of radeon
parameters and their values on my system, and the output of /proc/cpuinfo. I
don't know what else to add for now, so feel free to ask for any additional
information; I'll make it available as soon as I can.
https://bugs.freedesktop.org/show_bug.cgi?id=96326
Bug ID: 96326
Summary: Heavy screen flickering in OpenGL apps on R9 390
Product: Mesa
Version: git
Hardware: Other
OS: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/radeonsi
Assignee: dri-devel(a)lists.freedesktop.org
Reporter: 0xe2.0x9a.0x9b(a)gmail.com
QA Contact: dri-devel(a)lists.freedesktop.org
GPU: R9 390
GPU manufacturer: Gigabyte
Kernel: 4.5, 4.6, etc
Firmware: both hawaii_smc.bin and hawaii_k_smc.bin
Hello,
I am experiencing heavy LCD screen flickering in OpenGL apps when automatic GPU
power management is enabled.
The flickering is related to mclk transitions. Forcing mclk=1.5GHz, and letting
sclk be controlled by DPM, removes the flickering.
Related issues:
http://bugs.freedesktop.org/show_bug.cgi?id=91880
http://bugs.freedesktop.org/show_bug.cgi?id=92302