https://bugs.freedesktop.org/show_bug.cgi?id=4374
Eric Anholt <eric(a)anholt.net> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |RESOLVED
Resolution| |FIXED
--- Comment #20 from Eric Anholt <eric(a)anholt.net> 2012-03-02 17:44:20 PST ---
I seem to recall testing this on my 865 about a year ago and seeing the bug,
but today the bug doesn't show up any more. So I'm tentatively going to mark
this fixed, though I don't have a commit identified that fixed it :(
--
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are on the CC list for the bug.
You are the assignee for the bug.
https://bugs.freedesktop.org/show_bug.cgi?id=45558
Bug #: 45558
Summary: cannot render on a drawable of size equal the max
framebuffer size
Classification: Unclassified
Product: Mesa
Version: git
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/DRI/i830
AssignedTo: dri-devel(a)lists.freedesktop.org
ReportedBy: prahal(a)yahoo.com
Created attachment 56550
--> https://bugs.freedesktop.org/attachment.cgi?id=56550
do not drift one pixel above what we want - x2, y2 are exclusive max
The X drawable extents are exclusive, as I learned from MrCooper on the
xorg-devel IRC channel.
This means that width = x2 - x1, with x2 being exclusive. So wherever the
code computes DrawBuffer->Width + x1 to obtain the right-hand coordinate, it
ends up one pixel beyond what we want (i.e. it sends x2, which is exclusive).
For example, a 2048-wide drawable at x1 = 0 has x2 = 2048, which is already
out of range as an inclusive pixel coordinate.
This patch fixes a few places where gen2 (865G) is affected by this bug. In
practice gnome-shell renders badly because cogl allocates its atlas at a
power-of-two size derived from the resolution (which is fine in itself, but
it leads to 2048, the maximum framebuffer size on gen2).
I will also attach the test case I used; it is simplistic but reproduces the
same issue as gnome-shell/cogl: a drawable bound to the context as both read
and draw buffer, with size equal to the maximum framebuffer size (2048) on
865G (gen2).
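For readers without the attachment handy, a rough illustrative sketch of that
setup (this is NOT the attached test case; the pbuffer approach and all names
below are just for illustration): create a 2048x2048 pbuffer, bind it as both
read and draw drawable, clear it and read a corner pixel back.

/* Illustrative sketch only -- not the attached test case. */
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int fb_attribs[] = {
        GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
        GLX_RENDER_TYPE, GLX_RGBA_BIT,
        None
    };
    int pb_attribs[] = {
        GLX_PBUFFER_WIDTH, 2048,   /* max framebuffer size on gen2 */
        GLX_PBUFFER_HEIGHT, 2048,
        None
    };
    int n;
    GLXFBConfig *cfg = glXChooseFBConfig(dpy, DefaultScreen(dpy),
                                         fb_attribs, &n);
    GLXPbuffer pbuf = glXCreatePbuffer(dpy, cfg[0], pb_attribs);
    GLXContext ctx = glXCreateNewContext(dpy, cfg[0], GLX_RGBA_TYPE,
                                         NULL, True);
    unsigned char px[4];

    /* Same drawable bound as both read and draw buffer, at the max size. */
    glXMakeContextCurrent(dpy, pbuf, pbuf, ctx);

    glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glReadPixels(2047, 2047, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);
    printf("pixel at (2047,2047): %u %u %u %u\n",
           px[0], px[1], px[2], px[3]);
    return 0;
}

Error handling is omitted (e.g. cfg may be NULL if no pbuffer-capable config
exists); on a working driver the corner pixel should read back as pure green.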
--
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.
Hi All,
I have a drm device on the platform bus, similar to the exynos driver.
Right now libdrm (at least the tests included in libdrm) refuses to open
the device because i915, nouveau, radeon and vmwgfx are all they know
about. Looking at the libdrm code it is not obvious how to fix this
(except for adding "exynos", "mydevice", "myotherdevice" to the module
table, which seems awkward and not very futureproof). Any hints or
thoughts on how to proceed here?
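One direction I could imagine (rough sketch only, not a patch against the
libdrm tree): open the node directly and let drmGetVersion() report which
driver is bound to it, instead of matching against a fixed name list.

/* Sketch: identify the driver behind a node instead of guessing its name. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    drmVersionPtr v;

    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    v = drmGetVersion(fd);
    if (v) {
        /* v->name is whatever driver bound the node: "exynos", "i915", ... */
        printf("driver: %s (%d.%d.%d)\n", v->name,
               v->version_major, v->version_minor, v->version_patchlevel);
        drmFreeVersion(v);
    }
    close(fd);
    return 0;
}

A test could then either accept whatever driver it finds, or skip when the
name is not one it knows how to exercise.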
Thanks
Sascha
--
Pengutronix e.K.                           |                             |
Industrial Linux Solutions | http://www.pengutronix.de/ |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |
Folks:
I recently uncovered a bug in the block layer. It uses a workqueue to
periodically probe removable drives for media or other state changes,
and the workqueue it uses is system_nrt_wq.
The bug is that system_nrt_wq is not freezable, so it keeps on running
even while the system is in the process of suspending or hibernating.
Doing I/O to a suspended drive doesn't work well and in some cases
causes nasty problems. Obviously these polls need to stop during a
suspend transition.
A search through the kernel shows that system_nrt_wq is also used in a
few other subsystems:
./fs/cifs/cifssmb.c: queue_work(system_nrt_wq, &rdata->work);
./fs/cifs/cifssmb.c: queue_work(system_nrt_wq, &wdata->work);
./fs/cifs/misc.c: queue_work(system_nrt_wq,
./fs/cifs/connect.c: queue_delayed_work(system_nrt_wq, &server->echo, SMB_ECHO_INTERVAL);
./fs/cifs/connect.c: queue_delayed_work(system_nrt_wq, &tcp_ses->echo, SMB_ECHO_INTERVAL);
./fs/cifs/connect.c: queue_delayed_work(system_nrt_wq, &cifs_sb->prune_tlinks,
./fs/cifs/connect.c: queue_delayed_work(system_nrt_wq, &cifs_sb->prune_tlinks,
./drivers/mmc/core/host.c: queue_work(system_nrt_wq, &host->clk_gate_work);
./drivers/gpu/drm/drm_crtc_helper.c: queue_delayed_work(system_nrt_wq, delayed_work, DRM_OUTPUT_POLL_PERIOD);
./drivers/gpu/drm/drm_crtc_helper.c: queue_delayed_work(system_nrt_wq, &dev->mode_config.output_poll_work, DRM_OUTPUT_POLL_PERIOD);
./drivers/gpu/drm/drm_crtc_helper.c: queue_delayed_work(system_nrt_wq, &dev->mode_config.output_poll_work, 0);
./security/keys/gc.c: queue_work(system_nrt_wq, &key_gc_work);
./security/keys/gc.c: queue_work(system_nrt_wq, &key_gc_work);
./security/keys/gc.c: queue_work(system_nrt_wq, &key_gc_work);
./security/keys/gc.c: queue_work(system_nrt_wq, &key_gc_work);
./security/keys/key.c: queue_work(system_nrt_wq, &key_gc_work);
My question to all of you: Should system_nrt_wq be made freezable, or
should I create a new workqueue that is both freezable and
non-reentrant? And if I do, which of the usages above should be
converted to the new workqueue?
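In case it helps the discussion, here is roughly what the second option would
look like (untested sketch; the queue name and the block-layer call site in
the comment are illustrative, not an actual patch):

/* Sketch of a dedicated workqueue that is both freezable and non-reentrant. */
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *system_freezable_nrt_wq;

static int __init freezable_nrt_wq_init(void)
{
	system_freezable_nrt_wq = alloc_workqueue("events_freezable_nrt",
						  WQ_FREEZABLE | WQ_NON_REENTRANT,
						  0);
	return system_freezable_nrt_wq ? 0 : -ENOMEM;
}
early_initcall(freezable_nrt_wq_init);

/*
 * The block-layer media polling would then queue its work here instead of
 * on system_nrt_wq, e.g. something like:
 *
 *	queue_delayed_work(system_freezable_nrt_wq, &ev->dwork, intv);
 *
 * so the polls are frozen along with userspace during suspend/hibernate.
 */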
Thanks,
Alan Stern
https://bugs.freedesktop.org/show_bug.cgi?id=45984
Bug #: 45984
Summary: Piglit:Xserver crashes with segmentation fault after
few seconds of r600.tests launch.
Classification: Unclassified
Product: Mesa
Version: 8.0
Platform: x86-64 (AMD64)
OS/Version: Linux (All)
Status: NEW
Severity: major
Priority: medium
Component: Drivers/Gallium/r600
AssignedTo: dri-devel(a)lists.freedesktop.org
ReportedBy: hysvats(a)gmail.com
Created attachment 56951
--> https://bugs.freedesktop.org/attachment.cgi?id=56951
dmesg
Driver Stack Details:
=========================
1) Kernel-3.0.0-12-generic-pae
2) drm-2.4.31
3) Mesa-8.0-devel (git-a7750c9)
4) Xorg-server-1.11.0
5) xf86-video-ati- master
System Environment:
===================
Asic : NI SeymourXT
O.S. : Ubuntu-11.10 (64 bit)
Processor : AMD Athlon(tm) 64 X2 @ 1000 MHz
Memory : 2 GB
Steps to Reproduce:
===================
1) Install piglit from git clone git://anongit.freedesktop.org/git/piglit
2) Run ./piglit-run.py tests/r600.tests results/r600.results
Observation :
=============
Xserver crashes with a segmentation fault a few seconds after launching r600.tests.
--
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.
Status update:
In r600.c I found that for RS780 the num_*_threads values are set like this:
sq_thread_resource_mgmt = (NUM_PS_THREADS(79) |
                           NUM_VS_THREADS(78) |
                           NUM_GS_THREADS(4) |
                           NUM_ES_THREADS(31));
But according to the documentation, each of them should be a multiple of 4.
And in r600_blit_kms.c they are 136, 48, 4 and 4. I want to know why
79, 78, 4 and 31 are used here.
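For reference, these macros just pack the four counts into byte-wide fields
of SQ_THREAD_RESOURCE_MGMT; the layout below is my reading of r600d.h:

/* Assumed field layout of SQ_THREAD_RESOURCE_MGMT, as I read r600d.h: */
#define NUM_PS_THREADS(x)	((x) << 0)	/* bits  0..7  */
#define NUM_VS_THREADS(x)	((x) << 8)	/* bits  8..15 */
#define NUM_GS_THREADS(x)	((x) << 16)	/* bits 16..23 */
#define NUM_ES_THREADS(x)	((x) << 24)	/* bits 24..31 */

(Incidentally, 79 + 78 + 4 + 31 = 192 and 136 + 48 + 4 + 4 = 192, so both
sets allocate the same total number of threads; whether that total is the
actual hardware constraint I cannot say.)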
Huacai Chen
> On Wed, 2012-02-29 at 12:49 +0800, chenhc(a)lemote.com wrote:
>> > On Tue, 2012-02-21 at 18:37 +0800, Chen Jie wrote:
>> >> On Feb 17, 2012, at 5:27 PM, Chen Jie <chenj(a)lemote.com> wrote:
>> >> >> One good way to test the GART is to go over the GPU GART table and write a
>> >> >> dword using the GPU at the end of each page, something like 0xCAFEDEAD
>> >> >> or some value that is unlikely to be already set. Then go over
>> >> >> all the pages and check that the GPU writes succeeded. Abusing the scratch
>> >> >> register write-back feature is the easiest way to try that.
>> >> > I'm planning to add a GART table check procedure at resume, which
>> >> > will go over the GPU GART table:
>> >> > 1. read(backup) a dword at end of each GPU page
>> >> > 2. write a mark by GPU and check it
>> >> > 3. restore the original dword
>> >> Attachment validateGART.patch does the job:
>> >> * It currently only works on the mips64 platform.
>> >> * To use it, apply all_in_vram.patch first, which allocates the CP
>> >> ring, ih and ib in VRAM and hard-codes no_wb=1.
>> >>
>> >> The GART test routine will be invoked in r600_resume. We've tried it,
>> >> and found that when the lockup happened the GART table was still good before
>> >> userspace restarted. The related dmesg follows:
>> >> [ 1521.820312] [drm] r600_gart_table_validate(): Validate GART Table
>> >> at 9000000040040000, 32768 entries, Dummy
>> >> Page[0x000000000e004000-0x000000000e007fff]
>> >> [ 1522.019531] [drm] r600_gart_table_validate(): Sweep 32768
>> >> entries(valid=8544, invalid=24224, total=32768).
>> >> ...
>> >> [ 1531.156250] PM: resume of devices complete after 9396.588 msecs
>> >> [ 1532.152343] Restarting tasks ... done.
>> >> [ 1544.468750] radeon 0000:01:05.0: GPU lockup CP stall for more than
>> >> 10003msec
>> >> [ 1544.472656] ------------[ cut here ]------------
>> >> [ 1544.480468] WARNING: at drivers/gpu/drm/radeon/radeon_fence.c:243
>> >> radeon_fence_wait+0x25c/0x314()
>> >> [ 1544.488281] GPU lockup (waiting for 0x0002136B last fence id
>> >> 0x0002136A)
>> >> ...
>> >> [ 1544.886718] radeon 0000:01:05.0: Wait for MC idle timedout !
>> >> [ 1545.046875] radeon 0000:01:05.0: Wait for MC idle timedout !
>> >> [ 1545.062500] radeon 0000:01:05.0: WB disabled
>> >> [ 1545.097656] [drm] ring test succeeded in 0 usecs
>> >> [ 1545.105468] [drm] ib test succeeded in 0 usecs
>> >> [ 1545.109375] [drm] Enabling audio support
>> >> [ 1545.113281] [drm] r600_gart_table_validate(): Validate GART Table
>> >> at 9000000040040000, 32768 entries, Dummy
>> >> Page[0x000000000e004000-0x000000000e007fff]
>> >> [ 1545.125000] [drm:r600_gart_table_validate] *ERROR* Iter=0:
>> >> unexpected value 0x745aaad1(expect 0xDEADBEEF)
>> >> entry=0x000000000e008067, orignal=0x745aaad1
>> >> ...
>> >> /* System blocked here. */
>> >>
>> >> Any idea?
>> >
>> > I know lockups are frustrating; my only idea is that the memory controller
>> > is locked up because of some failing pci <-> system ram transaction.
>> >
>> >>
>> >> BTW, we found the following in r600_pcie_gart_enable()
>> >> (drivers/gpu/drm/radeon/r600.c):
>> >> WREG32(VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR,
>> >> (u32)(rdev->dummy_page.addr >> 12));
>> >>
>> >> On our platform, PAGE_SIZE is 16K; does that cause any problem?
>> >
>> > No this should be handled properly.
>> >
>> >> Also in radeon_gart_unbind() and radeon_gart_restore(), the logic
>> >> should change to:
>> >>     for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
>> >>         radeon_gart_set_page(rdev, t, page_base);
>> >> -       page_base += RADEON_GPU_PAGE_SIZE;
>> >> +       if (page_base != rdev->dummy_page.addr)
>> >> +           page_base += RADEON_GPU_PAGE_SIZE;
>> >>     }
>> >> ???
>> >
>> > No need to do so; the dummy page will be 16K too, so it's fine.
>> Really? When the CPU page is 16K and the GPU page is 4K, suppose the dummy
>> page is at 0x8e004000; then there are four distinct addresses in the GART:
>> 0x8e004000, 0x8e005000, 0x8e006000, 0x8e007000. The value written to
>> VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR is 0x8e004 (0x8e004000 >> 12). I
>> don't know how VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR works, but I
>> think 0x8e005000, 0x8e006000 and 0x8e007000 cannot be handled correctly.
>
> When radeon_gart_unbind initializes the GART entries to point to the dummy
> page, it's just to have something safe in the GART table.
>
> VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR is the page address used when
> a fault happens. It's like a sandbox for the MC. It doesn't conflict in
> any way to have GART table entries point to the same page.
>
> Cheers,
> Jerome
>
>