Hi Linus,
One radeon, 2 core fixes, and an interface update to allow for > 2 crtcs in vblank.
Hopefully the Intel guys can give me some updates on the s/r issue tomorrow.
Dave.
The following changes since commit c87a8d8dcd2587c203f3dd8a3c5c15d1e128ec0d:
drm/radeon: fixup refcounts in radeon dumb create ioctl. (2011-03-17 13:58:34 +1000)
are available in the git repository at: ssh://master.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6.git drm-fixes
Alex Deucher (1): drm/radeon/kms: prefer legacy pll algo for tv-out
Chris Wilson (1): drm: Fix use-after-free in drm_gem_vm_close()
Dave Airlie (1): drm: check for modesetting on modeset ioctls
Ilija Hadzic (1): drm/kernel: vblank wait on crtc > 1
 drivers/gpu/drm/drm_crtc.c             |   51 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/drm_gem.c              |    5 ++-
 drivers/gpu/drm/drm_ioctl.c            |    3 ++
 drivers/gpu/drm/drm_irq.c              |   15 ++++++---
 drivers/gpu/drm/radeon/atombios_crtc.c |    6 +++-
 include/drm/drm.h                      |    3 ++
 6 files changed, 75 insertions(+), 8 deletions(-)
On Wed, 2011-03-23 at 04:18 +0000, Dave Airlie wrote:
One radeon, 2 core fixes, and an interface update to allow for > 2 crtcs in vblank.
[...]
Ilija Hadzic (1): drm/kernel: vblank wait on crtc > 1
This patch was still being debated yesterday, are you deliberately pushing it regardless? Once it hits mainline, it'll be pretty much set in stone.
2011/3/23 Michel Dänzer michel@daenzer.net:
From what I can see it was the userspace patches being debated, this
one seemed fine and the interface looked okay to me.
Dave.
On Wed, 2011-03-23 at 18:16 +1000, Dave Airlie wrote:
The author ignored my suggestions to make the patch smaller and simpler, more maintainable and more future-proof all at once.
2011/3/23 Michel Dänzer michel@daenzer.net:
It was already small, and I'm not sure merging the flags made it more maintainable. It's always been a slightly painful ioctl, and hopefully any future changes add a new ioctl, especially if we want 64-bit values.
The only comment I really thought was necessary was changing the CAP name, but since that isn't part of the ABI (just the number) we can quickly fix it with a follow-up.
Dave.
On Wed, 2011-03-23 at 19:06 +1000, Dave Airlie wrote:
Having two holes between _DRM_VBLANK_FLAGS_MASK, _DRM_VBLANK_HIGH_CRTC_MASK and _DRM_VBLANK_TYPES_MASK can unnecessarily complicate future extensions.
It's always been a slightly painful ioctl, and hopefully any future changes add a new ioctl, especially if we want 64-bit values.
Is there really a problem with assuming that if the 32-bit value went backwards, the upper 32 bits have incremented by 1? 1 << 32 refreshes is about 2 years, so I can't really see any ambiguity there. For hardware that doesn't have 64 bit frame counters (does any have that yet?), something like that needs to be done at some level anyway.
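(For what it's worth, here is a minimal sketch of the kind of extension being described, purely illustrative and not code from any driver; the function and variable names are made up for the example, and it assumes the 32-bit counter is sampled often enough that it can wrap at most once between samples:)

#include <stdint.h>

static uint64_t vblank_hi;   /* accumulated upper 32 bits */
static uint32_t last_lo;     /* last 32-bit value read from the hardware */

/* Extend a 32-bit hardware vblank counter to 64 bits. */
static uint64_t extend_vblank_count(uint32_t hw_count)
{
	/* If the low word went backwards, assume it wrapped exactly once. */
	if (hw_count < last_lo)
		vblank_hi += (uint64_t)1 << 32;
	last_lo = hw_count;
	return vblank_hi | hw_count;
}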
Anyway, that's another discussion...
On Wed, 2011-03-23 at 06:40 -0500, Ilija Hadzic wrote:
No, one central point is not to leave two holes between _DRM_VBLANK_FLAGS_MASK, _DRM_VBLANK_HIGH_CRTC_MASK and _DRM_VBLANK_TYPES_MASK.
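(To make the layout concern concrete, here is a rough sketch of a packing with no hole between the type bits and the high-CRTC bits, leaving one contiguous gap below the flags; the SKETCH_* names and values are illustrative assumptions, not the actual patch:)

#define SKETCH_VBLANK_TYPES_MASK      0x00000001u  /* bit 0: relative vs. absolute */
#define SKETCH_VBLANK_HIGH_CRTC_SHIFT 1
#define SKETCH_VBLANK_HIGH_CRTC_MASK  0x0000003eu  /* bits 1-5: CRTC index */
#define SKETCH_VBLANK_FLAGS_MASK      0xfc000000u  /* bits 26-31: event, secondary, ... */
/* Bits 6-25 remain one contiguous gap for future extensions. */

/* Encode a vblank request type for a given CRTC (0-31). */
static inline unsigned int sketch_vblank_type(unsigned int type, unsigned int crtc)
{
	return type | ((crtc << SKETCH_VBLANK_HIGH_CRTC_SHIFT) &
		       SKETCH_VBLANK_HIGH_CRTC_MASK);
}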
2011/3/23 Michel Dänzer michel@daenzer.net:
Okay, I pushed this to my tree before this discussion got on my radar, and I'm just catching up now.
I'll push the following patch to Linus to keep the biggest gap in the 32-bit word for future use; then we can fix up the userspace patches.
Kernel interfaces usually aren't considered unchangeable until the kernel is released.
Dave.
On Thu, 2011-03-24 at 21:06 +1000, Dave Airlie wrote:
As we discussed on IRC, I'd personally go further, but this is definitely an improvement and fixes the worst problems. Thanks Dave.
Reviewed-by: Michel Dänzer michel@daenzer.net
On Thu, Mar 24, 2011 at 4:31 AM, Ilija Hadzic ihadzic@research.bell-labs.com wrote:
OK, I'll update libdrm side to match this change and send the patch later today
Quite frankly, this whole discussion is a clear example of why DRM has been problematic.
Why the hell am I getting pushed stuff that is clearly not baked? It's the second week of the merge window, the stuff I'm getting should have been finalized two weeks ago, not be something that is still being discussed and that has API issues.
In other words: Why should I pull this at all?
Linus
On Thu, 24 Mar 2011, Linus Torvalds wrote:
In other words: Why should I pull this at all?
You should pull it (eventually) because it fixes a problem that is real and visible and does it in a way that is conservative enough to not risk introducing new breakage.
Whether you should pull it now or wait until you get convinced that we got our act together is totally your call.
Speaking as the author of the patch, what Dave pushed this morning is good enough for me. I spent quite a bit of time testing all the usage patterns (including the latest version pushed to you) and I can assert that the change is safe.
I can't speak for others, but I have not seen any additional complaints.
-- Ilija
On Thu, Mar 24, 2011 at 8:49 AM, Ilija Hadzic ihadzic@research.bell-labs.com wrote:
Alternatively, kick it out and wait for Ilija to see and address Michel's complaints. It's not too late to improve this.
~ C.
On Fri, Mar 25, 2011 at 12:37 AM, Linus Torvalds torvalds@linux-foundation.org wrote:
Linus,
Take a step back: it was an enhancement to a current API, it had been reviewed by two people when I merged it, and it made sense. Michel raised his concern after that point, so no matter what, it was already in a tree I'd pushed out in public, and the only answer once he raised his concern was to revert or fix it. It's a minor problem. I'd have pushed this patch even post merge window; it solves a real problem that Ilija was seeing and he stepped up and fixed it. Post-merge review is what happened here, and really this is nothing compared to, say, the fallout in the VFS after 2.6.38-rc1.
If you think this has anything to do with Intel's ability to break your hardware on every merge then you've got your wires crossed.
Dave.
On Fri, 25 Mar 2011, Dave Airlie wrote:
Just to be fair to Michel (and to prevent any unnecessary "fights" on the list, of which I am sure people have had enough by now), the concern in question that triggered the revision of the API was raised in time, but I overlooked it in the pile of other comments I was also trying to address.
I am neither the first nor the last guy to inadvertently drop a review comment due to limited "bandwidth", so this should not be made into a big deal. Especially, when no damage was done (except for a few extra E-mails and a little extra work).
Dave did the follow-up patch to satisfy Michel (thanks!) and I have already submitted the user space stuff that matches it, so hopefully we are all aligned now.
thanks,
Ilija
On Thu, Mar 24, 2011 at 1:05 PM, Dave Airlie airlied@gmail.com wrote:
If you think this has anything to do with Intel's ability to break your hardware on every merge then you've got your wires crossed.
No, it's about the fact that I expect to be pushed code that is WRITTEN AND TESTED BEFORE THE MERGE WINDOW.
The merge window is not for writing new code. The code that gets merged should have been written two weeks ago already. The only new code that I want to see are actual regressions.
I have been talking about this for YEARS now. It's not a new issue. I hate seeing patches sent to me while they are clearly still being discussed and developed. There's something seriously wrong there when that happens.
Linus
On Fri, Mar 25, 2011 at 9:48 AM, Linus Torvalds torvalds@linux-foundation.org wrote:
Like, seriously, you really think the VFS locking rework wasn't under development or discussion when you merged it? I'm sure Al would have something to say about it, considering the number of times he cursed on IRC about that code after you merged it.
Here's the point you are missing. I'd quite happily have pushed this *outside the merge window* because it solves a real problem with zero probability of introducing any new problems. So f'ing what if it was under discussion; everything in the kernel is still being discussed and developed. The ABI change was a minor move of the field to leave a larger hole for future changes, it wasn't a fucking fanotify syscall.
This isn't even close to the level of the usual type of fuckups you get in a merge window; it just happens you were cc'ed on the discussion, otherwise I'm betting you'd never even notice. I'm betting something much worse landed in this merge window that you should be giving a fuck about, but this isn't the droid you are looking for.
Dave.
On Thu, Mar 24, 2011 at 5:07 PM, Dave Airlie airlied@gmail.com wrote:
Umm. That code was basically over a year old by the time it was merged.
How old was the code we're talking about now? Seriously?
And your argument that this case is something you'd have pushed even outside the merge window - I think that sounds like more of the same problem. You say it fixes a problem - but does it fix a REGRESSION?
Do you see the difference? Every single commit I get "fixes a problem". But our rules for these things are much stricter than that.
Maybe not. But why is it always the DRM tree that has these issues? Why is it that the DRM tree is the one that gets relatively _huge_ patches after -rc1 is out?
I really REALLY wish that you graphics people would at some point admit that you guys have a problem. I am hoping that the Intel side is being worked on.
Instead, I see what seems to be you being in a hurry, and arguments why uncooked code should be merged even outside the merge window.
Do you see what I'm aiming at here?
If this was a one-time event, we wouldn't be having this discussion. But the DRM tree is one of the BIGGEST issues after the merge window has closed. And it's EVERY SINGLE RELEASE.
Why? Some introspection please. You don't even have to answer me. I ask you to answer that to yourself.
Linus
On Thu, Mar 24, 2011 at 5:17 PM, Linus Torvalds torvalds@linux-foundation.org wrote:
.. regardless, it's pulled now. I just hope that some day I'll be taken by surprise, and the drm tree won't be the biggest issue after -rc1.
Some day.
Linus
On Thu, Mar 24, 2011 at 8:17 PM, Linus Torvalds torvalds@linux-foundation.org wrote:
Below are my feelings, and they likely don't reflect other people's feelings.
DRM has been trying to play catch-up for years. GPUs are likely the most complex piece of hardware you can find in a computer (from memory management, to complex modesetting with all kinds of tweaks, to the utterly crazy 3D engine and everything that comes with it). Unlike other pieces of hardware, when it comes to fully using GPU acceleration there is no common denominator that we would be able to put in the kernel (like a TCP stack or a filesystem). I am sure very few people would like to see a full GL stack in the kernel.
This results in various non-common APIs that each driver exposes to userspace, and it's all half-cooked, because we have a tendency to release early (which is not necessarily wrong in my eyes). If I were to do it cleanly for one device, I wouldn't freeze the API before I had a full, fast stack (i.e. fast GL driver, video decompression, dual GPU, efficient power management). This is exactly what nouveau is doing: they are still experimental for good reasons, they have the freedom to fix their API, and they keep improving it each time their userspace progresses.
So from my POV, either a frozen API for DRM is not a good solution (there is a reason why closed-source drivers bundle everything together: kernel, GL, DDX, ...), or we should stay experimental until we get our API right, which would likely mean several years (2-4 years as a wild guess) given the current number of people working on this. That would mean that most distributions wouldn't enable the open-source drivers, and then the open-source drivers likely wouldn't get enough testing (kind of a chicken-and-egg problem).
I am not even talking about the dramatic GPU changes of the last few years. For instance, a few years ago having a dual-screen setup meant that you were king of something, or at least at the top of the hill. Nowadays dual or triple screens, or even more, are a common setup, but some of the DRM API was designed without even thinking that one day there would be more than 2 CRTCs (the expectation was likely that there would be flying cars by then).
Well, these are my feelings: we are just chasing a fast-moving target, always falling short on the API and freezing ourselves into corner cases. Maybe we, or just I, are bad at designing APIs (well, not always; I do believe, for instance, that the modesetting API we expose is a good one).
Cheers, Jerome
On 25/03/2011, at 10:55, Jerome Glisse j.glisse@gmail.com wrote:
Oh, I wish this were actually the case. The last time we attempted such a thing we were blasted by Linus. It does make me wonder why we're even bothering being in staging.
This is where the binary drivers have a huge advantage, they package all the pieces of their driver together and can modify things as necessary.
Part of me does think such an approach with the open source graphics drivers would be better. The current model doesn't really fit too well in my opinion. Though, admittedly, there's different problems to going other ways.
Ben.
On Fri, Mar 25, 2011 at 6:25 AM, Ben Skeggs skeggsb@gmail.com wrote:
Well, I think being able to evolve the API would help a lot; it should still be possible to keep supporting an old API for a year or so. But my feeling is that some of the current API for some of the devices needs heavy lifting if we ever want to improve things.
Cheers, Jerome
On Fri, 25 Mar 2011 10:01:14 -0400 Jerome Glisse j.glisse@gmail.com wrote:
Going back to the old model of a separate drm repo would create more problems than it solves, IMO.
One thing I think Linus has been fairly consistent about is that making things easy for users (well at least power users who build their own kernels) is important. That means ABIs need to be stable so that their existing userspace continues to work even after a kernel upgrade.
If we went back to the old, out of tree model, we might be able to break things more easily (i.e. require lockstep upgrades of kernel & userspace when we change things, hopefully for a good reason), but I think end users would end up suffering. They'd have to rely on their distro to pick up such changes and make sure dependencies were met and that upgrades/downgrades worked properly across the pre-packaged kernels that they made available.
One of the main reasons we moved in-tree was to make sure our bits got into the hands of users more quickly, and to make life easier for the various downstream distros, not all of which have deep expertise in the graphics stack.
Fortunately, neither of the issues that started this thread, the suspend/resume regression in i915 and the vblank ioctl enhancement, is a problem in this respect. The former is just a bug (though definitely not one that should have made it to Linus's tree) and the latter does preserve compatibility and fix a major issue, so isn't really a problem imo.
But in the general sense, I think we just have to bite the bullet, take our time with ABI additions and changes so as to preserve compatibility for a long time (I think we've been doing well with this on the Intel side at least; we add feature flags every time we change something, and make sure userspace is forward and backward compatible). This is more work for us, but I think it benefits the user in the end. And it could be worse, at least we're not still dealing with memory layout compatibility between the DRM, fbdev, DDX and DRI drivers anymore!
On Fri, Mar 25, 2011 at 1:34 PM, Jesse Barnes jbarnes@virtuousgeek.org wrote:
While for small changes slowly evolving the API in a backward-compatible way is doable, major changes are not. For instance, if I were to write radeon KMS today, I would drop the domain stuff out of the mix, would not do command submission the way it is done now, and a bunch of other things that I have been meaning to try but have yet to prototype. The changes I am thinking about here mean a different userspace which would communicate with the kernel in a completely different way.
I am just acknowledging the fact that I made a bunch of very poor decisions in radeon KMS, and I feel that today I know better what a good API would be, but I could be wrong once again.
That aside, I would prefer that we stay upstream, but if moving to our own repo allowed us to break the API, then I would jump in right away.
Note that when changing the API, one could keep the old API with the old code around and just refuse to fix any issues with the old one; that means someone updating the kernel with something that worked will still have a working configuration. If they then want to improve their graphics experience, they would have to update the userspace bits.
Cheers, Jerome
On Fri, Mar 25, 2011 at 10:17 AM, Linus Torvalds torvalds@linux-foundation.org wrote:
It was 30 lines of clean code that really was fine to be merged in its first form; it was merely a future maintenance issue to clean up the interface before it was released as stable.
Okay I'll explain something from my position and maybe you'll never want to pull from me again, but the kernel release cycle doesn't work at all well for graphics drivers.
Why?
Well, the major fail case we have is "my monitor doesn't switch on". Now say I merge new hardware support for a new GPU in 2.6.38, and sometime in 2.6.39-rc1 we come across a variant that is broken (this happens every kernel; we find at least 5 GPU variants or BIOS table reports on radeon, look at pretty much any post-rc1 patch from Alex Deucher). By your rules this isn't a regression, but for a user to actually get this change in their hands I have to wait until 2.6.40-rc1, and only once it's in your tree can it maybe go to stable. That is 6 months later. That is, to pardon my French, fucking shithouse. I have to make judgement calls on a lot of patches on whether they are suitable or not to go upstream, and I try to think that the sooner the poor bastard stuck with this hardware can get the fix, the better it is for everyone, regression or not.
In this case, if you had a >2 monitor setup connected to an evergreen card and you tried to do 3D on the 3rd monitor, it would just hang the app in a loop forever. The fix needs 3 pieces: one in the kernel and two userspace fixes. I can have the userspace fixes on users' disks in under a week, literally. We release a new libdrm/-ati driver and distros will have it available in hours via rawhide or xorg-edgers in Ubuntu. Now under kernel rules you want me to hold it up for 6 months, just because it was a few days late for the merge window? Why 6 months? Because a distro won't ship it until 2.6.40 is released.
Another example is most of Marek's patches where he enables some userspace feature by allowing the kernel to accept new commands to send to the GPU. Again this is to avoid a 6 month window where nobody can use that feature of the 3D driver that is on their disk until they get a kernel upgrade. Despite what you have said before and obviously think, it's much easier to get users to update userspace than kernels in the real world.
This is why I often put things that aren't strict regression fixes in after -rc1 and accept the same from Intel and nouveau. I draw the line at things like performance enhancements, and I should be more strict on some of the crap that gets past from Intel, but I make a lot more judgement calls on these things and I often make them wrong. Still, I'd rather be making them than just being an ass to people who are stuck in VESA mode and can't suspend/resume because their GPU just shows a black screen on startup on new hardware, or who can't get acceleration support for 4 months.
I'm also aware we never get enough testing coverage before stuff hits your tree; we'd need thousands of testers to run drm-next and we just don't have that variation. So yes, when new features hit -rc1 with the drm they nearly always cause regressions; it's just not possible to test this stuff on every GPU/monitor/BIOS combination in existence before we give it to you, that just isn't happening. Radeon pageflipping, for example, caused machines to completely hang and I didn't find out until -rc7 due to lack of testing coverage.
I'm seriously contemplating going back to out-of-tree drivers so we can actually get test coverage before you get things, however that comes with its own set of completely insane problems.
It's not like I'm not aware of the problems here; I'm very aware. I'm just clueless on how to provide actual valuable DRM code to users in anything close to a timely manner; people buy new graphics cards quicker than I can get code into the kernel.
Dave.
On Fri, 2011-03-25 at 17:21 +1000, Dave Airlie wrote:
From my POV the real failure here was that the change made it to *any* tree while there were outstanding review issues from when it was initially discussed a few weeks earlier. Then when the change was submitted — more or less unchanged — I was on my birthday weekend enjoying some time away from computers, and when I had caught up with things, it was already in drm-next.
Actually, the hangs could be fixed in the X driver alone, but the author seems uninterested in contemplating that. Maybe because he seems to think it's easier to get the kernel fix to users, but I'm with you that it's quite clearly the opposite.
That said, I agree with your analysis in general, but not in this particular case.
2011/3/25 Michel Dänzer michel@daenzer.net:
That's the problem really: I read all the discussion and there wasn't much that seemed bad. I think the problem with your suggestions was that there was a lot of latitude to disagree with them, and I read the comments and disagreed with them as well. And it fixed the problem, so I decided it should be pushed, or we'd end up waiting another 6 months to fix it for the people it actually affects, which isn't the message I'd like to send to people who get off their arses and fix our fuckups.
I can't see how you can actually fix the hangs in userspace. You can hack around the hangs so the user just gets less useful behaviour and you end up maintaining the userspace hack forever, or you can fix the insufficient kernel interface and get it out quickly. I'm not really into the whole "put a userspace hack in and maintain it for years" thing either, and I'd rather nobody else ever decided it was a good idea.
Dave.
On Fri, 2011-03-25 at 17:51 +1000, Dave Airlie wrote:
The fuckup is userspace trying to use, for the third and higher CRTCs, a kernel interface which never pretended to work for more than 2 CRTCs.
By not calling the kernel interface when you know it won't do what you want.
Userspace has to check for the presence of the fixed kernel interface anyway. In the absence of it, the currently proposed fix just keeps the same old broken behaviour, risking hangs in some circumstances, when it could avoid them with possibly not too much effort.
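(As a rough illustration of the kind of check meant here; the capability name and value below are assumptions for the sketch and should be checked against the headers that actually shipped:)

#include <stdint.h>
#include <xf86drm.h>

#ifndef DRM_CAP_VBLANK_HIGH_CRTC
#define DRM_CAP_VBLANK_HIGH_CRTC 0x2	/* assumed name/value for this sketch */
#endif

/* Returns non-zero if the kernel advertises vblank waits on CRTCs > 1. */
static int kernel_supports_high_crtc_vblank(int fd)
{
	uint64_t value = 0;

	/* Older kernels return an error for capabilities they don't know. */
	if (drmGetCap(fd, DRM_CAP_VBLANK_HIGH_CRTC, &value) != 0)
		return 0;
	return value != 0;
}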
This thread has turned into much more than its original scope, and I had hoped to stay out of the "fight" (especially after the vocabulary got very "liberal"). Yet my code (as trivial as it is) has sparked up some old issues and seems to be referred to over and over again, so I want to set a few technical points straight:
* There is no way for an application to see VBLANK events without crossing into the kernel. VBLANKs are interrupts from the hardware and only the kernel knows about them. Any userspace-only fix would be an ugly hack (as Dave pointed out); maybe the application would look like it's not stuck, but it would not be doing the right thing no matter what you do in user space. So if I was fixing this, I figured I would do it right (a sketch of the userspace side is after these points). If anyone knows better than me and can technically prove (in code, not in words) that an application can synchronize to (a correct) VBLANK without calling the kernel, be my guest: submit your code and evict mine. I have no problem with that, but if something breaks then, it *will* be a regression.
* Most comments about the DDX pertained to the old-kernel/new-userland situation, which for most users using distros won't happen, because distros will make sure they have consistent packages. Still, even that situation was taken care of in the sense that it does not cause bogus errors (BTW, that comment was made, was legitimate, and I took care of it) nor any other "damage" to the kernel. Discussing whether visual behavior in this case (a case that is actually an installation/dependency problem) should be the same as in old versions, or still broken but masked out in appearance, is a waste of time and bandwidth.
* Saying that "kernel was not supposed to work with more two CRTCs" is just plain wrong (and probably based on superficial understanding of the kernel code). I spent considerable time analyzing the kernel from the driver up and the userland from the application down and found that *everything* in the kernel is capable of supporting up to 32 CRTCs and so is in the userland. It's only that one ioctl was stopping the information flow between the userland and the kernel so I fixed it. That may have been the legacy from old days when kernel didn't support multiple CRTCs, but now it does. Also, there is now hardware with multiple CRTCs, so kernel (and associated ioctls) should better support it.
* My fix alone does not justify introducing a new ioctl, and nobody can convince me that it does. In fact, being too liberal about introducing new ioctls is just bad programming. Recognizing that the existing ioctl has other shortcomings, coming up with a totally new ioctl with many other features, and then adjusting the userland to use it is a long-term project. My fix was a short-term thing to get things that are broken working.
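(A minimal sketch of the first point above: waiting for a vblank on a third or later CRTC has to go through the kernel's wait-vblank ioctl. The SKETCH_* shift/mask values here are illustrative assumptions and should be checked against the merged headers; drmWaitVBlank and drmVBlank come from libdrm.)

#include <stdio.h>
#include <xf86drm.h>

#define SKETCH_VBLANK_HIGH_CRTC_SHIFT 1
#define SKETCH_VBLANK_HIGH_CRTC_MASK  0x0000003e

/* Ask the kernel for the next vblank on the given CRTC and report it. */
static int wait_vblank_on_crtc(int fd, unsigned int crtc)
{
	drmVBlank vbl;

	vbl.request.type = (drmVBlankSeqType)
		(DRM_VBLANK_RELATIVE |
		 ((crtc << SKETCH_VBLANK_HIGH_CRTC_SHIFT) &
		  SKETCH_VBLANK_HIGH_CRTC_MASK));
	vbl.request.sequence = 1;	/* next vblank */
	vbl.request.signal = 0;

	if (drmWaitVBlank(fd, &vbl) != 0)
		return -1;

	printf("CRTC %u is at vblank sequence %u\n", crtc, vbl.reply.sequence);
	return 0;
}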
And just a few non-technical things in conclusion, along with some of my own thoughts and views. Although I haven't mentioned any names so far, everyone smart enough reading this will know that most of my responses are a) directed to Michel and b) just a repeat of what I have already said on the list (in response to his repeat of what was also already said). Let me just say that I value everyone's comments and I respect the knowledge, work and time of everyone in this group. However, in every development model (including open source) there is a point after which you have to move on (even if the proposed thing is not perfect), and there is someone whose word should be final.
My understanding is that for the DRM, the final say comes from Dave. So, fine, there were imperfections in my proposed modification to the ioctl, and there was one comment that I (mistakenly) overlooked and that Michel found utterly important (although I still think that it wasn't that much of a big deal), but if Dave accepted the original change, that implicitly meant that he took into account the discussion, follow-up modifications and rebuttals, and that the issue should be behind us. Continuing to insist after that point is just plain destructive (and could be part of the reason why DRM, besides the lack of resources, is behind closed-source drivers).
And there is another point that Dave made that I fully support. To all of you, I am a newcomer, and a plain garden-variety user who, instead of whining on Bugzilla and hoping that some hacker out there will show some "mercy", actually stepped up to fix the problem (and to be allowed to do that, I had to convince a few higher-ups that this would actually be in the interest of those who write my paycheck). Some of you have shown appreciation for that (thanks!), but some of you have seen it as an opportunity to publicly prove that you are smarter than me (which I frankly don't care about, as long as it does not impact the progress of the development, but in this case it did).
I may be new to the Linux/DRM "club", but I am not new to large system development (including a wide variety of kernels and other very complex subsystems). Now, if I can get my own (very reasonable, technically correct, generally well written, and useful for the community) fixes into open source without sparking major fights, then in my day job I have a strong case to present to a bunch of MBAs in three-piece suits that we should be basing more of our products on open-source code. That alone, in the long run, helps the community, which is, I presume, a common interest of all of us.
'nuff said
-- Ilija
[ Removing Linus from CC, I doubt he's that interested in this anymore ]
On Fri, 2011-03-25 at 09:52 -0500, Ilija Hadzic wrote:
It's not possible to synchronize correctly in this case, that's the whole point. When userspace calls the ioctl for a different CRTC than the one it's really interested in, that's (when it doesn't hang) essentially making up a sequence number which doesn't necessarily have any direct meaning for the CRTC userspace wants to synchronize to. I suggested several other possibilities for making up sequence numbers in that case without risking[0] a hang. But even if you don't want to go into that, which is understandable to some degree, there's always the simple option of not exposing this functionality to the X server at all in this case.
[0] Or rather deliberately triggering, as userspace should know when CRTC 1 is disabled.
Will they? As Dave pointed out, a new xf86-video-ati release with the fix could be out at any time, but the kernel change could still take months to make it into a distro kernel.
That's not what I said.
There's no question that the kernel change will be necessary for all the functionality to work perfectly on all CRTCs. But the hangs are mostly a userspace issue and can be fixed there alone.
That's your point of view. Mine is that your (certainly appreciated) fixes had flaws that could have been easily and quickly resolved if you had so chosen. Instead, you wasted a lot of everybody's time pushing back and justifying the flaws rather than cooperating on a better solution.
You won't have any trouble finding plenty of examples of newcomers having a better experience, as most of them thankfully don't come in with such a bad attitude and conduct.
2011/3/25 Ilija Hadzic ihadzic@research.bell-labs.com:
I would like to think of myself as someone who knows enough about the DRM; I have written something like 4 different versions of radeon KMS before converging on the one actually in the kernel. Yes, you're right that most of the DRM core that deals with CRTCs can handle more than 2; this is because most of it is new DRM stuff introduced with KMS. I was taking the vblank stuff as an example because it's one of the oldest pieces of the DRM, designed long ago, way before KMS, at a time when GPUs with 2 CRTCs were only the high-end ones.
Cheers, Jerome
On 25 March 2011 04:21, Dave Airlie airlied@gmail.com wrote:
Not that I have much credibility to be talking here, but as a nouveau user with a problem setup where I cannot seem to find anyone to help [1], I wonder if getting support for the absolute latest GPUs into mainline perhaps should not be the highest priority. Also, have you considered the possibility that taking longer for code to hit Linus' tree would have the effect of increasing your tester base for drm-next? If I had a new video card that was giving me a blank screen, I would certainly be ready to try a testing kernel tree if I thought it would get me some level of functionality.
On Fri, Mar 25, 2011 at 3:21 AM, Dave Airlie airlied@gmail.com wrote:
My feeling on that is that maybe too much code sharing across GPUs of different generations hurts more than it helps. I have got the feeling that some of the newer Intel ASICs share some bits with older ones, and that Intel is focusing its attention on the newer ones and obviously doesn't have the time or resources to check that the changes they make don't impact older hardware (I don't think such testing is doable without massive investment, which is very unlikely to happen given the size of the Linux market).
Cheers, Jerome
On Fri, 25 Mar 2011 10:04:10 -0400 Jerome Glisse j.glisse@gmail.com wrote:
God yes. The mantra of "code sharing is always good" sounds nice, but in my experience it's very often not true, especially when you're still trying to get things to work.
We've been slowly splitting code out to make it more robust against changes to different generations, but we still have a little ways to go. And of course, such splitting introduces problems in itself.
But as a policy going forward, I'll advocate splitting code whenever a new generation requires something slightly different than an old one. As an example, rather than adding a few conditionals to our FDI training code to support newer chips, I've split it into separate functions entirely, leaving the old one alone. If, after a while, we've decided that they really are mostly the same and we have things working well, we can consider merging them again, but only after extensive testing across generations.