Mainboards with 4 or even up to 7 PCIe x8 or x16 slots are available today, and graphics cards with 3 to 5 outputs are also common.
Such a combination would make a very attractive classroom server for up to 25 pupils/students: cheaper than any other solution, much easier to administer than a network of single-seat PCs, and much faster w.r.t. graphics than a server with thin clients. We would be highly interested in such a solution (I'm a professor of computer science at a technical high school and university of applied sciences), and I'm currently investigating the possibilities for that.
Unfortunately, only Xephyr can currently be used for that, and it doesn't support hardware acceleration (which e.g. makes Gnome 3 unhappy and video playback slow).
There once was a patch to allow one X server per graphics card output: http://www.facebook.com/note.php?note_id=110388492307351 http://people.freedesktop.org/~airlied/multiseat/
However, this patch never made it to mainline and no longer applies cleanly.
Now the question is:
* Are there any plans or activities to port these patches to recent kernels, or to include them (or something similar) in the official Linux kernel any time soon?
* How much work (and how complex/difficult) would it be to port these patches to the current Linux kernel? Could that be assigned to a student?
* Is there anyone who is experienced in that area of the kernel and would implement that (and push it to the official kernel), perhaps with some financial support (how much would that cost)?
I'd just need the base functionality:
* Static configuration of render devices (via kernel boot parameter?) or even just one hardcoded device / X server per graphics card output, no need to assign devices to output combinations dynamically.
(perhaps being able to assign symbolic links to the device nodes based on card and output number with udev would be nice; see the sketch after this list)
* No need to have any gdm / consolekit support for that.
* ATI radeon only.
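To illustrate the udev idea above: assuming a patched kernel exposed one DRM node per output under names like card0-DP-1 (these node names are invented, mainline has no such nodes), a rule along these lines could give each seat a stable symlink:

# /etc/udev/rules.d/99-multiseat.rules (hypothetical node names)
SUBSYSTEM=="drm", KERNEL=="card0-DP-1", SYMLINK+="dri/seat1"
SUBSYSTEM=="drm", KERNEL=="card0-DP-2", SYMLINK+="dri/seat2"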
Background: The competition is offering exactly that to us: The beast is called "Microsoft MultiPoint Server". It is a product created especially for the educational market (and seems to be quite successful there), but also offered for small offices.
Technically, it is a multi-user Windows server supporting up to 20 local users, each user being assigned one port of a local graphics card (or one USB graphics card) and a USB keyboard and mouse.
It explicitly provides one user per graphics output port, not just one per card, which is exactly what I would like to have for Linux.
So I'd need some Linux equivalent to compete with that w.r.t. the number of seats per PC/server in order to get a Linux classroom installed at our school instead of Windows...
Many thanks in advance for your help!
Nothing in my plans. I did a proof of concept to show how someone should do things; I'd sort of hoped some of the people who dedicate time to fixing multiseat and care about it would pick things up and run with them. It really does need a proper ioctl or configfs configuration interface, since any static setup will invariably be wrong for 50% of people, and any kernel command-line interface will invariably be ugly and complex in order to solve the problem.
I could envisage some sort of configfs where you echo 5 > num_seats then echo VGA-1 > seat1, etc.
Then you could just write some scripts per machine if you don't want gdm/consolekit support.
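Purely as an illustration of that idea (no such configfs interface exists yet; the directory layout and file names here are invented), such a per-machine script might end up looking like:

mount -t configfs none /sys/kernel/config
cd /sys/kernel/config/drm/card0
echo 2 > num_seats            # split card0 into two drm devices
echo DVI-0 > seat0/outputs    # first X server owns DVI-0
echo DVI-1 > seat1/outputs    # second X server owns DVI-1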
As I said, porting it isn't the problem; doing it correctly is the problem. A decent student could probably pick it up and run with it; it really depends on the person. I can provide some guidance, but they'd have to be fairly self-starting.
Not me, my interest in multi-seat has taken a diversion into fixing something else, I might get back to it in a year or so.
Dave.
On 2011-07-31 22:09, Dave Airlie wrote:
Is there any mailing list for the technical aspects of multiseat linux? Who and where are the relevant developers?
Klaus.
On 2011-07-31 22:09, Dave Airlie wrote:
Hmmm, what about the opposite approach? To me, it sounds simpler and more logical if the kernel always creates one device node per output (or maybe dynamically per connected output), without any need for configuration or device assignment.
If a single X server wants to control several outputs, libdrm should open the corresponding number of devices in parallel. We already have both static X configuration and xrandr for configuring that, and if the devices allow only a single open, this would also arbitrate outputs between servers (a server can't open an output which is already taken).
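Just to sketch what I have in mind (the per-output device names are invented, and the single-open/EBUSY behaviour is an assumption about how such a kernel interface could arbitrate, not existing behaviour):

/* grab the first free output node of card0 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

static int grab_free_output(void)
{
    char path[64];
    for (int i = 0; i < 8; i++) {
        snprintf(path, sizeof(path), "/dev/dri/card0-output%d", i); /* invented name */
        int fd = open(path, O_RDWR);
        if (fd >= 0)
            return fd;            /* the output is ours until we close it */
        if (errno == EBUSY)
            continue;             /* already taken by another server */
        break;                    /* no such node: no more outputs on this card */
    }
    return -1;
}

int main(void)
{
    printf("got fd %d\n", grab_free_output());
    return 0;
}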
Klaus.
It just doesn't fit in with how the drm device nodes work; it might seem simpler in the kernel, but I think it would just complicate userspace.
I haven't decided it couldn't work, but I'd need a working implementation to even consider merging it, whereas I've already done a demo of how I think it should work, which means I don't have to revalidate things if someone were to complete it.
Dave.
On Mon, 1 Aug 2011 20:47:42 +0100 Dave Airlie airlied@gmail.com wrote:
It also doesn't fit some cases of reality (e.g. the USB DisplayLink stuff), where the output and the GPU are effectively decoupled.
There are also some interesting security issues with a lot of GPUs where you'd be very very hard pushed to stop one task spying on the display of another as there isn't much in the way of MMU contexts on the GPU side.
Alan
On 2011-08-01 22:22, Alan Cox wrote:
But I believe this is a problem of all approaches which provide multiple hardware-accelerated (or Xv-enabled) seats on a single GPU, whether based on multiple DRM devices, on Xephyr or Xnest with some kind of OpenGL or DRI passthrough, or on Wayland: if one has direct access to the graphics engine, one can also access any video memory one wants.
Hence, that's no argument against multiple DRM devices on a single card, because the other solutions suffer from the same problem.
In the long term, it needs to be fixed, but in a classroom environment, that's not my primary concern (and I believe 90 % of all multiseat installations will be classroom or home environments).
Klaus.
Not always. It's a bit more complicated than that. Some hardware supports write-only memory spaces, some hardware supports contexts in the GTT. On other cards you need some kind of security model and verifier to handle this. I don't think reloading the GTT is enough on most of the cards, because the display scanout for all the framebuffers is needed all the time.
Hence, that's no argument against multiple DRM devices on a single card, because the other solutions suffer from the same problem.
DisplayLink USB devices don't have this problem - they have others. Ditto, multiple graphics cards don't.
Certainly with such security limits it won't catch on in many places, and I suspect it wouldn't be much use in many classroom environments, students being what they are!
Alan
2011/8/2 Alan Cox alan@lxorguk.ukuu.org.uk:
I believe you are responding to Klaus Kusche here?
Klaus: please, CC mailing list so everyone can see your messages.
On 2011-08-02 12:28, Rafał Miłecki wrote:
My mail Alan cited above was CC'd to the list. No idea where it got eaten. Is there some spam filter filtering list postings?
On Die, 2011-08-02 at 13:10 +0200, Prof. Dr. Klaus Kusche wrote:
There's a moderation queue for posts from addresses that aren't subscribed to the list.
On 2011-08-02 12:26, Alan Cox wrote:
Multiple single-seat graphics cards won't allow me to connect 15-25 users to one PC / server, and the main goal is to have a whole classroom on a *single* machine: No network and file sharing to configure, no remote software distribution and updates to manage, no remote authentication, no boot from net, ...
And chances are that the machine has to be dual boot with Microsoft Multipoint Server.
15-25 USB graphics devices might work technically, but they would be much slower than the unaccelerated solution with multi-output graphics cards and Xephyrs, perhaps even slower than thin clients (given that they are all USB 2 and the total USB 2 bandwidth is quite limited).
Such configurations will be rare and will have different hardware and software. I doubt that someone will publish a ready-to-run exploit which works out of the box on all of them. Hence, exploiting this will require some technical intelligence. I don't believe that pupils and students in the basic courses have that (and we're not talking about graduate courses).
And what would they get? Their neighbour's screen contents. They could just as easily look at their neighbour's screen directly. And as long as I do not run individual VMs with strict firewalling on the host OS, they will find other ways to exchange information (or sabotage each other) anyway, by IPC or whatever...
I've been told that it is almost impossible to completely block information exchange between pupils on Microsoft Multipoint Server, even at a naive level, and nevertheless that product is selling well...
So the security aspect is something to keep in mind, but not something which excludes a shared-graphics-card solution a priori.
Klaus.
On Mon, Aug 1, 2011 at 3:41 PM, Prof. Dr. Klaus Kusche klaus.kusche@computerix.info wrote:
You almost always have more connectors than display controllers (e.g., you might have DisplayPort, S-Video, DVI-I and VGA, but only two display controllers, so you can only use two of the connectors at any time). Also, certain combinations of connectors are not possible depending on the hardware (e.g., the S-Video and the VGA port may share the same DAC, so you can only use one or the other at the same time).
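This mismatch is easy to see from userspace with plain libdrm/KMS calls; a minimal sketch (the device path may differ on your system):

/* count display controllers (crtcs) vs. physical connectors on card0 */
#include <stdio.h>
#include <fcntl.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0)
        return 1;
    drmModeRes *res = drmModeGetResources(fd);
    if (!res)
        return 1;
    printf("%d crtcs, %d encoders, %d connectors\n",
           res->count_crtcs, res->count_encoders, res->count_connectors);
    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *c = drmModeGetConnector(fd, res->connectors[i]);
        if (!c)
            continue;
        printf("connector %u: %sconnected\n", c->connector_id,
               c->connection == DRM_MODE_CONNECTED ? "" : "not ");
        drmModeFreeConnector(c);
    }
    drmModeFreeResources(res);
    return 0;
}

(Build with something like: gcc kmsinfo.c $(pkg-config --cflags --libs libdrm))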
Alex
On 2011-08-02 14:59, Alex Deucher wrote:
Hmmm, for my purposes I was only thinking about new, current hardware, not about previous-generation cards, and only about digital outputs:
* The professional, high-quality solution would be ATI's FirePro 2460: 4 Mini DisplayPorts, all active at the same time, single slot (passive cooling, < 20 W, so that's a great energy saver, too, competing with thin and zero clients, and it's silent and long-lived)
* The XFX HD-677X-Z5F3 most likely offers the most ports per Euro and per slot: 5 Mini DisplayPorts, all active at the same time, single slot, for less than 100 Euro
(this would result in 16/20 seats with any quad-crossfire mainboard and 28/35 seats with some server mainboards if the BIOS is able to assign addresses to 7 graphics cards)
Even the low-cost 6450 supports 3 and the 6570 supports 4 independent simultaneous outputs, so any ATI 6xxx card can drive all its outputs at the same time (and I believe that was also true for the ATI 5xxx series). However, cards with 3 or 4 digital outputs are hard to find in that price range... (the XFX HD6570 is one of them)
But you're correct, my suggestion above needs to be refined: One DRI device per display controller.
Klaus.
On Tue, Aug 2, 2011 at 10:22 AM, Prof. Dr. Klaus Kusche klaus.kusche@computerix.info wrote:
Even then it gets a little tricky. AMD cards are fairly flexible, but some other cards may have restrictions about which encoders can be driven by which display controllers. Then how do you decide which display controller gets assigned to which connector(s)? You also need to factor in things like memory bandwidth. E.g., a low end card may not be able to drive four huge displays properly, but can drive four smaller displays.
Alex
On 2011-08-02 16:34, Alex Deucher wrote:
What is your suggestion to "do things right"? How would you assign DRI device nodes to multiple monitors? Do you have better suggestions for building multi-seat systems beyond 4 seats with 4 single-output cards?
How does xrandr currently solve those problems? It might also "see" more outputs than there are display controllers, it has the same job of assigning connectors to display controllers, and it also has the problem that setting all outputs to their maximum resolution might cause the card to run out of memory bandwidth. So either the logic needed is already there, or the problems are not multiseat-specific, but affect today's multi-screen environments in general.
I think there is no need to do better than xrandr currently does. In fact, that's the multiseat solution we have today: Configure one X server (most likely using xrandr) with one huge display spanning all the outputs and monitors connected to one card, and start one Xephyr per monitor and user within that display. This just lacks any acceleration and Xv.
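Roughly (output names, resolutions and display numbers are just example values):

# host X server: span the card's two monitors side by side
xrandr --output DVI-0 --mode 1280x1024 --pos 0x0 \
       --output DVI-1 --mode 1280x1024 --pos 1280x0
# one unaccelerated nested server per monitor/seat, positioned over its monitor
Xephyr :1 -screen 1280x1024 &
Xephyr :2 -screen 1280x1024 &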
(the fact that xrandr already seems to handle most of this was one of the reasons why I suggested that the kernel should just export every output the hardware offers to userland: I believed that the userland already knows how to allocate and configure outputs, and the only thing missing is the ability to access the same card from more than one X server or to assign the outputs of one card to two or more X servers)
I also believe and accept that there will be no solution supporting all graphics cards existing today and 10 years back. Only some cards offer KMS, only some cards offer 3D acceleration, some older cards don't even offer dual-screen support for one X server, only some cards will offer multi-seat support in future. If somebody wants to build a high-density shared-card multiseat system, he has to choose suitable hardware.
Microsoft Multipoint Server also depends on the card's driver being able to configure all the outputs simultaneously - some cards do, others don't.
Klaus.
On Tue, Aug 2, 2011 at 11:28 AM, Prof. Dr. Klaus Kusche klaus.kusche@computerix.info wrote:
Some drivers can already do this (the radeon driver at least); google for "zaphod mode". You basically start one instance of the driver for each display controller and then assign which randr outputs are used by each instance of the driver. It already works and supports acceleration. The problem is that each instance shows up as an X protocol screen rather than a separate X server, so you'd have to fix the input side.
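For reference, a zaphod-style radeon configuration uses two Device sections with the same BusID, each limited to one output via the ZaphodHeads option (the BusID and output names here are just examples; matching Screen/Monitor/ServerLayout sections are omitted):

Section "Device"
    Identifier "Seat0-GPU"
    Driver     "radeon"
    BusID      "PCI:1:0:0"
    Screen     0
    Option     "ZaphodHeads" "DVI-0"
EndSection

Section "Device"
    Identifier "Seat1-GPU"
    Driver     "radeon"
    BusID      "PCI:1:0:0"
    Screen     1
    Option     "ZaphodHeads" "DVI-1"
EndSection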
Even if you only support KMS supported hardware which seems reasonable to me, you still have a lot of cards out there with interesting display configurations. We still make cards with DVI and VGA ports on them or more connectors than display controllers. You don't really want to go through the whole effort if it only works on a handful of special cards. It wouldn't take that much more work to come up with something suitable for the majority of hardware that is out there. In most cases, it will probably be a custom configuration for each person, so as long as the right knobs are there, you can configure something that works for your particular system.
Alex
On 2011-08-02 17:48, Alex Deucher wrote:
Hmmmm...
* Zaphod seems to have even fewer active users and less future/support than Xephyr, so putting work into it doesn't seem to be a future-proof investment.
* It would be of interest only if it were possible to configure two zaphod drivers assigned to two different outputs (but the same PCI ID!) in two *different* X servers, but I'm quite sure that's not supported...
Any idea what "something suitable" could be? What the missing "right knobs" are?
Back to the beginning of the discussion: The primary interest is not how to configure outputs. xrandr already does that, and that should be used, not duplicated. We only want what xrandr is able to do today (at most), not more. However, we want it for more than one X server.
The central question is: How do two or more X servers share access to a single graphics engine? The second question is: How do xrandr outputs get assigned to X servers such that each server gets exclusive access to its outputs? And the third item on the todo list is perhaps tightening security and server/user separation...
Airlied's prototype implementation was a working demonstration for the first item (just for radeon). His suggestion for the second question was purely kernel-based (using configfs). If I understood it correctly:
* For each card, first configure the number of DRM devices you want to have (one per X server).
* Then, assign xrandr outputs to these devices.
This way, each X server opening its render device should only "see" the outputs assigned to this device.
Is this agreed? Any alternatives?
Basically, I think multiseat configurations will be static in most cases - after all, a multiseat configuration usually has quite a cabling mess, with a fixed number of monitors, each having a keyboard and mouse, which are statically mapped to fixed evdev devices using udev. Re-cabling is an effort anyway, so editing a config file in this case would be acceptable (after all, most existing multi-Xephyr solutions are also statically configured). Hence several xorg.conf files selected with -config, or one large xorg.conf with several layouts selected by -layout, will suffice, each specifying input devices, a graphics card and an xrandr output, similar to zaphod mode.
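As a sketch of what I mean (assuming something zaphod-like were available per X server; all names here are placeholders):

Section "ServerLayout"
    Identifier  "seat1"
    Screen      "screen-seat1"
    InputDevice "kbd-seat1"   "CoreKeyboard"
    InputDevice "mouse-seat1" "CorePointer"
EndSection

Section "ServerLayout"
    Identifier  "seat2"
    Screen      "screen-seat2"
    InputDevice "kbd-seat2"   "CoreKeyboard"
    InputDevice "mouse-seat2" "CorePointer"
EndSection

The servers would then be started e.g. as "X :1 -layout seat1" and "X :2 -layout seat2", with each screen section referencing its own graphics device/output.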
So what would be needed to make that work?
If someone wants dynamic configuration without xorg.conf, I think the only thing needed is some bookkeeping in the kernel about which server is using which xrandr output:
* If a server is started without specific configuration, it just grabs the next available output (or all unused outputs on that card?).
* If a server activates an unused output, this output should be assigned to it exclusively until disabled.
* If a server tries to activate an output already in use by another server, it should get an error.
* If a server disables an output, this output becomes available to other servers.
What would be needed for that? Is the information about enabled and disabled outputs currently stored in the kernel or in userland?
Klaus.
On Tue, Aug 2, 2011 at 3:11 PM, Prof. Dr. Klaus Kusche klaus.kusche@computerix.info wrote:
Some way to assign a certain set of KMS objects (crtcs and encoders/connectors) to a particular client.
You don't really want xrandr. You want some way to isolate a group of KMS objects. You can write a client that uses the KMS ioctls to configure the display hardware. That's what the ddx does right now with kms drivers. I think you'd want to add some knobs (in sysfs or configfs maybe) that let you limit which KMS objects are exposed to a particular client.
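For example, a bare KMS client (no X at all) can do a modeset with just libdrm. A minimal sketch - it assumes a framebuffer fb_id was already created (e.g. a dumb buffer registered with drmModeAddFB, omitted here) and that the caller is DRM master:

/* put the first mode of the first connected connector onto the first crtc */
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int set_first_mode(int fd, uint32_t fb_id)
{
    drmModeRes *res = drmModeGetResources(fd);
    if (!res)
        return -1;
    if (res->count_crtcs < 1) {
        drmModeFreeResources(res);
        return -1;
    }
    int ret = -1;
    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *c = drmModeGetConnector(fd, res->connectors[i]);
        if (!c)
            continue;
        if (c->connection == DRM_MODE_CONNECTED && c->count_modes > 0) {
            uint32_t conn_id = c->connector_id;
            /* naive: just takes crtcs[0]; real code must match crtcs to
             * encoders via the connector's encoder list */
            ret = drmModeSetCrtc(fd, res->crtcs[0], fb_id, 0, 0,
                                 &conn_id, 1, &c->modes[0]);
            drmModeFreeConnector(c);
            break;
        }
        drmModeFreeConnector(c);
    }
    drmModeFreeResources(res);
    return ret;
}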
The central question is: How do two or more X servers share access to a single graphics engine?
This is already possible. For rendering the client only needs to allocate buffers via the drm memory management ioctls and then submit command buffers that act on those buffers. For things like direct rendering, the dri2 protocol provides the way for the ddx and 3d driver to share buffers via the drm.
Right now the ddx queries the drm using the KMS modesetting ioctls and gets a list of all the KMS display objects. The DDX then parses that list and provides a translation layer for xrandr.
And the third item on the todo list is perhaps tightening security and server/user separation...
The drm already provides this for the most part. Things could get tricky if we ever add support for sharing buffers between drms.
Seems like the best approach to me.
Some way to associate a limited set of KMS display objects with a particular drm device.
All of that information is stored in the kernel. You don't need X at all. The ddx basically just provides a translation layer between KMS and xrandr, provides acceleration for render and core X and sets up shared buffers for 3D rendering.
Alex
On 2011-08-03 19:51, Alex Deucher wrote:
From the user's / administrator's point of view, I'm still not convinced that we need to isolate KMS objects into groups explicitly and that we need another visible configuration interface at the kernel level (like a configfs interface) for that.
Multiseat already has too many places to configure:
* xorg.conf (for defining X servers and assigning input devices to them)
* xdm / gdm / kdm / consolekit config (to get things running)
* udev rules (for assigning nice input device names to specific USB IDs)
Please don't add another configuration step or interface if it can be avoided.
All the device and output assignment can easily be expressed in xorg.conf (as it is already done e.g. for zaphod), and that's also the place where it belongs: the information about which input devices, which PCI ID (or card device) and output, and which X server number belong to each seat should be kept together in *one* place. Based on that information, each X server could allocate the resources it was configured for - no need to do that separately.
Of course the kernel has to do some bookkeeping about which server has allocated which outputs: it must prevent a server from allocating a card or output that is already taken, and it must take care that each server only accesses the resources it has allocated.
But from the user's point of view, there is no need to explicitly isolate outputs or create separate device nodes for each output *before* starting the X servers: I think it's no problem that a server "sees" all outputs of its card (including those not belonging to it); only using or reconfiguring them must result in an error if they have already been taken by some other server.
Klaus.
Alex Deucher <alexdeucher@gmail.com> writes:
I think with MPX and pointer barriers some parts are already in place to do multiseat with zaphod. Though I don't know how well MPX and pointer barriers work together. Applications grabbing input might cause issues too.
Best regards, Chí-Thanh Christopher Nguyễn