On Thu, 2017-05-04 at 10:11 -0700, Eric Anholt wrote:
"Ong, Hean Loong" hean.loong.ong@intel.com writes:
On Wed, 2017-05-03 at 13:28 -0700, Eric Anholt wrote:
hean.loong.ong@intel.com writes:
From: Ong Hean Loong hean.loong.ong@intel.com
Hi,
The new Intel Arria10 SoC FPGA devkit has a Display Port IP component which requires a new driver. This is a virtual driver in which the FPGA hardware enables the Display Port based on the information and data provided from the DRM framebuffer by the OS. All information with regard to resolution and bits per pixel is pre-configured in the FPGA design, and this information is fed to the driver via the device tree as part of the hardware description.
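For illustration, the pre-configured video parameters could be exposed to the driver through a device-tree node along these lines (the compatible string and property names here are hypothetical placeholders, not the actual binding):

```
dp_framebuffer: display@ff200000 {
	/* hypothetical compatible string for the Display Port IP */
	compatible = "altr,dp-framebuffer";
	reg = <0xff200000 0x100>;
	/* resolution and bits per pixel are fixed by the FPGA design,
	 * so the driver only reads them, it never programs them */
	max-width = <1280>;
	max-height = <720>;
	bits-per-pixel = <32>;
};
```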
I started reviewing the code, but I want to make sure I understand what's going on:
This IP core isn't displaying contents from system memory on some sort of actual physical display, right? It's a core that takes some input video stream (not described in the DT or in this driver) and stores it to memory?
If by IP core you are referring to some form of GPU, then no: in this case we are using the Intel FPGA Display Port Framebuffer IP. It displays content streamed from the ARM/Linux system to the Display Port of a physical monitor.
Below a simple illustration of the system:
ARM/Linux --DMA--> Intel FPGA Display Port Framebuffer IP
                                 |
                                 | Physical Connection
                                 v
                            Display Port
The "DMA" in this diagram is the frame reader IP, right? The frame reader, as described in the spec, sounds approximately like a DRM plane, so if you have that in your system then that needs to be part of this DRM driver (otherwise you won't be putting the right things on the screen!).
Would drm_simple_display_pipe_init be able to handle this? It seems to be displaying the proper images on screen, based on my current changes. There were recommendations to use drm_simple_display_pipe_init instead of creating the CRTC and planes myself.
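For reference, here is a minimal sketch of wiring up drm_simple_display_pipe_init where the frame reader is programmed from the pipe's .update hook. The ivip_* names and the format list are illustrative assumptions, not the actual driver code, and the signature shown (with the format_modifiers argument) matches recent kernels:

```c
/* Sketch only: everything except the DRM core API is a made-up name. */
#include <drm/drm_simple_kms_helper.h>
#include <drm/drm_fourcc.h>

static const u32 ivip_formats[] = {
	/* assumed to match the bpp pre-configured in the FPGA design */
	DRM_FORMAT_XRGB8888,
};

static void ivip_pipe_update(struct drm_simple_display_pipe *pipe,
			     struct drm_plane_state *old_state)
{
	/*
	 * This is where the frame reader IP would be pointed at the new
	 * framebuffer's DMA address, so that the plane contents actually
	 * reach the screen on the next frame.
	 */
}

static const struct drm_simple_display_pipe_funcs ivip_pipe_funcs = {
	.update = ivip_pipe_update,
};

static int ivip_pipe_setup(struct drm_device *drm,
			   struct drm_simple_display_pipe *pipe,
			   struct drm_connector *connector)
{
	/* One CRTC + one primary plane + one encoder, created for us. */
	return drm_simple_display_pipe_init(drm, pipe, &ivip_pipe_funcs,
					    ivip_formats,
					    ARRAY_SIZE(ivip_formats),
					    NULL /* modifiers */,
					    connector);
}
```

The key point Eric raises still applies: if the frame reader acts as the DRM plane, its register programming belongs in the .update (and possibly .enable/.disable) callbacks of this pipe, not outside the DRM driver.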