I've done benchmarks comparing the proprietary drivers and Mesa, and Mesa seems to be up to 200x slower compiling the same shader. Since I understand that optimizing this part of the code may take months or more, I have thought of solving it this way:

Upon calling glLinkProgram, an unoptimized version of the shader (which compiles much, much faster) is uploaded to the GPU. A separate thread is then launched to optimize the shader, and as soon as it is done, the next call to glUseProgram uploads the optimized version in place of the unoptimized one.

This would solve many of the performance issues and temporary freezes in games that load/unload content while running, without reducing performance once the background optimization is done.
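A minimal sketch of how such a deferred-optimization path might look inside a driver, assuming a worker thread and a swap on the next bind; the structure and helpers here (shader_program, fast_unoptimized_compile, slow_optimizing_compile, upload_to_gpu) are hypothetical placeholders, not actual Mesa code:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical per-program state; not an actual Mesa structure. */
    struct shader_program {
       void *unoptimized_binary;   /* result of the fast, naive compile    */
       void *optimized_binary;     /* filled in later by the worker thread */
       bool optimized_ready;
       pthread_mutex_t lock;
       pthread_t worker;
    };

    /* Placeholder hooks standing in for the real compile/upload paths. */
    void *fast_unoptimized_compile(struct shader_program *prog);
    void *slow_optimizing_compile(struct shader_program *prog);
    void upload_to_gpu(void *binary);

    /* Worker: runs the slow optimizing compile off the application thread. */
    static void *optimize_worker(void *arg)
    {
       struct shader_program *prog = arg;
       void *opt = slow_optimizing_compile(prog);

       pthread_mutex_lock(&prog->lock);
       prog->optimized_binary = opt;
       prog->optimized_ready = true;
       pthread_mutex_unlock(&prog->lock);
       return NULL;
    }

    /* glLinkProgram path: upload the quick build, then kick off the worker. */
    void link_program(struct shader_program *prog)
    {
       pthread_mutex_init(&prog->lock, NULL);
       prog->optimized_ready = false;
       prog->optimized_binary = NULL;

       prog->unoptimized_binary = fast_unoptimized_compile(prog);
       upload_to_gpu(prog->unoptimized_binary);

       pthread_create(&prog->worker, NULL, optimize_worker, prog);
    }

    /* glUseProgram path: swap in the optimized binary once it is available. */
    void use_program(struct shader_program *prog)
    {
       pthread_mutex_lock(&prog->lock);
       if (prog->optimized_ready && prog->optimized_binary) {
          upload_to_gpu(prog->optimized_binary);
          prog->optimized_binary = NULL;   /* swap only once */
       }
       pthread_mutex_unlock(&prog->lock);
    }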
Tiziano Bacocco <tiziano@tizbac.dyndns.org> writes:
> I've done benchmarks comparing the proprietary drivers and Mesa, and Mesa
> seems to be up to 200x slower compiling the same shader. Since I understand
> that optimizing this part of the code may take months or more, I have
> thought of solving it this way:
>
> Upon calling glLinkProgram, an unoptimized version of the shader (which
> compiles much, much faster) is uploaded to the GPU. A separate thread is
> then launched to optimize the shader, and as soon as it is done, the next
> call to glUseProgram uploads the optimized version in place of the
> unoptimized one.
>
> This would solve many of the performance issues and temporary freezes in
> games that load/unload content while running, without reducing performance
> once the background optimization is done.
Yeah, we've thought of this, and it would take some work. Sounds like a fun project for someone.
Presumably there needs to be an API-level mechanism to wait for the background optimization to finish, so that piglit etc. can validate the behavior of the optimized shader?
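One possible shape for such a hook, sketched against the hypothetical structures above; the name wait_for_optimization and the idea of exposing it only for testing are assumptions, not an existing Mesa or GL interface:

    /* Test-only helper: block until the background optimization has finished,
     * so a test suite like piglit could re-run its checks against the
     * optimized shader.  Reuses the hypothetical shader_program from the
     * sketch above. */
    void wait_for_optimization(struct shader_program *prog)
    {
       /* Joining the worker guarantees optimized_binary has been published. */
       pthread_join(prog->worker, NULL);
    }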
-- Chris
On Tue, Jul 10, 2012 at 5:17 AM, Eric Anholt <eric@anholt.net> wrote:
> Tiziano Bacocco <tiziano@tizbac.dyndns.org> writes:
>> I've done benchmarks comparing the proprietary drivers and Mesa, and Mesa
>> seems to be up to 200x slower compiling the same shader. Since I
>> understand that optimizing this part of the code may take months or more,
>> I have thought of solving it this way:
>>
>> Upon calling glLinkProgram, an unoptimized version of the shader (which
>> compiles much, much faster) is uploaded to the GPU. A separate thread is
>> then launched to optimize the shader, and as soon as it is done, the next
>> call to glUseProgram uploads the optimized version in place of the
>> unoptimized one.
>>
>> This would solve many of the performance issues and temporary freezes in
>> games that load/unload content while running, without reducing performance
>> once the background optimization is done.
>
> Yeah, we've thought of this, and it would take some work. Sounds like a fun
> project for someone.
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel