Thanks for sharing! Indeed, one of the motivators for our approach is that it can be pretty difficult to compare images across GPUs, since floating-point issues mean that for complex shaders one can get quite different images. Sometimes, if the shader takes a “time” parameter, we find that shaders that look similar when animated can be very different frame-by-frame.
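To give a concrete flavour of what a tolerance-based comparison looks like, here is a minimal Python sketch (numpy + Pillow; the thresholds are illustrative placeholders, not values from our actual tooling):

```python
import numpy as np
from PIL import Image

def images_roughly_equal(path_a, path_b, per_channel_tol=8, max_bad_fraction=0.01):
    """Tolerance-based comparison: allow small per-channel differences,
    and tolerate a small fraction of outlier pixels."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    if a.shape != b.shape:
        return False
    # A pixel counts as "bad" if any channel differs by more than the tolerance.
    bad = (np.abs(a - b) > per_channel_tol).any(axis=-1)
    return bad.mean() <= max_bad_fraction
```

Allowing a small fraction of outlier pixels matters because, in our experience, floating-point drift tends to concentrate along edges and other discontinuities rather than spreading evenly across the image.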
Cool — thanks for trying these out!
I suppose it would be quite a coincidence if the exact issues that affected Img GPU drivers also affected Intel Mesa drivers — if they did, then I would speculate that the issue lies in some common infrastructure that both drivers use (LLVM, perhaps).
Sorry for the very slow reply to this one!
Yes, we do plan to perform Mesa testing. Not sure when, but hopefully soon!
GLFuzz isn’t available publicly yet. If you’re interested in using it then please get in touch with me privately.
Indeed – minimising bugs you find “in the wild” (rather than using a fuzzer) can be such a pain and so time-consuming. We have a pile of OpenCL bugs to report that we found during a recent project, but creating a minimal repro would likely be a day of work per bug.
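(For anyone facing the same grind: automated reduction helps enormously. Below is a toy, greedy line-removal reducer in Python — `is_still_interesting` is a hypothetical callback that re-runs the compiler or driver on the candidate and checks the bug still reproduces. Real reducers are smarter about program structure, but the overall shape is the same.)

```python
def reduce_test_case(lines, is_still_interesting):
    """Greedy reducer: repeatedly try dropping chunks of lines,
    keeping any removal after which the bug still reproduces."""
    chunk = len(lines) // 2
    while chunk >= 1:
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + chunk:]
            if candidate and is_still_interesting(candidate):
                lines = candidate  # removal succeeded; retry at the same index
            else:
                i += chunk  # removal failed; move past this chunk
        chunk //= 2  # try finer-grained removals
    return lines
```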
Thanks for reporting on this experiment! I think the issue is that the shader’s runtime is causing the per-frame rendering deadline to be missed, so that nothing is rendered. Whilst OpenGL says little about this formally, in practice a shader must complete within some reasonable time limit, and the trouble is that how…
Interesting – though we can often expect really rather different results across platforms due to the many implementation-defined aspects of GLSL. What we shouldn’t see, though, are big differences on a single platform between the results for a shader before and after our semantics-preserving changes. (We might expect some pixel-level differences due to floating-point issues.)