Finished all the needed primitives for the software renderer; next stop: adding 3D rendering on the GPU so I can switch back to using mostly the GPU for development.
I'm keeping the CPU and GPU renderers in sync; the 2D rendering is already fully done and produces exactly the same output in both renderers. I've decided to use quite simple material "shaders", relying mostly on cubemaps to provide most material effects (this includes glass materials, glossy surfaces, general lighting on dynamic models, etc.). With the right combination of inputs one can do wonders with such a simple setup :)
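The cubemap-driven material idea can be sketched roughly like this (names such as `shade_material` and `sample_cubemap` are my illustrative stand-ins, not the engine's actual code): the per-pixel work reduces to computing a reflection vector and looking up a cubemap, blended with a base color by glossiness.

```python
# Minimal sketch of a cubemap-driven "material shader".
# shade_material / sample_cubemap are hypothetical names, not the engine's API.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(i, n):
    """Reflect incident direction i around unit normal n."""
    d = 2.0 * dot(i, n)
    return tuple(ix - d * nx for ix, nx in zip(i, n))

def shade_material(base_color, glossiness, view_dir, normal, sample_cubemap):
    """Blend a base color with a cubemap sample along the reflection vector."""
    r = reflect(view_dir, normal)
    env = sample_cubemap(r)  # the cubemap supplies the environment/lighting term
    return tuple(b * (1.0 - glossiness) + e * glossiness
                 for b, e in zip(base_color, env))
```

At glossiness 1.0 this degenerates to a pure mirror (just the cubemap sample), at 0.0 to the flat base color, with glass and glossy surfaces falling in between.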
I've found that with proper filtering (done during cubemap generation, so no expensive operations are needed during rendering) one can have cubemaps without any seams, and the cubemaps can actually be really tiny for most glossy materials. This made it possible to use just world-space normal maps and instead create such cubemaps dynamically to provide the lighting for dynamic objects (instead of computing the lights per pixel).
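For context, here is the standard mapping from a direction to a cubemap face and UV (my own sketch of the usual major-axis convention, not the engine's code). Seams appear because adjacent faces hit different texels at these borders, which is why filtering across face edges at generation time, rather than at sampling time, does the trick.

```python
# Map a direction vector to (face_index, u, v) using the usual
# major-axis cubemap convention: faces 0..5 = +X, -X, +Y, -Y, +Z, -Z.

def dir_to_face_uv(d):
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:          # X axis dominates
        face = 0 if x > 0 else 1
        ma, uc, vc = ax, (-z if x > 0 else z), -y
    elif ay >= az:                     # Y axis dominates
        face = 2 if y > 0 else 3
        ma, uc, vc = ay, x, (z if y > 0 else -z)
    else:                              # Z axis dominates
        face = 4 if z > 0 else 5
        ma, uc, vc = az, (x if z > 0 else -x), -y
    # Remap from [-1, 1] on the face to [0, 1] texture coordinates.
    u = 0.5 * (uc / ma + 1.0)
    v = 0.5 * (vc / ma + 1.0)
    return face, u, v
```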
The static world will use pre-generated lighting, mostly relying on raytraced ambient occlusion. I plan to do global illumination effects by manually placing extra lights, as I think that will work better than the automatic approach I used in the previous engine: because the lighting had to be processed in HDR, it tended to produce washed-out colors and ended up looking basically the same as pure ambient occlusion. Manual placement will allow a more artistic approach to the whole thing.
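Raytraced ambient occlusion at bake time boils down to Monte Carlo hemisphere sampling. A minimal sketch, assuming a caller-supplied `is_occluded` ray test in place of the real ray-vs-scene intersection:

```python
import math
import random

def cosine_sample_hemisphere(rng):
    """Cosine-weighted direction on the +Z hemisphere (tangent space)."""
    r1, r2 = rng.random(), rng.random()
    phi = 2.0 * math.pi * r1
    r = math.sqrt(r2)
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - r2))

def ambient_occlusion(point, is_occluded, samples=64, seed=0):
    """Fraction of hemisphere rays that escape: 1.0 = fully open, 0.0 = fully blocked.

    is_occluded(point, direction) is a hypothetical stand-in for the
    real raytracer's occlusion query.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if is_occluded(point, cosine_sample_hemisphere(rng)))
    return 1.0 - hits / samples
```

A point under open sky returns 1.0, a fully enclosed point 0.0, and a point next to a wall lands somewhere in between; the cosine weighting matches the diffuse response, so no extra weighting pass is needed.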
Speaking of HDR: similarly to the previous engine, this one will use various fake HDR approaches as well. I don't like how, with real HDR, you can't easily control the resulting image; with tonemapping operators the colors tend to get washed out and you get over- and under-exposures. I've always felt that HDR rendering simulates a camera more than human vision. Unless you're observing extreme lighting conditions, you generally do not see over- and under-exposures.
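To make the washed-out complaint concrete, here is the classic Reinhard global operator L/(1+L) next to a naive clamp (a sketch for illustration only, not something this engine uses): Reinhard compresses every pixel, including midtones, while the clamp leaves everything below 1.0 untouched and simply clips highlights.

```python
def reinhard(color):
    """Reinhard global tonemap: compresses the whole range, never clips."""
    return tuple(c / (1.0 + c) for c in color)

def clamped(color):
    """Naive clamp: midtones pass through unchanged, highlights clip hard."""
    return tuple(min(c, 1.0) for c in color)
```

A mid-grey of 0.5 comes out at about 0.33 under Reinhard while the clamp leaves it at 0.5, and white (1.0) drops to 0.5, which is exactly the loss of direct control over the final image described above.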