I noticed DSLR scanning color negatives works very well, but setting the correct exposure is critical, and to get shots rivaling my dedicated film scanner I need to take multiple exposures (‘HDR’) to capture all the color detail.
Remember that white balance is applied after the sensor stage in most (if not all) cameras. So setting the white balance to correct some of the red/orange cast might help when previewing and taking your shot, but it does nothing to the data captured in the RAW file.
So what happens is that you’re basically scanning your negative ‘as a positive’. Most good cameras have 14 bits of precision; mine, for instance, has 12. Imagine you have a good 14-bit RAW file and you expose it perfectly (meaning you are ALMOST clipping the red channel in the raw data): you’re then using all 14 bits of precision for the red channel… which means the blue and green channels get far less than that. Then you’re inverting, fixing the colors, fixing contrast… and you may well end up with posterization and/or banding because of the limited dynamic range you captured in the green/blue channels.
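To see why the weaker channels lose precision, here is a rough back-of-the-envelope model (my own simplification, not an exact sensor model): every halving of a channel’s peak raw level costs roughly one bit of precision.

```python
import math

def effective_bits(sensor_bits, channel_peak_fraction):
    """Rough effective precision of a channel whose raw values only
    reach channel_peak_fraction of full scale.
    Simplified model: each halving of the peak level costs one bit."""
    return sensor_bits + math.log2(channel_peak_fraction)

# Red channel exposed to just below clipping: the full 14 bits.
print(effective_bits(14, 1.0))    # 14.0
# Blue channel under the orange mask peaking at ~1/8 of full scale:
print(effective_bits(14, 1 / 8))  # 11.0
```

So even on a 14-bit sensor, a channel sitting three stops below the red one is effectively an 11-bit capture before you’ve done any editing.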
Like I said, my (Sony) DSLR has 12 bits of precision as far as I know. Let’s say I did not expose perfectly (this is hard to do: you have no raw-data histogram, so you have to find by trial and error at which settings you’re clipping and at which you’re not. And remember, we’re talking about clipping the raw data here, not the (preview) JPEG). So let’s say the best exposure I can manage without clipping any channel fills the histogram of my shot to about 50–75%. That means I actually used something like 11 bits of precision for the shot… and that’s just for the red channel. The green and blue channels most likely captured only 6 to 7 bits of precision, and this causes issues. That’s why I (and Filmlab too, as far as I understand) take multiple shots and merge them, the way HDRMerge does, so you can capture more dynamic range in all channels than is possible in a single shot.
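The merging idea can be sketched in a few lines. This is only a conceptual illustration of exposure bracketing (normalize each frame by its exposure time, then average the unclipped pixels); it is not HDRMerge’s actual algorithm, and the function name and 0..1 float convention are my own assumptions.

```python
import numpy as np

def merge_exposures(frames, exposure_times, clip=0.98):
    """Merge bracketed linear RAW frames (floats in 0..1).
    Each frame is divided by its exposure time so all frames are on
    one 'radiance' scale; clipped pixels are excluded from the average.
    Conceptual sketch only, not HDRMerge's real implementation."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight = np.zeros_like(acc)
    for frame, t in zip(frames, exposure_times):
        valid = frame < clip                   # ignore clipped pixels
        acc += np.where(valid, frame / t, 0.0)
        weight += valid
    return acc / np.maximum(weight, 1)
```

The longer exposure contributes real precision to the dim green/blue values, while the short exposure supplies the highlights the long one blew out.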
Of course your DSLR camera has its own color response… but so does any scanner ($50 or $5000). That’s why people talk about calibrating them when you’re scanning positives.
But every scanner-tool maker I’ve discussed this with says that for negatives there is no (useful) color calibration. Since you take each channel individually and ‘line them up’ while inverting to do the final color balance, the color balance your capture device introduces is (almost) irrelevant.
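A minimal sketch of that ‘lining up’ step, under my own assumptions (linear 0..1 floats, the orange mask sampled from clear film base, a simple 1/x inversion followed by a per-channel stretch):

```python
import numpy as np

def invert_negative(raw, mask_sample):
    """Invert a linear negative capture channel by channel.
    raw: HxWx3 float array in 0..1; mask_sample: RGB of the clear
    film base (the orange mask). Dividing by the mask removes the
    cast, then each inverted channel is stretched to full range
    independently -- the 'lining up'. Hypothetical sketch."""
    densities = raw / np.asarray(mask_sample, dtype=np.float64)
    positive = 1.0 / np.clip(densities, 1e-6, None)   # invert
    lo = positive.min(axis=(0, 1), keepdims=True)     # per-channel min
    hi = positive.max(axis=(0, 1), keepdims=True)     # per-channel max
    return (positive - lo) / np.maximum(hi - lo, 1e-6)
```

Note that if the capture device scales a channel by some constant, that constant appears in both `raw` and `mask_sample` (sampled from the same capture) and cancels out, which is exactly why the device’s own color balance is almost irrelevant here.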
At least, I have scanned negatives on my real film scanner and the same negatives on my quite old DSLR, and when done properly the color output from the two is identical, as far as I can see.
If you really dig into the Filmlab scenario and you’re scanning ‘positives’ (slides), then doing a white-balance check on the light source really does help in getting the color ‘pretty good’ right from the start. If you want more precision, you have to think about calibration: capturing an IT8 target with Filmlab (with white balance set to your light source) and then using a free utility like LProf to make an .ICC profile will help if you want the color to be more ‘perfect’.
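For the white-balance-on-the-light-source step, the underlying math is just per-channel gains. A minimal sketch, assuming linear 0..1 floats and a small capture of the bare light source (function names are my own):

```python
import numpy as np

def white_balance_gains(light_patch):
    """Per-channel gains from a capture of the bare light source:
    scale each channel so the light source comes out neutral.
    Sketch of the 'white balance on the lightsource' idea."""
    mean = light_patch.reshape(-1, 3).mean(axis=0)
    return mean.max() / mean        # boost the weaker channels

def apply_gains(image, gains):
    """Apply the gains to a linear capture, clipping to 0..1."""
    return np.clip(image * gains, 0.0, 1.0)
```

This only neutralizes the light source; correcting the full color response of the camera is what the IT8-target/.ICC-profile route is for.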
But I don’t think the purpose of Filmlab is perfectly calibrated color. I think it’s more about getting ‘usable results as easily and quickly as possible’.