Depth of field.
...or lack thereof.
Software is getting better at spoofing shallow DoF using multiple exposures.
This was taken from a tripod at the maximum aperture my compact camera offers at mid-zoom (f/4.5), to get the most "Bokeh" possible.
The camera has a maximum aperture of f/1.8, but only at its very wide 24mm-equivalent end.
You can see a slightly blurred background, but not much considering how close Mr Monkey is. It's a physical limitation imposed by the (small) sensor size.
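The sensor-size effect can be sketched with the standard thin-lens depth-of-field approximation. The sensor diagonals below, and the diagonal/1500 rule for the circle of confusion, are my own rough assumptions for a typical compact and an APS-C camera, not actual specs:

```python
# Rough depth-of-field comparison at the same f-number on two sensor sizes,
# framing the same subject. Thin-lens approximation; circle of confusion
# taken as sensor diagonal / 1500 (a common rule of thumb).

def dof_mm(focal, f_number, subject, coc):
    """Return (near, far) limits of acceptable sharpness, all in mm."""
    h = focal ** 2 / (f_number * coc) + focal          # hyperfocal distance
    near = subject * (h - focal) / (h + subject - 2 * focal)
    far = (subject * (h - focal) / (h - subject)
           if subject < h else float("inf"))
    return near, far

# Assumed figures: APS-C (diag ~28.4 mm) with a 50 mm lens, vs a small
# compact sensor (diag ~9.5 mm) with the ~17 mm lens that frames the same shot.
subject = 1000  # subject 1 m away
for name, focal, diag in [("APS-C, 50mm", 50, 28.4), ("compact, 17mm", 17, 9.5)]:
    near, far = dof_mm(focal, 4.5, subject, diag / 1500)
    print(f"{name}: ~{far - near:.0f} mm in focus at f/4.5")
```

At the same f/4.5 and the same framing, the small sensor comes out with roughly three times the depth of field, which is why the compact can't blur the background much however wide you open it.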
The following image uses the "Background Blur" Scene Mode, which takes two shots in rapid succession and merges them.
It's actually pretty convincing, and it uses the same f/4.5 and the same focal length.
I know from experience that fly-away hair defeats its ability to separate the two layers, but it still does a decent job of suppressing distracting backgrounds.
The difference in colour balance is because image #1 was shot in RAW and converted to JPEG with no adjustment, while image #2 is a straight JPEG (a limitation of the Scene Mode).
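The merge step can be sketched crudely: assuming the camera derives a subject mask from the two frames (how it actually does that is its own secret sauce), the composite is just subject-from-sharp, everything-else-from-blurred. The mask and the stand-in "blurred frame" below are entirely made up for illustration:

```python
import numpy as np

def merge(sharp, blurred, subject_mask):
    """Keep subject pixels from the sharp frame; take the rest from the blurred one."""
    return np.where(subject_mask, sharp, blurred)

# Tiny grayscale stand-ins: a 4x4 "image" with the subject in the middle.
sharp = np.arange(16, dtype=float).reshape(4, 4)
blurred = np.full((4, 4), sharp.mean())   # stand-in for a defocused frame
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                     # hand-made subject mask
composite = merge(sharp, blurred, mask)
```

Fly-away hair defeats the real thing precisely because the mask edge is hard: anything the mask misclassifies gets the wrong frame's pixels.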
>>>>> Bigger Glass & Bigger Sensor >>>>>
Here is f/4.5 on a crop-sensor DSLR (hand-held under terrible fluorescent light).
Here's the same camera with a 50mm wide open at f/1.4 (so fairly "Big Glass"). The DoF doesn't even cover the whole of Mr Monkey's head.
I've got to say that the "Background Blur" on my compact camera has done an admirable job when you compare the f/4.5 shots.
It doesn't compare to a big piece of glass at f/1.4, though. I expect that's just a matter of time and processing power.
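For a sense of why the same f-number renders so differently: multiplying the f-number by the crop factor between the two sensors gives the depth-of-field "equivalent" aperture. The sensor diagonals are my rough assumed figures, not exact specs:

```python
# Depth-of-field "equivalent aperture": f-number x crop factor between sensors.
# Diagonals are rough assumed figures for a typical compact vs APS-C.
apsc_diag_mm = 28.4
compact_diag_mm = 9.5
crop = apsc_diag_mm / compact_diag_mm   # ~3x

print(f"compact f/4.5 renders DoF like f/{4.5 * crop:.1f} on APS-C")
print(f"matching APS-C f/1.4 would need f/{1.4 / crop:.2f} on the compact")
```

An f/0.47 lens for the compact isn't going to happen optically, which is why the software route is the only way for small sensors to chase that f/1.4 look.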
I must try my phone camera's ability to spoof a shallow depth of field. Watch this space.
>>> Phone Camera >>> NO EFFECTS
>>> Phone Camera >>> EFFECT "Lens Blur"
Opinions?
I think it's not too bad considering the "infinite depth of field" tendency of tiny phone-camera sensors.