Nvidia does get a boost to draw calls, just not as great as AMD's: where AMD may triple its draw calls, Nvidia may only double them (a rough, vague example).
1 draw call time = API overhead time + rendering time for that specific object
1 synced frame at 60 fps = ~16.7 ms
draw calls per frame = ~16.7 ms / (overhead + rendering)
If the GPU draws non-shaded cubes (very small rendering time), then behold an explosion of draw calls once DX12 cuts the overhead, because most of each draw call's time was overhead to begin with.
If the GPU draws complex, shaded, tessellated objects (long render time per object), reducing the overhead yields less of a gain: here, draw call time was not predominantly wasted on overhead in the first place.
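The arithmetic above can be sketched in a few lines. All timing numbers below are made-up illustrative values, not measurements of any real API or GPU:

```python
# Rough sketch of the draw-call budget arithmetic above.
# Overhead and render times are hypothetical, purely for illustration.

FRAME_MS = 1000 / 60  # one synced frame at 60 fps, ~16.7 ms

def draw_calls_per_frame(overhead_ms, render_ms):
    """Max draw calls that fit in one frame when each call costs
    API overhead plus per-object rendering time."""
    return int(FRAME_MS // (overhead_ms + render_ms))

# Non-shaded cubes: overhead dominates, so cutting it explodes the count.
print(draw_calls_per_frame(0.010, 0.002))  # high-overhead API
print(draw_calls_per_frame(0.001, 0.002))  # low-overhead API

# Complex shaded objects: render time dominates, so the same cut gains little.
print(draw_calls_per_frame(0.010, 0.500))
print(draw_calls_per_frame(0.001, 0.500))
```

With the cubes, a 10x overhead reduction multiplies the budget roughly 4x; with the heavy objects, the budget barely moves.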
Because of this, game designers always separate static geometry from dynamic, in the sense that all static geometry is baked together to reduce separate draw calls. For example, if you have 17 non-moving buildings in the distance, you don't issue 17 draw calls, one per building; rather, one draw call for the group of 17 buildings as a single object. This is called batching, and it is very effective at reducing draw calls when the same materials/shaders are reused all over the game world. This technique alone offsets most of the overhead shortcomings DX11 has.
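A toy sketch of the batching idea, assuming hypothetical mesh records rather than any real engine API: static meshes that share a material are grouped, and each group then costs a single draw call.

```python
# Toy sketch of static batching: group non-moving meshes by material so
# each group can be merged and drawn with one call.
# The mesh dicts are hypothetical, not a real engine's data structures.

from collections import defaultdict

def count_draw_calls_unbatched(meshes):
    """One draw call per mesh, the naive case."""
    return len(meshes)

def batch_static(meshes):
    """Group static meshes by material; each group becomes one draw call."""
    groups = defaultdict(list)
    for mesh in meshes:
        groups[mesh["material"]].append(mesh)
    return groups

# 17 distant buildings sharing one material: 17 draw calls become 1.
buildings = [{"name": f"building_{i}", "material": "concrete"} for i in range(17)]
batches = batch_static(buildings)
print(count_draw_calls_unbatched(buildings))  # 17
print(len(batches))                           # 1
```

In a real engine the grouped meshes would be merged into one vertex/index buffer; the grouping step above is the part that determines how many draw calls survive.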
So the number of draw calls per frame will grow, based not on GPU architecture but on the complexity of each object being drawn. Once the constraints are lifted, the balance can go both ways: toward more detailed objects, because the available draw call budget per frame will be high enough, or toward more distinct objects on screen, because the quality of each individual object is already high enough.
This shift in balance will simply free developers to have more game objects animated and physically simulated by the CPU, and to use far more varied materials in their games.
The problem with AMD is a software stack limitation with DX11. Nvidia has done a better job of extracting parallel threaded operation from their driver stack, whereas AMD relies on a single CPU thread. DX11 allows only limited gains here, but Nvidia went for it and it paid off with roughly 50% better performance than AMD. Peculiarly, AMD's command processor is reputedly more robust than Nvidia's and hence benefits more from the explicit multi-threading of DX12/Mantle/Vulkan. Additionally, the hardware compromises Maxwell needed due to 28nm (which resulted in good perf/watt in current gaming workloads) may be less beneficial going forward. Though by the time it matters, I expect both IHVs will have new architectures on 14/16nm...
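The single-thread vs multi-thread distinction can be illustrated conceptually. This is not real D3D code, just a sketch of the submission model: a DX11-style driver funnels all draw submission through one thread, while DX12 lets each worker thread record its own command list so only the final queue submit is serialized.

```python
# Conceptual sketch (not real D3D11/D3D12 API calls): several threads
# record command lists in parallel; only the submit is a single,
# cheap serialization point, as in DX12-style explicit APIs.

from concurrent.futures import ThreadPoolExecutor

def record_command_list(draws):
    """Pretend to record draw commands; returns a 'command list'."""
    return [("draw", d) for d in draws]

def submit(command_lists):
    """Single serialization point: count commands reaching the 'GPU'."""
    return sum(len(cl) for cl in command_lists)

# Four worker threads each record a quarter of the frame's 1000 draws.
work = [range(i * 250, (i + 1) * 250) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    lists = list(pool.map(record_command_list, work))
print(submit(lists))  # 1000 draws reach the queue via one submit
```

The point of the sketch is where the serialization sits: in the DX11 single-thread model, all of `record_command_list` would also run on one thread.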
That's true for DX11 drivers; Nvidia's gain there is not negligible. However, AMD's disadvantage in DX11 shouldn't be read as a direct relative advantage in DX12: the complexity of each draw call will dictate the peak number of draw calls per frame more than anything else.
IMO it will come to this: Nvidia will still be better with highly detailed geometry, and AMD, as of Fiji, will be better with shader performance, especially if a shader samples from many different textures. DX12 overhead differences will translate to slightly different CPU core usage percentages (well below 100%, mind you; the bottleneck will never be on the CPU with DX12 unless you play on an Atom).