Whether it soundly beats the RTX 2080 Ti is too early to determine until we see reviews. My take is that it will win some tests and lose some, depending on the game and settings. There are certainly more CUDA cores on the 3070, but they don't operate the same way as on the 2080 Ti, so the increase may not translate to a near-proportionate improvement.
This is something many fail to understand. It's true for most architectural changes, and it's also why we should be careful when estimating the performance of RDNA2. As you were saying, it's very likely that any new architecture will improve some areas while getting worse (or relatively worse) in others as the resource balance changes.
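To make that concrete, here's a toy back-of-envelope model of why Ampere's doubled FP32 core count doesn't mean doubled throughput. The lane layout follows Nvidia's published SM diagrams, and the 36-INT-per-100-FP instruction mix is the average Nvidia quoted in the Turing whitepaper, but the issue model itself is my own simplification, not anything official:

```c
#include <stdio.h>

/* Toy issue model, my own simplification of Nvidia's published SM layouts:
 * Turing SM:       64 FP32 lanes + 64 dedicated INT32 lanes.
 * Ampere GA10x SM: 64 FP32 lanes + 64 lanes that run FP32 OR INT32.
 * With r INT32 ops per FP32 op (r <= 1), Ampere sustains 128/(1+r) FP32
 * ops/cycle because INT32 work steals shared slots; Turing stays at 64. */
int main(void) {
    double r = 0.36; /* ~36 INT32 ops per 100 FP32 ops, Turing whitepaper avg */
    double turing_fp32 = 64.0;
    double ampere_fp32 = 128.0 / (1.0 + r);
    printf("Turing SM: %.0f FP32/cycle, Ampere SM: %.1f FP32/cycle\n",
           turing_fp32, ampere_fp32);
    printf("Per-SM gain: %.2fx, not the 2x the core count suggests\n",
           ampere_fp32 / turing_fp32);
    return 0;
}
```

The exact numbers don't matter much; the point is that the same instruction mix lands very differently on the two layouts, which is why raw core counts are a poor predictor across architectures.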
Most of the benchmark titles we have seen so far are heavily Nvidia-optimized.
In order to optimize software for specific hardware, you need to design the software around special instructions, performance-related features, or specific performance characteristics (see the sketch below for a concrete example).
Unless you count things like ray tracing as an "optimization", virtually no recent game is optimized for any specific architecture.
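To illustrate what designing around specific hardware actually looks like, here is a loose CPU-side analogy (a sketch of my own, not taken from any game): the same routine with a hand-written path for a vendor-specific SIMD instruction set and a portable fallback. GPU vendors expose the equivalent through architecture-specific intrinsics and extensions.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical example of "designing around special instructions":
 * a hand-written AVX2/FMA path, with a portable fallback for other CPUs.
 * The function name and shape are invented for illustration. */
#if defined(__AVX2__) && defined(__FMA__)
#include <immintrin.h>
/* dst[i] += src[i] * s, eight floats per step via fused multiply-add. */
static void scale_add(float *dst, const float *src, float s, size_t n) {
    size_t i = 0;
    __m256 vs = _mm256_set1_ps(s);
    for (; i + 8 <= n; i += 8) {
        __m256 a = _mm256_loadu_ps(src + i);
        __m256 d = _mm256_loadu_ps(dst + i);
        _mm256_storeu_ps(dst + i, _mm256_fmadd_ps(a, vs, d));
    }
    for (; i < n; i++) dst[i] += src[i] * s; /* scalar tail */
}
#else
/* Portable fallback: same result, no hardware assumptions. */
static void scale_add(float *dst, const float *src, float s, size_t n) {
    for (size_t i = 0; i < n; i++) dst[i] += src[i] * s;
}
#endif

int main(void) {
    float a[10] = {0}, b[10];
    for (int i = 0; i < 10; i++) b[i] = (float)i;
    scale_add(a, b, 2.0f, 10);
    printf("a[9] = %.1f\n", a[9]); /* 18.0 on either code path */
    return 0;
}
```

Notice the cost: two code paths to write, test, and maintain for one routine. That maintenance burden is exactly why almost no studio does this per GPU architecture.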
As for being bandwidth-starved, I believe the 3070 will provide very solid 1440p performance and a decent gaming experience at 4K, especially with DLSS. At these resolutions, the bandwidth should be sufficient, at least for the foreseeable future.
Bandwidth scaling varies a lot between games; it depends on how a game does LoD scaling, how many framebuffers and render passes are involved, and so on.
The GPU architecture also plays a role; the scheduling of operations and larger caches can have some impact.
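For a rough sense of scale, here is a back-of-envelope sketch of how much bandwidth full-screen render passes alone can eat at 4K, measured against the 3070's 448 GB/s peak. The pass count, pixel format, and access pattern are assumptions I picked for illustration, not numbers from any real engine:

```c
#include <stdio.h>

/* Back-of-envelope framebuffer traffic at 4K. The pass count, bytes per
 * pixel, and "one read + one write per pass" are invented for illustration;
 * real engines vary a lot, and on-chip caches absorb part of this traffic. */
int main(void) {
    double width = 3840.0, height = 2160.0;
    double bytes_per_pixel = 8.0;  /* e.g. an RGBA16F render target */
    double passes = 10.0;          /* assumed full-screen passes per frame */
    double touches_per_pass = 2.0; /* one read + one write (assumption) */
    double fps = 60.0;

    double per_frame = width * height * bytes_per_pixel
                     * passes * touches_per_pass;
    printf("Framebuffer traffic: %.2f GB/frame, %.0f GB/s at %.0f fps\n",
           per_frame / 1e9, per_frame * fps / 1e9, fps);
    printf("vs. the 3070's 448 GB/s peak (256-bit GDDR6 at 14 Gbps)\n");
    return 0;
}
```

Even with these made-up numbers, plain framebuffer traffic is only a fraction of peak bandwidth; it's the texture, geometry, and compute traffic on top of it, and how well the caches catch it, that decides whether a card ends up bandwidth-bound.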