That's kind of a... weird way of looking at it. I mean, if you're an enthusiast, sure, you can throw power requirements and cooling requirements out the window. Have your 1000W power supply and your water chiller. But neither Intel nor AMD is selling chips that require water chillers and go way past the point of diminishing returns. 99% of desktop users are fine with a 65W-or-less processor, if they even use a desktop anymore. Laptops and smartphones are where it's at for a lot of people these days.
No, it's not weird at all. It's the difference between what an enthusiast would say and what someone who'd rather maximize performance-per-watt than maximize performance would say. And even environmentalists can be enthusiasts: if they're really committed, they can set up a solar array with batteries so their gaming PC doesn't generate much carbon.
Thing is, AMD under Su has been really big on small dies to maximize profits. Even Radeon VII has a small die. Small dies are rarely the optimal path for enthusiast performance. Of course, having a large die doesn't guarantee outstanding performance either (as we saw with Fiji), and if a big die is full of quasi-relevant stuff like Tensor cores, what you may be getting is help with hot spots more than the highest performance. (Or the architecture may simply have a bottleneck.) It does seem to make some sense, though, to go to chiplets to spread heat out now that we're at 7nm, despite the latency cost. However, if we're talking about best performance, dies would be far nearer the reticle limit. Enthusiasts should be wary about getting excited over expensive products like Radeon VII that ship with small dies and high clocks. That's not only probably suboptimal for performance-per-watt; it's suboptimal for pure performance. What it is optimal for is margin for the company peddling it. Small dies with high stock clocks are great for planned obsolescence.
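To put rough numbers on the die-size point (the areas below are the commonly cited approximate figures, not official measurements):

```python
# Rough sketch: how far some well-known dies sit from the single-exposure
# reticle limit (~26 mm x 33 mm). Areas are commonly cited approximations.
RETICLE_LIMIT_MM2 = 26 * 33  # ~858 mm^2

dies_mm2 = {
    "Radeon VII (Vega 20, 7nm)": 331,
    "Fiji (28nm)": 596,
    "GV100 (12nm)": 815,
}

for name, area in dies_mm2.items():
    print(f"{name}: {area} mm^2 ({area / RETICLE_LIMIT_MM2:.0%} of reticle limit)")
```

A true max-performance 7nm part would look a lot more like GV100's share of the reticle than Vega 20's.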
Hyperbole isn't very useful in this subject area, either. 105 watts is very little. Even if we were to see a return to the 250 watt FX-9590 (it used more than the 220 W it was rated for), that would hardly be outlandish compared to what high-performance GPUs use. This is an industry where people used to run triple SLI; Anandtech and others included triple GTX 480 and 580 systems in their benchmarks. I am reminded of the sudden obsession with power consumption that happened with televisions once marketeers started pushing LED-backlit TVs. Suddenly, it was utterly shameful not to have the most power-sipping LED technology; anyone buying a CCFL model clearly wanted to drag knuckles and doom the planet. It's fascinating to see how marketing pressure can change the narrative so rapidly and effectively.
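For scale, the back-of-the-envelope math (the figures are the commonly quoted ratings; treat them as approximations):

```python
# Scale comparison: a 105 W CPU next to historical high-power setups.
# Figures are commonly quoted ratings, not measurements from one system.
cpu_tdp = 105               # the Ryzen TDP under discussion
fx9590_draw = 250           # FX-9590 measured draw (rated 220 W)
gtx480_tdp = 250            # a single GTX 480
triple_sli = 3 * gtx480_tdp

print(f"Triple GTX 480 SLI: ~{triple_sli} W in GPUs alone")
print(f"That's ~{triple_sli / cpu_tdp:.1f}x the entire 105 W CPU budget")
```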
If people are really so interested in performance-per-watt, then why enable companies to slap three fans and a high voltage on a 1660 in search of a clock that's well outside the efficient range of the chip? People buy cards like this. In fact, I can't think of a single third-party GPU with decent cooling that doesn't ship with some factory overclock. GPU makers know that enthusiasts care more about performance than about performance-per-watt efficiency. If Fiji, Vega, and Polaris had been able to lead in raw performance, sales would have been considerably higher despite the worse performance-per-watt (outside the rare games that run more efficiently on recent GCN). I have no doubt that a 500 watt GPU that beat Nvidia's top card by 25%, or that matched its performance at half the price, would sell very well.
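The "efficient range" point falls out of how dynamic power scales: roughly P ∝ C·V²·f, with the required voltage rising as clocks climb. A toy sketch, with invented clock/voltage pairs just for illustration:

```python
# Toy model of why chasing clocks past the sweet spot hurts perf-per-watt:
# dynamic power scales ~ C * V^2 * f, and required voltage rises with clock.
# The clock/voltage pairs below are invented for illustration only.
points = [
    (1.5, 0.80),   # (GHz, volts) near the efficiency sweet spot
    (1.8, 0.90),
    (2.0, 1.05),   # factory-overclock territory
]

for f, v in points:
    power = v**2 * f          # relative dynamic power (capacitance dropped)
    print(f"{f} GHz @ {v} V -> relative perf/W: {f / power:.2f}")
```

Performance scales linearly with clock while power scales with V²·f, so every extra 100 MHz near the top costs disproportionately more power.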
Server CPUs typically have lots of cores with high performance-per-watt and modest TDPs relative to their core counts. Enthusiasts have been sold server-centric designs by AMD before. Remember Bulldozer/Piledriver? Eight cores with weak FPU performance back in 2011/2012 was definitely not optimal for gamers. It was, though, more useful for servers. What many don't realize is that AMD sold a good number of Piledriver Opterons for supercomputers. Those machines mainly used GPUs for the heavy lifting, which made Piledriver's cheapness attractive. Yes, Intel's Xeons had better performance-per-watt than Bulldozer/Piledriver, so AMD didn't do well in the server market. (That doesn't mean, though, that the design wasn't server-oriented.) The supercomputer market cared less about power consumption, although that's becoming more of a marketing focus lately. AMD pioneered the many-small-cores design philosophy with Bulldozer, and what we're seeing with Ryzen is an extension of that philosophy, only without CMT.
Something like a 180 watt TDP would be perfectly reasonable for a 16 core / 32 thread desktop CPU. 105 W seems too low to maximize performance. It also seems like a way to let boards with mediocre VRMs take less heat, both literally and from customers. This way, AMD props up the practice of selling "8 phase" boards that are really 4, and so on.
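To make that concrete with a flat split (real chips shift power between cores dynamically, so this is only illustrative):

```python
# Hypothetical per-core budget for a 16-core part at two TDPs.
# A flat split is a simplification; boost algorithms are smarter than this.
cores = 16
for tdp_w in (105, 180):
    print(f"{tdp_w} W TDP -> ~{tdp_w / cores:.1f} W per core, all cores loaded")
```

Roughly 6.6 W per core at 105 W versus about 11 W at 180 W, and that difference is sustained all-core clocks.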
All this said, I do like performance-per-watt a lot, because it makes cooling without a lot of noise easier. Quiet cooling and high performance are still at odds, though. The enthusiast point of view typically favors the latter, unless the person has deep enough pockets to offset the higher heat with better dissipation hardware.
If you think it's weird to advocate for higher performance (higher TDP) over something very conservative like 105 watts, consider the explosion of AIOs in popularity (not their penchant for literally exploding, as happened with my H50). Dual fan AIOs are pretty standard these days for enthusiasts' CPUs. If modest TDPs were the goal, why wouldn't they be running cheaper air coolers instead? They're running dual and triple fan AIOs to gain performance.
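The reason the cooler buys performance is simple: temperature rise over ambient is roughly package power times the cooler's effective thermal resistance. With made-up but plausible resistance values:

```python
# Rough model: temp rise over ambient = power * cooler thermal resistance.
# The resistance values here are plausible assumptions, not measured specs.
power_w = 180  # sustained package power
coolers = {
    "budget air cooler": 0.25,   # degC per watt (assumed)
    "dual-fan 240mm AIO": 0.15,  # degC per watt (assumed)
}

for name, r in coolers.items():
    print(f"{name}: ~{power_w * r:.0f} C over ambient at {power_w} W")
```

Lower thermal resistance means lower temps at the same power, which means more boost headroom.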
We're not, we can OC, can we not?
How successful that is depends on design factors like power delivery robustness. You can see this with GPUs that can't clock higher because of a weak VRM and/or too few or too small power connectors. How easy overclocking is also depends on various design decisions. I think most anyone would rather have a CPU perform higher at stock so their overclocking takes less work. There are also people who don't want to overclock, or who can't (work computers).
If you release low TDP parts across the board, then board makers can sell their fake/weak VRMs, and enthusiasts will be stuck paying an extra premium for the few top-end boards that actually have good power delivery. If your CPUs are more demanding out of the box, though, there is less room for fakery.
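Here's why a low TDP is such a gift to weak VRMs: per-phase current scales directly with package power. The Vcore and efficiency below are assumed round numbers:

```python
# Why low TDP helps weak VRMs: per-phase current at different package powers.
# Vcore and VRM efficiency are assumed round numbers for illustration.
vcore = 1.25
vrm_efficiency = 0.90
true_phases = 4  # an "8 phase" board that's really 4 doubled phases

for tdp_w in (105, 180):
    amps = tdp_w / (vcore * vrm_efficiency)
    print(f"{tdp_w} W -> ~{amps:.0f} A total, ~{amps / true_phases:.0f} A per true phase")
```

At 105 W a marginal four-phase design survives; at 180 W it would have to be built honestly.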
There is also the potential issue of the number of PCB layers. I have read that six layers are really needed for PCIe 4.0, and yet Gigabyte is apparently going to be selling four-layer X570 boards. Perhaps one reason for the 105 watt TDP is to make it possible to cut down on the number of layers needed, but I don't know if there is a connection.