To be precise: as soon as a clueless news outlet reports it as "secret", it's no longer secret.
To be even more precise: just because a piece of information is not yet ready for publication does not mean that it is a secret per se; it just means that it has yet to be disclosed. Of course one could call this a trade secret, but then
all chips are trade secrets if you look closely enough, even when they are on the market - it's not like the designs are publicized in any detail below a very high-level overview, after all. That a designer and maker of high-end computer chips is working on a chip for an upcoming high-end production node thus does not qualify as a "secret" IMO - it's both the most blindingly obvious thing ever (as they will be working on chips for all major nodes going forward, long before said nodes are available) and too general a piece of information to warrant actively keeping secret. What's next, is it a secret that Nvidia's design labs use electricity simply because they don't publicize it?
Why?! AMD's Radeon VII with Vega 20 on TSMC's N7 node was released in January LAST year.
I am asking seriously; you are dodging the question.
You know how product launches work, right? You announce a product when mass production has been running long enough that the product can reach customers within a reasonable amount of time. In other words: while there are rumors galore about GPU designs both one and two years out, you don't
ever get official commentary on them before the maker is ready to bring them to market. Indirect information might be given out - such as updates on new architectures or related products for other markets (HPC/server/etc.) - but beyond that there won't be a peep from Nvidia until they are, at the very least, ready to give a launch event date. And so far Nvidia has had little reason to move to the more expensive TSMC 7nm node, as 12nm is cheap and still has them at efficiency parity with, or a small lead over, RDNA 1, with a solid lead in absolute performance. They launched the Super lineup to compete better with RDNA, which gives them a further holdover period before they need to launch a new architecture - though competition is likely to heat up later this year. There's also the potential issue of die size, with Turing dies being
massive, which is problematic on smaller nodes. This might not be an outright issue, but it sure is a good reason for Nvidia to hold off on adopting newer, smaller nodes until they have to, as even a 2080 Ti shrunk to TSMC 7nm would be around the reticle limit for that node, which would both hurt yields and leave little to no room for performance increases.
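For a rough sense of the arithmetic, here's a back-of-envelope sketch. The TU102 die size and the ~858 mm² reticle limit are public figures, but the 12nm-to-N7 area scaling factors are my own guesses - the real chip-level shrink depends heavily on the logic/SRAM/IO mix - so treat this as an illustration, not a prediction:

```python
# Back-of-envelope die-shrink sketch. Figures are public ballpark numbers,
# not anything confirmed by Nvidia or TSMC; the scaling factors are guesses.

TU102_AREA_MM2 = 754        # TU102 (RTX 2080 Ti) die size on TSMC 12nm
RETICLE_LIMIT_MM2 = 858     # ~26 mm x 33 mm single-exposure reticle limit

# A chip-level shrink from 12nm to N7 is far smaller than the headline logic
# density gain, since SRAM and analog/IO scale poorly - so try a range.
for scale in (0.5, 0.6, 0.7):
    shrunk = TU102_AREA_MM2 * scale
    headroom = RETICLE_LIMIT_MM2 - shrunk
    print(f"area scale {scale:.1f}: ~{shrunk:.0f} mm^2 die, "
          f"~{headroom:.0f} mm^2 left before the reticle limit")
```

How close a straight shrink (plus whatever extra units a successor would add) actually gets to that limit obviously hinges on the assumed scaling factor, which is exactly why this is hard to settle from the outside.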