RISC-V CPU designed in less than 5 hours using AI.
This comment probably also fits the topic of whether AI is a danger to workers' jobs, but that topic is old; the last comment there is more than two years old and I see no point in reviving it.
In this article, we report a RISC-V CPU automatically designed by a new AI approach, which generates large-scale Boolean function with almost 100% validation accuracy (e.g., > 99.99999999999% as Intel [31]) from only external input-output examples rather than formal programs written by the human. This approach generates the Boolean function represented by a graph structure called Binary Speculation Diagram (BSD), with a theoretical accuracy lower bound by using the Monte Carlo-based expansion, and the distance of Boolean functions is used to tackle the intractability.
These guys invent a new Binary Decision Diagram variant (calling it a Binary Speculation Diagram), and then have the audacity to call it "AI".
Look, advancements in BDDs are cool and all, but holy shit. These researchers are overhyping their product. When people talk about AI today, they don't mean 1980s-style AI. Don't get me wrong, I like 1980s-style AI, but I recognize that 1980s-style AI goes by a new name these days: "Automated Theorem Proving". You can accomplish awesome feats with automated theorem proving (such as this new CPU), but guess what? A billion other researchers are also exploring BDDs (ZDDs, and a whole slew of other alphabet-soup binary (blah) diagrams) because this technique is widely known.
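To make the alphabet soup concrete, here's a minimal toy sketch of the textbook structure, a reduced ordered BDD. This is illustration-only Python I wrote for this comment; it has nothing to do with the paper's BSD, and a real package adds apply/ITE operations, complemented edges, dynamic reordering, and so on.

    # Toy reduced ordered BDD (ROBDD), illustration only; nothing to do
    # with the paper's BSD. Nodes are hash-consed so equal subgraphs are
    # shared, and redundant tests (low == high) are dropped; those two
    # reduction rules make the diagram canonical for a fixed variable order.

    TRUE, FALSE = 1, 0                      # terminal node ids

    class ROBDD:
        def __init__(self, num_vars):
            self.num_vars = num_vars
            self.unique = {}                # (var, low, high) -> node id
            self.nodes = {}                 # node id -> (var, low, high)
            self.next_id = 2                # 0 and 1 are reserved for terminals

        def mk(self, var, low, high):
            if low == high:                 # redundant test: skip the node
                return low
            key = (var, low, high)
            if key not in self.unique:      # hash-consing: share equal subgraphs
                self.unique[key] = self.next_id
                self.nodes[self.next_id] = key
                self.next_id += 1
            return self.unique[key]

        def build(self, func, var=0, assignment=()):
            """Shannon-expand a Python predicate over 0/1 tuples."""
            if var == self.num_vars:
                return TRUE if func(assignment) else FALSE
            low = self.build(func, var + 1, assignment + (0,))
            high = self.build(func, var + 1, assignment + (1,))
            return self.mk(var, low, high)

    # 8-bit odd parity: the truth table has 256 rows, the ROBDD stays tiny.
    bdd = ROBDD(8)
    root = bdd.build(lambda bits: sum(bits) % 2 == 1)
    print(len(bdd.nodes), "internal nodes")     # linear in the bit width

The reduced canonical core is the whole trick: for a fixed variable order, equivalent functions get literally the same node, so equivalence checking becomes a pointer comparison.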
"Chinese AI Team innovates way to call Binary Decision Diagram competitor an AI during the AI hype cycle". That's my summary of the situation. Ironically, I'm personally very interested in their results because BDDs are incredibly cool and awesome. But its
deeply misrepresenting what the typical layman considers AI today (which is mostly being applied to ChatGPT-like LLMs, or at a minimum, deep convolutional neural networks that underpin techniques like LLMs).
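As an aside on that "> 99.99999999999%" figure: the abstract says the accuracy bound comes from a "Monte Carlo-based expansion", and I have no idea what that looks like internally. The generic version of a Monte Carlo accuracy check is just random IO sampling against a reference, as in this throwaway sketch (everything in it is a made-up stand-in, not the paper's validation procedure):

    # Generic Monte Carlo agreement check between an "inferred" Boolean
    # function and a reference implementation. The two functions below are
    # made-up stand-ins; this is NOT the paper's validation procedure.
    import random

    def estimated_accuracy(inferred, reference, num_inputs, samples=100_000):
        agree = 0
        for _ in range(samples):
            bits = tuple(random.randint(0, 1) for _ in range(num_inputs))
            agree += inferred(bits) == reference(bits)
        return agree / samples

    # Stand-ins: a "reference" bit function and an "inferred" copy that is
    # wrong on a 1/64 slice of the input space.
    reference = lambda bits: (sum(bits) >> 1) & 1
    inferred = lambda bits: reference(bits) ^ (bits[:6] == (1, 1, 1, 1, 1, 1))
    print(estimated_accuracy(inferred, reference, num_inputs=16))
    # Roughly 1 - 2**-6 ~= 0.984. Resolving error rates near 1e-13 by plain
    # sampling would need on the order of 1e13+ samples, so the theoretical
    # lower bound the abstract claims is the more interesting part.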
------------------
Furthermore, standard BDDs can create and verify Intel 486-like chips just fine. That's just a 32-bit function (64 bits with 2x inputs), probably without the x87 coprocessor (so no 80-bit floats or 160-bit 2x inputs). Modern BDD techniques used to automatically verify, say, AMD EPYC or Intel AVX-512 instructions are handling up to 3x inputs of 512 bits each, or ~1536 bits... and each additional bit is exponential in the worst case for the BDD technique. So... yeah... 64-bit inputs vs. 1536-bit inputs isn't really that impressive.
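If you want to feel that per-bit sensitivity yourself, here's a brute-force toy (mine, not from any paper): it counts distinct Shannon cofactors per level, which roughly tracks ROBDD width for a fixed variable order. The classic contrast is between adder output bits, which stay narrow under an interleaved ordering, and the middle output bit of a multiplier.

    # Toy brute-force probe of per-bit BDD sensitivity (illustration only).
    # The number of distinct Shannon cofactors after fixing the first k
    # inputs roughly tracks the ROBDD width at level k for this ordering.
    from itertools import product
    from operator import add, mul

    def max_width(f, n):
        """Max count of distinct cofactors over all prefix lengths 0..n."""
        width = 0
        for k in range(n + 1):
            cofactors = set()
            for prefix in product((0, 1), repeat=k):
                # A cofactor is the truth table over the remaining n-k bits.
                table = tuple(f(prefix + rest)
                              for rest in product((0, 1), repeat=n - k))
                cofactors.add(table)
            width = max(width, len(cofactors))
        return width

    def bits_to_int(bits):
        return int("".join(map(str, bits)) or "0", 2)

    def output_bit(op, out_bit):
        """Bit `out_bit` of op(a, b), with a/b bits interleaved in the input."""
        def f(bits):
            a = bits_to_int(bits[0::2])
            b = bits_to_int(bits[1::2])
            return (op(a, b) >> out_bit) & 1
        return f

    for w in (3, 4, 5, 6):
        adder = max_width(output_bit(add, w - 1), 2 * w)
        multiplier = max_width(output_bit(mul, w - 1), 2 * w)
        print(f"{w}-bit operands: adder bit width {adder:3d}, "
              f"multiplier middle bit width {multiplier:3d}")

Run it and you'll see the adder column stay flat while the multiplier column climbs; Bryant proved decades ago that the middle bit of multiplication needs exponential-size OBDDs under any variable ordering, which is why BDD people get twitchy about wide inputs.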
-----------
In fact, the underlying problem is: with only finite inputs and their expected outputs (i.e., IO examples) of a CPU, inferring the circuit logic in the form of a large-scale Boolean function that can be generalized to infinite IO examples with high accuracy.
I literally have an example of this sitting in my 1990s-era BDD textbook. This isn't new at all. This team is overselling their achievement. Granted, my textbook only covers the "textbook-level" Reduced Ordered Binary Decision Diagram, with a few notes on ZDDs and the like... but I'm not surprised that a new BDD variant could lead to some unique advantages.
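To be clear about what the quoted problem statement is asking for, here's a toy sketch I threw together of the "infer a Boolean function from finite IO examples" task: Shannon-expand on the observed examples and speculate a constant wherever the data runs out. The names and the strategy here are mine; the paper's actual BSD construction and its Monte Carlo expansion are presumably far more sophisticated.

    # Toy "infer a Boolean function from IO examples" sketch (not the
    # paper's BSD algorithm): Shannon-expand on the observed examples and
    # speculate a constant wherever the examples all agree or run out.
    import random
    from itertools import product

    def learn(examples, var=0):
        """examples: list of (input_bits_tuple, output_bit) pairs."""
        num_vars = len(examples[0][0])
        outputs = {out for _, out in examples}
        if len(outputs) == 1:            # all observed outputs agree: speculate
            return outputs.pop()
        if var == num_vars:              # contradictory examples (shouldn't happen)
            return max(outputs)
        lo = [(x, y) for x, y in examples if x[var] == 0]
        hi = [(x, y) for x, y in examples if x[var] == 1]
        if not lo or not hi:             # no evidence for this split: skip the var
            return learn(lo or hi, var + 1)
        return (var, learn(lo, var + 1), learn(hi, var + 1))

    def evaluate(node, bits):
        while isinstance(node, tuple):   # internal node: (var, low, high)
            var, low, high = node
            node = high if bits[var] else low
        return node

    # Try to recover a 4-input majority function from 12 random IO examples,
    # then check the guess against the full 16-row truth table.
    target = lambda bits: int(sum(bits) >= 2)
    random.seed(0)
    examples = []
    for _ in range(12):
        bits = tuple(random.randint(0, 1) for _ in range(4))
        examples.append((bits, target(bits)))
    model = learn(examples)
    wrong = sum(evaluate(model, bits) != target(bits)
                for bits in product((0, 1), repeat=4))
    print(f"{wrong} disagreements out of 16 possible inputs")

With only 12 of the 16 rows observed, the guess may or may not nail the missing rows, which is exactly why the interesting part of the paper would be how they get a provable accuracy lower bound at CPU scale.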
Now, BSD (or whatever this "Binary Speculation Diagram" thingy is) might be interesting. Who knows? New BDD variants are discovered all the time; it's an exceptionally interesting and useful field, and necessary to advance the state of the art of CPU design, synthesis, and testing. Furthermore, this is exactly the kind of technology I'd expect hardcore chip designers to be using (it's obviously a great technique). But... it's industry standard. This is what CPU researchers have been studying and experimenting with every day for the past 40+ years, I kid you not.
------------
BTW: ROBDDs (and all the data structures based on BDDs) are awesome. I'd love to divert this topic and talk about ROBDDs, their performance characteristics, #P-complete problems, etc. etc. But... it's not AI. It's automated theorem proving: exhaustive, 100%-accurate search that produces provably correct designs.
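Since I brought up #P: here's one last quick toy (again mine, not from a textbook or the paper) showing the kind of thing that makes people love these diagrams. Counting satisfying assignments is #P-complete for general circuits, but once the function sits in a reduced ordered BDD, the count falls out of one memoized walk over the graph.

    # Toy ROBDD model counting: #SAT is #P-complete in general, but on a
    # reduced ordered BDD it is a single memoized traversal of the graph.
    from itertools import product

    def build(func, num_vars, var=0, prefix=()):
        """Naive ROBDD build by Shannon expansion; terminals are 0/1 ints,
        internal nodes are (var, low, high) tuples (value-equal subgraphs
        compare equal, which is enough sharing for counting)."""
        if var == num_vars:
            return int(func(prefix))
        low = build(func, num_vars, var + 1, prefix + (0,))
        high = build(func, num_vars, var + 1, prefix + (1,))
        return low if low == high else (var, low, high)   # drop redundant tests

    def level(node, num_vars):
        return node[0] if isinstance(node, tuple) else num_vars

    def count(node, num_vars, memo):
        """Satisfying assignments over variables level(node)..num_vars-1."""
        if node in (0, 1):
            return node
        if node not in memo:
            var, low, high = node
            memo[node] = sum(
                2 ** (level(child, num_vars) - var - 1) * count(child, num_vars, memo)
                for child in (low, high))   # skipped levels are free choices
        return memo[node]

    def count_models(root, num_vars):
        if root in (0, 1):
            return root * 2 ** num_vars
        return 2 ** level(root, num_vars) * count(root, num_vars, {})

    # "At least 5 of 8 inputs are set": BDD count vs. brute force.
    n = 8
    f = lambda bits: sum(bits) >= 5
    root = build(f, n)
    brute = sum(f(bits) for bits in product((0, 1), repeat=n))
    print(count_models(root, n), brute)   # both should print 93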