# Will there ever be a need for a 128-bit CPU in your computer?



## qubit (Nov 25, 2013)

This question applies equally to desktops and portable devices of all kinds.

The debunking of that 128-bit ARM CPU story got me wondering whether there will _ever_ be a need for a 128-bit general-purpose CPU of any architecture (x86, ARM, MIPS, etc.), no matter how advanced computers become.

Think about it: the main benefit of today's 64-bit CPUs over their 32-bit predecessors is not the enlarged word size but the wider address space, which allows an absolutely humongous amount of memory to be addressed. This isn't going to run out in the foreseeable future, if ever. That leaves the number-crunching capability of 128-bit CPUs as the only advantage: they could be twice as quick, since they handle twice as much data in one go. A good analogy is painting a wall with a brush that's twice as wide: it takes half the time to finish.

For most kinds of programs, a wider word size makes no difference at all, especially where the data values are small, such as a word processor handling single-byte characters. Only tasks requiring certain kinds of intense maths, such as cryptography (e.g. RSA/SSL, bitcoin), perhaps CAD, and other maths-heavy jobs like calculating pi, would see a benefit. And in those instances, graphics cards have proved to be very capable number crunchers, removing much of that benefit. (Moving memory blocks about would be twice as quick though, which could make things noticeably faster if there are many large blocks to move, or one really big one.)
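
To make the number-crunching point concrete, here's a rough Python sketch (illustrative only, not any real CPU's microcode) of how a 64-bit machine has to split one 128-bit addition into two chained 64-bit adds with a carry, which a 128-bit ALU would do in a single operation:

```python
MASK64 = (1 << 64) - 1

def add128(a_hi, a_lo, b_hi, b_lo):
    """Add two 128-bit values held as (hi, lo) pairs of 64-bit limbs,
    the way a 64-bit CPU must: low halves first, then propagate the carry."""
    lo = (a_lo + b_lo) & MASK64
    carry = (a_lo + b_lo) >> 64          # 1 if the low addition overflowed
    hi = (a_hi + b_hi + carry) & MASK64
    return hi, lo

# 2^64 - 1 plus 1 carries into the high limb, giving the 128-bit value 2^64:
hi, lo = add128(0, MASK64, 0, 1)
# hi == 1, lo == 0
```

Big-integer work like RSA is essentially long chains of these limb-by-limb operations, which is why a wider ALU (or a GPU doing many limbs in parallel) helps there and almost nowhere else.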

Also, I suspect that quantum computers using qubits could become mainstream in the not too distant future, removing the need for ever more powerful classical CPUs, although I don't think classical computers will ever die out.

For these reasons, I'm going to hazard that we'll never see a general purpose 128-bit CPU.

What do you think?


----------



## 15th Warlock (Nov 25, 2013)

Yes! 640K will never be enough!!!1!


----------



## qubit (Nov 25, 2013)

I really meant to vote not sure, but I can't change it now, lol.


----------



## Over_Lord (Nov 25, 2013)

I'd rather have Intel quad-cores for $100 instead... 64-bit ones are fine by me.


----------



## entropy13 (Nov 25, 2013)

qubit said:


> Also, I suspect that quantum computers using *qubits* could become mainstream in the not too distant future, removing the need for ever more powerful classical CPUs, although I don't think classical computers will ever die out.
> 
> For these reasons, I'm going to hazard that we'll never see a general purpose 128-bit CPU.



So in summary, you just said "I AM THE FUTURE!!! WORSHIP ME, PUNY HUMANS!!!"


----------



## Nordic (Nov 25, 2013)

Yes. Remember when 512 MB was a serious amount of RAM? A little different, but the idea remains the same.

Edit:
Better example: remember when they thought IPv4 would never run out?


----------



## FordGT90Concept (Nov 25, 2013)

qubit said:


> What do you think?


These same things were said when 32-bit debuted in the 1990s.  A 128-bit memory address space will come eventually; I just don't know if it will be in my lifetime or not.

How rapidly memory grows in density is what will ultimately determine when.  "If you build it, they will come."


----------



## Solaris17 (Nov 25, 2013)

Yeah, I will, but probably not for the same reason others will. Probably not for the e-peen either, but more to sell other people on it. Think about it: services are things that people buy or sell to make someone else's day a little easier or better (blah blah Solaris, I can think of over 9k instances where this isn't true), at least on paper. Services like DreamSpark from Microsoft, for example, offer software that costs thousands of dollars to people for free if they are going to school. This helps people (not just kids in their mid-20s) get hands-on with things they might not otherwise ever have the ability to install.

The same goes for things like 128-bit CPUs. I see it from a different perspective than most, like with the 290X: a lot of people complained about heat, but the way I look at tech advancements isn't about what's bad about them or how they compare to last year's model, but what they offer the future. 128-bit CPUs will push developers to code for them, new programmers will get to start with the latest and greatest, and memory changes like that can open up how we and our software interact with memory mapping, or maybe even memory architecture as a whole.

The question most people pose is really "will you upgrade for yourself?" The way to actually see the question, IMO, is: will this upgrade help others? Will supporting this product benefit the future technologically? Anyone's PC nowadays can play Crysis, but if you stop thinking about upgrades as a means to an end and see them as a push towards a future where Tron isn't unlikely, I think people's answers may change a bit.


Here's to the future


----------



## KainXS (Nov 25, 2013)

I would like to say yes, because I believe in magic and ponies and whatnot, so I will in the poll. But there will have to be some breakthroughs in addressing and production first. Might it be possible in the future? Yes, and maybe we will need it for some reason, but I don't see a point to having a 128-bit CPU right now, and there's not really anything beyond that, since a 256-bit CPU would be an impossible dream right now.


----------



## radrok (Nov 25, 2013)

I voted yes.

Still, we need software to catch up with 64-bit instructions before making any other kind of jump.

Most consumer programs are still 32-bit, and this will be the norm for a long time. Can you remember how long ago 64-bit CPUs started selling?


----------



## FordGT90Concept (Nov 25, 2013)

https://en.wikipedia.org/wiki/64-bit_computing#64-bit_processor_timeline

The AMD64 instruction set we're using today debuted in 2003 with the AMD Athlon 64 (Clawhammer).  It took 10 years for AMD64 to become mainstream, and it will probably take another 10 years before 32-bit programs become a rarity (like 16-bit did in the 2000s).

Pretty much all commercial programs are available for AMD64 except games.  Games are rapidly going to catch up with PCs now that PS4 and Xbone are AMD64.


----------



## Mussels (Nov 25, 2013)

of course there will be, one day.


----------



## The Von Matrices (Nov 25, 2013)

qubit said:


> I suspect that quantum computers using qubits could become mainstream in the not too distant future, removing the need for ever more powerful classical CPUs, although I don't think classical computers will ever die out.



I absolutely agree, which is the reason I answered "No".  There is no way that in 30 years the world will still be looking at transistors (much less silicon transistors) as the primary method of computation.  Moore's law is slowly coming to an end, and the only way to keep up the pace of technological progress is a paradigm shift that reinvents the fundamentals of computing.


----------



## FordGT90Concept (Nov 25, 2013)

But even with quantum computers, they still need memory.


----------



## The Von Matrices (Nov 25, 2013)

You're assuming that the computing world will remain in binary.  Even modern flash memory (MLC and TLC) doesn't store data in native binary.


----------



## FordGT90Concept (Nov 25, 2013)

Everything can be fundamentally reduced to binary--even the human genome (it's 2 bits per base).  "Native binary" is a made-up term.  Hard drives are just magnetic fields, but by switching the polarity of the magnetism, regions of the metal platters can be made to retain values on demand.  Binary is a simple proposition, which is why it is so popular in computing.  That isn't going to change in our lifetimes.
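
The genome example is easy to demonstrate: any four-symbol alphabet maps onto 2 bits per symbol. A small illustrative Python sketch (the particular encoding table is arbitrary, chosen just for this example):

```python
# Pack a DNA string into 2 bits per base, and unpack it again.
CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}

def pack(seq):
    n = 0
    for base in seq:
        n = (n << 2) | CODE[base]   # shift in 2 bits per base
    return n

def unpack(n, length):
    bases = []
    for _ in range(length):
        bases.append("ACGT"[n & 0b11])  # read back 2 bits at a time
        n >>= 2
    return "".join(reversed(bases))

unpack(pack("GATTACA"), 7)  # round-trips to "GATTACA" in 14 bits
```

The same trick works for any finite set of states, which is the sense in which everything "reduces to binary".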


----------



## qubit (Nov 25, 2013)

Indeed, Ford.

Binary is the smallest and simplest number base possible and the most robust for building a digital computer with. Regardless of computer architecture this fundamental principle won't change.


----------



## The Von Matrices (Nov 25, 2013)

Binary is a set of two states.  In computers, there is a single threshold value of the data medium that determines the difference between a "1" and a "0", whether it's magnetic field polarity, electrical potential, electrical current, etc.  However, while it's great to think about binary theoretically, binary is frequently not the most efficient way to operate.  In flash memory, MLC is a quaternary system and TLC is an octal system.  Sure, it can be converted to binary information, but the method of data storage doesn't fit the fundamental definition of binary states.  By that definition, the human genome isn't fundamentally binary either, since it is composed of 4 states (it's quaternary), but as you said, each quaternary state can be converted to a 2-bit representation.

Thus my argument is that while it's easy to think in binary, it might be inefficient to produce a 128-bit processor that works in binary.  Instead, it might be a better idea to make a processor that runs on a quaternary system and is thus only 64 digits wide, or an octal system around 43 digits wide.  It would be functionally equivalent to a 128-bit binary processor, but it would not be 128 digits wide.  Today that seems like a radical proposition, but by the time 128-bit addressing is actually needed, I doubt it will be as radical.
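
The arithmetic behind that equivalence is worth spelling out: a 64-digit quaternary machine spans exactly the same range as a 128-bit binary one, while octal needs about 43 digits. A quick Python check:

```python
# 64 quaternary digits cover exactly the 128-bit range:
assert 4**64 == (2**2)**64 == 2**128

# Octal doesn't divide evenly: 42 digits fall short, 43 overshoot.
assert 8**42 < 2**128 < 8**43
```

So the "width" of a machine only means something relative to its digit alphabet; the information capacity is what's comparable.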

Ternary computers have been built and were even considered "the future" in the early days of electronic computing.  There are actually many proposed types of quantum computers that don't operate in binary; for example, instead of qubits, quantum computing might use five-state "qudits".  Similarly, optical computers don't need to be limited to two states, because you can polarize light.  The main reason binary quantum and optical computers are popular is that they present a logical transition from binary transistors.


----------



## FordGT90Concept (Nov 25, 2013)

Hard drives are usually operated on in 512-byte sectors, or 4,096 bits.  That doesn't change the fact that all of the above have a least common denominator of bits.



> it might be inefficient to produce a 128 bit processor that works in binary


That's an oxymoron.  Bits, regardless of how many there are, are binary.  Intel, IBM, AMD, ARM, etc. could manufacture a 128-bit processor in as little as two years.  The reason they don't is that there is not enough demand to justify the expense.

The smallest expression x86 understands is 8 bits wide (a byte).  Anything smaller is padded to it.  Most new x86 processors already have 128-bit FPU registers to handle decimal (required to quantify the USA's public debt) and quad precision (useful for science).  I suspect it won't be long before ALUs are expanded to 128 bits wide.

"Five states" = 3-bit (000, 100, 010, 110, 001)  If those states can act like flags, then 5-bits.

Optical systems presently do operate on binary: bright light, dim/no light.  If you try to add wavelengths to the mix, it makes the photoreceptor design substantially more expensive.  It becomes cost prohibitive quickly which is why it isn't going anywhere fast.


----------



## Drone (Nov 25, 2013)

Someday ... somewhere yup ...

But if devs can't be arsed to make proper software for it, then who the fuck needs 128-bit? Even 64-bit ain't used to its full potential now.


----------



## Mussels (Nov 25, 2013)

Drone said:


> Someday ... somewhere yup ...
> 
> But if devs can't be arsed to make proper software for that then who the fuck needs that 128bit. Even 64bit ain't used at its full potential now.



Try playing modern games on Windows XP and see how far you get. We're well into the 64-bit era now, if only because it doubles the 2GB per-process address space limit to 4GB.


----------



## FordGT90Concept (Nov 25, 2013)

Art thou forgetting XP Pro x64? XD


----------



## Drone (Nov 25, 2013)

Mussels said:


> try playing modern games on windows XP and see how far you get. we're well into the 64 bit era now, simply because it doubles the 2GB address space limit to 4GB.


Playing modern video games is not problem #1, and what's it got to do with XP anyway? There's an x86 version of every other operating system.


----------



## The Von Matrices (Nov 25, 2013)

FordGT90Concept said:


> That's an oxymoron.  Bits, regardless of how many there are, are binary.  Intel, IBM, AMD, ARM, etc. could manufacture a 128-bit processor in as little as two years.  The reason they don't is that there is not enough demand to justify the expense.
> 
> The smallest expression x86 understands is 8 bits wide (a byte).  Anything smaller is padded to it.  Most new x86 processors already have 128-bit FPU registers to handle decimal (required to quantify the USA's public debt) and quad precision (useful for science).  I suspect it won't be long before ALUs are expanded to 128 bits wide.



_I think this is where our confusion lies._  The conventional way to define the "bit width" of a processor is by the minimum width of _all_ units, and that is what I have been using.  Usually this comes down to the width of the memory address space, since that is the last to be enlarged.  However, you're referring to the width of _any_ computational unit in the processor.  I hope you realize that there have always been processors with wider computational units than the memory address space, but they weren't considered "wider" processors.  For example, the Pentium processor with MMX had a 64-bit FPU.  By your definition it would be a 64-bit processor, but because it had a 32-bit memory address space, everyone else considered it a 32-bit processor.  I agree with you that 128-bit-wide functional units in the processor are useful in the near term, and I hope you agree with me that 128-bit memory addressing won't be needed until far in the future, considering how long it will be until any single computer has more than 16 exabytes of memory.  Supercomputers will reach that capacity sooner, but since they don't have a shared memory space, 128-bit addressing won't be that important there.



FordGT90Concept said:


> Hard drives are usually operated on in 512 byte sectors or 4,096-bits.  Doesn't change the fact that all of the above have a least common denominator of bits.



You're confusing the number of states with higher-level organization.  Hard drives still store data in binary (opposite poles) and are only organized into higher-order groups of sectors in order to simplify hard drive controller design.  You could still take a microscope and identify the individual "north" and "south" magnetic states on the platters.  Although it also uses pages, common flash memory stores data in 4 or 8 states per cell and needs to be converted into binary to be used in contemporary computers.  If you look at any cell, there will not be binary data.  If some alien acquired a flash memory chip, the alien would have no idea that it represented binary information.



FordGT90Concept said:


> "Five states" = 3-bit (000, 100, 010, 110, 001)  If those states can act like flags, then 5-bits.



I am not arguing that there is some set of data that cannot be represented in binary.  Of course I know this is not true; any integer can be converted to binary.  However, computations don't have to be performed in binary.  There is no fundamental law requiring binary, but other number systems cannot be natively computed on transistors.  Once other forms of computers become prominent, binary may no longer be the preferred method of computation, and this certainly could be true considering how long it will be until any computer needs more than 16 exabytes of memory.



FordGT90Concept said:


> Optical systems presently do operate on binary: bright light, dim/no light.  If you try to add wavelengths to the mix, it makes the photoreceptor design substantially more expensive.  It becomes cost prohibitive quickly which is why it isn't going anywhere fast.



You misread my post.  I argued that polarizers would be used with optical computing, which has nothing to do with variable wavelength or intensity.  You can represent many states just by polarizing light at the source and adding filters at the receiver.


----------



## Agility (Nov 25, 2013)

I was kind of in a dilemma choosing yes or no because of how the question is worded. Probably not in our lifetime; it could even be your great-grandson's era. But yeah, the possibility will be there.


----------



## FordGT90Concept (Nov 25, 2013)

The Von Matrices said:


> For example the Pentium processor with MMX had a 64-bit FPU.  By your definition it would be a 64-bit processor, but because it had a 32-bit memory address space, everyone else considered it a 32-bit processor.


Absolutely not.  I used the acronyms ALU and FPU explicitly for a reason.



The Von Matrices said:


> If you look at any cell there will not be binary data.  If some alien acquired a flash memory chip the alien would have no idea that it represented binary information.


Yes, they would.  All they have to realize is the states represent data.  They'd figure out how to read and write it shortly thereafter.


I think a supercomputer could be built in the next 10-20 years that has a bus fast enough to treat many CPUs as cores and have one massive pool of memory, or something like old Opterons had, where processors can share memory.  Should it happen, it would become the first processor with a 128-bit architecture (IA-128, anyone?).


----------



## Mindweaver (Nov 25, 2013)

There will most definitely be a 128-bit processor, unless some external force is applied to it.



			
Newton's First Law of Motion said:

> *Every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it.*


----------



## douglatins (Nov 25, 2013)

Well, I don't know what the future holds, or whether the current type of computing will continue onward, but assuming it does, I would think there would be 256-, 512-, 1024-, even 4096-bit architectures, considering the limits of what could be run are endless. Let's say we believe the theory that we live in a simulation: imagine the computer needed to run that?!


----------



## Sinzia (Nov 25, 2013)

I'm sure eventually we'll need something along the lines of 128-bit to get more memory.
640K is enough!


----------



## lilhasselhoffer (Nov 25, 2013)

Why can't this exist now?  You've already got high-speed interconnects built into the high-end Intel and AMD CPUs.  Extrapolating a tiny bit, the interconnects could be designed in such a way as to link two physical processors into one effective unit.  Two 64-bit buses could effectively act as one 128-bit bus, though this would take some substantial rework.

As a mild refresher, Moore's law relates to transistor count.  It doesn't relate to storage density or the bus width of computers.  There is no law governing incremental increases in bus width.

Now, do I think we'll see it in computers?  Absolutely, and not in the vague future; I see it in the next 10 years.  Let's think about the history, limited to 8-, 16-, 32-, and 64-bit processors.  The start of 8-bit is somewhere in the '70s (I can't give more specifics, because having 8-bit processors and seeing them as useful are two very different things).  16-bit processors started in the mid '80s.  Next, 32-bit processors came into their own around the mid '90s.  64-bit processors are finally being adopted (between regular availability in gaming and available programming), but it's the early 2010s.  So we've got roughly 10-15 years for each bit width to be generally adopted and substantially utilized.  Viewing this from the historic track record, we're looking at 10-30 years from now as when 128-bit buses are realistically going to be adopted.

The real question is: "Will these buses mean anything to calculation speed by the time we need them?"  Having such a substantial bus in the binary domain means more, and more complex, data can be dealt with.  Quantum computing could effectively compress multiple calculations into a few operators, making the speed of data processing several times greater than the ability of small buses to deliver data.  Effectively, the processing would require a wider bus just to keep it fed with data.  At this point the bus is no longer relevant to computation, only whether it can deliver enough data to keep the processor busy.


I think this is where we're having problems understanding each other.  Decoupling the bus from the processor is difficult, because there's no way this model works in our current computational models.  Quantum computing is crazy in that it breaks all of our current understanding, but it will require some legacy pieces of technology to work.  Data storage, as we currently know it, is only binary.  Collections of binary data are structured such that they aren't physically separable, but for the sake of fidelity we only see two states.  That kind of limited understanding is what will eventually bar quantum computing from progressing, and is why we cannot decouple the two without a fundamental shift in our understanding.


Compressing all of this into "Other" seems like a waste.  I think we'll see it, but it won't matter in the same way it does today.


----------



## The Von Matrices (Nov 25, 2013)

FordGT90Concept said:


> Absolutely not.  I used the acronyms ALU and FPU explicitly for a reason.



You're still stretching the definition of 128-bit, since the most commonly used definition would require that _all_ parts of the processor be 128 bits wide, including the memory addresses, which I hope you agree is not a limitation now, nor will be in the near future.



FordGT90Concept said:


> I think a super computer could be built in the next 10-20 years that has a bus fast enough to treat many CPUs as cores and have one massive pool of memory or something like old Opterons had where processors can share memory.  Should it happen, it would become the first processor with a 128-bit architecture (IA-128 anyone?).



I doubt this will happen.  The node interconnect will still be a major limitation to supercomputer efficiency and the types of code that the supercomputer can run.  Communication technology would have to improve even more rapidly than computational technology, which seems like a stretch to imagine given the exact opposite has occurred in the past.



lilhasselhoffer said:


> The real question is; "Will these busses mean anything to calculation speed by the time we need them?"  Having such a substantial bus in the binary domain means more, and more complex, data can be dealt with.  Quantum computing could effectively compress multiple calculations into a few operators, thereby making the speed of data processing several times that of the ability for small busses to deliver data.  Effectively, the processing would require a higher sized bus, just to keep it fed with data.   At this point the bus is no longer of relevance to computation, only whether that bus can deliver enough data to keep the processor busy.
> 
> I think this is where we're having problems understanding each other.  Decoupling the bus from the processor is difficult, because there's no way this model works in our current computational models.  Quantum computing is crazy in that it breaks all of our current understanding, but will require some legacy pieces of technology to work.  Data storage, as we currently know it, is only binary.  Collections of binary data are structured such that they aren't physically separable, but for the sake of fidelity we only see two states.  That kind of limited understanding is what will eventually bar quantum computing from progressing, and is why we cannot decouple the two without a fundamental shift in our understanding.
> 
> Compressing all of this into "Other" seems like a waste.  I think we'll see it, but it won't matter in the same way it does today.



Thank you for this reply; I completely agree with you.  One of the major shortfalls of most of the technology predictions in these forums is the assumption that the world will continue to incrementally advance existing technological paradigms for the foreseeable future.  I voted "no" for that reason, but "other" is just as good a response.


----------



## Black Panther (Nov 25, 2013)

Definitely yes. In my relatively short life, CPUs went from the 80286 being 16-bit, I think, to 32-bit becoming indispensable just a couple of years ago.

That was when I started getting interested in computing, because CPUs were 8-bit before, and even lower before that...

Progress is happening faster all the time.

I envisage that within 5 years or less we'll be having 128-bit. And considering the improvement rates we're getting, within yet another couple of years we'd be having 512 and then 1024.


----------



## BiggieShady (Nov 25, 2013)

Ah, qubit's daily poll ... My answer is yes, and not only for memory addressing but also for increased precision. Today, double precision means 64-bit.


----------



## The Von Matrices (Nov 25, 2013)

I still think that qubit should rephrase this poll, since "128-bit" can mean a lot of things.  Does he mean 128-bit memory addressing (which is what I assume), or any part of the processor being 128-bit?  128-bit memory buses and FPUs have been around for years, so by the latter definition the question is meaningless.



Black Panther said:


> I envisage that within 5 years or less we'll be having 128 bit.



I doubt anyone will need to address 16 exabytes of memory within 5 years.  The highest-end Intel Xeon E7 can address 4 TB of memory at the moment.  If we're optimistic and scale this by Moore's law, the highest-end computer will reach 16 EB in 44 years.
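
The 44-year figure follows directly from the numbers given, assuming one doubling every two years:

```python
import math

current = 4 * 2**40      # 4 TB addressable on today's top Xeon E7
target = 2**64           # 16 EB, the 64-bit limit
doublings = math.log2(target / current)   # how many capacity doublings needed
years = doublings * 2                     # Moore's law: roughly one doubling per 2 years
# doublings == 22.0, years == 44.0
```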

Where's Anand Chandrasekher when you need him?


----------



## newtekie1 (Nov 25, 2013)

I would make the argument that if Windows were designed properly, we wouldn't really need 64-bit processors right now, or we'd just be starting to need them in the mainstream desktop world in the last year or so. Forget about 128-bit.


----------



## FordGT90Concept (Nov 25, 2013)

The Von Matrices said:


> You're still stretching the definition of 128-bit since the most commonly used definition would indicate that _all _parts of the processor must be 128-bits wide including the memory addresses, which I hope you agree is not a limitation now or will be in the near future.


Memory address space is only one component of 128-bit computing.  Even so, most AMD64 processors today can physically access only 48 bits' worth of memory.  A 128-bit processor could be released today with 64-bit memory addressing, and that wouldn't make it any less of a 128-bit processor, as long as the instruction set supports up to 128-bit memory addressing.

As I said previously, "if" is not the question; the question is "when."  I think "when" will be determined by needs in the supercomputer space.  AMD64 happened because x86's memory limitations were becoming problematic for large databases.  128-bit will happen when some other urgent need isn't being fulfilled by AMD64.  I believe that, in the pursuit of higher efficiency (because of ARM), the next major architectural change will come soon and will add more registers.  Making the processor 128-bit would likely be a component of achieving that end.


----------



## qubit (Nov 25, 2013)

The Von Matrices said:


> I still think that qubit should rephrase this poll, since "128-bit" can mean a lot of things.  Does he mean 128-bit memory addressing (which is what I assume), or any part of the processor being 128-bit?  128-bit memory buses and FPUs have been around for years, so by the latter definition the question is meaningless.


No, the question is perfectly worded and it's clarified further by my OP. Note that the word size of a CPU is ostensibly defined by the size of its ALU anyway, not the address bus.

Great examples are the ancient 6502 and Z80 CPUs from the 1970s. The ALU on these was 8 bits wide, so they worked on 8-bit values at a time and hence were 8-bit CPUs. The address bus was actually twice as wide at 16 bits, yet they were still 8-bit CPUs.
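
For illustration, here's how an 8-bit CPU like the 6502 forms a 16-bit address out of two 8-bit bytes (low byte first, the order the 6502 stores addresses in memory); the ALU never touches more than 8 bits at once:

```python
def make_address(lo, hi):
    """Combine two 8-bit bytes into one 16-bit address, low byte first."""
    assert 0 <= lo <= 0xFF and 0 <= hi <= 0xFF
    return (hi << 8) | lo

make_address(0x00, 0xC0)  # -> 0xC000
make_address(0xFF, 0xFF)  # -> 0xFFFF, the top of a 64 KB address space
```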

The floating-point component of today's CPUs is really a separate unit bolted onto the same die (think of the 386, which could have a 387 coprocessor attached in a separate socket; it was integrated from the 486 onwards), so even if it works on 128-bit values, the CPU is still considered 64-bit, as that's the size of the ALU.

Also, the forum doesn't allow the poll to be edited even if I wanted to.


----------



## FordGT90Concept (Nov 25, 2013)

32-bit processors can have 64-bit ALUs.  See Atom.  The wider ALU allows the processor to handle int64 and uint64 operations much faster.  Try to install an AMD64 operating system on it though and it will fail miserably because it doesn't implement the full AMD64 instruction set (extra registers, wider memory addresses, etc.).


FYI, N64 had a 64-bit CPU with a 32-bit bus.


----------



## BiggieShady (Nov 25, 2013)

When you look at extended instruction sets like SSE or AVX, we are already at 128-bit (SSE), 256-bit (AVX and AVX2), even 512-bit (AVX-512) wide instructions ... I guess we consider a CPU really 128-bit if it does arithmetic on 128-bit vector data in a single clock.


----------



## Disruptor4 (Nov 25, 2013)

I think there will be in the future, but maybe not in our lifetime (I'm 23 and still don't think it'll happen in my lifetime, provided I live a long and prosperous life!)


----------



## qubit (Nov 25, 2013)

BiggieShady said:


> When you look at extended instruction sets like SSE or AVX, we are already at 128-bit (SSE), 256-bit (AVX and AVX2), even 512-bit (AVX-512) wide instructions ... I guess we consider a CPU really 128-bit if it does arithmetic on 128-bit vector data in a single clock.


While these instructions are super-wide, isn't the result always 64-bit? Effectively this would keep the CPU as 64-bit. I'm not challenging here, I just don't know much about these instructions.


----------



## W1zzard (Nov 25, 2013)

The data size of integral arguments to the ALU defines the bitness of a processor architecture.

So on an 8-bit processor, the biggest ADD operation took 8-bit operands; likewise 32-bit on 32-bit machines and 64-bit on 64-bit machines. The CPU guarantees that these basic operations complete atomically.
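
That ALU width can be sketched in a couple of lines: the result of the basic ADD wraps modulo 2^width, so the width fixes the biggest value one atomic operation can handle (a toy model, not any real instruction set):

```python
def alu_add(a, b, width):
    """Model a fixed-width ALU ADD: the result wraps modulo 2**width."""
    mask = (1 << width) - 1
    return (a + b) & mask

alu_add(0xFF, 1, 8)    # -> 0 on an 8-bit machine (overflow wraps)
alu_add(0xFF, 1, 16)   # -> 256 on a 16-bit machine (fits in one operation)
```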

Today's AVX instruction set extensions make these, for the sake of the discussion in this thread, 128- and 256-bit processors.

There is no reason we will ever need general-purpose 128-bit arithmetic operations, because the numbers in real life, and so in a typical computer, are relatively small (64-bit is plenty).

64-bit architectures take a significant per-instruction performance hit vs. 32-bit, because instructions and data take up more space, require more memory bandwidth, and produce larger executables. On the other hand, they can process numbers twice as big. But to do 1+1 = 2, or to perform 99.99% of the math in your life, you'll be fine with 64-bit numbers. Of course, the exponential growth of silicon performance makes the performance hit less relevant over time, just like you don't care about EXE size on your HDD anymore today.

Today's 64-bit Intel CPUs can only address 48-bit memory, btw: cost savings, because nobody will have that much memory in one machine for the foreseeable future. More $$ for Intel.


----------



## W1zzard (Nov 25, 2013)

qubit said:


> While these instructions are super-wide, isn't the result always 64-bit?


http://software.intel.com/sites/pro...er_c/intref_cls/common/intref_avx_details.htm

first paras

for intel instruction set throughput and latencies:
http://www.intel.com/content/dam/ww...4-ia-32-architectures-optimization-manual.pdf
Appendix C


----------



## The Von Matrices (Nov 25, 2013)

W1zzard said:


> Today's 64-bit Intel CPUs can only address 48-bit memory btw, cost savings because nobody will have that much memory in one machine for the foreseeable future. more $$ for intel.



In a sense it's both 48-bit and 64-bit at the same time.  The actual addresses are 64-bit, but the specification of AMD64 only allows the first 48 bits to be used; bits 48-63 are just a copy of bit 47.  The reason this is done is because it makes memory address math much faster (you only need to compute 48 bits instead of 64).  So it's not really a cost saving measure, it's a performance enhancing measure that makes sense considering it will not restrict the memory usage of current systems.
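Sketched in Python (mask arithmetic for illustration, not actual hardware): an address is canonical exactly when bits 63..48 all match bit 47.

```python
# Canonical-form check for AMD64 addresses: bits 63..48 must equal bit 47.
def is_canonical(addr: int) -> bool:
    """True if the 64-bit address is in AMD64 canonical form."""
    bit47 = (addr >> 47) & 1
    upper = addr >> 48                      # bits 63..48
    return upper == (0xFFFF if bit47 else 0)

assert is_canonical(0x0000_7FFF_FFFF_FFFF)      # top of the lower half
assert is_canonical(0xFFFF_8000_0000_0000)      # bottom of the upper half
assert not is_canonical(0x0000_8000_0000_0000)  # inside the non-canonical "hole"
```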


----------



## W1zzard (Nov 25, 2013)

The Von Matrices said:


> it's a performance enhancing measure that makes sense considering the memory usage of current systems.


if the memory controller was implemented as 64-bit instead of 48-bit, there would be no performance difference. But using 48-bit math instead of 64-bit (like you describe) saves you transistors


----------



## The Von Matrices (Nov 25, 2013)

W1zzard said:


> using 48 bit math instead of 64-bit (like you correctly describe), saves you transistors



I guess it's just a matter of perspective.  You see it as "they can remove excess transistors and make a bigger profit" whereas I see it as "they can reallocate those transistors toward improving performance in other ways".


----------



## qubit (Nov 25, 2013)

Thanks W1zz, that cleared it up nicely about those extended instructions. 

I knew about the 48-bit address bus though. This simply means that the top 16 bits in the address register are always zero, or "dummies" like Von described, to pad out to the 64-bit architecture.


----------



## W1zzard (Nov 25, 2013)

yes, agreed, same thing. but your earlier post "memory translation much faster" is not correct.

qubit: google wiki canonical form addresses for slightly more reading


----------



## W1zzard (Nov 25, 2013)

regarding the "atomic" in my post above:

obviously a 32-bit application in all higher level programming languages can do 64-bit math on variables, but the operation is performed in multiple steps, so it's possible that another thread sees an intermediary result (in the memory cells) which doesn't properly reflect the outcome of this operation. with 64-bit operations that's guaranteed to not happen = atomic. so atomic means something like "in one go". 32 bit multithreaded apps need to take special care to synchronize accesses to larger than 32-bit data types.
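Sketched in Python (a deliberately simplified simulation of a 64-bit store done as two 32-bit halves; real interleaving is nondeterministic), the torn value looks like this:

```python
# Simulating a non-atomic 64-bit write as two 32-bit stores. Between the two
# stores, a concurrent reader can observe a "torn" value that is neither the
# old value nor the new one.

def write_64_in_two_steps(mem: dict, value: int):
    """Yields after the first 32-bit half-store so a 'reader' can peek in between."""
    mem["lo"] = value & 0xFFFFFFFF
    yield                                   # <-- another thread could read here
    mem["hi"] = value >> 32

def read_64(mem: dict) -> int:
    return (mem["hi"] << 32) | mem["lo"]

mem = {"lo": 0, "hi": 0}
writer = write_64_in_two_steps(mem, 0x00000002_00000001)
next(writer)                     # first half stored, writer "interrupted"
torn = read_64(mem)              # reader sees new lo but stale hi
assert torn == 0x00000001        # neither old (0) nor new (0x2_00000001)
try:
    next(writer)                 # writer finishes the second half
except StopIteration:
    pass
assert read_64(mem) == 0x00000002_00000001   # only now is the value consistent
```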


----------



## qubit (Nov 25, 2013)

W1zzard said:


> qubit: google wiki canonical form addresses for slightly more reading



I learned something today.  I'd not heard the term canonical form addresses before. I see that it allows a seamless expansion of the address space to the full 64 bits as hardware evolves over time. Very clever.

I've not looked at CPU architecture at this level of detail for some time and it's quite refreshing to do so again.

Here are the details for anyone else who wants to learn about this: http://en.wikipedia.org/wiki/X86-64#Canonical_form_addresses

EDIT:



W1zzard said:


> regarding the "atomic" in my post above:
> 
> obviously a 32-bit application in all higher level programming languages can do 64-bit math on variables, but the operation is performed in multiple steps, so it's possible that another thread sees an intermediary result (in the memory cells) which doesn't properly reflect the outcome of this operation. with 64-bit operations that's guaranteed to not happen = atomic. so atomic means something like "in one go". 32 bit multithreaded apps need to take special care to synchronize accesses to larger than 32-bit data types.



This sounds like the kind of synchronization problem one must be careful to avoid with the use of flags. I'm sure many obscure program bugs are caused by missing stuff like this.


----------



## BiggieShady (Nov 26, 2013)

qubit said:


> While these instructions are super-wide, isn't the result always 64-bit? Effectively this would keep the CPU as 64-bit. I'm not challenging here, I just don't know much about these instructions.



You can have a full 256 bits for a single scalar value ... as long as it is an integer.
Doubles are packed in vectors ... when used as scalars, instructions read the lowest 64 bits.


----------



## mare1980k1 (Mar 22, 2014)

Actually there will be. That's a fact. And we are all going to be here when it happens. Standard processor word length will be 4Gigabyte, CPU speed will be measured in MHz and our RAM memory will have up to 8Terabyte. It may sound sooo strange, but by the end of the year 2030, it WILL be like that  Can't wait ^_^ Of course, computers will be much different than today.. it's enough to say that one unit for storing one bit data will physically be smaller than one atom. Now it's just a science fiction, but on the other hand it's absolute fact that it will be like that in 15 years.


----------



## johnspack (Mar 23, 2014)

I say yes,  and I hope soon.  386DX was 32 bit,  Pentium was 32/64 bit,  we are still at 64 bit...  not a lot of progression.  Either go Risc,  or go big.....


----------



## Mussels (Mar 23, 2014)

johnspack said:


> I say yes,  and I hope soon.  386DX was 32 bit,  Pentium was 32/64 bit,  we are still at 64 bit...  not a lot of progression.  Either go Risc,  or go big.....




when abouts do you think we're going to hit the limits of 64 bit? how many years?


----------



## Solaris17 (Mar 23, 2014)

Mussels said:


> when abouts do you think we're going to hit the limits of 64 bit? how many years?


In the server world it isn't going to take long at all.


----------



## lilhasselhoffer (Mar 23, 2014)

Before we begin; necro thread anyone?



johnspack said:


> I say yes,  and I hope soon.  386DX was 32 bit,  Pentium was 32/64 bit,  we are still at 64 bit...  not a lot of progression.  Either go Risc,  or go big.....



There aren't two types of cars.  There aren't two types of planes.  There aren't two types of pets.  We have different types of processors for a reason.

RISC is an instruction set, not a reference to bus sizes.  ARM is in fact releasing a 64 bit RISC processor.

Everything else wrong with this statement can be summed up simply: you don't have the slightest inkling of what we have today.  Most programs use less than 4 GB of addressable memory.  Most programs still run on 32-bit instructions.  No hardware can currently support more than 64 GB of RAM per CPU.  The benefits to having a 128-bit wide bus are zero, as we can't fully use a 64-bit one.  Considering that traditional computing isn't likely to last another two decades, I'd be hard pressed to say 128-bit will ever come about.  It'd be like someone in 1990 conjecturing that the CPU and PCH would be the only two chips a motherboard really required to run in 2010.  It would have been insane at the time, yet it seems like a reasonable conclusion now.

My money is that by the time a 128 bit bus is useful our paradigm for computing will have outmoded the notion.



mare1980k1 said:


> Actually there will be. That's a fact. And we are all going to be here when it happens. Standard processor word length will be 4Gigabyte, CPU speed will be measured in MHz and our RAM memory will have up to 8Terabyte. It may sound sooo strange, but by the end of the year 2030, it WILL be like that  Can't wait ^_^ Of course, computers will be much different than today.. it's enough to say that one unit for storing one bit data will physically be smaller than one atom. Now it's just a science fiction, but on the other hand it's absolute fact that it will be like that in 15 years.



I'm not even sure what the heck I just read.

CPUs are currently measured in GHz, not MHz (that's a factor of 1000 greater, or 1 GHz = 1000 MHz).  They have not been measured in MHz since the 90s.

A word is a standardized number of bits.  It, by definition, cannot change length.

We could theoretically address 2^64 bits in the current generation of hardware, which translates to 2 Exabytes of memory.  This is significantly more than 8 TB.

A bit being smaller than an atom?  I don't know how this relates to anything.



Edit:
64 bit would offer 2^64 bits of address space.  We can currently use about 2^16.  Tell me why we need to start worrying about 2^128 when we've still got 2^48 untapped.

Edit:
Solaris17 brings up an excellent point.  Servers do access significantly more resources.  If you could link 2-4 processors you'd either need to treat them as independent entities, or use a significant portion of the addressable space.  I don't know of anything that can currently do this, but it'd be foolish to not look toward Google or Amazon to pioneer something like this.

Of course, the extra ram is but one component that needs a 128 bit width.  Processors themselves would still be running the same smaller operations, with most of the extra width full of place holder values.

I can't maintain this argument for long, as it leads back to the more distant future of computing.  Extra RAM sounds great, but it's something we shouldn't be looking at yet.  We've still got plenty of room to grow in the 64 bit hardware iteration.


----------



## patrico (Mar 23, 2014)

yeah id say one day there will be, unless theres a huge breakthrough in quantum computing which changes the game and the way a cpu works rendering old  technology  and binary a thing of the past and we'll have to emulate bits and bytes using the new tech


----------



## micropage7 (Mar 23, 2014)

Do you think we will we ever see a 128-bit general purpose CPU?

the answer mostly yes but i think it wont come in short time


----------



## NC37 (Mar 23, 2014)

Already had AltiVec and that was 128-bit over a decade ago. But programs had to be coded to take advantage of it, and the entire CPU wasn't 128-bit. 

Most professional programs were coded for it, but the OS and everyday stuff, nope. It was the only way Apple could get advantages over Windows back when it launched, but those advantages eroded as IBM/Motorola let their processors slide and Intel just surpassed them.


----------



## FordGT90Concept (Mar 23, 2014)

Mussels said:


> when abouts do you think we're going to hit the limits of 64 bit? how many years?


Windows 95 was mainstream 32-bit adoption which debuted in 1995.  Windows Vista was mainstream 64-bit and that debuted 2007.  12 years difference (or 10 years if you prefer to go with XP x64).  Everything in computing tends to be exponential so 10^2 to 12^2 years from 2005/2007.  If the trends continue, we'll be seeing mainstream 128-bit processors around 2105-2151.  It won't happen in most of our lifetimes.


----------



## JunkBear (Mar 23, 2014)

FordGT90Concept said:


> https://en.wikipedia.org/wiki/64-bit_computing#64-bit_processor_timeline
> 
> The AMD64 instruction set we're using today debuted in 2003 with AMD Athlon 64 (Clawhammer).  It took 10 years for AMD64 to become mainstream and it will take probably another 10 years before 32-bit programs become a rarity (like 16-bit was in the 2000s).
> 
> Pretty much all commercial programs are available for AMD64 except games.  Games are rapidly going to catch up with PCs now that PS4 and Xbone are AMD64.



That's what I have in my backup rig, a 3200+ Clawhammer. Surprisingly it's more snappy than some modern 64-bit CPUs. I just can't figure out why.


----------



## FordGT90Concept (Mar 23, 2014)

What operating system?  XP on late single-core processors is really fast.


----------



## johnspack (Mar 23, 2014)

I thought there was supposed to be a 128bit  version of win8....  hmmm.....
and hopefully it will be a risc cpu.....


----------



## Aquinus (Mar 23, 2014)

NC37 said:


> Already had AltiVec and that was 128-bit over a decade ago. But programs had to be coded to take advantage of it, and the entire CPU wasn't 128-bit.



AltiVec could handle 128-bit mathematical operations using its 128-bit vector store plus ALU, but like most CPUs, it does not live in 128-bit memory space. In fact, Apple's PowerPC G5 was the first Apple PowerPC CPU that supported 64-bit memory addresses; before that they ran in 32-bit memory space.



lilhasselhoffer said:


> Edit:
> 64 bit would offer 2^64 bits of address space. *We can currently use about 2^16.* Tell me why we need to start worrying about 2^128 when we've still got 2^48 untapped.



I'm sitting on 5.76GB used? That's definitely more than 2^16, considering 2^16 is 16-bit memory... 64KB... I think you mean 32, not 16.


----------



## lilhasselhoffer (Mar 23, 2014)

Aquinus said:


> I'm sitting on 5.76GB used? That's definitely more than 2^16, considering 2^16 is 16-bit memory... 64KB... I think you mean 32, not 16.



You are correct, my apologies.  For some bass ackwards reason I didn't further convert from GB.  Stupid mistake on my part.


----------



## JunkBear (Mar 23, 2014)

FordGT90Concept said:


> What operating system?  XP on late single-core processors is really fast.



Yes, but I can also run Seven without problems. With a nice graphics card I can use Aero mode in Windows.


----------



## BiggieShady (Mar 23, 2014)

FordGT90Concept said:


> Windows 95 was mainstream 32-bit adoption which debuted in 1995.  Windows Vista was mainstream 64-bit and that debuted 2007.  12 years difference (or 10 years if you prefer to go with XP x64).  Everything in computing tends to be exponential so 10^2 to 12^2 years from 2005/2007.  If the trends continue, we'll be seeing mainstream 128-bit processors around 2105-2151.  It won't happen in most of our lifetimes.



That may not be correct - not enough points.
With Intel's x86 timeline, where we have more points:

4 bit cpu: 1971
8 bit cpu: 1972 (1 year)
16 bit cpu: 1978 (6 years)
32 bit cpu: 1985 (7 years)
64 bit cpu: 2004 (19 years)

The years in between are what we try to approximate with an exponential function.
Guesstimated base of the exponential function would be around 2.67:

2.67^0 = 1 year
2.67^1 = 2.67 years (more than 50% error - the 16 bit cpu sample is off, but this is the best we can do)
2.67^2 = 7.1289 years - close enough
2.67^3 = 19.034163 years - close enough

so we can expect 128-bit architecture in 2.67^4 = 50.82 years from '04.
That's about 40 more years.
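The same arithmetic, automated (a Python sketch; here the base is taken as the geometric mean of the successive gap ratios, which lands on ~2.67 as above):

```python
# Extrapolating x86 bitness doublings from the release-year gaps.
years = [(4, 1971), (8, 1972), (16, 1978), (32, 1985), (64, 2004)]
gaps = [b[1] - a[1] for a, b in zip(years, years[1:])]   # [1, 6, 7, 19]

# Successive gap ratios; their geometric mean is the growth base (~2.67).
ratios = [b / a for a, b in zip(gaps, gaps[1:])]
base = (ratios[0] * ratios[1] * ratios[2]) ** (1 / 3)

next_gap = gaps[-1] * base          # predicted years from 64-bit to 128-bit
prediction = years[-1][1] + next_gap
print(f"base ~ {base:.2f}, 128-bit around {prediction:.0f}")  # ~2055
```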


----------



## Aquinus (Mar 23, 2014)

@BiggieShady Touché! However, I think that as transistor densities increase, that might even be a pretty liberal extrapolation, but we'll see. The point is, 64-bit is here to stay in the consumer market for quite some time, because the cost of handling 64 bits' worth of addresses and having DIMMs to match is no cheap (or easy) task.

I'll agree though, 40 years seems a reasonable guesstimate unless something groundbreaking changes the game.


----------



## FordGT90Concept (Mar 23, 2014)

BiggieShady said:


> That may not be correct - not enough points.
> With Intel's x86 timeline, where we have more points:
> 
> 4 bit cpu: 1971
> ...


Remember, Itanium debuted in 2001 so it could be even sooner.  I'm thinking 128-bit processors can show up in as little as 40 years in specialized uses but mainstream support could take up to 144 years.  Even though x86-64 made its debut back in 2003, there's still a large number of 32-bit devices still being sold today.  At the same time, 16-bit devices are almost impossible to find.  I'd argue that, until 32-bit is almost completely gone, 64-bit still hasn't been mainstream adopted.


----------



## Aquinus (Mar 23, 2014)

FordGT90Concept said:


> Remember, Itanium debuted in 2001 so it could be even sooner.  I'm thinking 128-bit processors can show up in as little as 40 years in specialized uses but mainstream support could take up to 144 years.  Even though x86-64 made its debut back in 2003, there's still a large number of 32-bit devices still being sold today.  At the same time, 16-bit devices are almost impossible to find.  I'd argue that, until 32-bit is almost completely gone, 64-bit still hasn't been mainstream adopted.



Yes, but you could use 16-bit apps in 32-bit Windows and you got kind of "cut off" when 64-bit variants came around that ditched 16-bit support.

Do you suspect that 32-bit support will eventually get dropped in favor of a purely 64-bit OS, or will it take a 128-bit system that only supports 64-bit and not 32-bit to make that change (much like the transition from 32-bit Windows to 64-bit Windows)? I personally would like to see 32-bit go the way of the dinosaurs and have systems be running purely 64-bit code before 128-bit ever gets introduced. That's me though.


----------



## Solaris17 (Mar 23, 2014)

Way before 128 bit. They were already considering it with windows 8


----------



## Arjai (Mar 23, 2014)

The Von Matrices said:


> . * If some alien acquired a flash memory chip the alien would have no idea that it represented binary information.*



_If some Alien acquired a flash drive_...* Really?* We can send a tin can with a camera to Mars. An Alien is probably laughing his ass off, _Right Now,_ after reading that!
 (<not an Alien)

BTW, I voted YES. Everything else is just conjecture. The future is unknown, therefore anything is possible. Take a good look at the past ten years. Then take a look at your current computer. No way you thought it could get this good, back then. BTW, I still have a Socket A, AMD 2600+ running Win 7. It was my daily driver until about 2 years ago, long story, it's awaiting my re-emergence from homelessness, it will be Crunching again, possibly even with an upgraded MB and a 3200 Barton! OC'd of course.


----------



## FordGT90Concept (Mar 23, 2014)

Aquinus said:


> Yes, but you could use 16-bit apps in 32-bit Windows and you got kind of "cut off" when 64-bit variants came around that ditched 16-bit support.


x86-64 supports 16-bit.  Windows 64-bit does not.



Aquinus said:


> Do you suspect that 32-bit support will eventually get dropped in favor of a purely 64-bit OS, or will it take a 128-bit system that only supports 64-bit and not 32-bit to make that change (much like the transition from 32-bit Windows to 64-bit Windows)? I personally would like to see 32-bit go the way of the dinosaurs and have systems be running purely 64-bit code before 128-bit ever gets introduced. That's me though.


I doubt it.  To remove 16-bit and 32-bit would require a new 64-bit only instruction set.  It could maybe happen when 128-bit is introduced but I doubt before then.  The only way it would happen before is if 128-bit still appears distant and they can save a lot of money/power by removing the legacy support.  I suspect Intel may be tempted to do this to compete with ARM but the cost of 16-bit and 32-bit compared to 64-bit is tiny so it is very difficult to justify doing that.


----------



## Aquinus (Mar 24, 2014)

FordGT90Concept said:


> x86-64 supports 16-bit.  Windows 64-bit does not.
> 
> 
> I doubt it.  To remove 16-bit and 32-bit would require a new 64-bit only instruction set.  It could maybe happen when 128-bit is introduced but I doubt before then.  The only way it would happen before is if 128-bit still appears distant and they can save a lot of money/power by removing the legacy support.  I suspect Intel may be tempted to do this to compete with ARM but the cost of 16-bit and 32-bit compared to 64-bit is tiny so it is very difficult to justify doing that.



Sorry, I wasn't specific enough. I meant at the OS level. Do you think that Windows will abandon 32-bit before 128-bit comes around? X86_64 might be able to execute the commands, but that doesn't mean the OS has to support it, much like how 64-bit windows doesn't "support" 16-bit apps.

I guess the idea is: will they disincentivize making 32-bit apps now that 64-bit has taken a large foothold? I think they should, but this is talking from a software level and not hardware.

If we were talking about x86 itself, that's a completely different can of worms to be opening.


----------



## FordGT90Concept (Mar 24, 2014)

I think Microsoft could. 20 years from now, it will be difficult to argue 32-bit is relevant anymore.  I don't see anyone defending 16-bit these days and it was kicked to the curb 20 years ago.

There is more incentive to eliminate legacy support in software than in hardware.  Microsoft literally has to provide binaries for both versions of everything.  That's burdensome, which is why they decided to axe 16-bit back in XP x64.


----------



## johnspack (Mar 24, 2014)

It's funny,  anyone remember Itanium?  A cpu that could only run 32 bit when 16 bit was mostly predominant.  Don't know why I thought of that....  but still,  either cisc,  or risc,  128 bit is coming....


----------



## lilhasselhoffer (Mar 24, 2014)

Aquinus said:


> Sorry, I wasn't specific enough. I meant at the OS level. Do you think that Windows will abandon 32-bit before 128-bit comes around? X86_64 might be able to execute the commands, but that doesn't mean the OS has to support it, much like how 64-bit windows doesn't "support" 16-bit apps.
> 
> I guess the idea is will they disincentive making 32-bit apps now that 64-bit has taken a large foothold? I think it should, but this is talking from a software level and and not hardware.
> 
> If we were talking about X86 itself, that a completely different can of worms to be opening.




I'd conjecture that MS will kill support for 32 bit instruction sets well in advance of ever having a 128 bit processor.  The math is relatively simple here.

Right now most CPUs (in dedicated PCs) out there run either x86 or x64 instruction sets.  These instruction sets are shared between the cheapo work stations, servers, and high end gaming rigs.  The real drag on releasing the instruction set native to x86 is legacy device support and a lack of capability in the lower end of hardware.  Like it or not, Atom and previous generation gaming systems didn't have the resources to run an x64 environment.

Knowing this, cue Intel upgrading the Atom (into the Celeron) and AMD releasing the APU.  Both of these developments leverage enough resources to perform acceptably, while still offering complete x64 instruction sets.  The APUs are basically the driving force of current generation video game systems, while Celeron and the APU are duking it out in the smart devices market.  It wouldn't therefore be unreasonable to see the end of 32 bit instructions in the next 10 years.  There'd be no devices that actually require them, and the small amount of resources they free up would basically offer a free performance boost.

The introduction of 128 bit instructions is insanely unlikely in the next ten years.  I make this statement for two reasons.  Other people have shown the math related to adoption of new instructions, so read previous posts for their work.  Second, who actually needs that much math?  Right now the largest pure computing occurs on distributed networks of CPUs.  The common CPU doesn't have to deal with exceptional mathematical loads, and the people who need that computational ability have a source for it.  Whenever the average person actually needs 128 bit sized computational abilities we'll see the shift toward it.  For now, it doesn't make sense for one person to drive a train to work, when they could use a car to no ill effect.  Having the resources to buy a train and lay track doesn't mean it's a good idea.


RISC is not included above because it barely makes sense to be using a 64 bit processor today.  I've never seen a phone with more than 4 GB of RAM, and other devices using RISC architectures have even fewer resources.  Saying that it is possible, and being ready to pay $1200 for the next phone you buy, are two different things. 


Side note:
4 bit processors are functionally dead.  They might still be used as cheap DACs, but they aren't common to see today outside of specific applications.
8 bit processors are hobbyist fodder.  Hello learning platform, specifically the likes of Arduino.
16 bit processors are in a murky place.  They are used as DACs, but because of increased price, decreased features, and an obliterated code base they aren't as common as might be conjectured.
32 bit processors are ubiquitous.  Most intelligent devices running ARM are lumped in here, along with newer hobbyist hardware. 
64 bit processors are ubiquitous in the PC market. 
128 bit processors haven't ever been produced.  There have been one off systems with one component being 128 bit, but never have we seen a true 128 bit processor.




johnspack said:


> It's funny,  anyone remember Itanium?  A cpu that could only run 32 bit when 16 bit was mostly predominant.  Don't know why I thought of that....  but still,  either cisc,  or risc,  128 bit is coming....



So, who runs Itanium today?  

I remember Itanium as a substantial leap forward, that landed on a rake and wound up with a huge bruise on its face.  It ran 64 bit, when 32 bit was common: http://en.wikipedia.org/wiki/Itanium

I'm betting your current system is x86-64, despite Itanium having come on the scene as pure x64 years ago.


It's also fallacious to say "it's coming," as if that were a justification for anything.  You and I have an upcoming and inevitable death, which is coming.  The death of our sun is coming.  The extinction of our universe is coming.  These things are inevitable, while 128 bit computing is not.


----------



## FordGT90Concept (Mar 24, 2014)

Intel's problem with Itanium is that it targeted a market that didn't really exist.  If Intel reinvented the concept of Itanium to compete directly with, for example, ARM, it could really come into its own.


----------



## lilhasselhoffer (Mar 24, 2014)

FordGT90Concept said:


> Intel's problem with Itanium is that it targeted a market that didn't really exist.  If Intel reinvented the concept of Itanium to compete directly with, for example, ARM, it could really come into its own.



I concede this point.  

Itanium was a big risk, that wasn't properly supported.  If Intel threw more weight behind it we could see something great.  My apprehension is that Intel doesn't come back to many ideas after getting burnt once.


----------



## FordGT90Concept (Mar 24, 2014)

Nehalem struck twice! XD

Intel also repurposed Larrabee from a GPU concept into a HPC product.

I think Intel knows x86 is long in the tooth.  I think they also see the ominous threat from ARM.  If they aren't quietly working on a new instruction set and architecture for it, I think they know they're digging their own grave.  Intel can't rely on their process advantage forever.  Imagine if Intel reinvented Itanium into a 128-bit processor that can handle four 32-bit, two 64-bit, or one 128-bit instruction at a time.  It would be a wide processor but it could run at a low clock speed with very low voltage and heat output.


----------



## Champ (Mar 24, 2014)

Scratch


----------



## wrcousert (Aug 30, 2014)

I've seen computers go from 4K RAM to 4GB RAM in less than 30 years. That's a million-fold increase.  I think we could see similar increases in the next couple decades. Maybe sooner if we get true molecular nanotechnology - the assembler.


----------



## Mussels (Aug 30, 2014)

wrcousert said:


> I've seen computers go from 4K RAM to 4GB RAM in less than 30 years. That's a million-fold increase.  I think we could see similar increases in the next couple decades. Maybe sooner if we get true molecular nanotechnology - the assembler.




i've gone from 8 bit and 4MB on my first PC to 64 bit and 24GB on my current one.

pretty sure we'll see some more increases.


----------



## Sir B. Fannybottom (Aug 30, 2014)

No! Everyone knows you don't need that much power, just like how you don't need a hard drive more than 1GB


----------



## Satori2869 (Sep 9, 2014)

FordGT90Concept said:


> But even with quantum computers, they still need memory.


 Ah but with a quantum computer wouldn't the memory system be part of the computer? The answer would be stored in the question,  no?


----------



## TheoneandonlyMrK (Sep 9, 2014)

The Emotion Engine in the PS2 was 128-bit, wasn't it? I realise it's not the main CPU, but it was ubiquitous and general purpose.


----------



## Satori2869 (Sep 9, 2014)

FordGT90Concept said:


> But even with quantum computers, they still need memory.




But won't quantum entanglement take care of that?


----------



## BiggieShady (Sep 9, 2014)

Satori2869 said:


> But won't quantum entanglement take care of that?



Quantum entanglement effect is supposed to allow us to read a qubit without affecting it.


----------



## Aquinus (Sep 11, 2014)

I think the question is vague: are we talking about memory address widths or data register widths? As it stands right now, I don't see us needing 128-bit memory space for several decades. As for higher-precision math done by CPUs, there might be an argument that some applications doing fixed-precision math could benefit from an ALU that supports operations on integers exceeding 64 bits (a long integer) in one clock cycle. Generally speaking, those applications usually benefit from floating point math, but I don't see how either 128-bit integer ALU support or 128-bit memory space would be needed in the near future.

A lot of people say, "Well, people said that 640K of memory was plenty and now look." But I would make the argument that CPUs and memory modules were a lot simpler then, with vastly fewer transistors. It's a lot harder to cram more onto a wafer than it used to be, and we have to keep in mind that ICs can only get so big without yields going to crap, and they can only get so small before quantum tunneling, with the circuit being only ten or so atoms wide, becomes a real problem with Si-based semiconductors. All in all, progress has been good, but it's slowing, and we have to keep that in mind when attempting to predict the future. Even recent history has shown a slowdown, and it's hard to ignore that.


----------



## Toothless (Sep 11, 2014)

How is TPU not hosting online tech classes? You all are about to make my brain go ploosh.

Anywho, Intel was smacked in the face for bringing 64-bit into a 32-bit world too soon, but what if they built on the idea again? An industrial-type line of Pentiums made to take workloads differently. 

Wouldn't it be easier to have mainstream motherboards support two, say, i5s or i3s than to change the bit system to 128? Do we really need another revolution going from 64 to 128? 90% of applications I see or use still run 32-bit. I'd think it would be easier to buff up 64-bit power before bringing 128-bit into a 32/64-bit world. Where would the support be?

I have so many questions and Google has no answers for me.


----------



## Aquinus (Sep 11, 2014)

Lightbulbie said:


> I have so many questions and Google has no answers for me.


I would start by learning how memory addressing works. Without knowing that, you really can't say you know what you're talking about. Increasing the range of memory addresses only helps if you're already constrained by the width of the address bus, which is the case when you run out of addressable space (like the 4GB limit on 32-bit systems or the 64K address (not page) limit on 16-bit systems.)

Considering we're only using a mere fraction of the 64-bit address space, I think we have quite a ways to go before we start running out of memory addresses to use for memory and memory-mapped I/O.

Edit: I should also note that most "64-bit CPUs" don't tend to actually support a full 64-bits worth of address space, but rather a sufficiently large portion of 64-bit space to satisfy all current memory needs. It's also good to remember that x86(_64) systems address 8-bit words per address, even if they support math operations on say longs (64-bit integers) so it's important to focus on just memory when talking about "64-bit" because the term in and of itself is vague.


----------



## Toothless (Sep 11, 2014)

Aquinus said:


> I would start by learning how memory addressing works. Without knowing that, you really can't say you know what you're talking about. Increasing the range of memory addresses only helps if you're already constrained by the width of the address bus, which is the case when you run out of addressable space (like the 4GB limit with 32-bit systems, or the 64K address (not page) limit on 16-bit systems.)
> 
> Considering we're only using a mere fraction of the 64-bit address space, I think we have quite a ways to go before we start running out of memory addresses to use for memory and memory-mapped I/O.


So what would the memory limit be on a 128bit system?


----------



## 64K (Sep 11, 2014)

Lightbulbie said:


> So what would the memory limit be on a 128bit system?



It would be 2^128 ~ 340 trillion trillion terabytes.
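That figure checks out in decimal units; a quick, purely illustrative Python sanity check:

```python
# 2^128 byte addresses, expressed in decimal terabytes.
total_bytes = 2 ** 128
terabytes = total_bytes / 10 ** 12      # 1 TB = 10^12 bytes (decimal)
trillion_trillion = 10 ** 24            # "trillion trillion" = 10^12 * 10^12

print(f"{terabytes / trillion_trillion:.0f} trillion trillion TB")  # 340 trillion trillion TB
```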


----------



## Toothless (Sep 11, 2014)

64K said:


> It would be 2^128, or ~3.4×10^38 bytes.


That's a large number..


----------



## Aquinus (Sep 12, 2014)

64K said:


> It would be 2^128 ~ 340 trillion trillion terabytes.


Yes, on a machine that has an address resolution of 8 bits, which is practically every modern microprocessor and microcontroller. Just thought I should throw that out there. Theoretically speaking, forcing 16-bit words and address resolution would allow any given address size to double total capacity without changing the amount of addressable space you have, since every address would represent 16 bits instead of 8, but that has much wider repercussions, as operating systems rely on 8-bit address resolution. To change that would mean a fundamental change to how memory works if we continue to support data types like bytes and 8-bit chars. If I wrote assembly, I would have to completely change the way I thought about bytes if they were still supported in such a design, because you would have to manage storing two bytes per memory address unless you want to waste 50% of your addressable space. However, the same argument could be made of a boolean (or a single bit): how do you efficiently store that in an 8-bit word without adding overhead or wasting space? That's a hard question to answer.
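The two-bytes-per-address bookkeeping described above can be sketched in a few lines; this is purely illustrative Python (no real ISA works like this at the software level), modeling memory as a list of 16-bit words:

```python
# Storing byte-sized values on a hypothetical machine whose smallest
# addressable unit is a 16-bit word. Two bytes share one word, so byte
# index i lives in word i // 2, in either the low or high half.
def store_byte(words: list, i: int, value: int) -> None:
    assert 0 <= value < 256
    w, hi = divmod(i, 2)
    if hi:  # odd byte index -> high half of the word
        words[w] = (words[w] & 0x00FF) | (value << 8)
    else:   # even byte index -> low half
        words[w] = (words[w] & 0xFF00) | value

def load_byte(words: list, i: int) -> int:
    w, hi = divmod(i, 2)
    return (words[w] >> 8) & 0xFF if hi else words[w] & 0xFF

mem = [0] * 4            # four 16-bit words hold eight packed bytes
store_byte(mem, 2, 0xAB)
store_byte(mem, 3, 0xCD)
print(hex(mem[1]))       # both bytes packed into word 1 -> 0xcdab
```

The masking and shifting is exactly the overhead Aquinus is talking about: every byte access becomes a read-modify-write on a wider word.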


----------



## FordGT90Concept (Sep 12, 2014)

theoneandonlymrk said:


> The emotion engine in the ps2 was 128bit wasn't it I realise it's not the main cpu but it was ubiquitous and general purpose.


http://en.wikipedia.org/wiki/Emotion_Engine


> Contrary to some misconceptions, these SIMD capabilities did not amount to the processor being "128-bit", as neither the memory addresses nor the integers themselves were 128-bit, only the shared SIMD/integer registers. For comparison, 128-bit wide registers and SIMD instructions had been present in the 32-bit x86 architecture since 1999, with the introduction of SSE.


I think it is 32-bit, seeing as it takes two 32-bit operations to get a 64-bit value.


----------



## awesomesauce (Sep 12, 2014)

maybe they gonna pass 128 and go right to 256


----------



## TheoneandonlyMrK (Sep 12, 2014)

Truth is, they did and do use 128-bit parts, but on the whole they are built to efficiently process 16/32/64-bit operations.
I have read that wiki, and I was only wrong in that it is the main PS2 CPU. It did SIMD side by side with RISC-based processing, which is similar to what some here think 128-bit CPUs can evolve from, and certainly could start from.


----------



## RejZoR (Sep 12, 2014)

For the consumer segment, I don't really see a reason to use anything more than 64-bit in my lifetime. We are talking memory limitations of 50 petabytes (pebibytes, but I'll keep it the old way to make it easier to understand). That's 50,000 terabytes. We haven't even reached 1TB of RAM, and we won't for quite some time, considering how long we needed to go from 256MB to the current 16GB. Other than that, for the consumer segment, more bits don't really bring anything other than more RAM support, which was a serious issue for 32-bit processors. But now that that limitation has been eliminated, 64-bit is here to stay for a while.


----------



## Aquinus (Sep 12, 2014)

Lightbulbie said:


> That's a large number..





awesomesauce said:


> maybe they gonna pass 128 and go right to 256





RCoon said:


> Check out Graham's number  A good punchline for "your mom" jokes.



I don't see how any of these posts are actually constructive. Don't post unless you have something to ask or contribute; otherwise, off-topic posts and posts that aren't thought through will just derail the thread. I don't know about other people, but I would like a serious, intelligent, and thoughtful discussion on the topic.

I love discussing things with respect to CPU architecture, but I don't love having to deal with shenanigans while I do it.


----------



## RCoon (Sep 12, 2014)

Aquinus said:


> I don't see how any of these posts are actually constructive. Don't post unless you have something to ask or contribute; otherwise, off-topic posts and posts that aren't thought through will just derail the thread. I don't know about other people, but I would like a serious, intelligent, and thoughtful discussion on the topic.
> 
> I love discussing things with respect to CPU architecture, but I don't love having to deal with shenanigans while I do it.



Fair game, posts shall be deleted


----------



## Toothless (Sep 12, 2014)

Aquinus said:


> I don't see how any of these posts are actually constructive. Don't post unless you have something to ask or contribute; otherwise, off-topic posts and posts that aren't thought through will just derail the thread. I don't know about other people, but I would like a serious, intelligent, and thoughtful discussion on the topic.
> 
> I love discussing things with respect to CPU architecture, but I don't love having to deal with shenanigans while I do it.


"404 fun not found"
Is it not allowed to have cute little side comments? I see slightly off-topic comments in 90% of forum posts. It's okay to have a little fun.

Now back to the topic before this post gets deleted. I'm a "bit" confused about how GPUs can hit 256 bits. Is it simply that they are created in a whole different way? What if we were able to take a GPU chip, amp the clocks up, and somehow make it run as a CPU?

In theory, couldn't we run a desktop off of a GPU?
>It has cores (change them to act like normal processor cores)
>Has a video output
>Has memory, just somehow stick memory sticks in to add more
>Has a way to get power from the PSU
>Has cute wittle fans to cool it down
Just lacking USB and SATA ports, but I'm sure there could be a way to make it work. If GPUs can hit these massive bit widths, why are CPUs having such a hard time doing it?

Apologies for ignorance. This is something I've been fascinated with but never was able to find a time to poke it out.


----------



## natr0n (Sep 12, 2014)

Just like we emulate the PS2, which is "128-bit", on a 32-bit x86 PC using PCSX2; it all comes down to coding.

As long as we have smart programming, there is no current limitation.


----------



## Aquinus (Sep 12, 2014)

Lightbulbie said:


> "404 fun not found"
> Is it not allowed to have cute little side comments? I see slightly off topic comments in 90% of forums posts. It's okay to have a little fun.
> 
> Now back to the topic before this post gets deleted. I'm a "bit" confused on how GPUs can hit 256bits. Is it simply that they are created in a whole different way? What if we were able to take a GPU chip, amp the clocks up, and somehow make it run as a CPU?
> ...



GPUs are not general-purpose processors; the kinds of operations they run are special. They're designed to do the same set of instructions on different sets of data at the same time, at lower clock speeds. You're also mixing terms. GPUs with a *256-bit memory interface* can read/write that many bits on the rising and falling edges of the memory clock. Those 256 bits describe the *data bus*, not the *address bus*. Once again, I did say you should learn about memory addressing first, and this is part of it. For example, my i7 has 4 memory channels; each memory channel is 64 bits wide, therefore I have a 256-bit memory interface on my CPU, but it's not a "256-bit CPU".

Generally speaking, GPUs are designed to do highly parallel tasks like calculating the same algorithm on several sets of numbers (the same computation, different data). CPUs are designed to excel at serial workloads as well as interfacing with peripherals. CPUs and GPUs work very differently and it's important to understand that.
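The memory-interface arithmetic above is just multiplication; here's an illustrative Python sketch (the transfer rate is a made-up example figure, not a claim about any specific part):

```python
# Total memory interface width and peak bandwidth from channel count.
channels = 4
channel_width_bits = 64
interface_width = channels * channel_width_bits  # 256 bits, yet not a "256-bit CPU"

mt_per_s = 2133  # hypothetical DDR3-class transfer rate in MT/s
peak_gb_s = interface_width / 8 * mt_per_s * 1e6 / 1e9
print(interface_width, f"{peak_gb_s:.1f} GB/s")  # 256, 68.3 GB/s
```

The same multiplication gives a GPU's "256-bit" or "384-bit" figure: it's channels times channel width, and says nothing about address width.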


----------



## Toothless (Sep 12, 2014)

Aquinus said:


> GPUs are not general processors, the kind of operations they run are special. They're designed to do the same set of instructions of different sets of data at the same time at lower clock speeds. You're also mixing terms. GPUs with a *256-bit memory interface* can write/read that many bits per rising and falling edges of the memory clock. 256-bits represent the *data bus*, not the *address bus*. Once again, I did say you should learn about memory addressing first and this is part of it. For example, my i7 has 4 memory channels, each memory channel is 64-bits wide, therefore I have a 256-bit memory interface on my CPU, but it's not a "256-bit CPU".
> 
> Generally speaking, GPUs are designed to do highly parallel tasks like calculating the same algorithm on several sets of numbers (the same computation, different data). CPUs are designed to excel at serial workloads as well as interfacing with peripherals. CPUs and GPUs work very differently and it's important to understand that.


Okay, now I get it. Thank you. Now I can stop looking at my 660 wishing it would help with making my antivirus scans go faster.


----------



## Sasqui (Sep 12, 2014)

My first processor (albeit borrowed for a summer) in 1981 was much like this one, 8-bit with 32KB of memory in an Apple II







So, about 10 years ago I bought my first 64-bit-capable CPU, the Prescott 630.

Doing some extrapolating, I won't need a 128-bit CPU until the year 2041


----------



## BiggieShady (Sep 12, 2014)

Well, you can look at it this way - when doing vector math, you basically have 4-float (4*32 = 128-bit) operands. So a vector ALU that can add and mul vectors is your 128-bit CPU. That would be all CPUs that support the SSE instruction set (128-bit registers), so everything since the Pentium III.

But traditional 128-bit registers as quadruple-precision floats? Not yet; 128-bit will be used only for vectors for some time.
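The 4×32 = 128 arithmetic is easy to demonstrate by packing a 4-float vector into bytes; an illustrative Python sketch (this only packs bytes, it doesn't issue any SIMD instructions):

```python
import struct

# A vector of four single-precision floats occupies exactly 128 bits,
# which is what one SSE (XMM) register holds.
vec = (1.0, 2.0, 3.0, 4.0)
packed = struct.pack("4f", *vec)  # four 32-bit IEEE 754 floats
print(len(packed) * 8)            # 128 bits
```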



Sasqui said:


> Doing some extrapolating, I won't need 128 bit CPU until the year 2041



Not that much time


----------



## Adminymous (Jan 6, 2015)

Hi im from the future! The 64 bit architecture reigns supreme... By the time the goverment took over everything progress stopped to the point all programs were written in 64 bit. The entire architecure our goverment operates on is 64bit. They say it is a terroristic thought to think of anything past 128bit it would be too expensive to change our systems.  It took 30 years to introduce the 64bit architecture. During this time.. The entire world's slew of programs were in need of overhaul to address more memory. Sadly it was struck down. Im going to hide now.


----------



## Aquinus (Jan 6, 2015)

Adminymous said:


> By the time the goverment took over everything progress stopped to the point all programs were written in 64 bit.


Yes, I'm sure they're watching you at every turn too. They've probably implanted a geolocation tracking device in your arm. BENGHAZI! Thanks Obama. 


Adminymous said:


> The entire architecure our goverment operates on is 64bit.


Source? Also 64-bit isn't an architecture. It's the width of the address registers in any given CPU. Two CPUs with very different architectures (like SPARC vs x86 vs ARM) all have 64-bit variants, but they're nothing alike.


Adminymous said:


> They say it is a terroristic thought to think of anything past 128bit it would be too expensive to change our systems.


No, it would just be useless, because we can't even fill 64 bits' worth of memory space yet. In reality, we don't really touch anything beyond 36 bits right now.


Adminymous said:


> It took 30 years to introduce the 64bit architecture.


Actually, it took a lot less time once we actually needed it, because we were running out of addressable space and memory kept getting bigger.


Adminymous said:


> The entire world's slew of programs were in need of overhaul to address more memory.


That's what happens when major revisions are made to a particular ISA?


Adminymous said:


> Sadly it was struck down. Im going to hide now.


Problem solved, you can go back under that rock you've been living under.


----------



## FordGT90Concept (Jan 6, 2015)

I think I have to reevaluate this, because Moore's Law is in trouble as process nodes get smaller and smaller.  Even if we follow it to the end, where each transistor is composed of only a few atoms, will memory density exceed 3.4e+38 bytes?  I think the answer is no.  128-bit is theoretically only reasonable with smaller-than-atom processors, aka quantum computing.  I think we'll have an answer in the next few decades, so I'd change my vote, if I could, to "Not Sure."


----------



## phanbuey (Jan 6, 2015)

It's not just about memory limits; it's about being able to access more registers, and for certain programs a 128-bit processor has huge performance and power implications by limiting the reads and writes needed to cache.  You're probably not going to see a 128-bit x86 variant any time soon, but an ARM / Apple RISC chip - that's probably not too far off.

I would say that, just like with 64-bit, proprietary systems and RISC Linux-based computers will get 128-bit way before the x86 crowd.

http://riscv.org


----------



## qubit (Jan 6, 2015)

FordGT90Concept said:


> I think I have to reevaluate this because Moore's Law is in trouble as process gets smaller and smaller.  Even if we follow it to the end where each transistor is only composed of a few atoms, will memory density exceed 3.4e+38 bytes?  I think the answer is no.  128-bit is theoerically only reasonable in smaller-than-atom processors aka quantum computing.  I think we'll have an answer in the next few decades so I'd change my vote, if I could, to "Not Sure."


All sounds very reasonable to me. It's interesting to muse about where the hard limit for miniaturization will be, isn't it?

Also, I've edited the poll to allow votes to be changed. Have at it!


----------



## karakarga (Jan 7, 2015)

64 bits means 2^64 for memory operations, which is nearly unreachable! 128 bits again means memory use of 2^128.

If the 2^64 limit for RAM were ever hit, the way 2^32 was at 4GB, there would be a need; until then there is none!

It seems it may not be reached for maybe 500 years!


----------



## FordGT90Concept (Jan 7, 2015)

phanbuey said:


> It's not just about memory limits, its about being able to access more registers, and for certain programs a 128 bit processor has huge performance and power implications by limiting the reads writes needed to cache.  You're probably not going to see an X86 128bit variant any time soon but an ARM / Apple RISC chip - that's probably not too far off.
> 
> I would say just like with 64 bit, proprietary systems and risc linux based computers will get 128 bit way before the X86 crowd.
> 
> http://riscv.org


x86-64 didn't gain traction until memory capacity greater than 4 GiB was deemed necessary.  We can reasonably expect this to be true of 128-bit as well.

I wouldn't consider a processor 128-bit unless all functions of it can handle 128 bits.  There's a lot of ARM processors out there, for example, that can handle 64-bit instructions and logic but the memory controller cannot access 64 bits worth of memory thus, it's only partially 64-bit.  Partially 128-bit processors could come soon but fully 128-bit in the sense that x86-64 and Itanium are 64-bit is in doubt.

If Moore's Law does fall apart, the best we can hope for is 128-bit emulation where a controller distributes workloads over a series of 64-bit processors.  128 bits worth of memory could be connected to the controller and that's the pool of memory the 64-bit processors use (but can't access it all).  But that's still only partially 128-bit so it doesn't count for this thought problem.


----------



## xBruce88x (Jan 7, 2015)

BiggieShady said:


> Well, you can look it this way - when doing vector math, you basically have 4 floats (4*32=128 bit) operands. So vector ALU that can do add and mul vectors is your 128 bit cpu. That would be all cpus that support SSE instruction set (128 bit registers), so everythig since pentium 3.
> 
> But traditional 128bit registers as quadruple preccission floats? Not yet, 128bit will be used only for vectors for some time.
> 
> ...



maybe it was a typo and he really meant 2014


----------



## phanbuey (Jan 7, 2015)

FordGT90Concept said:


> x86-64 didn't gain traction until memory capacity greater than 4 GiB was deemed necessary.  We can reasonably expect this to be true of 128-bit as well.
> .



Of X86-128, yes... absolutely.



FordGT90Concept said:


> I wouldn't consider a processor 128-bit unless all functions of it can handle 128 bits. There's a lot of ARM processors out there, for example, that can handle 64-bit instructions and logic but the memory controller cannot access 64 bits worth of memory
> .



I would disagree with this: a CPU and a memory controller are two different things (which is why you can have a 128-bit CPU with a 64-bit IMC).  As far as I remember, memory controllers for x86 used to be part of the motherboard chipset until AMD decided to start slapping an IMC on the die (and consequently kicking the crap out of NetBurst), but this does not mean that a CPU MUST be a CPU + IMC.  They're putting all sorts of crap on die now...

If the question was whether we would see a 128-bit IMC+CPU anytime soon because we need to access that quantity of RAM, then the answer is hell no - unless somehow the entire paradigm of memory access shifted and PCI-e flash became so cheap and fast that it no longer made sense to even have RAM or disk controllers.



xBruce88x said:


> maybe it was a typo and he really meant 2014



Not really what I was talking about but OK.


----------



## xBruce88x (Jan 7, 2015)

was referring to where he typed 2041, thought you were referring to that when you said not that much time lol. eh, it happens.


----------



## Aquinus (Jan 7, 2015)

phanbuey said:


> Of X86-128, yes... absolutely.
> 
> 
> 
> ...


This post tells me that you're not articulating yourself properly, so hopefully I can supplement and expand on what you're trying to say. CPUs have been able to handle instructions with 64 bits' worth of *data* for a long time. It's nothing new for an x86 processor to do math with longs (64-bit integers) or with doubles (64-bit floating point numbers). There is a potential benefit from increasing the width of the *ALU and data registers*, which are completely separate from the *address registers* used to access memory and memory-mapped I/O.

Will we need 128-bit address support in the near future? Probably not. Memory isn't expanding fast enough for it to really make a difference any time soon as 64-bits worth of addresses is a ton of memory space.

Will we need 128-bit data and ALU support in the near future? Not for the general consumer. Most tasks can be handled with a smaller ALU, doing wider operations only takes more time, and most of the time that extra width only gets you precision and accuracy in calculations with a lot of digits. So generally speaking, I think there aren't enough economic factors in play to make widening the ALU and data registers worth it, not to mention it adds to the size of the die and the width of the data buses between the different parts of the CPU, all of which takes up die space and needs to be designed very carefully to support high clock speeds.

All in all, I think about this problem like I think about computer upgrades. If you don't need it, it's a waste. Changing the width of the memory controller and address register or the ALU and data registers without a reason is a waste. As it stands right now, modern CPUs are really good at doing just about everything developers need them to do while staying within the confines of reality.


----------



## qubit (Jan 7, 2015)

FordGT90Concept said:


> I wouldn't consider a processor 128-bit unless all functions of it can handle 128 bits.  There's a lot of ARM processors out there, for example, that can handle 64-bit instructions and logic but the memory controller cannot access 64 bits worth of memory thus, it's only partially 64-bit.  Partially 128-bit processors could come soon but fully 128-bit in the sense that x86-64 and Itanium are 64-bit is in doubt.


The size of a CPU is officially defined by the size of the data word that it can handle in its main registers, not the memory or any other features. For an easy example, think of the ancient 6502 and Z80 CPUs. These could handle 8-bit data in their accumulators and had 16-bit memory address registers, but they were still 8-bit CPUs.

The line is blurred a bit with modern x86 CPUs, which have extra instruction sets added that can handle 128-bit or maybe even 256-bit words in one go. However, they are still considered 64-bit CPUs, since the main x86 registers hold 64-bit values.


----------



## Aquinus (Jan 7, 2015)

qubit said:


> size of a CPU is officially defined by the size of the data word that it can handle in its main registers, not the memory or any other features.


You mean the largest data word. A "word" alone would be the amount of space that a single memory address takes up, so for most modern CPUs a word is 8 bits: a byte. Meanwhile, modern CPUs usually have a 64-bit ALU. It's also worth remembering that some CPUs, like AMD's FX lineup, have that funky 128-bit FMA floating-point unit, but that doesn't make them 128-bit CPUs.

I don't think there is any official designation for this, but generally speaking, when I work with micro-controllers there is a clear distinction between the two; the last micro-controller I used had two 16-bit address registers and two 8-bit data registers that could be combined to do limited 16-bit math. The problem is that most people don't even understand that there is a difference between a data register and an address register; they instantly assume the two are the same, and they're not.

So asking "Will we need 128-bit CPUs?" is dumb because the question is vague. The proper answer is, "What part of the CPU are you talking about?"


----------



## FordGT90Concept (Jan 7, 2015)

Aquinus said:


> So to say "Will we need 128-bit CPUs?" is dumb because the question is vague. The proper answer is, "What part of the CPU are you talking about?"


This.  The entire CPU being 128-bit is a long ways off (if ever).  Parts of it being 128-bit is already here (e.g. FPU quad-float).


			
Wikipedia said:


> Native support of 128-bit floats is defined in SPARC V8 and V9.


----------



## ChevyOwner (Jan 7, 2015)

When we get 40k HD (yes 40,000) we will need it.


----------



## xvi (Jan 7, 2015)

"640K ought to be enough for anybody."


----------



## zeneregion (Sep 4, 2015)

Mussels said:


> try playing modern games on windows XP and see how far you get. we're well into the 64 bit era now, simply because it doubles the 2GB address space limit to 4GB.


(There is a mistake there, I think.)

I don't know who will need more than *16 exabytes* of memory. Maybe one day. From my point of view, these CPU structures are very important for floating-point arithmetic; for that reason, one day we will need 128-bit. (Actually, we already have it in GPUs.)


----------



## Blue-Knight (Sep 4, 2015)

In my opinion...

If we ever need more than what "64 bit" has to offer... I will probably not be alive to see it; just look at how many years we were stuck at "32 bit".

And "32 bit" is still usable with "PAE", but some specific applications perform better on "64 bit" due to "32 bit" limitations.

Well, if we ever need "128 bit", it is not going to be because of memory limitations... Current hardware is not even near the "64 bit" limit.

My conclusion: I voted "No". "64 bit" will stay for a very, very long time.


----------



## Drone (Sep 4, 2015)

Blue-Knight said:


> In my opinion...
> 
> If we will ever need more than what "64 bit" has to offer... I will probably not be alive to see that, just see how many years we were stuck at "32 bit".
> 
> ...




Well who knows maybe quantum computing will come in our lifetime so nobody would care about "bits" anymore.


----------



## BiggieShady (Sep 4, 2015)

The 64-bit address space is more than enough, and silicon lithography limitations wouldn't allow a 128-bit address space even when stacked ... as for instruction operand width, well, instruction sets get extended, and new instructions work on combined registers ... didn't we get support for 128-bit floating point numbers in x87 in the olden days that way?


----------



## toastem2004 (Sep 5, 2015)

Here are my thoughts, and please remember, they are just thoughts, not facts:

In a personal computer, one that a user would use at home or work - no
In workstations such as those for MRI, CAD, Maya, etc. - Maybe*
In servers and cloud computing systems - Yes, I do think so

There would be no use for a 128-bit CPU for home use, and most office use.  Not only in memory space addressing, but even for the general registers/computing in the CPU.  Any application that would require such amounts of memory or processing power would be offloaded to a server or cloud-based operation.  There are some particular usage scenarios where that kind of power could be tapped, but a lot of that workload I could see being offloaded to a GPU for local data crunching, or to a server farm.


BiggieShady said:


> didn't we got support for 128bit floating point numbers in x87 in the olden days that way.


Correct, but those are specialized instruction sets.  AVX handles 256-bit operations.


----------



## BiggieShady (Sep 5, 2015)

toastem2004 said:


> AVX is 256bit operations


Also, those are operations on vectors where each component in the vector is a double. No gains in precision, only speedup from SIMD parallelism.
My point is that a true 128-bit machine would need a 128-bit memory address space (not going to happen), and an ALU/FPU that supports 128-bit base scalar types in a single clock (it would make CPUs less efficient for narrower operands, so not going to happen). Specialized instruction sets that serve as extensions to x86/x64 work pretty well.


----------



## P4-630 (Sep 5, 2015)

Drone said:


> Well who knows maybe quantum computing will come in our lifetime so nobody would care about "bits" anymore.



http://www.tudelft.nl/en/current/la...nstituut-qutech-start-samenwerking-met-intel/


----------



## qubit (Sep 5, 2015)

BiggieShady said:


> My point is that true 128bit machine would need to have 128bit memory address space (not going to happen)


Not 100% sure what you mean by that, but here goes. The address bus can be logically 128 bits wide, but in reality fewer address pins are physically exposed on the chip, since such a gargantuan amount of memory isn't used, for various reasons. For example, today's CPUs have a logical address bus of 64 bits, but physically it's only something like 48 bits wide, and they work fine.

Also, it wouldn't be hard to organize memory chips into a 128-bit-wide word configuration. This kind of thing is done all the time, e.g. 8-bit-wide memory chips ganged together for a 32-bit-wide word, and so on. Another good example is a graphics card with a wide data bus, such as 384- or 512-bit. The individual memory chips certainly aren't that wide, but they are ganged together to provide that word width.
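The ganging arithmetic is simple multiplication; a tiny illustrative Python sketch (the function name is made up):

```python
# Total bus width from ganging identical memory chips (or channels) in lockstep.
def ganged_width(chip_width_bits: int, chips: int) -> int:
    return chip_width_bits * chips

print(ganged_width(8, 4))    # 32-bit word from four 8-bit chips
print(ganged_width(32, 12))  # 384-bit GPU-style bus from twelve 32-bit chips
```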


----------



## bonehead123 (Sep 5, 2015)

*MS Corporate mission statement for 2036:*

"Windows 128"  

128-bit extensions and a graphical shell for a 64-bit patch to a 32-bit operating system originally coded for a 16-bit microprocessor, written by an 8-bit company that can't stand 1 bit of competition!


----------



## Aquinus (Sep 5, 2015)

BiggieShady said:


> Also those are operations on vectors where each component in a vector is a double. No gains in precision, only speedup from SIMD parallelism.
> My point is that true 128bit machine would need to have 128bit memory address space (not going to happen), and have ALU/FPU that supports 128bit base scalar types in a single clock (it would make cpu-s less efficient for less wide operands so not going to happen). Specialized instruction sets that serve as extensions to the x86/x64 work pretty well.


AVX2 expands support to integers IIRC.


qubit said:


> Another good example is a graphics card with a wide data bus such 384- or 512-bit wide.


That's not an apples-to-apples comparison, and I'll explain why. GPUs apply the same instruction in tandem to a large set of data, so in order to read and write data quickly enough, you need a wide bus with a lot of bandwidth. CPUs are a bit different, because we're talking about much more serial applications than GPUs run. As a result, there is a lot of control flow - loops, conditionals, and logic - as opposed to bulk data as on GPUs.

This can be seen in overclocking video memory versus system memory: VRAM overclocking tends to scale linearly; system memory does not.


----------



## qubit (Sep 5, 2015)

@Aquinus Yes, it works like that, but I think you missed my point, which was simply that memory chips are ganged together to make memory data buses as wide as necessary for the application. In the case of graphics cards that bus tends to be very wide indeed.


----------



## Atomic77 (Sep 8, 2015)

I said yes, we will probably see one at some point, but it could be quite a while; some of us might be dead before it happens. 64-bit in the PC world, although it hasn't really done much yet, will eventually be replaced.


----------



## Aquinus (Sep 8, 2015)

qubit said:


> @Aquinus Yes, it works like that, but I think you missed my point, which was simply that memory chips are ganged together to make memory data buses as wide as necessary for the application. In the case of graphics cards that bus tends to be very wide indeed.


That's only the data buses. The width of the actual registers doing the math is another matter. FMA is a thing too, where you can essentially do two floating-point operations at once on a single extra-wide SIMD unit. It's how you can do one 256-bit FP op or two (of the same) 128-bit FP ops.

Although I think this conversation is a bit stupid, because there are a lot of widths in a CPU, and asking a generic question like "Will there ever be a need for a 128-bit CPU in your computer?" is dumb because it assumes that nothing in the CPU is wider than 64 bits, which isn't true. We often use things wider than 32 and 64 bits for everything that isn't directly dealing with physical memory.

I don't think we'll need 128-bit CPUs any time soon with respect to mappable address space.
I'm uncertain as to the necessity to do math operations on larger numbers though which could be a reasonable use case going forward.

With respect to data buses, you'll always have the slower-but-wider versus faster-but-narrower argument... and even then, you have things like PCI-E, which mixes the benefits of serial communication with parallel.

All in all, I do still think this discussion went off the deep end when it started.


----------



## qubit (Sep 8, 2015)

Aquinus said:


> Although I think this converstation is a bit stupid because there are a lot of widths in a cpu and asking a generic question like, "Will there ever be a need for a 128-bit CPU in your computer", is dumb because it makes the assumption that the CPU doesn't have anything that is other than 64-bit wide for anything in it, which isn't true. We use things wider than 32 and 64-bit often when it comes to everything that isn't directly dealing with physical memory.


I'm talking about the main registers being 128-bit, not the floating-point, SIMD or other specialized registers, which can be very wide indeed. Those main registers, which do the basic processing of the CPU, are what define its word size, not the specialized types; hence the question is still valid.

Finally, I think you're reading more into this than there is and if you don't like this thread because you think it's a bit stupid, you don't have to post in it.


----------



## Aquinus (Sep 8, 2015)

If X86 were to go 128-bit as you suggest, it would need to double the size of every address and data register in the CPU. On top of that, it would need to double the size of the ALU, and expand the widths of the data buses so words can be sent efficiently in one clock cycle. Needless to say, the size of the core would increase by a very large amount to accommodate it, which wasn't the case with X86_64.

I think it's stupid because we are nowhere near the limitations of what current machines can do, with respect to having enough memory or to working with data values that are so big. It's at the point where, if someone truly needs more than 64 bits for an integer, a floating-point number is probably going to serve them better. It's really that simple.

What's not simple is overhauling the CPU to do 128-bit logic across the board, because X86_64 simply added extensions to X86, which was already capable of doing 64-bit math, just not of addressing a 64-bit space.
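To illustrate that last point, here is a rough Python sketch (arbitrary-precision ints standing in for registers; the function name is mine) of how a 32-bit ALU chains two 32-bit adds with a carry to produce a 64-bit result, the classic ADD/ADC idiom:

```python
MASK32 = 0xFFFFFFFF

def add64_on_32bit_alu(a, b):
    """Add two 64-bit values using only 32-bit operations plus a carry,
    the way 32-bit x86 does it with ADD followed by ADC."""
    lo = (a & MASK32) + (b & MASK32)          # ADD: low halves
    carry = lo >> 32                          # carry flag out of the low add
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32  # ADC: high halves + carry
    return (hi << 32) | (lo & MASK32)

x, y = 0x1_FFFF_FFFF, 0x2_0000_0001
assert add64_on_32bit_alu(x, y) == (x + y) & 0xFFFF_FFFF_FFFF_FFFF
```

So "64-bit math on 32-bit hardware" costs extra instructions, not new silicon; that is quite different from widening every register and address path.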

I say it's dumb because of the number of changes that would be needed to do what you suggest to x86, and those changes would without a doubt increase the size of the core. I'm just making that perfectly clear, because a lot of people don't even know the difference between a data and an address register, and even fewer understand that both 32-bit and 64-bit X86 ALUs were capable of doing 64-bit math.

128-bit (ALUs, registers, addresses, the works) would be a fundamental change to CPU architecture, unlike X86_64 was.

You would also have to consider whether words are going to remain 32 bits big, or go to 64, or 128. The bigger you make words, the more memory is wasted. The number of issues with "wider" grows exponentially, which is why you don't see people touting super-wide CPUs. It's a crap ton of work for minimal gain. X86_64 really was only about extending the memory address space, nothing more.
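The memory-waste point is easy to demonstrate with Python's stdlib `array` module (there's no native 128-bit element type, so 64-bit is the widest shown; the numbers are just an illustration):

```python
import array

# Storing ~1M small counters, each of which fits in a single byte,
# at three different fixed word widths.
values = list(range(256)) * 3907  # ~1,000,000 values, all < 256

b8  = array.array('B', values)   # 1 byte per element
b32 = array.array('I', values)   # 4 bytes per element (unsigned int)
b64 = array.array('Q', values)   # 8 bytes per element (unsigned long long)

assert b8.itemsize == 1 and b32.itemsize == 4 and b64.itemsize == 8
# The information content is identical; only the padding grows 8x.
assert len(b64) * b64.itemsize == 8 * len(b8) * b8.itemsize
```

Scale that to 128-bit words and every small integer drags 15 wasted bytes of storage, cache and bus bandwidth along with it.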


----------



## Bansaku (Sep 8, 2015)

What, like 64-bit and its ~18.4 exabytes of potential memory isn't enough?
There will never be a general purpose 128-bit CPU, as there is ZERO need for one.


----------



## yogurt_21 (Sep 10, 2015)

Mussels said:


> when abouts do you think we're going to hit the limits of 64 bit? how many years?


Based on the push for virtualization and supercomputers, I agree with solaris: not too long at all, provided the PC and server markets continue to go their separate ways. Cost seems to be more limiting than tech in the server world these days: I can order four R430s with dual 8-core CPUs for the same price as one with dual 16-core CPUs. Obviously rack space, power, convenience and heat go to the single 32-core/64-thread server, but in the former config I end up with 64 cores/128 threads and more redundancy. The way things are going, though, in less than two years I'll be able to get a 1U rack mount with 64 cores/128 threads for the same price as the 32-core/64-thread one.

If this trend continues, there will be more demand for bigger and better server CPUs and less worry about how much processing is done on end-user machines. I.e. mainframes reworked for the modern age. In that case the extra silicon on a 128-bit CPU won't seem quite so silly. Crunching larger and larger numbers will continue so long as we maintain our curiosity: the human genome, space, particle physics, string theory, etc. all require huge supercomputers to crunch their numbers. Soon those computers will begin to look silly and someone will start the march towards better number crunchers.

Now, qubit said "in your computer", so I believe he's thinking of a desktop, laptop, or whatever mobile device will pass for a PC in the future. In that case I think it will take a long time for consumer grade to get it. 64-bit had obvious gains for the consumer; 128-bit won't.


----------



## Atomic77 (Sep 18, 2015)

It's like this: when the time comes, it will happen. Enough said.


----------



## qubit (Sep 18, 2015)

Aquinus said:


> If X86 were to go 128-bit as you suggest, it would need to double the size of every address and data register in the CPU. On top of that, it would need to double the size of the ALU, and expand the widths of the data buses so words can be sent efficiently in one clock cycle. Needless to say, the size of the core would increase by a very large amount to accommodate it, which wasn't the case with X86_64.
> 
> I think it's stupid because we are nowhere near the limitations of what current machines can do, with respect to having enough memory or to working with data values that are so big. It's at the point where, if someone truly needs more than 64 bits for an integer, a floating-point number is probably going to serve them better. It's really that simple.
> 
> ...


Yes, I agree, especially with the first two paragraphs.

However, for some reason you're still missing my point and still think I'm advocating such a CPU when I'm not, so you're arguing against something I didn't say. In fact, if you read my OP again, you'll see that I actually argued against it, and I also voted No in the poll.

EDIT

In fact, most people actually voted Yes in the poll, so it's them you're disagreeing with, not me.


----------



## FordGT90Concept (Sep 18, 2015)

bonehead123 said:


> *MS Corporate mission statement for 2036:*
> 
> "Windows 128"
> 
> 128-bit extensions and a graphical shell for a 64-bit patch to a 32-bit operating system originally coded for a 16-bit microprocessor, written by a 8-bit company that can't stand 1-bit of competition !


The original goes:


> 32 bit extensions and a graphical shell [on top of] a 16 bit patch to an 8 bit operating system originally coded for a 4 bit microprocessor, written by a 2 bit company, that can't stand 1 bit of competition.


Merge the two:

128-bit extensions and a graphical shell on top of a 64-bit patch to a 32-bit operating system which deviated from a 16-bit patch to an 8-bit operating system originally coded for a 4-bit microprocessor, written by a 2-bit company that can't stand 1 bit of competition.

"deviated" = Windows 9x + ME -> NT

I believe the original quote was talking about Windows 95.


----------



## AlwaysHope (Sep 20, 2015)

A 128-bit 'general' purpose CPU? How are we defining 'general' here? Yes, semantics does come into it...


----------



## qubit (Sep 20, 2015)

AlwaysHope said:


> 128 bit 'general' purpose cpu?? how are we defining 'general' here?  yes, semantics does come into it..


Why don't you try reading my OP? I explained it clearly there.


----------



## R-T-B (Sep 20, 2015)

I voted yes.  Wanna know why?

Forever is a long time...  and I do believe if humanity is still around 1000 years from now (or more), we'll find a need for this or make one.


----------



## AlwaysHope (Sep 22, 2015)

qubit said:


> Why don't you try reading my OP? I explained it clearly there.



Yes, that's all good and fine, but what was considered 'general' in x86 computer usage a decade ago is somewhat different from what is considered 'general' in today's world, and who knows what 'general' will mean another decade from now...


----------



## R-T-B (Sep 22, 2015)

AlwaysHope said:


> Yes, that's all good and fine, but what was considered 'general' in x86 computer usage a decade ago is somewhat different to what is considered 'general' in today's world and who knows what 'general' will mean another decade from now...




True.  For that matter, how about "general" 20 years from now? For all we know, discrete GPUs could be gone by then, absorbed into the CPU die as another computational unit, maybe even as an additional instruction set (I don't buy that for a second, but who knows?).


----------



## yogurt_21 (Sep 22, 2015)

20 years from now we could be post apocalypse and instead of tech advancements we'd just use the brains of our fallen compadres. Graphics would be amazing, but processing would take a massive hit.


----------



## sayam qazi (Sep 23, 2015)

Mussels said:


> try playing modern games on windows XP and see how far you get. we're well into the 64 bit era now, simply because it doubles the 2GB address space limit to 4GB.



You are missing something: 32-bit can address ~4GB of RAM, while 64-bit can... well, double that number another 32 times, i.e. 16 exabytes.

1 exabyte = 1 000 000 000 gigabytes


----------



## FordGT90Concept (Sep 23, 2015)

2^64 = 18,446,744,073,709,551,616 bytes, or exactly 16 exbibytes (EiB), or about 18.4 exabytes (EB).
2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456 bytes; no SI prefix exists to describe a number that large.  It could equally be said as 340 billion billion billion billion bytes.
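These figures are easy to check exactly, since Python integers are arbitrary precision; a small sketch:

```python
# Exact powers of two, no floating-point rounding involved.
assert 2**64 == 18_446_744_073_709_551_616
assert 2**128 == 340_282_366_920_938_463_463_374_607_431_768_211_456

# 2^64 bytes expressed with binary and decimal prefixes:
EiB = 2**60            # exbibyte (binary prefix)
EB = 10**18            # exabyte (decimal SI prefix)
assert 2**64 // EiB == 16              # exactly 16 EiB
assert round(2**64 / EB, 1) == 18.4    # roughly 18.4 EB
```

The binary/decimal prefix gap is why the same address space gets quoted as both "16 exabytes" and "18 exabytes" in this thread.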


----------



## Basard (Sep 23, 2015)

I don't see why we are even using more than 640k RAM....  Bill Gates deemed it to be enough.


----------



## yogurt_21 (Sep 23, 2015)

FordGT90Concept said:


> 2^64 = 18,446,744,073,709,551,616 bytes, or exactly 16 exbibytes (EiB), or about 18.4 exabytes (EB).
> 2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456 bytes; no SI prefix exists to describe a number that large.  It could equally be said as 340 billion billion billion billion bytes.


Or 340 trillion yottabytes; or, if prefixes could be stacked, 340 tera-yottabytes.

Essentially as out of reach today, in disk storage terms, as a petabyte was in 1980. So yeah, we're talking a long time before 128-bit becomes useful for servers. Still not sure about whatever desktops will look like in 50 years.


----------



## xorbe (Sep 24, 2015)

sayam qazi said:


> You are missing something: 32-bit can address ~4GB of RAM, while 64-bit can... well, double that number another 32 times, i.e. 16 exabytes.
> 
> 1 exabyte = 1 000 000 000 gigabytes



32-bit mode can access more than 4GB in certain modes.  64-bit mode can't actually use full 64-bit addresses; it's something less, more like 36-bit vs 52-bit.  This is to keep page tables and walk depth at reasonable sizes.  On x86 this is handled by a hard-coded page table walker; on the defunct Alpha arch, it was software (PALcode) defined.

Current x86 chips already have 128- and 256-bit multimedia instructions, don't they?  It's just not extended to the whole instruction set, or to addressing modes.  They don't even use full 64-bit addressing just yet; there's still room to grow for quite a while.
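The "something less than 64-bit addressing" point can be sketched as the x86-64 canonical-address rule, where only 48 virtual-address bits are meaningful and the top bits must be a sign extension of bit 47 (the helper name here is my own):

```python
def is_canonical_48(addr):
    """x86-64 'canonical' check for a 64-bit virtual address:
    bits 63..47 must all equal bit 47, i.e. only 48 bits carry information."""
    top = addr >> 47            # the 17 bits that must all match
    return top == 0 or top == (1 << 17) - 1

assert is_canonical_48(0x0000_7FFF_FFFF_FFFF)       # top of the lower half
assert is_canonical_48(0xFFFF_8000_0000_0000)       # bottom of the upper half
assert not is_canonical_48(0x0000_8000_0000_0000)   # inside the address hole
```

So the usable virtual space is two 128TB halves with a huge non-canonical hole between them, not a flat 2^64 bytes.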


----------



## Atomic77 (Sep 24, 2015)

Terabytes, petabytes, zettabytes... it could go on and on.


----------



## R-T-B (Sep 24, 2015)

xorbe said:


> 32-bit mode can access more than 4GB with certain modes.



Technically, it can't.  It can, however, access memory above the 4GB mark with PAE.  But it can't use more than 4GB at once per process (i.e., at a given moment).

Just nitpicking.  You are pretty much spot on.



Atomic77 said:


> Terabytes, petabytes, zettabytes... it could go on and on.



Yes, those prefixes exist, but there comes a point where the numbers are simply too big.  I don't remember exactly where it starts, but at some point beyond petabytes, short of going quantum, we simply can't build it, because there aren't enough atoms available on the entire planet.


----------



## FordGT90Concept (Sep 24, 2015)

Yotta was added to the SI prefixes in 1991.  No doubt more prefixes will eventually be added to describe numbers this large but, as of right now, they don't exist.  340 quintillion quintillion would likely be the most accurate way to phrase it for now.


----------



## AlwaysHope (Sep 26, 2015)

Here's one area that will push the limits of 64bit computing. 
I predict that every single byte of data will have unparalleled levels of security associated with it, and all that this involves will become MUCH more complex than any of us could ever imagine today. This alone could very well chew up a LOT of memory space for those who like 'on the fly' constant encryption, as with cloud computing, software as a service, etc. If we think 512-bit support is pretty good today, then I expect that to more than quadruple, if not grow ten-fold or more, in the coming years...


----------



## lilhasselhoffer (Sep 26, 2015)

R-T-B said:


> Technically, it can't.  It can however, access memory above the 4GB address space with PAE.  But it can't use more than 4GBs at once per process (ie, at a given moment).
> 
> Just nitpicking.  You are pretty much spot on.
> 
> ...




Thought I'd expound on that.  The Earth is 27.7% silicon by mass: http://hyperphysics.phy-astr.gsu.edu/hbase/tables/elabund.html
The mass of a single silicon atom is 28.085 amu, or about 4.66*10^-26 kg.
The mass of the Earth is 5.972*10^24 kg.
The mass of its silicon is therefore about 1.654*10^24 kg.
The number of silicon atoms on Earth is therefore 1.654*10^24 kg / 4.66*10^-26 kg ≈ 3.55*10^49 atoms.

Now, the fun:
If you only needed 4 silicon atoms per memory cell, about 9*10^48 memory cells could be produced.  With the above value of 2^128 being roughly 340*10^36, that means that if every silicon atom on Earth went into memory, there would only be enough for a few tens of billions (~10^10) of fully populated 128-bit address spaces.


In short, if we continue to make chips out of silicon, we won't be running out any time soon.  You can alter my assumptions, but that's the direction computing is currently traveling.
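A back-of-envelope reproduction of the arithmetic above (all constants approximate; the 4-atoms-per-cell figure is the assumption carried over from the post):

```python
# Rough check of the silicon-budget estimate.
amu_kg      = 1.6605e-27               # kg per atomic mass unit
si_atom_kg  = 28.085 * amu_kg          # ~4.66e-26 kg per silicon atom
earth_kg    = 5.972e24                 # mass of the Earth
si_fraction = 0.277                    # Earth's silicon mass fraction
si_mass_kg  = earth_kg * si_fraction   # ~1.654e24 kg of silicon
si_atoms    = si_mass_kg / si_atom_kg  # ~3.5e49 atoms

cells = si_atoms / 4                   # assumed: 4 atoms per memory cell
chips = cells / 2**128                 # full 128-bit spaces we could populate

assert 3.4e49 < si_atoms < 3.6e49
assert 2e10 < chips < 3e10             # a few tens of billions of full chips
```

So the estimate holds up to an order of magnitude: the planet's silicon would yield on the order of 10^10 fully populated 2^128-byte memories.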


----------



## FordGT90Concept (Sep 26, 2015)

And don't forget the virtually unlimited quantities in...


----------



## DeathtoGnomes (Sep 26, 2015)

Well, after reading a bunch of posts, I can say there are a lot of naysayers to needing 128-bit and beyond. IMHO, we really do need to hit 256-, or even 512-bit-wide registers; I mean, how else are we ever going to get transporters built and working? The way things look now, we can't even have a 64-bit memory controller! Bottlenecks-R-US!! Come on, you alien assholes, share that knowledge!


----------



## R-T-B (Sep 26, 2015)

lilhasselhoffer said:


> Thought I'd expound on that.  The Earth is 27.7% Silicon by mass: http://hyperphysics.phy-astr.gsu.edu/hbase/tables/elabund.html
> The mass of a single atom of silicon is 28.085 amu.
> The mass of earth is 5.972*10^24 kg.
> The mass of Silicon is therefore 1.654*10^24 kg.
> ...



Thanks for the scientific backup.  Still, consuming your entire planet plus other parts of space to make a single 128-bit addressable memory chip strikes me as extremely environmentally unfriendly. 

Actually, it wouldn't even surprise me if that's the premise with which aliens will invade us.  "You are now memory chip.  Nom nom"


----------



## DeathtoGnomes (Sep 26, 2015)

R-T-B said:


> Thanks for the scientific backup.  Still, consuming your entire planet plus other parts of space to make a single 128-bit addressable memory chip strikes me as extremely environmentally unfriendly.
> 
> Actually, it wouldn't even surprise me if that's the premise with which aliens will invade us.  "You are now memory chip.  Nom nom"


I think it was the "So You Want to Be a Wizard" book series that references a planet of pure silicon, with an attitude...

but...

dropping chips can be environmentally unfriendly too.


----------



## qubit (Sep 28, 2015)

R-T-B said:


> Technically, it can't.  It can however, access memory above the 4GB address space with PAE.  But it can't use more than 4GBs at once per process (ie, at a given moment).
> 
> Just nitpicking.  You are pretty much spot on.


This is called paged memory access and is something they did on 8-bit computers 30 years ago with their tiddly 64K RAM limit. My BBC Master computer did this for a "massive" total of 128K RAM. 

It's quite ironic seeing the same technique being used in a modern computer that can address gigabytes of RAM today.
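A toy model of that 8-bit-era bank switching, loosely in the spirit of the BBC's 16K sideways banks at &8000 (the addresses and bank count here are illustrative, not an emulation):

```python
class BankedMemory:
    """Toy model of 8-bit-era bank switching: a 64K address space where a
    16K window at 0x8000 can be pointed at one of several physical banks."""
    WINDOW, SIZE = 0x8000, 0x4000

    def __init__(self, banks=4):
        self.fixed = bytearray(0x10000)                   # base 64K of RAM
        self.banks = [bytearray(self.SIZE) for _ in range(banks)]
        self.current = 0                                  # selected bank

    def select_bank(self, n):
        self.current = n                                  # the 'paging register'

    def read(self, addr):
        if self.WINDOW <= addr < self.WINDOW + self.SIZE:
            return self.banks[self.current][addr - self.WINDOW]
        return self.fixed[addr]

    def write(self, addr, value):
        if self.WINDOW <= addr < self.WINDOW + self.SIZE:
            self.banks[self.current][addr - self.WINDOW] = value
        else:
            self.fixed[addr] = value

mem = BankedMemory()
mem.select_bank(0); mem.write(0x8000, 0xAA)
mem.select_bank(1); mem.write(0x8000, 0xBB)
mem.select_bank(0)
assert mem.read(0x8000) == 0xAA   # same address, different physical bytes
```

The CPU's addresses never get wider; the paging register just decides which physical bytes sit behind the window, which is exactly the trick PAE pulls off at gigabyte scale.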


----------

