Extremeskins

EET: Moore's Law Dead by 2022, Expert Says


JMS


Moore's law was coined in 1965 by Gordon E. Moore, a co-founder of Intel.
 
It originally drove a doubling of processor speed roughly every 18 months, or "1 MHz to 5 GHz, a 3,500-fold increase in speed" from 1980 to today, which in turn drove computer sales. In the 1990s, when heat became a major limitation on increasing clock speed, Moore's law came to mean transistor density on integrated circuits doubling every 18 months to 2 years.
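
Just to put numbers on that doubling, here's a quick back-of-the-envelope sketch (my own illustration, not from the article; the roughly 2,300-transistor Intel 4004 from 1971 is just a convenient starting point):

    // Back-of-the-envelope sketch of Moore's-law compounding: a quantity that
    // doubles every two years. The starting point (Intel 4004, ~2,300
    // transistors in 1971) is only meant as a rough illustration.
    public class MooresLawSketch {
        public static void main(String[] args) {
            double transistors = 2300;   // Intel 4004, roughly
            double periodYears = 2.0;    // "twice as many every two years"
            for (int year = 1971; year <= 2021; year += 10) {
                double doublings = (year - 1971) / periodYears;
                System.out.printf("%d: ~%,.0f transistors%n",
                        year, transistors * Math.pow(2, doublings));
            }
        }
    }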
 
http://www.eetimes.com/document.asp?doc_id=1319330

Moore's Law Dead by 2022, Expert Says
 
PALO ALTO, Calif. — Moore's Law -- the ability to pack twice as many transistors on the same sliver of silicon every two years -- will come to an end as soon as 2020 at the 7nm node, said a keynoter at the Hot Chips conference here.
While many have predicted the end of Moore's Law, few have done it so passionately or convincingly. The predictions are increasing as lithography advances stall and process technology approaches atomic limits.
"For planning horizons, I pick 2020 as the earliest date we could call it dead," said Robert Colwell, who seeks follow-on technologies as director of the microsystems group at the Defense Advanced Research Projects Agency. "You could talk me into 2022, but whether it will come at 7 or 5nm, it's a big deal," said the engineer who once managed a Pentium-class processor design at Intel.
....


I don't think it's driven computer sales as much as it did when clock speeds were doubling back in the 1990s... I think we've already seen the end of the days when your new cutting-edge computer became outdated and almost unusable over a 2-year period. I think when Moore's Law formally ends we will see that cycle become even more prolonged, as computers become more like TVs and other appliances that you own for years, even decades, and still consider cutting edge.


That's a good point about speed driving sales, JMS. My two computers are both about 4 years old and fast enough for anything I want to do now or that I can foresee in the future. Newer ones are not very much faster, if at all, just a bit more energy efficient.

 

Where recent advances have been really important is in peripheral devices like SSDs and increases in regular hard drive sizes. When I first got an SSD it was the single most noticeable upgrade I can remember, and they will be getting a lot faster. Having terabytes of storage is also very handy. Getting more memory or a slightly faster processor was cool back in the day, but now it's other systems that can add value or make upgrades feasible. That is, until quantum computers are in the picture.


I really don't see quantum computers becoming commercially available, even once they're actually functional. Theoretically a quantum computer could crack any traditional encryption algorithm in a tiny fraction of the time any other system could do it. If/once they become a reality, the US gov't (or whoever develops it) will lock that **** down with a quickness. Can you imagine the havoc that could ensue from rogue regimes or everyday hackers getting hold of something like that?

 

Well, unless quantum encryption is employed, which is actually a reality. But I don't think its use is very widespread yet.


That's a good point about speed driving sales, JMS. My two computers are both about 4 years old and fast enough for anything I want to do now or that I can foresee in the future. Newer ones are not very much faster, if at all, just a bit more energy efficient.

 

Where recent advances have been really important is in peripheral devices like SSDs and increases in regular hard drive sizes. When I first got an SSD it was the single most noticeable upgrade I can remember, and they will be getting a lot faster. Having terabytes of storage is also very handy. Getting more memory or a slightly faster processor was cool back in the day, but now it's other systems that can add value or make upgrades feasible. That is, until quantum computers are in the picture.

 

Yeah, what's happened with CPUs since the late 1990s is they've added more cores... so instead of buying 1 CPU with 1 core, now you buy 1 CPU with 2, 4, 8, or 16 cores, all running at roughly the same speed you ran at a decade or more ago. But more cores don't automatically make your computer faster: if the software you are running isn't written to take advantage of multiple cores, it doesn't add much speed at all. And most users just don't understand why multiple cores would be necessary. It was easier to understand getting a computer twice or four times as fast, and that being a good reason to upgrade.
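
As a rough sketch of what "written to take advantage of multiple cores" means (my own illustration with made-up work, not anything from the article): the extra cores only help if the program explicitly splits the work up.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Illustration: summing an array single-threaded vs. splitting the work
    // across however many cores the machine reports. The single-threaded loop
    // never touches the extra cores, no matter how many the CPU has.
    public class MultiCoreSketch {
        public static void main(String[] args) throws Exception {
            long[] data = new long[10_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i % 7;

            // Single-threaded: uses exactly one core.
            long serial = 0;
            for (long v : data) serial += v;

            // Multi-threaded: one chunk of the array per core.
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            List<Future<Long>> parts = new ArrayList<>();
            int chunk = data.length / cores;
            for (int c = 0; c < cores; c++) {
                final int start = c * chunk;
                final int end = (c == cores - 1) ? data.length : start + chunk;
                parts.add(pool.submit(() -> {
                    long sum = 0;
                    for (int i = start; i < end; i++) sum += data[i];
                    return sum;
                }));
            }
            long parallel = 0;
            for (Future<Long> part : parts) parallel += part.get();
            pool.shutdown();

            System.out.println(serial + " == " + parallel);
        }
    }

On a single-core machine the two loops take about the same time; the split only pays off when there are idle cores to hand the chunks to, which is exactly why software has to be written for it.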

 

I have to say, I don't even have an SSD yet. I had to think about what you were referring to. I do have one of those portable USB drives, but I think that's different from the internal, super fast solid state drives you are referring to. I know Apple's new iMac, which is an entirely new concept (footprint/case) for the PC, doesn't even have room for an internal non-SSD drive, which supports your thoughts on the matter.

Apple's new Mac Pro: no internal drive bay (other than the SSD) and no CD drive, just USB ports for everything.



I really don't see quantum computers becoming commercially available, even once they're actually functional. Theoretically a quantum computer could crack any traditional encryption algorithm in a tiny fraction of the time any other system could do it. If/once they become a reality, the US gov't (or whoever develops it) will lock that **** down with a quickness. Can you imagine the havoc that could ensue from rogue regimes or everyday hackers getting hold of something like that?

 

Well, unless quantum encryption is employed, which is actually a reality. But I don't think its use is very widespread yet.

I guess a quantum computer is ultimately the limit to Moore's law, with circuit density engineered down to the scale of subatomic particles (quantum mechanics). But Moore's Law will break well before that. All it takes to break Moore's law is to fail to double circuit density in any given 2-year period... So let's say quantum computers become commercially available in 2050, but circuit density stopped doubling in 2025.

I think a more interesting phenomenon is the difference in philosophies going on in software versus hardware. As the hardware gets more and more sophisticated (multiple cores), the majority of software today is moving away from languages that can handle that complexity, with closer ties to the hardware and memory (C, C++), to higher-level programming languages like Java, and even higher-level languages like JavaScript.

Java, for example, doesn't support pointers for memory access and has all sorts of safety constraints built into the language, but it still supports multithreading.

And JavaScript doesn't even give you multithreading or direct control over memory allocation.


I really don't see quantum computers becoming commercially available, even once they're actually functional. Theoretically a quantum computer could crack any traditional encryption algorithm in a tiny fraction of the time any other system could do it. If/once they become a reality, the US gov't (or whoever develops it) will lock that **** down with a quickness. Can you imagine the havoc that could ensue from rogue regimes or everyday hackers getting hold of something like that?

 

Well, unless quantum encryption is employed, which is actually a reality. But I don't think its use is very widespread yet.

I guess that's true, I'm no expert, but who knows what we haven't figured out yet?  It's the unknown possibilities with a paradigm shift like quantum computing that are exciting.  Even if it's not going to happen in my life it's something cool to imagine.

 

 

I have to say, I don't even have an SSD yet. I had to think about what you were referring to. I do have one of those portable USB drives, but I think that's different from the internal, super fast solid state drives you are referring to.

 

They are amazing, flat out. I remember in the early 90's thinking that we would use memory for storage one day. It was crazy expensive then. I'm glad I got to see it and gladder still I get to use it.


We're coming to a point where transistors are approaching atomic limits and won't be able to get smaller with the current silicon-based semiconductors. That will be a part of why graphene will be so important. Graphene, being made of carbon, the smallest element with four valence electrons, will allow for smaller transistors.


Moore's law will still apply. I was just out visiting a company the other day that was working on some new technology that would allow Moore's law to continue. I can't for the life of me remember what it was since I visit several technology companies every week. I want to say it involved photo-lithography.

 

Being a gamer, when I get a new system it tends to be pretty close to the best of everything, which is overkill for 99% of applications out there. So I can keep a CPU for several years, since the only thing I really will upgrade is the graphics card. Usually that's good enough to keep pushing through most games for a couple more years. I built my current system in Dec 2011 and it's still going strong. I just replaced the graphics card with a 770 GTX last year. No issues at all with games. Well, Star Citizen gives me issues, but it's still in alpha.


 

They are amazing, flat out. I remember in the early 90's thinking that we would use memory for storage one day. It was crazy expensive then. I'm glad I got to see it and gladder still I get to use it.

Don't defrag that SSD; they have limited write capability.


Is that true?   I can't imagine a HD which you wouldn't need to defrag occasionally..

Well, fortunately for your imagination, it's not a hard drive. The way they work is fundamentally different, what with no moving parts and no waiting to get to the physical location where the data is stored.

Also, you should look into different file systems with regard to defragmentation. The file system that I've been using for years, ext4, allocates space in a way that reduces fragmentation in the first place.


I guess that's true, I'm no expert, but who knows what we haven't figured out yet?  It's the unknown possibilities with a paradigm shift like quantum computing that are exciting.  Even if it's not going to happen in my life it's something cool to imagine.

 

 

They are amazing, flat out. I remember in the early 90's thinking that we would use memory for storage one day. It was crazy expensive then. I'm glad I got to see it and gladder still I get to use it.

http://www.purestorage.com/


Is that true?   I can't imagine a HD which you wouldn't need to defrag occasionally..

 

Yeah, SSDs store things differently and their retrieval times are so much faster. Like I said, the write cycles are limited; here is an article on how to take care of them.

 

http://www.pcworld.com/article/2043634/how-to-stretch-the-life-of-your-ssd-storage.html

 

They are awesome to put your OS on, along with other large programs that you don't remove.


Is that true?   I can't imagine a HD which you wouldn't need to defrag occasionally..

The purpose of defragging is to take pieces of files that have slowly been scattered around the hard drive and consolidate them so that the hard drive's head doesn't have to go looking in 10 different places on a spinning disk to put together one file to load into memory.

 

SSDs don't have moving parts, so the locations of various pieces of data that make up a file are irrelevant.  They're all accessed electronically and no one location is any "closer" than any other.

 

It is true that defragging an SSD is a bad idea, but nowadays it's more useless than harmful.  It can slightly shorten the life span, but the technology has improved to the point that current SSDs are going to last for a long time anyway.  Definitely longer than you're likely to keep any individual computer.  If you have an older model it would have more of a negative effect.
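
For a rough sense of why the lifespan worry has faded, here's a back-of-the-envelope sketch; the capacity, cycle count, write amplification, and daily usage are all made-up illustrative numbers, not specs for any real drive.

    // Back-of-the-envelope SSD endurance estimate. Every number here is an
    // illustrative assumption, not a spec for any particular drive.
    public class SsdEnduranceSketch {
        public static void main(String[] args) {
            double driveGB = 256;            // assumed drive capacity
            double peCycles = 3000;          // assumed program/erase cycles per cell
            double writeAmplification = 2;   // assumed controller overhead
            double hostWritesPerDayGB = 20;  // assumed daily writes

            double totalHostWritesGB = driveGB * peCycles / writeAmplification;
            double years = totalHostWritesGB / hostWritesPerDayGB / 365;
            System.out.printf("~%.0f TB of host writes, roughly %.0f years at %.0f GB/day%n",
                    totalHostWritesGB / 1024, years, hostWritesPerDayGB);
        }
    }

Even with deliberately conservative numbers like these, the drive outlives the computer it's in, which is the point above.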


Well, fortunately for your imagination, it's not a hard drive. The way they work is fundamentally different, what with no moving parts and no waiting to get to the physical location where the data is stored.

Also, you should look into different file systems with regard to defragmentation. The file system that I've been using for years, ext4, allocates space in a way that reduces fragmentation in the first place.

Yes yes, but thinking this through, SSDs are efficient because there is no platter to spin and no head to move. The entire array of memory can be accessed directly rather than sequentially between disk spins.

So it's a lot more efficient than traditional HDs; but that's irrelevant... as the memory gets fragmented, your efficiency is still going to take a hit, since you have to collect data and compile it from multiple locations rather than one block read. The SSD is still not as fast as RAM, and as long as it's using the same file systems as HDs, it will still suffer inefficiencies due to disk fragmentation. Right?


The purpose of defragging is to take pieces of files that have slowly been scattered around the hard drive and consolidate them so that the hard drive's head doesn't have to go looking in 10 different places on a spinning disk to put together one file to load into memory.

That's a good answer.

 

SSDs don't have moving parts, so the locations of various pieces of data that make up a file are irrelevant.  They're all accessed electronically and no one location is any "closer" than any other.

I guess my point is it's still going to be faster to read your data in one block read from one location than it will be to access the same SSD across 10 locations to accumulate all your data with 10 separate read operations, which is how the file system would have to assemble your fragmented file.

I realize the SSD is still going to be more efficient than a traditional HD with spinning platters which physically have to be moved into place. But a fragmented SSD will still be less efficient than a defragmented SSD.

It is true that defragging an SSD is a bad idea, but nowadays it's more useless than harmful.  It can slightly shorten the life span, but the technology has improved to the point that current SSDs are going to last for a long time anyway.  Definitely longer than you're likely to keep any individual computer.  If you have an older model it would have more of a negative effect.

Another good explanation, thanks.


Yes yes, but thinking this through, SSDs are efficient because there is no platter to spin and no head to move. The entire array of memory can be accessed directly rather than sequentially between disk spins.

So it's a lot more efficient than traditional HDs; but that's irrelevant... as the memory gets fragmented, your efficiency is still going to take a hit, since you have to collect data and compile it from multiple locations rather than one block read. The SSD is still not as fast as RAM, and as long as it's using the same file systems as HDs, it will still suffer inefficiencies due to disk fragmentation. Right?

Wrong. Data is not just stored in one block, it's stored in hundreds of thousands of blocks of data... I think it's 8 KB blocks, but I forget, I had the class over 2 years ago. But the size is not that important as it is a small amount. So the computer has to read all of these blocks even if they are sequential. A normal HD has to spin and move the heads to read the blocks all across the HD. For an SSD, these blocks are all the same distance from each other no matter where they are.


Wrong. Data is not just stored in one block, it's stored in hundreds of thousands of blocks of data... I think it's 8 KB blocks, but I forget, I had the class over 2 years ago.

True, and the block size is configurable.

 

But the size is not that important as it is a small amount. So the computer has to read all of these blocks even if they are sequential.

Kind of. Yes, a computer has to read all those blocks if they are sequential. But if they are sequential, a computer can read them in one pass, rather than having to spin the platters intermittently while navigating to different blocks in order to collect all the file's data. The read traverses blocks, or the read buffer can be larger than the disk block size.

Now with an SSD you don't have the huge time hit of waiting for the physical platters to spin into position and the heads to find the specific blocks, because the SSD can address the memory directly, like RAM. But with a fragmented drive, even an SSD would have to make multiple reads rather than one long sequential read, which would still hose you up with regard to efficiency.

 

A normal HD has to spin and move the heads to read the blocks all across the HD. For an SSD, these blocks are all the same distance from each other no matter where they are.

Yes but you can read multiple blocks from a traditional HD with one instruction if they are sequential. If they aren't sequential it would turn into multiple reads.

The same would be true of an SSD, only you wouldn't have to move any heads between the multiple reads.

That's why running the defragger on an SSD would still be helpful, even if a fragmented SSD would still be better than a fragmented traditional HD where the platters have to physically spin.


A while ago, there was a saying in the tech industry ... "Grove giveth and Gates taketh away".

 

(Performance advances from Intel, under CEO Andy Grove, were consumed by the latest version of Windows.)

 

There's no doubt that the massive advances in hardware have in part been swallowed by software bloat. If we reach the end of the road with hardware advances for a while, maybe we can go back to writing more efficient code.


No, that is not how it works. On a normal HD a program can be located all over the place. The speed of the HD is affected by this because the drive has to physically move, and that causes the delay. If the program is located in one continuous block, this speeds things up because the drive doesn't have to move all over the place to read things. However, if it takes 10,000 instructions to read a program, it still takes 10,000 instructions, because the information is stored in these 8 KB blocks (actually I think it is 2 KB blocks, but it's a minor difference for this example). The HD has to be pointed to all of these blocks even if they are in order.

For example, a fragmented drive is told to read blocks 5, 1, 6, 10, 567, 9836 and 123485 to read the program. The head has to move to those locations, slowing things down. An unfragmented drive is told to read blocks 1, 2, 3, 4, 5, and 6 to read a program; the head doesn't have to move much, so it's faster. For an SSD, blocks 5, 1, 6, 10, 567 and 123485 are the same "distance" as blocks 1, 2, 3, 4, 5 and 6 are. Thus for an SSD there is no seek time. 10,000 instructions is 10,000 instructions for a fragmented drive, an unfragmented drive, and an SSD. No matter what type of drive, it always has to be pointed to each individual block even if they are in order.
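
A toy model of that point (my own sketch with made-up timing numbers, using the block lists above): the per-block reads cost the same everywhere; only the seek term changes.

    // Toy model of the explanation above: the per-block read cost is identical
    // in every case; only the head-travel (seek) cost differs. All timing
    // numbers are made up purely for illustration.
    public class SeekCostSketch {
        static double readCostMs(int[] blocks, double seekMsPerBlockMoved) {
            double perBlockReadMs = 0.05; // same cost to read any single block
            double total = 0;
            int position = 0;
            for (int b : blocks) {
                total += Math.abs(b - position) * seekMsPerBlockMoved; // head travel
                total += perBlockReadMs;                               // the read itself
                position = b;
            }
            return total;
        }

        public static void main(String[] args) {
            // Same number of blocks in both lists, for a fair comparison.
            int[] fragmented   = {5, 1, 6, 10, 567, 9836, 123485};
            int[] defragmented = {1, 2, 3, 4, 5, 6, 7};
            double hddSeekMs = 0.0001; // made-up cost per block of head travel

            System.out.println("HDD fragmented:   " + readCostMs(fragmented, hddSeekMs));
            System.out.println("HDD defragmented: " + readCostMs(defragmented, hddSeekMs));
            // An SSD has no head to move, so fragmentation doesn't change the cost.
            System.out.println("SSD fragmented:   " + readCostMs(fragmented, 0));
            System.out.println("SSD defragmented: " + readCostMs(defragmented, 0));
        }
    }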


A while ago, there was a saying in the tech industry ... "Grove giveth and Gates taketh away".

 

There's no doubt that the massive advances in hardware have in part been swallowed by software bloat. If we reach the end of the road with hardware advances for a while, maybe we can go back to writing more efficient code.

I, for one, pride myself on efficient coding. Then again, I'm focused on embedded systems, so limited resources are an expectation for me.

Is that true?   I can't imagine a HD which you wouldn't need to defrag occasionally..

You never defrag a solid state drive. Heck, a major part of the way they actually work often requires data to be fragmented physically. Unless the tech has changed since the last time I looked (may well be the case), when you modify a block, the drive writes that block to a free block and re-maps it. The old block can then be erased. You also want to hold that old storage area back from immediate reuse, to maximize drive life. The extra logical mapping isn't going to slow you down, since every access has to go through a logical map anyway. It is not like a hard drive, where a read head needs to travel to a specific place; you just create a flow from one set of map coordinates on the SSD to a set of coordinates in memory.
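
A stripped-down sketch of that remapping idea (a toy model of my own, not how any particular controller actually works):

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;

    // Toy model of an SSD's logical-to-physical block map: every overwrite of a
    // logical block lands in a fresh physical block and the map is updated, so
    // where the data physically sits is invisible to the file system.
    public class FlashMapSketch {
        private final Map<Integer, Integer> logicalToPhysical = new HashMap<>();
        private final Deque<Integer> freePhysicalBlocks = new ArrayDeque<>();

        FlashMapSketch(int physicalBlockCount) {
            for (int i = 0; i < physicalBlockCount; i++) freePhysicalBlocks.add(i);
        }

        void write(int logicalBlock) {
            Integer old = logicalToPhysical.get(logicalBlock);
            int fresh = freePhysicalBlocks.remove();      // always write into a free block
            logicalToPhysical.put(logicalBlock, fresh);   // re-map the logical block
            if (old != null) freePhysicalBlocks.add(old); // old copy gets erased/reused later
        }

        public static void main(String[] args) {
            FlashMapSketch ssd = new FlashMapSketch(8);
            ssd.write(0);  // first write of logical block 0
            ssd.write(0);  // "modifying" it actually moves it to a new physical block
            ssd.write(0);
            System.out.println(ssd.logicalToPhysical); // prints {0=2}
        }
    }

The file system keeps asking for logical block 0 the whole time; the fact that its physical home keeps moving is invisible from the outside, which is why trying to "defragment" the logical layout buys you nothing.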

A while ago, there was a saying in the tech industry ... "Grove giveth and Gates taketh away".

 

There's no doubt that the massive advances in hardware have in part been swallowed by software bloat. If we reach the end of the road with hardware advances for a while, maybe we can go back to writing more efficient code.

I remember developing some hugely complex GUIs that had to live in just 512k of RAM.

