Monday, August 17th 2009
IBM Scientists Use DNA Scaffolding To Build Tiny Circuit Board
Today, scientists at IBM Research and the California Institute of Technology announced a scientific advancement that could be a major breakthrough in enabling the semiconductor industry to pack more power and speed into tiny computer chips, while making them more energy efficient and less expensive to manufacture.
IBM researchers and collaborator Paul W.K. Rothemund, of the California Institute of Technology, have made an advance in combining lithographic patterning with self-assembly - a method to arrange DNA origami structures on surfaces compatible with today's semiconductor manufacturing equipment.
Today, the semiconductor industry is faced with the challenges of developing lithographic technology for feature sizes smaller than 22 nm and exploring new classes of transistors that employ carbon nanotubes or silicon nanowires. IBM's approach of using DNA molecules as scaffolding - where millions of carbon nanotubes could be deposited and self-assembled into precise patterns by sticking to the DNA molecules - may provide a way to reach sub-22 nm lithography.
The utility of this approach lies in the fact that the positioned DNA nanostructures can serve as scaffolds, or miniature circuit boards, for the precise assembly of components - such as carbon nanotubes, nanowires and nanoparticles - at dimensions significantly smaller than possible with conventional semiconductor fabrication techniques. This opens up the possibility of creating functional devices that can be integrated into larger structures, as well as enabling studies of arrays of nanostructures with known coordinates.
"The cost involved in shrinking features to improve performance is a limiting factor in keeping pace with Moore's Law and a concern across the semiconductor industry," said Spike Narayan, manager, Science & Technology, IBM Research - Almaden. "The combination of this directed self-assembly with today's fabrication technology eventually could lead to substantial savings in the most expensive and challenging part of the chip-making process."
The techniques for preparing DNA origami, developed at Caltech, cause single DNA molecules to self-assemble in solution via a reaction between a long single strand of viral DNA and a mixture of different short synthetic oligonucleotide strands. These short segments act as staples - effectively folding the viral DNA into the desired 2D shape through complementary base-pair binding. The short staples can be modified to provide attachment sites for nanoscale components at resolutions (separation between sites) as small as 6 nanometers (nm). In this way, DNA nanostructures such as squares, triangles and stars can be prepared with dimensions of 100-150 nm on an edge and a thickness of the width of the DNA double helix.
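To make the folding rule concrete, here is a minimal, purely illustrative Python sketch (not from the paper) of complementary base pairing - the chemistry that lets a staple strand latch onto a matching stretch of the viral scaffold. The sequence shown is a made-up example.

```python
# Illustrative sketch only: complementary base pairing, the rule that lets
# a short staple strand bind a matching region of the viral scaffold.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the sequence that pairs with `strand` (read antiparallel)."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

scaffold_region = "ATGCCGTA"        # hypothetical stretch of scaffold DNA
staple_half = reverse_complement(scaffold_region)
print(staple_half)                  # TACGGCAT - the half-staple that binds it
```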
The lithographic templates were fabricated at IBM using traditional semiconductor techniques - the same used to make the chips found in today's computers - to etch out the patterns. Either electron-beam or optical lithography was used to create arrays of binding sites of the proper size and shape to match those of individual origami structures. Key to the process was the discovery of a template material and deposition conditions that afford high selectivity, so that the origami binds only to the patterns of "sticky patches" and nowhere else.
The paper on this work, "Placement and orientation of DNA nanostructures on lithographically patterned surfaces," by scientists at IBM Research and the California Institute of Technology, will be published in the September issue of Nature Nanotechnology and is currently available online.
Source:
IBM
51 Comments on IBM Scientists Use DNA Scaffolding To Build Tiny Circuit Board
So in other words, human brains are way ahead in the software department. For now... *cue ominous music*
And just like humans, we will not do anything about it until it's nearly too late.
Anyhow, too late for what?
There is one for blind people too, if I remember correctly. Yep, we fail at nearly everything except destroying everything, and ironically enough it will be us who destroys us. No, I think as of yet it cannot, but I do think a brain can be interfaced with a computer, or we are damn close to it at least... I bet they're closer to interfacing the brain with a computer than we actually know. And I mean on a larger scale than in pets and for some types of disability.
Sooner or later it will happen. And we know we can do it if we learn enough, as technological advancements will allow us more ways to do it.
Some argue that we are all capable of solving very complex math problems; it's just that we don't know we are doing it. The most common example given is the ability to catch a ball thrown at you: you have to figure out how fast the ball is going, the path it is taking, and then where to put your hand to catch it. Many other variables are calculated along the way as well.
All of the data is input and calculated in fractions of a second and updated up to the very split second the ball lands in our hand.
All very simple, we do it every day, but it requires complex computations to pull it off.
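For anyone curious, here is a rough sketch of the sort of arithmetic being described, assuming idealized projectile motion with no air resistance; every number is a made-up example.

```python
import math

# Sketch of the "hidden" math in catching a ball: simple projectile motion,
# ignoring air resistance. All numbers are invented examples.
g = 9.81                         # gravity, m/s^2
speed = 15.0                     # throw speed, m/s
angle = math.radians(40)         # launch angle above horizontal

vx = speed * math.cos(angle)     # horizontal velocity component
vy = speed * math.sin(angle)     # vertical velocity component

flight_time = 2 * vy / g         # time to come back down to launch height
landing_x = vx * flight_time     # where to put your hand

print(f"arrives in {flight_time:.2f} s, about {landing_x:.1f} m away")
```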
Humans can't even get close because, figuratively speaking, 0-9 are alien concepts to the brain. The brain must calculate a quantity, assign it a value, then interpret the value via language, announce it, and repeat. In this regard, the brain operates about 240,000,000 times slower than a Core i7 920.
Both have distinct advantages and disadvantages. Neural networks (brains) have the ability to store, interpret, and recall images at a rate at least three times faster than a CPU; the more complex the image, the greater the lead. Neural networks also have the ability to learn and repair damage (to some extent), which CPUs do not. Neural networks are pretty lousy at math, though, where CPUs kick ass.
No doubt, a merger of the two would be ideal, but that means heading into territory I'm not so certain we should be in (you've seen, or at least heard of, all the sci-fi material out there depicting the possibilities).
Oh, computers can't make a curve either--especially with digital monitors. :laugh: That is figured using one's understanding of how one expects a ball to fly, just like how one expects that stepping off a cliff means the end of you. The brain doesn't handle the situation with a bunch of equations, mathematics, variables, algebra, etc. It handles it from a very simple perspective that is learned through repetition.
Take, for example, a puppy. You don't have to teach it all of the concepts of math to make it catch a ball. You have to toss a ball at its face until it figures out it has to open its mouth and catch it. Do that for a few days and, if the dog is coordinated enough, it can pull off some pretty remarkable moves to catch that ball in short order. Not because it understands how gravity works--it understands how you throw it. Throw it a different way, like putting a curve on it, and, just like a batter, there's a good chance it won't be able to catch it. The brain either can't register the rotation of the ball fast enough or the eyes simply can't pick up the detail to make that decision. Either way, dog and human alike are fooled. Throw a curve ball every time and voila, both hit/catch it.
Acting out expectations is a simple task for a brain to achieve (requires no "computations"). A computer, on the other hand, could use a high speed camera to tell you the trajectory of a ball just by watching the laces and position of the ball over a few frames. The only difficulty there is programming the computer to "find" the ball and then "find" the laces. The calculations are readily handled by the CPU's architecture with some simple instructions based on fluid dynamics, velocities, and accelerations.
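As a rough illustration of that last step, here is a sketch that assumes the ball's positions have already been extracted from a few frames (the "find the ball" part, which is the only real difficulty noted above) and fits a simple trajectory to extrapolate. The camera rate and detections are invented.

```python
import numpy as np

# Given ball positions already extracted from a few frames, fit a simple
# trajectory and extrapolate. All values here are invented examples.
fps = 240.0                                  # hypothetical high-speed camera
t = np.arange(4) / fps                       # timestamps of four frames
x = np.array([0.00, 0.05, 0.10, 0.15])       # example detections, metres
y = np.array([2.00, 2.02, 2.03, 2.03])

a, vy0, y0 = np.polyfit(t, y, 2)             # quadratic: constant acceleration
vx, x0 = np.polyfit(t, x, 1)                 # linear: constant velocity

t_ahead = 0.1                                # predict 0.1 s into the future
print(x0 + vx * t_ahead, y0 + vy0 * t_ahead + a * t_ahead**2)
```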
Stepping off of a cliff is confusing instinct with rational thinking. In tests done with newborns, it was shown that the babies would not cross over a perceived drop of a few feet even though the drop-off was covered by a sheet of glass. The babies would go right up to the edge and stop. They knew instinctively that it was a dangerous situation. Their instinct warned them of a drop-off, but they could not at that point in time know or comprehend glass. The incentive used to try to get the babies to cross was the mom on the other side with a bottle. Definitely not a learned response.
As to the math side of it, we may not be figuring out the paths in the traditional sense, but calculations are nonetheless being performed. I dug this up:
So how does the same gooey substance simultaneously acquire visual data, calculate positional information, and gauge trajectory to let a lizard’s tongue snatch a fly, a dog’s mouth catch a Frisbee, or a hand catch a falling glass? “With the thousands of muscles in the body, the motor cortex clearly isn’t ‘thinking’ in any sense about movement,” says UC San Diego neuroscientist Patricia Churchland. According to Stanford University’s Krishna Shenoy, the brain seems to create an internal model of the physical world, then, like some super-sophisticated neural joystick, traces intended movements onto this model. “But it’s all in a code that science has yet to crack,” he says. Whatever that code is, it’s not about size. “Even a cat’s brain can modify the most complicated motions while executing them.”
www.ocztechnology.com/products/ocz_peripherals/nia-neural_impulse_actuator
However, a computer can be programmed to learn from repetition. For instance, if you take a robot arm and tell the computer to record the movements as you guide the arm along a path, the computer can repeat that path. If you throw the ball to the same place all the time, it could catch it every time. The only reason they don't behave like humans is because they process everything differently from humans (binary instead of neurons). That's exactly what I'm getting at. Neurons are very good at controlling muscles (they speak the same language). The only real delay is in visual cues, as it takes the brain longer to recognize something is flying at you than to position your hands to catch or deflect it.
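A minimal sketch of that record-and-replay idea, with read_joints() and move_to() standing in for whatever API a real arm would expose (both are hypothetical placeholders):

```python
import time

# Record-and-replay ("teach by guiding") sketch. read_joints() and move_to()
# stand in for a real robot arm's API - both are hypothetical placeholders.

def record(read_joints, duration_s=5.0, hz=50):
    """Sample joint angles while a human guides the arm along a path."""
    path = []
    for _ in range(int(duration_s * hz)):
        path.append(read_joints())    # capture the current pose
        time.sleep(1.0 / hz)
    return path

def replay(path, move_to, hz=50):
    """Step the arm back through the recorded poses at the same rate."""
    for pose in path:
        move_to(pose)                 # command the same pose again
        time.sleep(1.0 / hz)
```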
There are of course exceptions. You learn what can and cannot burn you, etc.
I at no time stated that the brain worked like a computer. I said that the brain was doing the calculations on a level and in a way that we do not understand. It absolutely is doing these calculations: to stick with the ball example, it is able to figure out/calculate the variables. If a gust of wind catches the ball at the last second, the brain will recalculate the flight path and move the hand to catch the ball.
The brain can miscalculate and not catch the ball just as a computer can miscalculate due to the input, sensory or binary, being wrong.
We become better at catching the ball through repetition, not because of the repetition itself, but because the brain is learning that the ball will not always follow the calculated path. One thread (ha ha) is calculating the flight path, while another one is waiting for a variable and will then recalculate and supersede the first one if a variable is introduced.
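Taking the thread metaphor literally for a moment, a toy sketch of that recalculate-and-supersede pattern might look like this, with the actual ballistics reduced to a placeholder:

```python
import threading
import queue

# Toy version of the two-"thread" idea: act on one predicted path until a
# new variable (a gust of wind) is observed, then recalculate and supersede.
disturbances = queue.Queue()

def predict_path(wind=0.0):
    return f"flight path assuming wind={wind}"    # placeholder for real math

def catcher():
    plan = predict_path()
    while True:
        try:
            wind = disturbances.get(timeout=0.2)  # did a variable appear?
            plan = predict_path(wind)             # supersede the old plan
        except queue.Empty:
            break                                 # nothing new; commit
    print("catching along:", plan)

watcher = threading.Thread(target=catcher)
watcher.start()
disturbances.put(2.5)                             # a gust of wind, m/s
watcher.join()
```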
I will give you that catching a ball is somewhat learned as it is instinct to get out of the way when something is thrown at you. You sometimes do not get out of the way in time due to slow reaction, fear or other reasons.
The computer, which for all intents and purposes is nothing more than a glorified abacus, is so vastly inferior to the brain, not on the 1+1=2 scale, but on the "aha, I've got it" scale, that it will be centuries before it is even close to cognitive thinking.
The entire concept of math/algebra is used to describe what happens around us, not in any way to control it. The decimal system represents an epic fail with arcs, zero, and on the quantum level. The world doesn't run on numbers--we try to stick numbers on everything to make it appear less chaotic. It also gives a sense of control over it, which in turn suppresses fears. Numbers still don't define nature--they aren't the lowest common denominator. If that were the case, then eventually you'd never miss (a computer can be made to behave that way). The reality is that even the best professional baseball players can and do miss. Expectations determine hit or miss, not in-flight calculations. As evidence of this, most players know whether they are going to swing before the ball is even thrown. How the batter swings (bunt, slow, or hard) is also determined beforehand. Once the ball is thrown, all that is decided, based on expectations, is when to swing the bat in their predetermined way. If it is a curve ball and the batter didn't expect it, the batter will most likely miss or foul. If the ball is moving slower than expected, the batter will most likely swing too early. If the ball is moving faster than expected, the batter will swing late.
The swing itself is controlled through experience. This is why a different bat than usual can really screw a batter up. On the other hand, if you give a bat to a robot, it can calculate the weight distribution of the bat, and all the optics can be set up to never miss. Humans (margin of error) never operate with the degree of exactness that computers always operate on (zero errors, only operator/programmer error). A computer's expertise is binary; the brain's expertise is recognizing food, threats, and genetic compatibility. A computer can do what the brain does inefficiently, and the brain can do what computers do inefficiently. Leave the thinking to humans and the calculating to computers.
It has been a very entertaining topic and I thank you for the stimulating conversation.
I will counter the last couple of points and then give you the floor. You would always stand a chance of missing, as there are an infinite number of variables that could come into play: the catcher could misstep, he could see something in the stands that distracts him, etc. The catcher, however, is a pro, and therefore his abilities are naturally better than the average person's. This is why he very seldom misses compared to the average person. This particular example has nothing to do with the brain; this is a case of pure gambling.
If the pitcher has thrown a curve, two fastballs, and a drop, and the next pitch, according to the films and his stats, will be a curve, then all calculations, however performed, are removed, as the batter is going to swing in a particular way with no adjustments or compensation. I would argue that these are pre-programmed survival instincts: food is a try-it-and-see proposition; threats such as loud noises, sudden movement, etc. are common in all animals; and as for genetic compatibility, well, there are anti-sheep laws out there for a reason. :laugh:
The brain's true gift is reasoning; no other animal can even come close to it. Reasoning encompasses the "I wonder why that happens..."
The other gift is abstract thinking. It is not always right, but it is a very powerful mechanism. It may have been wrong in the "he is sick because demons are in him..." case, but nonetheless this was a very abstract and profound thought.
The floor is yours.