Tuesday, November 10th 2015

NVIDIA Announces Jetson TX1 Module to Bring Deep Learning to Robots and Drones

NVIDIA today unveiled a credit-card-sized module that harnesses the power of machine learning to enable a new generation of smart, autonomous machines that can learn. The NVIDIA Jetson TX1 module addresses the challenge of creating a new wave of millions of smart devices -- drones that don't just fly by remote control, but navigate their way through a forest for search and rescue; compact security surveillance systems that don't just scan crowds, but identify suspicious activity; and robots that don't just perform tasks, but tailor them to individuals' habits -- by incorporating capabilities such as machine learning, computer vision, navigation and more.

Jetson TX1 is the first embedded computer designed to process deep neural networks -- computer software that can learn to recognize objects or interpret information. This new approach to programming computers is called machine learning, and it can be used to perform complex tasks such as recognizing images, processing conversational speech, or analyzing a room full of furniture and finding a path to navigate across it. Machine learning is a groundbreaking technology that will give autonomous devices a giant leap in capability.
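To make "processing a deep neural network" concrete, here is a minimal two-layer forward pass in plain Python. The weights and inputs are invented for illustration; a real network learns its weights from data and runs layers like these, millions of parameters at a time, on hardware such as the TX1's GPU.

```python
# Illustrative sketch only: a neural network is layers of weighted sums
# followed by a nonlinearity. All numbers below are made up.

def relu(xs):
    # Rectified linear unit: the most common nonlinearity.
    return [max(0.0, v) for v in xs]

def dense(inputs, weights, biases):
    # Fully connected layer: out[j] = sum_i inputs[i] * weights[i][j] + biases[j]
    return [sum(i * w for i, w in zip(inputs, col)) + b
            for col, b in zip(zip(*weights), biases)]

# A toy network mapping 3 inputs through a 2-unit hidden layer to 2 outputs.
w1 = [[0.2, -0.5], [0.1, 0.3], [-0.4, 0.8]]   # 3x2 input-to-hidden weights
b1 = [0.0, 0.1]
w2 = [[1.0, -1.0], [0.5, 0.5]]                 # 2x2 hidden-to-output weights
b2 = [0.0, 0.0]

x = [1.0, 2.0, 3.0]
hidden = relu(dense(x, w1, b1))
output = dense(hidden, w2, b2)
print(output)
```

Recognizing an image or a spoken word is, computationally, many such layers stacked; that is the workload the module is built to accelerate.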
With its 1 teraflops of performance -- comparable to the fastest supercomputer from 15 years ago -- Jetson delivers exceptional performance for machine learning, computer vision, GPU computing and graphics, while drawing very little power.
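The 1-teraflop figure can be sanity-checked with back-of-envelope arithmetic. The GPU clock speed below is an assumption (roughly 1 GHz; the press release does not state it), and the 2x FP16 rate reflects the half-precision support discussed for this Maxwell part:

```python
# Back-of-envelope check of the 1 TFLOPS claim (illustrative; the ~1 GHz
# clock is an assumption, not a figure from the press release).
cores = 256                  # Maxwell CUDA cores, from the spec list below
clock_hz = 1.0e9             # assumed ~1 GHz GPU clock
flops_per_core_fp32 = 2      # one fused multiply-add per cycle = 2 FLOPs

fp32_tflops = cores * clock_hz * flops_per_core_fp32 / 1e12
fp16_tflops = fp32_tflops * 2    # FP16 runs at twice the FP32 rate

print(fp32_tflops, fp16_tflops)  # ~0.5 FP32, ~1.0 FP16
```

This is why the headline number is quoted at half precision: FP16 is the format deep-learning inference typically uses, and it doubles throughput on this hardware.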

"Jetson TX1 will enable a new generation of incredibly capable autonomous devices," said Deepu Talla, vice president and general manager of the Tegra business at NVIDIA. "They will navigate on their own, recognize objects and faces, and become increasingly intelligent through machine learning. It will enable developers to create industry-changing products."

Available as a module, Jetson TX1 is also built into a Developer Kit, which enables hobbyists and professionals to develop and test highly advanced autonomous devices. This makes it easy to transition from development to manufacturing and production.

Industry Support
Ovum Principal Analyst Michael Azoff said: "Jetson TX1 is a significant advance in moving machine learning applications from research into the real world. It has uses that range from artificial intelligence-assisted robots, to advanced systems in automobiles, and to Internet of Things-connected intelligent machines. The ecosystem around Jetson will accelerate the transfer of AI from lab to real-world machines."

Sertac Karaman, professor of aeronautics and astronautics at the Massachusetts Institute of Technology, said, "NVIDIA's Jetson TX1 is so powerful and easy to use, we decided to base MIT's robotics systems and science course around it. Our students will use TX1 for embedded vision, stereo reconstruction and machine learning, so their scale racecars will be able to detect and avoid obstacles. I'm excited with the possibilities that Jetson offers."

Jeff Bier, president of Berkeley Design Technology, Inc., said: "Based on BDTI's independent analysis, the Jetson TX1 stands out in three respects. First, developing applications on the Jetson TX1 feels more like developing on a PC than like developing on a typical embedded board. Second, the JetPack TX1 installer makes it easy to install a system image on the board. Third, support for CUDA enables developers to use the GPU to accelerate their applications without having to delve into the complexities of GPU programming."

System Specs and Software
Key features of Jetson TX1 include:
  • GPU: 1 teraflops, 256-core Maxwell architecture-based GPU offering best-in-class performance
  • CPU: 64-bit ARM Cortex-A57 CPUs
  • Video: 4K video encode and decode
  • Camera: Support for 1400 megapixels/second
  • Memory: 4GB LPDDR4; 25.6 GB/s bandwidth
  • Storage: 16GB eMMC
  • Wi-Fi/Bluetooth: 802.11ac 2x2; Bluetooth ready
  • Networking: 1 Gigabit Ethernet
  • OS Support: Linux for Tegra
  • Size: 50mm x 87mm, slightly smaller than a credit card
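The memory bandwidth entry follows from simple arithmetic. The bus width and transfer rate below are assumptions consistent with LPDDR4 of this era (neither is stated in the release itself), and they yield 25.6 gigabytes per second:

```python
# Sanity check of the memory bandwidth spec (assumed: 64-bit LPDDR4
# interface at 3200 MT/s; these figures are not in the press release).
bus_width_bits = 64
transfers_per_sec = 3200e6           # 3200 mega-transfers per second
bytes_per_transfer = bus_width_bits / 8

bandwidth_gb_s = transfers_per_sec * bytes_per_transfer / 1e9
print(bandwidth_gb_s)  # 25.6
```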
Jetson TX1 includes the most comprehensive SDK for embedded visual computing, including:
  • cuDNN is a CUDA-accelerated library for machine learning. For both training and inference, it is compatible with many industry-standard frameworks, including Caffe, Theano and Torch.
  • VisionWorks is a CUDA-accelerated library and framework for computer vision. It is an implementation of the OpenVX 1.1 specification with additional NVIDIA extensions.
  • Support for the latest graphics drivers and APIs, including OpenGL 4.5, OpenGL ES 3.1 and Vulkan.
  • Support for CUDA 7.0. CUDA turns the GPU into a general-purpose processor, giving developers access to tremendous parallel performance and power efficiency.
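The core operation cuDNN accelerates is convolution, the building block of image-recognition networks. A direct 2D convolution can be sketched in a few lines of plain Python (this is not cuDNN's API, just the computation it performs at scale on the GPU):

```python
# What cuDNN accelerates, in miniature: direct 2D "valid" convolution,
# the core operation of convolutional neural networks. Pure-Python
# illustration; cuDNN provides heavily optimized GPU implementations.

def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1   # no padding: output shrinks
    out = [[0.0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            # Slide the kernel over the image and accumulate products.
            out[y][x] = sum(image[y + i][x + j] * kernel[i][j]
                            for i in range(kh) for j in range(kw))
    return out

# A 3x3 Laplacian-style kernel over a 4x4 image gives a 2x2 result.
img = [[1, 2, 3, 0],
       [4, 5, 6, 1],
       [7, 8, 9, 2],
       [1, 1, 1, 1]]
k = [[0,  1, 0],
     [1, -4, 1],
     [0,  1, 0]]
print(conv2d(img, k))
```

A real network applies thousands of such kernels per layer across megapixel inputs, which is why a parallel GPU implementation matters.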
Availability
The NVIDIA Jetson TX1 Developer Kit can be preordered starting Nov. 12 for $599 in the United States, with availability in other regions in the next few weeks. The Jetson TX1 module will be available in early 2016 at a suggested price of $299 (in quantities of 1,000 or more) from distributors around the world.

31 Comments on NVIDIA Announces Jetson TX1 Module to Bring Deep Learning to Robots and Drones

#1
95Viper
btarunr: "Jetson TX1 will enable a new generation of incredibly capable autonomous devices," said Deepu Talla, vice president and general manager of the Tegra business at NVIDIA. "They will navigate on their own, recognize objects and faces, and become increasingly intelligent through machine learning. It will enable developers to create industry-changing products."
Shades of Skynet
Posted on Reply
#2
truth teller
600 smackaroos
doesnt come with a 7" full hd touch screen display
doesnt come with a 3500mah battery
i cant put a sim card in it and make phone calls
doesnt fit in my pocket
also doesnt run crysis 1 at very low settings

if you think people are going to pay high end smartphone money for a castrated arm based smartphone you can stick some wires in you need to take a step back and rethink your strategy
Posted on Reply
#3
HumanSmoke
truth teller: doesnt come with a 7" full hd touch screen display
doesnt come with a 3500mah battery
i cant put a sim card in it and make phone calls
doesnt fit in my pocket
also doesnt run crysis 1 at very low settings

if you think people are going to pay high end smartphone money for a castrated arm based smartphone you can stick some wires in you need to take a step back and rethink your strategy
You should get on the phone to those idiots at M.I.T. and let them know what a mistake they are making. They might even give you tenure out of sheer gratitude.
Actually seems pretty cheap for a dev kit when measured against AMD's Seattle development board which is ticketed at $3000
Posted on Reply
#4
truth teller
HumanSmoke: You should get on the phone to those idiots at M.I.T. and let them know what a mistake they are making. They might even give you tenure out of sheer gratitude.
Actually seems pretty cheap for a dev kit when measured against AMD's Seattle development board which is ticketed at $3000
that is server grade hardware, 8 core 128gb ram, released in q4 2013 (i think)
this however is an expensive intel edison knock off, yet to be released in q4 2015, that can only get 1teraflop when using 16 bit floating point (maybe 0.5 in fp32 if lucky/overclocked)
so i dont see why your comparison is more fitting than my previous one, face it, this is almost a smartphone on a breadboard (some missing parts)
Posted on Reply
#5
HumanSmoke
truth teller: that is server grade hardware, 8 core 128gb ram, released in q4 2013 (i think)
The A1100 dev board has 16GB DDR3 support. You are confusing the board with what the production A1100 supposedly supports.
Just as a contrast the 1GB DDR3 version is supposedly priced ~$400
truth teller: this however is an expensive intel edison knock off, yet to be released in q4 2015, that can only get 1teraflop when using 16 bit floating point (maybe 0.5 in fp32 if lucky/overclocked)
Well, duh! Neural networks use half-precision, so why wouldn't they be measuring FLOPs at FP16?

Anyhow, be sure to let me know you get on putting these M.I.T. longhairs right - you clearly have a better grasp of the situation than they do.
Posted on Reply
#7
vega22
95Viper: Shades of Skynet
oh fuck....
Posted on Reply
#8
Musaab
NVIDIA made good use of its powerful GPUs and the CUDA programming language. NVIDIA is securing Tegra's future.
Posted on Reply
#9
Diverge
Anyone know how to get the educational pricing? I still have a working .edu email :)
Posted on Reply
#10
truth teller
HumanSmoke: neural networks use half-precision (floating point)
i hope you are not serious, have you ever implemented one? most dont even use floating point at all, integer based chains is where it is at

and if the single target is nn and/or deep learning, one can create an array of fpgas fed by a raspberry pi, which is way cheaper, in most cases with better performance (hell, even a ps4 can do 2teraflops) and you are not locked to one linux distribution/binaries.
HumanSmokeyou clearly have a better grasp of the situation
better luck next time ???
Posted on Reply
#11
HumanSmoke
truth teller: i hope you are not serious, have you ever implemented one? most dont even use floating point at all, integer based chains is where it is at
I suggest you give this and this a read. They lead to this.
GPU based deep learning seems to be a growth industry. From what I've seen, FPGA, Xeon/Xeon Phi, and IBM Power seem limited in scope because of the individual attention required for each project. i.e. the difference between basic/pure research and commercialization - at least for now. While FPGA's will obviously gain traction in the future, they haven't progressed beyond limited research stage at this point.

But anyhow, let me know how you get on convincing M.I.T., Stanford et al. (not to mention all the start-ups and neural networking vendors leveraging GPU cloud-based services like AWS and Azure) how they are making a huge mistake. I'm sure they'd love to hear from you.
Posted on Reply
#12
truth teller
HumanSmoke: *bunch of links with linear algebra operations and simplified views on how neural networks are built and work*
excuse me?

also you keep mentioning mit and such, if tomorrow someone, at said institutions, released a paper claiming the world is square would it matter? wtf... just because they at some point used a kepler based _server_ solution does it mean they will buy and use this? jesus...
Posted on Reply
#13
HumanSmoke
truth teller: also you keep mentioning mit and such, if tomorrow someone, at said institutions, released a paper claiming the world is square would it matter? wtf... just because they at some point used a kepler based _server_ solution does it mean they will buy and use this? jesus...
Do you have reading difficulties? From the article:
btarunr: Sertac Karaman, professor of aeronautics and astronautics at the Massachusetts Institute of Technology, said, "NVIDIA's Jetson TX1 is so powerful and easy to use, we decided to base MIT's robotics systems and science course around it.
While I generally support the TPU community, I am faced with a choice of who to believe:
a: Anonymous forum poster, or
b: Professor of Aeronautics and Astronautics with a Ph.D. in electrical engineering and computer science from one of the most prestigious learning institutes in the U.S. The professor seems to be not alone, since a company called Kespry seems to be a Day One launch partner as well.
Posted on Reply
#14
truth teller
HumanSmoke: Do you have reading difficulties?
the article mentions that the hardware will be used for teaching purposes _only_, that means automation with probably collision avoidance in small robots, and maybe some object detection/avoidance on some visual computing and/or robotic courses.
your posts on the other hand suggests that this will be massively used by mit/et-al to build a large scale neural network that will be so influential that people will flock to the stores to buy this and detect the faces of which neighbor pissed in your lawn from cctv footage.
my point still stands, this is nothing more than a smartphone prototype on a bread board so you can stick wires in it, and charging 600 shekels for it is just ludicrous.
all those kepler based servers had one thing going in their favor: a standard interface, pci express, this has a fan on top and some wires beneath. they are hoping this will become a platform and people (read companies) will build _custom_ hardware for/around it, and i will bet you my anonymous phd degree in forum posting it wont pan out the way they want. now if this was priced like the tk1 it might have a chance, folks used it in school, "oh thats cool, i might get one", it would get the average electron/bit fiddlers attention and companies would start to see the value in it, but like this no way jose

posted via my esp8266 based swarm computing array
...
Posted on Reply
#15
HumanSmoke
truth teller: the article mentions that the hardware will be used for teaching purposes _only_, that means automation with probably collision avoidance in small robots, and maybe some object detection/avoidance on some visual computing and/or robotic courses.
Last time I checked, universities expend money to buy equipment and infrastructure.
truth teller: your posts on the other hand suggests that this will be massively used by mit/et-al to build a large scale neural network that will be so influential that people will flock to the stores to buy this and detect the faces of which neighbor pissed in your lawn from cctv footage.
I suggested no such thing, so quit the hyperbole and trolling. The only other example I posted was for an industrial use. I made no mention of "people flocking to the stores". You obviously do have reading difficulties - you can get help for that. If you could read, you would note that the Jetson launch partners (with the possible exception of Stereolabs) are commercial-industrial concerns - Kespry, Percepto, and Herta Security.
Just because your thinking doesn't extend beyond Wal-Mart, doesn't mean that everyone else is bound by the same limitations.
Posted on Reply
#17
truth teller
HumanSmoke: I suggested no such thing
HumanSmoke: You obviously do have reading difficulties
i see...
truth teller: if you think people are going to pay high end smartphone money for a castrated arm based smartphone you can stick some wires in you need to take a step back and rethink your strategy
HumanSmoke: You should get on the phone to those idiots at M.I.T. and let them know what a mistake they are making.
clearly i am the one that cant read, after all by mentioning the high price point for regular consumer i somehow insulted the mit folks
HumanSmoke: quit the hyperbole and trolling
yeah... whatever you are trying to acomplish with all those "just googled" links aint working... thx for the repeated insults btw
Posted on Reply
#18
HumanSmoke
truth teller: if you think people are going to pay high end smartphone money for a castrated arm based smartphone you can stick some wires in you need to take a step back and rethink your strategy
truth teller: clearly i am the one that cant read, after all by mentioning the high price point for regular consumer i somehow insulted the mit folks
HumanSmoke: I suggested no such thing, so quit the hyperbole and trolling.
truth teller: if you think people are going to pay high end smartphone money for a castrated arm based smartphone you can stick some wires in you need to take a step back and rethink your strategy
truth teller: thx for the repeated insults btw
Just pointing out your lack of comprehension. Glad you liked my terminology though - I usually reserve it for trolls. QED.

Anyhow, I'm done here. Continue to wail about how an industrial/commercial product is an instant failure because...
truth teller: also doesnt run crysis 1 at very low settings
...because I don't care, and I seriously doubt M.I.T., and the rest of the launch partners do either.
Posted on Reply
#19
Rockarola
HumanSmoke: Just pointing out your lack of comprehension. Glad you liked my terminology though - I usually reserve it for trolls. QED.

Anyhow, I'm done here. Continue to wail about how an industrial/commercial product is an instant failure because...

...because I don't care, and I seriously doubt M.I.T., and the rest of the launch partners do either.
Whoa, you are not very good at playing nice, are you?
Why all the anger? MIT is not infallible, I am not infallible and neither are you...no need to call posters trolls, unless they live under a bridge and harass goats! ☺
Posted on Reply
#20
HumanSmoke
Rockarola: Whoa, you are not very good at playing nice, are you?
Why all the anger? MIT is not infallible, I am not infallible and neither are you...no need to call posters trolls, unless they live under a bridge and harass goats! ☺
Reasonable discussion = Cool
People putting words into your mouth and indulging in straw man arguments = Not cool

You can easily check my posting record to ascertain what is acceptable and what is not WRT a reasonable discussion IMO. Others who think commercial hardware is a loser if it can't play Crysis I don't tend to put into that category either.
Posted on Reply
#21
Rockarola
HumanSmoke: Reasonable discussion = Cool
People putting words into your mouth and indulging in straw man arguments = Not cool

You can easily check my posting record to ascertain what is acceptable and what is not WRT a reasonable discussion IMO. Others who think commercial hardware is a loser if it can't play Crysis I don't tend to put into that category either.
I can check YOUR record to see what is acceptable? Really? Like really really?

If you don't get the "can it run Crysis" joke, you should step back, take a deep breath and grow a sense of humour.
'mic drop' I'm outta here, before I say something my mother will be angry about.
Posted on Reply
#22
HumanSmoke
Rockarola: I can check YOUR record to see what is acceptable? Really? Like really really?
Yes. What is acceptable in my opinion.
HumanSmoke: You can easily check my posting record to ascertain what is acceptable and what is not WRT a reasonable discussion IMO.
Who else's opinion would be applicable to my posts?
Rockarola: If you don't get the "can it run Crysis" joke, you should step back, take a deep breath and grow a sense of humour.
"Can it run Crysis" is less a joke than an overused, trite meme. But whatever. I hear some people found Bernard Manning funny as well.
Posted on Reply
#23
truth teller
HumanSmoke: Just pointing out your lack of comprehension.
indeed, im having quite a hard time trying to figure out why you went straight into "attack mode"
the product has two price points, same sku, one for educational institutions and another for "people"/individuals, as i've put it (are corporations people? despite having people in them i dont think so), with a $300 premium on one of those (2 fold!), you promptly mention that mit is going to use it ("big whoop", for classes) and ask multiple times for me to have a talk with some smarter phd there.
so just answer me one thing, what did my first post, or any of the other ones, have to do with that when i am/was only referring to the price when sold to individuals? and im the troll with reading problems? ive got bad news for you "buddy"...
who (which individual, so you can understand properly) in their right mind will pay _600 american pesos_ for a phones cpu with wires poking out of it? you can continue to enjoy listing more corporations that are on the "sponsors tab", and it wont change anything about what ive said, nor its veracity
*and post*
Posted on Reply
#24
95Viper
HumanSmoke: Who else's opinion would be applicable to my posts?
The moderation staff's.
Posted on Reply
#25
HumanSmoke
truth teller: the product has two price points, same sku, one for educational institutions and another for "people"/individuals, as i've put it...[ ]...with a $300 premium on one of those (2 fold!)
What is unusual about that? A lot of hardware and software has discounted pricing for educational users ( and bulk discounts for non-profit org's IIRC).
truth teller: so just answer me one thing, what did my first post, or any of the other ones, have to do with that when i am/was only referring to the price when sold to individuals?
The whole premise of your argument seems to revolve around Nvidia charging too much for too little in relation to its competition. Correct? And because of this, the product is instant fail?
The former charge can be levelled at pretty much all of Nvidia's product stack. Quadro costs a premium, Tesla costs a premium, GeForce costs a premium etc etc... Yet, even with those prime examples of premium priced hardware backed by a CUDA software stack, the company still sells to OEMs, ODMs, and individuals - and moreover dominates the market in spite of the pricing and hardware features. Given that the company has leveraged its software to provide a deep learning ecosystem, why should it fare any worse than these other product lines? (This is the question I would like answered)

The only product line that effectively failed was the one where they developed with no market to sell to and no immediate customers (Tegra as consumer phone/tablet) - that is clearly not the case here.
95Viper: The moderation staff's.
No, they rule on content and its relationship regarding forum rules of conduct. They do not (or should not) have a bearing on directing an opinion on another's on-topic posting.
Posted on Reply