Further Activity

85 Wall Posts

  1. GPU Applications Get The Container Treatment

    Containerized high performance computing is fast becoming one of the more popular ways of running HPC workloads. And perhaps nowhere is this more apparent than in the realm of GPU computing, where Nvidia and its partners have devoted considerable attention to making this method as seamless as possible.

    This is all the more appreciated in HPC set-ups, where GPUs add another layer of software complexity, which, in turn, has given Nvidia an extra bit of motivation to embrace the container model. The company’s efforts in this area have centered on the Nvidia GPU Cloud (NGC), which encompasses its own registry of GPU-based containers. The only significant drawback is that compatibility is limited to Nvidia GPUs of fairly recent vintage, specifically the Pascal-generation and Volta-generation processors. (A minimal container-launch sketch follows after this list.)
  2. NVLink Bridge Comparison: Quadro RTX vs. GV100 - What's the Difference? - Exxact

    Today we’ll compare the NVLink bridge for the new NVIDIA Quadro RTX and the previous-generation NVIDIA Quadro GV100 graphics cards. There are some major differences here, so be sure to pay attention!

    Probably the starkest difference is that the RTX cards accept only one NVLink bridge, whereas the older Volta-based Quadro GV100 accepts two. The top-of-the-line Quadro RTX 8000/6000 therefore tops out at a maximum NVLink bandwidth of 100 GB/s, while the Quadro GV100, with its two bridges, effectively reaches a maximum of 200 GB/s. On the lower-end Quadro RTX 5000, the maximum bandwidth is (only) 50 GB/s, half that of the higher-end RTX cards. (A short NVLink query sketch follows after this list.)
  3. The New Chip From NVIDIA Is All Set To Become The Brain Of Robots

    This Ikea kitchen is incubating the robots of the future

    Nvidia today opened its robotics research lab, a 13,000-square-foot facility in Seattle. The lab will house 50 roboticists, 20 of them from the Nvidia Research staff and the rest from the wider academic research community.

    Dieter Fox, a member of the University of Washington computer science faculty, is the director of the lab. The lab was intentionally opened in Seattle to be close to knowledgeable people at the University of Washington, such as Human-Centered Robotics Lab director Maya Cakmak.

    The lab officially opened last November but moved to its permanent headquarters today.
  4. NVIDIA's $1,100 AI brain for robots goes on sale

    NVIDIA's plan to power autonomous robots has kicked off in earnest. The company has released a Jetson AGX Xavier Module that gives robots and other intelligent machines the processing oomph they need for their AI 'brains.' You're not about to buy one yourself -- it costs $1,099 each in batches of 1,000 units. However, it could be important for delivery robots and other automatons that need a lot of specialized performance with relatively little power use.

    The Jetson Xavier system-on-chip at the heart of the module relies on no less than six processors to get its work done. There's a relatively conventional eight-core ARM chip, but you'll also find a Volta-based GPU, two NVDLA deep learning chips and dedicated image, video and vision components.
  5. Mercedes and NVIDIA team up to build next-gen AI vehicles

    From modular vans to autonomous cars -- and, as recently revealed at CES, gesture-based controls -- Mercedes has some big ambitions for the next generation of its vehicles. Now, it's announced that AI company NVIDIA will be the team to help it achieve them.

    Speaking to the audience at the Mercedes-Benz booth at this year's CES, Mercedes-Benz Executive Vice President Sajjad Khan and NVIDIA founder and CEO Jensen Huang unveiled their vision for the next generation of AI vehicles. "We're announcing a new partnership going forward, creating a computer that defines the future of autonomous vehicles, the future of AI and the future of mobility," said Huang.
  6. Nvidia’s T4 GPUs are now available in beta on Google Cloud – TechCrunch

    Google Cloud today announced that Nvidia’s Turing-based Tesla T4 data center GPUs are now available in beta in its data centers in Brazil, India, the Netherlands, Singapore, Tokyo and the United States. Google first announced a private test of these cards in November, but that was a very limited alpha test. All developers can now take these new T4 GPUs for a spin through Google’s Compute Engine service.

    The T4, which essentially uses the same processor architecture as Nvidia’s consumer RTX cards, slots in between the existing Nvidia V100 and P4 GPUs on the Google Cloud Platform. While the V100 is optimized for machine learning, the T4 (like its P4 predecessor) is more of a general-purpose GPU that also turns out to be great for training models and inferencing. (A short provisioning sketch follows after this list.)
  7. IBM, Nvidia in AI Data Pipeline, Processing, Storage Union

    IBM and Nvidia today announced a new turnkey AI solution that combines IBM Spectrum Scale scale-out file storage with Nvidia’s GPU-based DGX-1 AI server to provide what the companies call “the highest performance in any tested converged system” while supporting data science practices and AI data pipelines (data prep, training, inference, archive) in which data volumes continually grow.

    Called IBM SpectrumAI with Nvidia DGX, the all-flash offering is designed to be an AI data infrastructure and is configurable from a single IBM Elastic Storage Server, to a rack of nine Nvidia DGX-1 servers with 72 Nvidia V100 Tensor Core GPUs, up to multi-rack configurations. IBM said Spectrum storage scales “practically linearly” with the random-read throughput needed to feed multiple GPUs; the system has demonstrated 120 GB/s of data throughput in a rack, according to IBM.
  8. NVIDIA Wins First AI Benchmarks - Nvidia Corporation is the clear winner and No. 1 - all AMD Vega GPUs come last (lack of AI software) and are left in the dust!

    Finally, I would point out that NVIDIA is still the only game in town for accelerating neural network training using on-premises infrastructure, in any cloud service (AWS, Azure, Alibaba) other than GCP, or even in any GCP-hosted training other than for TensorFlow. Consequently, this is all a bit of a tempest in a teapot.

    Nonetheless, NVIDIA’s breadth of AI hardware and software solutions, TensorCore performance, ease-of-use with the Nvidia GPU Cloud repository, and global ecosystem in AI production and research will keep NVIDIA in the lead for some time to come.
  9. Embedded platform offers 32 TeraOPS for AI in robotics

    NVIDIA has announced the availability of its Jetson AGX Xavier embedded platform as a standalone production module. The platform is aimed specifically at advanced AI and embedded-vision applications that let robot platforms operate in the field fully autonomously and with workstation-level performance.

    32 TeraOPS promise powerful AI in the field

    The Jetson AGX Xavier module is the latest component in NVIDIA's Jetson AGX family of embedded Linux high-performance computing platforms. It promises GPU-workstation-class performance in the field along with a remarkable 32 TeraOPS (OPS = operations per second) of compute, enough to handle AI workloads even without a direct cloud connection. Fast data transfer is handled by 750 Gbit/s of high-speed I/O in a compact 100 x 87 mm form factor.
  10. SC18: "Supercomputing Conference" NVIDIA CEO Jensen Huang on the New HPC

    NVIDIA CEO Jensen Huang addressed more than 700 attendees of SC18, the annual supercomputing conference, in Dallas, where he highlighted the rapid adoption of the NVIDIA T4 cloud GPU and the company's growing presence on the TOP500 list of the world's fastest supercomputers, showcased groundbreaking demos, and unveiled news about the NGC container registry.
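
For the NGC container entry above (no. 1), here is a minimal sketch, in Python, of launching an NGC image with GPU access through the Docker CLI. It assumes Docker 19.03+ with the NVIDIA container toolkit installed; the image tag is purely illustrative, and older setups would use nvidia-docker / --runtime=nvidia instead of the --gpus flag.

    # Minimal sketch: run an NGC container with GPU access via the Docker CLI.
    # Assumes Docker 19.03+ and the NVIDIA container toolkit; the image tag
    # below is an example only - pick a current one from the NGC registry.
    import subprocess

    image = "nvcr.io/nvidia/tensorflow:19.01-py3"  # example NGC image tag

    subprocess.run(
        [
            "docker", "run", "--rm",
            "--gpus", "all",   # expose all NVIDIA GPUs to the container
            image,
            "nvidia-smi",      # quick sanity check that the GPUs are visible
        ],
        check=True,
    )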
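
For the NVLink comparison (no. 2), a small query sketch, assuming the pynvml bindings (pip install nvidia-ml-py3): it only counts the NVLink links the driver reports as active on each GPU; how many bridges a given card accepts and the bandwidth per link still depend on the model, as the entry describes.

    # Hedged sketch: count active NVLink links per GPU via NVML.
    import pynvml

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            active = 0
            for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
                try:
                    state = pynvml.nvmlDeviceGetNvLinkState(handle, link)
                except pynvml.NVMLError:
                    break  # this link index is not supported on this GPU
                if state == pynvml.NVML_FEATURE_ENABLED:
                    active += 1
            print("GPU %d: %d active NVLink link(s)" % (i, active))
    finally:
        pynvml.nvmlShutdown()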
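
For the T4-on-Google-Cloud entry (no. 6), a hedged sketch of requesting a Compute Engine VM with a single T4 attached via the gcloud CLI. The zone, machine type and image family are placeholders, not recommendations, and T4 availability varies by region.

    # Hedged sketch: create a Compute Engine VM with one Tesla T4 attached.
    # Zone, machine type and image family are placeholders; check which zones
    # currently offer T4s before running this.
    import subprocess

    subprocess.run(
        [
            "gcloud", "compute", "instances", "create", "t4-test",
            "--zone", "us-central1-b",                      # example zone
            "--machine-type", "n1-standard-8",              # example machine type
            "--accelerator", "type=nvidia-tesla-t4,count=1",
            "--maintenance-policy", "TERMINATE",            # required for GPU VMs
            "--image-family", "common-cu110",               # example DL VM image family
            "--image-project", "deeplearning-platform-release",
        ],
        check=True,
    )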

About Pro_PainKiller

General

Reads PCGH: PCGH.de & magazine (subscription)
Biography: HPC consultant
Location: LinkX Kybernetik Interconnect
Interests: Machine learning (deep structured & hierarchical learning)
Occupation: HPC IT manager

My PC

Processor: Intel Xeon E5-4650L FibreCluster
Mainboard: HP ProLiant BL660c Gen8
Memory: 512 GB + 64 GB
Drive(s): SAS SSD 1200.2, 6x 3840 GB Fibre RAID @ VMware vSphere Enterprise & Oracle VM VirtualBox
Graphics card: Titan Xp @ SLI
Sound: onboard audio disabled
Power supply: enterprise server redundant hot-plug PSU, 2200 W
Case: TS 8 server racks, HPC high-performance cooling systems
Operating system: Linux (all distributions)

Signature


NVIDIA DGX Station | Intel Xeon Platinum 8180 @ 4.6 GHz (28-Core) | 512GB LRDIMM DDR4 | 4X Tesla V100 | 128GB HBM2 | NVLink Quad-GPU
Speedtest Result - What's your speed? http://www.speedtest.net/result/5867316291.png / http://www.hitmee.com/wp-content/upl...ebt-Crisis.jpg

Statistics

Posts
Posts: 765
Posts per day: 0.54
Last post: AMD Navi: Four variants of a Navi GPU mentioned in macOS update (21.01.2019, 17:47)

Wall posts
Wall posts: 85
Last wall post: 21.01.2019, 18:43

Miscellaneous information
Last activity: yesterday, 21:49
Member since: 20.03.2015
Recommendations: 0

1 friend

slot108 (Software-Overclocker)

iTrader profile - recent ratings

Marketplace positive feedback: 0 (0%)
Members who rated positively: 0
Members who rated negatively: 0
Total positive feedback: 0
Feedback over the last month, last 6 months and last 12 months: 0 in every category