Re: NVIDIA Turing - Info and speculation
Posted: Thu 30 Aug 2018, 00:37
Well, he said pretty much what I expected him to say.
I don't know whether I missed something important, but he didn't reveal any mind-blowing facts.
Discussion about hardware, software and overclocking
https://forum.pctuning.cz/
Hladis wrote:
> I'm only half-listening, I have it on another monitor and I'm doing other, "more important" things. Anyway, he said that NVLink on Turing is just a faster SLI, otherwise it's probably the same.

Well, I'm not sure that's what he said. It was a bit unclear, but he definitely said it's a "memory" link and not a "display?" link as in the case of SLI.
He promises that customers will actually see those recommended prices... hmm.
Eddward wrote:
> 2.13 GHz random card OC at Editors' day, the answer to my question. From what he says it should be better than with Pascal, we'll be "surprised".

Huang showed 2114 MHz when demonstrating the GTX 1080...
HEAD wrote:
> Eddward wrote:
> > 2.13 GHz random card OC at Editors' day, the answer to my question. From what he says it should be better than with Pascal, we'll be "surprised".
> Huang showed 2114 MHz when demonstrating the GTX 1080...

I know that perfectly well, and? I never wrote that it's good or bad... He dropped that number, said we'll be surprised, and that there will be new tools, which we already know; there's no more information than that...
Timmah! wrote:
> Well, I'm not sure that's what he said.

He talked in a lot of general phrases. That's what I took away from it, since it most likely doesn't solve the drawbacks of SLI, which he more or less admitted. Of course, for it to handle 4K, high refresh rates and HDR it needs higher transfer speeds than the old SLI, so my conclusion is that it's still the same principle, just with more bandwidth. It's not a revolutionary solution where two cards present themselves as one, although I did notice he didn't rule out some such "surprise" in the future.
Leon_ wrote:
> That's another level of second-guessing, when the manufacturer shows you 2.12 with your own eyes and people don't believe it will be like that!

Read what I wrote once more... I believe the 2.12 number, why not, but that's not the thing I don't believe.
Hladis wrote:
> Timmah! wrote:
> > Well, I'm not sure that's what he said.
> He talked in a lot of general phrases. (...) It's not a revolutionary solution where two cards present themselves as one, although I did notice he didn't rule out some such "surprise" in the future.

Well, I think he actually confirmed that this will be possible. Except, as he said, it's not that simple, because latency comes into play, so it won't be very usable for games: if one card had to pull data from the other card's frame buffer over NVLink, it would paradoxically hurt performance (presumably compared to the situation where each card has its own separate buffer and pulls the data that didn't fit into it from system RAM or wherever).
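A back-of-envelope calculation illustrates the latency/bandwidth point. The bandwidth figures below are purely illustrative assumptions, not vendor specs; the point is only the relative cost of reading a frame buffer over a GPU-to-GPU link versus local VRAM:

```python
# Purely illustrative numbers (assumptions, not measured specs):
# how long does it take to move one 4K RGBA frame buffer?

FRAME_BYTES = 3840 * 2160 * 4      # one 4K frame at 4 bytes/pixel, ~33 MB
VRAM_GBPS = 448.0                  # assumed local VRAM bandwidth
LINK_GBPS = 50.0                   # assumed GPU-to-GPU link bandwidth

def transfer_ms(gbps):
    """Milliseconds to move the frame at the given bandwidth (GB/s)."""
    return FRAME_BYTES / (gbps * 1e9) * 1e3

print(f"local VRAM read: {transfer_ms(VRAM_GBPS):.2f} ms")
print(f"over the link:   {transfer_ms(LINK_GBPS):.2f} ms")
```

At 144 Hz the entire frame budget is only about 6.9 ms, so routinely pulling working-set data through the link would eat a visible slice of every frame.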
HH: Can people turn on ray tracing mode on non-RTX GPUs?
TP: It's up to game developers, but most devs coding ray tracing will use Microsoft DXR, and that's not an NVIDIA-proprietary way of coding ray tracing. It's an open way of describing the behavior you want to accomplish for ray tracing. On NVIDIA GPUs, especially Turing GPUs, that's going to run really well with RTX. The same operations can be emulated on older hardware, but it will not run very fast.
HH: On RT Cores and Tensor Cores in Turing. Are they going to be used for RTX or are they going to be exposed in CUDA?
TP: Tensor Cores are exposed through CUDA. RT Cores are enabled through OptiX. OptiX is a low-level API for ray tracing, but developers will most likely use a higher-level API via RTX or Microsoft DXR.
HH: On DLSS. What is it actually doing, and how is it delivered to gamers?
TP: NVIDIA is using deep learning to train a neural network to recognize the sampled raster image. There are two ways to deliver DLSS: 1) developers can develop their own AI algorithm; 2) the more common way: NVIDIA will use its SATURN V supercomputer, get an early build of the game, run trillions of patterns, and generate training data for that game, delivered through the game developers or eventually updated in real time.
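The idea of learning an upscaler from paired data can be sketched in miniature. This is a hypothetical toy analogue, not NVIDIA's actual method: instead of a deep network, it fits a simple linear filter by least squares that maps a low-res neighborhood to the high-res samples it replaced (1-D signals for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pair(n=256):
    """A smooth high-res signal and its 2x downscale (average of pairs)."""
    hi = np.cumsum(rng.standard_normal(n))
    lo = hi.reshape(-1, 2).mean(axis=1)
    return hi, lo

# Training set: each 3-tap low-res neighborhood -> the 2 hi-res samples it hides
X, Y = [], []
for _ in range(50):
    hi, lo = make_pair()
    for i in range(1, len(lo) - 1):
        X.append(lo[i - 1:i + 2])
        Y.append(hi[2 * i:2 * i + 2])
X, Y = np.asarray(X), np.asarray(Y)

# "Training": least-squares fit of the upscaling filter
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Upscale an unseen signal and compare against naive sample duplication
hi, lo = make_pair()
pred = np.vstack([lo[i - 1:i + 2] @ W for i in range(1, len(lo) - 1)]).ravel()
naive = np.repeat(lo[1:-1], 2)
truth = hi[2:-2]
err_learned = np.abs(pred - truth).mean()
err_naive = np.abs(naive - truth).mean()
print(err_learned, err_naive)
```

The training pairs are generated automatically by downscaling known high-res output, which is the same trick described above: the "answer key" comes for free from the original frames.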
HH: Can a game developer send their code to NVIDIA for free (for DLSS training)?
TP: Yes. It's not a pay to play thing. NVIDIA is not being paid for it because it's in the interest of the ecosystem and gamers to drive excitement and help the virtuous cycle.
HH: Should we really expect the 2080 to be 50% faster than the 1080 in current games, at least in a general sense? (referring to the chart released by NVIDIA)
TP: It's accurate provided you are GPU-limited (e.g. 4K, 1440p at high refresh rates, or with the eye candy on). If you're already running at 250 FPS and CPU-limited, you won't see the difference. I'd like to be as accurate as possible with this: the 2080 will be much faster than the 1080 in most cases: around 50% in traditional games while GPU-limited (e.g. 4K, 1440p high refresh, or maximum settings).
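The GPU-limited caveat is simple arithmetic: frame time is set by whichever of the CPU or GPU takes longer per frame. A tiny model with made-up, illustrative frame costs:

```python
def fps(cpu_ms, gpu_ms):
    """Frames per second when the slower of CPU and GPU paces each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

CPU_MS = 4.0                     # hypothetical CPU cost: caps FPS at 250

# GPU-limited at 4K: a ~50% faster GPU shows up almost fully in FPS
print(fps(CPU_MS, 20.0))         # old GPU: 50 FPS
print(fps(CPU_MS, 13.3))         # faster GPU: ~75 FPS

# CPU-limited at low resolution: the faster GPU changes nothing
print(fps(CPU_MS, 3.0))          # 250 FPS
print(fps(CPU_MS, 2.0))          # still 250 FPS
```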
HH: Is there a case where 2080 might outperform 1080 Ti?
TP: I think so. I would expect some cases where 2080 would beat 1080 Ti but I don't have the data in front of me.
HH: How overclockable is Turing?
TP: It's really good. The 2080 FE is built for overclocking on both the power and cooling side. The chip is faster than anything I've seen. When you look at raw clocks of Turing vs raw clocks of Pascal: I showed 2.1 GHz with random cards during Editors' day. NVIDIA is also launching NVIDIA Scanner and GPU Boost 4.0. People will hear more about it after the NDA expires.
HH: What do you think your customer is going to be most impressed with when they put in the new Turing card?
TP: The most impressive thing for me is the combination of DLSS and ray tracing in new games. On the other hand, I'm going to try the existing games, and it will satisfy gamers who are pushing the limits today with high-res monitors, high refresh rates, and all the eye candy turned on.
> Today, NVLink is being used to improve SLI. This will improve configurations such as 144 Hz 4K Surround. NVIDIA is deploying NVLink as a foundation for future mGPU setups.
> NVLink will improve microstuttering, but I don't think it will go away. It's more about the current technique of doing mGPU scaling, which is called AFR.
> SLI as a brand (motherboards, games, etc.) all works with NVLink. Turing changes the connector to NVLink, but everything else remains unchanged.

So for now it's simply a faster SLI and still AFR, i.e. no big deal. Maybe in the future.
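Why AFR microstutters even at a high average FPS can be shown with a toy model (hypothetical timings, not measurements): two GPUs alternate frames, and presentation intervals stay even only while the two pipelines are perfectly staggered:

```python
def afr_present_times(render_ms, n_frames, phase_ms):
    """AFR toy model: GPU0 starts its frames at t=0, GPU1 at t=phase_ms;
    each GPU needs render_ms per frame and the GPUs alternate frames."""
    times = []
    for i in range(n_frames):
        start = (i // 2) * render_ms + (phase_ms if i % 2 else 0.0)
        times.append(start + render_ms)
    return times

def gaps(times):
    """Intervals between consecutive presented frames, in ms."""
    return [round(b - a, 1) for a, b in zip(times, times[1:])]

# Same average FPS in both cases, very different pacing:
print(gaps(afr_present_times(20.0, 8, 10.0)))  # staggered: even 10 ms gaps
print(gaps(afr_present_times(20.0, 8, 4.0)))   # drifted: alternating 4/16 ms gaps
```

A faster link can reduce the drift (and thus the short/long alternation) but cannot remove it entirely, which matches the "improve but not go away" comment above.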
HEAD wrote:
> Eddward wrote:
> > 2.13 GHz random card OC at Editors' day, the answer to my question. From what he says it should be better than with Pascal, we'll be "surprised".
> Huang showed 2114 MHz when demonstrating the GTX 1080...

I think it was 2144 MHz.
http://www.expertreviews.co.uk/samsung/ ... date-specs
> Hence the focus on upscaling and, unsurprisingly in 2018, AI is involved at almost every step along the way. By letting loose a machine-learning algorithm on a set of before and after videos, where 8K has been downscaled to lower resolutions, Samsung's upscaling algorithm is able to learn how to best tackle upscaling, producing the cleanest possible picture, in all sorts of different scenarios.
madPav3L wrote:
> Here's a transcript of the important points. Below is a rundown of just the more interesting bits, so it doesn't take up several pages...

It was basically already posted two pages back.
Bandy666666 wrote:
> I'm quite interested in what performance the GTX 2060 will have. If it's true that it will be around a 1080, then at a price of about €400 I'd take it. Does anyone know roughly how many CUDA cores it will have? I know it will have 5 GB of GDDR6.

I sold a 1080 with a water block FOR €400... at that price, with that performance, it will be a dud.
del42sa wrote:
> 1.) Most games will use Microsoft DXR, which AMD can also implement through Radeon ProRender

Rather, all games with RT. But by default it will be slower than a hardware RT solution, and Petersen mentioned that as well.
del42sa wrote:
> 2.) Hardware acceleration of ray tracing on NV cards will run through the OptiX low-level API

If I remember correctly, it's "child's play" to activate and nothing stops a developer from running it through HW acceleration. At least that's how it was in the link you posted here.
del42sa wrote:
> 3.) What we speculated about regarding DLSS has been confirmed; the optimization will be in the game (either the developer supplies it themselves, or they use pre-trained AI samples from Nvidia)

Yep, this was confirmed, and it's logical. NV runs it through its supercomputer for the developer for free, and that gets implemented into the game.