
Re: NVIDIA and plans for PhysX

Posted: Thu May 6, 2010, 10:47
by nou
It's true that Fermi is roughly twice as fast in various OpenCL applications and benchmarks.

I've put together a small comparison of my Radeon 5850 against a GeForce 480 whose results I found online: http://pctforum.tyden.cz/viewtopic.php?f=97&t=156451

Overall, the problem with OpenCL (and GPGPU in general) on ATI is that ATI has far less control logic than NVIDIA. Modern CPUs have tiny ALUs, i.e. the actual circuits that do the computation (addition, multiplication, etc.), and most of the die area (apart from the L caches, which are another difference) is taken up by control logic: instruction decoding, branch and instruction-flow prediction, and other machinery.

With Fermi, NVIDIA moved remarkably close to a CPU, so it is easier (for the programmer) to extract GPGPU performance from it than from ATI. On the other hand, I don't think ATI/AMD will rush in this direction anyway, since they also have classic CPUs at their disposal.

Re: NVIDIA and plans for PhysX

Posted: Fri May 7, 2010, 08:09
by Fofrer
http://extrahardware.cnews.cz/nvidia-s- ... na-geforce

Nothing, just as I expected... plenty of churning water and no miracle in sight :twisted:

Re: NVIDIA and plans for PhysX

Posted: Wed Aug 7, 2013, 22:30
by Krteq
Let me revive this thread a bit. Roy Taylor (former VP and CTO at nV) predicts the end of PhysX and CUDA.
"I think CUDA is doomed. Our industry doesn’t like proprietary standards. PhysX is an utter failure because it’s proprietary. Nobody wants it. You don’t want it, I don’t want it, gamers don’t want it. Analysts don’t want it. In the early days of our industry, you could get away with it and it worked. We’ve all had enough of it. They’re unhealthy.

Nvidia should be congratulated for its invention. As a trend, GPGPU is absolutely fantastic and fabulous. But that was then, this is now. Now, collectively our industry doesn’t want a proprietary standard. That’s why people are migrating to OpenCL."
VR-Zone: Roy Taylor on APUs, gaming, and the end of CUDA

Re: NVIDIA and plans for PhysX

Posted: Thu Aug 8, 2013, 09:19
by del42sa
The end of PhysX has been prophesied by many for quite some time, but I think that because NVIDIA offers the CPU version for free, it is the only option for smaller studios and B titles: Havok is quite expensive, and Bullet is not yet as widespread, nor does it match PhysX in ease of integration and developer tooling. If nothing else, PhysX won't die out for that reason, imho.

Re: NVIDIA and plans for PhysX

Posted: Thu Aug 8, 2013, 09:43
by Krteq
Well, Roy Taylor is your classic "PR wizard". When he was at nV, similar statements were aimed at ATi and the like.

But he is quite right that plenty of people are moving from CUDA to OpenCL, and I think the PhysX remark mostly concerned GPU PhysX, which runs precisely on CUDA.

Re: NVIDIA and plans for PhysX

Posted: Sun Aug 11, 2013, 17:19
by nou
But he has a point. CUDA is justifiable for scientific or other specialized computing where the hardware is simply bought specifically for it, but nobody is going to make a game that most players can't play. AMD had tessellation, and until it made it into DX we only ever saw it in tech demos.

Re: nVidia PhysX

Posted: Sat May 31, 2014, 22:51
by Hladis
I've updated the first page with links. All PhysX discussion goes here.

Crytek and NVIDIA have agreed to add GameWorks with PhysX to Warface.
http://www.geforce.com/whats-new/articl ... -gameworks
Krteq wrote: Roy Taylor predicts the end of PhysX and CUDA
This one is rather amusing :)

Re: NVIDIA and plans for PhysX

Posted: Sat May 31, 2014, 22:56
by oneb1t
nou wrote: But he has a point. CUDA is justifiable for scientific or other specialized computing where the hardware is simply bought specifically for it, but nobody is going to make a game that most players can't play. AMD had tessellation, and until it made it into DX we only ever saw it in tech demos.
It didn't even have to make it into DX; DX arguably has no special tessellation support as such. The point is rather that the graphics pipeline became more programmable (thanks to DX11, among other things) and GPUs got faster.

That's why individual engines can now implement tessellation shaders (previously this was done in geometry shaders) and obtain tessellation that way (if you want to try it yourself at home, here is a tutorial on how to program such a thing):

http://antongerdelan.net/opengl/tessellation.html
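For intuition only (a plain illustrative sketch, unrelated to the code in the linked tutorial): what a tessellation stage conceptually does is refine coarse patches into finer geometry on the fly. One level of midpoint subdivision of a triangle captures the idea:

```python
# One level of midpoint subdivision: split one triangle into four smaller
# ones. Conceptually this is what a GPU tessellation stage does, except
# there the refinement happens per frame in hardware, after the
# tessellation-control (hull) shader picks the subdivision level.

def midpoint(a, b):
    """Midpoint of two points given as coordinate tuples."""
    return tuple((ai + bi) / 2 for ai, bi in zip(a, b))

def subdivide(tri):
    """Split triangle (a, b, c) into 4 triangles via edge midpoints."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

coarse = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
refined = subdivide(coarse)
print(len(refined))  # -> 4; applying it again to each piece gives 16
```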

Re: nVidia PhysX

Posted: Mon Jun 2, 2014, 11:29
by Krteq
Hladis wrote:
Krteq wrote: Roy Taylor predicts the end of PhysX and CUDA
This one is rather amusing :)
Not really; the market is proving him right. CUDA is in decline.

Re: nVidia PhysX

Posted: Mon Jun 2, 2014, 11:57
by michal3d
Krteq wrote: Not really; the market is proving him right. CUDA is in decline.
Really? Are there any statistics you can back that up with, or is it just a feeling of yours? :)

Re: nVidia PhysX

Posted: Mon Jun 2, 2014, 12:23
by Hladis
Can you define that CUDA decline for me in concrete terms, beyond marketing-speak? (Not to mention the prophecies of its demise.) Concrete cases and results. Frankly, in rendering it seems to me that CUDA has strengthened rather than lost ground.
I'll try to summarize my experiences obtained in the course of developing ViennaCL, where we have CUDA and OpenCL backends with mostly 1:1 translations of a lot of compute kernels. From your question I'll also assume that we are mostly talking about GPUs here.

Performance Portability. First of all, there is no such thing as performance-portable kernels in the sense that you write a kernel once and it will run efficiently on every hardware. Not in OpenCL, where it is more apparent due to the broader range of hardware supported, but also not in CUDA. In CUDA it is less apparent because of the smaller range of hardware supported, but even here we have to distinguish at least three hardware architectures (pre-Fermi, Fermi, Kepler) already. These performance fluctuations can easily result in a 20 percent performance variation depending on how you orchestrate threads and which work group sizes you choose, even if the kernel is as simple as a buffer copy. It's probably also worth mentioning that on pre-Fermi and Fermi GPUs it was possible to write fast matrix-matrix multiplication kernels directly in CUDA, while for the latest Kepler GPUs it seems that one has to go down to the PTX pseudo-assembly language in order to get close to CUBLAS' performance. Thus, even a vendor-controlled language such as CUDA appears to have issues to keep the pace with hardware developments. Moreover, all CUDA code gets compiled statically when you run nvcc, which somewhat requires a balancing act via the -arch flag, while OpenCL kernels get compiled at run-time from the just-in-time compiler, so you can in principle tailor kernels down to the very specifics of a particular compute device. The latter is, however, quite involved and usually only becomes a very attractive option as your code matures and as your experience accumulates. The price to pay is the O(1) time required for just-in-time compilation, which can be an issue in certain situations. OpenCL 2.0 has some great improvements to address this.
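The run-time compilation model described above is what enables device-specific tailoring: an OpenCL kernel is just a string until the JIT compiler sees it, so tuning parameters can be baked into the source per device. A minimal sketch in Python (the copy kernel and the unroll parameter are illustrative, not taken from ViennaCL):

```python
# Generate OpenCL C source at run time with a tuning parameter baked in.
# Because OpenCL compiles kernel strings just-in-time, each device can be
# handed a differently tuned source -- unlike CUDA, where nvcc compiles
# statically ahead of time (balanced via the -arch flag).

def make_copy_kernel(unroll: int) -> str:
    """Return an OpenCL buffer-copy kernel unrolled `unroll` times."""
    body = "\n".join(
        f"    dst[gid * {unroll} + {i}] = src[gid * {unroll} + {i}];"
        for i in range(unroll)
    )
    return (
        "__kernel void copy(__global const float *src,\n"
        "                   __global float *dst) {\n"
        "    size_t gid = get_global_id(0);\n"
        f"{body}\n"
        "}\n"
    )

# The host would pass this string to clCreateProgramWithSource /
# clBuildProgram; a different unroll factor per device is now trivial.
print(make_copy_kernel(4))
```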

Debugging and Profiling. The CUDA debugging and profiling tools are the best available for GPGPU. AMD's tools are not bad either, but they do not include gems like cuda-gdb or cuda-memcheck. Also, still today NVIDIA provides the most robust drivers and SDKs for GPGPU, system freezes due to buggy kernels are really the exception, not the rule, both with OpenCL and CUDA. For reasons I probably do not need to explain here, NVIDIA no longer offers debugging and profiling for OpenCL with CUDA 5.0 and above.

Accessibility and Convenience. It is a lot easier to get the first CUDA codes up and running, particularly since CUDA code integrates rather nicely with host code. (I'll discuss the price to pay later.) There are plenty of tutorials out there on the web as well as optimization guides and some libraries. With OpenCL you have to go through quite a bit of initialization code and write your kernels in strings, so you only find compilation errors during execution when feeding the sources to the jit-compiler. Thus, it takes longer to go through one code/compile/debug cycle with OpenCL, so your productivity is usually lower during this initial development stage.

Software Library Aspects. While the previous items were in favor of CUDA, the integration into other software is a big plus for OpenCL. You can use OpenCL by just linking with the shared OpenCL library and that's it, while with CUDA you are required to have the whole CUDA toolchain available. Even worse, you need to use the correct host compilers for nvcc to work. If you ever tried to use e.g. CUDA 4.2 with GCC 4.6 or newer, you'll have a hard time getting things to work. Generally, if you happen to have any compiler in use which is newer than the CUDA SDK, troubles are likely to occur. Integration into build systems like CMake is another source of headache (you can also find ample evidence on e.g. the PETSc mailing lists). This may not be an issue on your own machine where you have full control, but as soon as you distribute your code you will run into situations where users are somewhat restricted in their software stack. In other words, with CUDA you are no longer free to choose your favourite host compiler, but NVIDIA dictates which compilers you are allowed to use.

Other Aspects. CUDA is a little closer to hardware (e.g. warps), but my experience with linear algebra is that you rarely get a significant benefit from it. There are a few more software libraries out there for CUDA, but more and more libraries use multiple compute backends. ViennaCL, VexCL, and Paralution all support OpenCL and CUDA backends in the meantime, and a similar trend can be seen with libraries in other areas.

GPGPU is not a Silver Bullet. GPGPU has been shown to provide good performance for structured operations and compute-limited tasks. However, for algorithms with a non-negligible share of sequential processing, GPGPU cannot magically overcome Amdahl's Law. In such situations you are better off using a good CPU implementation of the best algorithm available rather than trying to throw a parallel, but less suitable algorithm at your problem. Also, PCI-Express is a serious bottleneck, so you need to check in advance whether the savings from GPUs can compensate the overhead of moving data back and forth.
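The Amdahl's Law and PCI-Express points are easy to put numbers on. A back-of-the-envelope sketch (all timings and the assumed ~12 GB/s effective PCIe bandwidth are made-up illustrative figures):

```python
# Amdahl's Law: with serial fraction s, speeding up the parallel part by
# a factor p bounds the overall speedup by 1 / (s + (1 - s) / p).

def amdahl_speedup(serial_fraction: float, parallel_speedup: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / parallel_speedup)

# Even an infinitely fast GPU cannot beat the 1/s ceiling:
print(amdahl_speedup(0.10, 10.0))  # ~5.26x overall, not 10x
print(amdahl_speedup(0.10, 1e9))   # approaches the 10x ceiling

# PCI-Express check: the GPU run must amortize moving data both ways.
def gpu_worth_it(cpu_s: float, gpu_s: float, bytes_moved: float,
                 pcie_bw: float = 12e9) -> bool:
    """True if GPU time plus bus transfer still beats the CPU time."""
    return gpu_s + bytes_moved / pcie_bw < cpu_s

# Moving 100 MB for a 5 ms kernel vs a 10 ms CPU run: the bus eats the win.
print(gpu_worth_it(0.010, 0.005, 100e6))  # False
```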

My Recommendation. Please consider CUDA and OpenCL rather than CUDA or OpenCL. There is no need to unnecessarily restrict yourself to one platform, but instead take the best out of both worlds. What works well for me is to set up an initial implementation in CUDA, debug it, profile it, and then port it over to OpenCL by simple string substitutions. (You may even parametrize your OpenCL kernel string generation routines such that you have some flexibility in tuning to the target hardware.) This porting effort will usually consume less than 10 percent of your time, but gives you the ability to run on other hardware as well. You may be surprised about how well non-NVIDIA hardware can perform in certain situations. Most of all, consider the reuse of functionality in libraries to the largest extent possible. While a quick-and-dirty reimplementation of some functionality often works acceptably for single-threaded execution on a CPU, it will often give you poor performance on massively parallel hardware. Ideally you can even offload everything to libraries and never have to care about whether they use CUDA, OpenCL, or both internally. Personally I would never dare to write vendor-locked code for something I want to rely on several years from now, but this ideological aspect should go into a separate discussion.
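The "simple string substitutions" mentioned in the recommendation can literally be a small lookup table applied to the kernel source. A toy sketch (the table covers only the handful of tokens used in this example; a real port needs more entries and more care):

```python
# Port a CUDA kernel to OpenCL C by plain token substitution -- the
# porting approach the quoted answer recommends. Deliberately minimal:
# the table only handles the tokens appearing in the sample kernel below.

CUDA_TO_OPENCL = {
    "__global__ void": "__kernel void",
    "float *": "__global float *",
    "blockIdx.x * blockDim.x + threadIdx.x": "get_global_id(0)",
}

def port_kernel(cuda_src: str) -> str:
    """Apply the CUDA-to-OpenCL token substitutions in table order."""
    src = cuda_src
    for cuda_token, opencl_token in CUDA_TO_OPENCL.items():
        src = src.replace(cuda_token, opencl_token)
    return src

cuda_axpy = """__global__ void axpy(float *x, float *y, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}"""

print(port_kernel(cuda_axpy))
```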

Re: nVidia PhysX

Posted: Mon Jun 2, 2014, 13:23
by Krteq
Hladisi, pasting an entire post reflecting the view of a single person, and from the middle of last year at that (a single quotation, and above all a link, would do :?). I might as well respond with this :roll:

2014 will be the OpenCL Year

OpenCL overtook CUDA, maybe as early as the year before last, outside the HPC segment, where CUDA's share is holding. Everywhere else where GPGPU is used, OpenCL is taking the lead.

Re: nVidia PhysX

Posted: Mon Jun 2, 2014, 13:32
by Hladis
That quotation was just an aside. I still don't see where OpenCL is overtaking CUDA and in what. What I see is CUDA's position strengthening over the last half a year, specifically in rendering.

Re: nVidia PhysX

Posted: Fri Sep 19, 2014, 17:22
by Hladis
A presentation of the new PhysX effects Turf Effects and PhysX Flex for grass, fluids, and collisions with objects: https://www.youtube.com/watch?v=y1Lcbo4l_UY

Re: nVidia PhysX

Posted: Wed Nov 12, 2014, 09:19
by ArgCZ
So I put in about 30 hours of Borderlands 2 with PhysX and then turned it off. It simply bothered me; it bothered me from the start, and I thought I would get used to it and somehow come to like it, but that never happened. The negatives outweigh the positives: that heap of clutter, which strikes me as completely unnatural and overdone, mostly just made the battlefield harder to read.

With the best will in the world, I'm just not blown away by it; I expected something a bit different.

Re: nVidia PhysX

Posted: Wed Mar 4, 2015, 23:01
by Hladis

Re: nVidia PhysX

Posted: Thu Mar 5, 2015, 08:06
by plastelinak
ArgCZ wrote: So I put in about 30 hours of Borderlands 2 with PhysX and then turned it off
I'm not surprised. I had two 660s in SLI and the FPS would drop as low as 30; I even tried playing on one card with dedicated PhysX on the other, but the FPS drops were insane.

It's just a cherry on top that adds a few nice effects, but for me it was one of the reasons to buy an NVIDIA card.

I can't even think of another game where PhysX is as visible as in Borderlands.

EDIT: And that was with a modified BIOS on those 660s, a 170 power limit, and a 1260 MHz core. In SLI they beat even a 780 Ti in some games.

Re: nVidia PhysX

Posted: Sat Aug 1, 2015, 14:22
by showtek
I'm not surprised you had drops. I ran 2x 970 SLI plus a 750 Ti dedicated to PhysX, and at the waving flag in AC Black Flag the 750 Ti sat at around 50% GPU utilization; now consider that an OC 750 Ti is roughly on par with one of your 660s. I'd add that with SLI an extra dedicated card is recommended if you want PhysX, because otherwise it causes drops, and I felt those without the 750 Ti even on the SLI 970s, which are a different league than a 660. On the other hand :) buying a €160 card just for a few effects (walking through smoke or a waving flag) is also questionable, especially given that PhysX is only in a handful of games. And I'd add that even that flag waved a bit oddly :D

Re: nVidia PhysX

Posted: Thu Feb 2, 2017, 07:30
by ArgCZ
https://www.youtube.com/watch?v=H9nZWEekm9c

These tests are like a random number generator :D

Anyway, I have now played Mirror's Edge for the first time, and there, unlike in Borderlands 2 (see above), PhysX struck me as great. But I no longer have an NVIDIA card, so whenever more effects were requested it immediately dropped to 10 FPS (PhysX computation probably falling back to the CPU?).

Re: nVidia PhysX

Posted: Tue Sep 25, 2018, 17:15
by mareknr
The new drivers with support for the RTX GPUs (411.63) also include a new version of the PhysX driver, 9.18.0907. I haven't found any changes compared to the previous version. nvidia.com so far only has information on the previous version, 9.17.0524.

I'd also like to add a few articles with comments on PhysX from Pierre Terdiman, who worked on it back in the days when the API wasn't yet called PhysX (i.e. before AGEIA bought it) and has kept working on it to this day. The perpetual critics in particular should read them. The article behind the third link also hints at the current state of PhysX and how its development is progressing.

An article on the myth that PhysX was "crippled": http://www.codercorner.com/blog/?p=1129
PhysX performance improvements across versions (shown with benchmarks): http://www.codercorner.com/blog/?p=914
Raytracing, RTX 2080, DXR, PhysX: http://www.codercorner.com/blog/?p=2013

It also looks like Metro Exodus will have GPU PhysX and HairWorks in addition to RTX support:

https://wccftech.com/metro-exodus-nvidi ... rks-physx/