It feels like just yesterday that Chips and Cheese started publishing (I checked, and they started up in 2020 -- so not that long ago after all!), and now they've really become a mainstay in my silicon newsletter stack, up there with Semianalysis/Semiengineering/etc.
> Intel uses a software-managed scoreboard to handle dependencies for long latency instructions.
Interesting! I've seen this in compute accelerators before, but both AMD and Nvidia manage their long-latency dependency tracking in hardware, so it's interesting to see a major GPU vendor taking this approach. Looking more into it, it looks like the interface their `send`/`sendc` instruction exposes is basically the same interface that the PE would use to talk to the NOC: rather than having some high-level load instruction, say, that hardware then translates to "send a read-request to the dcache, and when it comes back increment this scoreboard slot", the ISA lets (and makes) the compiler state all of that directly. Good for fine control of the hardware, bad if the compiler isn't able to make inferences that the hardware would (e.g. based on runtime data), but then good again if you really want to minimize area and so wouldn't have that fancy logic in the pipeline anyways.
> both AMD and Nvidia manage their long-latency dependency tracking in hardware
This is incorrect for AMD, whose publicly documented ISA has "s_waitcnt" instructions for exactly this. I believe it is also incorrect for Nvidia, but I don't have the receipts to prove it.
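To make "software-managed" concrete, here is a toy model of the general idea (purely illustrative; the instruction names and slot scheme are made up, not Intel's or AMD's actual encodings): the compiler tags each long-latency operation with a scoreboard slot, schedules independent work after it, and inserts an explicit wait on that slot before the first consumer, so the hardware only has to check a flag rather than discover the dependency itself. AMD's s_waitcnt expresses the same idea as counters of outstanding memory operations rather than per-slot tokens.

```python
# Toy model of compiler-visible dependency tracking -- illustrative only,
# not Intel's send/sendc or AMD's s_waitcnt semantics.

class Scoreboard:
    def __init__(self, slots=16):
        self.pending = [False] * slots

    def issue_long_latency(self, slot):
        # e.g. a 'send' of a read-request to the dcache/NOC: mark the slot busy
        # and let the pipeline keep going.
        self.pending[slot] = True

    def complete(self, slot):
        # Called when the reply comes back from the memory fabric.
        self.pending[slot] = False

    def wait(self, slot):
        # The compiler-inserted synchronization point before the first consumer.
        while self.pending[slot]:
            pass  # in hardware this is a pipeline stall, not a spin loop


# What a compiler-scheduled sequence looks like in this model:
sb = Scoreboard()
sb.issue_long_latency(slot=3)   # "send": load r10 <- [addr], tagged with slot 3
# ...the compiler schedules independent ALU work here to hide the latency...
sb.complete(slot=3)             # (simulated) the read-return arrives
sb.wait(slot=3)                 # explicit wait on slot 3 before r10 is first used
print("dependency satisfied, consumer may issue")
```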
I actually have a B580 that I replaced my old A580 with.
I didn't manage to get it for MSRP (because living in Europe does tend to increase the price quite a bit, a regular RTX 3060 is over 300 EUR here), but I have to say that it's a pretty nice card, when most others seem quite overpriced or outside of my budget.
When paired with a 5800X the performance is good: the XeSS upscaling looks prettier than FSR and pretty close to DLSS, the framegen also seems to have higher quality than FSR (but more latency, from what I've seen), the hardware AV1 encoder is lovely, and the other QSV ones are great. I do wish that I could get a case big enough and a new PSU to have both the A580 and B580 in the same computer and use the B580 for games and the A580 for the other stuff (not quite sure how well that combination would work, if at all).
Either way, I'm happy that I got the card, especially with a decent CPU (even the A series with my previous Ryzen 5 4500 was an absolute mess: no software showed the CPU being maxed out, but it very much was a bottleneck). I do kind of hope that I'll keep getting the sort of performance I see in War Thunder or GTA V Enhanced Edition for years to come (yes, the raytracing works there as well), or even in more recent games like Kingdom Come: Deliverance 2.
If the upscaling/framegen support was even better in most game engines and games, then it could be stretched further or at least used as a band aid for the likes of Delta Force or Forever Winter - games that come out with pretty bad optimization and are taxing on the hardware, with no good way to turn subjectively unnecessary effects or graphical features off, despite the underlying engines themselves being able to scale way down.
At the end of the day, even if Intel Arc won't displace any of the big players in the market, it should improve the market competitiveness which is good for the consumer.
I'm also hoping that Intel puts out an Arc A770 class upgrade in their B-series line-up.
My workstation and my kids' playroom gaming computer both have A770's, and they've been really amazing for the price I paid, $269 and $190. My triple screen racing sim has an RX 7900 GRE ($499), and of the three the GRE has surprisingly been the least consistently stable (e.g. driver timeouts, crashes).
Granted, I came into the new Intel GPU game after they'd gone through 2 solid years of driver quality hell, but I've been really pleased with Intel's uncharacteristic focus and pace of improvement in both the hardware and especially the software. I really hope they keep it up.
They won't make a B770 or C770 because they lose money on every card they sell. The prices are low because otherwise they would sell 0, and they already paid for the silicon. The Intel graphics division is run by fools who won't give their cards a USP in the SR-IOV feature homelabbers have been asking for for years. Doing what AMD and Nvidia do, but worse, is not a profitable strategy. There's a 50% chance the whole division gets fired in the next year though.
SR-IOV doesn't sell consumer cards, are you expecting Intel to produce an expensive XEON equivalent of Arc?
I'd expect them to attempt capturing some LLM market share by loading up the cards with RAM rather than expending effort on niche features.
SR-IOV would sell more cards than Intel would be able to sell if they charged market rate for their GPUs. Same goes for not selling a local LLM high VRAM variant. Intel is just allergic to competing by offering a USP.
AMD Ryzen CPUs have ECC enabled but not officially supported. Intel still locks away the feature.
> Intel are "fools" for not adding a feature that maybe a few thousand people care about?
A few thousand people is a lot more than the number of gamers who'd buy an Intel GPU at market price. If they don't raise their ASP into the black they're going to ax the whole division.
For reference the B580 die is nearly the size of the 4070 but sells for a third the price.
mm2 is directly proportional to BOM cost and minimum profitable selling price. Intel's software and hardware is much less efficient than Nvidia's, so their products cost more to make and sell for less. Battlemage is at best break-even; with Alchemist they lost money. If they want to stay in business they need to raise their ASP. Since no one is buying Intel for their games support, they need to find a new market, thus SR-IOV and local LLM. Unlike Nvidia, they don't need to worry about cannibalizing other business units.
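As a rough illustration of why die area drives the economics (the wafer price and defect density below are placeholder assumptions, not quoted figures; the die sizes are approximate public numbers): a bigger die means fewer candidates per wafer and lower yield, so cost per good die climbs faster than area.

```python
import math

WAFER_DIAMETER_MM = 300
WAFER_COST_USD = 12000          # placeholder for an N5/N4-class wafer price
DEFECT_DENSITY_PER_CM2 = 0.08   # placeholder defect density

def dies_per_wafer(die_area_mm2):
    """Standard approximation: wafer area over die area, minus edge losses."""
    r = WAFER_DIAMETER_MM / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2):
    """Poisson yield model: chance a die has zero killer defects."""
    return math.exp(-DEFECT_DENSITY_PER_CM2 * die_area_mm2 / 100)

def cost_per_good_die(die_area_mm2):
    return WAFER_COST_USD / (dies_per_wafer(die_area_mm2) * yield_fraction(die_area_mm2))

# Approximate public die sizes: B580 (BMG-G21) ~272 mm2, 4070-class AD104 ~294 mm2.
for name, area in [("B580-class", 272), ("4070-class", 294)]:
    print(f"{name:<11} ~${cost_per_good_die(area):.0f} per good die")
```

Under these assumptions the silicon cost lands in the same ballpark for both parts, which is the point of the die-size comparison: the big difference is in what the finished card can sell for.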
On the die size argument, which I see being echoed a lot online:
Why would a customer care or factor that into their purchasing decisions? Saying that these dGPUs with large dies are what is going to put Intel out of business is ludicrous and the Xe cores are shared amongst many of Intel's most lucrative products.
You can afford larger dies on N4 compared to when the 40-series were launched. It's no longer the leading edge node and yields have likely improved.
dGPUs have pretty expensive GDDR modules; I do not have data on the exact proportion, but I would bet that the memory modules are the more important line item.
BoM matters less on lower-volume dGPU units (compared to mobile SoCs). Masks, R&D and validation are big fixed up-front costs.
Recurring software support is also independent of how many units get sold. Xe cores are shared by many Intel products (client & server CPUs, datacenter GPUs and gaming GPUs).
The B580 is widely popular with gamers; Intel cannot keep up with demand at the moment. I doubt they need to unlock SR-IOV on the gaming-segment dGPUs to get rid of stock, as you seem to suggest. Their datacenter GPUs [1] offer support for SR-IOV, as you probably already know, so I assume you are bemoaning market segmentation.
[1] https://www.youtube.com/watch?v=tLK_i-TQ3kQ -- Wendell's video on Intel's Flex 170 GPU - Subscription Free GPU Accelerated VDI on Proxmox 8.1
I have a couple of these too, and I strongly believe Intel is effectively subsidizing these to try to get a foothold in the market.
You get the equivalent of a $500 Nvidia card for around $300 or less. And it makes sense, because Intel knows if they can get a foothold in this market they're that much more valuable to shareholders.
> They should drop a $600 card with 128gb of vram. This is just barely possible without losses on every sale.
At current market pricing on dramexchange, 128GB of 16Gbit GDDR6 chips would cost $499.58. That only leaves $100.42 for the PCB, GPU die, miscellaneous parts, manufacturing, packaging, shipping, the store's margin, etcetera. I suspect that they could not do that without taking a loss.
I wonder if they could mix clamshell mode and quad-rank to connect 64 memory chips to a GPU. If they connected 128GB of VRAM to a GPU, I would expect them to sell it for $2000, not $600.
Yup, just went with the $3 per GB formula.
GPU should be about $200 at TSMC (400-450 mm2).
+ about $150 for the pcb, cooler and other stuff, I didn't consider.
Times a 1.6 to 1.75 factor if they like actually being profitable (operations, R&D, sales, marketing, ...).
So about $1.5k, I guess.
Multiply that with a .33 "screw the competition" factor and my initial guess is almost spot on.
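Putting the guesses in this sub-thread into one place (all numbers are the commenters' estimates above, not Intel figures), the back-of-the-envelope works out roughly like this:

```python
# Back-of-the-envelope for the hypothetical $600 / 128GB card, using the
# estimates from the comments above (none of these are Intel's numbers).
vram  = 499.58   # 128GB of 16Gbit GDDR6 at the quoted dramexchange pricing
die   = 200.00   # ~400-450 mm2 GPU die at TSMC
board = 150.00   # PCB, cooler, VRM, assembly, and other stuff

bom = vram + die + board            # ~$850
margin_factor = 1.75                # operations, R&D, sales, marketing, profit
normal_price = bom * margin_factor  # ~$1.5k if priced to actually be profitable
aggressive = normal_price * 0.33    # the ".33 screw-the-competition factor"

print(f"BOM ~${bom:.0f}, sustainable ~${normal_price:.0f}, aggressive ~${aggressive:.0f}")
```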
Real problem:
The largest GDDR7 package money can buy right now is 3GB. For 128GB that's a 1376-bit bus right there (43 packages x 32 bits). Good luck fitting that to a sub-500mm2 die.
In the future you could put that amount of VRAM on a 512-bit bus, though.
Also, normal DDR is getting really fast at the moment; 8-channel setups can already challenge most VRAM configurations on bandwidth. Maybe it's time soon to switch back to swappable memory.
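For the bus-width arithmetic behind that claim (a quick sanity check; the only inputs are package capacities): each GDDR package normally occupies its own 32-bit channel, clamshell shares one channel between two packages, and the quad-rank idea floated above would share it between four, if such a configuration existed for GDDR.

```python
import math

def bus_width_bits(total_gb, package_gb, packages_per_channel=1):
    """Each GDDR channel is 32 bits wide; clamshell = 2 packages per channel."""
    packages = math.ceil(total_gb / package_gb)
    channels = math.ceil(packages / packages_per_channel)
    return channels * 32

print(bus_width_bits(128, 3))        # 43 x 3GB GDDR7 packages   -> 1376-bit bus
print(bus_width_bits(128, 2, 2))     # 64 x 2GB GDDR6, clamshell -> 1024-bit bus
print(bus_width_bits(128, 2, 4))     # clamshell + quad-rank     ->  512-bit bus
print(bus_width_bits(128, 8))        # future 8GB packages       ->  512-bit bus
```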
>+ about $150 for the pcb, cooler and other stuff,
Assuming I had access to the gerbers, I could order a replica of the 5090 PCB for $65, including shipping, and Intel's PCB is half that. Again, this is for a dude off the street buying 1-5 copies, not a bulk order.
This makes no sense.
Currently 24GB is the sweet spot to stay competitive and 32GB is the maximum amount of memory you could stick on a card and still have a decent memory to bandwidth ratio.
What they should do instead is make the cards thinner and more efficient so that you can easily put two of them in a case.
They are definitely selling them at close to no profit, but they are not anywhere near subsidizing them unless they botched their supply chain so badly that they are overpaying the BOM costs.
I'm proud to support them. Intel is also selling their Lunar Lake chips fairly cheaply. Let's all hope they make it through this rough patch. I can't imagine a world where we only have one x86 manufacturer.
R&D is a sunk cost that is largely paid for by their iGPUs. Selling at cost is not a subsidy, and that is not relevant here since they should be making money off every sale. I tried estimating their costs a few months ago and found that they had room for up to a 10% margin on these, even after giving retailers a 10% margin. If they are not making money from these, it would be their fault for not building enough to leverage economies of scale.
https://slickdeals.net/f/17910114-acer-arc-a770-16gb-gpu-w-f...
> Acer Arc A770 16gb GPU w/Free Game & Shipping - $229.99 (down from $399.99 at Newegg)
Sure that margin is holding when they had to mark the first generation down to get them off the shelves. It would truly surprise me if they've made a significant profit off these cards.
This is so cool! I think this is a video of Cyberpunk 2077 with path tracing on versus off: https://www.youtube.com/watch?v=89-RgetbUi0. It seems like a real, next-generation advance in graphics quality that we haven't seen in a while.
Just a heads up - it looks like the "Path Tracing Off" shots have ray tracing disabled as well. In the shots starting at 1:22 (the car and then the plaza), it looks like they just have the base screenspace reflections enabled. Path tracing makes a difference (sometimes big, sometimes small) for diffuse lighting in the game. The kind of reflection seen in those scenes can be had by enabling "normal" ray tracing in the game, which is playable on more systems.
Ray tracing is more intensive than "path tracing". From my understanding, they are the same, with the only difference being that "path tracing" does fewer calculations by only considering a light source's most probable or impactful paths, or grouping the rays, or something. Neither scene is using "ray tracing".
Ray tracing refers to the act of tracing rays. You can use it for lighting, but also sound, visibility checks for enemy AI, etc.
Path tracing is a specific technique where you ray trace multiple bounces to compute lighting.
In recent games, "ray tracing" often means just using ray tracing for direct light shadows instead of shadow maps, raytraced ambient occlusion instead of screenspace AO, or raytraced 1-bounce of specular indirect lighting instead of screenspace reflections. "Path traced" often means raytraced direct lighting + 1-bounce of indirect lighting + a radiance cache to approximate multiple bounces. No game does _actual_ path tracing because it's prohibitively expensive.
I believe the "path tracing" you described here is actual path tracing, insofar as each sample is one "path" rather than one "ray", where a "path" does at least one bounce, which is equivalent to at least two rays per sample. Though I think the "old" path tracing algorithm was indeed very slow because it sent out samples in random directions, whereas modern real-time path tracing uses the ReSTIR algorithm, which does a form of importance sampling (resampling, really) and is a lot faster.
The other significant part is that path tracing is independent of the number of light sources, which isn't the case for some of the classical ray traced effects you mention ("direct shadows" vs path traced direct lighting).
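For intuition, here is a minimal toy path tracer (an illustrative sketch, not engine code: the one-plane-plus-sky scene and all constants are invented). Each sample is a path that bounces at least once, "importance sampling" just means picking bounce directions from a distribution that favors likely contributors and dividing by their probability, and the noisy per-pixel estimates are why real-time implementations lean so hard on denoising and ReSTIR-style sample reuse.

```python
import math
import random

# Toy scene: an infinite diffuse ground plane at y = 0 (albedo 0.5), lit only by
# a bright patch of "sky" overhead. Entirely invented for illustration.

def sky_radiance(direction):
    return 4.0 if direction[1] > 0.7 else 0.0   # the sole light source

def cosine_sample_hemisphere():
    # Importance sampling for diffuse surfaces: pdf proportional to cos(theta),
    # which cancels the cosine term of the rendering equation.
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2 * math.pi * u2
    return (r * math.cos(phi), math.sqrt(1.0 - u1), r * math.sin(phi))

def trace_path(origin, direction, max_bounces=3):
    """One sample = one path: follow the ray, bounce it, accumulate lighting."""
    throughput, radiance = 1.0, 0.0
    for _ in range(max_bounces):
        if direction[1] >= 0.0:                     # ray escapes upward: sample the sky
            radiance += throughput * sky_radiance(direction)
            break
        t = -origin[1] / direction[1]               # hit the ground plane y = 0
        origin = tuple(origin[i] + t * direction[i] for i in range(3))
        throughput *= 0.5                           # diffuse albedo
        direction = cosine_sample_hemisphere()
    return radiance

# Average many noisy path samples for one pixel; games use ~1-2 and denoise instead.
samples = [trace_path((0.0, 1.0, 0.0), (0.0, -0.8, 0.6)) for _ in range(1024)]
print(sum(samples) / len(samples))
```

Note how the loop is agnostic to how many emitters the "sky" contains; that is the light-count independence mentioned above.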
Was raytracing a psyop by Nvidia to lock out AMD? Games today don't look that much nicer than 10 years ago and demand crazy hardware. Is raytracing a solution looking for a problem?
I've kind of wondered about this a bit too. The visual quality side of it, that is. Especially in a context where you're actually playing a game, not just sitting there staring at side-by-side still frames looking for minor differences.
What I have assumed given the trend, but could be completely wrong about, is that the raytraced version of the world might be easier on the software and game-dev side: you can get great visual results without the overhead of meticulous engineering, use, and composition of different lighting systems, shader effects, etc.
For the vast majority of scenes in games, the best balance of performance and quality is precomputed visibility, lighting and reflections in static levels with hand-made model LoDs. The old Quake/Half-Life bsp/vis/rad combo. This is unwieldy for large streaming levels (e.g. open world games) and breaks down completely for highly dynamic scenes. You wouldn't want to build Minecraft in Source Engine[0].
However, that's not what's driving raytracing.
The vast majority of game development is "content pipeline" - i.e. churning out lots of stuff - and engine and graphics tech is built around removing roadblocks to that content pipeline, rather than presenting the graphics card with an efficient set of draw commands. e.g. LoDs demand artists spend extra time building the same model multiple times; precomputed lighting demands the level designer wait longer between iterations. That goes against the content pipeline.
Raytracing is Nvidia promising game and engine developers that they can just forget about lighting and delegate that entirely to the GPU at run time, at the cost of running like garbage on anything that isn't Nvidia. It's entirely impractical[1] to fully raytrace a game at runtime, but that doesn't matter if people are paying $$$ for roided out space heater graphics cards just for slightly nicer lighting.
[0] That one scene in The Stanley Parable notwithstanding
[1] Unless you happen to have a game that takes place entirely in a hall of mirrors
Yep. I worked on the engine of a PS3/360 AAA game long ago. We spent a lot of time building a pipeline for precomputed lighting. But, in the end, the game was 95% fully dynamically lit.
For the artists, being able to wiggle lights around all over in real time was an immeasurable productivity boost over even just 10s of seconds between baked lighting iterations. They had a selection of options at their fingertips and used dynamic lighting almost all the time.
But, that came with a lot of restrictions and limitations that make the game look dated by today’s standards.
I get the pitch that it is easier for the artists to design scenes with ray-tracing cards. But I don’t really see why we users need to buy them. Couldn’t the games be created on those fancy cards, and then bake the lighting right before going to retail?
(I mean, for games that are mostly static. I can definitely see why some games might want to be raytraced because they want some dynamic stuff, but that isn’t every game).
The player often has a light, and the player is usually pretty dynamic.
One of the effects I really like is bounce lighting. Especially with proper color. If I point my flashlight at a red wall, it should bathe the room in red light. Can be especially used for great effect in horror games.
I was playing Tokyo Xtreme Racer with ray tracing, and the car's headlights are light sources too (especially when you flash a rival to start a race). My red car will also bounce lighting on the walls in tunnels to make things red.
It doesn't even have to be super dynamic either. I can't even think of a game where opening a door to the outside sun changes the indirect lighting in a room (without ray tracing it). Something I do every day in real life. It would be possible to bake that too, assuming your door only has 2 positions.
When path tracing works, it is a much, much, MUCH simpler and vastly saner algorithm than those stacks of 40+ complicated rasterization hacks in current rasterization-based renderers that barely manage to capture crude approximations of the first indirect light bounces. Rasterization as a rendering model for realistic lighting has outlived its usefulness. It overstayed because optimizing ray-triangle intersection tests for path tracing in hardware is a hard problem that took some 15 to 20 years of research to even get to the first generation RTX hardware.
>When path tracing works, it is a much, much, MUCH simpler and vastly saner algorithm than those stacks of 40+ complicated rasterization hacks in current rasterization-based renderers that barely manage to capture crude approximations of the first indirect light bounces.
It's ironic that you harp about "hacks" that are used in rasterization, when raytracing is so computationally intensive that you need layers upon layers of performance hacks to get decent performance. The raytraced results need to be denoised because not enough rays are used. The output of that needs to be supersampled (because you need to render at low resolution to get acceptable performance), and then on top of all of that you need to hallu^W extrapolate frames to hit high frame rates.
As a sibling post already mentioned, rasterization based hacks are incapable of getting as accurate lighting as path tracing can, given enough processing time.
I will admit that I was a bit sly in that I omitted the word "realtime" from the path tracing part of my claim on purpose. The amount of denoising that is currently required doesn't excite me either, from a theoretical purity standpoint. My sincere hope is that there is still a feasible path to a much higher ray count (maybe ~100x) and much less denoising.
But that is really the allure of path tracing: a basic implementation is at the same time much simpler and more principled than any rasterization based approximation of global illumination can ever be.
And you still need rasterization for ray traced games (even "fully" path traced games like Cyberpunk 2077) because the ray tracing sample count is too low to result in an acceptable image even after denoising. So the primary visibility rendering is done via rasterization (which has all the fine texture and geometry detail without shading), and the ray traced (and denoised) shading is layered on top.
This combination of techniques is actually pretty smart: Combine the powers of the rasterization and ray tracing algorithms to achieve the best quality/speed combination.
The rendering implementation in software like Blender can afford to be primitive in comparison: It's not for real-time animation, so they don't make use of rasterization at all and do not even use denoising. That's why rendering a simple scene takes seconds in Blender to converge but only milliseconds in modern games.
For primary visibility, you don't need more than 1 sample. All it is is a simple "send a ray from the camera, stop on the first hit, done". No Monte Carlo needed, no noise.
On recent hardware, for some scenes, I've heard of primary visibility being faster to raytrace than rasterize.
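To spell out the contrast with the noisy shading rays, here is a sketch of primary visibility as pure ray casting: one deterministic ray per pixel against a toy one-sphere "scene" standing in for the real acceleration structure (everything here is invented for illustration).

```python
import math

def first_hit(origin, direction, center=(0.0, 0.0, 5.0), radius=1.0):
    """Distance to the first sphere intersection, or None for background."""
    oc = tuple(origin[i] - center[i] for i in range(3))
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is normalized, so a == 1
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# One camera ray through each pixel center, first hit wins: no Monte Carlo, no noise.
width = height = 8
for py in range(height):
    row = ""
    for px in range(width):
        d = ((px + 0.5) / width - 0.5, (py + 0.5) / height - 0.5, 1.0)
        inv = 1.0 / math.sqrt(sum(v * v for v in d))
        d = tuple(v * inv for v in d)
        row += "#" if first_hit((0.0, 0.0, 0.0), d) is not None else "."
    print(row)
```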
The main reasons why games are currently using raster for primary visibility:
1. They already have a raster pipeline in their engine, have special geometry paths that only work in raster (e.g. Nanite), or want to support GPUs without any raytracing capability and need to ship a raster pipeline anyways, and so might as well just use raster for primary visibility.
2. Acceleration structure building and memory usage is a big, unsolved problem at the moment. Unlike with raster, there aren't existing solutions like LODs, streaming, compression, frustum/occlusion culling, etc to keep memory and computation costs down. Not to mention that updating acceleration structures every time something moves or deforms is a really big cost. So games are using low-resolution "proxy" meshes for raytracing lighting, and using their existing high-resolution meshes for rasterization of primary visibility. You can then apply your low(relative) quality lighting to your high quality visibility and get a good overall image.
Nvidia's recent extensions and Blackwell hardware are changing the calculus though. Their partitioned TLAS extension lowers the acceleration structure build cost when moving objects around, their BLAS extension allows for LOD/streaming solutions to keep memory usage down as well as cheaper deformation for things like skinned meshes since you don't have to rebuild the entire BLAS, and Blackwell has special compression for BLAS clusters to further reduce memory usage. I expect more games in the near future (remember games take 4+ years of development, and they have to account for people on low-end and older hardware) to move to raytracing primary visibility, and ditch raster entirely.
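A toy sketch of why "refit instead of rebuild" matters for deforming geometry (hand-rolled node type, not any real API): refitting keeps the tree's topology and just recomputes bounding boxes bottom-up, which is cheap, while a full rebuild also re-derives the topology, which is the expensive part the newer extensions try to avoid or amortize.

```python
# Toy BVH refit: keep the tree shape, recompute bounds bottom-up after movement.

class Node:
    def __init__(self, left=None, right=None, triangles=None):
        self.left, self.right = left, right
        self.triangles = triangles or []   # leaf payload: triangles as 3 points each
        self.bounds = None

def refit(node):
    """Recompute node.bounds after vertices moved; tree topology is untouched."""
    if node.triangles:                                     # leaf
        pts = [p for tri in node.triangles for p in tri]
    else:                                                  # inner: merge refreshed child bounds
        lo_l, hi_l = refit(node.left)
        lo_r, hi_r = refit(node.right)
        pts = [lo_l, hi_l, lo_r, hi_r]
    node.bounds = (tuple(min(p[i] for p in pts) for i in range(3)),
                   tuple(max(p[i] for p in pts) for i in range(3)))
    return node.bounds

leaf_a = Node(triangles=[[(0, 0, 0), (1, 0, 0), (0, 1, 0)]])
leaf_b = Node(triangles=[[(2, 0, 0), (3, 0, 0), (2, 1, 0)]])
root = Node(left=leaf_a, right=leaf_b)
refit(root)

leaf_b.triangles[0][1] = (6, 0, 0)   # animation deforms the mesh
refit(root)                          # cheap per-frame update instead of a rebuild
print(root.bounds)                   # ((0, 0, 0), (6, 1, 0))
```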
This doesn't hold at all. Path tracing doesn't "just work"; it is computationally infeasible on its own. It needs acceleration structures, ray traversal scheduling, denoisers, upscalers, and a million other hacks to work anywhere close to real time.
Except that it isn't like that at all. All you get from the driver in terms of ray tracing is the acceleration structure and ray traversal. Then you have denoisers and upscalers provided as third-party software. But games still ship with thousands of materials, and it is up to the developer to manage lights, shaders, etc, and use the hardware and driver primitives intelligently to get the best bang for the buck. Plus, given that primary rays are a waste of time/compute, you're still stuck with G-buffer passes and rasterization anyway. So now you have two problems instead of one.
I don’t think the last two decades of computer graphics research are an Nvidia psyop. Ray tracing is relatively simple and also reflects reality (you’re literally doing a light simulation to approximate how we really see). It’s always been the gold standard for rendering, we just didn’t know how to make it fast enough for real time rendering.
I think there are two ways of looking at it. Firstly, raster has more or less plateaued: there haven't been any great advances in a long time, and it's not like AMD or any other company have offered an alternative path or vision for where they see 3D graphics going. The last thing a company like Nvidia wants is to be a generic good which is easy to compete with or simple to compare against. Nvidia was also making use of their strength and long-term investment in ML to drive DLSS.
Secondly, Nvidia are a company that want to sell stuff for a high asking price, and once a certain tech gets good enough that becomes more difficult. If the 20 series had just been an incremental improvement on the 10 series, and so on, then I expect sales would have plateaued, especially if game requirements don't move much.
I don't believe we have reached a raster ceiling. More and more it seems like groups are in cahoots to push RTX and ray tracing. We are left to speculate why devs are doing this. Nvidiabux? An easier time adding marketing keywords? Who knows... I'm not a game dev.
There's no need for implications of deals between nvidia and game developers in smoke filled rooms. It's pretty straightforward: raytracing means less work for developers, because they don't have to manually place lights to make things look "right". Plus, they can harp about how it looks "realistic". It's not any different than the explosion of electron apps (and similar technologies making apps using html/js), which might be fast to develop, but are bloated and feel non-native. But it's not like there's an electron corp, giving out "electronbux" to push app developers to use electron.
Raster quality is limited by how much effort engine developers are willing to put into finding computationally cheap approximations of how light/materials behave. But it feels like the easy wins are already taken?
All the biggest innovations in "pure" rasterization renderers in the last 10-15 years have actually been some form of raytracing in a very reduced, limited form.
Screenspace Ambient Occlusion? Marching rays (tracing) against the depth buffer to calculate a terrible but decent looking approximation of light occlusion. Some of the modern SSAO implementations like GTAO need to be denoised by TAA.
Screenspace Reflections? Marching rays against the depth buffer and taking samples from the screen to generate light samples. Often needs denoising too.
Light shafts? Marching rays through the shadow map and approximating back scattering from whether the shadowed light is occluded or not.
That voxel cone tracing thing UE4 never really ended up shipping? Tracing's in the name, you're just tracing cones instead of rays through a super reduced quality version of the scene.
Material and light behavior is not the problem. Those are constantly being researched too, but the changes are more subtle. The big problem is light transport. Rasterization can't solve that, it's fundamentally the wrong tool for the job. Rasterization is just a cheap approximation for shooting primary rays out of the camera into a scene. You can't bounce light with rasterization.
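The common core of those screen-space effects is one small loop, sketched below (illustrative only; a real version runs per pixel in a shader with proper projection math): step a ray forward in small increments and, at each step, compare its depth against the depth buffer to decide whether the already-rasterized geometry occludes it.

```python
# Sketch of screen-space ray marching against a depth buffer (the shared core of
# SSAO/SSR-style effects). The tiny depth buffer below is invented for illustration.

def march_ray(depth_buffer, origin, direction, steps=32, step_size=0.1):
    """Return the first pixel where the ray ends up behind the stored depth
    (i.e. occluded by on-screen geometry), or None if it escapes the screen."""
    x, y, z = origin
    for _ in range(steps):
        x += direction[0] * step_size
        y += direction[1] * step_size
        z += direction[2] * step_size
        xi, yi = int(x), int(y)
        if not (0 <= yi < len(depth_buffer) and 0 <= xi < len(depth_buffer[0])):
            return None                  # left the screen: no information, a classic SSR failure
        if z > depth_buffer[yi][xi]:     # ray is behind the rasterized surface at this pixel
            return (xi, yi)
    return None

# 4x4 "depth buffer": mostly far geometry (1.0) with one close surface (0.2).
depth = [[1.0, 1.0, 1.0, 1.0],
         [1.0, 0.2, 1.0, 1.0],
         [1.0, 1.0, 1.0, 1.0],
         [1.0, 1.0, 1.0, 1.0]]
print(march_ray(depth, origin=(0.0, 0.0, 0.0), direction=(0.5, 0.5, 0.1)))  # -> (1, 1)
```

Because the depth buffer only knows about what the camera can see, anything off-screen or behind other geometry is simply missing, which is exactly the limitation proper ray tracing removes.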
>>We are left to speculate why devs are doing this.
Well, I am a gamedev, and currently lead of a rendering team. The answer is very simple - because ray tracing can produce much better outcomes than rasterization with lower load on the teams that produce content. There's not much else to it, no grand conspiracy - if the hardware was fast enough 20 years ago to do this everyone would be doing it this way already because it just gives you better outcomes. No nvidiabux necessary.
> There's not much else to it, no grand conspiracy
True, in that raytracing is the future. Though I don't think it's a conspiracy so much as the plain truth that "RTX" as a product was Nvidia creating a 'new thing' to push AMD out of. Moat building, plain and simple. Nvidia's cards were better at it, unsurprisingly; much like mesh shaders, they basically wrote the API standard to match their hardware.
And just to make sure Nvidia doesn't get more credit than it deserves, the debut RTX cards (RTX 20 series) were a complete joke. A terrible product generation offering no performance gains over the 10 series at the same price with none of the cards really being fast enough to actually do RT very well. They were still better at RT than AMD though so mission accomplished I guess.
I don't think it's just about looks. The advantage of ray tracing is that the lighting is done in real time rather than with static baked maps. One of the features I feel was lost with modern game lighting is dynamic environments. But as long as the game isn't raytraced-only, these types of interactions will stay disabled. Teardown and The Finals are examples of dynamic-environment games with raytraced lighting.
Another example: when was the last time you saw a game with a mirror that wasn't broken?
Hitman, GTA, both of which use a non-raytraced implementation. More to the point, lack of mirrors doesn't impact the gameplay. It's something that's trotted out as a nice gimmick, 99% of the time it's not there, and you don't really notice that it's missing.
Hitman is an example that contradicts your point about gameplay: guards will see you in mirrors and act appropriately. They'll be doing that for gameplay with a non-graphical method, but you need to show it to the player graphically for them to appreciate the senses available to the guards.
>Hitman is an example that contradicts your point about gameplay, guards will see you in mirrors and act appropriately.
See:
>It's something that's trotted out as a nice gimmick, 99% of the time it's not there, and you don't really notice that it's missing.
Yeah, it's a nice detail for the 1% of time that you're in a bathroom or whatever, but it's not like the immersion takes a hit when it's missing. Moreover because the game is third person, you can't even accurately judge whether you'll be spotted through a mirror or not.
GTA V's implementation did not work in their cars. Rear view and side view mirrors in cars are noticeably low quality and missing other cars while driving, which is pretty big for gameplay purposes.
Working mirrors are limited to less complex scenes in GTA. Hitman too I believe.
Not if you want better fidelity: the VFX industry for film moved from rasterisation to raytracing / pathtracing (on CPU initially, and a lot of final frame rendering is still done on CPU due to memory requirements even today, although lookdev is often done on GPU if the shaders / light transport algorithms can be matched between GPU/CPU codepaths) due to the higher fidelity possible starting back in around 2012/2013.
It required discarding a lot of "tricks" that had been learnt with rasterisation to speed things up over the years, and made things slower in some cases, but meant everything could use raytracing to compute visibility / occlusion, rather than having shadow maps, irradiance caches, pointcloud SSS caches, which simplified workflows greatly and allowed high-fidelity light transport simulations of things like volume scattering in difficult mediums like water/glass and hair (i.e. TRRT lobes), where rasterisation is very difficult to get the medium transitions and LT correct.
I own an RTX 4090 and yes, Cyberpunk looks amazing with raytracing - but worth the $2,000 and Nvidia's monopoly over the tech? A big resounding no (for me).
If you think of either crypto or gaming and not accelerated compute for advanced modeling and simulation when you hear Nvidia, you won't have sufficient perspective to answer this question.
What does RTX do, what does it replace, and what does it enable, for whom? Repeat for Physx, etc. Give yourself a bonus point if you've ever heard of Nvidia Omniverse before right now.
If suggested usage means upscaling, it's a dubious trade off. That's why I'm not using it in Cyberpunk 2077, at least with RDNA 3 on Linux, since I don't want to use upscaling.
Here is a good video by Digital Foundry looking at Metro Exodus Enhanced Edition with devtools, where they show what raytracing is and how it differs from regular lighting.
Simplified tl;dr: with raytracing you build the environment, designate which parts (like the sun and lamps) emit light, and you are done. With regular lighting, an artist has to spend hours to days adding many fake light sources to get the same result.
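A hedged illustration of that workflow difference (the data is invented for the example, not Metro Exodus content): with a raytraced pipeline the scene description is essentially geometry plus which surfaces emit light, while a baked/raster pipeline tends to accumulate hand-placed helper lights whose only job is to fake the missing bounce light, and which have to be re-tuned whenever the level changes.

```python
# Invented example data -- not any real engine's scene format.

# What gets authored when the renderer path/ray traces: geometry plus real emitters.
raytraced_scene = {
    "geometry": ["street.mesh", "diner.mesh", "neon_sign.mesh"],
    "emitters": [
        {"object": "neon_sign.mesh", "color": (1.0, 0.2, 0.6), "intensity": 40.0},
        {"object": "sky",            "color": (0.4, 0.5, 0.9), "intensity": 1.0},
    ],
}

# What tends to accumulate with baked/rasterized lighting: the same emitters plus
# hand-placed fill lights that only exist to imitate indirect light.
rasterized_scene = {
    **raytraced_scene,
    "fake_fill_lights": [
        {"position": (2.0, 1.0, -3.5), "color": (1.0, 0.3, 0.6), "intensity": 3.0,
         "note": "fakes neon bounce off the wet pavement"},
        {"position": (0.5, 2.2, -1.0), "color": (0.4, 0.5, 0.9), "intensity": 1.5,
         "note": "fakes sky light leaking through the doorway"},
        # ...typically dozens more per location, re-tuned whenever geometry moves
    ],
}

print(len(rasterized_scene["fake_fill_lights"]), "hand-placed helper lights to maintain")
```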
Arc will be successful because it will be in all of Intel's mobile chips. The discrete GPU market is smaller by a big factor, and they are targeting the biggest part of that market with low cost.
Intel Arc could be Intel's comeback if they play it right. AMD's got the hardware to disrupt nVidia but their software sucks and they have a bad reputation for that. Apple's high-end M chips are good but also expensive like nVidia (and sold only with a high-end Mac) and don't quite have the RAM bandwidth.
If they started shipping GPUs with more RAM, I think they'd be in a strong position. The traditional disruption is to eat the low-end and move up.
Silly as it may sound, but a Battlemage where one can just plug in DIMMs, with some high total limit for RAM, would be the ultimate for developers who just want to test / debug LLMs locally.
Intel is run by fools. I don't see them coming back. They just don't have the willingness to compete and offer products with USPs. Intel today is just MBAs and the cheapest outsourced labor the MBAs can find.
It feels like just yesterday that Chips and Cheese started publishing (*checked and they started up in 2020 -- so not that long ago after all!), and now they've really become a mainstay in my silicon newsletter stack, up there with Semianalysis/Semiengineering/etc.
> Intel uses a software-managed scoreboard to handle dependencies for long latency instructions.
Interesting! I've seen this in compute accelerators before, but both AMD and Nvidia manage their long-latency dependency tracking in hardware so it's interesting to see a major GPU vendor taking this approach. Looking more into it, it looks like the interface their `send`/`sendc` instruction exposes is basically the same interface that the PE would use to talk to the NOC: rather than having some high-level e.g. load instruction that hardware then translates to "send a read-request to the dcache, and when it comes back increment this scoreboard slot", the ISA lets/makes the compiler state that all directly. Good for fine control of the hardware, bad if the compiler isn't able to make inferences that the hardware would (e.g. based on runtime data), but then good again if you really want to minimize area and so wouldn't have that fancy logic in the pipeline anyways.
> both AMD and Nvidia manage their long-latency dependency tracking in hardware
This is incorrect for AMD, which has "s_waitcnt" instructions in its ISA, which is publicly documented. I believe it is also incorrect for Nvidia, but don't have the receipts to prove it.
I actually have a B580 that I replaced my old A580 with.
I didn't manage to get it for MSRP (because living in Europe does tend to increase the price quite a bit, a regular RTX 3060 is over 300 EUR here), but I have to say that it's a pretty nice card, when most others seem quite overpriced or outside of my budget.
When paired with an 5800X the performance is good, the XeSS upscaling looks prettier than FSR and pretty close to DLSS, the framegen also seems to have higher quality than FSR (but more latency, from what I've seen), the hardware AV1 encoder is lovely and the other QSV ones are great, though I do wish that I could get a case big enough and a new PSU to have both A580 and B580 in the same computer and use the B580 for games and A580 for the other stuff (not quite sure how well that combination would work, if at all).
Either way, I'm happy that I got the card, especially with a decent CPU (even the A series with my previous Ryzen 5 4500 was an absolute mess, no software showed the CPU being maxed out but it very much was a bottleneck) and do kind of hope that I'll get the likes of performance that you get in War Thunder, or even GTA V Enhanced Edition for the years to come (yes, the raytracing works there as well) or even more recent games like Kingdom Come: Deliverance 2.
If the upscaling/framegen support was even better in most game engines and games, then it could be stretched further or at least used as a band aid for the likes of Delta Force or Forever Winter - games that come out with pretty bad optimization and are taxing on the hardware, with no good way to turn subjectively unnecessary effects or graphical features off, despite the underlying engines themselves being able to scale way down.
At the end of the day, even if Intel Arc won't displace any of the big players in the market, it should improve the market competitiveness which is good for the consumer.
I love these breakdown writeups so much.
I'm also hoping that Intel puts out an Arc A770 class upgrade in their B-series line-up.
My workstation and my kids' playroom gaming computer both have A770's, and they've been really amazing for the price I paid, $269 and $190. My triple screen racing sim has an RX 7900 GRE ($499), and of the three the GRE has surprisingly been the least consistently stable (e.g. driver timeouts, crashes).
Granted, I came into the new Intel GPU game after they'd gone through 2 solid years of driver quality hell, but I've been really pleased with Intel's uncharacteristic focus and pace of improvement in both the hardware and especially the software. I really hope they keep it up.
They won't make a B770 or C770 because they lose money on every card they sell. The prices are low because otherwise they would sell 0 and they already paid for the silicon. The intel graphics division is run by fools who won't give their cards a USP in the SR-IOV feature home labers have been asking for for years. Doing what AMD and Nvidia do but worse is not a profitable strategy. There's a 50% the whole division gets fired in the next year though.
SR-IOV doesn't sell consumer cards, are you expecting Intel to produce an expensive XEON equivalent of Arc? I'd expect them to attempt capturing some LLM market share by loading up the cards with RAM rather than expending effort on niche features.
SR-IOV would sell more cards than Intel would be able to sell if they charged market rate for their GPUs. Same goes for not selling a local LLM high VRAM variant. Intel is just allergic to competing by offering a USP.
AMD Ryzen CPUs have ECC enabled but not officially supported. Intel still locks away the feature.
>The intel graphics division is run by fools who won't give their cards a USP in the SR-IOV feature home labers have been asking for for years.
Intel are "fools" for not adding a feature that maybe a few thousand people care about?
A few thousand people is a lot more than the number of gamers who'd buy an Intel GPU at market price. If they don't raise their ASP into the black they're going to ax the whole division.
For reference the B580 die is nearly the size of the 4070 but sells for a third the price.
> For reference the B580 die is nearly the size of the 4070 but sells for a third the price.
Doesn't this suggest the B580 has worse yields? Die surface area isn't directly proportional to selling price.
mm2 is directly proportional to BOM cost and minimum profitable selling price. Intels software and hardware is much less efficient than nvidia so their products cost more to make and sell for less. Battlemage is at best break even, with alchemist they lost money. If they want to stay in business they need to raise their ASP. Since no one is buying intel for their games support they need to find a new market, thus SRIOV and local LLM. Unlike nvidia they don't need to worry about cannibalizing other business units.
Some counter points.
On the die size argument, which I see being echoed a lot online:
Why would a customer care or factor that into their purchasing decisions? Saying that these dGPUs with large dies are what is going to put Intel out of business is ludicrous and the Xe cores are shared amongst many of Intel's most lucrative products.
You can afford larger dies on N4 compared to when the 40-series were launched. It's no longer the leading edge node and yields have likely improved.
dGPUs have pretty expensive GDDR modules, I do not have data on the exact proportion but I would bet that the memory modules is the more important line item.
BoM matters less on lower volume (compared to mobile SoCs) dGPU units. Masks masks, R&D and validation are big fixed up-front costs.
Recurring software support is also independent of how many units get sold. Xe cores are shared by many Intel products (client & server CPUs, datacenter GPUs and gaming GPUs).
B580 is widely popular for gamers, Intel cannot keep up the demand at the moment. I doubt they need to unlock SRIOV on the gaming segment dGPUs to get rid of stock, as you seem to suggest. Their datacenter GPUs [1] offer support for SRIOV, as you probably already know, so I assume you are bemoaning market segmentation.
[1] https://www.youtube.com/watch?v=tLK_i-TQ3kQ -- Wendell's video on Flex 170 GPU from Intel - Subscription Free GPU Accelerated VDI on Proxmox 8.1
Why would it say anything about yields?
What is says is that given the die area, Intel fails to capitalize on their chip relative to a 4070.
Intel GPUs are fabbed at TSMC so they're yields are the same as nvidia and amd.
I have a couple of these too, and I strongly believe Intel is effectively subsidizing these to try to get a foothold in the market.
You get me equivalent of a $500 Nvidia card for around $300 or less. And it makes sense because Intel knows if they can get a foothold in this market they're that much more valuable to shareholders.
Great for gaming, no real downsides imo.
They should drop a $600 card with 128gb of vram. This is just barely possible without losses on every sale.
And then just watch heads explode.
At current market pricing on dramexchange, 128GB of 16Gbit GDDR6 chips would cost $499.58. That only leaves $100.42 for the PCB, GPU die, miscellaneous parts, manufacturing, packaging, shipping, the store’s margin, etcetera. I suspect that they could not do that without taking a loss.
I wonder if they could mix clamshell mode and quadrank to connect 64 memory chips to a GPU. If they connected 128GB of VRAM to a GPU, I would expect then to sell it for $2000, not $600.
Yup, just went with the $3 per GB formula.
GPU should be about $200 at TSMC (400-450mm2).
+ about $150 for the pcb, cooler and other stuff, I didn't consider
Times a 1.6 to 1.75 factor if they like actually being profitable (operations, rnd, sales, marketing, ...).
So about $1.5k, I guess.
Multiply that with a .33 "screw the competition" factor and my initial guess is almost spot on.
.
Real problem:
The largest GDDR7 package money can buy right now is 3GB. That's a 1376bit bus right there. GL fitting that to a sub 500mm2 die.
In the future you could put that amount of vram on a 512bit bus, tho.
Also normal DDR is getting really fast atm. 8 channel can already challenge most vram configurations. Maybe it's time soon to switch back to swappable memory.
>+ about $150 for the pcb, cooler and other stuff,
Assuming I had access to gerbers I could order replica of 5090 PCB for $65, including shipping. Intel PCB is half that. Again this is for a dude off the street buying 1-5 copies, not a bulk order.
This makes no sense. Currently 24GB is the sweet spot to stay competitive and 32GB is the maximum amount of memory you could stick on a card and still have a decent memory to bandwidth ratio.
What they should do instead is make the cards thinner and more efficient so that you can easily put two of them in a case.
They are definitely selling them at close to no profit, but they are not anywhere near subsidizing them unless they botched their supply chain so badly that they are overpaying the BOM costs.
R&D isn't free.
Even selling at cost is a subsidy.
I'm proud to support them. Intel is also selling their lunar lake chips fairly cheaply too. Let's all hope they make it through this rough patch. I can't imagine a world where we only have one x86 manufacturer.
> I can't imagine a world where we only have one x86 manufacturer.
Does it even matter? Some people won’t notice even if there are zero x86 manufacturers.
In fact I would say lots of people have not bought x86 CPU in while, between Mac, RPi and risc-v boards…
X86 is still needed for a lot of software. The emulation just isn't there yet.
That would be news to people on mac with Rosetta Stone / Crossover.
A lot of server code and specialized software won’t work.
Competition is always good
R&D is a sunk cost that is largely paid by their iGPUs. Selling at cost is not a subsidy and that is not relevant here since they should be making money off every sale. I tried estimating their costs a few months ago and found that they had room for up to a 10% margin on these, even after giving retailers a 10% margin. If they are not making money from these, it would be their fault for not building enough to leverage economics of scale.
https://slickdeals.net/f/17910114-acer-arc-a770-16gb-gpu-w-f...
> Acer Arc A770 16gb GPU w/Free Game & Shipping $229.99 $229.99 $399.99 at Newegg
Sure that margin is holding when they had to mark the first generation down to get them off the shelves. It would truly surprise me if they've made a significant profit off these cards.
This is so cool! I think this is a video of CyberPunk 2077 with path tracing on versus off: https://www.youtube.com/watch?v=89-RgetbUi0. It sees like a real, next-generation advance in graphics quality that we haven't seem in awhile.
Just a heads up - it looks like the "Path Tracing Off" shots have ray tracing disabled as well. In the shots starting at 1:22 (the car and then the plaza), it looks like they just have the base screenspace reflections enabled. Path tracing makes a difference (sometimes big, sometimes small) for diffuse lighting in the game. The kind of reflection seen in those scenes can be had by enabling "normal" ray tracing in the game, which is playable on more systems.
Ray tracing is more intensive than "path tracing". From my understanding, they are the same with the only difference being that "path tracing" does less calculations by only considering a light source's most probable or impactful paths or grouping the rays or something. Neither scene is using "ray tracing".
Ray tracing refers to the act of tracing rays. You can use it for lighting, but also sound, visibility checks for enemy AI, etc.
Path tracing is a specific technique where you ray trace multiple bounces to compute lighting.
In recent games, "ray tracing" often means just using ray tracing for direct light shadows instead of shadow maps, raytraced ambient occlusion instead of screenspace AO, or raytraced 1-bounce of specular indirect lighting instead of screenspace reflections. "Path traced" often means raytraced direct lighting + 1-bounce of indirect lighting + a radiance cache to approximate multiple bounces. No game does _actual_ path tracing because it's prohibitively expensive.
I believe the "path tracing" you described here is actual path tracing insofar each sample is one "path" rather than one "ray", where a "path" does at least one bounce, which is equivalent to at least two rays per sample. Though I think the "old" path tracing algorithm was indeed very slow, because it sent out samples in random directions, whereas modern path tracing uses the ReSTIR algorithm, which does something called importance sampling, which is a lot faster.
The other significant part is that path tracing is independent of the number of light sources, which isn't the case for some of the classical ray traced effects you mention ("direct shadows" vs path traced direct lighting).
That's at least what I understand of the matter.
Was raytracing a psyop by Nvidia to lock out amd? Games today don't look that much nicer than 10 years ago and demand crazy hardware. Is raytracing a solution looking for a problem?
https://x.com/NikTekOfficial/status/1837628834528522586
I've kind of wondered about this a bit too. The respective visual quality side of it that is. Especially in a context where you're actually playing a game. You're not just sitting there staring at side by side still frames looking for minor differences.
What I have assumed given then trend, but could be completely wrong about, is that the raytracing version of the world might be easier on the software & game dev side to get great visual results without the overhead of meticulous engineering, use, and composition of different lighting systems, shader effects, etc.
For the vast majority of scenes in games, the best balance of performance and quality is precomputed visibility, lighting and reflections in static levels with hand-made model LoDs. The old Quake/Half-Life bsp/vis/rad combo. This is unwieldy for large streaming levels (e.g. open world games) and breaks down completely for highly dynamic scenes. You wouldn't want to build Minecraft in Source Engine[0].
However, that's not what's driving raytracing.
The vast majority of game development is "content pipeline" - i.e. churning out lots of stuff - and engine and graphics tech is built around removing roadblocks to that content pipeline, rather than presenting the graphics card with an efficient set of draw commands. e.g. LoDs demand artists spend extra time building the same model multiple times; precomputed lighting demands the level designer wait longer between iterations. That goes against the content pipeline.
Raytracing is Nvidia promising game and engine developers that they can just forget about lighting and delegate that entirely to the GPU at run time, at the cost of running like garbage on anything that isn't Nvidia. It's entirely impractical[1] to fully raytrace a game at runtime, but that doesn't matter if people are paying $$$ for roided out space heater graphics cards just for slightly nicer lighting.
[0] That one scene in The Stanley Parable notwithstanding
[1] Unless you happen to have a game that takes place entirely in a hall of mirrors
Yep. I worked on the engine of a PS3/360 AAA game long ago. We spent a long of time building a pipeline for precomputed lighting. But, in the end the game was 95% fully dynamically lit.
For the artists, being able to wiggle lights around all over in real time was an immeasurable productivity boost over even just 10s of seconds between baked lighting iterations. They had a selection of options at their fingertips and used dynamic lighting almost all the time.
But, that came with a lot of restrictions and limitations that make the game look dated by today’s standards.
I get the pitch that it is easier for the artists to design scenes with ray-tracing cards. But I don’t really see why we users need to buy them. Couldn’t the games be created on those fancy cards, and then bake the lighting right before going to retail?
(I mean, for games that are mostly static. I can definitely see why some games might want to be raytraced because they want some dynamic stuff, but that isn’t every game).
The player can often have a light, and is usually pretty dynamic.
One of the effects I really like is bounce lighting. Especially with proper color. If I point my flashlight at a red wall, it should bathe the room in red light. Can be especially used for great effect in horror games.
I was playing Tokyo Xtreme Racer with ray tracing, and the car's headlights are light sources too (especially when you flash a rival to start a race). My red car will also bounce lighting on the walls in tunnels to make things red.
It doesn't even have to be super dynamic either, I can't even think of a game that has opening a door to the outside sun to change the lighting in a room with indirect lighting (without ray tracing it). Something I do every day in real life. It would be possible to bake that too, assuming your door only has 2 positions.
When path tracing works, it is much, much, MUCH simpler and vastly saner algorithm than those stacks of 40+ complicated rasterization hacks in current rasterization based renderers that barely manage to capture crude approxinations of the first indirect light bounces. Rasterization as a rendering model for realistic lighting has outlived its usefulness. It overstayed because optimizing ray-triangle intersection tests for path tracing in hardware is a hard problem that took some 15 o 20 years of research to even get to the first generation RTX hardware.
>When path tracing works, it is much, much, MUCH simpler and vastly saner algorithm than those stacks of 40+ complicated rasterization hacks in current rasterization based renderers that barely manage to capture crude approxinations of the first indirect light bounces.
It's ironic that you harp about "hacks" that are used in rasterization, when raytracing is so computationally intensive that you need layers upon layers of performance hacks to get decent performance. The raytraced results needs to be denoised because not enough rays are used. The output of that needs to be supersampled (because you need to render at low resolution to get acceptable performance), and then on top of all of that you need to hallu^W extrapolate frames to hit high frame rates.
Meanwhile raserization is fundamentally incapable of producing the same image.
As a sibling post already mentioned, rasterization based hacks are incapable of getting as accurate lighting as path tracing can, given enough processing time.
I will admit that I was a bit sly in that I omitted the word "realtime" from the path tracing part of my claim on purpose. The amount of denoising that is currently required doesn't excite me either, from a theoretical purity standpoint. My sincere hope is that there is still a feasible path to a much higher ray count (maybe ~100x) and much less denoising.
But that is really the allure of path tracing: a basic implementation is at the same time much simpler and more principled than any rasterization based approximation of global illumination can ever be.
And you still need rasterization for ray traced games (even "fully" path traced games like Cyberpunk 2077) because the ray tracing sample count is too low to result in an acceptable image even after denoising. So the primary visibility rendering is done via rasterization (which has all the fine texture and geometry detail without shading), and the ray traced (and denoised) shading is layerd on top.
You can see the purely ray traced part in this image from the post: https://substack-post-media.s3.amazonaws.com/public/images/8...
This combination of techniques is actually pretty smart: Combine the powers of the rasterization and ray tracing algorithms to achieve the best quality/speed combination.
The rendering implementation in software like Blender can afford to be primitive in comparison: It's not for real-time animation, so they don't make use of rasterization at all and do not even use denoising. That's why rendering a simple scene takes seconds in Blender to converge but only milliseconds in modern games.
Not quite correct.
For primary visibility, you don't need more than 1 sample. All it is is a simple "send ray from camera, stop on first hit, done". No monte carlo needed, no noise.
On recent hardware, for some scenes, I've heard of primary visibility being faster to raytrace than rasterize.
The main reasons why games are currently using raster for primary visibility:
1. They already have a raster pipeline in their engine, have special geometry paths that only work in raster (e.g. Nanite), or want to support GPUs without any raytracing capability and need to ship a raster pipeline anyways, and so might as well just use raster for primary visibility. 2. Acceleration structure building and memory usage is a big, unsolved problem at the moment. Unlike with raster, there aren't existing solutions like LODs, streaming, compression, frustum/occlusion culling, etc to keep memory and computation costs down. Not to mention that updating acceleration structures every time something moves or deforms is a really big cost. So games are using low-resolution "proxy" meshes for raytracing lighting, and using their existing high-resolution meshes for rasterization of primary visibility. You can then apply your low(relative) quality lighting to your high quality visibility and get a good overall image.
Nvidia's recent extensions and blackwell hardware are changing the calculus though. Their partitioned TLAS extension lowers the acceleration structure build cost when moving objects around, their BLAS extension allows for LOD/streaming solutions to keep memory usage down as well as cheaper deformation for things like skinned meshes since you don't have to rebuild the entire BLAS, and blackwell has special compression for BLAS clusters to further reduce memory usage. I expect more games in the ~near future (remember games take 4+ years of development, and they have to account for people on low-end and older hardware) to move to raytracing primary visibility, and ditching raster entirely.
This doesn't hold at all. Path tracing doesn't "just work", it is computational infeasible. It needs acceleration structures, ray traversal scheduling, denoisers, upscalers, and a million other hacks to work any close to real-time.
[dead]
Except that it isn't like that at all. All you get from the driver in terms of ray tracing is the acceleration structure and ray traversal. Then you have denoisers and upscalers provided as third-party software. But games still ship with thousands of materials, and it is up to the developer to manage lights, shaders, etc, and use the hardware and driver primitives intelligently to get the best bang for the buck. Plus, given that primary rays are a waste of time/compute, you're still stuck with G-buffer passes and rasterization anyway. So now you have two problems instead of one.
I don’t think the last two decades of computer graphics research are an Nvidia psyop. Ray tracing is relatively simple and also reflects reality (you’re literally doing a light simulation to approximate how we really see). It’s always been the gold standard for rendering, we just didn’t know how to make it fast enough for real time rendering.
I think there are two ways of looking at it. Firstly, raster has more or less plateaued: there haven't been any great advances in a long time, and it's not like AMD or any other company has offered an alternative path or vision for where they see 3D graphics going. The last thing a company like Nvidia wants is to be a generic commodity that's easy to compete with or simple to compare against. Nvidia was also making use of its strength and long-term investment in ML to drive DLSS.
Secondly, Nvidia is a company that wants to sell things at a high asking price, and once a certain tech gets good enough, that becomes more difficult. If the 20 series had just been an incremental improvement over the 10 series, and so on, then I expect sales would have plateaued, especially if game requirements didn't move much.
I don't believe we have reached a raster ceiling. More and more it seems like groups are in cahoots to push RTX and ray tracing. We are left to speculate why devs are doing this. Nvidiabux? An easier time adding marketing keywords? Who knows... I'm not a game dev.
https://www.youtube.com/watch?v=NxjhtkzuH9M
There's no need to imply deals between Nvidia and game developers in smoke-filled rooms. It's pretty straightforward: raytracing means less work for developers, because they don't have to manually place lights to make things look "right". Plus, they can harp on about how it looks "realistic". It's not any different from the explosion of Electron apps (and similar technologies that build apps using HTML/JS), which might be fast to develop but are bloated and feel non-native. But it's not like there's an Electron corp giving out "electronbux" to push app developers to use Electron.
Raster quality is limited by how much effort engine developers are willing to put into finding computationally cheap approximations of how light/materials behave. But it feels like the easy wins are already taken?
All the biggest innovations in "pure" rasterization renderers in the last 10-15 years have actually been raytracing in some very reduced, limited form.
Screenspace Ambient Occlusion? Marching rays (tracing) against the depth buffer to calculate a terrible but decent looking approximation of light occlusion. Some of the modern SSAO implementations like GTAO need to be denoised by TAA.
Screenspace Reflections? Marching rays against the depth buffer and taking samples from the screen to generate light samples. Often needs denoising too.
Light shafts? Marching rays through the shadow map and approximating back scattering from whether the shadowed light is occluded or not.
That voxel cone tracing thing UE4 never really ended up shipping? Tracing's in the name, you're just tracing cones instead of rays through a super reduced quality version of the scene.
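All of these boil down to roughly the same loop. A minimal, hypothetical C++ sketch of marching a ray against a depth buffer (parameter names and the thickness heuristic are illustrative; real implementations add jitter, mip chains, clip-space stepping, etc.):

    #include <cstddef>
    #include <optional>
    #include <vector>

    struct DepthBuffer {
        std::size_t width, height;
        std::vector<float> depth;  // view-space depth per texel
        float at(std::size_t x, std::size_t y) const { return depth[y * width + x]; }
    };

    struct Hit { std::size_t x, y; };

    std::optional<Hit> march_ray(const DepthBuffer& db,
                                 float x, float y, float z,     // start: screen position + depth
                                 float dx, float dy, float dz,  // per-step increment
                                 int max_steps, float thickness) {
        for (int i = 0; i < max_steps; ++i) {
            x += dx; y += dy; z += dz;
            if (x < 0.0f || y < 0.0f ||
                x >= static_cast<float>(db.width) || y >= static_cast<float>(db.height))
                return std::nullopt;  // ray left the screen: no information available
            float scene_z = db.at(static_cast<std::size_t>(x), static_cast<std::size_t>(y));
            // The ray point is behind the stored surface (within a thickness
            // tolerance), so treat it as an intersection.
            if (z > scene_z && z - scene_z < thickness)
                return Hit{ static_cast<std::size_t>(x), static_cast<std::size_t>(y) };
        }
        return std::nullopt;  // gave up without finding anything
    }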
Material and light behavior is not the problem. Those are constantly being researched too, but the changes are more subtle. The big problem is light transport. Rasterization can't solve that, it's fundamentally the wrong tool for the job. Rasterization is just a cheap approximation for shooting primary rays out of the camera into a scene. You can't bounce light with rasterization.
> We are left to speculate why devs are doing this.
Well, I am a gamedev, and currently lead of a rendering team. The answer is very simple - ray tracing can produce much better results than rasterization with a lower load on the teams that produce content. There's not much else to it, no grand conspiracy - if the hardware had been fast enough to do this 20 years ago, everyone would already be doing it this way, because it just gives better outcomes. No nvidiabux necessary.
I'm a gamedev as well, also in rendering.
> There's not much else to it, no grand conspiracy
True, in that raytracing is the future. Though I don't think it's a conspiracy so much as the plain truth that "RTX" as a product was Nvidia creating a 'new thing' to push AMD out of. Moat building, plain and simple. Unsurprisingly, Nvidia's cards were better at it; much like with mesh shaders, they basically wrote the API standard to match their hardware.
And just to make sure Nvidia doesn't get more credit than it deserves: the debut RTX cards (the RTX 20 series) were a complete joke. A terrible product generation offering no performance gains over the 10 series at the same price, with none of the cards really being fast enough to actually do RT very well. They were still better at RT than AMD, though, so mission accomplished, I guess.
I don't think it's just about looks. The advantage of ray tracing is that lighting is computed in real time rather than coming from static baked maps. One of the features I feel was lost with modern game lighting is dynamic environments. But as long as a game has to support a non-raytraced path as well, these types of interactions will stay disabled. Teardown and The Finals are examples of dynamic-environment games with raytraced lighting.
Another example: when was the last time you saw a game with a mirror that wasn't broken?
Hitman and GTA, both of which use non-raytraced implementations. More to the point, the lack of mirrors doesn't impact the gameplay. It's something that's trotted out as a nice gimmick, 99% of the time it's not there, and you don't really notice that it's missing.
Hitman is an example that contradicts your point about gameplay, guards will see you in mirrors and act appropriately. They'll be doing that detection with a non-graphical method anyway, but you need to show it to the player graphically for them to appreciate the senses available to the guards.
>Hitman is an example that contradicts your point about gameplay, guards will see you in mirrors and act appropriately.
See:
>It's something that's trotted out as a nice gimmick, 99% of the time it's not there, and you don't really notice that it's missing.
Yeah, it's a nice detail for the 1% of time that you're in a bathroom or whatever, but it's not like the immersion takes a hit when it's missing. Moreover because the game is third person, you can't even accurately judge whether you'll be spotted through a mirror or not.
GTA V's implementation doesn't work in its cars. The rear-view and side-view mirrors are noticeably low quality and miss other cars while driving, which is pretty big for gameplay purposes.
Working mirrors are limited to less complex scenes in GTA. Hitman too I believe.
Not if you want better fidelity: the VFX industry for film moved from rasterisation to raytracing/pathtracing starting back around 2012/2013, due to the higher fidelity possible. (Initially on CPU, and a lot of final-frame rendering is still done on CPU even today due to memory requirements, although lookdev is often done on GPU if the shaders / light transport algorithms can be matched between GPU/CPU codepaths.)
It required discarding a lot of "tricks" that had been learnt with rasterisation to speed things up over the years, and made some things slower, but it meant everything could use raytracing to compute visibility/occlusion rather than having shadow maps, irradiance caches, and pointcloud SSS caches. That simplified workflows greatly and allowed high-fidelity light transport simulations of things like volume scattering in difficult media such as water/glass and hair (i.e. TRRT lobes), where it's very difficult to get the medium transitions and light transport correct with rasterisation.
lol, go play Cyberpunk 2077 with pathtracing and compare it to raster before you call it a gimmick.
I own an RTX 4090, and yes, Cyberpunk looks amazing with raytracing - but is it worth the $2000 and Nvidia's monopoly over the tech? A big, resounding no (for me).
If you think of either crypto or gaming and not accelerated compute for advanced modeling and simulation when you hear Nvidia, you won't have sufficient perspective to answer this question.
What does RTX do, what does it replace, and what does it enable, for whom? Repeat for Physx, etc. Give yourself a bonus point if you've ever heard of Nvidia Omniverse before right now.
It's a transition that's happening.
Research and progress are necessary, and ray tracing is a clear advancement.
AMD could easily skip it if they wanted to reduce costs, and we could simply not buy the GPUs. None of that is happening.
It does look better, and things would be a lot simpler if we only did ray tracing.
If the suggested usage means upscaling, it's a dubious trade-off. That's why I'm not using it in Cyberpunk 2077, at least with RDNA 3 on Linux, since I don't want to use upscaling.
Not sure how much RDNA 4 and on will improve it.
Here is a good video by Digital Foundry looking at Metro Exodus Enhanced Edition with devtools, where they show what raytracing is and how it differs from regular lighting.
https://youtu.be/NbpZCSf4_Yk
Simplified tl;dr: with raytracing you build the environment, designate which parts (like the sun or lamps) emit light, and you're done. With regular lighting, an artist has to spend hours to days adding many fake light sources to get the same result.
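A toy way to picture that difference in scene setup (all types made up, nothing engine-specific):

    #include <string>
    #include <vector>

    struct Material   { std::string name; float emissive_strength = 0.0f; };
    struct PointLight { float x, y, z, intensity; };

    // Raytraced GI: mark the sun/lamp materials as emissive and let the
    // simulation produce the bounce lighting.
    struct RayTracedScene {
        std::vector<Material> materials;
    };

    // Baked/raster lighting: the same materials, plus dozens of hand-placed
    // "fill" lights an artist tunes to fake the bounces.
    struct RasterScene {
        std::vector<Material>   materials;
        std::vector<PointLight> fake_fill_lights;
    };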
Arc will be successful because it will be in all mobile chips. The discrete GPU market is smaller by a big factor, and they're targeting the biggest part of that market with low-cost parts.
Intel Arc could be Intel's comeback if they play it right. AMD's got the hardware to disrupt nVidia but their software sucks and they have a bad reputation for that. Apple's high-end M chips are good but also expensive like nVidia (and sold only with a high-end Mac) and don't quite have the RAM bandwidth.
Intel is close. Good history with software.
If they started shipping GPUs with more RAM, I think they'd be in a strong position. The traditional disruption is to eat the low-end and move up.
Silly as it may sound, but a Battlemage where one can just plug in DIMMs, with some high total limit for RAM, would be the ultimate for developers who just want to test / debug LLMs locally.
Reminds me of this old satire video: https://www.youtube.com/watch?v=s13iFPSyKdQ
Intel is run by fools. I don't see them coming back. They just don't have the willingness to compete and offer products with USPs. Intel today is just MBAs and the cheapest outsourced labor the MBAs can find.