ATI Radeon Series 5800

~DeatHMooN~ posted on 28-9-2009 09:13:45

Radeon HD Series 5000 features

So we have just established product positioning and have seen what cards to expect. Today is all about the Radeon HD 5870. Let's have a peek at some of the key features for this product:
  • 1GB GDDR5 memory
  • ATI Eyefinity technology with support for up to six displays
  • ATI Stream technology
  • Designed for DirectCompute 5.0 and OpenCL
  • Accelerated Video Transcoding (AVT)
  • Compliant with DirectX 11 and earlier revisions
  • Supports OpenGL 3.1
  • ATI CrossFireX™ multi-GPU support for highly scalable performance
  • ATI Avivo™ HD video and display technology
  • Dynamic power management with ATI PowerPlay technology
  • Two DL-DVI, DisplayPort and HDMI outputs
  • PCI Express® 2.0 support
We'll address the majority of features in this article. But first let's focus on the sheer technical specifications, transistor count for example. The number of transistors always works as an indicator of how powerful a product will be. For example, the Radeon HD 4870, which we all know and love for its performance, had 956 million transistors embedded onto its die. The new Radeon HD 5800 GPUs have 2.15 billion transistors. Correct, that is 2150 million transistors tucked away in a small chip. The fabrication node is 40nm for this product, resulting in a die size of 334 mm², which for AMD is a large monolithic chip, yet thanks to the 40nm fabrication process only about a third bigger than the previous 4890 GPU.
Now you'd think that with so many transistors high clock frequencies would be an issue. Incorrect, the high-end Radeon HD 5870 will be clocked at a cool 850 MHz. In fact we even took it over 900 MHz without any issues.
The lower-specced Radeon HD 5850 will be clocked at 725 MHz on the core and shader domain.
Shader processors then: we went from 800 shader processors on the Radeon HD 4850/4870/4890 to 1600 shader processors (also called stream processors) on the Radeon HD 5870. That's doubled up. The ROPs went up from 16 to 32 as well, and sure... texture units doubled from 40 to 80 too.
The Radeon HD 5850 will have some units cut away though: 1440 shader cores, yet still 32 ROPs and 72 texture units. This product will for example be a good heap faster than, say, a GeForce GTX 285.
But before you get blinded by all the specs in a few lines of text, let's break down the two cards announced today in comparison to last year's Radeon HD 4870.
                          Radeon HD 4870     Radeon HD 5850     Radeon HD 5870
Process                   55nm               40nm               40nm
Transistors               956M               2.15B              2.15B
Die Size                  263 mm²            334 mm²            334 mm²
Core Clock                750 MHz            725 MHz            850 MHz
Shader Processors         800                1440               1600
Compute Performance       1.2 TFLOPs         2.09 TFLOPs        2.72 TFLOPs
Texture Units             40                 72                 80
Texture Fillrate          30.0 GTexels/s     52.2 GTexels/s     68.0 GTexels/s
ROPs                      16                 32                 32
Pixel Fillrate            12.0 GPixels/s     23.2 GPixels/s     27.2 GPixels/s
Z/Stencil                 48.0 GSamples/s    92.8 GSamples/s    108.8 GSamples/s
Memory Type               GDDR5              GDDR5              GDDR5
Memory Clock              900 MHz            1000 MHz           1200 MHz
Memory Data Rate          3.6 Gbps           4.0 Gbps           4.8 Gbps
Memory Bandwidth          115.2 GB/s         128.0 GB/s         153.6 GB/s
Maximum Board Power (TDP) 160W               170W               188W
Idle Board Power          90W                27W                27W
These numbers are downright staggering. We have not discussed it just yet, but memory: ATI will stick to GDDR5 for both products. On the 5870 they'll up it a notch alright, as the clock frequency has been upped a notch as well, alongside some design changes in the memory controllers.
Expect the Radeon HD 5870 to outperform any current single-GPU based graphics card like the Radeon HD 4890 and GeForce GTX 285. And all that with a single chip utilizing less than 190 Watts.


Tech Lingo like Shaders Explained

With each new article it's good to look back and explain terms that have become common in our vocabulary, yet a lot of you might not know what they mean. On this page I'd like to explain the basics inside a graphics card and as such explain shaders and shader processors, just so you know what we are talking about. If you know all this already, please head on over to the next page. But to understand what we are writing, you need to understand what's going on inside a GPU.
To understand what is going on inside that graphics card of yours, allow me to explain what is actually happening inside the graphics processor and explain terminology like shaders in a very easy to understand manner (I hope), and how it relates to rendering all that gaming goodness on your screen.
What does it take to get a game rendered? That is, what do we need to render a three-dimensional object in 2D on your monitor? We start off by building some sort of structure that has a surface, and that surface is built from triangles. Triangles are great as they are really quick and easy to compute. Now we need to process each triangle. Each triangle has to be transformed according to its relative position and orientation to the viewer.
The next step is to light the triangle by taking the transformed vertices and applying a lighting calculation for every light defined in the scene. At last the triangle needs to be projected to the screen in order to rasterize it. During rasterization the triangle will be shaded and textured.
Graphics processors like the Radeon and GeForce series are able to perform a large sum of these tasks. Actually, the first generation (say ten years ago) was able to draw shaded and textured triangles in hardware, which was a revolution.

The CPU still had the burden of feeding the graphics processor with transformed and lit vertices, triangle gradients for shading and texturing, etc. Integrating the triangle setup into the chip logic was the next step, and finally even transformation and lighting (TnL) was possible in hardware, reducing the CPU load considerably (surely everyone remembers the GeForce 256, right?).

The big disadvantage at that time was that a game programmer had no direct (i.e. program driven) control over transformation, lighting and pixel rendering, because all the calculation models were fixed on the chip. This is the point in time where shader design surfaced, allowing the programmer to get the best graphics out of the graphics card.

We now finally get to the stage where we can explain shaders and shader processors.

In the year 2000 DirectX 8 was released; vertex and pixel shaders arrived on the scene and allowed software and game developers to program tailored transformation and lighting calculations as well as pixel coloring functionality, which gave a new graphics dimension to the gaming experience, and games slowly started to look much more realistic.

Each shader is basically nothing more than a relatively small program (programming code) executed on the graphics processor to control either vertex, pixel or geometry processing. So a shader processor is in fact a small floating point processor inside your GPU.

When we advance to the year 2002, we see the release of DirectX 9, which most games still use. DX9 had the advantage of using much longer shader programs than before, with pixel and vertex shader version 2.0. However, there still was a limitation: graphics processors had dedicated units for diverse types of operations in the rendering pipeline, such as vertex processing and pixel shading.

With the introduction of DirectX 10 it was time to move away from that somewhat inefficient fixed pipeline and create a new unified shader architecture: shader processors that can handle a variety of shaders.
  • A shader processor: each time we mention a shader processor, this is one of the many little processors inside your GPU. We also call these units stream processors.
  • Once I mention a shader... that's the program executed on the shader engine (the accumulated shader processor domain).
As stated, GPU manufacturers like to call the shader processors stream processors. Same idea, slightly different context. GPUs are stream processors: processors that can operate in parallel on many independent vertices and fragments at once. A stream is simply a set of records that require similar computation.
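To make that concrete, here's a purely illustrative sketch of ours (plain Python on the CPU, no GPU involved): the "shader" is a small function, and the hardware applies it independently to every record in the stream. The names here, like pixel_shader, are our own toy examples:

    # A "shader" is a small program; a "stream" is a set of records
    # that all get the same computation applied to them.
    def pixel_shader(pixel):
        r, g, b = pixel
        # the same few instructions run for every pixel, independently
        return (int(r * 0.9), int(g * 0.9), min(255, int(b * 1.1)))

    stream = [(200, 180, 160), (10, 20, 30), (255, 255, 255)]  # tiny "framebuffer"
    print(list(map(pixel_shader, stream)))

A GPU does exactly this, except it runs thousands of such invocations side by side instead of one after another.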
With the birth of DX11 we now have several types of shaders: pixel, vertex, geometry and, new, a compute shader (DirectCompute) allowing GPGPU functionality directly on the GPU; and for tessellation a domain and hull shader instruction set has been added.
I do hope you now understand the concept of the GPU, what it is doing, and the definition of shaders and shader (aka stream) processors.


Architecture Deep Dive

For the real gurus, allow me to sidetrack a minute and go a little deeper into the architecture of the GPU, the design block. For those wondering, NVIDIA is moving to a MIMD layout as they feel it will work out better for them with stream computing (GPGPU); ATI however is using a SIMD layout. There are 20 SIMD clusters, each with 16 thread processors. Each thread processor has 5 stream cores (and that makes 1600 shader cores). Since there are 80 texture units, the card has four available per SIMD cluster.
Each shader (thread) processor has:
  • Four stream cores plus one special function stream core
  • Branch unit
  • General purpose registers
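A quick sanity check of that layout arithmetic, in Python:

    # Cypress shader layout as described above.
    simd_clusters = 20
    thread_processors_per_cluster = 16
    stream_cores_per_tp = 4 + 1   # four regular + one special function core
    texture_units = 80

    print(simd_clusters * thread_processors_per_cluster * stream_cores_per_tp)  # 1600
    print(texture_units // simd_clusters)  # 4 texture units per SIMD cluster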
When we move onwards to texture units and caches we notice some improvements as well. There's increased texture bandwidth (up to 68 billion bilinear filtered texels/sec) with up to 272 billion 32-bit fetches/sec. But let's have a look at the GPU design with the help of a block diagram:
So this is how the GPU is arranged. Cache wise the GPU of course has embedded L1 and L2 caches. The L1 cache can now handle texture fetches at up to 1 TB/sec, with 435 GB/sec between L1 and L2. Each memory controller has 128 kB of L2 cache.
Hardware wise the architecture had to change somewhat for new DX11 features. 90% of the design remained the same, but obviously with DX11 also comes hardware tessellation. As such, in the ASIC next to the dual rasterizers we now spot a new DX11 class hardware tessellation unit (we'll explain what tessellation is later). The tessellation unit will be programmable through DX11 hull and domain shaders and is a feature I'm really excited about.
On the topic of image quality, texture filtering has been improved as well, delivering even better quality anisotropic filtering. Anisotropic filtering no longer has angle dependency. You'll now have near perfect anisotropic filtering; look at the image to the right. I understand that the screenshot does not make a lot of sense to a lot of you, but really... this is what we want to see, a near perfect filter.
Memory wise the card will use 256-bit GDDR5 memory again. It's clocked faster though, adding additional bandwidth, and memory bandwidth is ever so important.
Thanks to some changes in the memory controllers, reliability has gone up, as well as energy efficiency. Especially with DirectCompute, where crunching data is so important and needs to be reliable, we now see a new feature: error detection code is now embedded in the GDDR5 memory controller.
Some of the new memory controller improvements:
  • EDC - error detection code (CRC checks on data transfers)
  • GDDR5 Memory Clock Temperature compensation
  • Fast GDDR5 Link retraining (allows voltage and clock switching without any problems or hassle)
But yes, EDC is the most important one here as it improves reliability at very high clock frequencies.

block diagram of the memory controller setup
Voltage regulation

Good news for the die-hard overclockers: this card will be softmod compatible, as it has programmable voltage regulators which allow you to increase or decrease voltages on the GPU. The VRM implementation is impressive; obviously it allows ATI to lower voltages when needed, saving on power consumption. Overclockers think the other way around ;)
Four digital vGPU phases controlled with the Volterra VT1157SF, one digital uncore (GDDR5 IMC) phase with Volterra VT1157SF, 1+1 "digital" GDDR5 vDD+vDDQ phases with Volterra VT242WF, two Volterra VT1165MF controllers (vGPU & uncore).
The PCB surely shows a very impressive VRM design.

~DeatHMooN~ (OP) posted on 28-9-2009 09:15:20
Last edited by ~DeatHMooN~ on 28-9-2009 09:18

DirectX 11

Aah, finally, a new DirectX. It's funny how most game developers skipped DX10 really. Face it, if there are not enough changes over DX9, then why should software houses invest in a new code path and thus spend extra money on development? This literally was a problem with DX generation 10. Next to that, add the stupendous limitation from Microsoft to limit DX10 to Windows Vista only. Probably the most horrendous call Microsoft ever made for an operating system.
Good news though, DirectX 11 is an extensive step upwards for both developers and gamers. Developers can speed up their games and improve them with more complex shaders and a few new tricks like tessellation. Gamers on their side get faster running games with some really cool new eye candy. This is the new shader palette for developers to use: Vertex, Hull, Domain, Geometry, Pixel and Compute Shaders. With the compute shader comes DirectCompute as well, allowing Windows Vista or Windows 7 to utilize the GPU directly from within Windows. It's a first step, but quite a number of applications that would benefit from GPU computing can now make use of it. This really is a revolutionary step in development, as parallel processing can be really helpful in specific situations and software.
Here are the most prominent new features of DirectX 11 (and I'm keeping this as simple as possible) that will affect you directly:
  • Shader model 5.0
  • Multi-threading
  • DirectCompute 11 - Physics and AI
  • Hardware Tessellation
  • Better shadows
  • HDR Texture compression
Let's highlight and discuss the five most important features that will affect you the most.
Multi-threaded rendering

Much like modern day applications and processors, it is now possible to fire off code and datasets towards the GPU multi-threaded; we call this multi-threaded rendering. Your gain here is efficiency. If an instruction or shader has to be queued up (single threaded), that creates latency, a delay. The GPU as such can now handle all the data completely threaded, and that means better overall performance.

Think of a hundred cars that have to move over a single lane road from point A to B.
Now imagine a hundred lane road where all hundred cars have a lane available.

Which approach do you think would get all cars to point B the quickest? Exactly. I'll probably receive a few emails from programmers and developers for this oversimplified explanation though.
Fact is, and this is what you need to remember about multi-threaded rendering: DirectX will take better advantage of all the available processing cores.
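As a conceptual sketch only (this is not the actual Direct3D 11 API, and the helper names are hypothetical), the model looks roughly like this: worker threads each record their own command list in parallel, and a single thread then submits them in order, which is broadly what DX11's deferred contexts enable:

    # Conceptual analogy of multi-threaded rendering, not real D3D11 code.
    from concurrent.futures import ThreadPoolExecutor

    def record_command_list(chunk_id):
        # in a real engine this would record draw calls for one scene chunk
        return [f"set_state({chunk_id})", f"draw({chunk_id})"]

    # workers record command lists in parallel...
    with ThreadPoolExecutor(max_workers=4) as pool:
        command_lists = list(pool.map(record_command_list, range(8)))

    # ...then one thread submits them in order (the "immediate context")
    for cl in command_lists:
        for cmd in cl:
            pass  # hand the command to the GPU
    print(f"submitted {len(command_lists)} command lists")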
DirectCompute 11

Another new feature in DirectX 11 I find very exciting is DirectCompute. It allows Windows 7 and Vista developers to make use of the parallel processing power of modern video cards; software developers will have access to the GPU and can use it to help out the system processor with tasks that involve, say, high-quality video playback or high performance transcoding.

In its most simple explanatory form, DirectCompute allows access to the GPU for stream computing (acceleration, post processing, whatever). As such DirectCompute allows easier access to the GPU's many cores for parallel processing, thus utilizing the GPU for things other than gaming. Examples here are stream computing and transcoding videos over the GPU (which is something we'll be testing later on in this review).
What about games, you ask? Well, you could implement and use DirectCompute 11 for image processing and filtering (post processing), order independent transparency (a really cool feature where you can see through an object as if it were made of glass), shadow rendering, physics, artificial intelligence and sure... ray tracing as well (though very limited).
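To give a feel for the programming model, here's an illustrative Python sketch of ours showing the shape of a compute-style post-processing pass: one small kernel body executed once per pixel. A real DirectCompute shader would run these kernel invocations in parallel on the GPU; the kernel function below is a toy brightness boost, nothing more:

    # Illustrative only: the shape of a compute-shader post-process pass.
    WIDTH, HEIGHT = 4, 3
    image = [[(x * 40) % 256 for x in range(WIDTH)] for _ in range(HEIGHT)]

    def kernel(x, y, src):
        # per-"thread" body: simple brightness boost, clamped to 8 bits
        return min(255, src[y][x] + 32)

    output = [[kernel(x, y, image) for x in range(WIDTH)] for y in range(HEIGHT)]
    print(output)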
I just touched on order independent transparency (OIT) and quickly wanted to show you that feature through a little video.
Now ATI will very likely release this footage at high quality somewhere this week, but I made a recording of an OIT technology demo. The quality is poor, as it is HD camera footage recorded from a regular monitor. But you'll get the idea: in this demo we use a "Mech" and apply the OIT technology (proper rendering of sorted transparent geometry). Look closely at how you can see through objects like that Mech, as if it were a 3D X-ray. You can actually use this feature on smoke, fire, hair, foliage, fences, grates and so on. In this particular demo DirectCompute is utilized to enable single pass transparent pixel sorting.
Some stats: the environment is built out of 343 thousand triangles, the Mech is built out of 262 thousand triangles.
Where will DirectX Compute Shaders be used first? Well, it seems like the optimization and enhancement of post processing routines may well be an area that sees an immediate benefit. Compute Shaders are also an area of functionality where DirectX 10 and DirectX 10.1 graphics processors will gain benefits under the DirectX 11 runtime.
The DirectX 11 API doesn't just have specifications for Compute Shader 5.0, but also 4.1 and 4.0, and as such ATI Radeon HD 4000 series graphics cards will actually fall into the Compute Shader 4.1 profile, bringing more functionality to developers; these are DirectX 10.1 class products.
NVIDIA will benefit from DirectCompute model 4.0 only, as their GPUs are DX class 10.0.
DirectX 11 Feature         DX10 GPU    DX10.1 GPU    DX11 GPU
Tessellation               No          No            Yes
Shader Model 5.0           No          No            Yes
DirectCompute 11           No          No            Yes
DirectCompute 10.1         No          Yes           Yes
DirectCompute 10           Yes         Yes           Yes
Multi-threading            Yes         Yes           Yes
HDR Texture Compression    No          No            Yes



Shader model 5.0

DirectX 11 also introduces Shader Model 5 for the High Level Shader Language (HLSL), providing a better way for graphics programmers to implement shader programs. It adds double-precision support, and allows programmers to tackle shader specialization with polymorphism, objects, and interfaces.

We could go horribly in-depth on Shader Model 5.0, but it would be too far-fetched. Better, more and longer shaders is what you need to remember.


Hardware Tessellation

One feature that I am really excited about personally is that we'll finally have a hardware tessellation unit inside the GPU that DirectX can utilize. But what is hardware tessellation, you might ask? We are going to spend an entire page on this new feature that DX11 class graphics cards from both NVIDIA and ATI will have embedded.
Well... allow me, did you grab a cup of coffee already?
What is tessellation? Simply put, it's adding more detail to 3D objects, real-time. And with the arrival of DX11 class graphics cards, ATI and NVIDIA now include a hardware tessellation unit inside the GPU, a programmable tessellation unit.
Tessellation simply means increasing your polygon count to get more detail. Look at the image below.
Tessellation is the process of subdividing a surface into smaller shapes. To describe object surface patterns, tessellation breaks down the surface of an object into manageable polygons. Triangles and quadrilaterals are two commonly used polygons in drawing graphical objects, because computer hardware can easily manipulate and calculate these two simple shapes. An object is divided into quads and subdivided into triangles for convenient calculation.
Now in the first frame you can see a face. There's a small number of polygons in there. It's anno 2009, and we demand more detailed objects in our 3D scenes. So by recursively applying a subdivision rule we can increase the number of polygons. Now look at the second and third faces; there's so much more detail. This process can now be done 100% at GPU level in hardware without a significant impact on performance.
For DirectX 11 the tessellation portion of the pipeline has been wrapped with two new shader types, the Hull Shader and the Domain Shader.
Now some of you might have noticed it already from previous reviews: tessellation isn't new. ATI already had a hardware tessellation unit in their GPUs for years, but the older units could not be addressed whatsoever in DirectX. The tessellation units featured in the ATI Radeon HD 2000, HD 3000 and HD 4000 series are all very much based on the same functionality found in the Xbox 360 'Xenos' graphics chip.
Some more examples: another good use of tessellation would be terrain building. This technique is especially useful for creating complex-looking terrain using a combination of very simple base geometry with a height map and a texture map. And perhaps more interesting is that this generated terrain can be deformed dynamically by manipulating the height map.
A scene could have much polygonal complexity closer to the viewer or camera, and fewer polygons as distance from the camera increases.
Anyway, though technical and somewhat difficult to explain, try and remember this: tessellation will allow much higher quality rendering and animation at very low GPU compute cost. The generic rule is that more tessellation means more work for the GPU, yet since there's now dedicated core logic for it on the GPU, it's fast and can boost your detail massively, giving an impression of sharpness and much finer quality.
As stated, the new DX11 tessellation unit is programmable through two new shaders, the Domain and Hull shader. And remember, the higher the level of tessellation, the closer the sharpness of the surface approaches realism.
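A small sketch of why tessellation is so effective: with ordinary midpoint subdivision, each level splits every triangle into four, so detail grows geometrically from a cheap base mesh. The base count below is a hypothetical example of ours:

    # Each midpoint-subdivision step splits every triangle into four,
    # so polygon count grows 4x per tessellation level.
    base_triangles = 500  # hypothetical low-poly base mesh

    for level in range(5):
        print(f"level {level}: {base_triangles * 4 ** level} triangles")

Five levels in, that 500-triangle mesh has become 128,000 triangles, which is exactly the kind of amplification you would never want to store or send over the bus, but can now generate on the chip.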


DX11 - HDR texture compression
We doubt you care much about this info, but this is something developers like and requested. With DX11 come new texture compression methods BC6 and BC7. Microsoft boasts that these two compression formats are the best they can offer for the ratio of high quality over performance.

Block compression 6 (BC6) compresses high dynamic range (HDR) data at a ratio of 6:1, given hardware support for decompression. BC7 offers 3:1 compression ratios for 8-bit low dynamic range (LDR) data.
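Those quoted ratios follow directly from the block layout, assuming the standard 128-bit, 4x4-pixel block both formats use (a quick check of ours, not Microsoft's math):

    # Both BC6 and BC7 store a 4x4 pixel block in 128 bits = 8 bits/pixel.
    bits_per_pixel_compressed = 128 / 16

    hdr_source_bpp = 3 * 16   # FP16 RGB = 48 bpp
    ldr_source_bpp = 3 * 8    # 8-bit RGB = 24 bpp
    print(hdr_source_bpp / bits_per_pixel_compressed)  # 6.0 -> 6:1 for HDR
    print(ldr_source_bpp / bits_per_pixel_compressed)  # 3.0 -> 3:1 for LDR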
Anyway I'd like to end this little chapter on DX11 now. Some soon to be released games that are DX11 compatible will be:
  • Aliens vs. Predator (February 2010)
  • BattleForge (DX11 patch expected in October)
  • DiRT 2 (December 2009)
  • S.T.A.L.K.E.R: Call of Pripyat (October 2009)
  • Lord of the Rings Online (Q1 2010)
And sure, this is just a handful, but in the upcoming year expect a lot of titles, as DX11 is the way to go for developers. Okay, enough about DirectX 11. One last thing: DirectX 11 will become available on both Windows Vista and Windows 7.

ATI Eyefinity

Okay, the next new hot feature for ATI Radeon graphics cards was already announced: ATI's Eyefinity. ATI introduces Eyefinity technology on their Radeon HD 5000 series graphics cards. This literally boils down to multi-monitor desktop and gaming nirvana! You will have no problem connecting, say, three 30" monitors at 2560x1600. The graphics card can take that resolution, and in fact combine the screen resolutions and game at it.
We can explain this really simply though; you guys remember our Matrox TripleHead2Go reviews, right? Well, ATI's Series 5000 graphics cards will be able to drive one to six monitors per graphics card. We've seen and tested this live in action, and it works really nicely. You can combine monitors and get your groove on at up to 7680x3200 pixels spread over several monitors -- multiple monitors used as a single display. I think the limit is even 8000x8000 pixels, but don't hold me to that.
So some examples of what you can do here (a quick pixel count tally follows the list):
  • Single monitor setup at 2560x1600
  • Dual monitor setup at 2560x1600 per monitor
  • Three monitors setup at 2560x1600 per monitor
  • Six monitors setup at 1920x1080 per monitor
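As referenced above, here's our own quick tally of how many pixels those example layouts add up to:

    # Total pixel counts for the example Eyefinity layouts.
    setups = {
        "1x 2560x1600": (1, 2560, 1600),
        "2x 2560x1600": (2, 2560, 1600),
        "3x 2560x1600 (side by side)": (3, 2560, 1600),
        "6x 1920x1080 (3x2 grid)": (6, 1920, 1080),
    }
    for name, (n, w, h) in setups.items():
        print(f"{name}: {n * w * h / 1e6:.1f} megapixels")

The six-panel grid lands at roughly 12.4 megapixels, several times a single 30" panel, which gives you an idea of the fillrate the card is expected to feed.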
Eyefinity is looking really nice, and sure, we also understand that 99% of you will never use more than two monitors. That other 1% definitely matches the Guru3D audience. Personally I like to game on three screens. It's really immersive.
Mind you that for six monitor support a special edition (Eyefinity6) card will be launched with six DisplayPorts. Your average Radeon HD 5870 will have three or four monitor outputs. In fact the reference 5870 has two DVI, one HDMI and one DisplayPort connector, all on one card. If you are bold enough to go for a multi-monitor setup, it really is ideal to get three screens for flight sims, racing games, role playing games, real-time strategy, first-person shooters and sure, even multimedia apps.
At ATI's press events they hooked up the Radeon HD 5870 to half a dozen DisplayPort monitors running at their full resolution, merging all six into a solitary image for a phenomenal live display. Eyefinity is modular and thus allows users to rearrange the number of discrete images created, in addition to their shape, according to your liking. Guru3D users and gamers will no doubt find this setup to their liking. It will be interesting to learn just what kind of living room you have if you were to employ such a configuration. Please post your setups in our forums.
Also a note -- we'll be publishing a dedicated article on Eyefinity in the future, but we expect this to be a great feature for all kinds of simulations; the flight-sim community must be going wild for sure, alright!


Power Consumption

One of the biggest accomplishments of the series 5000 graphics cards is the enhancement of the power design; the implementation of voltage and clock regulation is even more dynamic -- power management at a new level.
So we'll look purely at the Radeon HD 5870 now. In IDLE the GPU will clock down and lower its voltages on both GPU and memory. Have a look:
                        Radeon HD 4870    Radeon HD 5850    Radeon HD 5870
Max. Board Power (TDP)  160W              170W              188W
Idle Board Power        90W               27W               27W
The card achieves that low 27W IDLE power consumption by clocking down through several power states: a low engine (core) clock frequency with lowered voltages and lower GDDR5 memory power. It's amazing, as your generic high-end graphics card would normally consume 50~60 Watts when it idles in Windows.
Things get even better though: the performance of the graphics card compared to last generation products has nearly doubled, yet the 5870 has a TDP (peak wattage) of only 188 Watts. We think that is just awesome.
Though we haven't tested it yet, ATI also incorporated a new technology feature called ULPS -- Ultra Low Power State -- for multi-GPU configurations. We need to look into this, but typically with multiple GPUs installed you'd have high IDLE power consumption; this seems to have been improved. More on that in another article though.
We will test power consumption later on in this article.



Universal Video Decoder 2.0

Always worth a mention is UVD, short for Universal Video Decoder. With proper 3rd party software like WinDVD or PowerDVD you can enable support for UVD 2.0, which provides hardware acceleration of the H.264 and VC-1 high definition video formats used by Blu-ray and HD DVD. The video processor allows the GPU to apply hardware acceleration and video processing functions while keeping power consumption & CPU utilization low.
You will have sheer decoding precision on the Radeon 5000 series: low CPU utilization whilst scoring maximum image quality. One improvement has been made as well; you can now, for example, upscale your 1920x1080 streams fine to a 2560x1600 sized monitor (no more black borders).
New in the GPU architecture of the series 5000 is an updated video engine. It's really not massively different compared to the old UVD engine, yet it has two new additions for post-processing, decoding and enhancing video streams. Dual stream decoding is one of the new features. For example, if you play back a Blu-ray movie and simultaneously want to see a director's commentary (guided by video), you can now watch both the movie and, in a smaller screen, the additional content (like picture-in-picture). Obviously this is Blu-ray 2.0 compatibility, and the additional content is an actual feature of the movie. But definitely fun to see.
New in Enhanced UVD 2.0
  • Hardware accelerated decoding of two 1080p HD streams
  • Compatible with Windows Aero mode - playback of HD videos while Aero remains enabled
  • Video gamma - independent gamma control from Windows desktop.
  • Brighter whites - Blue Stretch processing increases the blue value of white colors for bright videos
  • Dynamic Video Range - Controls levels of black and white during playback
A recently added feature is Dynamic Contrast Enhancement. It does pretty much what the name says; Dynamic Contrast Enhancement technology will improve the contrast ratios in videos in real-time, on the fly. It's a bit of a tricky thing to do, as there are certain situations where you do not want your contrast increased.
Another feature is Dynamic Color Enhancement. It's pretty much a color tone enhancement feature and will slightly enforce a color correction where it's needed. We'll show you that in a bit, as I quite like this feature; it makes certain aspects of a movie a little more vivid.
Directly tied to the UVD engine is obviously also sound. AMD's Radeon series 3000, 4000 and 5000 cards can pass lossless sound directly through the HDMI connector. This has been upgraded, as it's now possible to have 7.1 channel lossless sound at 192kHz / 24-bit. The HDMI audio output follows HDMI standard 1.3a and now also supports Dolby TrueHD and DTS-HD audio. Obviously there is also support for standard PCM, AC-3 and DTS.
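A quick back-of-napkin calculation of ours shows what that lossless ceiling amounts to in raw PCM bandwidth:

    # Raw bandwidth of 7.1 channel lossless PCM at 192 kHz / 24-bit.
    channels, rate_hz, bits = 8, 192_000, 24
    print(channels * rate_hz * bits / 1e6, "Mbit/s")  # ~36.9 Mbit/s

Roughly 37 Mbit/s of audio, which HDMI 1.3a carries alongside the video signal with room to spare.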
To be able to play back high-def content you'll still need software like WinDVD or PowerDVD, an HD source (Blu-ray player) and an HDCP compliant monitor or television.
For those interested in MKV / x264 GPU based content acceleration, playback and image quality enhancements, please read the guide we have written. We spotted a lovely little free application to manage this.

ATI Stream
In the current day and age there is more to graphics cards than just playing games. More and more non-gaming related features can be and are being offloaded to the GPU. Roughly a year ago ATI introduced ATI Stream. This is a software layer that allows software developers to 'speak' with the GPU and have it process data using your graphics card. This really is the most simple & basic description I can give it. I have no idea where ATI Stream will be heading now that DirectCompute is available.
In this article we'll show you a test where we utilize ATI Stream and NVIDIA CUDA to transcode videos over the GPU.
Now I'd like to point you towards one thing you should all do with your GPU when it's sitting idle.
Folding@home using the ATI Radeon series 5000 GPU
Folding@home is a project where you can have your GPU or CPU (when the PC is not being used) help out solving diseases by folding proteins. Over the past 12 months a lot of progress has been made between the two parties involved, and right now there is a GPU folding client available that works with Radeon 5000 series graphics processors. It is ATI Stream based, meaning that all Stream ready GPUs can start folding.

The Guru3D team is ranked in the Folding@Home top 90. Yes, I'm very proud of our guys crunching these numbers, especially since there are tens of thousands of other teams. The client is out; if possible please join team Guru3D and let's fold away some nasty stuff. The good thing is, you won't even notice that it's running.
Our Folding@home info can be found here:
  • Team Guru3D Homepage
  • Team Guru3D support forums
Our Guru3D team number is 69411, and if you decide to purchase a series 5000 product, guys, promise me you'll use it to fold for us. By making this move, my dear friends, there are now 70 million GPUs available to compute the biggest mysteries in diseases and illnesses. Again, let's make Team Guru3D the biggest one available, guys; join our team.

Radeon HD 5870 GPU wafer


~DeatHMooN~ (OP) posted on 28-9-2009 09:19:52
Radeon HD 5870 Product Gallery
Okay, this is the part where we move away from all the technical stuff and move on to the product photo-shoot, followed by initial tests on heat, power consumption and noise levels. First up, the photo-shoot.
So as you can see, here we have the Radeon HD 5870. Quite a charming fellah to look at really. It's encased completely and has that ridged cooling solution. The card is quite lengthy, measuring 28 cm / 11 in.
When we flip the card around we see a nice backplate installed. I'm a big fan of this; there is much less risk of damage this way. The downside... it could trap heat. However, heat, as we'll show you, is not an issue with this product.
Connectivity. I think it is safe to say that we can leave the explanation of single and dual-link DVI behind us. It is incredible to see how much connectivity this card can deal with. With the standard reference Radeon HD 5870 we already have four digital connectors at our disposal: two DVI, an HDMI and a DisplayPort connector.
Would you like to go 3x 2560x1600? Dude, not an issue. It's that kind of flexibility I like very much.



Here we have flipped the card around once more and look at the top side. Let's zoom in a little though.
We feel it is quite respectable how ATI managed to deal with the power envelope. Remember, performance has doubled up, plus we have some new gadgets on board. Meanwhile power consumption maxes out at just 188 Watts for the Radeon HD 5870. As such, two 6-pin power connectors are all you need to connect, each delivering 75 Watts, adding up to 150 Watts.
And if you are wondering where the rest of the power comes from... another 75 Watts is delivered through the PCIe bus.
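Spelled out, the power budget arithmetic looks like this:

    # Reference HD 5870 power budget: two 6-pin PEG connectors plus the slot.
    six_pin = 75   # Watts per 6-pin PEG connector
    slot = 75      # Watts the PCIe x16 slot may deliver
    tdp = 188

    available = 2 * six_pin + slot
    print(available, "W available,", available - tdp, "W headroom")  # 225 W, 37 W

So the board has 225 Watts on tap against a 188 Watt TDP, leaving a comfortable margin without needing an 8-pin connector.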
The Radeon HD 5870 will obviously also be CrossFireX compatible; you could hook up one or even two more of these cards and go really nuts. We however recommend you not to use more than two GPUs, as driver wise you'll quickly run into problems, let alone that performance scaling becomes very inefficient.

At the rear side of that rounded curve on the card we see two air intakes. The card is designed in such a manner that it will take in air from inside your PC and exhaust the heated air outside the PC.
As always, we recommend a well ventilated PC with at least a 120mm intake and exhaust fan. Create some airflow, fellas; it's really important.
Okay, who else is thinking about a car design here? Anyway, the cooling is working well and will keep the card at acceptable heat levels. It however is (as is often the case with ATI reference coolers) a tad on the noisy side. We'll talk about that some more in the next few pages.
Oh wait ... stop the press.
Courtesy of Guru3D regular USFORCES, who made this photo ;)
And here we have the card all pimped out in our way too expensive test system. Looks good.


Hardware installation

Installation of the product really is easy. Once the card is installed and seated in the PC, we connect two 6-pin power connectors to the graphics card. And yes... do make sure your power supply is compatible.
You can now turn on your PC, boot into Windows, install the latest ATI Catalyst driver, and after a reboot all should be working. No further configuration is required.

The two 6-pin power connector headers -- you need to connect them both.
Energy consumption

We'll now show you some tests we have done on the overall power consumption of the PC. Looking at it from a performance versus wattage point of view, power consumption is pretty good for a product of this caliber; according to ATI the 5870 has a TDP of 188 Watts.
The methodology is simple: we have a device constantly monitoring the power draw of the PC. After we have run all our tests and benchmarks, we look at the recorded maximum peak; that's the bulls-eye you need to observe, as the power peak is extremely important. Bear in mind that you are not looking at the power consumption of the graphics card alone, but at the consumption of the entire PC.
Our test system is a power hungry Core i7 965 / X58 based system overclocked to 3.75 GHz. Next to that, we have energy saving functions disabled on this motherboard and processor (to ensure consistent benchmark results).
Our ASUS motherboard also allows adding power phases for stability, which we enabled as well. I'd say on average we are using roughly 50 to 100 Watts more than a standard PC due to these settings; then add the CPU overclock, water-cooling, additional cold cathode lights etc.
Keep that in mind. Our normal system power consumption is much higher than your average system.
  • System in IDLE = 169 Watts
  • System with GPU in FULL Stress = 358 Watts
The monitoring device is reporting a maximum system wattage peak at roughly 350~400 Watts, and for a PC with this high-end card, this is simply low and certainly remains within acceptable levels.
The IDLE wattage is very okay; the card is clocking down massively, resulting in an all-time low power consumption (for our test PC). We'll show you that in a graph in a minute.
Recommended Power Supply

So here's my power supply recommendation:
Radeon HD 5870
  • The card requires you to have a 500 Watt power supply unit at minimum if you use it in a high-end system. That power supply needs to have at least 40 Amps available (in total, accumulated) on the +12 volt rails.
Radeon HD 5870 CrossfireX
  • A second card requires you to add another 188 Watts. You need a 700+ Watt power supply unit if you use it in a high-end system. That power supply needs to have at least 55~60 Amps available (in total, accumulated) on the +12 volt rails.
For each card that you add, add another 200 Watts as a safety margin (a quick +12 volt sanity check follows below).
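As a rough sanity check on those amperage numbers (simple watts-equals-volts-times-amps arithmetic of ours, not an official ATI figure):

    # Convert the recommended +12 V amperage into watts.
    def rail_watts(amps, volts=12):
        return amps * volts

    print(rail_watts(40))                   # 480 W on +12 V, single HD 5870
    print(rail_watts(55), rail_watts(60))   # 660-720 W on +12 V, CrossFireX

So the 40 Amp guideline reserves roughly 480 Watts on the 12 volt side, enough for the 188 Watt card plus an overclocked CPU, drives and fans with headroom to spare.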
There are many good PSUs out there; please have a look at our many PSU reviews, as we have loads of recommended PSUs for you to check out there. What would happen if your PSU can't cope with the load?
  • bad 3D performance
  • crashing games
  • spontaneous reset or imminent shutdown of the PC
  • freezing during gameplay
  • PSU overload can cause it to break down
The core temperature

Let's have a look at the temperatures this huge cooler offers.
We now fire off a hefty shader application at the GPU and start monitoring temperature behavior as it would be when you are gaming intensely and continuously; we literally stress the GPU 100% here, as you can see in the graph. We measured at a room temperature of 21 degrees Celsius.
We report two stages, the GPU(s) in IDLE and under stress. Here's what we get returned:
Card              TEMP IDLE (°C)    TEMP FULL (°C)
Radeon HD 5870    35~40             77
As you can see, we get very respectable temperatures returned. When the card is clocked down and idling we see a temperature of roughly 35 to 40 degrees C (95 to 104 F). And when we completely stress the GPU at 100% for a while, temperatures rise towards roughly 77 degrees C (170 F). You know what, that's okay.
But is the cooler very loud then?

Noise Levels coming from the graphics card
When graphics cards produce a lot of heat, that heat usually needs to be transported away from the hot core as fast as possible. Often you'll see massive active fan solutions that can indeed get rid of the heat, yet all the fans these days make the PC a noisy son of a gun. I'm doing a little try-out today with noise monitoring, so basically the test we do is fairly subjective. We bought a certified dBA meter and will start measuring how many dBA originate from the PC. Why is this subjective, you ask? Well, there is always noise in the background, from the streets, from the HD, the PSU fan, etc., so this is by a mile or two not a precise measurement. You could only achieve objective measurement in a sound test chamber.
The human hearing system has different sensitivities at different frequencies. This means that the perception of noise is not at all equal at every frequency. Noise with significant measured levels (in dB) at high or low frequencies will not be as annoying as it would be when its energy is concentrated in the middle frequencies. In other words, the measured noise levels in dB will not reflect the actual human perception of the loudness of the noise. That's why we measure the dBA level. A specific circuit is added to the sound level meter to correct its reading with regard to this concept. This reading is the noise level in dBA. The letter A is added to indicate the correction that was made in the measurement. Frequencies below 1kHz and above 6kHz are attenuated, whereas frequencies between 1kHz and 6kHz are amplified by the A weighting.
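For the curious, that correction curve is standardized (IEC 61672), and our little sketch below computes it directly; you can see how hard the lows and highs are attenuated relative to the 1 kHz reference:

    # Standard IEC 61672 A-weighting correction, normalised to 0 dB at 1 kHz.
    import math

    def a_weighting_db(f):
        ra = (12194**2 * f**4) / (
            (f**2 + 20.6**2)
            * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
            * (f**2 + 12194**2)
        )
        return 20 * math.log10(ra) + 2.00

    for freq in (31.5, 125, 1000, 4000, 16000):
        print(f"{freq:7.1f} Hz: {a_weighting_db(freq):+6.1f} dB")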
TYPICAL SOUND LEVELS
Jet takeoff (200 feet)          120 dBA
Construction site               110 dBA    Intolerable
Shout (5 feet)                  100 dBA
Heavy truck (50 feet)            90 dBA    Very noisy
Urban street                     80 dBA
Automobile interior              70 dBA    Noisy
Normal conversation (3 feet)     60 dBA
Office, classroom                50 dBA    Moderate
Living room                      40 dBA
Bedroom at night                 30 dBA    Quiet
Broadcast studio                 20 dBA
Rustling leaves                  10 dBA    Barely audible
The noise levels coming from the card paint a different picture. I'm not really thrilled about it. At IDLE you'll have no problem with the card whatsoever, as noise levels remain under 40 dBA.
Once the GPU starts to heat up, the fan RPM will go up as well. The card however remains steady at roughly 42 dBA, which really is a very normal noise level. Not annoying at all. So that's good as well.


Test Environment & Equipment

Here is where we begin the benchmark portion of this article, but first let me show you our test system plus the software we used.
Mainboard
ASUS X58 ROG edition Rampage II Extreme

Processor
Core i7 965 @ 3750 MHz (3.6 + Turbo mode).
Graphics Cards
Radeon HD 5870
Diverse

Memory
6144 MB (3x 2048 MB) DDR3 1866 MHz Corsair @ 1500 MHz
Power Supply Unit
1200 Watt
Monitor
Dell 3007WFP - up to 2560x1600
OS related software
Windows 7 RTM 64-bit
DirectX 9/10 End User Runtime
ATI Catalyst 8.66 RC6 for Cypress
NVIDIA GeForce 190.38 WHQL

Software benchmark suite
  • Far Cry 2
  • Fallout 3
  • Call of Duty 5: World at War
  • Mass Effect
  • Crysis WARHEAD
  • Tom Clancy's HAWX
  • Anno 1404
  • 3DMark Vantage
  • Brothers in Arms: Hells Highway
  • Dead Space
  • GPU Transcoder
A word about 'FPS'
What are we looking for in gaming, performance wise? First off, obviously Guru3D tends to think that all games should be played at the best image quality (IQ) possible. There's a dilemma though: IQ often interferes with the performance of a graphics card. We measure this in FPS, the number of frames a graphics card can render per second; the higher it is, the more fluidly your game will display itself.
A game's frames per second (FPS) is a measured average of a series of tests. That test is often a time demo, a recorded part of the game which is a 1:1 representation of the actual game and its gameplay experience. After forcing the same image quality settings, this time-demo is then used for all graphics cards so that the actual measuring is as objective as can be.
Frames per second    Gameplay
<30 FPS              very limited gameplay
30-40 FPS            average yet very playable
40-60 FPS            good gameplay
>60 FPS              best possible gameplay
  • So if a graphics card barely manages less than 30 FPS, then the game is not very playable; we want to avoid that at all cost.
  • With 30 FPS up to roughly 40 FPS you'll be very able to play the game, with perhaps a tiny stutter at certain graphically intensive parts. Overall a very enjoyable experience. Match the best possible resolution to this result and you'll have the best possible rendering quality versus resolution; hey, you want both of them to be as high as possible.
  • When a graphics card is doing 60 FPS on average or higher, you can rest assured that the game will likely play extremely smoothly at every point in the game; turn on every possible in-game IQ setting.
  • Over 100 FPS? You have either a MONSTER graphics card or a very old game. (These bands are summed up in the small helper below.)
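As referenced above, here are those playability bands wrapped into a small, purely illustrative helper:

    # The FPS playability bands from the table above.
    def playability(fps):
        if fps < 30:  return "very limited gameplay"
        if fps < 40:  return "average yet very playable"
        if fps < 60:  return "good gameplay"
        return "best possible gameplay"

    for fps in (25, 35, 50, 75):
        print(fps, "->", playability(fps))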


horro posted on 28-9-2009 12:03:31
I don't really understand this, but it's a graphics card you install in a PC, right? The price probably isn't cheap.

~DeatHMooN~ (OP) posted on 28-9-2009 16:50:43
Quote: horro posted on 28-9-2009 12:03 -- "I don't really understand this, but it's a graphics card you install in a PC, right? The price probably isn't cheap."


Actually it's quite cheap... compared to NVIDIA.
