~DeatHMooN~ posted on 23-2-2009 21:33:02

GTX 295 vs. ATI 4870X2: A Performance Comparison

[Recent High-End GPU Market Analysis]
The GPU world is always full of strife. Since NVIDIA launched its new G80 architecture in the summer of 2006, the poor performance of rival AMD's R600 left NVIDIA holding the initiative in both the mainstream and the top-end markets. Last year, the GT200-based GeForce GTX 260/280 thoroughly crushed AMD's flagship Radeon HD 3870, and for a while NVIDIA stood unchallenged at the high end. But the good times did not last: the proud green team clearly underestimated the strength of its competitor's new products. The launch of the RV770-based Radeon HD 4870/4850 injected fresh life into the market, taking the price/performance route to the extreme. Although neither GPU could match the GT200 in raw performance, the red team's strategy of attacking the mid-range successfully captured the attention of the vast majority of gamers.
http://img2.gamersky.com/news/UploadFiles_news/200812/20081202023004182.png
The GPU market is in flux; gamers should choose carefully.
The RV770 is built on TSMC's 55nm process, and its excellent performance opened a breach in the mid-to-high-end market. Continuing the strategy it used with the RV670, AMD quickly rolled out a dual-GPU, single-PCB flagship: the Radeon HD 4870X2, with two RV770 GPUs soldered onto one PCB, 2GB of high-speed GDDR5 memory, and an internal CrossFire link built around a PLX bridge chip. This move turned the GPU market upside down: NVIDIA's high end, guarded by "performance heavyweights" such as the GeForce GTX 260/280 and GeForce 9800GX2, was knocked out in one blow and the "king of cards" throne was lost. And the bad news did not stop there: in the six months since the RV770 launched, the architecture's various mid- and low-end derivatives have invaded the segments dominated by NVIDIA's G92, so the highest-volume part of the market has nearly fallen as well.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142605514.jpg
Multi-GPU systems offer only limited performance gains.
For most gamers, though, after spending thousands on GPUs, how much performance do you actually get for the money? To beat the Radeon HD 4870X2 with an NVIDIA solution, you would have to buy two GeForce GTX 280s and run them in SLI; to go faster with AMD's own solution, you would have to add a second Radeon HD 4870X2 for four-way CrossFire; and the only way to beat four-way CrossFire would be GeForce GTX 280 3-way SLI. On top of that you would need a motherboard and a very powerful CPU capable of feeding such a setup, plus a power supply of at least 1000W. For a 10-20% performance gain, what is the point of this arms race? This writer would much rather see a powerful single card solve the gaming problem, because the gains from multi-card setups are genuinely limited. So, can that new year's wish come true today?

[Biding Its Time: NVIDIA's New Weapon Arrives]
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142605110.jpg
The 216-core GeForce GTX 260.
2008 is behind us, and over the past year NVIDIA was hardly indifferent to its rival's strong comeback. First, with prices essentially unchanged, it increased the GeForce GTX 260's stream processor count from 192 to 216; then it released a GeForce GTX 260 built on a 55nm process, with more clock headroom, lower heat output, and lower power consumption. In tests by major media outlets at home and abroad it scored an overall win against the HD 4870.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142606918.jpg
NVIDIA's new king arrives.
Whatever the case, to rebuild its brand authority NVIDIA had to make a statement in the most persuasive segment: the high end. Half a year of preparation was clearly aimed at this moment. Six months after losing the battle for the "king of cards" title, the green team has returned to the ring with a new weapon, and that is today's protagonist: the NVIDIA GeForce GTX 295, launching today at CES 2009.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142607206.jpg
GeForce 9800GX2.
Developing a new architecture does not happen overnight, and with no new architecture ready to respond, NVIDIA has once again pulled out its "sandwich" card design. The same thing happened with the GeForce 7950GX2 and the GeForce 9800GX2; history repeats itself remarkably. The GeForce 7950GX2 was launched precisely to blunt the momentum of the Radeon X1900 XTX, and the GeForce 9800GX2 was launched to suppress the powerful Radeon HD 3870X2. This writer initially guessed that the GeForce GTX 295 would use a dual-GTX 260 design, and rumors to that effect had circulated earlier, but NVIDIA later confirmed officially that the GeForce GTX 295 uses a dual-GTX 280 configuration at the core level, i.e. 480 stream processors in total, while adopting the dual-GTX 260 configuration for the ROPs and memory.

[ Last edited by ~DeatHMooN~ on 5-3-2009 11:54 ]

~DeatHMooN~ posted on 23-2-2009 21:38:04

[Challenger to the Throne: NVIDIA GTX 295]
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142607309.jpg
GeForce GTX 295.
On detailed specifications: each of the NVIDIA GeForce GTX 295's GPUs carries 240 stream processors and 80 texture address/filter units, the same as the GTX 280. But each GPU has only 7 ROP/framebuffer partitions for 28 ROPs in total, with 896MB of GDDR3 memory on a 448-bit bus, matching the GTX 260. The core, including the texture units and ROPs, runs at 576MHz, the stream processors at 1242MHz, and the memory at an effective 1998MHz; as you can see, the clocks also follow the GTX 260.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142607157.jpg
The GT200 packs 1.4 billion transistors and is built on TSMC's 65nm process, while the GeForce GTX 295 moves to 55nm. In theory the process shrink should lower heat and power consumption and raise the GPU's performance per watt. On paper the GeForce GTX 295's TDP is 289W versus the Radeon HD 4870X2's 286W, but real-world behavior will have to be settled by actual test results.


http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142608787.jpg
NVIDIA GTX 295 next to the 9800GX2.
At first glance you might not be able to tell the GeForce GTX 295 apart from the GeForce GTX 260/280: they are the same length, 10.5 inches, and because they all use dual-slot coolers they are essentially the same thickness too. Without a close look they really are hard to distinguish.


Both cards we have today are GTX 295s from ASUS, using the NVIDIA reference design; the "ASUS" logo is prominently placed.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142609367.jpg
Two GTX 295s.
Look closer and a sharp-eyed gamer will spot the clue: although the GeForce GTX 295 is a dual-GPU design, only one GPU's traces are visible from the back. Like the earlier GeForce 7950GX2 and GeForce 9800GX2, this new "behemoth" uses a dual-PCB design, with a heatsink NVIDIA designed specifically for the GTX 295 sandwiched between the two boards.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142609944.jpg
The fan draws air through openings on both PCBs, serving the GPUs and memory on either side.
To handle cooling for the dual-PCB design, the GeForce GTX 295's fan has intake openings in both PCBs, so it can pull air from the front and the back of the card. Cool air flows through the gap between the two boards and is exhausted through the heatsink and out the I/O bracket.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142610325.jpg
A 6-pin plus 8-pin power combination; design TDP is 289W.
For power, the GeForce GTX 295's requirements match the GTX 280: one 8-pin plus one 6-pin auxiliary connector. NVIDIA's official figures put the GTX 295's TDP at about 289W. Considering a single GTX 280 already carries a 240W TDP, the 55nm process has clearly reduced the power the GPU needs.

Two dual-link DVI ports and one HDMI port.
For outputs, the two-DVI-plus-one-HDMI combination looks simple, but with the right adapters it covers practically every use case. Next to the power connectors there is also an S/PDIF digital audio connector, a convenience for home theater setups without an HDMI input.


http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142611290.jpg
The GTX 295 laid bare.
After a tedious teardown, the GeForce GTX 295 shows its true face. As the picture above shows, the GTX 295 consists of four parts: two PCBs each carrying one GPU, the heatsink, and the outer shell.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142611304.jpg
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142612239.jpg
Even at 55nm the GPU die is still huge.
The GeForce GTX 295 is built from two G200-400-B3 GPUs; the B3 stepping indicates the 55nm process. Even so, the die remains enormous, as the size comparison with an Intel Core 2 Duo E8500 makes clear.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142612683.jpg
Can you spot the differences between the two PCBs?
At a glance the two boards look almost identical: both carry 14 memory chips, and the power circuitry is exactly the same. Look a little closer, though, and clear differences emerge; let us go through them one by one.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142613470.jpg
Power circuitry: components and build quality are beyond reproach.
First, power delivery: both boards use plenty of tantalum capacitors, and a 3+1-phase design per GPU is ample for the job.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142613269.jpg
14 GDDR3 memory chips laid out on one side.
Each of the GeForce GTX 295's GPUs uses 14 Hynix 1.0ns GDDR3 chips; together the two GPUs form a 1792MB/448-bit memory configuration running at an effective 1998MHz, for 223.8GB/s of memory bandwidth.
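As a sanity check, that 223.8GB/s figure follows directly from bus width times effective data rate. A minimal sketch in Python, using only the numbers quoted above:

```python
# Peak memory bandwidth for the GTX 295's dual 448-bit GDDR3 configuration.
# Figures from the article: 1998 MHz effective data rate, 448-bit bus per GPU.

def peak_bandwidth_gbs(bus_width_bits: int, effective_mhz: float) -> float:
    """Peak bandwidth in GB/s: bytes per transfer x transfers per second."""
    bytes_per_transfer = bus_width_bits / 8          # 448 bits -> 56 bytes
    return bytes_per_transfer * effective_mhz * 1e6 / 1e9

per_gpu = peak_bandwidth_gbs(448, 1998)
total = 2 * per_gpu                                  # both GPUs combined
print(f"{per_gpu:.1f} GB/s per GPU, {total:.1f} GB/s combined")
# 111.9 GB/s per GPU, 223.8 GB/s combined
```

That matches the article's combined figure to the decimal.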


http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142614654.jpg
The NF200-P-SLI-A3 chip handles the internal SLI link between the two GPUs.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142615726.jpg
The NVIO2-P-A2 display output chip.
On the PCB that carries the SLI finger we find the NF200-P-SLI-A3 chip, the key to the internal SLI between the two GPUs; next to it are the two internal SLI connectors. Above it sits the NVIO2-P-A2 chip, which drives the two dual-link DVI outputs and the HDMI output.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142615945.jpg
The SLI connectors on the two PCBs.
The internal SLI connector is also visible on the board with the DVI ports. The GeForce GTX 295 supports external SLI as well, meaning two GTX 295s can be combined into a Quad-SLI setup with ferocious performance.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142616652.jpg
The internal SLI bridges.
The two internal SLI bridge connectors differ in length, and their interface also differs from the familiar external SLI type; this looks like a design NVIDIA developed specifically for the GTX 295.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142616123.jpg
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142617695.jpg
The cooler is well made; its performance remains to be proven in testing.
Now for the cooler: this center-mounted heatsink has a large copper block in the middle that absorbs heat from the GPUs on both sides, and as the picture shows, one face of the heatsink also provides a copper pad for the NF200-P-SLI-A3 bridge chip. Viewed from the top, the fins are fairly dense, and together with the blower fan's airflow path they should carry the GPUs' heat away effectively.

[Test Platform and Methodology]

Today's test platform may be the strongest in the world: two GeForce GTX 295s in Quad-SLI will face the arch-rival that burst onto the scene half a year ago, the AMD Radeon HD 4870X2, which will likewise run as a four-GPU CrossFire setup. The most ferocious GPU grudge match on the planet is about to begin. The test environment is an Intel Nehalem platform: an Intel Core i7 920 overclocked to 3.6GHz, cooled by the Thermalright Ultra-120 Extreme, famed as the "king of air cooling"; an ASUS P6T Deluxe X58 motherboard; 3GB of triple-channel DDR3-1333; Windows Vista Ultimate 32-bit installed on a 10,000rpm 300GB Western Digital Raptor; and a Thermaltake Toughpower 1500W power supply, likewise among the strongest available. The whole platform is star-studded, ensuring the GPUs can stretch their legs.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142617966.jpg
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142618502.jpg
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142619666.jpg
With hardware and software installed, the correct test procedure is: boot to the desktop, wait until the system has settled, and only then start running tests (with UAC, the screensaver, System Restore, automatic updates and other score-disturbing background tasks disabled). Every test is run three times, and once the results are stable and repeatable we take the best of the three runs. Game tests use the built-in benchmarks; for games without one, we record frame rates with Fraps in the same scene.
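The run-it-three-times-and-take-the-best policy described above can be sketched as a tiny harness. This is an illustrative sketch only; `run_once` is a stand-in for whatever launches the actual benchmark and returns its score, and the 5% stability tolerance is an assumption, not the article's:

```python
def best_of_runs(run_once, n_runs=3, tolerance=0.05):
    """Run the benchmark n_runs times; if the spread stays within
    `tolerance` of the mean (i.e. results are stable), keep the best run."""
    scores = [run_once() for _ in range(n_runs)]
    mean = sum(scores) / len(scores)
    spread = (max(scores) - min(scores)) / mean
    if spread > tolerance:
        raise RuntimeError(f"unstable results: {scores}")
    return max(scores)

# Toy stand-in for launching a real benchmark and reading back its score.
from random import uniform
print(best_of_runs(lambda: uniform(9900, 10000)))
```

Taking the best stable run, rather than the average, is exactly the choice the testers describe.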

[ Last edited by ~DeatHMooN~ on 5-3-2009 11:54 ]

~DeatHMooN~ posted on 23-2-2009 21:42:54

[Baseline Performance: 3DMark Vantage]
As the industry's first comprehensive DX10-based benchmark, 3DMark Vantage had people eagerly awaiting it well before its release. It fully exploits multi-core processors and multi-GPU configurations, and should satisfy PC gaming performance testing needs now and for some time to come. 3DMark Vantage is built specifically for DX10 graphics cards and runs only under Windows Vista SP1. It includes two graphics tests, two CPU tests, and six feature tests. Riding the new techniques and efficiency of the DX10 API, it treats players to a dazzling, lifelike visual feast, and it adds dedicated tests for artificial intelligence (AI) and physics acceleration.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142619388.jpg
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142619517.jpg
As the most authoritative DX10 benchmark, 3DMark Vantage reflects a GPU's overall strength well. Our scores show that whether in single-card or dual-card matchups, the NVIDIA GeForce GTX 295 comes out ahead: in Performance mode the single NVIDIA card leads AMD by 23% and the dual-card setup leads by about 35%; in Extreme mode the single card leads by 13% and the dual cards by 19%. Note also that PhysX support lets the NVIDIA GPUs double their opponents' scores in the CPU subtest.

[Baseline Performance: 3DMark 06]

3DMark06 is Futuremark's swan song for DX9 benchmarks. Building on 3DMark05, it strengthens SM3.0 and HDR support, adds multi-core and multi-threading support, and raises the requirements across the board. 3DMark06 was the first graphics benchmark to factor a CPU score into the result, making it better suited to assessing a whole system's 3D capability. It comprises two SM2.0 tests, two SM3.0/HDR tests, and two CPU tests, giving both the graphics card and the CPU a thorough workout. Although 3DMark06 is a last-generation benchmark, its results remain a solid reference for a card's DX9 performance.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142620502.jpg
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142620990.jpg
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142620603.jpg
Although 3DMark06 is three years old, many DX9.0c games remain popular with gamers, so this test speaks more to fans of older titles. We tested at 1920x1200 and 2560x1600 with 4xAA/16xAF enabled. At 1920x1200 a single GTX 295 leads the HD 4870X2 by about 10% and the dual-card setup leads by about 8%; at 2560x1600 the single NVIDIA card leads by 8% and the dual cards lead the HD 4870X2 CrossFire by about 8% as well, hardly a dramatic margin.

[Baseline Performance: Lightsmark 2008]

Lightsmark is a new-generation 3D benchmark focused on lighting effects and performance. Its main purpose is to test a PC's ability to run next-generation real-time global illumination; the developer says real-time global illumination will appear in several games releasing in the coming months.
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142621447.jpg
Unlike the newest 3DMark, Lightsmark has modest system requirements, letting far more users run it. Lightsmark's specific requirements are as follows:

- Any x86/x64 processor with SSE support

- Any graphics card compatible with the OpenGL 2.0 standard with 32MB or more of video memory
1) NVIDIA GeForce 5xxx, 6xxx, 7xxx, 8xxx (including GeForce Go) with drivers newer than October 2007
2) AMD/ATI Radeon 9500-9800, Xxxx, X1xxx, HD2xxx (including Mobility Radeon) with drivers newer than October 2007
3) Including FireGL and Quadro series products
4) Cards from other vendors without OpenGL 2.0 support, or with very old drivers, are not supported
http://img2.gamersky.com/news/UploadFiles_news/200901/20090109142621691.jpg
The lighting benchmark Lightsmark 2008 does not demand much from the GPU, but it exposes the gaps between GPUs very accurately. We tested at 1920x1200 fullscreen and 2560x1600 fullscreen. As the bar charts make obvious, the NVIDIA GeForce GTX 295's lead is dramatic: a single card outruns the opponent's four-GPU CrossFire. Card for card, the GTX 295 leads by 76% at 2560x1600 and 106% at 1920x1200; in dual-card mode the GTX 295 leads the HD 4870X2 by about 35% at 2560x1600 and 41% at 1920x1200.

~DeatHMooN~ posted on 23-2-2009 21:50:03

HD 4870 X2

Hello everybody and welcome to this R700 -- Radeon HD 4870 X2 preview.
We felt we first needed to come up with a small explanation. Typically when new graphics cards are released, previewed or whatever, Guru3D is among the first websites to publish an article on it.
So last week, on Monday, a couple of websites posted a preview while Guru3D did not have our card yet... our neck hair grew back inwards. Especially considering we had been in contact with AMD regarding this very topic, the X2 cards.
What happened? -- Somewhere within AMD the decision was made to arrange a preview, with roughly 10-15 boards available worldwide for the previews. Within that same line of reasoning / decision making, these preview boards would be geographically distributed. Meaning USA, UK, France, Germany, Scandinavia ... but unfortunately the tiny little Netherlands was not included, and yes, that's exactly where our office is.
So despite the fact Guru3D writes in English, we were not included in the preview for the sole fact that we are stationed in the Netherlands. It was seriously the biggest cockup I've seen in years. My contacts at AMD however became aware of my frustration on preview day, last Monday. Guru3D & AMD got on a conference call on Tuesday, and by Thursday we had the preview board as well.
So on behalf of Guru3D, our apologies for the delay, but finally today we'll actually look into a product that's been a big success since its release... yet now doubled up. I'm of course talking about the mighty new Radeon HD 4800 series of cards, the X2. It was already known for a long time that AMD would release a product under codename R700. We just didn't know for sure whether that product would be a single-"GPU" based product or a multi-"GPU" based product.
We now have the answer. In very short wording: you take a large PCB (printed circuit board), slap two 4870 processors and a bridge chip on there, and call it a Radeon 4870 X2. It's surely not the most elegant method of getting a graphics card into the high-end segment; it is, however, as this preview will show you, a very effective one.
Before we dive into the preview, we need to mention that there is one restriction we got from AMD. We can use any benchmark we want, yet are limited to a small number (four games) considering this is a preview. A second note that we must make is that this is an early beta engineering test sample; the power saving features are disabled on these boards, meaning that IDLE wattage and even peak wattage will be off by a good number.
Other than that we have a green light to tell you what we want. Good good... as it should be an interesting read, let's have a peek first and then slide onto the next page please.
http://www.guru3d.com/imageview.php?image=14293                                                                           
Radeon HD 4870 X2 preview               
By: Hilbert Hagedoorn | Edited by John A. Johnsen | Published: July 21, 2008

RV770 + RV770 = R700
Alright then, grab some coffee and let's get started. Roughly halfway through August AMD will release the 4800 series X2 products officially. What you see today is nothing final. Board partners are free in their choices; you'll see X2-based cards with 4850 configurations, 4870 configurations, gDDR3, gDDR5, 1 GB, 2 GB, maybe even 4 GB memory. So that is why I want to mark this article very clearly and specifically as a preview. We'll show some early rough numbers based on this two-GPU product. But to understand the Radeon 48x0 X2 graphics card, you must learn a little more about the GPUs that are empowering the product, which you already know under the name Radeon HD 4850/4870, yet which was developed under codename RV770.
Going first base with RV770
By launching the RV770 chip, AMD effectively doubled up performance over the last-gen product, especially for you, at a very acceptable price level. Quite an achievement. But let's talk a little about the GPU and the differences between the two models that were released (4850/4870), both based on the RV770 chip. AMD put nearly a billion transistors into that GPU, which is now built on a 55-nm process.
The chip is literally 16 mm wide and high, which for AMD is still quite large for a 55nm product. The number of transistors for a midrange product like this is extreme, and typically it's best to relate that directly to the number of shader processors to get a better understanding. But first let's look at some nice examples of die sizes of current architectures.
http://www.guru3d.com/imageview.php?image=14301
One of the two RV770 cores utilized on the R700 product.
Now please understand that ATI uses a different shader processor architecture than NVIDIA, so do not compare the numbers that way, or in that manner. The Radeon 4850/4870 have 800 scalar sub-processors (320 on the HD 3800 series) and now a significant forty texture units (it was 16 in the last-gen architecture).
The stream/compute/shader processors (can we please just name them all shader processors?) definitely saw a good number of changes; if you are into this geek talk, you'll spot 10 SIMD clusters each carrying 80 32-bit shader processors (this accumulates to 800). If I remember correctly, one SIMD unit can handle double precision.
Much like we recently noticed in NVIDIA's GTX 200 architecture, the 80 scalar stream processors per SIMD unit have 16KB of local data cache/buffer that is shared among the shader processors. Next to the hefty shader processor increase, you probably already noticed the massive number of texture units. In the last-generation product we counted 16 units; the 4800 series has 40.
When you do some quick maths, that's 2.5x the number of shader processors over the last-gen product, and 2.5x the number of texture units. That's ... a pretty grand change, folks. Since the GPU has 800 shader processors, it can produce raw power of 1000 to 1200 GFLOPS in single precision. It's a bit lame and inaccurate to do, but... divide the number of ATI's scalar shader processors by 5 and you'll roughly equal the performance of NVIDIA's stream processors; you could (in an abstract way) say that the 4800 series has 160 shader units, if that helps you compare it to NVIDIA's scaling. Again, there's nothing scientific or objective about that explanation.
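The 1000-1200 GFLOPS figures and the divide-by-5 heuristic above are simple arithmetic. A small sketch, assuming (as the text implies) one multiply-add, i.e. 2 FLOPs, per scalar ALU per clock:

```python
# Back-of-envelope MAD throughput for the RV770's 800 scalar ALUs.

def mad_gflops(shaders: int, clock_mhz: float, flops_per_clock: int = 2) -> float:
    """GFLOPS = shader count x FLOPs per clock x clock in GHz."""
    return shaders * flops_per_clock * clock_mhz / 1000

print(mad_gflops(800, 625))   # HD 4850 at 625 MHz -> 1000.0 GFLOPS
print(mad_gflops(800, 750))   # HD 4870 at 750 MHz -> 1200.0 GFLOPS

# The rough "divide by 5" comparison against NVIDIA's scalar SPs:
print(800 / 5)                # -> 160.0 "NVIDIA-equivalent" shader units
```

Both endpoints of the 1000-1200 GFLOPS range fall out of the two quoted clock speeds.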
A 4870 X2 (R700) has two 4870 cores (RV770) embedded onto the PCB.
http://www.guru3d.com/imageview.php?image=14285



Both the 4850 and 4870 products utilize the same chip, make no mistake. What you'll notice is that the 4850 runs at a 625 MHz clock frequency and comes with 512MB of GDDR3 memory (framebuffer) clocked at 1986 MHz. These two factors are pretty much the biggest difference compared to the big brother 4870. Power requirements aren't bad either: you can expect roughly 110W peak power consumption per GPU in this configuration.
The second configuration is obviously that sweet Radeon HD 4870. We see exactly the same GPU mounted on this board, yet there are some distinct differences to be found. The performance of this product is tweaked and maximized. You will notice that AMD's board partners will offer higher clock frequencies on this product, squeezing out some more performance. Yet more importantly, this is the product you guys hear so many rumors about: the product with GDDR5 memory. GDDR5 is a first for sure.
There are some distinct advantages to be found in GDDR5 memory: it favors much higher frequencies over tight timings. In the end this gives the Radeon HD 4870 a performance boost, as GDDR5 memory lifts overall peak bandwidth to a theoretical (roughly) 3.6 Gbps. And that's just crazy fast (GDDR3 on the 4850 = 2.0 Gbps).
This is the biggest difference between the two (4850/4870) models. Next to the memory, and I already mentioned this, you can expect a higher clock frequency for the core/shader domain: 750 MHz will be the default clock frequency, and memory-wise the frequency sits at 3600 MHz.
http://www.guru3d.com/imageview.php?image=14299
The product we preview today is based on the 4870, with two GPUs merged together by a bridge chip utilizing CrossFire technology to render your games faster. "What's CrossFire?" some of you might ask. A valid question, as we take terms like CrossFire & SLI for granted these days.

Well, just like NVIDIA's SLI, CrossFire is a situation where you add a second, third or even fourth similar-generation graphics card (or in today's case GPU) to the one you already have in your PC and effectively try to double, triple or quadruple your raw rendering / gaming performance.

The idea is not new at all though. If you are familiar with the hardware developments of the past years, you'll remember that 3dfx had a very similar concept with the Voodoo 2 graphics card series. There are multiple ways to have two cards render one frame. Think of Supertiling, a popular form of rendering; Alternate Frame Rendering, where each card renders a frame (even/uneven); or Split Frame Rendering, where one GPU simply renders the upper part of the frame and the other the lower part. So you see, there are many methods by which two or more GPUs can be utilized to bring you a gain in performance.
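The two simplest schemes above, AFR and split-frame rendering, can be illustrated with a toy sketch. The function names here are invented for the example and do not correspond to any real driver API:

```python
# Illustrative only: how frames (AFR) or scanline bands (SFR) could be
# divided between GPUs. Purely a sketch of the scheduling idea.

def afr_assignment(frame_index: int, gpu_count: int = 2) -> int:
    """Alternate Frame Rendering: frame i goes to GPU i mod gpu_count."""
    return frame_index % gpu_count

def sfr_split(frame_height: int, gpu_count: int = 2):
    """Split Frame Rendering: each GPU renders one horizontal band."""
    band = frame_height // gpu_count
    return [(g * band, frame_height if g == gpu_count - 1 else (g + 1) * band)
            for g in range(gpu_count)]

print([afr_assignment(i) for i in range(6)])  # [0, 1, 0, 1, 0, 1]
print(sfr_split(1200))                        # [(0, 600), (600, 1200)]
```

AFR scales throughput but adds a frame of latency; SFR keeps latency flat but makes each GPU's load depend on where the scene's complexity sits on screen.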
The R700 as tested today is based on a 4870 configuration. You'll spot several models at the actual launch based on multiple configurations.
Yet the engineering sample that we received is based on the 4870, as it has a core frequency of 750 MHz and its GDDR5 memory clocked at 900 MHz, which quadruples itself to an effective frequency of 3600 MHz. Doubling up also means doubling up the memory amount, which is now set at 2 GB on this card. See, each GPU will clone that memory. For example, each texture sits in both framebuffers, as the GPUs cannot share that buffer -- cloning.
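The clock quadrupling and the memory cloning described above boil down to this arithmetic (a sketch; GDDR5 transfers four data words per base clock, GDDR3 two):

```python
# Effective (marketing) data rate and mirrored-memory arithmetic.

def effective_mhz(base_mhz, transfers_per_clock):
    """Effective data rate = base clock x transfers per clock."""
    return base_mhz * transfers_per_clock

print(effective_mhz(900, 4))   # GDDR5, quad data rate -> 3600
print(effective_mhz(993, 2))   # 4850's GDDR3, double data rate -> 1986

# Mirrored framebuffers: 2 x 1 GB soldered on the card, 1 GB usable,
# because each GPU keeps its own full copy of every texture.
physical_mb = 2 * 1024
usable_mb = physical_mb // 2
print(physical_mb, usable_mb)  # 2048 1024
```

This is why a "2 GB" X2 card behaves like a 1 GB card as far as texture budgets are concerned.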
Let's compile a chart and look at the differences:

|                                      | ATI Radeon HD 4850 | ATI Radeon HD 4870 | ATI Radeon HD 3850 | Radeon 4870 X2 (R700) |
| # of transistors                     | 965 million        | 965 million        | 666 million        | 965 million x2        |
| Stream Processing Units              | 800                | 800                | 320                | 800 x2                |
| Clock speed                          | 625 MHz            | 750 MHz            | 670 MHz            | 750 MHz               |
| Memory Clock (effective)             | 2000 MHz GDDR3     | 3600 MHz GDDR5     | 1660 MHz GDDR3     | 3600 MHz GDDR5        |
| Math processing rate (Multiply Add)  | 1000 GigaFLOPS     | 1200 GigaFLOPS     | 428 GigaFLOPS      | 2400 GigaFLOPS        |
| Texture Units                        | 40                 | 40                 | 16                 | 40 x2                 |
| Render back-ends                     | 16                 | 16                 | 16                 | 16 x2                 |
| Memory                               | 512MB GDDR3        | 512MB GDDR5        | 512MB GDDR3        | 1024MB GDDR5 x2       |
| Memory interface                     | 256-bit            | 256-bit            | 256-bit            | 256-bit x2            |
| Fabrication process                  | 55nm               | 55nm               | 55nm               | 55nm                  |
| Power Consumption (peak)             | ~110W              | ~160W              | ~90W               | ~300W                 |

And that R700 product is what we'll look at, as AMD wants to reveal a bit of what they are working on. Anyway, let's have a look at the product and talk some more.
http://www.guru3d.com/imageview.php?image=14295
Frontlines: Fuel of War
This is a game that's got a couple of big ambitions. The first is to provide a large-scale multiplayer experience along the lines of Battlefield: Modern Combat. That means in addition to running around on foot, you can jump in and control a variety of vehicles on the battlefield. However, it also wants to add what Battlefield sorely lacks, which is a compelling single-player experience. Perhaps the most impressive level is a completely war-torn cityscape that has gutted skyscrapers everywhere. Even more startling is that you can actually get into some of these towering husks, which gives you an incredibly high perch. While that might seem a bit unfair, keep in mind that there are many ways for other players to get at you, such as the remote-controlled air drones that can fly up and shred you with guns or rockets.
Frontlines: Fuel of War is a great title we recently added to our benchmark suite.
That's good performance: in-game, everything possible image-quality-wise is maxed out. Very good performance across the board. Here we see that the 4870 X2 is definitely in the lead.
http://www.guru3d.com/imageview.php?image=14178
3DMark Vantage (DirectX 10)
3DMark Vantage focuses on the two areas most critical to gaming performance: the CPU and the GPU. With the emergence of multi-package and multi-core configurations on both the CPU and GPU side, the performance scale of these areas has widened, and the visual and game-play effects made possible by these configurations are accordingly wide-ranging. This makes covering the entire spectrum of 3D gaming a difficult task. 3DMark Vantage solves this problem in three ways:
1. Isolate GPU and CPU performance benchmarking into separate tests,
2. Cover several visual and game-play effects and techniques in four different tests, and
3. Introduce visual quality presets to scale the graphics test load up through the highest-end hardware.
To this end, 3DMark Vantage has two GPU tests, each with a different emphasis on various visual techniques, and two CPU tests, which cover the two most common CPU-side tasks: Physics Simulation and AI. It also has four visual quality presets (Entry, Performance, High, and Extreme) available in the Advanced and Professional versions, which increase the graphics load successively for even more visual quality. Each preset will produce a separate, official 3DMark Score, tagged with the preset in question.
The graphics tests have four quality presets available: Entry, Performance, High and Extreme. Each preset specifies a certain setting for the rendering options listed in section 5.6. The graphics load increases significantly from the lowest to the highest preset. The Performance preset is targeted at mid-range hardware with 256 MB of graphics memory. The Entry preset is targeted at integrated and low-end hardware with 128 MB of graphics memory. The higher presets require 512MB of graphics memory and are targeted at high-end and multi-GPU systems.
3DMark Vantage is obviously fresh off the shelves. We show two scores: first the Vantage GPU score (not the overall score) and, in the lower segment, the 3DMark06 score. The new press driver did some magic for 06, by the way. Not sure what to make of it. But my general recommendation is simple: focus on the 3DMark Vantage GPU score results for an accurate, well-founded rating.
http://www.guru3d.com/imageview.php?image=14177
Gaming: S.T.A.L.K.E.R. - Shadow of Chernobyl
Shortly after another disaster in Chernobyl, the authorities surround the area with the Russian equivalent of the U.S. National Guard, and they begin to hear weird screams and rumblings coming from within. After a while though, most of them are returned to earlier posts. Curiosity gets the better of some people, so they sneak into the 30-kilometer area to do some good old-fashioned investigating. These people are called Stalkers, and they report back to the authorities with their findings.
The 3D engine shines in a few key areas, all crucial in shaping the game's atmosphere. It's got a huge draw distance, which leads to the palpable feeling that this is a big world. Lighting and shadowing are its other big strengths. For this benchmark we have the in-game settings at maximum (AA/AF enabled); dynamic lighting was enabled.
Image quality settings:
- In-game software anti-aliasing enabled
- 16x anisotropic filtering
- Dynamic lighting enabled

Stalker we set at maximum quality settings; we enable everything possible, including dynamic lighting. S.T.A.L.K.E.R. does not support hardware anti-aliasing, yet uses a software-applied method, which is enabled as well. We again see the X2 take off real hard.
http://www.guru3d.com/imageview.php?image=14176
Gaming: F.E.A.R.
As many of you will be aware, F.E.A.R. (or First Encounter Assault Recon for short) involves a rather mysterious-looking girl in a red dress, a man with an unappetizing taste for human flesh and some rather flashy action set pieces a la The Matrix. All of this is brought together by one of the best game engines around.
F.E.A.R. makes its cinematic pretensions clear from the start. As soon as the credits roll and the music starts, you are treated to the full works. The camera pans across scores of troops locked 'n' loaded and ready to hunt you down, all seemingly linked to 'Paxton Fettel', a strange kind of guy with extraordinary psychic power, capable of controlling battalions of soldiers and with a habit of feeding off any poor unfortunate innocents, presumably to aid his powers of concentration. It doesn't end there: after a short briefing at F.E.A.R. HQ you are sent off to hunt down Fettel, equipped with reflexes that are 'off the chart'. These reflexes are put to excellent use, with slow-motion effects like those of Max Payne, or the aforementioned Matrix. But here it is oh so much more satisfying thanks to the outstanding environmental effects. Sparks fly everywhere as chunks of masonry are blasted from the walls and blood splatters from your latest victim. The physics are just great, with boxes sent flying, shelves tipped over, and objects hurtling towards your head. And the explosions, well, the explosions just have to be seen, and what's so great is that you can witness it all in its full glory in slow motion.
Let me confirm to you that, based on this, F.E.A.R. will have you shaking on the edge of your seat, if not falling off it. The tension is brought to just the right level, with key moments that will make your heart leap. Play the demo and you will see what I mean. The key to this is the girl. Without revealing anything significant, let's just say that she could take on the whole of Mars for creepiness.
Image quality settings:
- 4x anti-aliasing
- 16x anisotropic filtering
- Soft shadows disabled

F.E.A.R. has a built-in test which we used to measure performance; you should try it yourself, it's really fun to compare with our results. Yet F.E.A.R. after all this time is still a tough title for graphics cards, especially when you configure it for maximum image quality. This game is heavily pixel-shaded and shows some dark and creepy effects. Again, 4xAA and 16xAF were applied here. All settings at high, no soft shadows. At 2560x1600 this setting shows an average framerate of over 110 frames per second. What more needs to be said?
http://www.guru3d.com/imageview.php?image=14175



[ Last edited by ~DeatHMooN~ on 23-2-2009 21:51 ]

~DeatHMooN~ posted on 23-2-2009 21:52:52

3DMark06 Score

http://images.hardwarecanucks.com/image/skymtl/GPU/PALIT-HD4870X2/HD4870X2-40.jpg

Blue_star posted on 27-2-2009 20:24:21

I used to be an Nvidia fan~~
Now I've switched to ATI~ cheap and powerful~