
    13 Years of Nvidia Graphics Cards

    001 Nvidia's history begins with the NV1 chip, sold by SGS-THOMSON Microelectronics as the STG-2000. That board included a 2D card, 3D accelerator, sound card, and a port for Sega Saturn game controllers, all on the same PCI board. The best-known of these cards is the famous Diamond Edge 3D, released two years after Nvidia's inception.

    The principal problem with the NV1 was in its management of 3D: it used quadratic texture mapping (QTM) instead of the technique used currently, which is based on polygons. DirectX appeared just after the card was released, and it used polygons, so the NV1 was a failure over the long term. Among the points worth mentioning are that the memory of the graphics card could be increased on certain models (from 2 MB to 4 MB) and that many of the games optimized for the card were ported from the Saturn, since the card used a similar architecture.
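
    The contrast with triangle rendering can be made concrete. Below is a rough Python sketch (our illustration, not the NV1's actual algorithm) that evaluates a point on a quadratic Bezier patch directly from a 3x3 grid of nine control points; a polygon pipeline would instead approximate the curved surface with triangles and rasterize those:

    ```python
    # Illustrative quadratic (degree-2 Bezier) patch evaluation: a curved
    # surface defined by a 3x3 grid of control points is sampled directly,
    # instead of being diced into triangles first.
    def bernstein2(i, t):
        # Degree-2 Bernstein basis: (1-t)^2, 2t(1-t), t^2
        return ((1 - t) ** 2, 2 * t * (1 - t), t ** 2)[i]

    def eval_patch(ctrl, u, v):
        # ctrl: 3x3 grid of (x, y, z) control points; u, v in [0, 1]
        point = [0.0, 0.0, 0.0]
        for i in range(3):
            for j in range(3):
                w = bernstein2(i, u) * bernstein2(j, v)
                for axis in range(3):
                    point[axis] += w * ctrl[i][j][axis]
        return tuple(point)

    # A patch bulging upward in z at its center control point.
    ctrl = [[(x, y, 1.0 if (x, y) == (1, 1) else 0.0)
             for y in range(3)] for x in range(3)]
    print(eval_patch(ctrl, 0.5, 0.5))  # (1.0, 1.0, 0.25)
    ```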

    Nvidia NV1

    Date released September 1995

    Card interface PCI

    Fillrate 12 Mtexels/s

    DirectX version -

    Memory Type EDO/VRAM

    Maximum memory 4 MB

    Memory clock frequency 75 MHz

    Memory bus 64 bits

    Maximum bandwidth 0.6 GB/s

    Maximum resolution 1600 x 1200 / 15 bits

    Video out 1 x VGA

    RAMDAC 170 MHz

    The NV2 used the same rendering method and was never completed. It was to have been used in the Dreamcast console (which replaced the Saturn), but Sega finally chose a polygon-based technology (PowerVR) and Nvidia abandoned QTM in favor of polygon-based rendering with the NV3.

    002 Riva 128 And Direct3D

    In 1997, Nvidia's move to polygon-based 3D yielded the NV3, better known under the name Riva 128. Little-known fact: Riva stands for Real-time Interactive Video and Animation accelerator. Two versions of the chip existed: the Riva 128 and the Riva 128ZX. The difference was slight: the ZX had a faster RAMDAC, 8 MB instead of 4 MB of memory, and AGP 2x support. The Riva 128 enjoyed a certain level of success because its price was attractive, despite quality that sometimes left a bit to be desired compared to the 3Dfx products of the period. The Nvidia card offered 2D and 3D on the same card, as well as support for Direct3D. The OpenGL drivers were released only with the 128ZX, though Quake-specific versions existed earlier (not a complete ICD).

    Nvidia NV3 (Riva 128/128ZX)

    Date released April 1997 March 1998

    Card interface PCI/AGP 1x PCI/AGP 2x

    Fillrate 100 Mtexels/s 100 Mtexels/s

    Fillrate 100 Mpixels/s 100 Mpixels/s

    Rendering pipelines 1 1

    Texture units 1 1

    Chip clock frequency 100 MHz 100 MHz

    Fabrication process 0.35 µm 0.35 µm

    Number of transistors 3.5 million 3.5 million

    DirectX version 5 5

    Memory Type SDRAM SDRAM

    Maximum memory 4 MB 8 MB

    Memory clock frequency 100 MHz 100 MHz

    Memory bus 128 bits 128 bits

    Maximum bandwidth 1.6 GB/s 1.6 GB/s

    Video out 1 x VGA 1 x VGA

    RAMDAC 206 MHz 250 MHz

    The Riva 128 was popular with OEMs due to its price, which was below that of a Voodoo Graphics card, while it provided nearly the same Direct3D performance. This was one of the first AGP cards, even if the Riva 128 used the interface essentially as a faster PCI bus. Finally, and somewhat amusingly, a very well-known manufacturer was a competitor of Nvidia's for performance with one of its graphics cards: Intel, with its i740. Times have changed.

    003 NV4: Twin Texels For The TNT

    In 1998, 3Dfx had a high-performance 3D card in the Voodoo2, but the card had major limitations. These included archaic memory management (separate textures), a 16-bit color ceiling, the need for a separate 2D graphics card, and a PCI-only interface in practice (though AGP models did exist).

    Then the Riva TNT arrived on the scene, which was a fast 3D card with a lot of memory (for the

    time) and built-in 2D capabilities. Except for video performance (it had no MPEG2 acceleration, as ATI's cards did), the TNT was a success. It was the first Nvidia card capable of applying two textures in a single pass, hence the name TNT, for TwiN Texel.
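
    What "two textures in a single pass" means can be sketched in a few lines of Python (our own toy framing; the modulate combine and the lightmap example are illustrative assumptions). With a single texture unit, the second texture would cost an entire extra rendering pass:

    ```python
    # Toy single-pass dual texturing: two texture units each sample once
    # per pixel, and the results are combined (modulate shown here).
    def shade_pixel(sample0, sample1, u, v):
        c0 = sample0(u, v)  # first texture unit, e.g. the base color map
        c1 = sample1(u, v)  # second texture unit, e.g. a lightmap
        return tuple(a * b for a, b in zip(c0, c1))  # modulate combine

    base = lambda u, v: (1.0, 0.5, 0.25)  # stand-in for a texture fetch
    light = lambda u, v: (u, u, u)        # brightness ramp as a lightmap
    print(shade_pixel(base, light, 0.5, 0.0))  # (0.5, 0.25, 0.125)
    ```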

    Nvidia NV4 (Riva TNT)


    Date released 1998

    Card interface PCI/AGP 2x

    Fillrate 180 Mtexels/s

    Fillrate 180 Mpixels/s

    Rendering pipelines 2

    Texture units 2

    Chip clock frequency 90 MHz

    Fabrication process 0.35 µm

    Number of transistors 7 million

    DirectX version 6

    Memory Type SDRAM

    Memory 16 MB

    Memory clock frequency 110 MHz

    Memory bus 128 bits

    Maximum bandwidth 1.75 GB/s

    Video out 1 x VGA

    RAMDAC 250 MHz

    The TNT was a less powerful card than originally planned. Nvidia had wanted to bring out a faster

    card than the Voodoo2, using a 250 nm process with a clock speed of 110 MHz (200 MHz for the memory). In fact, the TNT used a 350 nm process and had a clock speed of 90 MHz, like the 3Dfx

    card, with the memory running at 110 MHz.

    004 NV5: The First Ultra

    In 1999, the TNT2 made its appearance. It was close to what the TNT was originally supposed to

    be, and can be thought of as a die shrink of the TNT from 350 to 250 nm. This was also the first

    time Nvidia used the name Ultra for one of its cards.

    The TNT2 cards were segmented in terms of frequency. At the time, Nvidia used only two versions

    (far from today's bewildering assortment): TNT2 and TNT2 Ultra. The TNT2 was a powerful card for

    its time, easily a match for the Voodoo3 while offering more features, even though there was still

    no MPEG2 decoding. It was also Nvidia's first AGP 4x card, even though that standard wasn't really

    used with the TNT2.

    Nvidia NV5 (Riva TNT2, TNT2 Ultra)

    Date released March 1999 March 1999

    Card interface PCI/AGP 4x PCI/AGP 4x

    Fillrate 250 Mtexels/s 300 Mtexels/s

    Fillrate 250 Mpixels/s 300 Mpixels/s


    Rendering pipelines 2 2

    Texture units 2 2

    Chip clock frequency 125 MHz 150 MHz

    Fabrication process 0.25 µm 0.25 µm

    Number of transistors 15 million 15 million

    DirectX version 6 6

    Memory Type SDRAM SDRAM

    Memory 32 MB 32 MB

    Memory clock frequency 150 MHz 183 MHz

    Memory bus 128 bits 128 bits

    Maximum bandwidth 2.4 GB/s 2.9 GB/s

    Video out 1 x VGA 1 x VGA

    RAMDAC 300 MHz 300 MHz

    The NV6, which also came out in 1999, was a cut-down version of the TNT2. It was sold under the

    names Vanta, Vanta LT, and TNT2 M64. These cards were significantly slower than the TNT2 (and

    the original TNT), essentially because of their lower clock frequency and 64-bit memory bus. They

    were very successful with OEMs, however, who used the name TNT2 as bait.

    005 GeForce: The First GPU

    Late in 1999, Nvidia announced the GeForce 256. This was the first card to use what Nvidia called

    a GPU, but its major advance was really consumer hardware support for T&L (transform and

    lighting). This technology, already being used in OpenGL and in professional 3D, performs

    calculations on triangles on the graphics card instead of on the CPU. The actual gain was

    considerable in certain cases, since the graphics card had approximately four times the power of a

    high-end CPU of the time (15 million triangles per second for the GeForce, as opposed to four million for a 550 MHz Pentium III).
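
    The "transform" half of T&L is, at its core, a 4x4 matrix multiply applied to every vertex, every frame, which is exactly the work the GeForce took off the CPU. A minimal Python sketch (the matrix values are arbitrary examples):

    ```python
    # The transform in "transform and lighting": each homogeneous vertex
    # (x, y, z, 1) is multiplied by a 4x4 matrix, for every vertex drawn.
    def transform(matrix, vertex):
        return [sum(matrix[row][k] * vertex[k] for k in range(4))
                for row in range(4)]

    # Example: translate by +5 along the x axis.
    translate_x5 = [[1, 0, 0, 5],
                    [0, 1, 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1]]
    print(transform(translate_x5, [1.0, 2.0, 3.0, 1.0]))  # [6.0, 2.0, 3.0, 1.0]
    ```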

    The card used a different architecture from the TNT2. Instead of two rendering pipelines, each

    equipped with a texture unit, there were four pipelines, each with one texture unit, which gave the

    GeForce more rendering power at a lower clock frequency. The GeForce 256 was also the first card

    to use DDR SDRAM, increasing memory bandwidth.
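
    The bandwidth figures in these tables all follow from the same arithmetic: memory clock times transfers per clock (one for SDR, two for DDR) times bus width in bytes. A quick check in Python (the function and naming are ours):

    ```python
    # Peak memory bandwidth in GB/s: clock (MHz) x transfers per clock
    # x bus width in bytes, divided by 1000.
    def bandwidth_gbs(clock_mhz, bus_bits, transfers_per_clock=1):
        return clock_mhz * transfers_per_clock * (bus_bits / 8) / 1000

    print(bandwidth_gbs(166, 128))     # GeForce 256 SDR: ~2.6 GB/s
    print(bandwidth_gbs(150, 128, 2))  # GeForce 256 DDR:  4.8 GB/s
    print(bandwidth_gbs(75, 64))       # NV1:              0.6 GB/s
    ```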

    Nvidia NV10 (GeForce 256)

    Date released October 1999 February 2000

    Card interface PCI/AGP 4x PCI/AGP 4x

    Fillrate 480 Mtexels/s 480 Mtexels/s

    Fillrate 480 Mpixels/s 480 Mpixels/s

    Rendering pipelines 4 4

    Texture units 4 4


    Chip clock frequency 120 MHz 120 MHz

    Fabrication process 0.22 µm 0.22 µm

    Number of transistors 23 million 23 million

    DirectX version 7 7

    Memory Type SDRAM DDR

    Maximum memory 32 MB 32 MB

    Memory clock frequency 166 MHz 150 MHz (x2)

    Memory bus 128 bits 128 bits

    Maximum bandwidth 2.6 GB/s 4.8 GB/s

    Video out 1 x VGA 1 x VGA

    RAMDAC 350 MHz 350 MHz

    Video playback MPEG2 semi-hardware MPEG2 semi-hardware

    Nvidia moved directly from NV6 to NV10 for the GeForce 256, and the nomenclature of the

    following models was in steps of five, with variants for the low/high-end models. Also, the GeForce

    256 was the first Nvidia card to handle MPEG2 acceleration, but only partially (Motion

    Compensation). Finally, this was also the first consumer card with a DVI connector (via an external

    chip).

    006 NV15: Nvidia Improves The GeForce 256

    In the year 2000, Nvidia had a fast graphics card, the GeForce 256 DDR, but ATI was starting to get more competitive with its Radeon, which was both faster and more efficient. Nvidia

    responded with a new card, the GeForce 2 GTS. Using a 180 nm fab process, the card was

    noticeably faster than the GeForce 256. It doubled the number of texture units from 1 to 2 per

    rendering pipeline, which enabled the application of eight textures in a single pass. Nvidia released several versions of the card: the GTS (GigaTexel Shader, 200/166), the Pro (200/200), and the Ti (250/200), the figures being GPU/memory clocks in MHz.
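
    The fillrate figures quoted throughout also reduce to simple arithmetic: pixel fillrate is pipelines times clock, and texel fillrate additionally multiplies by the texture units per pipeline. A quick check in Python (our framing):

    ```python
    # Theoretical fillrates: Mpixels/s = pipelines x clock (MHz),
    # Mtexels/s = pipelines x texture units per pipeline x clock.
    def fillrates(pipelines, tmus_per_pipeline, clock_mhz):
        return pipelines * clock_mhz, pipelines * tmus_per_pipeline * clock_mhz

    print(fillrates(4, 2, 200))  # GeForce 2 GTS: (800, 1600)
    print(fillrates(2, 1, 90))   # Riva TNT:      (180, 180)
    ```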

    Nvidia NV15 (GeForce 2 GTS)

    Date released April 2000

    Card interface PCI/AGP 4x

    Fillrate 1600 Mtexels/s

    Fillrate 800 Mpixels/s

    Rendering pipelines 4

    Texture units 8

    Chip clock frequency 200 MHz

    Fabrication process 0.18 µm

    Number of transistors 25 million

    DirectX version 7


    Memory Type DDR

    Maximum memory 64 MB

    Memory clock frequency 166 MHz (x2)

    Memory bus 128 bits

    Maximum bandwidth 5.3 GB/s

    Video out 1 x VGA

    RAMDAC 350 MHz

    Video playback MPEG2 semi-hardware

    In August 2000, pending the release of the GeForce 3, Nvidia put out the NV16 (GeForce 2 Ultra).

    This was not a new card but rather an NV15 with higher clock frequencies: 250 MHz for the GPU and

    230 MHz for the memory, compared to 200 and 166 MHz on the original card. This was also one of

    the most expensive cards ever produced by Nvidia.

    007 NV11: The First Low-End Version

    The GeForce 2 GTS had great performance, but also a high price tag, and Nvidia needed to offer a card for gaming enthusiasts who couldn't afford to spend a small fortune on a computer. The company's answer was the NV11, the GeForce 2 MX, also released in 2000. Unlike the TNT2 M64 and Vanta, which in reality were nothing more than an NV5 with a 64-bit memory bus, the NV11 had a new architecture derived from the GeForce 2 GTS. Nvidia did away with half of the rendering pipelines, but for multitexturing a GeForce 2 MX had more power than a GeForce 256.

    This was the first Nvidia card that could manage more than one display, and that function was to remain part of Nvidia's midrange cards for a few years. The GeForce 2 MX had only SDR memory and was also the first GeForce to be released in a mobile version (the GeForce 2 Go).

    Nvidia NV11 (GeForce 2 MX)

    Date released June 2000

    Card interface PCI/AGP 4x

    Fillrate 700 Mtexels/s

    Fillrate 350 Mpixels/s

    Rendering pipelines 2

    Texture units 4

    Chip clock frequency 175 MHz

    Fabrication process 0.18 µm

    Number of transistors 19 million

    DirectX version 7

    Memory Type SDRAM

    Maximum memory 64 MB


    Memory clock frequency 166 MHz

    Memory bus 128 bits

    Maximum bandwidth 2.6 GB/s

    Video out 2 x VGA/DVI

    RAMDAC 350 MHz

    Video playback MPEG2 semi-hardware

    Nvidia brought out several versions of the GeForce 2 MX in addition to the standard model and the

    Go version. These included the MX400 (equipped with a GPU clocked at 200 MHz), the MX200 (with

    a 175 MHz GPU and 64-bit memory bus at 166 MHz) and the very poor MX100, with a GPU clocked

    at only 143 MHz and 32-bit memory (0.6 GB/s bandwidth). Finally, some rare cards were equipped

    with 64-bit DDR and were basically equivalent to the 128-bit SDR versions.

    008 Enter The GeForce 3

    In 2001, the GeForce 3 made its appearance. This card, the first to be DirectX 8 compatible,

    supported programmable pixel shaders. With 57 million transistors, the card used fairly

    conservative clock speeds, and a GeForce 2 Ultra could outperform it in many cases (at the time it

    was released). The card brought a few improvements in memory management, but its complex

    architecture prevented Nvidia from developing an entry-level version.

    Nvidia NV20 (GeForce 3)

    Date released March 2001

    Card interface PCI/AGP 4x

    Fillrate 2000 Mtexels/s

    Fillrate 1000 Mpixels/s

    Rendering pipelines 4

    Texture units 8

    Vertex Shader units 1

    Pixel Shader version 1.1

    Chip clock frequency 250 MHz

    Fabrication process 0.15 µm

    Number of transistors 57 million

    DirectX version 8

    Memory Type DDR

    Maximum memory 64 MB

    Memory clock frequency 230 MHz (x2)

    Memory bus 128 bits

    Maximum bandwidth 7.4 GB/s


    Video out 1 x VGA

    RAMDAC 350 MHz

    Video playback MPEG2 semi-hardware

    Nvidia offered two different versions of the GeForce 3: the Ti 200, which was a little less expensive than the original, and the Ti 500, which was more expensive. The former was clocked at 175/200 (GPU/memory) and the latter at 240/250 MHz.

    009 The GeForce 4 That Was A GeForce 2

    Moving ahead to 2002, Nvidia had a high-performance card in the GeForce 3, but it was too complex. Creating a new card based on its architecture (as had been done with the NV11) was a difficult proposition, and so Nvidia used the architecture of the GeForce 2 to create the NV17, marketed as the GeForce 4 MX. The cards used the same architecture as the GeForce 2 MX (two pipelines, each capable of rendering two textures) but ran at higher clock rates. The cards also used the memory management introduced with the GeForce 3, had hardware MPEG2 decoding, and supported multiple displays. Still, they were DirectX 7 cards, and so were outdated from the time they were launched, despite adequate performance in some cases.

    The line included three cards: the MX420, MX440 and MX460. The first was clocked at 250 MHz for

    the GPU and 166 MHz (SDR) for the memory; the second ran at 275/200 (DDR), and the third at

    300/275 (DDR).

    Nvidia NV17 (GeForce 4 MX 440)

    Date released February 2002

    Card interface PCI/AGP 4x

    Fillrate 1100 Mtexels/s

    Fillrate 550 Mpixels/s

    Rendering pipelines 2

    Texture units 4

    Chip clock frequency 275 MHz

    Fabrication process 0.15 µm

    Number of transistors 27 million

    DirectX version 7

    Memory Type DDR

    Maximum memory 128 MB

    Memory clock frequency 200 MHz (x2)

    Memory bus 128 bits

    Maximum bandwidth 6.4 GB/s

    Video out 2 x VGA/DVI

    RAMDAC 350 MHz


    Video playback MPEG2 hardware

    In addition to the 420, 440 and 460 versions, Nvidia offered mobile versions (GeForce 4 Go), AGP 8x versions (using the NV18 chip, AGP 8x being its only improvement), and even a PCI Express version in 2003: the PCX4300, with an AGP 8x-to-PCI Express 16x bridge.

    010 NV2A: A GeForce In A Console

    In 2001, Microsoft introduced its first game console, the Xbox. It was very close to a PC in terms of architecture: it used an x86 processor, ran Windows, and its graphics chip came from Nvidia. The NV2A, as it was called, was an intermediate chip between the GeForce 3 and GeForce 4. It was well-optimized in the Xbox and supported DirectX 8.1 (through the console's NT5 kernel), enabling the console to offer some very graphically impressive games for its time.

    Nvidia NV2A (Xbox)

    Date released November 2001

    Card interface N/A

    Fillrate 1864 Mtexels/s

    Fillrate 932 Mpixels/s

    Rendering pipelines 4

    Texture units 8

    Vertex units 2

    Chip clock frequency 233 MHz

    Fabrication process 0.15 µm

    Number of transistors 63 million

    DirectX version 8.1

    Memory Type DDR

    Maximum memory 64 MB

    Memory clock frequency 200 MHz (x2)

    Memory bus 128 bits

    Maximum bandwidth 6.4 GB/s

    For the Xbox 360, ATI supplied the GPU and Nvidia went over to the enemy with its RSX chip used

    in the PlayStation 3.

    011 An Improved GeForce 3: The GeForce 4 Ti

    The successor to the GeForce 3, released in February 2002, was called the GeForce 4 Ti. Its

    architecture was similar to that of the NV20 (GeForce 3), but the NV25 was significantly faster due

    to its 150 nm process. Nvidia gave the GeForce 4 Ti approximately three times the Vertex Shader

    power of the GeForce 3 by increasing the clock frequency and doubling the number of ALUs. In

    addition, Nvidia improved LMA (Lightspeed Memory Architecture), the technology that limits memory bandwidth use by not

    calculating undisplayed data.


    Nvidia sold three versions of the card: the Ti 4200, the Ti 4400 and the Ti 4600. The differences

    among the cards were in the clock speeds: 250 MHz for the GPU and 250 MHz for the memory (Ti

    4200); 275/275 for the Ti 4400; and 300/325 for the high-end Ti 4600.

    Nvidia NV25 (GeForce 4 Ti 4600)

    Date released February 2002

    Card interface PCI/AGP 4x

    Fillrate 2400 Mtexels/s

    Fillrate 1200 Mpixels/s

    Rendering pipelines 4

    Texture units 8

    Vertex Shader units 2

    Pixel Shader version 1.3

    Chip clock frequency 300 MHz

    Fabrication process 0.15 µm

    Number of transistors 63 million

    DirectX version 8

    Memory Type DDR

    Maximum memory 128 MB

    Memory clock frequency 325 MHz (x2)

    Memory bus 128 bits

    Maximum bandwidth 10.4 GB/s

    Video out 2 x VGA

    RAMDAC 350 MHz

    Video playback MPEG2 semi-hardware

    Late in 2002, the NV28 arrived. This GPU was similar to the NV25, simply adding AGP 8x support

    to the GeForce 4 Ti cards. The GeForce Ti 4800 (300/325) was identical to the GeForce 4 Ti 4600 except for the addition of AGP 8x compatibility. The GeForce Ti 4200 128 MB had a lower

    bandwidth than the 64 MB version because the memory ran at 222 MHz compared to 250 MHz in

    the 64 MB version.

    012 NV30: Nvidia Loses With The FX 5800

    In January 2003, Nvidia released the GeForce FX 5800 (NV30). This card was criticized both for its

    performance, which was unworthy of a high-end card, and its high noise level. Released at around

    the same time, ATIs Radeon 9700 Pro was much more efficient and also faster. The NV30 was a

    commercial failure, even if Nvidia sometimes says that the failure was one of the best things that ever happened to the company, since it proved that you can never rest on your laurels.


    Nvidia NV30 (GeForce FX 5800)

    Date released January 2003

    Card interface PCI/AGP 8x

    Fillrate (Mtexels) 3200 Mtexels/s

    Fillrate (Mpixels) 1600 Mpixels/s

    Rendering pipelines 4

    Texture units 8

    Vertex Shader units 2

    Pixel Shader version 2.0a

    Chip clock frequency 400 MHz

    Fabrication process 0.13 µm

    Number of transistors 125 million

    DirectX version 9

    Memory Type DDR2

    Memory (generally) 128 MB

    Memory clock frequency 400 MHz (x2)

    Memory bus 128 bits

    Maximum bandwidth 12.8 GB/s

    Video out 2 x VGA

    RAMDAC 400 MHz

    Video playback MPEG2 hardware

    The Ultra version of the card was faster (or shall we say less slow), with a clock speed of 500 MHz

    for the GPU and memory (DDR2).

    013 NV3x: Nvidia Releases FX (and PCX) Versions

    Even after the failure of the NV30, Nvidia kept the architecture, with the GeForce FX 5900

    replacing the GeForce FX 5800. With its 256-bit memory bus and improved vertex calculating

    power, the FX 5900 managed to hold its own against competing cards like the Radeon 9800 Pro.

    Nvidia also released entry-level and midrange versions of its GeForce FX: the FX5600 (NV31) and

    FX5700 (NV36) in the midrange, and the entry-level FX5200 (NV34). These cards were noteworthy

    in that the earlier midrange card (the GeForce 4 Ti 4200) could outperform them.

    Nvidia NV3x

    Name of the card NV35 (FX 5900) NV31 (FX 5600) NV36 (FX 5700) NV34 (FX 5200)

    Date released May 2003 March 2003 October 2003 March 2003

    Card interface PCI/AGP 8x PCI/AGP 8x PCI/AGP 8x PCI/AGP 8x


    Fillrate (Mtexels) 3200 Mtexels/s 1300 Mtexels/s 1700 Mtexels/s 1000 Mtexels/s

    Fillrate (Mpixels) 1600 Mpixels/s 1300 Mpixels/s 1700 Mpixels/s 1000 Mpixels/s

    Rendering pipelines 4 4 4 4

    Texture units 8 4 4 4

    Vertex Shader units 3 1 3 1

    Chip clock frequency 400 MHz 325 MHz 425 MHz 250 MHz

    Fabrication process 0.13 µm 0.13 µm 0.13 µm 0.13 µm

    Number of transistors 130 million 80 million 82 million 47 million

    DirectX version 9 9 9 9

    Pixel Shader version 2.0a 2.0a 2.0a 2.0a

    Memory Type DDR DDR DDR DDR

    Memory (generally) 256 MB 128 MB 256 MB 128 MB

    Memory clock frequency 425 MHz (x2) 275 MHz (x2) 250 MHz (x2) 200 MHz (x2)

    Memory bus 256 bits 128 bits 128 bits 128 bits

    Maximum bandwidth 27.2 GB/s 8.8 GB/s 8 GB/s 6.4 GB/s

    Video out 2 x VGA 2 x VGA 2 x VGA 2 x VGA

    RAMDAC 400 MHz 350 MHz 350 MHz 350 MHz

    Video playback MPEG2 hardware MPEG2 hardware MPEG2 hardware MPEG2 hardware

    Nvidia also released PCI Express cards, the GeForce PCX series, but they were essentially AGP cards with an AGP-to-PCI Express bridge. Some FX 5200 cards had a 64-bit bus (instead of 128-bit) and a slower memory clock frequency (166 MHz instead of 200 MHz).

    014 NV40/NV45: Nvidia Gets Back In The Race With The GeForce 6800 And SLI

    After the failure of the NV30, it was imperative for Nvidia to snap back. And it did, with the NV40, also known as the GeForce 6800. This card was extremely efficient and more powerful than the FX 5900, due to its large number of transistors (222 million). The NV45, also called GeForce 6800, was nothing more than an NV40 with an AGP-to-PCI Express bridge, giving the card support for the new standard and, above all, for SLI. The SLI technology couples two PCI Express GeForce 6 cards to increase performance.
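
    One way SLI can divide the work is alternate frame rendering, with successive frames handed to the GPUs in turn. A toy Python dispatcher to illustrate the idea (a conceptual sketch, not Nvidia's driver logic):

    ```python
    # Alternate frame rendering (AFR) in miniature: frame N is rendered
    # by GPU N modulo the number of GPUs.
    def dispatch_frames(frames, gpu_count=2):
        for n, frame in enumerate(frames):
            print(f"GPU {n % gpu_count} renders frame {frame}")

    dispatch_frames(["f0", "f1", "f2", "f3"])
    # GPU 0 renders frame f0
    # GPU 1 renders frame f1
    # GPU 0 renders frame f2
    # GPU 1 renders frame f3
    ```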

    Nvidia NV40 and NV45 (GeForce 6800 Ultra)

    Date released April 2004 March 2005

    Card interface AGP 8x PCI Express 16x

    Fillrate (Mtexels) 6400 Mtexels/s 6400 Mtexels/s

    Fillrate (Mpixels) 6400 Mpixels/s 6400 Mpixels/s


    Rendering pipelines 16 16

    Texture units 16 16

    Vertex Shader units 6 6

    Chip clock frequency 400 MHz 400 MHz

    Fabrication process 0.13 µm 0.13 µm

    Number of transistors 222 million 222 million

    DirectX version 9c 9c

    Pixel Shader Version 3.0 3.0

    Memory Type GDDR3 GDDR3

    Memory (generally) 256 MB 256 MB

    Memory clock frequency 550 MHz (x2) 550 MHz (x2)

    Memory bus 256 bits 256 bits

    Maximum bandwidth 35.2 GB/s 35.2 GB/s

    Video out 2 x VGA 2 x VGA

    RAMDAC 400 MHz 400 MHz

    Video playback MPEG2 hardware MPEG2 hardware

    Multi-GPU support N/A 2

    Cards based on the NV41 and NV42 were also produced. The NV41 is an NV40 with fewer processing units (12 pipelines and 5 vertex units) used in certain GeForce 6800 cards; the NV42 is an NV41 fabricated with a 110 nm process (and thus less expensive to produce).

    015 GeForce 6 Invades The Planet

    After the GeForce 6800, Nvidia needed to introduce cards that were slower and less expensive. The

    NV40 was powerful, but its 222 million transistors limited fabrication yields and increased the price,

    so the two cards built from it, the GeForce 6600 and 6200, had only moderate success. The

    GeForce 6600, fabricated at 110 nm, was based on the NV43 and offered good performance at a

    decent price. The PCI Express versions of these cards could even operate in SLI mode.

    The GeForce 6600 was the first natively PCI Express Nvidia card; AGP versions used a PCI Express-

    to-AGP bridge. The GeForce 6200 was an entry-level card, not very powerful but not very

    expensive. PCI Express, AGP, and PCI versions were produced, and there were also versions built

    into laptops.

    Nvidia NV43 and NV44 (GeForce 6600 GT and GeForce 6200)

    Date released August 2004 August 2004

    Card interface PCI Express 16x PCI Express 16x

    Fillrate (Mtexels) 4000 Mtexels/s 1400 Mtexels/s

    Fillrate (Mpixels) 2000 Mpixels/s 700 Mpixels/s


    Rendering pipelines 4 2

    Texture units 8 4

    Vertex shader units 3 3

    Chip clock frequency 500 MHz 350 MHz

    Fabrication process 0.11 µm 0.11 µm

    Number of transistors 143 million 77 million

    DirectX version 9c 9c

    Pixel Shader version 3.0 3.0

    Memory Type GDDR3 GDDR3

    Memory (generally) 128 MB 64 MB

    Memory clock frequency 450 MHz (x2) 350 MHz (x2)

    Memory bus 128 bits 64 bits

    Maximum bandwidth 14.4 GB/s 5.6 GB/s

    Video out 2 x VGA 2 x VGA

    RAMDAC 400 MHz 400 MHz

    Video playback MPEG2 hardware MPEG2 hardware

    Multi-GPU support 2 N/A

    The GeForce 6200 was the first TurboCache card from Nvidia. In addition to the dedicated memory

    (16 to 512 MB), the card can use system RAM as video memory. Some manufacturers took advantage of this to tout the GeForce 6200 as a 256 MB card, when in fact it had only 64 MB of

    dedicated memory. Note also that a built-in version of the NV44, the GeForce 6100, was included

    in certain Nvidia chipsets. The chip used a 90 nm process and had a single rendering pipeline and

    no dedicated memory.
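
    TurboCache behaves, in rough terms, like an allocator with a fallback: a surface goes into the small dedicated pool if it fits, and otherwise lives in system RAM reached over PCI Express. A conceptual Python sketch (our simplification, not the actual driver policy):

    ```python
    # Toy TurboCache-style allocation: prefer the small dedicated pool,
    # spill to system RAM over PCI Express when it is full.
    def allocate(size_mb, local_free_mb, system_free_mb):
        if size_mb <= local_free_mb:
            return "local", local_free_mb - size_mb, system_free_mb
        if size_mb <= system_free_mb:
            return "system", local_free_mb, system_free_mb - size_mb
        raise MemoryError("out of video memory")

    print(allocate(32, 64, 192))   # fits in the 64 MB dedicated pool
    print(allocate(128, 32, 192))  # spills to system RAM
    ```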

    016 G70 and G71: Nvidia Changes Its Nomenclature

    In 2005, Nvidia announced the GeForce 7. The GPU code name, which had traditionally been

    NVxx, changed to Gxx. The first card was the G70 (GeForce 7800), followed fairly quickly by the

    G71 (GeForce 7900). More powerful than the 6800 series, the GeForce 7800 was a success for

    Nvidia. The cards were sold in many different versions, such as the GTX and GS. AGP versions with

    a PCI Express-to-AGP bridge were also sold.

    Nvidia G70 and G71 (GeForce 7800 GTX and 7900 GTX)

    Date released June 2005 March 2006

    Card interface PCI Express 16x PCI Express 16x

    Fillrate (Mtexels) 13200 Mtexels/s 15600 Mtexels/s

    Fillrate (Mpixels) 8800 Mpixels/s 10400 Mpixels/s

    Rendering pipelines 16 16


    Texture units 24 24

    Vertex units 8 8

    Chip clock frequency 550 MHz 650 MHz

    Fabrication process 0.11 µm 0.09 µm

    Number of transistors 302 million 278 million

    DirectX version 9c 9c

    Pixel Shader version 3.0 3.0

    Memory Type GDDR3 GDDR3

    Memory (generally) 512 MB 512 MB

    Memory clock frequency 850 MHz (x2) 800 MHz (x2)

    Memory bus 256 bits 256 bits

    Maximum bandwidth 54.4 GB/s 51.2 GB/s

    Video out 2 x VGA 2 x VGA

    RAMDAC 400 MHz 400 MHz

    Video playback MPEG2 hardware, WMV9 semi-hardware MPEG2 hardware, WMV9 semi-hardware

    Multi-GPU support 2 4 (2x2)

    With the GeForce 7900, Nvidia also used, for the first time, a technique its competitors had already

    been using: dual-GPU cards. The 7900GX2 and 7950GX2 had two G71s in parallel. The company

    was to re-use this technique in 2008 with the GeForce 9800GX2.

    017 G72 and G73: Low-end GeForce 7s

    As has become its habit, Nvidia released two other versions of its high-end architecture: one entry-level (G72, GeForce 7300) and one midrange (G73, GeForce 7600). Both chips were

    fabricated with a 90 nm process and offered adequate performance. As is often the case, the

    mobile versions used the midrange chips, and the GeForce 7300 Go was very popular.

    Nvidia G72 and G73 (GeForce 7300 GS and 7600 GT)

    Date released January 2006 March 2006

    Card interface PCI Express 16x PCI Express 16x

    Fillrate (Mtexels) 2200 Mtexels/s 6720 Mtexels/s

    Fillrate (Mpixels) 1100 Mpixels/s 4480 Mpixels/s

    Rendering pipelines 2 8

    Texture units 4 12

    Vertex Shader units 3 5

    Chip clock frequency 550 MHz 560 MHz


    Fabrication process 0.09 µm 0.09 µm

    Number of transistors 112 million 177 million

    DirectX version 9c 9c

    Pixel Shader version 3.0 3.0

    Memory Type GDDR GDDR3

    Memory (generally) 128 MB 256 MB

    Memory clock frequency 400 MHz (x2) 700 MHz (x2)

    Memory bus 64 bits 128 bits

    Maximum bandwidth 6.4 GB/s 22.4 GB/s

    Video out 2 x VGA 2 x VGA + 2 x TMDS

    RAMDAC 400 MHz 400 MHz

    Video playback MPEG2 hardware, WMV9 semi-hardware MPEG2 hardware, WMV9 semi-hardware

    Multi-GPU support N/A 2

    Slower (7200 Go) and faster (7400 Go) portable versions were also produced, and an 80 nm

    version of the G73 was also sold by Nvidia.

    018 Nvidia And The 8800: GeForce 8 Or GeForce 9?

    In November 2006, Nvidia announced the G80. This chip and its derivatives were destined to have a long life. In fact, as of 2008, some of the fastest cards available from Nvidia were still using a chip that's very close to this G80 (the G92). Nvidia got as much mileage as possible out of the G80, and the move to a 65 nm process with the G92 allowed the company to save money on the cost of the chip. Nvidia varied the number of stream processors, the width of the memory bus, and clock speeds in order to produce a plethora of GeForce 8800 and 9800 versions. There's even a version with two GPUs: the GeForce 9800GX2.

    The GeForce 8800 series cards were all DirectX 10 compatible, and Nvidia scored a great success

    with this series, pending the arrival of the GeForce GTX.

    Nvidia G80 and G92 (GeForce 8800 GTX and 9800 GTX)

    Date released November 2006 April 2008

    Card interface PCI Express 16x PCI Express 16x (2.0)

    Fillrate (Mtexels) 18400 Mtexels/s 43200 Mtexels/s

    Fillrate (Mpixels) 13800 Mpixels/s 10800 Mpixels/s

    Rendering pipelines 24 16

    Texture units 32 64

    Stream Processors 128 128


    Chip clock frequency 575 MHz 675 MHz

    Fabrication process 0.09 µm 0.065 µm

    Number of transistors 681 million 754 million

    DirectX version 10 10

    Pixel Shader version 4.0 4.0

    Memory Type GDDR3 GDDR3

    Memory (generally) 768 MB 512 MB

    Memory clock frequency 900 MHz (x2) 1100 MHz (x2)

    Memory bus 384 bits 256 bits

    Maximum bandwidth 86.4 GB/s 70.4 GB/s

    Video out NVIO (external chip) 2 x TMDS (Dual Link), HDCP

    RAMDAC 400 MHz 400 MHz

    Video playback MPEG2 hardware, WMV9 semi-hardware MPEG2 hardware, H.264 hardware

    Multi-GPU support 3 3

    Just for a laugh, let's run through all the GeForce 8800 series cards that have been released: the 8800GS 384, 8800GS 768, 8800GTS 320, 8800GTS 640, 8800GTS 640 v2, 8800GTS 512, 8800GT 256, 8800GT 512, 8800GT 1024, 8800GTX 768 and 8800 Ultra 768. Then there's the 9600GSO 512, 9600GSO 384 and 9600GSO 768, and the 9800GX2 and 9800GTX, not to mention the future 9800GTS and 9800GT. And that's not counting the mobile versions!

    019 Entry-Level GeForce 8s

    To be able to market economy versions of the card, Nvidia had to severely modify the G80. Given

    the number of transistors, it was out of the question to use it as-is. So the company offered three

    chips, more or less: the GeForce 8400 (G86), GeForce 8600 (G84) and GeForce 9600 (G94). Other

    versions existed (GeForce 8300, 8500, and so on), but those three models are the major ones. The

    G84 was widely used in notebooks as a high-end card, whereas in desktop PCs it was only a midrange GPU.

    Nvidia G86, G84 and G94 (GeForce 8400 GS, GeForce 8600 GT and 9600 GT)

    Date released June 2007 April 2007 February 2008

    Card interface PCI Express 16x PCI Express 16x PCI Express 16x (2.0)

    Fillrate (Mtexels) 3600 Mtexels/s 8640 Mtexels/s 20800 Mtexels/s

    Fillrate (Mpixels) 1800 Mpixels/s 4320 Mpixels/s 10400 Mpixels/s

    Rendering pipelines 4 8 16

    Texture units 8 16 32

    Stream Processors 16 32 64

    Chip clock frequency 450 MHz 540 MHz 650 MHz

    Fabrication process 0.08 µm 0.08 µm 0.065 µm

    Number of transistors 210 million 289 million 505 million

    DirectX version 10 10 10

    Pixel Shader version 4.0 4.0 4.0

    Memory Type DDR2 GDDR3 GDDR3

    Memory (generally) 256 MB 256 MB 512 MB

    Memory clock frequency 400 MHz (x2) 700 MHz (x2) 900 MHz (x2)

    Memory bus 64 bits 128 bits 256 bits

    Maximum bandwidth 6.4 GB/s 22.4 GB/s 57.6 GB/s

    Video out 2 x TMDS (Dual Link), HDCP 2 x TMDS (Dual Link), HDCP 2 x TMDS (Dual Link), HDCP

    RAMDAC 400 MHz 400 MHz 400 MHz

    Video playback MPEG2 hardware, H.264 hardware MPEG2 hardware, H.264 hardware MPEG2 hardware, H.264 hardware

    Multi-GPU support N/A 2 2

    The GeForce 8600 and GeForce 8400 were as mediocre as the G80 and GeForce 8800 were

    successful. The spread between high-end and midrange cards (before the arrival of the GeForce 9600) was very wide for this generation, which caused problems for gamers.

    020