• Posted on October 25, 2017 4:57 pm
    Joseph Forbes
    No comments

    As outlined in the Adobe Support Lifecycle Policy, Adobe provides five years of product support from the general availability date of Adobe Acrobat and Adobe Reader. In line with that policy, support for Adobe Acrobat 11.x and Adobe Reader 11.x will end on October 15, 2017.

    What does End of Support mean? End of support means that Adobe no longer provides technical support, including product and/or security updates, for any derivative of a product or product version (e.g., localized versions, minor upgrades, operating systems, dot and double-dot releases, and connector products).

    What should I do now? You may continue to use Acrobat XI and Reader XI, but Adobe will no longer provide any updates or address any existing bugs or security issues in the software, and technical support for Acrobat XI will also be discontinued. Because of this, it is strongly recommended that you update to the latest versions of Adobe Acrobat DC and Adobe Acrobat Reader DC. This will ensure that you benefit from all new functional enhancements and security updates, not to mention support for newer operating systems.
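    For Windows admins who want to find machines still on the unsupported 11.x track, a quick inventory check helps. The following is a minimal sketch only, not an Adobe-supplied tool: the registry locations and version subkey names it looks for are assumptions about where typical Acrobat/Reader installs record themselves, so adjust them to whatever your environment actually uses.

```python
"""Rough check for end-of-support Acrobat/Reader 11.x installs on Windows.

Minimal sketch only: the registry paths below are assumptions about where
typical Adobe installers record their version subkeys (e.g. "11.0", "DC"),
not an official Adobe interface. Requires a stock Python 3 on Windows.
"""
import winreg

# Assumed per-product registry roots; 32-bit installs on 64-bit Windows may
# live under SOFTWARE\WOW6432Node\Adobe\... instead.
PRODUCT_KEYS = [
    r"SOFTWARE\Adobe\Acrobat Reader",
    r"SOFTWARE\Adobe\Adobe Acrobat",
]


def installed_versions(key_path: str) -> list[str]:
    """List version subkeys (such as '11.0' or 'DC') under one product key."""
    versions = []
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            index = 0
            while True:
                try:
                    versions.append(winreg.EnumKey(key, index))
                    index += 1
                except OSError:
                    break  # no more subkeys under this product key
    except FileNotFoundError:
        pass  # product not installed under this (assumed) path
    return versions


if __name__ == "__main__":
    for path in PRODUCT_KEYS:
        for version in installed_versions(path):
            flag = "  <-- end of support, upgrade to DC" if version.startswith("11") else ""
            print(f"{path}: {version}{flag}")
```

    A script like this only tells you what the registry claims is installed; rolling the actual DC upgrade out is still a job for your normal deployment tooling.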

    DATA, NERD NEWS, Software
  • Posted on July 13, 2017 12:03 pm
    Joseph Forbes
    No comments

    Internet or 'Net' Neutrality, by definition, means that there are no restrictions of any kind on access to content on the Web, no restrictions on downloads or uploads, and no restrictions on communication methods (email, chat, IM, etc.). It also means that access to the internet will not be blocked, slowed down, or sped up depending on where that access is based or who owns the access point(s). In essence, the internet is open to everyone.

    What does an open internet mean for the average Web user? When we get on the Web, we are able to access the entire Web: that means any website, any video, any download, any email. We use the Web to communicate with others, go to school, do our jobs, and connect with people all over the world. Because of the freedom that governs the Web, this access is granted without any restrictions whatsoever.

    Why is Net Neutrality important?
    Growth: Net neutrality is the reason that the Web has grown at such a phenomenal rate from the time it was created in 1991 by Sir Tim Berners-Lee (see also History of the World Wide Web).
    Creativity: Creativity, innovation, and unbridled inventiveness have given us Wikipedia, YouTube, Google, I Can Has Cheezburger, torrents, Hulu, The Internet Movie Database, Reddit, LifeWire, and many more.
    Communication: Net neutrality has given us the ability to freely communicate with people on a personal basis: government leaders, business owners, celebrities, work colleagues, medical personnel, family, etc., without restrictions.
    Strong net neutrality rules should be left in place to ensure all of these things exist and thrive. If Net Neutrality rules are removed, everyone who uses the internet will lose these freedoms.

    Is Net Neutrality available worldwide? No. There are countries whose governments restrict their citizens’ access to the Web for political reasons. Vimeo has a great video on this very topic that explains how limiting access to the internet can impact everyone in the world.

    Is Net Neutrality in danger? Possibly. There are many companies that have a vested interest in making sure that access to the Web is not freely available. These companies are already in charge of most of the Web’s infrastructure, and they see potential profit in making the Web “pay for play”. This could result in restrictions on what Web users are able to search for, download, or read. Some people in the United States are even afraid that changes from the Federal Communications Commission (FCC) could result in a negative net neutrality ruling.

    At Fight for the Future's Battle for Net Neutrality site, you can send a letter directly to the FCC and Congress and let them know how you feel. You can also file a document into the official FCC proceeding to let officials know whether or not you want Net Neutrality regulations to change or remain in place. It's a super wonky form with a couple of weird things (hey, this is the government!) so follow these instructions carefully:

    1. Visit ECFS Express at the FCC website.
    2. Type 17-108 in the Proceeding(s) box, then press Enter to turn the number into a yellow/orange box.
    3. Type your first name and last name in the Name(s) of Filer(s) box, then press Enter to turn your name into a yellow/orange box.
    4. Fill in the rest of the form as you would normally fill in an internet form.
    5. Check the Email Confirmation box.
    6. Tap or click the Continue to review screen button.
    7. On the next page, tap or click the Submit button.

    That's it! You've made your feelings known.

    What would happen if Net Neutrality were to be restricted or abolished?
    Net neutrality is the foundation of the freedom that we enjoy on the Web. Losing that freedom could result in consequences such as restricted access to websites and diminished download rights, as well as controlled creativity and corporate-governed services. Some people call that scenario the 'end of the internet.'

    What are "Internet fast lanes"? How are they related to net neutrality? "Internet fast lanes" are special deals and channels that would give some companies exceptional treatment as far as broadband access and internet traffic are concerned. Many people believe that this would violate the concept of net neutrality. Internet fast lanes could cause issues because instead of internet providers being required to provide the same service for all subscribers regardless of size/company/influence, they would be able to make deals with certain companies that would give them preferred access. This practice could potentially hamper growth, strengthen illegal monopolies, and cost the consumer. In addition, an open internet is essential for a continued free exchange of information – a bedrock concept that the World Wide Web was founded upon.

    Net neutrality is important. Net neutrality in the context of the Web is somewhat new, but the concept of neutral, publicly accessible information and transfer of that information has been around since the days of Alexander Graham Bell. Basic public infrastructure, such as subways, buses, telephone companies, etc., is not allowed to discriminate, restrict, or differentiate common access, and this is the core concept behind net neutrality as well. For those of us who appreciate the Web, and want to preserve the freedom that this amazing invention has given us to exchange information, net neutrality is a core concept that we must work to maintain.

    Blog Entry, DATA, EDUCATION
  • Posted on December 17, 2016 7:39 pm
    Joseph Forbes
    No comments

    REPOST: Oracle finally targets Java non-payers – six years after plucking Sun

    Oracle is massively ramping up audits of Java customers it claims are in breach of its licences – six years after it bought Sun Microsystems. A growing number of Oracle customers and partners have been approached by Larry Ellison’s firm, which claims they are out of compliance on Java. Oracle bought Java with Sun Microsystems in 2010 but only now is its License Management Services (LMS) division chasing down people for payment, we are told by people familiar with the matter. The database giant is understood to have hired 20 individuals globally this year whose sole job is the pursuit of businesses in breach of their Java licences. In response, industry compliance specialists are themselves ramping up, hiring Java experts and expanding in anticipation of increased action from LMS on Java in 2017.

    Huge sums of money are at stake, with customers on the hook for multiple tens and hundreds of thousands of dollars. The version of Java in contention is Java SE, with three paid flavours that range from $40 to $300 per named user and from $5,000 to $15,000 for a processor licence. The Register has learned of one customer in the retail industry with 80,000 PCs that was informed by Oracle it was in breach of its Java agreement. Oracle apparently told another Java customer it owed $100,000 – but the bill was slashed to $30,000 upon challenge.

    Experts are now advising extreme caution in downloading Java SE, while those who’ve downloaded should review their use – and be prepared before LMS comes calling. Those gurus separately told The Reg of an upswing in customers seeking help on Java licensing having been contacted by LMS in the second half of 2016. “Oracle has started marking this as an issue,” one expert told The Reg on condition of anonymity. Our source claimed there had been an upswing in enquiries in the last five months. Craig Guarente, chief executive and founder of Palisade Compliance, told us Oracle’s not drawing the line at customers either, with partners feeling the LMS heat, too. “Oracle is targeting its partners. That makes people angry because they are helping Oracle,” he told us. Partners want to know: “How could Oracle do this to me?” “Java is something that comes up more and more with our clients because Oracle is pushing them more and more,” Guarente said.

    The root cause seems to be the false perception that Java is “free”. That perception dates from the time of Sun; Java under Sun was available for free – as it is under Oracle – but for a while Sun did charge a licence fee to companies like IBM and makers of Blu-ray players, though for the vast majority, Java came free of charge. That was because Sun used Java as the thin end of the wedge to help sales of its systems. Oracle has taken the decision to monetise Java more aggressively.

    Java SE is a broad and all-encompassing download that includes Java SE Advanced Desktop, introduced by Oracle in February 2014, and Java SE Advanced and Java SE Suite, introduced by Oracle in May 2011. Java SE is free but Java SE Advanced Desktop, Advanced and Suite are not. Java SE Suite, for example, costs $300 per named user with a support bill of $66; there’s a per-processor option of $15,000 with a $3,300 support bill. Java SE comes with the free JDK and JRE, but Advanced Desktop, Advanced and Suite layer in additional capabilities such as Java Mission Control and Flight Recorder, also known as JRockit Mission Control and JRockit Flight Recorder.
    Also added is the Microsoft Windows Installer Enterprise JRE Installer for large-scale rollout of Java. Java SE is free for what Oracle defines as “general purpose computing” – devices that in the words of its licence cover desktops, notebooks, smartphones and tablets. It is not free for what Oracle’s licence defines as “specialized embedded computers used in intelligent systems”, which Oracle further defines as – among other things – mobile phones, hand-held devices, networking switches and Blu-ray players. It sounds simple enough, doesn't it? But it is customers in these general-purpose settings who are getting hit by LMS.

    The reason is there’s no way to separate the paid Java SE sub-products from the free Java SE umbrella at download, as Oracle doesn’t offer separate installation software. And you only become a designated user of, say, Java SE Suite when you use the necessary bits associated with that profile – and then you pay accordingly. If you want to roll out Java SE in a big deployment, as you would following development of your app, then you’ll need the Microsoft Windows Installer Enterprise JRE Installer – and that’s not part of the free Java SE. “People aren’t aware,” Guarente told The Reg. “They think Java is free – because it’s open source so you can use it. It’s not that the contracts are unclear; there’s a basic misunderstanding."

    Our anonymous compliance expert also added: “If you download Java you get everything, and you need to make sure you are installing only the components you are entitled to, and you need to remove the bits you aren’t using. Commercial use is any use of those paid features. ’General purpose’ is vaguely defined – hence the reason for a lot of disputes. The moment you, as an organisation, are delivering something where Java is distributed to end users – something more and more companies are doing by distributing apps through which customers can obtain products and services – that is not general-purpose any more… and Oracle wants to make money from that.”

    Why is Oracle acting now, six years into owning Java through the Sun acquisition? It is believed to have taken that long for LMS to devise audit methodologies and to build a detailed knowledge of customers’ Java estates on which to proceed. LMS is now poised to aggressively chase Java SE users in 2017. “I expect Oracle will increase this in 2017,” Guarente told The Reg. “All the trends show Oracle’s LMS audit team is being more aggressive and trying to drive more revenue than they were last year or the year before. I don’t think 2017 is going to see a kinder and gentler Oracle.”

    What should you do? “If you download Java, you get everything – and you need to make sure you are installing only the components you are entitled to and you need to remove the bits you aren’t using,” our anonymous expert warned. “If you [already] have Java, make sure of the specific components you are really using and how they are being used, and based on that, validate if you are having issues before Oracle figures it out.”
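    If you want a rough starting point for the kind of self-audit the experts describe, a simple script can flag the most obvious markers before LMS comes calling. This is a minimal sketch under stated assumptions, not an Oracle tool or a compliance opinion: the binary names (jmc, jfr) and the -XX:+UnlockCommercialFeatures flag are the usual signs of the paid Mission Control / Flight Recorder tooling in Oracle JDK 7/8 builds, and the paths passed on the command line are examples only.

```python
"""Rough self-audit for paid Java SE features in an Oracle JDK install.

Minimal sketch, not an Oracle tool: it only looks for the usual markers of
the commercial Mission Control / Flight Recorder tooling (the jmc/jfr
binaries and the UnlockCommercialFeatures JVM flag in launch scripts).
Absence of findings is not proof of licence compliance.
"""
import os
import sys

# Tools that ship inside Oracle JDK builds but belong to the paid
# Java SE Advanced / Suite feature set rather than the free Java SE umbrella.
PAID_TOOL_HINTS = {"jmc", "jmc.exe", "jfr", "jfr.exe"}

# JVM option that enables Flight Recorder on Oracle JDK 7u40+ / JDK 8;
# seeing it in startup scripts suggests the paid features are in use.
COMMERCIAL_FLAG = "-XX:+UnlockCommercialFeatures"


def scan_jdk(jdk_home: str) -> list[str]:
    """Report paid-feature binaries present in <jdk_home>/bin."""
    findings = []
    bin_dir = os.path.join(jdk_home, "bin")
    if os.path.isdir(bin_dir):
        for name in sorted(os.listdir(bin_dir)):
            if name.lower() in PAID_TOOL_HINTS:
                findings.append(f"paid-feature tool present: {os.path.join(bin_dir, name)}")
    return findings


def scan_scripts(root: str) -> list[str]:
    """Report launch scripts under `root` that reference the commercial flag."""
    findings = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".sh", ".bat", ".conf", ".cfg")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as handle:
                    if COMMERCIAL_FLAG in handle.read():
                        findings.append(f"commercial flag referenced in: {path}")
            except OSError:
                continue  # unreadable file, skip it
    return findings


if __name__ == "__main__":
    # Example: python java_self_audit.py /usr/lib/jvm/oracle-jdk8 /opt/myapp
    jdk_home, app_root = sys.argv[1], sys.argv[2]
    results = scan_jdk(jdk_home) + scan_scripts(app_root)
    for line in results:
        print(line)
    print(f"{len(results)} potential commercial-use markers found.")
```

    None of this replaces reading your actual Oracle agreements, but it gives you a first inventory to take into a proper licensing review.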

    LAW, MONEY, NERD NEWS
  • Posted on July 7, 2016 12:02 pm
    Joseph Forbes
    No comments

    The Coming of General Purpose GPUs

    Until the advent of DirectX 10, there was little point in adding undue complexity by enlarging the die area to increase vertex shader functionality and boost the floating point precision of pixel shaders from 24-bit to 32-bit to match the requirement for vertex operations. With DX10's arrival, vertex and pixel shaders maintained a large level of common function, so moving to a unified shader architecture eliminated a lot of unnecessary duplication of processing blocks. The first GPU to utilize this architecture was Nvidia's iconic G80. Four years in development and $475 million produced a 681 million-transistor, 484mm² behemoth -- first as the 8800 GTX flagship and 8800 GTS 640MB on November 8, 2006. An overclocked GTX, the 8800 Ultra, represented the G80's pinnacle and was sandwiched between the launches of two lesser products: the 320MB GTS in February and the limited production GTS 640MB/112 Core on November 19, 2007.

    Aided by the new Coverage Sample anti-aliasing (CSAA) algorithm, Nvidia had the satisfaction of seeing its GTX demolish every single and dual-graphics competitor in outright performance. Despite that success, the company dropped three percentage points in discrete graphics market share in the fourth quarter -- points AMD picked up on the strength of OEM contracts.

    MSI's version of the GeForce 8800 GTX

    The remaining components of Nvidia's business strategy concerning the G80 became reality in February and June of 2007. The C-language based CUDA platform SDK (Software Development Kit) was released in beta form to enable an ecosystem leveraging the highly parallelized nature of GPUs. Nvidia's PhysX physics engine, its distributed computing projects, professional virtualization, and OptiX, Nvidia's ray tracing engine, are the more high profile applications using CUDA. Both Nvidia and ATI (now AMD) had been integrating ever-increasing computing functionality into the graphics pipeline. ATI/AMD would choose to rely upon developers and committees for the OpenCL path, while Nvidia had more immediate plans in mind with CUDA and high performance computing. To this end, Nvidia introduced its Tesla line of math co-processors in June, initially based on the same G80 core that had already powered the GeForce and Quadro FX 4600/5600.

    Meanwhile, after a prolonged development that included at least two (and possibly three) major debugging exercises, AMD released the R600 in May. Media hype made the launch hotly anticipated as AMD's answer to the 8800 GTX, but what arrived as the HD 2900 XT was largely disappointing. It was an upper-midrange card allied with the power usage of an enthusiast board, consuming more power than any other contemporary solution. The scale of the R600 misstep had profound implications within ATI, prompting strategy changes to meet future deadlines and maximize launch opportunities. Execution improved with the RV770, Evergreen, and the Northern and Southern Islands series. Along with being the largest ATI/AMD GPU to date at 420mm², the R600 incorporated a number of GPU firsts.
    It was AMD's first DirectX 10 chip, its first and only GPU with a 512-bit memory bus, the first vendor desktop chip with a tessellator unit (which remained largely unused thanks to game developer indifference and a lack of DirectX support), the first GPU with integrated audio over HDMI support, as well as its first to use VLIW, an architecture that has remained with AMD until the present 8000 series. It also marked the first time since the Radeon 7500 that ATI/AMD hadn't fielded a top tier card in relation to the competition's price and performance.

    AMD updated the R600 to the RV670 by shrinking the GPU from TSMC's 80nm process to a 55nm node in addition to replacing the 512-bit bidirectional memory ring bus with a more standard 256-bit. This halved the R600's die area while packing nearly as many transistors (666 million versus 700 million in the R600). AMD also updated the GPU for DX10.1 and added PCI Express 2.0 support, all of which was good enough to scrap the HD 2000 series and compete with the mainstream GeForce 8800 GT and other lesser cards. In the absence of a high-end GPU, AMD launched two dual-GPU cards along with budget RV620/635-based cards in January 2008. The HD 3850 X2 shipped in April and the final All-In-Wonder branded card, the HD 3650, in June. Released with a polished driver package, the dual-GPU cards made an immediate impact with reviewers and the buying public. The HD 3870 X2 comfortably became the single fastest card and the HD 3850 X2 wasn't a great deal slower. Unlike Nvidia's SLI solution, AMD instituted support for Crossfiring cards with a common ASIC.

    The Radeon HD 3870 X2 put two GPUs in a single card

    Pushing on from the G80's success, Nvidia launched its G92 as the 8800 GT on October 29 to widespread acclaim from tech sites, mainly due to its very competitive pricing. Straddling the $199 to $249 range, the 512MB card offered performance that invalidated the G80-based 8800 GTS. It mostly bested the HD 2900 XT and the HD 3870, which launched three weeks after the GT, and generally came within 80% of the GTX. Unsurprisingly, this led to a shortage of 8800 GTs within weeks. Strong demand for Nvidia's new contender and its 8600 GS/GT siblings helped push the company to a 71% discrete market share by year's end. Hard on the heels of the GT, Nvidia launched the G92-based 8800 GTS 512MB on December 11. While generally trailing the GT in performance-per-dollar, the GTS's saving grace was its use of better binned GPUs that essentially equalled the GTX and the pricy 8800 Ultra when overclocked.

    The story of the GeForce 8 series would not be complete without the unfortunate postscript that was the use of high lead solder in the BGA of certain G86, G84, G73, G72/72M GPUs, and C51 and MCP67 graphics chipsets. This, allied with a low temperature underfill, inadequate cooling and an intensive regime of hot/cold cycles, caused an inordinate number of graphics failures. Nvidia switched to a Hitachi eutectic (high tin) solder, as used by AMD, in mid-2008 and notably changed the single slot reference design of the 8800 GT's cooler, adding more fan blades as well as tweaking the shroud to facilitate higher airflow. The G92 was suspected of being affected by the underfill issue as well, although dual-slot designs on the 8800 GTS 512M and cards equipped with non-reference coolers didn't seem to be overly affected.
    The company absorbed $475.9 million in charges relating to the issue, which resulted in heavy customer backlash toward both Nvidia and laptop OEMs, who had known of the issue for some time before it became public knowledge. Nvidia's place in the industry will be forever linked to this lowest point in its history.

    If the 8 series were a technological triumph for Nvidia, the 9 series ushered in a period of stagnation. The highlight of the range was also the first model launched, in February 2008. The 9600 GT was based on the "new" G94, which was little more than a cut down G92 of the previous year built on the same 65nm TSMC process. Aggressive price cuts from AMD on the HD 3870 and HD 3850, along with falling prices for Nvidia's own 8800 GS and GT, made the rest of the 9 series reside almost entirely under the rebrand banner. Initial 9800 GTs were 8800 GT rebadges while the 8800 GTS (G92) morphed into the 9800 GTX. Transitioning to TSMC's 55nm process shaved 20% in area from the G92 and allowed a small bump in clock frequency to produce the 9800 GTX+, the identical OEM GTS 150, as well as the GTS 250 that entered the retail channel fifteen months after the original 8-series card.

    Due to the late arrival of the flagship GT200 and the fact that AMD's HD 3870 X2 was now top dog in the single card arms race, Nvidia resorted to the time honored tradition of doubling up on GPUs by sandwiching two 9800 GTs together to make the 9800 GX2. While it won the benchmark race, most observers were quick to notice that selling a dual-9800 GT for the price of three individual 9800 GTs had limited appeal at best.

    Nvidia G200 GPU on a GTX 260 board

    By June, Nvidia released its GTX 260 and GTX 280 with the GT200 GPU, a 576mm² part that represents the largest production GPU die to date (Intel's Larrabee was estimated at 600-700mm²) and the largest production chip of any kind fabricated by TSMC. The GT200 reiterated Nvidia's desire to push GPGPU into the spotlight by incorporating dedicated double precision (FP64) and compute hardware into the design. The gaming-oriented architectural changes were more modest, but this didn't stop Nvidia from pricing the 280 at an eye-watering $649 or launching 3D Vision (3D gaming and video) drivers in conjunction with 3D shutter glasses and an IR emitter -- a very expensive package.

    Nvidia GTX 200 series tech demo

    Pricing fell dramatically after the HD 4870 and 4850 arrived, with the GTX 280 dropping 38% to $400 and the GTX 260 dropping 25% to $299. AMD responded to the GT200 and G92 with the RV770. The first card, a lower-mainstream HD 4730, launched on June 8, with the mainstream and performance market HD 4850 and 4870 following on the 25th. The launch had lost a measure of impact as specification leaks and stores began selling the HD 4850 a week before the NDA expired -- a common occurrence now, but less pervasive in 2008. The 4870 and 4850 became the first consumer graphics cards to use GDDR5 memory, which Nvidia eventually implemented eighteen months later with the GT215-based GT 240.

    The HD 4870 and 4850 earned rave reviews thanks to their extensive feature list, including 7.1 LPCM sound over HDMI, general performance and multi-GPU scaling and, of course, the price. The cards' sole drawback was a tendency to produce high local temperatures across the voltage regulation components in reference boards, which caused disproportionate failure rates and lockups -- especially when using burn-in software such as Furmark.
    In keeping with the previous generation and the "need" to curtail the GTX 280's two-month reign, AMD released the HD 4870 X2 in August. The card quickly entrenched itself at the top of review benchmark charts in most categories including performance, but also in the categories of noise output and heat production thanks to the reference blower fan.

    Radeon HD 4870 X2 (above) and Radeon HD 4870

    January of 2009 brought only an incremental tweak of Nvidia's line-up when the GT200 transferred to TSMC's 55nm process. The 55nm process saw use in the B3 revision chips, which first saw duty the previous September as the Core 216 version of the GTX 260. The company then offered its GTX 295, which featured two cut down (ROPs and memory bus) GT200-B3s. The single-GPU variant of the card launched as the GTX 275 in April, but so did AMD's reply: a revised RV790XT-powered HD 4890 and the HD 4770 (RV740), which was also AMD's first 40nm card.

    The HD 4770, while not a major product in its own right, gave AMD immeasurable experience with TSMC's troubled 40nm process, which produced large variances in current leakage as well as high defect rates due to incomplete connections between metal layers in the GPU die. With this working knowledge, AMD was able to mitigate the foundry process issues that Nvidia faced with its Fermi architecture -- issues that hadn't presented themselves with Nvidia's initial minuscule 40nm GPUs.

    Nvidia rolled out its first 40nm products in July. The entry-level GT216 and GT218 came in the form of the GeForce 205, 210 and GT 220, all of which were OEM products until October when the latter two hit retail. They are only noteworthy for being Nvidia's first DX10.1 cards -- something AMD achieved with the HD 4870/4850 -- as well as improving sound capabilities with 7.1 audio, lossless LPCM audio, bitstreaming of Dolby TrueHD/DTS-HD/-HD-MA, and audio over HDMI. The series was aimed at the home theater market and was eventually rebranded as the 300 series in February 2010.

    In the four months between September 2009 and February 2010, AMD completed a thorough top to bottom launch of four GPUs (Cypress, Juniper, Redwood and Cedar), which comprised the Evergreen family, starting with the top-tier HD 5870, followed a week later by the upper-midrange HD 5850. TSMC's troubled 40nm process hit AMD's ability to capitalize on Nvidia's Fermi no-show as heavy demand outstripped supply. The demand was in large part driven by AMD's ability to time Evergreen's release with Windows 7 and the adoption of DirectX 11.

    While DX11 took time to show substantial worth with Evergreen, another feature introduced with the HD 5000 made an immediate impact in the form of Eyefinity, which relies upon the flexibility of DisplayPort to enable as many as six display pipelines per board. These are routed to a conventional DAC or a combination of the internal TMDS transmitters and DisplayPort. Previous graphics cards generally used a combination of VGA, DVI and sometimes HDMI, all of which needed a dedicated clock source per output. This added complexity, size and pin count to a GPU. DisplayPort negated the need for independent clocking and opened the way for AMD to integrate up to six display pipelines in their hardware, while software remained responsible for providing the user experience, including bezel compensation and spanning the display across the panels at an optimum resolution.
    Eyefinity: ATI's scalable multi-display tech (source: Tekzilla)

    The Evergreen series became class leaders across the board (issues with texture filtering aside), with the HD 5850 and HD 5770 attracting a large percentage of the cost-conscious gamer fraternity and the HD 5870 and dual-GPU HD 5970 providing an unequalled level of performance and efficiency.

    Nvidia finally (soft) launched its first Fermi boards six months later, on April 12, by way of the GTX 470 and 480. None of the company's dies were fully functional -- as was the case with the following GF104 -- so Fermi's core speeds were rather conservative to curb power usage, and memory bandwidth was lower due to Nvidia's inexperience with GDDR5 I/O. Less than optimal yields on TSMC's 40nm process, which had already caused supply issues for AMD, became greatly magnified due to the GF100 Fermi's die size of 529mm². With die size, yield, power requirement and heat output all being inextricably linked, Nvidia's 400 series paid a high penalty in gaming performance compared to AMD's line-up. Quadro and Tesla variants of the GF100 suffered little in the marketplace, if at all, thanks to an in-place ecosystem within the professional markets. One aspect of the launch that did not disappoint was the introduction of transparency supersampling antialiasing (TrSSAA), which was to be used alongside the in-place coverage sampled AA (CSAA).

    While the GTX 480 was greeted with a tepid response, Nvidia's second Fermi chip, the mainstream GF104 in the GTX 460, was a monumental success. It offered good performance at a great price, with the 192bit/768MB version running $199 and the 256bit/1GB at $229. Board partners launched a multitude of non-reference and factory overclocked cards with significant overclocking headroom available due to the conservative reference clocks chosen by Nvidia to aid in lowering power consumption. Part of the 460's positive reception stemmed from the muted expectations after the GF100's arrival. The GF104 was speculated to be no more than half a GF100 and was expected to suffer appallingly next to AMD's Cypress GPU. This proved wrong.

    A second surprise awaited the blogging "experts" as well as AMD when Nvidia launched a refreshed version of the GF100, the GF110, in November. The updated part achieved what its predecessor couldn't -- namely enabling the whole chip. The resulting GTX 570 and 580 were what the original 400 series was supposed to be.

    Barts, the first AMD Northern Islands series GPU, arrived in October. More an evolution from Evergreen, Barts was designed to lower production costs from the Cypress die. Rather than offering a substantial increase in performance, the GPU looked to equal the previous HD 5830 and HD 5850 while saving substantially on GPU size. AMD pared away the stream processor (shader) count, overhauled and reduced the physical size of the memory controller (with an associated lowering of memory speed), and removed the ability to perform double-precision calculations. Barts did, however, have a tessellation upgrade over Evergreen. While performance increases weren't dramatic, AMD did upgrade facets of the display technology.
    DisplayPort was pushed to 1.2 (the ability to drive multiple monitors from one port, 120Hz refresh for high resolution displays, and bitstreaming audio), HDMI to 1.4a (3D 1080p video playback, 4K screen resolution), and the company added an updated video decoder with DivX support. AMD also improved the driver feature set by introducing morphological anti-aliasing (MLAA), a post-processing blur filter whose functionality -- especially at launch -- was extremely hit or miss.

    The introduction of the HD 6970 and HD 6950 added a conventional AA mode to the Catalyst driver with EQAA (Enhanced Quality AA), while AMD also implemented embryonic HD3D support, which was flaky at best, and dynamic power management, this time profiled with PowerTune. Generally speaking, the Cayman parts were better than the first generation Fermi chips. They were supposed to trump them, but they lagged a few percentage points behind the second generation (the GTX 500s), and subsequent driver releases from both camps added further variance.

    Cayman's November launch was postponed a month, with the HD 6970 and 6950 launching on December 15, and it represented a (brief) departure from the VLIW5 architecture, which ATI/AMD had been using continuously since the R300 series. The company instead used VLIW4, which dropped the fifth Special Function (or Transcendental) execution unit in every stream-processing block. This was intended to pare back resources that were overabundant for DX9 (and earlier) games while adding a more compute-orientated reorganization of the graphics pipeline. The integrated graphics of the Trinity and Richland series of APUs are the only other VLIW4 parts, and while AMD's newest graphics architecture is based upon GCN (Graphics Core Next), VLIW5 lives on in the HD 8000 series as rebrands of entry level Evergreen GPUs.

    Mirroring the GF100/GF110 progression, the GTX 460's successor -- the GTX 560 Ti -- arrived in January 2011. The GF114-based card featured a fully functional revised GF104, and proved to be as robust and versatile as its predecessor, offered in a myriad of non-reference interpretations with and without factory overclocks. AMD responded by lowering the cost of its HD 6950 and 6870 immediately, and so the GTX 560 Ti's price/performance advantage disappeared even as reviews were being penned. With mail-in rebates offered by many board partners, the HD 6950 -- particularly the 1GB version -- made a more compelling buy.

    Nvidia GeForce GTX 590 reference board

    Nvidia's second major launch of 2011, more precisely on March 26, started with a bang. The GTX 590 married two fully functional GF110s to a single circuit board. The PR fallout started almost immediately. The boards were running a driver that didn't enable power limiting to the correct degree, paired with a BIOS that allowed high voltage. This oversight allowed an aggressive overvoltage to start blowing MOSFETs. Nvidia remedied the situation with a more restrained BIOS and driver, but the launch day activities prompted some scathing reviews and at least one popular YouTube video. The GTX 590 achieved no more than performance parity with the two-week-old HD 6990, AMD's own dual card. With no clear cut winner across the benchmarks, the products stirred up an endless stream of debates across forums, ranging from multi-GPU scaling, stock availability, benchmark relevance and testing methodology to exploding 590s.
    The successor to AMD's Northern Islands, Southern Islands, began a staggered release schedule on January 9, 2012 with the flagship HD 7970. It was the first PCI-E 3.0 card and the first recipient of AMD's GCN architecture, built on TSMC's 28nm process node. Only three weeks later, the 7970 was joined by a second Tahiti-based card, the HD 7950, followed by the mainstream Cape Verde cards on February 15. The performance Pitcairn GPU-based cards hit the shelves a month later in March.

    The cards were good, but didn't provide earth-shattering gaming improvements over the previous 40nm based boards. This, combined with less competitive price tags than had been an AMD staple since the HD 2000 series, no WHQL drivers for two months and a non-functional Video Codec Engine (VCE), tempered the enthusiasm of many potential users and reviewers. One bonus of the Tahiti parts was the confirmation that AMD had left a lot of untapped performance available via overclocking. This was a trade-off between power usage and heat output versus clock speed, but it led to a conservative core and memory frequency. The need to maximize yield and an underestimation of Nvidia's Kepler-based GTX 680/670 may also have entered into the equation.

    Nvidia continued to diversify the feature set of its GPUs by introducing the Kepler architecture. In previous generations, Nvidia led with the most complex chip to satisfy the high-end gaming community and to start the lengthy validation process for professional (Tesla/Quadro) models. This approach hadn't served the company particularly well in recent generations, and so it seems the smaller GK107 and the performance-orientated GK104 received priority over the beastly GK110. The GK107 was presumably required since Nvidia had substantial OEM mobile contracts to fulfill, and it needed the GK104 for the premium desktop market. Both GPUs shipped as A2 revision chips. Mobile GK107s (GT 640M/650M, GTX 660M) began shipping to OEMs in February and were officially announced on March 22, the same day Nvidia launched its GK104-based GTX 680.

    In another departure from Nvidia's recent GPU design, the shader clock ran at the same frequency as the core. Since the GeForce 8 series, Nvidia had employed a shader clock running at least twice the core frequency -- as high as 2.67 times the core in the 9 series and exactly twofold in the 400 and 500 series. Nvidia realized that more cores running at a slower speed are more efficient for parallel workloads than fewer cores running at twice the frequency. The rationale for the change was predicated upon Nvidia shifting focus (in consumer desktop/mobile) from outright performance to performance-per-watt efficiency. Basically, it was a refinement of the GPU versus CPU paradigm (many cores, lower frequency, high bandwidth and latency versus few cores, high frequency, lower bandwidth and latency). Reducing the shader clock also has the advantage of lowering power consumption, and Nvidia further economised on design by drastically reducing the die's available double precision units, as well as reducing the bus width to a more mainstream 256-bit. These changes, along with a relatively modest base core speed augmented by a dynamic boost feature (overclock on demand), presented a much more balanced product -- albeit at the expense of compute ability.
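    To make the efficiency argument above concrete, here is a toy back-of-the-envelope calculation using the standard CMOS dynamic-power relation (power scales roughly with capacitance x voltage² x frequency). The core counts, clocks and voltages are illustrative stand-ins, not Nvidia's actual Fermi or Kepler figures: the point is only that many shaders at a lower clock and voltage buy more throughput per watt than fewer shaders on a hot clock.

```python
# Toy numbers only -- illustrative stand-ins, not actual Fermi/Kepler specs.
def dynamic_power(cores, freq_ghz, volts, cap_per_core=1.0):
    """Relative dynamic power, P ~ C * V^2 * f, summed over all cores."""
    return cores * cap_per_core * volts ** 2 * freq_ghz

def throughput(cores, freq_ghz):
    """Relative peak throughput: shader operations scale with cores * clock."""
    return cores * freq_ghz

# "Narrow and fast": fewer shaders on a hot clock, at a slightly higher voltage.
narrow = dict(cores=512, freq_ghz=1.5, volts=1.05)
# "Wide and slow": three times the shaders at core clock, slightly lower voltage.
wide = dict(cores=1536, freq_ghz=1.0, volts=0.95)

for label, cfg in (("narrow/fast", narrow), ("wide/slow", wide)):
    perf = throughput(cfg["cores"], cfg["freq_ghz"])
    power = dynamic_power(**cfg)
    print(f"{label}: relative perf {perf:.0f}, relative power {power:.0f}, "
          f"perf-per-watt {perf / power:.2f}")
```

    With these made-up figures the wide, slow configuration delivers twice the raw throughput for well under twice the power, which is the perf-per-watt trade Kepler was chasing.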
    Yet if Nvidia had kept Fermi's compute functionality and bandwidth design, it would have been ridiculed for producing a large, hot, power-hungry design. The laws of physics yet again turned chip design into an art of compromise.

    Once again, Nvidia produced a dual GPU board. Because of the GK104's improved power envelope, the GTX 690 is essentially two GTX 680s in SLI. The only distinction is that the 690's maximum core frequency (boost) is 52MHz lower. While performance is still at the whim of the driver's SLI profiling, the card's functionality is first rate and its aesthetics worthy of the limited edition branding it wears.

    The GK110 marks a departure from Nvidia's usual practice of launching a GPU first under the GeForce banner. First seen as the Tesla K20, the card was requested in large numbers for supercomputing contracts, with over 22,000 required for ORNL's Cray XK7 Titan, NCSA's Blue Waters, and the Swiss CSCS Todi and Piz Daint systems. Consumers had to wait six months before the GK110 arrived as a GeForce. Dubbed the GTX Titan, the card's lack of a numerical model number reinforces Nvidia's desire to see it as a model separate from the existing (and likely following) Kepler series. At $999, the Titan is aimed at ultra-enthusiasts and benchmarkers. Nvidia also widened the appeal to researchers and professionals on a budget, as it marks the first time that the company has allowed a GeForce card to retain the same compute functionality as its professional Tesla and Quadro brethren.

    Nvidia GeForce GTX Titan

    The card quickly assumed top dog status in gaming benchmarks, especially evident at multi-monitor resolutions with super-sampled antialiasing applied. However, Nvidia's indifferent OpenCL driver support and a surge of recent gaming titles allied with AMD's Gaming Evolved program tempered the Titan's impact as much as its exorbitant price tag.

    June saw AMD play "me too" by offering the HD 7970 GHz Edition -- a 75MHz jump in core frequency with a further 50MHz boost available (as opposed to the dynamically-adjusted version offered by Nvidia). The GHz Edition represented the frequency the card probably should have started with in January. Unfortunately for AMD, the market this SKU targeted had already determined that the standard model was generally capable of the same (if not better) performance via overclocking at a substantially lower price and lower core voltage. AMD followed the HD 7970 GHz Edition with the HD 7950 Boost.

    Present and Future of PC Graphics, In a Nutshell

    So far, 2013 has seen Nvidia and AMD battle over a PC graphics discrete market share that is incrementally shrinking as game development and screen resolution fail to match the strides integrated graphics are making. In early 2002, Intel had a 14% PC graphics market share. With the arrival of its Extreme Graphics (830 to 865 chipsets), the company's share rose to 33%, then to 38% with the third and fourth generation DX9 chipsets, and now to more than 50% with the DX10 GMA 4500 series. Integrating a GPU into the CPU means that Intel is now responsible for shipping around 60% of PC graphics.
    JPR: GPU market share in Q4 2012

               Market share    Market share    Unit change     Share difference    Market share
               this quarter    last quarter    qtr to qtr      qtr to qtr          last year
    AMD        19.7%           21.0%           -13.6%          -1.2%               24.8%
    Intel      63.4%           60.0%           -2.9%            3.4%               59.2%
    Nvidia     16.9%           18.6%           -16.7%          -1.73%              15.7%
    Via/S3      0.0%            0.4%           -100%            0.0%                0.4%
    Total     100.0%          100.0%           -8.2%                              100.2%

    The need for new graphics products becomes less apparent with every successive generation. Most titles are based on a ten-year-old API (DX9 became available in December 2002), so image enhancements in games are becoming less focused on GPU load and more on post-processing filtering -- something that is unlikely to change even with DX11-compliant next-generation consoles. Reliance upon rasterization will continue as ray tracing proves to be a difficult nut to crack. All this unfortunately points to hardware junkies having less to tinker with in the future unless there is a fundamental evolution in game engines or the availability of affordable ultra-high resolution displays. Whichever way things go in the coming months and years, rest assured, we will continue to review upcoming GPUs on TechSpot.

    This is the last installment in the History of the GPU series (part 4 of 4). If you missed any of the previous chapters, take your pick below.

    Part 1: (1976 - 1995) The Early Days of 3D Consumer Graphics
    Part 2: (1995 - 1999) 3Dfx Voodoo: The Game-changer
    Part 3: (2000 - 2006) The Nvidia vs. ATI Era Begins
    Part 4: (2006 - 2013) The Modern GPU: Stream processing units a.k.a. GPGPU

    ENTERTAINMENT, Hardware, NERD NEWS
  • Posted on July 4, 2016 9:30 am
    Joseph Forbes
    No comments

    The evolution of the modern graphics processor begins with the introduction of the first 3D add-in cards in 1995, followed by the widespread adoption of 32-bit operating systems and the affordable personal computer. The graphics industry that existed before that largely consisted of a more prosaic 2D, non-PC architecture, with graphics boards better known by their chips' alphanumeric naming conventions and their huge price tags. 3D gaming and virtualization PC graphics eventually coalesced from sources as diverse as arcade and console gaming, military, robotics and space simulators, as well as medical imaging.

    The early days of 3D consumer graphics were a Wild West of competing ideas, from how to implement the hardware to the use of different rendering techniques and their application and data interfaces, as well as persistent naming hyperbole. The early graphics systems featured a fixed function pipeline (FFP) and an architecture following a very rigid processing path, utilizing almost as many graphics APIs as there were 3D chip makers. While 3D graphics turned a fairly dull PC industry into a light and magic show, they owe their existence to generations of innovative endeavour. Over the next few weeks (this is the first installment in a series of four articles) we'll be taking an extensive look at the history of the GPU, going from the early days of 3D consumer graphics, to the 3Dfx Voodoo game-changer, the industry's consolidation at the turn of the century, and today's modern GPGPU.

    1976 - 1995: The Early Days of 3D Consumer Graphics

    The first true 3D graphics started with early display controllers, known as video shifters and video address generators. They acted as a pass-through between the main processor and the display. The incoming data stream was converted into serial bitmapped video output such as luminance, color, as well as vertical and horizontal composite sync, which kept the line of pixels in a display generation and synchronized each successive line along with the blanking interval (the time between ending one scan line and starting the next). A flurry of designs arrived in the latter half of the 1970s, laying the foundation for 3D graphics as we know them.

    Atari 2600, released in September 1977

    RCA's "Pixie" video chip (CDP1861) in 1976, for instance, was capable of outputting an NTSC compatible video signal at 64x128 resolution, or 64x32 for the ill-fated RCA Studio II console. The video chip was quickly followed a year later by the Television Interface Adapter (TIA) 1A, which was integrated into the Atari 2600 for generating the screen display, sound effects, and reading input controllers. Development of the TIA was led by Jay Miner, who also went on to lead the design of the custom chips for the Commodore Amiga computer.

    In 1978, Motorola unveiled the MC6845 video address generator. This became the basis for the IBM PC's Monochrome Display Adapter and Color Graphics Adapter (MDA/CGA) cards of 1981, and provided the same functionality for the Apple II. Motorola added the MC6847 video display generator later the same year, which made its way into a number of first generation personal computers, including the Tandy TRS-80.

    IBM PC's Monochrome Display Adapter

    A similar solution from Commodore's MOS Tech subsidiary, the VIC, provided graphics output for 1980-83 vintage Commodore home computers.
    In November the following year, LSI's ANTIC (Alphanumeric Television Interface Controller) and CTIA/GTIA co-processor (Color or Graphics Television Interface Adaptor) debuted in the Atari 400. ANTIC processed 2D display instructions using direct memory access (DMA). Like most video co-processors, it could generate playfield graphics (background, title screens, scoring display), while the CTIA generated colors and moveable objects. Yamaha and Texas Instruments supplied similar ICs to a variety of early home computer vendors.

    The next steps in the graphics evolution were primarily in the professional fields. Intel used their 82720 graphics chip as the basis for the $1000 iSBX 275 Video Graphics Controller Multimode Board. It was capable of displaying eight-color data at a resolution of 256x256 (or monochrome at 512x512). Its 32KB of display memory was sufficient to draw lines, arcs, circles, rectangles and character bitmaps. The chip also had provision for zooming, screen partitioning and scrolling. SGI quickly followed up with their IRIS Graphics for workstations -- a GR1.x graphics board with provision for separate add-in (daughter) boards for color options, geometry, Z-buffer and Overlay/Underlay.

    Industrial and military 3D virtualization was relatively well developed at the time. IBM, General Electric and Martin Marietta (who were to buy GE's aerospace division in 1992), along with a slew of military contractors, technology institutes and NASA, ran various projects that required the technology for military and space simulations. The Navy also developed a flight simulator using 3D virtualization from MIT's Whirlwind computer in 1951. Besides defence contractors there were companies that straddled military markets with professional graphics. Evans & Sutherland – who were to provide professional graphics card series such as the Freedom and REALimage – also provided graphics for the CT5 flight simulator, a $20 million package driven by a DEC PDP-11 mainframe. Ivan Sutherland, the company's co-founder, developed a computer program in 1961 called Sketchpad, which allowed drawing geometric shapes and displaying them on a CRT in real-time using a light pen. This was the progenitor of the modern Graphical User Interface (GUI).

    In the less esoteric field of personal computing, Chips and Technologies' 82C43x series of EGA (Enhanced Graphics Adapter) chips provided much needed competition to IBM's adapters, and could be found installed in many PC/AT clones around 1985. The year was noteworthy for the Commodore Amiga as well, which shipped with the OCS chipset. The chipset comprised three main component chips -- Agnus, Denise, and Paula -- which allowed a certain amount of graphics and audio calculation to be non-CPU dependent.

    In August of 1985, three Hong Kong immigrants, Kwok Yuan Ho, Lee Lau and Benny Lau, formed Array Technology Inc in Canada. By the end of the year, the name had changed to ATI Technologies Inc. ATI got their first product out the following year, the OEM Color Emulation Card. It was used for outputting monochrome green, amber or white phosphor text against a black background to a TTL monitor via a 9-pin DE-9 connector. The card came equipped with a minimum of 16KB of memory and was responsible for a large percentage of ATI's CAD$10 million in sales in the company's first year of operation.
    This was largely done through a contract that supplied around 7,000 chips a week to Commodore Computers. The advent of color monitors and the lack of a standard among the array of competitors ultimately led to the formation of the Video Electronics Standards Association (VESA), of which ATI was a founding member, along with NEC and six other graphics adapter manufacturers.

    In 1987, ATI added the Graphics Solution Plus series to its product line for OEMs, which used IBM's PC/XT ISA 8-bit bus for Intel 8086/8088 based IBM PCs. The chip supported MDA, CGA and EGA graphics modes via DIP switches. It was basically a clone of the Plantronics Colorplus board, but with room for 64KB of memory. Paradise Systems' PEGA1, 1a, and 2a (256KB), released in 1987, were Plantronics clones as well.

    ATI EGA 800: 16-color VGA emulation, 800x600 support

    The EGA Wonder series 1 to 4 arrived in March for $399, featuring 256KB of DRAM as well as compatibility with CGA, EGA and MDA emulation with up to 640x350 and 16 colors. Extended EGA was available for the series 2, 3 and 4. Filling out the high end were the EGA Wonder 800, with 16-color VGA emulation and 800x600 resolution support, and the VGA Improved Performance (VIP) card, which was basically an EGA Wonder with a digital-to-analog converter (DAC) added to provide limited VGA compatibility. The latter cost $449 plus $99 for the Compaq expansion module.

    ATI was far from being alone riding the wave of consumer appetite for personal computing. Many new companies and products arrived that year. Among them were Trident, SiS, Tamerack, Realtek, Oak Technology, LSI's G-2 Inc., Hualon, Cornerstone Imaging and Winbond -- all formed in 1986-87. Meanwhile, companies such as AMD, Western Digital/Paradise Systems, Intergraph, Cirrus Logic, Texas Instruments, Gemini and Genoa would produce their first graphics products during this timeframe.

    ATI's Wonder series continued to gain prodigious updates over the next few years. In 1988, the Small Wonder Graphics Solution with game controller port and composite out options became available (for CGA and MDA emulation), as well as the EGA Wonder 480 and 800+ with Extended EGA and 16-bit VGA support, and also the VGA Wonder and Wonder 16 with added VGA and SVGA support. A Wonder 16 equipped with 256KB of memory retailed for $499, while a 512KB variant cost $699. An updated VGA Wonder/Wonder 16 series arrived in 1989, including the reduced cost VGA Edge 16 (Wonder 1024 series). New features included a bus mouse port and support for the VESA Feature Connector. This was a gold-fingered connector similar to a shortened data bus slot connector, and it linked via a ribbon cable to another video controller to bypass a congested data bus.

    The Wonder series updates continued to move apace in 1991. The Wonder XL card added VESA 32K color compatibility and a Sierra RAMDAC, which boosted maximum display resolution to 640x480 @ 72Hz or 800x600 @ 60Hz. Prices ranged from $249 (256KB) through $349 (512KB) to $399 for the 1MB RAM option. A reduced cost version called the VGA Charger, based on the previous year's Basic-16, was also made available.

    ATI Graphics Ultra ISA (Mach8 + VGA)

    ATI added a variation of the Wonder XL that incorporated a Creative Sound Blaster 1.5 chip on an extended PCB.
    Known as the VGA Stereo-F/X, it was capable of simulating stereo from Sound Blaster mono files at something approximating FM radio quality. The Mach series launched with the Mach8 in May of that year. It sold as either a chip or a board that allowed, via a programming interface (AI), the offloading of limited 2D drawing operations such as line-draw, color-fill and bitmap combination (Bit BLIT). Graphics boards such as the ATI VGAWonder GT offered a 2D + 3D option, combining the Mach8 with the graphics core (28800-2) of the VGA Wonder+ for its 3D duties. The Wonder and Mach8 pushed ATI through the CAD$100 million sales milestone for the year, largely on the back of Windows 3.0's adoption and the increased 2D workloads that could be employed with it.

    S3 Graphics was formed in early 1989 and produced its first 2D accelerator chip and a graphics card eighteen months later, the S3 911 (or 86C911). Key specs for the latter included 1MB of VRAM and 16-bit color support. The S3 911 was superseded by the 924 that same year -- basically a revised 911 with 24-bit color -- and updated again the following year with the 928, which added 32-bit color, and the 801 and 805 accelerators. The 801 used an ISA interface, while the 805 used VLB. Between the 911's introduction and the advent of the 3D accelerator, the market was flooded with 2D GUI designs based on S3's original -- notably from Tseng Labs, Cirrus Logic, Trident, IIT, ATI's Mach32 and Matrox's MAGIC RGB.

    In January 1992, Silicon Graphics Inc (SGI) released OpenGL 1.0, a multi-platform, vendor agnostic application programming interface (API) for both 2D and 3D graphics. OpenGL evolved from SGI's proprietary API, called IRIS GL (Integrated Raster Imaging System Graphical Library). It was an initiative to keep non-graphical functionality out of IRIS and to allow the API to run on non-SGI systems, as rival vendors were starting to loom on the horizon with their own proprietary APIs. Initially, OpenGL was aimed at the professional UNIX based markets, but with developer-friendly support for extension implementation it was quickly adopted for 3D gaming.

    Microsoft was developing a rival API of their own called Direct3D and didn't exactly break a sweat making sure OpenGL ran as well as it could under the new Windows operating systems. Things came to a head a few years later when John Carmack of id Software, whose previously released Doom had revolutionized PC gaming, ported Quake to use OpenGL on Windows and openly criticised Direct3D.

    Fast forward: GLQuake released in 1997 versus the original Quake

    Microsoft's intransigence increased as they denied licensing of OpenGL's Mini-Client Driver (MCD) on Windows 95, which would allow vendors to choose which features would have access to hardware acceleration. SGI replied by developing the Installable Client Driver (ICD), which not only provided the same ability, but did so even better since MCD covered rasterisation only while the ICD added lighting and transform functionality (T&L).

    During the rise of OpenGL, which initially gained traction in the workstation arena, Microsoft was busy eyeing the emerging gaming market with designs on their own proprietary API. They acquired RenderMorphics in February 1995, whose Reality Lab API was gaining traction with developers and became the core for Direct3D.
    At about the same time, 3dfx's Brian Hook was writing the Glide API that was to become the dominant API for gaming. This was in part due to Microsoft's involvement with the Talisman project (a tile based rendering ecosystem), which diluted the resources intended for DirectX. As D3D became widely available on the back of Windows adoption, proprietary APIs such as S3d (S3), Matrox Simple Interface, Creative Graphics Library, C Interface (ATI), SGL (PowerVR), NVLIB (Nvidia), RRedline (Rendition) and Glide began to lose favor with developers.

    It didn't help matters that some of these proprietary APIs were allied with board manufacturers under increasing pressure to add to a rapidly expanding feature list. This included higher screen resolutions, increased color depth (from 16-bit to 24 and then 32), and image quality enhancements such as anti-aliasing. All of these features called for increased bandwidth, graphics efficiency and faster product cycles.

    The year 1993 ushered in a flurry of new graphics competitors, most notably Nvidia, founded in January of that year by Jen-Hsun Huang, Curtis Priem and Chris Malachowsky. Huang was previously the Director of Coreware at LSI, while Priem and Malachowsky both came from Sun Microsystems, where they had previously developed the SunSPARC-based GX graphics architecture. Fellow newcomers Dynamic Pictures, ARK Logic, and Rendition joined Nvidia shortly thereafter. Market volatility had already forced a number of graphics companies to withdraw from the business, or to be absorbed by competitors. Amongst them were Tamerack, Gemini Technology, Genoa Systems, Hualon, Headland Technology (bought by SPEA), Acer, Motorola and Acumos (bought by Cirrus Logic).

    One company that was moving from strength to strength, however, was ATI. As a forerunner of the All-In-Wonder series, late November saw the announcement of ATI's 68890 PC TV decoder chip, which debuted inside the Video-It! card. The chip was able to capture video at 320x240 @ 15 fps, or 160x120 @ 30 fps, as well as compress/decompress in real time thanks to the onboard Intel i750PD VCP (Video Compression Processor). It was also able to communicate with the graphics board via the data bus, thus negating the need for dongles or ports and ribbon cables. The Video-It! retailed for $399, while a lesser featured model named Video-Basic completed the line-up.

    Five months later, in March, ATI belatedly introduced a 64-bit accelerator: the Mach64. The financial year had not been kind to ATI, with a CAD$2.7 million loss as it slipped in the marketplace amid strong competition. Rival boards included the S3 Vision 968, which was picked up by many board vendors, and the Trio64, which picked up OEM contracts from Dell (Dimension XPS), Compaq (Presario 7170/7180), AT&T (Globalyst), HP (Vectra VE 4), and DEC (Venturis/Celebris).

    Vision 968: S3's first motion video accelerator

    Released in 1995, the Mach64 notched a number of notable firsts. It became the first graphics adapter to be available for PC and Mac computers, in the form of the Xclaim ($450 and $650 depending on onboard memory), and, along with S3's Trio, offered full-motion video playback acceleration. The Mach64 also ushered in ATI's first pro graphics cards, the 3D Pro Turbo and 3D Pro Turbo+PC2TV, priced at a cool $599 for the 2MB option and $899 for the 4MB.
ATI Mach64 VT with support for a TV tuner

The following month saw a technology start-up called 3DLabs rise onto the scene, born when DuPont's Pixel graphics division bought the subsidiary from its parent company, along with the GLINT 300SX processor, which was capable of OpenGL rendering, fragment processing and rasterisation. Due to their high prices, the company's cards were initially aimed at the professional market: the Fujitsu Sapphire2SX 4MB retailed for $1,600-$2,000, while an 8MB ELSA GLoria 8 was $2,600-$2,850. A version of the 300SX, however, was intended for the gaming market. The gaming GLINT 300SX of 1995 featured a much-reduced 2MB of memory -- 1MB for textures and the Z-buffer, the other for the frame buffer -- with an option to add more VRAM for Direct3D compatibility for another $50 over the $349 base price. The card failed to make headway in an already crowded marketplace, but 3DLabs was already working on a successor in the Permedia series.

S3 seemed to be everywhere at that time. The high-end OEM market was dominated by the company's Trio64 chipsets, which integrated a DAC, a graphics controller, and a clock synthesiser into a single chip. They also utilized a unified frame buffer and supported hardware video overlay (a dedicated portion of graphics memory for rendering video as the application requires). The Trio64 and its 32-bit memory bus sibling, the Trio32, were available as OEM units and standalone cards from vendors such as Diamond, ELSA, Sparkle, STB, Orchid, Hercules and Number Nine. Diamond Multimedia's prices ranged from $169 for a ViRGE-based card to $569 for a Trio64+ based Diamond Stealth64 Video with 4MB of VRAM.

The mainstream end of the market also included offerings from Trident, a long-time OEM supplier of no-frills 2D graphics adapters that had recently added the 9680 chip to its line-up. The chip boasted most of the features of the Trio64, and the boards were generally priced around the $170-200 mark. They offered acceptable performance in that bracket, with good video playback capability. Other newcomers in the mainstream market included Weitek's Power Player 9130 and Alliance Semiconductor's ProMotion 6410 (usually seen as the Alaris Matinee or FIS's OptiViewPro). Both offered excellent scaling with CPU speed, and the latter combined its strong scaling engine with antiblocking circuitry to obtain smooth video playback, much better than in previous chips such as the ATI Mach64, the Matrox MGA 2064W and the S3 Vision968.

Nvidia launched its first graphics chip, the NV1, in May; it was the first commercial graphics processor capable of 3D rendering, video acceleration, and integrated GUI acceleration. Nvidia partnered with ST Microelectronics to produce the chip on its 500nm process, and the latter also promoted the STG2000 version of the chip. Although it was not a huge success, it did represent the first financial return for the company. Unfortunately for Nvidia, just as the first vendor boards started shipping (notably the Diamond Edge 3D) in September, Microsoft finalized and released DirectX 1.0. The D3D graphics API confirmed that it relied upon rendering triangular polygons, whereas the NV1 used quad texture mapping.
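To make that triangle-versus-quad mismatch concrete, here is a small, purely illustrative C sketch (the struct and function names are made up for this example and are not NV1 or Direct3D code) showing the simpler direction of the conversion: splitting one quad into the two triangles a triangle-only API expects. Going the other way, approximating triangle-based content on quadratic-surface hardware as Nvidia's drivers had to, is considerably messier.

    /* Illustrative only: how a triangle-only API forces a quad primitive
     * to be expressed as two triangles. Hypothetical types, not driver code. */
    #include <stdio.h>

    typedef struct { float x, y; } Vertex;
    typedef struct { Vertex v[4]; } Quad;      /* quad-style primitive        */
    typedef struct { Vertex v[3]; } Triangle;  /* what a triangle API expects */

    /* Split one quad (vertices given in winding order) into two triangles. */
    static void quad_to_triangles(const Quad *q, Triangle out[2])
    {
        out[0].v[0] = q->v[0]; out[0].v[1] = q->v[1]; out[0].v[2] = q->v[2];
        out[1].v[0] = q->v[0]; out[1].v[1] = q->v[2]; out[1].v[2] = q->v[3];
    }

    int main(void)
    {
        Quad q = { { {0, 0}, {1, 0}, {1, 1}, {0, 1} } };
        Triangle t[2];
        quad_to_triangles(&q, t);
        for (int i = 0; i < 2; i++)
            printf("tri %d: (%g,%g) (%g,%g) (%g,%g)\n", i,
                   t[i].v[0].x, t[i].v[0].y, t[i].v[1].x, t[i].v[1].y,
                   t[i].v[2].x, t[i].v[2].y);
        return 0;
    }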
Limited D3D compatibility was added via a driver that wrapped triangles as quadratic surfaces, but a lack of games tailored for the NV1 doomed the card as a jack of all trades, master of none. Most of its games were ported from the Sega Saturn. A 4MB NV1 with integrated Saturn ports (two per expansion bracket, connected to the card via ribbon cable) retailed for around $450 in September 1995.

Microsoft's late changes to, and launch of, the DirectX SDK left board manufacturers unable to directly access hardware for digital video playback. This meant that virtually all discrete graphics cards had functionality issues under Windows 95; drivers under Windows 3.1 from a variety of companies were, by contrast, generally faultless.

ATI announced its first 3D accelerator chip, the 3D Rage (also known as the Mach64 GT), in November 1995. The first public demonstration came at the E3 video game conference held in Los Angeles in May the following year, and the card itself became available a month later. The 3D Rage merged the 2D core of the Mach64 with 3D capability. Late revisions to the DirectX specification meant that the 3D Rage had compatibility problems with many games that used the API -- mainly the lack of depth buffering. With an on-board 2MB EDO RAM frame buffer, 3D mode was limited to 640x480x16-bit or 400x300x32-bit. Attempting 32-bit color at 640x480 generally resulted in onscreen color corruption, and 2D resolution peaked at 1280x1024. If gaming performance was mediocre, the full-screen MPEG playback ability at least went some way toward balancing the feature set.

ATI reworked the chip, and in September the Rage II launched. It rectified the D3D issues of the first chip and added MPEG2 playback support. Initial cards, however, still shipped with 2MB of memory, hampering performance and causing issues with perspective/geometry transforms. As the series was expanded to include the Rage II+DVD and 3D Xpression+, memory capacity options grew to 8MB.

While ATI was first to market with a 3D graphics solution, it didn't take long for competitors with differing ideas of 3D implementation to arrive on the scene, namely 3dfx, Rendition, and VideoLogic.

Screamer 2, released in 1996, running on Windows 95 with 3dfx Voodoo 1 graphics

In the race to release new products into the marketplace, 3Dfx Interactive beat Rendition and VideoLogic. The performance race, however, was over before it had started, with the 3Dfx Voodoo Graphics effectively annihilating all competition.

This article is the first installment in a series of four. If you enjoyed it, make sure to join us next week as we take a stroll down memory lane to the heyday of 3Dfx, Rendition, Matrox and a young company called Nvidia.

Part 1: (1976 - 1995) The Early Days of 3D Consumer Graphics
Part 2: (1995 - 1999) 3Dfx Voodoo: The Game-changer
Part 3: (2000 - 2005) Down to Two: The Graphics Market Consolidation
Part 4: (2006 - Present) The Modern GPU: Stream processing units a.k.a. GPGPU

    Blog Entry, ENTERTAINMENT, Hardware