
Cheating the Cheaters

Our thoughts and feelings on how we evaluate video cards and the reasoning behind our new philosophy.

Introduction

It has been a while since we editorialized at length, particularly where it concerns the video card industry, but it is that time again. We changed our video card "review" format in the last year, and we did so without much notice and with little explanation on the pages of HardOCP. Last week, when we once again started seeing headlines, stories, and multi-page forum threads on the latest batch of video card "cheating", it got me thinking about just where HardOCP stands on this issue and about our responsibility to our readers.

The [H]istory of Change

I guess the first time that I realized HardOCP was "doing it wrong" was quite a while ago. The problem was with our video card reviews. I remember a time when we had finished up reviewing a video card and I was headed out to San Francisco to talk about the hardware on The Screen Savers television show. It was Sunday, and I was flying out on Tuesday morning. It was then that I realized that I was supposed to be going on this TV show as some sort of "expert" and had never even played a game on the video card in question. We had just authored a review that would be read by hundreds of thousands of people, and we were about to go talk to a couple hundred thousand more, yet we had never used the video card in the very way it was likely to be used. Realizing my huge mistake, I quickly spent a solid 12 hours on Monday playing the latest retail games on the video card so that I could talk intelligently about the product on The Screen Savers.

That experience was somewhat of a turning point in my thinking about how HardOCP used benchmarks, but if my life were a comic strip, the light bulb inked in over my head would have been a dim one. In mid-2002, we made some half-hearted efforts to change the way we were doing video card reviews. Two articles I can point you to are our Radeon 9700 Gaming Experience and Radeon 9700 IQ articles, published in July and September of 2002. These were a move in the right direction, as we began to see some of the big picture.

The big picture I was starting to see was HardOCP's responsibility with its reviews, as many people were making video card buying decisions using our data and opinions. That being the case, it seemed as though we had better be testing video hardware the way the gamers laying down big bucks for an upgrade would actually be using it. While we continued to stay somewhat centered on the game play and image quality (IQ) elements of our content, I lost focus on what I had just realized was truly important.

The main reason I lost focus was that I got too involved in worrying about what FutureMark and other synthetic benchmark makers were doing with their 3D benchmarks and how it was impacting the people buying the cards. This situation was another light bulb for me, one that showed me a bit more of the big picture.

To put it simply, current synthetic benchmarks, taken as a whole, do a disservice to the hardware community and to everyone who will ever buy a 3D video card or a computer that has one installed. Beyond the possibility of misleading buyers by focusing on performance factors that are not representative of actual game play, there is another dark "secret" that is actually being paid for with your money, and that at the same time forces hardware manufacturers to deliver less to you.

I still stand by the overall thoughts we published at the time, but looking back I see that the actions I took were wrong. I wanted to hold game content developers responsible for us not being able to benchmark correctly; they were not putting in the tools we needed to do our job properly. I guess it is human nature for me to want to look away from the truth of the situation and make the content developers the bad guys, when in reality they had enough problems of their own to deal with. That being said, we did contact many developers and asked them to put in the tool set we needed to benchmark "correctly", and many have done so since then. We continued down this path until the really big cheater was found out.