Review Gigabyte GeForce GTX 1080 G1 Gaming


Nvidia GeForce GTX1080, 8192MB GDDR5X, 1x DVI, 1x HDMI, 3x DisplayPort, Virtual Reality Ready

 

Compact, Powerful, with Silent Potential

The Gigabyte GeForce GTX 1080 G1 Gaming is the third partner card based on Nvidia's GeForce GTX 1080 with the GP104 "Pascal" GPU to reach our test bench. On paper it offers slightly lower clock speeds than the Asus GeForce GTX 1080 Strix OC Edition or the Inno3D iChill GeForce GTX 1080 X3, but in return it comes in altogether more compact dimensions. Gigabyte's chosen combination of clock speeds, power consumption and cooling works out well.

Gigabyte GeForce GTX 1080 G1 Gaming: Variants and Price

At 759 euros, Gigabyte's variant of the GeForce GTX 1080 costs less than the Founders Edition's MSRP of 789 euros and less than the Asus GeForce GTX 1080 Strix OC Edition (799 euros), but more than the Inno3D iChill GeForce GTX 1080 X3 (749 euros). The market price of the Founders Edition, though, has recently dropped for the first time.

In addition to the G1 Gaming, Gigabyte announced the GeForce GTX 1080 Xtreme Gaming at Computex 2016. It relies on a new type of cooling system and promises higher clock rates, but it is not yet listed at retailers.


Clock speeds, PCB and cooling

Gigabyte has increased the clock speeds of the G1 Gaming compared to the Founders Edition. At 1,721 MHz base and 1,860 MHz typical boost clock, the card sits slightly below the competing models tested so far, but still roughly 130 MHz above the Founders Edition's clock rates. The memory remains at 5,000 MHz.
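The roughly 130 MHz figure can be checked against the GTX 1080 Founders Edition's reference clocks (1,607 MHz base / 1,733 MHz boost); a quick sketch:

```python
# Clock deltas of the G1 Gaming (OC mode) vs. the Founders Edition
# reference clocks (FE GTX 1080: 1,607 MHz base / 1,733 MHz boost).
fe_base, fe_boost = 1607, 1733
g1_base, g1_boost = 1721, 1860

print(g1_base - fe_base)    # 114 MHz base uplift
print(g1_boost - fe_boost)  # 127 MHz boost uplift, i.e. roughly 130 MHz
```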

The 1,721/1,860 MHz apply only when Gigabyte's Xtreme Gaming tool is set to OC mode. Without installing the software, the card runs in Gaming mode at 1,696/1,835 MHz. The power target is 200 watts in both cases, ten percent higher than the Founders Edition's and on the level of the Asus Strix OC Edition. With a maximum of 216 watts under manual adjustment, however, Gigabyte allows less headroom than the competition.
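The power-target figures can be sanity-checked with a little arithmetic, assuming the Founders Edition's 180-watt default power target:

```python
fe_target = 180   # W, Founders Edition default power target (assumption)
g1_target = 200   # W, G1 Gaming default in both modes
g1_max    = 216   # W, maximum with manual adjustment

uplift_vs_fe    = (g1_target - fe_target) / fe_target * 100  # ~11 %, "ten percent" rounded
manual_headroom = (g1_max - g1_target) / g1_target * 100     # 8 % slider headroom

print(round(uplift_vs_fe), round(manual_headroom))
```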

Clock Rates in Comparison


8-Pin for 200 Watt Power Target

The PCB, manufactured by Gigabyte for the G1 Gaming, provides one 8-pin PCIe power connector. Even taking the specifications into account, this is sufficient for the maximum power target of 216 watts. The power supply for GPU and VRAM consists of ten phases (8 + 2).


Comparatively compact

Like the PCB of the Inno3D iChill X3, the PCB of the G1 Gaming extends only the typical 0.5 centimeters above the slot cover; the Asus Strix OC Edition is taller. Because the cooling system is barely larger, the overhang above the bend in the slot cover is only one centimeter on this graphics card; on the Inno3D it is 1.5 centimeters, on the Asus even 2.5. In length, too, the G1 Gaming is almost two centimeters more compact at 28.1 centimeters, and the cooler is exactly two slots high. Only the Founders Edition, with a total length of 27.0 centimeters and an overhang of just 0.5 centimeters, is more compact still.

Direct Touch with three heatpipes

Like Asus, Gigabyte uses copper heatpipes in direct contact with the GPU for its WindForce 3X cooler. Asus has five heatpipes, Gigabyte three. All of them touch the GPU and run through both aluminum heatsinks. The three fans measure 80 millimeters in diameter and stand still at idle. The maximum speed is 4,200 rpm; in practice, they run at no more than about half of that, regardless of load.

At 516 grams, Gigabyte's cooler is the lightest in the test field, not just visually (Inno3D: 703 grams, Asus: 659 grams). Seven screws hold it firmly to the PCB: four press the heatpipes onto the GPU, three make contact with the power circuitry. The memory chips are connected to the heatsink via thermal pads.

 

The advantages of the Gigabyte GeForce GTX 1080 G1 Gaming lie in a balanced overall concept that is not obvious at first glance. The partner card combines clock speeds that are still clearly raised relative to the Founders Edition with dimensions that are compact compared to the Asus GeForce GTX 1080 Strix OC Edition and the Inno3D iChill GeForce GTX 1080 X3. The length, the overhang and the strict adherence to a height of two PCIe slots can all make a difference in a case. Even without manual intervention, the cooling system works more quietly than the reference version.

A compact compromise

Anyone who can do without the last bit of performance will find in the Gigabyte GeForce GTX 1080 G1 Gaming a very quiet partner card for the GeForce GTX 1080. At currently 759 euros, it costs 10 euros more than the Inno3D version, but still less than a Founders Edition available from stock at 789 euros; at the first dealers without goods in stock, however, the price of this version has begun to fall over the last two days. All three manufacturers in the test grant a three-year warranty handled through the retailer; with Gigabyte, though, it is important to keep in mind that the warranty period starts from the production date, not the purchase date.

Conclusion

The Gigabyte GeForce GTX 1080 G1 Gaming is not the fastest card. Instead, it strikes a compromise between performance, dimensions and noise. Even trimmed to a very low noise level, the card remains twelve percent faster, and cooler, than the Founders Edition.

Full Specifications

Graphics Controller

  • GPU manufacturer Nvidia
  • GPU GeForce GTX 1080
  • Graphics chip clock frequency (MHz) 1721
  • Boost clock (MHz) 1860
  • Chip name GP104-400-A1 “Pascal”
  • CUDA cores 2560
  • API support DirectX 12.0
  • Power connection 1x 8-pin
  • Interface PCI Express x16 3.0
  • Cooling active, 2-slot

Graphics Memory

  • Graphics memory (MB) 8192
  • Graphics memory type GDDR5X
  • Graphics memory clock (MHz) 10010
  • Memory bus (bits) 256

Monitor Connections

  • DisplayPort 3
  • DisplayPort version 1.2 (DisplayPort 1.2 certified, DisplayPort 1.3/1.4 ready)
  • DVI 1
  • HDMI 1
  • HDMI version 2.0b

Dimensions / Weight

  • Weight (g) 1100
  • width (mm) 114
  • Height (mm) 41
  • Depth (mm) 280

Information

  • Manufacturer warranty 3 years (see the manufacturer for warranty terms)
  • Virtual Reality Ready Yes
  • Manufacturer Gigabyte
  • Model designation GeForce GTX 1080 G1 Gaming

 

Review MSI GeForce GTX 1080 ARMOR 8G OC (NVIDIA)

The MSI GeForce GTX 1080 ARMOR 8G OC is a high-end graphics card with NVIDIA's GeForce GTX 1080 GPU. Built on the powerful Pascal architecture, the GeForce GTX 10 series delivers up to three times the performance of previous-generation graphics cards, along with innovative new gaming technologies and immersive VR experiences.

MSI GeForce GTX 1080 ARMOR 8G OC

Other features include NVIDIA Ansel, with extensive tools for creating in-game screenshots, as well as NVIDIA G-Sync and GameStream.

NVIDIA, 8 GB, PCI Express

The NVIDIA SLI HB bridge delivers twice the available transfer bandwidth compared to the NVIDIA Maxwell architecture. The GPU on the GeForce GTX 1080 ARMOR 8G OC has 2,560 shader units and is clocked at 1,657 MHz (boost clock up to 1,797 MHz); the 8 GB of GDDR5X memory is connected via a 256-bit memory controller and runs at 2,502 MHz (effective data rate 10,010 MHz).
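GDDR5X transfers four data words per command clock, which is how the 2,502 MHz clock maps to the quoted ~10,010 MHz effective rate; combined with the 256-bit bus, this also gives the card's memory bandwidth:

```python
command_clock_mhz = 2502
effective_mts = command_clock_mhz * 4  # 10,008 MT/s, marketed as 10,010

bus_width_bits = 256
# transfers/s * bytes per transfer -> bytes/s, expressed in GB/s
bandwidth_gbs = effective_mts * 1e6 * (bus_width_bits / 8) / 1e9  # ~320 GB/s

print(effective_mts, round(bandwidth_gbs))
```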

Dual fans

 

Pros: Quiet, plenty of power
Cons: High price

I've had the graphics card for 5 days now and have to say: all my games run at full resolution with every graphics setting on maximum, and it's just fun. (4K)

GTA V, WOT, WOWS, ARMA, etc.

The old graphics card (GTX 580) was always fighting at 80°-90°. Now it sits around 70 degrees at 4K resolution, even at outdoor temperatures above 30 degrees.

If you have the money, or, like me, get vouchers for your birthday, you should simply treat yourself to a good graphics card like this!

Marcel

Full Specifications

MSI ARMOR Geforce GTX 1080

Graphics Controller

  • Graphics clock frequency: 1657 MHz
  • Graphics chipset manufacturer: NVIDIA
  • Graphics chipset: GP104-400

memory

  • Total Available Graphics Memory: 8GB
  • Memory Clock Speed: 10010 MHz
  • Graphics memory type: GDDR5X
  • Memory Interface: 256 bit
  • API Support: Yes

System requirements

  • Required operating system: Windows 7, Windows 8, Windows 8.1, Windows 10

Characteristics

  • Slot: PCI Express
  • Cooling: air cooling
  • Connections: 1x HDMI, 3x DisplayPort, 1x DVI-D DL
  • Maximum supported display: 4
  • HDCP compatible: Yes
  • Special features: MSI ARMOR 2X cooling with TORX fans, Zero Frozr, airflow control technology, MSI Afterburner overclocking software, Military Class 4 components, SLI, GPU Boost 3.0, NVIDIA Ansel, G-Sync, GameWorks, GameStream, VR Ready
  • Width: 37 mm
  • Height: 140 mm
  • Depth: 279 mm

Nvidia GeForce Not Detected on PC or Notebook?

Many people run into this when installing a game or graphics software. Typically, a PC or notebook has dual graphics, for example an onboard Intel HD Graphics (integrated) plus Nvidia graphics (dedicated), and the problem is that only the Intel GPU is detected, not the Nvidia GeForce. So, how do you fix this?

It's quite simple. First, open the Nvidia Control Panel.

Right-click on the desktop, and select Nvidia Control Panel

Go to Manage 3D Settings – Global Settings, as shown in the figure below. In the preferred graphics processor drop-down menu, choose "High-performance NVIDIA processor", then click "Apply" at the bottom right of the window.

High-Performance Nvidia Processor

Note:

You should also update to the latest Nvidia driver to get optimal performance from your Nvidia GeForce. In addition, download and install Nvidia GeForce Experience, and you will get recommended settings for the best gaming experience and more!

GTX 1050: Probable Specifications for the Mid-Range

NVIDIA's Pascal line-up is not yet complete, and the Santa Clara company is thinking about further products. Given the official presentation of the GTX 1060 3GB, a GTX 1060 Ti appears highly unlikely, so users now expect a GTX 1080 Ti and a GTX 1050.

Nvidia Geforce GTX 1050

While there can be no doubt about the former, the latter is on the home straight. The technical specifications of the American company's next board are circulating online, and they also debut a brand-new graphics chip, known as GP107. The GeForce GTX 1050 is NVIDIA's answer to AMD's Radeon RX 460, so do not expect screaming performance, but rather entry-level performance suitable for eSports and lighter video game titles. The new generation nevertheless brings a marked increase in performance, so in practice the GTX 1050 should be able to handle even most triple-A titles at Full HD resolution, provided the graphics detail is not set too high.


GP107

The NVIDIA GeForce GTX 1050 is made with the same 16-nanometer FinFET manufacturing process as cards such as the Titan X or the GTX 1080.

The engineers designed the chip with greater efficiency and faster computation in mind compared to the previous generation, traits shared by all graphics cards based on the Pascal architecture.

GP107 is the brain of the GTX 1050, and chronologically it is the fourth graphics chip NVIDIA has brought to market, after GP104, GP106 and GP102. GP107 is intended to be the smallest GPU of the whole line-up, with a small die, and therefore also less expensive than its older brothers. The variant adopted by the GTX 1050 carries the code GP107-400 and, in keeping with the nomenclature the company uses for its other chips, this appears to be the fully unlocked version of the GPU, without disabled SMs or lost CUDA cores.

The GeForce GTX 1050 should also save significant power. According to Benchlife, this GPU will have a thermal design power in the range of 75 watts, 15 watts lower than the GeForce GTX 950. The Nvidia GeForce GTX 1050's rumored price is USD 199.

Which is Better: Nvidia GeForce GTX 1070 vs 980 Ti?

There have been posts saying the Nvidia GeForce GTX 1070 is better and others saying the Nvidia GeForce GTX 980 Ti is better. Let's have a showdown.

I will run new benchmarks tonight, since Chris has my new cooling in place and some tweaks, but we will start with this.

This bench is a GTX 980 Ti Classified on air, overclocked to 1,521 MHz on the factory BIOS.

Standard Firestrike

Standard Firestrike 1070 vs 980ti
The program on the right is NZXT's CAM software.

Consider Your CPU Heat Under Load

Have you taken your CPU's heat under load into consideration? As an example question: which is the better upgrade route for these specs?

FX 6350 @ 4.2 NO TURBO
16GB DDR3 1866MHz RAM
R9 290X 4GB OC

Now, the problem is that I believe my poor little 6350 can't cope with the R9 290X. So instead of swapping it out for a newer re-branded version of an R9 (I was gonna go for a 390X/Fury, which would be an even bigger bottleneck)...


Now please don't say "Get Intel, better upgrade route". I would be more than happy to wait with my current system, with an RX in it, until the AM4 platform comes out; then I'll get a new mobo/Zen CPU/DDR4 RAM.

So my question is: will the RX 480 8GB be bottlenecked by my 6350 in the meantime? And if so, will it bottleneck as much as my 290X, or as much as a 390X/Fury if I swapped for one of those?

Also, I see the RX 480 as future-proofing for when I get AM4.

-Ben

The answer

The CPU shouldn't be giving you any problems if it's staying cool.

You're probably building up heat, and the CPU is being affected. As for the "come to the dark side" Intel argument, it's pretty valid. Intel has a CPU that beats AMD at any spot on the board and does it at a lower price. Considering you made a poor choice by thinking you were saving money with an AMD motherboard, and factoring in the cost of replacing it with something that takes an Intel chip, you're sort of stuck with that choice or forced to buy your way out of it.

I love AMD though, don't get me wrong. They're just aware they don't need to compete with Intel, so they haven't really tried to in recent years; it was a smart move for them. As far as GPUs go, they make Nvidia look like either a bad value on performance per dollar, or simply the leading GPU maker if someone wants the best and isn't too concerned about price. You'd be surprised how well some of Nvidia's lower-end GPUs perform for the money, though. I use a 2GB EVGA GT 520 in one of my builds that I spent literally about $20 on, and it plays BF4 at 720p at acceptable fps. People don't really consider how much more you start to pay for how little performance you gain with Nvidia at the high end, and they don't even consider the lower-end cards that are available and super affordable. If you're trying to save yourself money, an overclocked Intel G3258 CPU with an AMD GPU would be the smartest option.

-Mike

Another answer

Don't get an 8350. I have one with a 970 and it's really struggling in new games.

It's bottlenecking: in Fallout 4, whenever I'm in Diamond City or an area with a lot of shadows, it stutters at like 35 fps. And there's no point when Zen is about to release; it's 4 years old now and AM3+ isn't all that great. I have to overclock it a bit for it to stay normal.

These were my results with the CPU at 4.6 GHz:


Shadow quality and draw distance are already on low and medium, and I turned off depth of field. And it's more than just Fallout.

I'll play with it a bit and let you know what happens. I have a feeling it could be my 970; it's the EVGA FTW+ and I think its overclock is messing things up.

-Cory

 

Nvidia GTX 1080 and Pascal: DirectX 12 vs AMD Polaris

NVIDIA's GTX 1080 and the Pascal architecture have done much to narrow the DirectX 12 async compute gap with AMD, but is that enough? NVIDIA has received some criticism for the async compute ability of its GTX 900 series graphics cards over the last few years. The DirectX 12 feature aims to dramatically improve the utilization of a modern GPU's resources and horsepower.


 

The idea is to eliminate disparities and idle time: instead of processing work serially, multiple kernels run simultaneously, executing graphics and compute workloads at the same time.

This feature is especially present in some of the DirectX 12 games out there, including Ashes of the Singularity and the new Hitman, where NVIDIA's graphics cards have had to compete with AMD's Radeons.

NVIDIA brings faster preemption and dynamic load balancing with Pascal and the GTX 1080

With Pascal, Nvidia introduced architectural improvements, including improved and faster context switching through preemption.

NVIDIA's in-game performance is now smoother. Examples of the kinds of workloads that benefit from Pascal's improvements include physics and audio processing, post-processing effects, and asynchronous timewarp, which keeps head positions accurate in VR (Virtual Reality).


Pascal is now able to overlap work dynamically: tasks like PhysX and post-processing steps can be layered on top of the graphics pipeline through dynamic load balancing. On Maxwell, this had to be done through static partitioning in software. These improvements help reduce gaps in the pipeline and improve utilization of the Pascal GPU.


Graphics and compute workloads are assigned to dedicated blocks of the Nvidia GPU. With Maxwell, this meant developers had to guess in advance how the compute and graphics workloads should be shared via a static partition; in effect, the developer had to declare up front how much of the GPU's resources each side would get.

Unfortunately, this makes mixed workloads very inefficient unless developers get the ratio exactly right, which is very difficult. Otherwise, either the graphics or the compute workload finishes first, and its share of the GPU's resources has to sit idle, waiting for the other part before the next piece of work can begin.

Dynamic load balancing allows the workload to be shared more equitably. Developers no longer have to guess, because the GPU itself now takes over this responsibility.

If either the graphics or the compute workload finishes first, the remaining work is redistributed across the GPU resources that have already completed theirs. The result is that every part of the GPU is kept busy and nothing is wasted; tasks complete more quickly, and the effective performance of the GPU improves.
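The difference between static partitioning and dynamic load balancing can be sketched with a toy model (illustrative only; real GPU scheduling is far more involved):

```python
def static_partition_time(gfx_work, compute_work, gfx_units, compute_units):
    """Each workload is locked to its assigned units; the slower
    partition sets the finish time while the other side sits idle."""
    return max(gfx_work / gfx_units, compute_work / compute_units)

def dynamic_balance_time(gfx_work, compute_work, total_units):
    """Units that finish early pick up the remaining work, so nothing idles."""
    return (gfx_work + compute_work) / total_units

# 60 units split 30/30, but the graphics job is three times the compute job:
print(static_partition_time(90, 30, 30, 30))  # 3.0 - compute units sit idle
print(dynamic_balance_time(90, 30, 60))       # 2.0 - all units stay busy
```

In the static case the compute partition finishes at t=1 and then idles while graphics runs until t=3; with dynamic balancing, everything is done at t=2.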

Time-critical workloads are a different challenge from balancing the distribution of work and resources, and this is where Pascal's improved preemption capabilities come into play in games.

An example of a time-critical task is asynchronous timewarp in a VR environment. If the GPU fails to complete it before the next screen refresh, the entire frame is dropped, which is about the worst thing that can happen in VR because it breaks immersion and can cause serious discomfort.
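The deadline in question is simply the refresh period; at VR's typical 90 Hz the GPU has roughly 11 ms to finish each frame:

```python
def frame_budget_ms(refresh_hz):
    # Time available to finish a frame before the next screen refresh.
    return 1000.0 / refresh_hz

print(f"{frame_budget_ms(90):.2f} ms at 90 Hz (typical VR headset)")
print(f"{frame_budget_ms(60):.2f} ms at 60 Hz")
```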

Ashes of the Singularity

Tests were run in Ashes of the Singularity at 4K, 2560 × 1440 and 1920 × 1080, under DirectX 11 and under DirectX 12 with async compute both disabled and enabled.

At 4K the GTX 1080 shows a visible loss of performance under DirectX 12 compared to DirectX 11, and it gains essentially nothing from enabling async compute.

The GTX 980 Ti is likewise slower in DirectX 12 than in 11, and with async compute in play it is clearly beaten by the GTX 1080.

AMD's R9 Fury X, by contrast, gains performance under DirectX 12 compared to 11, and gains more still once async compute is enabled.

Nvidia’s Pascal: Everything We Know Right Now

We've learned that Nvidia's Pascal chip, code-named GP100, may have been taped out on TSMC's 16nm FinFET manufacturing process in June of last year, interestingly just shortly after AMD announced that it had taped out two FinFET chips. It is no coincidence that the two companies completed their FinFET designs at around the same time: both are pushing a very aggressive schedule for the market debut of their next-generation FinFET-based GPUs this year.


 

What we know so far about Nvidia's flagship Pascal GP100 GPU:

Pascal graphics architecture.
2x performance per watt estimated improvement over Maxwell.
To launch in 2016, purportedly the second half of the year.
DirectX 12 feature level 12_1 or higher.
Successor to the GM200 GPU found in the GTX Titan X and GTX 980 Ti.
Built on the 16nm FinFET manufacturing process from TSMC.
Allegedly has a total of 17 billion transistors, more than twice that of GM200.
Will feature four 4-Hi HBM2 stacks, for a total of 16GB of VRAM and 8-Hi stacks for up to 32GB for the professional compute SKUs.
Features a 4096-bit memory bus interface, the same as AMD's Fiji GPU powering the Fury series.
Features NVLink (only compatible with next generation IBM PowerPC server processors)
Supports half precision FP16 compute at twice the rate of full precision FP32.
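The FP16 point above is about arithmetic throughput, but the storage side can be illustrated in plain Python: IEEE 754 half precision packs a value into 2 bytes versus 4 for single precision (the `struct` format character `'e'` is half precision):

```python
import struct

half = struct.pack('e', 1.5)    # IEEE 754 binary16
single = struct.pack('f', 1.5)  # IEEE 754 binary32

print(len(half), len(single))   # 2 vs 4 bytes per value
```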


GeForce GTX 1060, 1070 and 1080 Mobile: Pascal GPUs for Notebooks

Desktop-class graphics power with low consumption for mobile: the GeForce GTX 1060, 1070 and 1080 from Nvidia. Here are all the features.

Nvidia has renewed its family of notebook graphics processors, presenting the GeForce GTX 1060, 1070 and 1080. All three models are based on the Pascal architecture and drop the traditional "M" suffix used in the past: these are products that follow the technical specifications of their desktop PC counterparts.


Nvidia embarked on this path with the notebook GTX 980, which was identical to the desktop GTX 980. Architectural improvements, which allow high performance with lower power draw and temperatures, together with the transition to a smaller production process, have made it easier to extend this approach.

VR-Ready


The mobile GeForce GTX 1060, 1070 and 1080 GPUs are all "VR-ready" and, according to Nvidia, achieve performance similar to their desktop counterparts, delivering up to 76% more than their Maxwell-based predecessors. The GPUs (GP104/GP106) are made on TSMC's 16-nanometer FinFET process.

Lower TDP

GeForce GTX 1060, 1070 and 1080

Nvidia has been vague about TDPs, emphasizing that they may differ according to OEM implementations. What we can be sure of is that the TDP is lower than that of the desktop versions, as these GPUs must operate inside a notebook, with all the limitations that entails.


Although the base frequencies of these chips are lower than those of the desktop versions, the overclocking potential should be similar. For example, a 225 MHz clock increase was achieved on a GTX 1080-based laptop with Doom running. OEMs can also tune the GPUs as they wish, which will most likely lead to factory-overclocked notebooks on the market.


The Best GPU for Demanding Screens


According to Nvidia, the Pascal architecture's greatly increased performance enables laptops with a GTX 1080 to deliver playable performance on 4K screens at high quality settings. Manufacturers can also build notebooks with 1080p panels at 120 Hz; previously, users had to connect an external monitor via DisplayPort to get 120 Hz, but that is no longer necessary. In addition, products with 1440p 120 Hz screens will be available. The new notebook GPUs also support G-Sync, a feature we will therefore find on various gaming portables.

 

The Best Popular Android Gaming Tablets of 2016

Although a tablet's performance is not as powerful as a laptop's, the charm of these gadgets keeps them in great demand. The tablet is a device that never loses its appeal, because it is very convenient on the move, even for gaming.

With so many tablets in circulation, it can be confusing to pick the best gaming tablet of 2016, so here are our recommended gaming tablets for 2016.

HTC Google Nexus 9

HTC and Google's tablet is well suited to gaming: it brings the Android 5.0 Lollipop operating system, a large screen and consistently sharp images.


In addition, the HTC Google Nexus 9 is equipped with a durable battery and a great processor. The Nexus 9 has an 8.9-inch IPS LCD display with a resolution of 2048 × 1536, and the processor used in this tablet is the NVIDIA Tegra K1 with 2GB of RAM.

Samsung Galaxy Tab S 8.4

In general, this Samsung tablet is quite a luxurious gaming device, as it offers a fairly complete feature set: 802.11ac WiFi, microSD, MHL, a fingerprint scanner and more.

Gaming tablet Samsung Galaxy Tab S 8.4

The Samsung Galaxy Tab S 8.4's screen uses Super AMOLED technology, with an 8.4-inch diagonal and a resolution of 2560 × 1600 pixels. Powered by an Exynos 5 Octa 5420, the Galaxy Tab S 8.4 currently runs Android 4.4 KitKat and is ready to be updated to Android 5.0 Lollipop.

Nvidia Shield Tablet

The next best gaming tablet is the Nvidia Shield Tablet, because the device is a 2-in-1 package: with a single tablet, users get a complete package capable of both work and gaming.

Gaming tablet Nvidia Shield Tablet

The specifications are clearly powerful: a super-fast 2.2 GHz quad-core Nvidia Tegra K1 processor, and a generously sized screen with an 8-inch diagonal and a resolution of 1920 × 1200 pixels.

In addition, there is an HDMI port for playing games on the big screen, Android 5.0 Lollipop ready to go, and the ability to stream PC games to the tablet.

So, buddy, which tablet would you choose?

Nvidia is About Performance, AMD is About Value

My friend asked me:

Bro, I want to ask: when the specifications are almost the same, why are most AMD cards cheaper than Nvidia's?

I think AMD is a good choice for those investing long-term; indeed, some AMD graphics cards are cheaper than Nvidia's.


Performance = nvidia
Value = amd

Take a look at the Vulkan patch for DOOM: on AMD cards, even legacy 7xxx-series models, performance can still improve by up to 60%, while on Nvidia it changes almost nothing.

Performance also differs when comparing the RX 480 with the GTX 1060. In DX11 (yes, Nvidia is more optimized for DX11) Nvidia leads, but try DX12 games and there is little difference.

Moreover, if a game has an async compute feature, I'm not sure the GTX 1060 will do as well.

If you look at the latest Firestrike benchmark, the default test now runs with Async: On. It seems the games of the future will widely support async compute, and for that reason, IMHO, AMD still offers good long-term value.