Best AMD processors for gaming - AMD

AMD has become a solid competitor in the gaming CPU space. The latest Ryzen 5000 series processors, based on the Zen 3 architecture, have become the go-to choice for many gamers around the world. The biggest advantage the new Ryzen series offers over Intel is power efficiency; even the last-generation Ryzen 3000 series processors have proven to offer rock-solid performance with comparatively less power draw. Intel recently launched its new 11th-gen Rocket Lake-S series of desktop processors, but it hasn't received much positive feedback, primarily because the company continues to drag out its aging 14nm process.
If you are in the market for buying a new CPU for gaming, then AMD is a pretty good choice. Let's check out the best AMD CPUs for gaming that you should buy today.
AMD Ryzen 5 5600X​The newly launched Ryzen 5 5600X is the best AMD Ryzen CPU to buy for gaming in 2021. It offers the best performance-to-value ratio, and thanks to AMD's Zen 3 architecture, it draws less power than its Intel counterparts. The processor is highly recommended for all sorts of games, whether you want fast frame rates or a high-resolution experience.
Clock speeds: 3.7GHz – 4.6GHz
6-Cores, 12 Threads
32MB L3 Cache
PCIe 4.0
65W TDP
~$279
Buy from Amazon
AMD Ryzen 7 5800X​The new octa-core champion, the Ryzen 7 5800X, takes on Intel's new 11th-gen Core i9-11900K. While both offer broadly similar performance, AMD is selling the 5800X for over $100 less than Intel's chip. That in itself is a huge point to consider, especially since the chip shortage has led to consumers hunting for products left, right, and center. Additionally, as with the 5600X, this one also draws comparatively less power thanks to the 7nm process.
Clock speeds: 3.8GHz – 4.7GHz
8-Cores, 16 Threads
32MB L3 Cache
PCIe 4.0
105W TDP
~$420
Buy from Amazon
AMD Ryzen 9 5900X​The latest top-of-the-line CPU offering from AMD in 2021, the 12-core Ryzen 9 5900X knocks Intel's latest Core i9-11900K, and even last year's 10-core Core i9-10900K, out of the park in almost every single aspect. It not only offers a better performance package, but it also manages power and thermals more efficiently thanks to the 7nm process. It is currently selling for more than AMD's suggested price, but it is totally worth it and should last you for years to come.
Clock speeds: 3.7GHz – 4.8GHz
12-Cores, 24 Threads
64MB L3 Cache
PCIe 4.0
105W TDP
~$549
Buy from Amazon
Best APU: AMD Ryzen 5 3400G​The chip shortage continues to haunt us, with most gamers unable to get hold of a new GPU. But if you are planning to build a budget gaming PC, then you should consider the Ryzen 5 3400G. Since it is an APU, it comes with integrated graphics that should be enough for 720p or 1080p gaming at low to medium settings. Additionally, both the CPU and GPU are unlocked, which means there is potential for overclocking them as well. AMD has announced that its latest APUs, the Ryzen 7 5700G and Ryzen 5 5600G, will hit stores in August. These are going to be much better than the 3400G, so hold on to your money if you can.
Clock speeds: 3.7GHz – 4.2GHz
4-Cores, 8 Threads
4MB L3 Cache
PCIe 3.0
Radeon RX Vega 11 Graphics
65W TDP
~$149
Buy from Amazon
These are currently the best AMD processors for gaming. You might point out that there is also the Ryzen 9 5950X, but that would just be overkill for a gaming rig. For a more balanced setup, go for the Ryzen 5 5600X if gaming is your only purpose. If you plan to game alongside tasks like streaming and video rendering, then get the Ryzen 7 5800X, or the 5900X if your budget allows.

More recently, the Ryzen 5 5600G and Ryzen 7 5700G have become the better APU picks.

The 5800X is the best pick for pure gaming, posting better scores than the 5600X; its single-CCX (8-core) layout also gives it the lowest core-to-core latency.

Related

[INFO] Intel's pushing for Android ...

The following article is not even remotely related to E4GT (or Samsung for that matter) but I found it very interesting... There's a strong possibility of Intel dominating all mobile processors starting 2014 - 2015 ...
http://liliputing.com/2012/04/intel-pushes-atom-chip-for-android-devices.html
EDIT: I just noticed that the website (or maybe the user) removed the second post that I copied below.
You can skip the actual article, but read the comments (from user CyberGusa):
As for what advantages Intel can start to offer, it's what Chippy from UMPCPortal would call High Dynamic Range Computing (HDRC). Unlike ARM, Intel is fully capable of scaling from the mobile range to the full desktop range.
This will be especially true if Windows 8 is successful, as x86 can offer legacy support where ARM can't, and can provide the higher range performance that ARM is still many years away from being able to provide as their high end next gen offerings will only rival the present gen Intel ATOMs.
MS in particular is patenting a way to easily switch between CPUs when docking, so a Windows system could literally scale from mobile to laptop and even desktop just by docking it.
The closest ARM-based devices will get to this scaling is switching from an ARM chip to a higher-end Intel or AMD chip when docked, but this will also involve switching from a mobile OS to a desktop one to fully take advantage of the switch.
Though Google is making progress towards making Android a more desktop-friendly OS, with Webtop and similar UI optimizations that take over when docked. That would allow Android to take advantage of such scaling, but it would still be more limited than switching to a true desktop OS that isn't designed with the limits a mobile OS has to deal with, no matter how the UI is altered and optimized.
Failure of Windows 8 though could well give ARM the advantage.
Intel though is hedging its bets with support for Android and of course the Tizen project. They already bought a company last year that provides them the option to easily switch between two OS instantly, without rebooting.
While they are compensating for what advantages ARM has over them by keeping ahead of the manufacturing shrink curve by at least a year.
So while ARM is heading towards 32 and 28nm production, Intel is heading toward 22nm, and that combined with the architectural updates could potentially start giving Intel the edge.
Mind also that there have been problems with 28nm production, and Intel has strategically not helped ARM with this issue. So the timetable, for many, gives ARM limited time for market penetration before Intel will be able to come out with its own 22nm chips; 14nm is scheduled for 2014.
Also consider that it's not just the general consumer market at stake here but also the embedded and server markets, which could give Intel more of an advantage, considering that x86 hardware can run pretty much any OS while ARM is still limited to OSes already optimized for it.
While ARM is also depending on Windows 8 being a success to provide it a mainstream desktop OS to provide the ability to start competing in the traditional PC markets, and thus would also be negatively effected if Windows 8 fails.
So while ARM is looking good for the rest of this year, it remains to be seen if that will remain true next year and Intel should never be underestimated.
More comments from the same user (CyberGusa):
Right now Intel only has dual core in their higher end ATOM lineup and up to 8 cores for the server market, neither of which are competing with ARM yet.
The upcoming dual core Medfield is mainly just planned for the Tablet market and shouldn't effect the Smart Phone market.
So the main advantage of ARM solutions right now is that they're much more mainstream for the mobile market, with Intel only beginning to compete for the first time. Much like Nvidia when they first introduced the Tegra, slow beginnings are not indicative of how they will do in a year or two.
While, as already mentioned, the Intel ATOMs are still using pretty much the same architecture as when the line was first introduced to the market in 2008. This is like comparing the Cortex A15 to the older Cortex A8 based ARM chips and having the Cortex A8 solution still hold its own.
So having it even come in the same ball park is actually a testament to how much ARM still has to catch up for the higher performance range they're only now entering.
Mind, beating the ATOM isn't really hard, as that's the bottom of Intel's chip offerings, with the Core i-Series offering multiple times better performance that ARM is still years away from even getting close to.
While the next gen ATOM's coming out next year are Intel's equivalent of a A15 update to the ATOM. Introducing many of the technology they developed for Ivy Bridge to the ATOM.
Like Intel's Tri-Gate Transistors, a HD 4000 based GMA, putting the entire lineup under SoC, offering a wider range of processor configurations, finally adding Out Of Order Processing to the ATOM, among many other improvements.
While ARM manufacturers are having problems, the delay in moving to 28nm being the most outstanding right now, which is why many are still opting for 32nm. Especially those who have yet to deal with the increased problem of power leakage as they continue to shrink the FAB.
Even Apple is still on 45nm with their latest iPad and had to increase the battery size by 70% to compensate for the increased power consumption the Retina display and the quad-core GPU require.
So they may up their game, but it's going to get harder for them from here on out, as ARM was designed for low power and low performance and needs time to evolve to be able to apply itself to higher-end applications.
While Intel already dominates the higher end and just wants to start penetrating into the lower end and that's going to be arguably easier for them to do than for ARM to keep on increasing its performance.
Mind, ARM is still a 32bit architecture and only recently introduced designs for 64bit. This means they're still years away from going fully 64bit and for now we're only going to see enhancements like 64bit memory management.
While it's not easy to continue providing increasing performance and still keep costs and power consumption low. Also, ARM customizations have the downside of increased hardware fragmentation.
So it's not like Intel doesn't stand a chance, it's just going to take awhile to see if they can really start competing in the mobile market or have to stay in the higher end PC market.
Comment as you see fit, and keep in mind these are just opinions, not facts !!!
First...
Sent from my SPH-D710 using xda premium
Even if Intel is not on top by then they will make sure the bar is set high. Good read.
Sent from my SPH-D710 using xda premium
Good read. If Intel is truly interested in advancing the mobile field, I can see them doing big things in the future. At the very least, the competition they bring to the market will keep everyone else on their toes.
Transmission sent from a Galaxy S II, CODENAME style.
Intel will show other processor companies how it's done. Their technology is quality compared to AMD, but AMD tries to be more innovative. In the end I went with expensive Intel to build my computer.
Competition promotes innovation. I have read that the next few generations of processors are already developed, but they only release one at a time to guarantee profits and to not outrun what they have. So, with more chips in competition, this will help us see better processors faster. It will also lower cost. So, a phone might cost the same 4 years from now instead of more. I personally think it is a great idea. Even if their chips weren't much better, they still will help. It is a win-win for the consumer. Great article!
Sent from Team KC's founding member HTC Evo 4G LTE.
Oh, and Intel is known for making low-power processors. Can't wait to see what they do by making chips only nanometers big.
Sent from my SPH-D710 using xda premium
kc_exactly said:
Competition premotes innovation. [...] It is a win win for the consumer. Great article!
In my personal opinion, I think Intel does make outstanding processors, but their marketing skills are not customer friendly. Take for example the numerous options for the speed of a processor they sell...
Let's say the new processor X comes out with a speed of 1.6 GHz, and then 3 months later they come out with the same processor X but with an improved speed of 2.2 GHz at 40% increased cost... and after another 3 months they release the Black Edition processor X with the ultimate speed of 2.4 GHz at double or even triple the price of the original !!!
Do you honestly think they will redesign the production line just to make the new and improved Black Edition processor X ??? I don't think so... In my opinion, they're probably selling the exact same processor X from beginning to end, but they slow down the speed in the early versions and gradually release it to full potential ... In this way, they sell the same processor (which cuts down the design/engineering and production costs) yet stay very profitable and ahead of the market curve by announcing an improved product every 3 months !!!
In other words, the same processor X will sell as follows:
1st release) Speed minus 40% (no overclock) ... "Regular" price
2nd release) Speed minus 30% (no overclock) ... "Regular" price + 15%
...................................................................................
...................................................................................
Black Edition) Speed and overclock unlocked ... "Regular" price + 300%
The worst thing they ever did (starting with Core processors, such as i3, i5...) was to incorporate the video card into the processor and lock other video card vendors out of the system ... In this way, they sell the processor AND the video card at the same time, and there's no more competition !!! They call this bull **** integration something like "system on a chip" for better power consumption ... WHEN WAS THE FIRST TIME ANYONE LOOKED FORWARD TO INSTALLING AND BENCHMARKING INTEL VIDEO CARDS IN THEIR COMPUTERS ??? Why do you think AMD bought ATI, the video card manufacturer?
And you think it wouldn't get any worse? Recently Intel started to sell their TOP OF THE LINE PROCESSORS without integrated graphics ... That means that we, the consumers, have to PAY EXTRA FOR LESS PRODUCT just to get away from their marketing schemes !!!
In the end, we probably pay "regular" price when processor X is introduced, then it's all profits from there on for Intel.
Now back to cell phones ... think of the same scenario applied to your phone with Intel Inside ...
The above are just my personal opinions on Intel ... tell me if I'm wrong ! Say thanks if you believe I helped you open your eyes !
peryp9 said:
In my personal opinion, I think Intel does make outstanding processors, but their marketing skills are not customer friendly. [...] The above are just my personal opinions on Intel ... tell me if I'm wrong !
Since when does Intel have a Black Edition CPU?
And about locking other video card vendors out of the system, are you sure you know what you're talking about?
locoboi187 said:
intel will show other processor companies how its done. their technology is quality when compared to AMD. but AMD tries to be more innovative. in the end i went with expensive intel to build my computer
Intel can school everyone else on microprocessor development, manufacturing, budget, evolution... but x86 is known to be power hungry. I'm sure if they keep reducing their process (which they will) they can get x86 to match ARM, energy-consumption wise. But, on the same token, ARM will (WILL) get developed to a point where it matches x86 performance-wise.
It's anybody's race. It's early (yes, very early) in the mobile computing game. Intel could very well pull through with its very refined architecture... that's also regarded as crufty as fnck. The ARM architecture could very well also be refined to the point where it gets as many operations per clock... both neck-and-neck on power efficiency.
All said, I'm both excited and doubtful about Intel's ability. Microsoft is becoming irrelevant at an amazing speed... perhaps it's Intel's turn as well. Wintel? Armdroid? A mix of the two?
Exciting times. Bring on the competition.
Intel never had something called "Black Editions". They have processors known as "Extreme Editions", which are the highest-quality-bin CPUs that did not get chosen for the Xeon server line. These cost $999.
The next batch would be the second-highest binned ones, which cost ~$500. Then come the average ones, which passed all the tests but weren't as high quality as the higher-end models. These are the $200-300 ones.
The rest probably get thrown out.
Now, their integrated solution is a step forward in providing all-in-one solutions. They are not locking out video card makers, who make discrete chipsets that absolutely crush the integrated HD 2000/3000s. What makes these integrated solutions so attractive is that their Intel Quick Sync encoding and other features are literally mind blowing.
You probably don't even know what you're talking about... like seriously? BEs are AMD's... video card makers mainly ship dedicated cards with PCIe interfaces...
*Edit*
Intel innovates crazily when pushed heavily. AMD punished Intel for its Pentium 4 and forced them to either step up or be irrelevant, and step up they did... Conroe... Nehalem... Clarksfield... Sandy Bridge... Ivy Bridge...
I have no reason to believe that if ARM shoved Intel into a corner like AMD did, they wouldn't pounce like they did on AMD... let's just say, if history has taught us anything... I would feel really bad for ARM, given Intel's insane budgets, R&D, and advanced chipmaking facilities.
lilotimz said:
Intel never had something called "black editions". They have processors known as "Extreme editions"... [...] I would feel really bad for ARM due to intels insane budgets, R&D, and advanced chipmaking facilities..
Intel Black Edition ... Intel Extreme Edition ... the idea was "top of the line". Look at the point I'm trying to make, not the wrong words I used.
Intel may claim that their integrated graphics are great for many things, but look at the larger picture... you pay premium dollar for the ability to use the video card of your choice !!
EDIT: The cheapest processor comes with integrated graphics, while the most expensive one comes without it. I remember when I bought my laptop a few years back (1st-generation Intel i5). I was reading about Intel not allowing manufacturers to put in other cards in order to bypass the integrated one. In the end, I bought this Intel i5 laptop with an NVidia GeForce 325M with Optimus. Check how Optimus works with Intel's integrated graphics and you'll understand what I meant in my previous post.
In the end, the main point I'm trying to get across is that Intel's products are great (except their video cards), but their marketing scheme will hurt consumers if they take control of the mobile processor market.
All the info by the commenter not withstanding, I have a hard time taking anyone who uses "effect" instead of "affect" seriously.
Sent from my SPH-D710 using xda premium

Tegra 3 Overclock..?

I'm loving my Yoga 11, however at times I just feel that Windows RT slows down, especially when multitasking. Since our Tegras are clocked at 1.3GHz and the same chip in Android devices runs at 1.5GHz, with overclocked kernels available to run at 1.8-2.0GHz, what are the chances we see this type of hack/development come to Windows RT? I'm not sure what security obstacles that would present, but I haven't seen much on this to even know if someone has looked into it or is actively working on a method to do so.
Thanks!
I have been thinking about this as well. I'm sure it can be done, but by who? That's the question. I'm sure we can easily squeeze some more power out of our device. Good luck to whoever spearheads this.
ej_424 said:
I'm loving my yoga 11, however at times I just feel that Windows 8 RT slows down especially when multi-tasking. [...] what are the chances we see this type of hack/development come to windows 8 RT?
The Tegra isn't overclocked to 1.5GHz in Android devices. There are actually three models of the Tegra 3 at different clock speeds. The one used in the RT is the lowest model (1.2GHz), overclocked to 1.3GHz already. I believe the other models are 1.4GHz and 1.6GHz, with a few ROMs adding about a 100MHz overclock as needed. 2GHz seems extreme, though.
SixSixSevenSeven said:
The tegra isnt overclocked to 1.5 in android devices. [...] 2ghz seems extreme though.
I've thought about this as well but have always been too scared to ask. Windows is obviously not foreign to processor scaling and power management; perhaps there's a way to make a custom power plan or something. Maybe the way to approach overclocking is not "like" Android, but "like" regular old Windows. I have no idea and am a noob, but I thought I'd just toss that out there.
SixSixSevenSeven said:
The tegra isnt overclocked to 1.5 in android devices. [...] 2ghz seems extreme though.
http://www.nvidia.com/object/tegra-3-processor.html
Its support for Windows RT is still under development. It isn't overclocked on the Surface RT/Vivo Tab but underclocked, to compensate for the missing support for the fifth battery-saver core.
We should expect the performance and battery life to get better as they iron this out :laugh:
Actually, for those who have had a Surface RT since launch... I bet most of you have already experienced better performance after each monthly firmware update.
LastBattle said:
http://www.nvidia.com/object/tegra-3-processor.html
Its support for Windows RT is still under development. It isn't overclocked on the Surface RT/Vivo Tab but underclocked to compensate for the missing support for the fifth battery saver core. [...]
That's very good news indeed, and we should then probably be able to run the tablet as a 1.6GHz quad core instead of the current 1.3GHz quad core :good:
LastBattle said:
http://www.nvidia.com/object/tegra-3-processor.html
Its support for Windows RT is still under development. It isn't overclocked on the Surface RT/Vivo Tab but underclocked to compensate for the missing support for the fifth battery saver core. [...]
Nowhere in that link does it mention it being underclocked. The 1.4GHz single-core/1.3GHz quad-core split is a feature of the entire Tegra product line, not just the Surface RT.
It does mention that the 5th battery-saver core doesn't work on Windows RT though; that will help.
Interesting: there is a "~MHz" value in regedit under HKEY_LOCAL_MACHINE\HARDWARE\DESCRIPTION\System\CentralProcessor\0 (and 1, 2, 3). It is set to 1300, but changing it doesn't do anything and it reverts upon reboot.
Even if we can't overclock this thing, is there a way to resurrect the "High Performance" power plan that disappeared in RT? One that would set the CPU to 100% by default, all the time?
Any update or more info on this?
bigsnack said:
Any update or more info on this?
+1
Hope to see a "high performance" feature in the power management as well, especially when we are running RT on mains power and battery life is not so much of an issue.
Rogerngks said:
hope to see a 'high performance' feature on the pwr mgnment as well, especially when we are hooking up RT onto the power line and battery life is not so much of an issue in this case.
IIRC, you can still set your CPU states through powercfg on the command line. I might be wrong though.
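For reference, on desktop Windows the processor state can be pinned from an elevated prompt roughly like this. `SCHEME_CURRENT`, `SUB_PROCESSOR`, and `PROCTHROTTLEMIN`/`PROCTHROTTLEMAX` are the documented desktop powercfg aliases; whether Windows RT ships powercfg with these options is an assumption, since RT strips many desktop tools:

```shell
# Sketch for desktop Windows; untested on Windows RT, where powercfg
# may be missing or restricted. Run elevated.

# Pin the minimum and maximum processor state to 100% on AC power
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 100

# Re-apply the current scheme so the changed values take effect
powercfg /setactive SCHEME_CURRENT
```

This effectively recreates the missing "High Performance" behavior inside whatever plan RT exposes, rather than resurrecting the plan itself.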
Is the 5th power saving core just disabled or not present on our hardware?
bigsnack said:
Is the 5th power saving core just disabled or not present on our hardware?
According to NVidia's website, Tegra 3 for RT is "still under development." (http://www.nvidia.com/object/tegra-3-processor.html) It also lists it as only being quad-core on Windows 8 devices.
I had personally really hoped that one of the highlights for RT 8.1 was going to be reworked support for the 5th core, bringing performance and battery life improvements. Alas, it was not to be.
jtg007 said:
According to NVidia's website, Tegra 3 for RT is "still under development." It also lists it as only being quad-core on Windows 8 devices. [...] Alas, it was not to be.
I can't see how the 5th core would bring a performance improvement. The system cannot use the 5th core as an actual 5th core: it shuts most of the other cores down to sleep when it needs the 5th, which is also an incredibly low-performance core. It's just for power saving, really, or simply hopping around the UI and checking your email. NVidia claims that Android can also play video while running purely on the 5th core, although this never happened on my Nexus 7; even without any other apps running, it carried on using one of the main cores for that.
It would definitely boost battery life though, and that's not something to be ignored. But there are few times where that 5th core really comes into its own; perhaps it just wasn't worth the time for MS to add companion-core support to Windows RT 8.1 when not all RT tablets use the Tegra.
SixSixSevenSeven said:
I can't see how the 5th core would bring a performance improvement. The system cannot use the 5th core as an actual fifth core; it puts most of the other cores to sleep when it switches to the 5th, which is also an incredibly low-performance core. It's really just for power saving, or for simply hopping around the UI and checking your email. NVidia claims that Android can also play video while running purely on the 5th core, although this never happened on my Nexus 7 even without any other apps running; it carried on using one of the main cores for that.
It would definitely boost battery life though, and that's not something to be ignored. But there are few times where that 5th core really comes into its own; perhaps it just wasn't worth the time for MS to add companion core support to Windows RT 8.1 when not all RT tablets use the Tegra.
I always thought that the 5th core could run simultaneously with the other four to manage background tasks, etc., thus leaving less side work for the others. I could be wrong though. Also, I know of only one RT tablet that did NOT use Tegra (Dell), and it was the first to drop in price and flop.
Anyway, the exciting thing about the kexec/Linux prospects is that if we were to get in, there are a lot of Android and Linux versions that run on Tegra 3, which hopefully means we wouldn't have too tough a time getting that 5th core working.
Well, the Samsung Ativ Tab RT also used the S4 CPU, but that device seems to have had a limited release in North America. I too was under the assumption that the 5th core could be used at the same time as the other cores, which could free up power for other things: the 5th core handling a low-power task while the other four cores are used for a more process-heavy task.
It would be interesting to have Android or Linux running in a dual-boot situation on our RT devices, or, if even possible, to do what Samsung is doing and have it emulated in Windows so you can run apps side by side.
No, the 5th core is not an actual fifth core. The idea is that you have four full-blown cores at 1.2, 1.4, or 1.6GHz depending on the Tegra model (the Tegra can then automatically overclock to 1.3, 1.5, or 1.7GHz), which is quite power hungry really. But as CPU usage falls, the Tegra shuts a few cores off; if the system can't benefit from all four cores being active, it will drop to three, then two, then one. Sometimes even that one core running at 1.2GHz is comparatively power hungry, so the Tegra shuts the final core down and fires up the companion core, which I think runs in the 700MHz range. It's slow at any rate, and it's also built and optimised purely for power consumption over performance. The idea is that you get a full quad-core chip when you need the performance, but when the device is idling you can switch over to the companion core, shut the main four off, and save a lot of power.
NVidia claims that the companion core, combined with the Tegra's hardware video acceleration, should be able to play HD videos on its own. That doesn't really seem to happen outside of the lab. But when you lock the screen on your Android device it often jumps into companion-core mode, and you can browse around the Android home screen and use a few lightweight apps on the companion core no problem; when it does begin to struggle, the Tegra just has to skip over to its main core and gradually bring the other three main cores online as it needs them.
It never has the companion and main cores powered on simultaneously in a state usable by the operating system, though.
Samsung's so-called octa-core chips do the same. They aren't really octa-core chips; in reality they are a quad-core Cortex-A15 and a lower-clocked quad-core Cortex-A7 on the same piece of silicon. When CPU load is high it runs as a quad-core A15; when it doesn't need so much performance, it shuts down the A15 cluster and swaps to the A7. The two CPU clusters are nearly separate, and at any one time the chip is only running as a single quad-core processor, not an octa-core. Similar to the companion-core design, this can lead to a massive boost in battery life. In both A15 and A7 modes the processor is capable of shutting down individual cores as needed.
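The exclusive-cluster behaviour described above (a companion core or a big.LITTLE cluster, but never both clusters at once) can be sketched as a toy governor. Everything here is illustrative: the thresholds, the scaling rule, and the function itself are invented for the example, not taken from any actual Tegra or Exynos firmware.

```python
# Toy model of exclusive cluster switching: either the low-power
# cluster is online or the main cluster is, never both at once.
# Thresholds are made up for illustration.

def active_cluster(load_percent: float) -> dict:
    """Pick which cluster is powered for a given CPU load."""
    if load_percent < 15:
        # Idle UI, email: run only the low-power companion/LITTLE cluster.
        return {"cluster": "low_power", "cores_online": 1}
    # Real work: power the main cluster and scale 1-4 cores with load.
    cores = min(4, 1 + int(load_percent // 25))
    return {"cluster": "main", "cores_online": cores}

for load in (5, 30, 60, 95):
    print(load, active_cluster(load))
```

Note that the operating system never sees five (or eight) cores at once: the governor only ever reports one cluster as online, which is the whole point of the design.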
Tegra may well be the chip in all the main tablets, but when Microsoft first started working on Windows RT there were meant to be Qualcomm Snapdragon, NVidia Tegra, and Texas Instruments OMAP devices all coming to market, so of course Microsoft needed RT to run on all three at the time. The original plan was that there would be five 3rd-party manufacturers making RT tablets, two per chip vendor except TI. Originally Qualcomm partnered with HP and Samsung, NVidia with Lenovo and Asus, and Toshiba with TI. In the end TI dropped out and shortly after scaled down OMAP production (I think it has completely stopped now with the exception of existing contracts, or at least the chips intended for tablet use have; they had a few industrial chips under the OMAP branding that might still be available, and their ARM-based microcontroller and DSP lines are still going fine), and TI took Toshiba with them. Of course, by the time TI dropped out there were already running builds of RT. HP dropped out and was replaced by Dell. Acer was slated to join the program but didn't; when MS unveiled the Surface, that killed it for Acer.
Another limitation is that Windows RT is essentially just an ARM port of Windows 8, and Windows 8 and the NT kernel in general didn't already have support for the companion core or similar tech. It would be pointless adding it to the base NT kernel as hardly any devices use it, and it would probably lead to issues introducing it only for Tegra.
Surely Microsoft can see that getting the maximum out of the CPUs in their own devices is a good thing? I get that they have to support a few ARM architectures, but there's no reason why Windows RT can't be optimised with a specific update for the Surface?
bydandie said:
Surely Microsoft can see that getting the maximum out of the CPUs in their own devices is a good thing? I get that they have to support a few ARM architectures, but there's no reason why Windows RT can't be optimised with a specific update for the Surface?
It would be a maintenance nightmare. You know the way everyone gripes and moans about the non-existent (or at the very least hugely exaggerated) Android fragmentation? Now apply that to Windows RT; it's already a struggling platform. You don't want more ammo for the opposition, and the extra effort probably isn't worth it. Under sleep mode or single-core mode (non-companion; RT will happily scale back to a single non-companion core) the battery life is good enough. Companion-core support would be nice, but it's non-essential, and it would need to be supported at the kernel level. It would be a nightmare to maintain one version of the kernel for each tablet (if you don't know what a kernel is, consider it the chassis of a car or the foundations of a house: the very core of the operating system).

How does the PowerVR G6430 Rogue compare to top-class GPUs like the Adreno 405?

Is the PowerVR G6430 any good compared to Adreno GPUs?
http://www.gsmarena.com/apple_iphone_5s_vs_lg_g2_vs_nokia_lumia_1020-review-997p5.php
It's the same GPU used in the iPhone 5s. Based on this benchmark, it's better than the Adreno 330, I think.
The Adreno 405 isn't a top-class GPU. Going by GFLOPS numbers, the 405 is better than the 1st-gen Adreno 320 (S4 Pro, S4 Prime) and weaker than the 2nd gen.
But that's all just benchmarks; the most important thing is user experience, and last but not least, optimization.
GrandpaaOvekill said:
Is the PowerVR G6430 any good compared to Adreno GPUs?
The Adreno 405 has only about half the power of the PowerVR G6430.
The Adreno 405 is a mid-range GPU,
while the PowerVR G6430 and the Adreno 320, 330, and 420 are last year's and current flagship GPUs.
GPUs are mostly rated by GFLOPS:
http://kyokojap.myweb.hinet.net/gpu_gflops/
And each Adreno generation has basic, mid, and high-power GPUs.
The Adreno 405 is 4th generation ("05" means basic) and can match the 3rd-gen mid tier.
The Adreno 420 is 4th generation ("20" is mid) and can match the 3rd-gen high-end GPU.
See the GFLOPS of each in the link above.
And yes, optimization matters most for gaming.
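The naming convention described above (first digit = generation, last two digits = tier within that generation) can be captured in a couple of lines. The tier labels here are my reading of the post, not an official Qualcomm scheme.

```python
# Decode an Adreno model number per the convention described above:
# hundreds digit = generation, remainder = tier (05 basic, 20 mid, 30 high).

def decode_adreno(model: int) -> dict:
    generation, tier = divmod(model, 100)
    label = {5: "basic", 20: "mid", 30: "high"}.get(tier, "unknown")
    return {"generation": generation, "tier": tier, "label": label}

print(decode_adreno(405))  # generation 4, basic tier
print(decode_adreno(420))  # generation 4, mid tier
```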
The PowerVR G6430 in the Zenfone 2 is clocked higher than in the iPhone 5s but lower than in the iPads and the Atom 3570. Its performance sits between the Adreno 330 and 430, which is excellent given that it was designed in 2012 and released in 2013. Reclocking it at 640MHz like its 3570 brother should give it a nice run for its price; still, technically, it won't be as fast as the Adreno 430. However, in real-world usage, coupled with a more powerful Intel CPU, it should match it, as the CPU is able to extract more GPU power.
If you are really looking for the most powerful mobile GPU, the Nvidia Tegra X1 is at the top, at close to twice the performance of the Adreno 430, the GPU in the top Qualcomm 810. In AnTuTu it only scores 75K because the CPU is slower than others like Intel's, but 75K is still unbeaten for the moment. Surely, Nvidia and ATI have much more experience in the GPU domain, so it's not surprising that they are the fastest.
Now, if only ATI would partner with Intel to provide us with 14nm goodies :angel:
P.S.: To give a broader picture, the Tegra X1 chip is close to twice the performance of a PS3, which is astonishing considering its small size and 2W max power consumption.
The Nvidia Shield TV, based on the Tegra X1, has an active cooling system.
So how can it be compared to phone SoCs?
My bad, I thought it was the chip in the Nvidia Shield tablet. It's its Kepler-based brother, the Tegra K1, that is currently used there, but at 365 GFLOPS on Nvidia's website it still competes with the Adreno 430. Note that the PS3 was 192 GFLOPS.
An interesting fact is that the Tegra X1 actually draws much less power at idle and slightly less power (1W less than Kepler) at load; Kepler would peak at 11W. That's thanks to the efficiency of the new 20nm Maxwell cores. The Nvidia Shield TV has many more and larger components to power, and it's also surely clocked higher.
''According to Nvidia, the power consumption in a tablet powered by Tegra X1 will be on par with Tegra K1. In fact, idle power consumption will be even lower thanks to the various architecture improvements. Tegra K1 was designed to operate at around 5-8 watts, with infrequent peaks up to 11 watts when running stressful benchmarks, so the X1 will be well within the realm of tablet power requirements.'' Source: greenbot.com
Here's this too: http://www.pcper.com/reviews/Processors/NVIDIA-Announces-Tegra-X1-Maxwell-Hits-Ultra-Low-Power
I really like the fact that PC manufacturers are entering the mobile market; after all, they have been building computer components for ages. This will open the door to more powerful and cheaper SoCs, especially because they have the ability to mass-produce and develop the latest tech with many factory plants worldwide.
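The ~365 GFLOPS figure quoted for the Tegra K1 can be reproduced with the usual peak-throughput formula: shader cores × clock × 2 FLOPs per cycle (one fused multiply-add counts as two floating-point operations). The ~950MHz GPU clock below is an assumption based on commonly reported K1 speeds, not a figure from this thread.

```python
# Peak GFLOPS = shader cores * clock (GHz) * FLOPs per cycle (2 for FMA).

def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    return cores * clock_ghz * flops_per_cycle

k1 = peak_gflops(192, 0.95)  # 192 Kepler cores at an assumed ~950 MHz
print(round(k1, 1))          # ~364.8, close to the ~365 GFLOPS quoted
```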
aziz07 said:
My bad, I thought it was the chip in the Nvidia Shield tablet. It's its Kepler-based brother, the Tegra K1, that is currently used there, but at 365 GFLOPS on Nvidia's website it still competes with the Adreno 430. Note that the PS3 was 192 GFLOPS.
An interesting fact is that the Tegra X1 actually draws much less power at idle and slightly less power (1W less than Kepler) at load; Kepler would peak at 11W. That's thanks to the efficiency of the new 20nm Maxwell cores. The Nvidia Shield TV has many more and larger components to power, and it's also surely clocked higher.
''According to Nvidia, the power consumption in a tablet powered by Tegra X1 will be on par with Tegra K1. In fact, idle power consumption will be even lower thanks to the various architecture improvements. Tegra K1 was designed to operate at around 5-8 watts, with infrequent peaks up to 11 watts when running stressful benchmarks, so the X1 will be well within the realm of tablet power requirements.'' Source: greenbot.com
Here's this too: http://www.pcper.com/reviews/Processors/NVIDIA-Announces-Tegra-X1-Maxwell-Hits-Ultra-Low-Power
I really like the fact that PC manufacturers are entering the mobile market; after all, they have been building computer components for ages. This will open the door to more powerful and cheaper SoCs, especially because they have the ability to mass-produce and develop the latest tech with many factory plants worldwide.
Maxwell can be very power hungry when you clock it all the way up, and the X1 has more CUDA cores than the K1: the X1 has 2 SMMs with 256 cores total, while the K1 only has 1 SMX with 192.
Also, PC manufacturers have always been in the mobile market; you could even say they started the mobile market. For instance, Apple was a PC manufacturer; Steve Jobs dedicated 70% of his life to PCs rather than phones. Samsung makes everything and has a lot of experience making notebooks too. So the two most powerful (or most successful) players in the mobile sector are also PC manufacturers. What do you mean by PC manufacturers entering the mobile market?
It's getting off topic, but neither Intel nor Apple was the first to build a cell phone. Intel was the first company to build a CPU, though; Motorola built the first cell phone.
On a side note, Apple never really built anything itself except the aesthetics; it started with IBM building for them after a lack of success with Synertek for a couple of months. By the way, Samsung does not manufacture PC CPUs or GPUs. The only CPU they build is the Exynos for mobile. I think you misinterpreted the fact that they sell laptops; yes, they do, but they are not the ones building the major components, Intel and AMD are. They may build the memory components, but not the CPU or GPU.
You are seeing the technology the other way around. If we take, let's say, a two-year-old GPU and a new one, the new one can have double the transistor and component count yet still consume less power. It's about architecture efficiency and the transistor process node; e.g., the Intel chip in our Zenfone 2 is built with 3D 22nm transistors, which are more power efficient. That's how tech flows.
Anyway, Apple is slowly declining, Intel is building their PC segment, replacing IBM, and Samsung is building their next iPhone and taking care of the mobile segment. We can already see what's next.
I have been building PCs for over 15 years; it's my hobby.
@mods: there should be a ''resolved'' button just like on other forums, so threads don't get cluttered lol
GrandpaaOvekill said:
Is the PowerVR G6430 any good compared to Adreno GPUs?
I know benchmarks aren't everything, but GFXBench gives a good idea of the performance difference between the two. Basically, the PowerVR G6430 is much more powerful than the Adreno 405.
PowerVR G6430:
https://gfxbench.com/result.jsp?ben...VR Rogue G6430&base=device&ff-check-desktop=0
Adreno 405:
https://gfxbench.com/result.jsp?ben...ter=Adreno 405&base=device&ff-check-desktop=0
Here are some videos of a Zenfone 2 against a phone that uses the SD 615/Adreno 405 combo:
https://www.youtube.com/watch?v=N3DcRHXrTHg
https://www.youtube.com/watch?v=TYZr53U2Tfk
Hope this helps.

Apple's A10 chip for the iPhone 7 appears on Geekbench, with single-core performance on par with the A9X

We've had plenty of information about the design of the iPhone 7, but what about its performance? Recently, the iPhone 7's A10 processor appeared on Geekbench in a single-core performance test, and the results show it on par with Apple's A9X.
The A10 chip is expected to be produced on TSMC's 16nm FinFET process. Based on the numbers in the chart, the A10's single-core performance is about 20% higher than the A9's, and almost on par with the A9X used in both versions of the iPad Pro, which is currently Apple's most powerful processor.
Compared with previous generations, the A10 does not represent a huge jump in power; when the A9 launched, it really was a big step forward compared to the A8. Still, with single-core performance on par with the A9X, the A10 is genuinely strong; its multi-core performance remains unknown.
It is also rumored that the A10 will use a "fan-out" packaging method to reduce the size of the chip. Thanks to the 16nm manufacturing process, many people are hoping that the A10's power consumption will be lower than the previous generation's.
Besides, a popular Weibo account known for information about mobile processors has said that the A10 is still a dual-core chip with strong single-core performance.
The Apple iPhone 7 is said to be launching in September in at least two versions, of which the premium version will be equipped with dual cameras. TECHRUM will continuously update information about this eagerly anticipated smartphone.
source: Techrum.vn

Best AMD processors for performance

The past couple of years have seen AMD gain a better grip on the CPU market with its Ryzen series. While the Ryzen 3000 series of processors competed strongly against Intel last year, the latest generation has become the favorable choice of many thanks to its excellent performance. Gamers, PC building enthusiasts, and even professionals prefer the Ryzen 5000 series over Intel. One of the reasons for that is AMD's Zen 3 architecture built on the 7nm node, whereas Intel has been stuck on its 14nm process for the past six years.
Let's check out the best AMD CPUs for performance.
AMD Ryzen 9 5950X
AMD continues to bring high-end desktop (HEDT) class performance to mainstream users with the Ryzen 9 5950X. Featuring 16 cores and 32 threads, it is one of the most powerful processors from the company. It isn't affordable by any means, especially when you look at the $799 price tag, but compared to other competitive HEDT processors this is actually a really good price. If you don't want to jump over to the Threadripper series, then this is your best bet.
Clock speeds: 3.4GHz – 4.9GHz
16-Cores, 32 Threads
64MB L3 Cache
PCIe 4.0
105W TDP
~$920
Buy from Amazon
AMD Ryzen 9 5900X
Sitting below the 5950X is the 12-core Ryzen 9 5900X, which gives Intel's latest Core i9-11900K a run for its money. It's an incredibly powerful processor for gaming and creative workloads, and at the same time it manages power and thermals more efficiently thanks to the 7nm process. The processor delivers more performance per watt than the 8-core 11900K. The only issue is that the Ryzen 9 5900X is difficult to get hold of and is currently selling above AMD's suggested price.
Clock speeds: 3.7GHz – 4.8GHz
12-Cores, 24 Threads
64MB L3 Cache
PCIe 4.0
105W TDP
~$680
Buy from Amazon
AMD Ryzen 7 5800X
It is neck and neck when comparing the Ryzen 7 5800X with Intel's Core i7-11700K. While it is slightly more expensive than its Intel counterpart, it's worth paying extra as it offers faster gaming performance and almost the same performance in multi-core CPU tasks. There is also the additional benefit of the 5800X's lower power consumption, which means it can reach its full performance potential even on less expensive motherboards.
Clock speeds: 3.8GHz – 4.7GHz
8-Cores, 16 Threads
32MB L3 Cache
PCIe 4.0
105W TDP
~$400
Buy from Amazon
AMD Ryzen 9 5980HX
The newly launched AMD Ryzen 9 5980HX laptop CPU is part of AMD's Ryzen 5000 'Cezanne' generation and is targeted at high-performance laptops. The octa-core processor comes with a base clock of 3.3GHz and a boost clock of 4.8GHz. The TDP is rated at 45W, which is quite impressive for a processor this powerful. According to AMD, thanks to the Zen 3 architecture, the new 5000 series has made significant leaps in IPC compared to the previous generation, with an average IPC gain of 19 percent.
Clock speeds: 3.3GHz – 4.8GHz
8-Cores, 16 Threads
16MB L3 Cache
PCIe 4.0
35-45W TDP
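AMD's quoted 19% average IPC uplift compounds with clock speed, since single-thread throughput scales roughly as IPC × frequency. A rough sketch, with the Zen 2 baseline IPC normalised to 1.0 and the clock values chosen for illustration rather than measured from real boost behaviour:

```python
# Single-thread throughput ~ IPC * clock. Zen 2 IPC is normalised to 1.0;
# clocks are illustrative, not measured boost behaviour.

def rel_perf(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz

zen2 = rel_perf(1.00, 4.4)   # hypothetical Zen 2 mobile part
zen3 = rel_perf(1.19, 4.8)   # 19% higher IPC at the 5980HX boost clock
print(f"uplift: {zen3 / zen2 - 1:.1%}")  # roughly 30% combined
```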
