How does the PowerVR G6430 (Rogue) compare to top-class GPUs like the Adreno 405? - ZenFone 2 Q&A, Help & Troubleshooting

Is the PowerVR G6430 any good compared to Adreno GPUs?

http://www.gsmarena.com/apple_iphone_5s_vs_lg_g2_vs_nokia_lumia_1020-review-997p5.php
That's the same GPU used in the iPhone 5s. Based on this benchmark, I think it's better than the Adreno 330.
The Adreno 405 isn't a top-class GPU. Going by GFLOPS numbers, the 405 is better than the 1st-gen Adreno 320 (S4 Pro, S4 Prime) and weaker than the 2nd-gen version.
But that's all benchmarks; what matters most is user experience and, last but not least, optimization.

GrandpaaOvekill said:
Is the PowerVR G6430 any good compared to Adreno GPUs?
The Adreno 405 has only about half the power of the PowerVR G6430.
The Adreno 405 is a mid-range GPU, while the PowerVR G6430 and the Adreno 320, 330, and 420 are last year's and this year's flagship GPUs.
GPUs are mostly rated by GFLOPS:
http://kyokojap.myweb.hinet.net/gpu_gflops/
Each Adreno generation has a basic, a mid, and a high-power GPU.
The Adreno 405 is 4th generation ("05" means basic) and can match a 3rd-gen mid GPU.
The Adreno 420 is 4th generation ("20" means mid) and can match a 3rd-gen high GPU.
See the GFLOPS of each in the link above; a rough sketch of the math is below.
And yes, optimization matters most for gaming.
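To make the GFLOPS rating concrete, here is a minimal Python sketch of how those peak numbers are usually derived: FP32 ALU lanes x 2 ops per cycle (multiply-add) x clock. The ALU counts and clocks are illustrative assumptions, not official specs, so check the linked table for real figures; with these toy inputs the 405 lands well below the G6430's peak, in the same spirit as the post above.

# Toy peak-GFLOPS estimate: FP32 ALUs x 2 ops/cycle (multiply-add) x clock in GHz.
# ALU counts and clocks are assumptions for illustration, not official specs.
def peak_gflops(alus, clock_ghz):
    return alus * 2 * clock_ghz

for name, alus, clock_ghz in [
    ("PowerVR G6430 (assumed 128 ALUs @ 0.533 GHz)", 128, 0.533),
    ("Adreno 405 (assumed 48 ALUs @ 0.550 GHz)", 48, 0.550),
]:
    print(f"{name}: ~{peak_gflops(alus, clock_ghz):.0f} GFLOPS")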

The PowerVR G6430 in the ZenFone 2 is clocked higher than in the iPhone 5s but lower than in the iPads and the Atom Z3570. Its performance sits between the Adreno 330 and 430, which is excellent given that it was designed in 2012 and released in 2013. Reclocking it to 640MHz like its Z3570 sibling should give a nice run for its price; still, technically, it won't be as fast as the Adreno 430. However, in real-world usage, coupled with a more powerful Intel CPU, it should match it, as the CPU is able to extract more GPU power.
If you are really looking for the most powerful mobile GPU, the Nvidia Tegra X1 is at the top, at close to twice the performance of the Adreno 430, the GPU in Qualcomm's top Snapdragon 810. In AnTuTu it only scores 75K because the CPU is slower than others like Intel's, but 75K is still unbeaten for the moment. Of course, Nvidia and ATI have much more experience in the GPU domain, so it's not surprising that they are the fastest.
Now, if only ATI would partner with Intel to provide us with 14nm goodies :angel:
P.S.: For a broader picture, the Tegra X1 chip is close to twice the performance of a PS3, which is astonishing considering its small size and 2W max power consumption.

The Nvidia Shield TV, based on the Tegra X1, has an active cooling system.
So how can it be compared to phone SoCs?

My bad, I thought it was in the Nvidia Shield tablet. It's its sibling, the Kepler-based Tegra K1, that is currently used there, but at 365 GFLOPS on Nvidia's website it still competes with the Adreno 430. Note that the PS3 was 192 GFLOPS.
An interesting fact is that the Tegra X1 actually draws much less power at idle and slightly less power (about 1W less than Kepler) at load; Kepler would peak at 11W. That's thanks to the efficiency of the new 20nm Maxwell cores. The Nvidia Shield TV has many more and larger components to power, and it's surely clocked higher too.
"According to Nvidia, the power consumption in a tablet powered by Tegra X1 will be on par with Tegra K1. In fact, idle power consumption will be even lower thanks to the various architecture improvements. Tegra K1 was designed to operate at around 5-8 watts, with infrequent peaks up to 11 watts when running stressful benchmarks, so the X1 will be well within the realm of tablet power requirements." Source: greenbot.com
Here's this too: http://www.pcper.com/reviews/Processors/NVIDIA-Announces-Tegra-X1-Maxwell-Hits-Ultra-Low-Power
I really like the fact that PC manufacturers are entering the mobile market; after all, they have been building computer components for ages. This will open the door to more powerful and cheaper SoCs, especially because they have the ability to mass-produce and develop the latest tech with many factory plants worldwide.

aziz07 said:
My bad, I thought it was in the Nvidia Shield tablet. It's its sibling, the Kepler-based Tegra K1, that is currently used there...
An interesting fact is that the Tegra X1 actually draws much less power at idle and slightly less power (about 1W less than Kepler) at load...
I really like the fact that PC manufacturers are entering the mobile market...
Maxwell can be very power hungry when you clock it all the way up, and the X1 has more CUDA cores than the K1: the X1 has 2 SMMs with 256 cores total, while the K1 only has 1 SMX with 192.
Also, PC manufacturers have always been in the mobile market; you could even say they started the mobile market. For instance, Apple was a PC manufacturer, and Steve Jobs dedicated most of his career to PCs rather than phones. Samsung makes everything and has a lot of experience making notebooks too. So the two most powerful (or most successful) players in the mobile sector are also PC manufacturers. What do you mean by PC manufacturers entering the mobile market?

It's getting off topic, but Intel and Apple weren't the first to build a cell phone. Intel was the first company to build a microprocessor, though; Motorola built the first cell phone.
On a side note, Apple never really built anything except the aesthetics; it started with IBM building for them after a lack of success with Synertek for a couple of months. By the way, Samsung does not manufacture PC CPUs or GPUs; the only CPUs they build are the Exynos chips for mobile. I think you misinterpreted the fact that they sell laptops. Yes, they do, but they are not the ones building the major components; that's Intel and AMD. They may build the memory components, but not the CPU or GPU.
You are seeing technology the other way around. Take, say, a two-year-old GPU and a new one: the new one can have double the transistor and component count yet still consume less power. It's about architecture efficiency and the transistor process node; e.g., the Intel chip in our ZenFone 2 is built with 3D 22nm transistors, which are more power efficient. That's how tech flows.
Anyway, Apple is slowly declining, Intel is building up their PC segment, replacing IBM, and Samsung is building their next iPhone and taking care of the mobile segment. We can already see what's next.
I have been building PCs for over 15 years; it's my hobby.
@mods: there should be a "resolved" button just like on other forums, so threads don't get cluttered lol

GrandpaaOvekill said:
Is the PowerVR G6430 any good compared to Adreno GPUs?
I know benchmarks aren't everything, but GFXBench gives a good idea of the performance difference between the two. Basically, the PowerVR G6430 is much more powerful than the Adreno 405.
PowerVR G6430:
https://gfxbench.com/result.jsp?ben...VR Rogue G6430&base=device&ff-check-desktop=0
Adreno 405:
https://gfxbench.com/result.jsp?ben...ter=Adreno 405&base=device&ff-check-desktop=0
Here are some videos of a ZenFone 2 against a phone that uses the SD 615/Adreno 405 combo:
https://www.youtube.com/watch?v=N3DcRHXrTHg
https://www.youtube.com/watch?v=TYZr53U2Tfk
Hope this helps.

Related

[Q] Tegra 3 VS Mali-400 ?

Hi,
Which is better: the Tegra 3 or the Mali-400?
I don't know, mate; this is what my phone is capable of now, after the update.
Sent from my HTC One X using xda premium
Well, it will be a race for sure.
Mali might be faster (or maybe not), but the Tegra 3 is definitely better, because it gets better, enhanced games. Developers develop for Tegra; they don't develop for Mali or Adreno.
One guy complained that Shadowgun looks better on my phone than on his iPad 3. I had to explain that I'm running the THD version, that we get those Tegra-enhanced games. That makes a difference.
The Tegra 3 will run all games; Adreno/Mali will require Chainfire3D with plugins to run Tegra games.
That's my view on it.
The Mali-400 is old now; surely it's not what the SGS3 is getting.
John.
Even if the SGS3 gets the Mali T-604, I will stick with Tegra 3 for now, unless I see games dedicated to the T-604, and more than just one.
http://www.engadget.com/2012/04/20/galaxy-s-iii-leak/
According to this, it will have the 400.
antipesto93 said:
http://www.engadget.com/2012/04/20/galaxy-s-iii-leak/
According to this, it will have the 400.
I didn't notice it mention the 400. But if true, people would find it disappointing, even though the 400 is still a serious piece of hardware. Given the 720p screen, performance would be worse compared to the SGS2.
The Mali's performance is the same as the Tegra 3's in the graphics benchmarks I've run on my Note vs. my Prime and my One X (which just goes to show how average the Tegra 3 GPU really is, I think: no better than something at least six months older). It's disappointing if it's not getting the upgraded GPU, if that leak is accurate, but it doesn't differentiate the products at all.
Tinderbox (UK) said:
The Mali 400 is old now
Technically, so is Teg3. S4 uses 28nm and the 4212 uses 32nm. Teg3 is two 45nm A9 chips glommed together because Nvidia wanted to be first to market with a next-gen chip. It's the least advanced of any of the three SoCs. From a GPU perspective none of the three really move the ball forward and are just evolutionary vs. revolutionary. If I had to guess best overall performance I’d say 4212, Teg3, and S4 in that order. Because S4 and the 4212 are on smaller dies they’ll be more efficient and handily beat Teg3 at battery life (except maybe at idle).
BarryH_GEG said:
Technically, so is Teg3. S4 uses 28nm and the 4212 uses 32nm. Teg3 is two 45nm A9 chips glommed together because Nvidia wanted to be first to market with a next-gen chip...
Tegra 3 is actually made on 40nm. Nvidia is still on TSMC's 40nm process and is migrating toward 28nm with desktop GPUs; it will eventually move to 28nm with the Tegra 3+.
I hate how people always say it's a bad thing that Apple didn't upgrade the GPU but just added more cores, or that Samsung didn't change the Mali-400 GPU. The fact is that the Mali and the SGX543MP2 were ahead when they were released. Now there is actual competition, like the Adreno 320 and Tegra 3/4, and a simple overclocked SGX or Mali chip is enough to keep up with it.
NZtechfreak said:
The Mali's performance is the same as the Tegra 3's in the graphics benchmarks I've run on my Note vs. my Prime and my One X...
The Mali-400/450 is a 2nd-generation GPU like the Tegra 2: only 44 million polygons/sec. My Adreno 205 does 41 million, and the Tegra 3 does 129 million.
Gameloft games at the end of 2012 will need 100 million...
The 3rd-generation Mali is the Mali T-604/640, and ARM says it has 500% the performance of previous Mali GPUs:
http://www.arm.com/products/multimedia/mali-graphics-hardware/mali-t604.php
The 500% figure assumes quad-core-optimised apps (only Tegra 3 will have those within two years), but it's 250% on dual-core...
As the Tegra 3 is equal to the T-604, the Mali-400 is pwned...
- 1st gen (Adreno 200, Mali 200/300, PowerVR SGX 520/530 & Tegra 1)
- 2nd gen (Adreno 205, Mali 400MP/450MP, PowerVR SGX 540/554 & Tegra 2)
- 3rd gen (Adreno 220/225/320, Mali T604/640, PowerVR G6200/G6430 & Tegra 3)
Sekhen said:
The Mali-400/450 is a 2nd-generation GPU like the Tegra 2: only 44 million polygons/sec. My Adreno 205 does 41 million, and the Tegra 3 does 129 million...
As the Tegra 3 is equal to the T-604, the Mali-400 is pwned...
Link to your numbers for the Tegra 3?
I have not used any device with a Mali-400. Sorry, mate~~
I think the Tegra 3 is better, but we have to wait for the 3.x kernel to solve the battery problem properly.
Sent from my HTC One X using xda app-developers app
The Mali-400 is good and strong, and the Tegra 3 might not be the fastest one there is, but it's the only one that gets the best-looking games. On top of that, the Tegra 3 Plus is coming soon, and then next year another one with DirectX and supposedly console-like performance. Look at what Nvidia does for desktops and just hope they keep the pace with mobile GPUs, and we will get there too. I don't really consider non-Tegra devices unless one amazes me with noticeably better power efficiency, or optimized games start coming out for it.
Would you buy a non-Nvidia, non-ATI graphics card for your PC?
schriss said:
The Mali-400 is good and strong, and the Tegra 3 might not be the fastest one there is, but it's the only one that gets the best-looking games...
Would you buy a non-Nvidia, non-ATI graphics card for your PC?
Exactly!
Given the choice, I would buy a Tegra device over anything else.

Tegra 3 Overclock..?

I'm loving my Yoga 11; however, at times I feel that Windows RT slows down, especially when multitasking. Since our Tegras are clocked at 1.3GHz while the same chip in Android devices runs at 1.5GHz, with overclocked kernels available that run at 1.8-2.0GHz, what are the chances we see this type of hack/development come to Windows RT? I'm not sure what security obstacles that would present, but I haven't seen much on this, so I don't even know if someone has looked into it or is actively working on a method to do so.
Thanks!
I have been thinking about this as well. I'm sure it can be done, but by whom? That's the question. I'm sure we can easily squeeze some more power out of our device. Good luck to whoever spearheads this.
ej_424 said:
I'm loving my Yoga 11; however, at times I feel that Windows RT slows down, especially when multitasking... What are the chances we see this type of hack/development come to Windows RT?
The Tegra isn't overclocked to 1.5GHz in Android devices. There are actually three models of the Tegra 3 at different clock speeds. The one used in the RT is the lowest model (1.2GHz), already overclocked to 1.3GHz. I believe the other models are 1.4GHz and 1.6GHz, with a few ROMs adding about a 100MHz overclock as needed. 2GHz seems extreme, though.
SixSixSevenSeven said:
The Tegra isn't overclocked to 1.5GHz in Android devices. There are actually three models of the Tegra 3 at different clock speeds...
I've thought about this as well but have always been too scared to ask. Windows is obviously not foreign to processor scaling and power management; perhaps there's a way to make a custom power plan or something. Maybe the way to approach overclocking is not 'like' Android, but 'like' regular old Windows. I have no idea and am a noob, but I thought I'd just toss that out there.
SixSixSevenSeven said:
The Tegra isn't overclocked to 1.5GHz in Android devices. There are actually three models of the Tegra 3 at different clock speeds...
http://www.nvidia.com/object/tegra-3-processor.html
Its Windows RT support is still listed as under development. The chip isn't overclocked on the Surface RT/VivoTab but underclocked, to compensate for the missing support for the fifth battery-saver core.
We should expect performance and battery life to get better as they iron this out :laugh:
Actually, for those who have had a Surface RT since launch... I bet most of you have already experienced better performance after each monthly firmware update.
LastBattle said:
http://www.nvidia.com/object/tegra-3-processor.html
Its Windows RT support is still listed as under development. The chip isn't overclocked on the Surface RT/VivoTab but underclocked...
That's very good news indeed; we should then probably be able to run the tablet as a 1.6GHz quad-core instead of the current 1.3GHz quad-core :good:
LastBattle said:
http://www.nvidia.com/object/tegra-3-processor.html
Its Windows RT support is still listed as under development. The chip isn't overclocked on the Surface RT/VivoTab but underclocked...
Nowhere in that link does it mention it being underclocked. The 1.4GHz single-core/1.3GHz quad-core clocks are a feature of the entire Tegra product line, not just the Surface RT.
It does mention that the 5th battery-saver core doesn't work on Windows RT, though; sorting that out will help.
Interesting: there is a "~MHz" key in regedit under HKEY_LOCAL_MACHINE -> HARDWARE -> DESCRIPTION -> System -> CentralProcessor -> 0, 1, 2, or 3. It is set to 1300, but changing it doesn't do anything, and it reverts upon reboot. (A small read-only sketch of the same lookup is below.)
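For anyone who wants to poke at that key programmatically, here's a minimal Python sketch using the standard winreg module (on a regular Windows desktop; stock RT won't run Python, so treat this as desktop-side experimentation):

import winreg

# Read the reported clock (in MHz) for CPU core 0 from the same registry path
# described above; cores 1-3 live in sibling subkeys. Reading is harmless, and
# as noted, writing the value doesn't stick anyway.
path = r"HARDWARE\DESCRIPTION\System\CentralProcessor\0"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
    mhz, _ = winreg.QueryValueEx(key, "~MHz")
print(f"Core 0 reported clock: {mhz} MHz")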
Even if we can't overclock this thing, is there a way to resurrect the "High Performance" power plan that disappeared in RT? One that would set the CPU to 100% by default, all the time?
Any update or more info on this?
bigsnack said:
Any update or more info on this?
+1
I hope to see a 'high performance' option in the power management as well, especially when we hook the RT up to mains power and battery life is not so much of an issue.
Rogerngks said:
I hope to see a 'high performance' option in the power management as well...
IIRC, you can still set your CPU states through powercfg on the command line. I might be wrong, though; something like the sketch below is what I mean.
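It's wrapped in Python just for clarity; the underlying powercfg aliases exist on desktop Windows 8, but whether RT's powercfg accepts them is untested, so treat this as an assumption:

import subprocess

# Pin the minimum and maximum processor state at 100% on the current power
# plan, then re-apply the plan. Needs an elevated prompt on desktop Windows;
# untested on Windows RT.
for setting in ("PROCTHROTTLEMIN", "PROCTHROTTLEMAX"):
    subprocess.run(["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
                    "SUB_PROCESSOR", setting, "100"], check=True)
subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)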
Is the 5th power-saving core just disabled, or is it not present on our hardware?
bigsnack said:
Is the 5th power-saving core just disabled, or is it not present on our hardware?
According to Nvidia's website, Tegra 3 support for RT is "still under development" (http://www.nvidia.com/object/tegra-3-processor.html). It also lists the chip as only being quad-core on Windows 8 devices.
I had personally really hoped that one of the highlights of RT 8.1 was going to be reworked support for the 5th core, bringing performance and battery-life improvements. Alas, it was not to be.
jtg007 said:
According to Nvidia's website, Tegra 3 support for RT is "still under development"... I had personally really hoped that one of the highlights of RT 8.1 was going to be reworked support for the 5th core...
I can't see how the 5th core would bring a performance improvement. The system cannot use the 5th core as an actual fifth core: it shuts most of the other cores down to sleep when it needs the 5th, which is also an incredibly low-performance core. It's really just for power saving, or for simply hopping around the UI and checking your email. Nvidia claims that Android can also play video while running purely on the 5th core, although this never happened on my Nexus 7 even without any other apps running; it carried on using one of the main cores for that.
It would definitely boost battery life, though, and that's not something to be ignored. But there are few times when that 5th core really comes into its own; perhaps it just wasn't worth the time for MS to add companion-core support to Windows RT 8.1 when not all RT tablets use the Tegra.
SixSixSevenSeven said:
I can't see how the 5th core would bring a performance improvement. The system cannot use the 5th core as an actual fifth core...
...perhaps it just wasn't worth the time for MS to add companion-core support to Windows RT 8.1 when not all RT tablets use the Tegra.
I always thought that the 5th core could run simultaneously with the other four to manage background tasks, etc., thus leaving less side work for the others. I could be wrong, though. Also, I know of only one RT tab that does NOT use a Tegra (Dell), and it was the first to drop in price and flop.
Anyway, the exciting thing about the kexec/Linux prospects is that if we get in, there are a lot of Android and Linux versions that run on the Tegra 3, which hopefully means we wouldn't have too tough a time getting that 5th core working.
Sent from my SCH-I535 using xda app-developers app
Well, the Samsung Ativ Tab RT also uses the S4 CPU, but that device had a limited release in North America from what it seems. I too was under the assumption that the 5th core could be used at the same time as the other cores, which could free up power for other things: the 5th core would handle the low-power tasks while the other four cores are used for a more process-heavy task.
It would be interesting to have Android or Linux running in a dual-boot situation on our RT devices, or, if it's even possible, to do what Samsung is doing and have it emulated in Windows so you can run apps side by side.
No, the 5th core is not an actual fifth core. The idea is that you have four full-blown cores at 1.2, 1.4, or 1.6GHz depending on the Tegra model (which can then automatically overclock to 1.3, 1.5, or 1.7GHz), and that's quite power hungry, really. But as CPU usage falls, the Tegra shuts cores off: if the system can't benefit from all four cores being active, it drops to three, then two, then one. Sometimes even that one core running at 1.2GHz is comparatively power hungry, so the Tegra shuts the final core down and fires up the companion core, which I think runs around the 700MHz range. It's slow at any rate, and it's built and optimised purely for power consumption over performance. The idea is that you get a full quad-core chip when you need the performance, but when the device is idling you can switch over to the companion core, shut all four main cores off, and save a lot of power.
Nvidia claims that the companion core, combined with the Tegra's hardware video acceleration, should be able to play HD video on its own. That doesn't really seem to happen outside the lab. But when you lock the screen on your Android device, it often jumps into companion-core mode; you can browse around the Android home screen and use a few lightweight apps on the companion core no problem, and when it does begin to struggle, the Tegra just skips over to a main core and gradually brings the other three main cores online as it needs them.
It never has the companion and main cores on in a state where the operating system can use them simultaneously, though. (A toy sketch of this switching policy follows below.)
Samsung's so-called octa-core chips do the same. They aren't really octa-core chips: in reality they are a quad-core Cortex-A15 and a lower-clocked quad-core Cortex-A9 (possibly even A7) on the same piece of silicon. When CPU load is high, it runs as a quad-core A15; when it doesn't need so much performance, it shuts down the A15 cluster and swaps to the other one. The two CPU clusters are nearly separate, and at any one time the chip is only running as a single quad-core processor, not an octa-core. Similar to the companion-core design, this can lead to a massive boost in battery life. In both modes the processor can shut down individual cores as needed.
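For what it's worth, here's a toy Python sketch of the switching policy just described. The thresholds, clocks, and cluster names are made up for illustration (the real decision lives in Nvidia's/Samsung's firmware and kernel code); the point is the shape: one cluster at a time, with hysteresis.

# Toy model of cluster migration: the OS only ever runs on ONE cluster at a
# time. Thresholds and behaviour are invented for illustration.
def pick_cluster(load_percent, current):
    if current == "companion" and load_percent > 70:
        return "main"       # wake the fast quad; the companion core goes off
    if current == "main" and load_percent < 15:
        return "companion"  # idle enough: hand over to the low-power core
    return current          # hysteresis: stay put between the thresholds

cluster = "companion"
for load in (5, 40, 90, 60, 10):
    cluster = pick_cluster(load, cluster)
    print(f"load {load:3d}% -> running on the {cluster} cluster")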
Tegra may well be the chip in all the main tablets, but when Microsoft first started working on Windows RT, there were meant to be Qualcomm Snapdragon, Nvidia Tegra, and Texas Instruments OMAP devices all coming to market, so of course Microsoft at the time needed RT to run on all three. The original plan was that there would be five 3rd-party manufacturers building RT tablets, two per chip vendor except TI. Originally Qualcomm partnered with HP and Samsung, Nvidia with Lenovo and Asus, and TI with Toshiba. In the end TI dropped out and shortly after downscaled OMAP production (I think it has completely stopped now with the exception of existing contracts, or at least the chips intended for tablet usage have; they had a few industrial chips under the OMAP branding that might still be available, and their ARM-based microcontroller and DSP lines are still going fine), and TI took Toshiba with them. Of course, by the time TI dropped out there were already running builds of RT. HP dropped out and was replaced by Dell. Acer was slated to join the program but didn't; when MS unveiled the Surface, that killed it for Acer.
Another limitation is that Windows RT is essentially just an ARM port of Windows 8, and Windows 8 and the NT kernel in general didn't already have support for the companion core or similar tech. It would be pointless adding it to the base NT kernel, as hardly any devices use it, and it would probably lead to issues introducing it only for Tegra.
Surely Microsoft can see that getting the maximum out of the CPUs in their own devices is a good thing? I get that they have to support a few ARM architectures, but there's no reason why Windows RT can't be optimised with a Surface-specific update.
bydandie said:
Surely Microsoft can see that getting the maximum out of the CPUs in their own devices is a good thing?...
It would be a maintenance nightmare. You know the way everyone complains and moans about the non-existent (or at the very least hugely exaggerated) Android fragmentation? Now apply that to Windows RT, an already struggling platform. You don't want to hand more ammo to the opposition, and the extra effort probably isn't worth it. Under sleep mode or single-core mode (non-companion; RT will happily scale back to a single non-companion core) the battery life is good enough. The companion core would be nice, but it's non-essential, and it would need to be supported at the kernel level. It would be a nightmare to keep one version of the kernel (if you don't know what a kernel is, think of it as the chassis of a car or the foundations of a house: the very core of the operating system) for each tablet.

[Q] Is Exynos worth buying?

The Snapdragon version isn't available in my country, so I will have to buy the Exynos one (pretty cheap right now, the equivalent of $500). The thing is, reviews say the Snapdragon version doesn't lag a bit, while the Exynos does for a device this large.
Is the performance really that bad? I'm not up to speed on Exynos right now, by the way.
No, it's not worth buying over the Snapdragon version. My S4 is faster than my Note...
Sent from my GT-I9505 using xda app-developers app
With HMP enabled there is no comparison between the two: the Exynos is up to 50% faster and potentially more efficient. With HMP disabled (as things currently are), the Qualcomm is the slightly better chip, but I'm not convinced that the difference is enough to prefer one SoC over the other...
In short, the Exynos 5420 is artificially neutered to seem worse than the Qualcomm, yet even so, going either way won't make much of a difference...
Do you have any benchmarks to back your claim of a 50% speed bump?
To the OP:
There are a lot of threads about Exynos vs. Snapdragon; long story short:
Exynos: a tad better CPU.
Snapdragon: a tad better GPU.
I've had both and ended up with the Exynos, because I didn't need 4G but needed 32GB (in Scandinavia, the 4G models seem to be 16GB only).
Lag was more or less the same.
I felt the battery life on the Exynos was a tad better.
They felt equally snappy when they needed to be.
BUT!!!
App support was a tad better on the Snapdragon, i.e. more apps in the Play Store worked with the Snapdragon version, a few more games, etc. It was no big deal for me, but it still ticked me off when I noticed a few apps I had bought weren't compatible (yet?!) with the new Exynos chip (but worked with my Sammy S3, also an Exynos chip, just older).
Exynos is fine. I've played with both, and from a UI and app-use perspective you can't tell the difference. The Adreno's a bit faster than the Mali, but not so much as to drastically alter performance. Some games are better optimized for Adreno, so depending on your choice of games it could make a difference. As for app compatibility, it's more likely the 2,560x1,600 display that's causing the issue, not the specific SoC. If there were huge differences between the Exynos and the S-800, or drastic app performance differences and compatibility issues, it would be all over the N3 forum, and it's not.
DeBoX said:
Do you have any benchmarks to back your claim of a 50% speed bump?
HMP for all 8 cores hasn't been released yet, but look at the Note 3 Neo: it uses two fewer large cores and posts the same AnTuTu score as our Note, so by adding two more large cores you can expect the score to be about 50% higher. As I said, that is only true where all 8 cores are used at the same time and are not throttled (that is why I said "up to").
Stevethegreat said:
Look at the Note 3 Neo: it uses two fewer large cores and posts the same AnTuTu score as our Note
Not really. It also has a 267 PPI display, which benefits its graphics scores in AnTuTu compared to the SGS4 at 441 PPI and the N3 at 386 PPI.
http://www.nairaland.com/1597298/samsung-budget-galaxy-note-neo
S-800 vs. Exynos on the N3...
BarryH_GEG said:
Not really. It also has a 267 PPI display, which benefits its graphics scores in AnTuTu compared to the SGS4
I was referring specifically to the CPU scores, which are the only ones that benefit from HMP.
I ran a quick AnTuTu (CPU) test on my Exynos 5420-equipped Note, and here are the results: http://i.imgur.com/zD32DZQ.png
Notice how remarkably similar they are to the Note Neo's CPU score:
http://www.gsmarena.com/showpic.php3?sImg=newsimg/14/01/sgn3n-leak/gsmarena_006.jpg&idNews=7538
Note that the Note Neo has only two large cores, clocked 10% lower than the Exynos 5420's, and it still posts almost the same score merely by employing the help of the small cores. Now add two more large cores and you'd get 50% more performance; it's simple math really (a worked version of that arithmetic is sketched below)...
Now, I'm not saying that's performance we would actually see on most occasions; it would either be throttled or not even supported by most apps, but it's still potentially there (which was my point in saying "up to").
What will *definitely* be there if HMP is enabled is better battery life, though, as it would make more efficient use of the small cores. Since the Exynos 5422 is also on 28nm yet has HMP enabled, I believe we lack HMP for strategic reasons (so that Samsung will sell more Exynos 5422 / Qualcomm-equipped machines).
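To make that explicit, here's the back-of-the-envelope version in Python. The per-core weights are invented purely for illustration (big core = 1.0, small A7 = 0.4); with weights in that neighbourhood you land in the 40-50% ballpark, which is where the "up to 50%" figure comes from.

# Back-of-the-envelope HMP arithmetic. The per-core weights are invented for
# illustration only; real scores depend on throttling and how apps thread.
LARGE, SMALL = 1.0, 0.4                        # assumed relative throughput per core

note_no_hmp   = 4 * LARGE                      # 4 big cores, A7s sit idle
neo_with_hmp  = 2 * 0.9 * LARGE + 4 * SMALL    # 2 big cores at -10% clock + 4 small
note_with_hmp = 4 * LARGE + 4 * SMALL          # all 8 cores working at once

print(f"Note 5420, no HMP: {note_no_hmp:.1f}   Neo with HMP: {neo_with_hmp:.1f}")
print(f"Note 5420 with HMP: {note_with_hmp:.1f} "
      f"(~{(note_with_hmp / note_no_hmp - 1):.0%} over no-HMP)")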
Stevethegreat said:
I was referring specifically to the CPU scores, which are the only ones that benefit from HMP...
What will *definitely* be there if HMP is enabled is better battery life, though, as it would make more efficient use of the small cores...
You can't divorce the impact of display area size and PPI from CPU performance. The GPU doesn't absolve the CPU of its role in graphics output. An i3 PC with a killer graphics card will perform worse graphically than an i7 PC with a lesser card, because most computational work (as opposed to rendering, texture mapping, vectoring, and decoding) is still done on the CPU. So I have no idea what AnTuTu tests to come up with a CPU rating in isolation, but if it's a real-time performance test, the CPU's role in graphics output is impacting it. So comparing the Neo with a 5.5" display and 267 PPI against the N10.1-14 with a 10.1" display and 299 PPI isn't going to get you a relevant CPU comparison. That's why I used the N3 and SGS4 as comparisons, because only the PPI is different. And the Neo would be well behind the SGS4 in the cumulative AnTuTu test if it had the same PPI, because the lower workload of the lower PPI is artificially enhancing its score. At the end of the day, an isolated CPU number is pretty meaningless. It's like bench horsepower in a car vs. horsepower at the wheels: a higher bench rating means nothing, because none of us drive an engine, we drive a car. The total AnTuTu number (AKA drivetrain loss) is more relevant, even though it doesn't support the point you're trying to make about HMP.
http://en.wikipedia.org/wiki/Graphics_processing_unit#Computational_functions
BarryH_GEG said:
You can't divorce the impact of display area size and PPI from CPU performance...
The total AnTuTu number (AKA drivetrain loss) is more relevant, even though it doesn't support the point you're trying to make about HMP.
Maybe so, but the benchmark in question runs off-screen. So while resolution matters in real life, in the AnTuTu CPU score, or SuperPi, etc., it doesn't. HMP will make the CPU 50% faster in multi-threaded operations; I never claimed it makes the whole machine faster by the same amount. For example, an HMP-equipped Note 2014 would score around 40,000 in AnTuTu, NOT 49,500. I don't see where we disagree; I merely think you misunderstood my initial claim.
If you care about real-world use, the Exynos Note is a wonderful tablet. If you live in the world of needing the highest Quadrant and AnTuTu scores, you should pass.
Sent via Tapatalk and my thumbs.
Stevethegreat said:
With HMP enabled there is no comparison between the two: the Exynos is up to 50% faster and potentially more efficient...
How did you enable HMP? My Snapdragon Note 3 is so much faster than my Note.
Sent from my SM-N900T using XDA Premium 4 mobile app
Stevethegreat said:
HMP for all 8 cores hasn't been released yet, but look at the Note 3 Neo: it uses two fewer large cores and posts the same AnTuTu score as our Note...
It will never be released for the Exynos 5420 either, unless Samsung wants a lot of complaints about fried Exynos 5420 chipsets. They have also already said they won't release HMP for the Exynos 5420 because of the heat.
dt33 said:
It will never be released for the Exynos 5420 either, unless Samsung wants a lot of complaints about fried Exynos 5420 chipsets...
Once again, that's not the reason they won't release it. If anything, the chip would run cooler, because more use of the A7 cores would be possible, and if all 8 cores were needed, Samsung could choose to throttle the thing. The reason they don't release it is the Exynos 5422, which is the same chip but with all 8 cores enabled (also 28nm)...
So no fried SoCs; lower profits, more like.

Best AMD processors for gaming

AMD has managed to become a solid competitor in the gaming CPU space. The latest Ryzen 5000 series processors, based on the Zen 3 architecture, are becoming the go-to choice for a lot of gamers around the world. The biggest advantage that the new Ryzen series offers over Intel is power efficiency. In fact, even the last-generation Ryzen 3000 series processors have proven to offer rock-solid performance with comparatively less power draw. Intel recently launched its new 11th-gen Rocket Lake-S series of desktop processors; however, it hasn't received a lot of positive feedback, primarily because the company continues to drag out its 14nm++ process.
If you are in the market for a new gaming CPU, then AMD is a pretty good choice. Let's check out the best AMD CPUs for gaming that you can buy today.
AMD Ryzen 5 5600X
The newly launched Ryzen 5 5600X is the best AMD Ryzen CPU to buy for gaming in 2021. It offers the best performance-to-value ratio, and thanks to AMD's Zen 3 architecture it draws less power than its Intel counterparts. The processor is highly recommended for all sorts of games, whether you want fast frame rates or a high-resolution experience.
Clock speeds: 3.7GHz – 4.6GHz
6-Cores, 12 Threads
32MB L3 Cache
PCIe 4.0
65W TDP
~$279
Buy from Amazon
AMD Ryzen 7 5800X
The new octa-core champion, the Ryzen 7 5800X takes on Intel's new 11th-gen Core i9-11900K. While both offer broadly similar performance, AMD is selling the 5800X at over $100 less than Intel's chip. That in itself is a huge point to consider, especially since the chip crisis has led to consumers hunting for products left, right, and center. Additionally, as with the 5600X, this one also draws comparatively less power thanks to the 7nm process.
Clock speeds: 3.8GHz – 4.7GHz
8-Cores, 16 Threads
32MB L3 Cache
PCIe 4.0
105W TDP
~$420
Buy from Amazon
AMD Ryzen 9 5900X
The latest top-of-the-line CPU offering from AMD in 2021, the 12-core Ryzen 9 5900X knocks Intel's latest Core i9-11900K, and even the 10-core Core i9-10900K from last year, out of the park in almost every single aspect. It not only offers a better performance package, but also manages power and thermals more efficiently thanks to the 7nm process. It is currently selling above AMD's suggested price, but it is totally worth it and should last you for years to come.
Clock speeds: 3.7GHz – 4.8GHz
12-Cores, 24 Threads
64MB L3 Cache
PCIe 4.0
105W TDP
~$549
Buy from Amazon
Best APU: AMD Ryzen 5 3400G
The chip crisis continues to haunt us, with most gamers unable to get hold of a new GPU. But if you are planning to build a budget gaming PC, then you should consider the Ryzen 5 3400G. Since it is an APU, it comes with integrated graphics that should be enough for 720p or 1080p gaming at low-to-mid settings. Additionally, both the CPU and GPU are unlocked, which means there is potential for tweaking them as well. AMD has announced that its latest APUs, the Ryzen 7 5700G and Ryzen 5 5600G, will hit stores in August. These are going to be much better than the 3400G, so hold on to your money if you can.
Clock speeds: 3.7GHz – 4.2GHz
4-Cores, 8 Threads
4MB L3 Cache
PCIe 3.0
Radeon RX Vega 11 Graphics
65W TDP
~$149
Buy from Amazon
These are currently the best AMD processors for gaming, and while you might point out that there is also the Ryzen 9 5950X, that would just be overkill for a gaming rig. For a balanced setup, go for the Ryzen 5 5600X if your main purpose is gaming alone; if you plan to game alongside tasks like streaming and video rendering, then get the Ryzen 7 5800X, or the 5900X if your budget allows. (A quick value comparison based on the specs above follows below.)
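If you want to eyeball the value math, here is a small Python sketch built only from the core counts, prices, and TDPs quoted in this article (approximate street prices, so take the ratios loosely):

# Quick value comparison using only the specs and approximate prices quoted
# in this article (mid-2021).
chips = [
    ("Ryzen 5 5600X",  6, 12, 279,  65),
    ("Ryzen 7 5800X",  8, 16, 420, 105),
    ("Ryzen 9 5900X", 12, 24, 549, 105),
    ("Ryzen 5 3400G",  4,  8, 149,  65),
]
for name, cores, threads, usd, tdp_w in chips:
    print(f"{name}: ${usd / cores:.0f} per core, "
          f"${usd / threads:.0f} per thread, {tdp_w}W TDP")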
Recently launched, the Ryzen 5 5600G and 5700G are now the better APUs.
The 5800X is the best for gaming, giving better scores than the 5600X, and it's a single CCX (8 cores), so it has the lowest latency.

Best AMD processors for performance

The past couple of years have seen AMD gain a better grip on the CPU market with its Ryzen series. While the Ryzen 3000 series of processors competed strongly against Intel last year, the latest generation has become the favorable choice of many thanks to its excellent performance. Gamers, PC-building enthusiasts, and even professionals prefer the Ryzen 5000 series over Intel. One of the reasons for that is AMD's Zen 3 architecture, based on the 7nm node, whereas Intel has been stuck on its 14nm process for the past six years.
Let's check out the best AMD CPUs for performance.
AMD Ryzen 9 5950X
AMD continues to offer high-end desktop (HEDT)-class performance to mainstream users with the Ryzen 9 5950X. Featuring 16 cores and 32 threads, it is one of the most powerful processors from the company. It isn't affordable by any means, especially when you look at the $799 price tag, but compared to other competitive HEDT processors this is actually a really good price. If you don't want to jump over to the Threadripper series, then this is your best bet.
Clock speeds: 3.4GHz – 4.9GHz
16-Cores, 32 Threads
64MB L3 Cache
PCIe 4.0
105W TDP
~$920
Buy from Amazon
AMD Ryzen 9 5900X
Sitting below the 5950X is the 12-core Ryzen 9 5900X, which gives Intel's latest Core i9-11900K a run for its money. It's an incredibly powerful processor for gaming and creative workloads, and at the same time it manages power and thermals more efficiently thanks to the 7nm process. The processor delivers more performance per watt than the 8-core 11900K. The only issue is that the Ryzen 9 5900X is difficult to get hold of and is currently selling above AMD's suggested price.
Clock speeds: 3.7GHz – 4.8GHz
12-Cores, 24 Threads
64MB L3 Cache
PCIe 4.0
105W TDP
~$680
Buy from Amazon
AMD Ryzen 7 5800X
It is neck and neck when comparing the Ryzen 7 5800X with Intel's Core i7-11700K. While it is slightly more expensive than the Intel counterpart, it's worth paying extra, as it offers faster gaming performance and almost the same performance in multi-core CPU tasks. There is also the additional benefit of the 5800X's lower power consumption, which means it can reach its full performance potential even on less expensive motherboards.
Clock speeds: 3.8GHz – 4.7GHz
8-Cores, 16 Threads
32MB L3 Cache
PCIe 4.0
105W TDP
~$400
Buy from Amazon
AMD Ryzen 9 5980HX
The newly launched AMD Ryzen 9 5980HX laptop CPU is part of AMD's 5000 series 'Cezanne' generation and is targeted at high-performance laptops. The octa-core processor comes with a base clock of 3.3GHz and a boost clock of 4.8GHz. The TDP is rated at 45W, which is quite impressive for a processor this powerful. According to AMD, thanks to the Zen 3 architecture the new 5000 series has made significant leaps in IPC over the previous generation, with an average gain of 19 percent.
Clock speeds: 3.3GHz – 4.8GHz
8-Cores, 16 Threads
16MB L3 Cache
PCIe 4.0
35-45W TDP
Still a beast of a CPU in 2022.
