Intel’s Tiger Lake 11th Gen Core i7-1185G7 Review and Deep Dive: Baskin’ for the Exotic
by Dr. Ian Cutress & Andrei Frumusanu on September 17, 2020 9:35 AM EST
The big notebook launch for Intel this year is Tiger Lake, its upcoming 10nm platform designed to pair a new graphics architecture with a nice high frequency for the performance that customers in this space require. Over the past few weeks, we’ve covered the microarchitecture as presented by Intel at its latest Intel Architecture Day 2020, as well as the formal launch of the new platform in early September. The missing piece of the puzzle was actually testing it, to see if it can match the very competitive platform currently offered by AMD’s Ryzen Mobile. Today is that review, with one of Intel’s reference design laptops.
Like a Tiger Carving Through The Ice
The system we have to hand is one of Intel’s Reference Design systems, which is very similar to the Software Development System (SDS) we tested for Ice Lake last year. The notebook we were sent was built in conjunction with one of Intel’s OEM partners, and is meant to act as an example system to other OEMs. This is slightly different to the software development system, which was mainly for the big company software developers (think Adobe) for code optimization, but the principle is still the same: a high powered system overbuilt for thermals and strong fans. These systems aren’t retail, and so noise and battery life aren’t part of the equation of our testing, but it also means that the performance we test should be some of the best the platform has to offer.
Our reference design review sample implements Intel’s top tier Tiger Lake ‘Core 11th Gen’ processor, the Core i7-1185G7. This is a quad core processor with hyperthreading, offering eight threads total. This processor also has the full-sized new Xe-LP graphics, with 96 execution units running up to 1350 MHz.
I haven’t mentioned the processor frequency or the power consumption, because for this generation Intel has decided to offer its mobile processors with a range of supported speeds and feeds. To complicate the issue, Intel is only publicly offering it in the min-max form, whereas those of us who are interested in the data would much rather see a sliding scale.
|Intel Core i7-1185G7 'Tiger Lake'||
|Base Frequency at 12 W||1200 MHz|
|Base Frequency at 15 W||1800 MHz|
|Base Frequency at 28 W||3000 MHz|
|1C Turbo up to 50 W||4800 MHz|
|All-core Turbo up to 50 W||4300 MHz|
|L2 Cache||1.25 MB per core|
|L3 Cache||12 MB|
|Graphics||96 Execution Units, 1350 MHz Turbo|
|Memory Support||32 GB LPDDR4X-4266 or 64 GB DDR4-3200|
In this case, the Core i7-1185G7 will be offered to OEMs with thermal design points (TDPs) from 12 W to 28 W. An OEM can choose the minimum, the maximum, or something in-between. One annoying consequence is that, as a user, you will not be able to tell which was chosen without equipment to measure CPU power, as OEMs do not give resellers this information when promoting their notebooks.
This reference design has been built to support the full range, so in effect it is a 28 W design for peak performance, with enough cooling to avoid any thermal issues.
At 12 W, Intel lists a base frequency of 1.2 GHz, while at 28 W, Intel lists a base frequency of 3.0 GHz. Unfortunately Intel does not list the value that we think is most valuable – 15 W – which would enable fairer comparisons with the previous generation Intel hardware as well as the competition. After testing the laptop, we can confirm that the 15 W value as programmed into the silicon (so we’re baffled why Intel wouldn’t tell us) is 1.8 GHz.
In both 12 W and 28 W scenarios, the processor can turbo up to 4.8 GHz on one core / two threads. This system was built so that thermals and power would not be an issue, and so the CPU can boost to 4.8 GHz in both modes. Not only that, but power consumption while in the turbo modes is limited to 55 W for any TDP setting. The turbo budget for the system increases with the thermal design point of the processor, and so when in 28 W mode, it will also turbo for longer. We observed this in our testing, and you can find the results in the power section of this review.
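To make the ‘turbo budget’ idea concrete, here is a toy energy-bucket model in Python. This is not Intel’s actual power controller (the real hardware polices an exponentially weighted moving average of package power), and the PL2 and tau values are illustrative assumptions, but it shows why a higher sustained TDP also buys a longer turbo duration even when the peak boost is identical.

```python
def simulate_turbo(pl1, pl2, tau, seconds):
    """Toy turbo-budget model: the package may draw pl2 watts while
    banked energy remains; the bank refills at pl1 watts per second
    and is capped at pl1 * tau joules."""
    budget = pl1 * tau            # joules available for bursting
    trace = []
    for _ in range(seconds):
        power = pl2 if budget > 0 else pl1
        budget = min(budget + pl1 - power, pl1 * tau)
        trace.append(power)
    return trace

# Same turbo power limit, different sustained TDPs (values illustrative):
t15 = simulate_turbo(pl1=15, pl2=50, tau=28, seconds=60)
t28 = simulate_turbo(pl1=28, pl2=50, tau=28, seconds=60)
print(t15.count(50), t28.count(50))  # the 28 W mode sustains turbo ~3x longer
```

The real silicon drains and refills continuously against a moving average rather than a hard bucket, but the qualitative behaviour matches what we measured: identical peak boost in both modes, with the 28 W configuration holding it for considerably longer.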
The Reference Design
Intel sampled its Reference Design to a number of the press for testing. We had approximately 4 days with the device before it had to be handed back, enough to cover some key areas such as best-case performance on CPU and GPU, microarchitectural changes to the core and cache structure, and some industry standard benchmarks.
There were some caveats and pre-conditions to this review, similar to our initial Ice Lake development system test, because this isn’t a retail device. The fans were fully on and the screen was set to a fixed brightness. Intel also requested no battery life testing, because the system hasn’t been optimized for power in the way a retail device would be - however, as we only had a 4-day review loan, battery life testing wasn’t possible anyway. Intel also requested no photography of the inside of the chassis, because again this isn’t an optimized retail device. The silicon photographs you see in this review have been provided by Intel.
When Intel’s regional PR teams started teasing the reference design on Twitter (e.g. UK, FR), I initially thought this was an Honor-based system due to the blue chamfered bezel, like the MagicBook I reviewed earlier in the year. This isn’t an Honor machine, but rather one from one of the bigger OEMs known for its mix of business and gaming designs.
Large keypad, chiclet style keys, and a 1080p display. For ports, this design only has two Type-C, both of which can be used for power or DisplayPort-over-Type C. The design uses the opening of the display to act as a stand for the main body of the machine.
On the back is a big vent for the airflow in. Under the conditions of the review sample we’re not able to take pictures of the insides, however it’s clear that this system was built with an extra dGPU in mind. Intel wasn’t able to comment on whether the OEM it partnered with will use this as a final design for any of its systems, given some of the extra elements added to the design to enable its use as a reference platform.
For the full system build, it was equipped with Intel’s AX201 Wi-Fi 6 module, as well as a PCIe 3.0 x4 Samsung SSD.
|Intel Reference Design: Tiger Lake||
|CPU||Intel Core i7-1185G7, Four Cores / Eight Threads; 1200 MHz Base at 12 W; 1800 MHz Base at 15 W; 3000 MHz Base at 28 W; 4800 MHz 1C Turbo up to 50 W; 4300 MHz nT Turbo up to 50 W|
|GPU||Integrated Xe-LP Graphics, 96 Execution Units, up to 1350 MHz|
|DRAM||16 GB of LPDDR4X-4266 CL36|
|Storage||Samsung 1 TB NVMe PCIe 3.0 x4 SSD|
|Display||14-inch 1920x1080, Fixed Brightness|
|IO||Two Type-C ports, supporting Charge and DP over Type-C|
|Wi-Fi||Intel AX201 Wi-Fi 6 CNVi RF Module|
|Power Modes||15 W, no Adaptix; 28 W, no Adaptix; 28 W, with Adaptix|
The first devices to market with the Core i7-1185G7 will have either LPDDR4X-4266 (up to 32 GB) or DDR4-3200 (up to 64 GB). Intel has advertised these chips as also supporting LPDDR5-5400, and we confirmed with the engineers that this initial silicon revision is built for LPDDR5; however, it is still in the process of being validated. Coupled with the high cost of LPDDR5, Intel expects LP5 systems a bit later in the product life cycle, probably in Q1 2021.
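For context on why LPDDR5 matters, peak theoretical bandwidth scales directly with the transfer rate. A quick back-of-the-envelope calculation, assuming the usual 128-bit mobile memory bus (decimal GB/s):

```python
def peak_bw_gbs(mt_per_s, bus_bits=128):
    """Peak DRAM bandwidth: transfers/s x bytes moved per transfer."""
    return mt_per_s * (bus_bits // 8) / 1000

ddr4 = peak_bw_gbs(3200)  # DDR4-3200    -> 51.2 GB/s
lp4x = peak_bw_gbs(4266)  # LPDDR4X-4266 -> ~68.3 GB/s
lp5  = peak_bw_gbs(5400)  # LPDDR5-5400  -> 86.4 GB/s
```

In other words, a validated LP5 option would be worth roughly a quarter more peak bandwidth than the LPDDR4X configuration tested here - a meaningful uplift for a 96 EU integrated GPU that feeds from the same memory controller.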
On storage: Tiger Lake technically supports PCIe 4.0 x4 from the processor. This can be used for a GPU or SSD, but Intel sees it mostly for fast storage. Given the prevalence of PCIe 4.0 SSDs on the market already, it was curious to see the reference designs without a corresponding PCIe 4.0 drive. Intel’s official reason for not equipping the system with such a drive was along the lines of ‘they’ve not been in the market for long and so we weren’t able to validate in time’. This is immediately and painfully laughable – PCIe 4.0 x4 enabled drives, built on Phison’s E16 controller, have been in the market for six months. We reported on them last year at Computex. To be clear, Intel’s argument here isn’t simply that it didn’t have enough time to validate it, it is the combination of validation time plus the argument that the drives haven’t been out in the market long enough for validation. This is wrong. If the drives had only been in the market for 6-8 weeks, perhaps I might agree with them, but to say it when the drives have been out for 24+ weeks amazes me.
The real reason this system doesn’t have a PCIe 4.0 x4 drive is that the E16 drives are too power hungry. The E16 is based on Phison’s E12 PCIe 3.0 SSD controller, but with the PCIe 3.0 interface swapped for PCIe 4.0, without much adjustment to the compute side of the controller or the efficiency point of the silicon. As a result, E16-based drives can draw up to 8 W for a peak throughput of 5 GB/s. A properly designed, from-the-ground-up PCIe 4.0 x4 drive should be able to reach 8 GB/s at theoretical peak, preferably in that 2-4 W window.
Adding an 8 W PCIe 4.0 SSD to a notebook, as we’ve said since they were launched, is a bad idea. Most laptops don’t have the cooling requirements for such a power hungry SSD, causing hot spots and thermal overrun, but also the effect on battery life would be easily noticeable. If Intel had said that ‘current PCIe 4.0 x4 drives on the market aren’t suitable due to the high power consumption of current solutions, however future drives will be much more suitable’, I would have agreed with them as a valid reason for not using one in the reference design. It makes sense – it certainly makes more sense than the reason first given about not being in the market long enough for validation.
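Putting numbers on that efficiency gap, using the figures above (taking the 3 W midpoint of the 2-4 W window as our assumption):

```python
def gbs_per_watt(gb_per_s, watts):
    """Peak throughput per watt for an SSD at full load."""
    return gb_per_s / watts

e16_class = gbs_per_watt(5.0, 8.0)  # E16-based drive: 0.625 GB/s per watt
ground_up = gbs_per_watt(8.0, 3.0)  # ground-up PCIe 4.0 design: ~2.67 GB/s per watt
```

A clean-sheet design would be over four times as efficient at peak throughput, which in a thin chassis is the difference between a tolerable SSD and a hot spot that eats into battery life.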
Beyond all this, by the time Tiger Lake notebooks come to market, new drives built on Phison’s E18 and Samsung’s Elpis PCIe 4.0 controllers are likely to be available. Whether these will be available in sufficient numbers for notebook deployment would be an interesting question, and so we are likely to see a mix of PCIe 3.0 and PCIe 4.0 enabled NVMe SSDs. I’m hopeful the OEMs and resellers will identify which are being used at the point of sale, or offer different SKU variants between PCIe 3.0 and PCIe 4.0, but I wouldn’t put money on it.
Priority on Power
Normal operation on a notebook is for the processor to be offered at a specific thermal design point, and any changes to the power plan in the operating system will affect how long the system uses its turbo mode, or requirements to enter higher power states. This is because most notebooks are built to be optimized around that single thermal design point.
In our Ice Lake development system (and in a few select OEM designs, like the Razer Stealth), the power slider while in the ‘Balanced’ power mode allowed us to choose between a 15 W power mode and a 25 W power mode, adjusting the base frequency (and subsequently the turbo budget) of the processor. The chassis was built for the higher power modes, and it allowed anyone using the development system to see the effect of the performance between the two thermal design points.
For our Tiger Lake reference design, we have a similar adjustment at play. The power slider can choose either a 15 W mode or a 28 W mode (note that this differs from the 12 W to 28 W range that Intel’s Tiger Lake is meant to offer - the omission of a 12 W option is odd, but the 15 W mode did let us run direct 15 W to 15 W comparisons). There is also a third option: 28 W with Intel’s Dynamic Tuning enabled, also known as Adaptix.
Intel’s Dynamic Tuning/Adaptix is a way for the system to more carefully manage turbo power and power limits based on the workload at hand. With Adaptix enabled, the idea is that the power can be more intelligently managed, giving a longer turbo profile, as well as a better all-core extended turbo where the chassis is capable. Intel has always stated that Adaptix is an OEM-level optimization, and it wasn’t enabled in our Ice Lake testing system due to that system not being optimized in the same way.
However, for our Tiger Lake system it has been enabled - at least in the 28 W mode. Technically Adaptix could be enabled at any thermal design point, even at 12 W, and in all cases it should offer better performance in line with what the chassis can provide and what the OEM deems safe. It remains an OEM-enabled optimization tool, and Intel believes that the 28 W with Adaptix mode on the reference design should showcase Tiger Lake in its best light.
More info later in the review.
As a first look at Tiger Lake’s performance, our goal with this review is to confirm the claims Intel has made. The new platform has new features, and Intel has promoted its performance against the competition and previous generation. We’ll also go into microarchitectural details.
Page two will be a brief primer on the fundamental updates on Tiger Lake: the transition to 10nm ‘SuperFin’ technology, the enhanced frequency, and the graphics. We’ll also cover the core as compared to Ice Lake, as well as the SoC level changes such as cache and updated hardware blocks.
We’ll then move onto the new data. Page three will cover the minor changes in the core when it comes to instructions, as well as updates to security. We’ll also cover cache performance, latency, and a key part of modern computing in frequency ramping on page four.
For the power consumption part of the coverage, I’m going to split it into two brackets: how Intel compares to its own previous generation at 15 W, then the difference between a 15 W Tiger Lake and a 28 W Tiger Lake, which is going to be a running theme throughout this review.
In Intel’s own announcement for Tiger Lake, the company pitted the 28 W version of Tiger Lake against the best power and thermal setting on an AMD 15 W processor; we’re going to see if those performance comparisons actually hold water, or if it’s simply a diversionary tactic to show Intel has the upper hand by using almost 2x the power.
We’ll also cover our CPU gaming benchmark suite, tested at both 1080p maximum as well as 720p minimum. Intel made big claims about its new Xe-LP graphics architecture against AMD, so we will see how these measure up, both in 15 W Tiger Lake and 28 W Tiger Lake modes.
- Tiger Lake: Playing with Toe Beans
- 10nm SuperFin, Willow Cove, Xe, and new SoC
- New Instructions and Updated Security
- Cache Performance, Core-to-Core Latency, and Frequency Ramping
- Power Consumption: Comparing 15 W TGL to 15 W ICL
- Power Consumption: Comparing 15 W TGL to 28 W TGL
- CPU Performance: SPEC 2006, SPEC 2017
- CPU Performance: Office and Web
- CPU Performance: Simulation and Science
- CPU Performance: Encoding and Rendering
- CPU Performance: Legacy and Synthetic
- Xe-LP GPU Performance: Borderlands 3, Gears Tactics
- Xe-LP GPU Performance: Final Fantasy XIV, Final Fantasy XV
- Xe-LP GPU Performance: Civilization 6, Deus Ex Mankind Divided
- Xe-LP GPU Performance: World of Tanks, Strange Brigade
- Conclusion: Is Intel Smothering AMD in Sardine Oil?
Comments
blppt - Saturday, September 26, 2020
Sure, the box sitting right next to my desk doesn't exist. Nor the 10 or so AMD cards I've bought over the past 20 years.
2 7970s (for CFX)
1 Sapphire 290x (BF4 edition, ridiculously loud under load)
2 XFX 290s (much better cooler than the BF4 290x; mistakenly bought when I thought they would accept a flash to 290x, but got the wrong builds; for CFX)
2 290x 8gb sapphire custom edition (for CFX, much, much quieter than the 290x)
1 Vega 64 watercooled (actually turned out to be useful for a Hackintosh build)
1 5700xt stock edition
Yeah, I just made this stuff up off the top of my head. I guarantee I've had more experience with AMD videocards than the average gamer. Remember the separate CFX CAP profiles? I sure do.
So please, tell me again how I'm only an Nvidia owner.
Santoval - Sunday, September 20, 2020
If the top-end Big Navi is going to be 30-40% faster than the 2080 Ti then the 3080 (and later on the 3080 Ti, which will fit between the 3080 and the 3090) will be *way* beyond it in performance, in a continuation of the status quo of the last several graphics card generations. In fact it will be even worse this generation, since Big Navi needs to be 52% faster than the 2080 Ti to even match the 3070 in FP32 performance.
Sure, it might have double the memory of the 3070, but how much will that matter if it's going to be 15 - 20% slower than a supposed "lower grade" Nvidia card? In other words "30-40% faster than the 2080 Ti" is not enough to compete with Ampere.
By the way, we have no idea how well Big Navi and the rest of the RDNA2 cards will perform in ray-tracing, but I am not sure how that matters to most people. *If* the top-end Big Navi has 16 GB of RAM, it costs just as much as the 3070 and is slightly (up to 5-10%) slower than it in FP32 performance but handily outperforms it in ray-tracing performance then it might be an attractive buy. But I doubt any margins will be left for AMD if they sell a 16 GB card for $500.
If it is 15-20% slower and costs $100 more, no one but those who absolutely want 16 GB of graphics RAM will buy it; and if the top-end card only has 12 GB of RAM, there goes the large memory incentive as well.
Spunjji - Sunday, September 20, 2020
@Santoval, why are you speaking as if the 3080's performance characteristics are not already known? We have the benchmarks in now.
More importantly, why are you making the assumption that AMD need to beat Nvidia's theoretical FP32 performance when it was always obvious (and now extremely clear) that it has very little bearing on the product's actual performance in games?
The rest of your speculation is knocked out of whack by that. The likelihood of an 80 CU RDNA 2 card underperforming the 3070 is nil. The likelihood of it underperforming the 3080 (which performs like twice a 5700, non-XT) is also low.
Byte - Monday, September 21, 2020
Nvidia probably has a good idea how it performs with access to PS5/Xbox; they knew they had to be aggressive this round with clock speeds and pricing. As we can see, the 3080 is almost maxed out, with o/c headroom like that of AMD chips, and the price is reasonably decent, in line with 1080 launch prices before the minepocalypse.
TimSyd - Saturday, September 19, 2020
Ahh, don't ya just love the fresh smell of TROLL
evernessince - Sunday, September 20, 2020
The 5700XT is RDNA1 and it's 1/3rd the size of the 2080 Ti. 1/3rd the size and only 30% less performance. Now imagine a GPU twice the size of the 5700XT, thus having twice the performance. Now add in the node shrink and new architecture.
I wouldn't be surprised if the 6700XT beat the 2080 Ti, let alone AMD's bigger Navi 2 GPUs.
Cooe - Friday, December 25, 2020
Hahahaha. "Only matching a 2080 Ti". How's it feel to be an idiot?
tipoo - Friday, September 18, 2020
I'd again ask you why a laptop SoC would have an answer for a big GPU. That's not what this product is.
dotjaz - Friday, September 18, 2020
"This Intel Tiger" doesn't need an answer for Big Navi, no laptop chip needs one at all. Big Navi is 300W+, no way it's going in a laptop.
RDNA2+ will trickle down to mobile APUs eventually, but we don't know if Van Gogh can beat TGL yet. I'm betting not, because it's likely a 7-15W part with a weaker quad-core Zen 2.
A proper RDNA2+ APU won't be out until 2022/Zen 4. By then Intel will have the next-gen Xe.
Santoval - Sunday, September 20, 2020
Intel's next gen Xe (in Alder Lake) is going to be a minor upgrade to the original Xe. Not a redesign, just an optimization to target higher clocks. The optimization will largely (or only) happen at the node level, since it will be fabbed with second gen SuperFin (formerly 10nm+++), which is supposed to be (assuming no further 7nm delays) Intel's last 10nm node variant.
How well that will work, and thus how well 2nd gen Xe will perform, will depend on how high Intel's 2nd gen SuperFin will clock. At best, 150-200 MHz higher clocks can probably be expected.