Power Management Device Latencies Measurement

=PM device latency constraint measurements=

Introduction
To correctly implement device latency constraint support, accurate measurements of the system low power mode overheads are needed:
 * The total time taken for a device to become accessible again, i.e. the time for the device to wake up from a given low power mode.
 * This includes turning the clocks on, bringing the clock domain out of inactive, and bringing the power domain out of the RET or OFF state (with context restore).
 * This constraint mainly governs the deepest device idle state (clocks cut, clock domain inactive, power domain in RET or OFF) acceptable to the device at any given time.

This wiki page details the measurement setup and the results. The latency data is to be fed into the latency constraints patches.

Kernel patches & build
Some kernel changes are required for the kernel instrumentation. The patches and config are attached to this page.


 * Starting point: linux-omap master branch as of Sep 2 2011.


 * GPIO instrumentation
 * e27b7a5dbb8cbc126b332e7e89b4e01e3d0aa286 OMAP3: Add HW tracing code


 * GPT instrumentation
 * c8ae9658b20f76ce2eb69d796b400668dce6339a OMAP3: Use GPT12 timer for low level PM instrumentation

Changes: enable IDLE, DSS for Beagle, Initramfs Busybox root FS
 * Kernel config for Beagleboard

HW traces details
The trace points are connected on a Beagleboard rev B7.


 * Trace A: on the USER button, at the connection to R36. This signal is the system wake-up event. The trigger is set on the rising edge of the signal.
 * Trace B: USR1 LED (GPIO_149). This signal is set at the end of omap_sram_idle, along with trace_power_start(POWER_WAKEN, 7, smp_processor_id);. This allows synchronizing the time base between the HW and the SW traces.

!Warning! The HW power supplies and external clocks are not cut off in this configuration (no support for System OFF in l-o), so the measured HW latencies are lower than expected. The HW measurements must be redone as soon as l-o supports System OFF. The measurements from TI are used for the real HW latency.

Here are some scope screenshots showing the time delta between the wake-up event (USER button press, trace A) and the end of omap_sram_idle (USR1 LED).

For RET mode, showing a delta of 408us:

For OFF mode, showing a delta of 2700us:

GPT tracer
Since GPT12 is used as a wake-up source from idle mode, it can be used to track the timings during the wake-up sequence. A patch is needed to let the timer keep counting after it has overflowed and woken up the system.

The GPT runs on the 32 kHz clock, so the resolution is limited to 30.518 us. Given the latencies to measure for OFF mode, this resolution is acceptable.
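The resolution figure follows directly from the 32 kHz clock rate. Here is a minimal Python sketch of the tick-to-microseconds conversion (illustrative only, not part of the instrumentation patches):

```python
GPT_CLK_HZ = 32768  # GPT12 is clocked from the 32 kHz clock

def ticks_to_us(ticks):
    """Convert a GPT12 counter delta into microseconds."""
    return ticks * 1_000_000 / GPT_CLK_HZ

# One tick is the best achievable resolution: ~30.518 us
resolution_us = ticks_to_us(1)
```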

4 GPT measurements are performed during the wake-up:
 * At the wake-up event, when the GPT overflows and the counter value is 0,
 * When the WFI instruction completes, before the MPU context restore code (in ASM),
 * At the same times as SW tracers 1 and 7, which allows synchronizing the HW and SW tracers.

SW trace usage
Enable the power events and dump the trace:
 * 1) echo 1 > /debug/tracing/events/power/enable
 * 2) cat /debug/tracing/trace_pipe &

Enable the system idle in RET mode:
 * 1) echo 5 > /sys/devices/platform/omap/omap-hsuart.0/sleep_timeout
 * 2) echo 5 > /sys/devices/platform/omap/omap-hsuart.1/sleep_timeout
 * 3) echo 5 > /sys/devices/platform/omap/omap-hsuart.2/sleep_timeout


 * 1) echo 0 > /debug/pm_debug/enable_off_mode
 * 2) echo 1 > /debug/pm_debug/sleep_while_idle

Trace output:

 [  62.311462] *** GPT12 wake-up (HW wake-up, ASM restore, delta trace1-7): 183, 0, 244 us   => Dump of GPT timing deltas
 <idle>-0  [000]  62.241608: power_start: type=1 state=1 cpu_id=0    => Idle start
 <idle>-0  [000]  62.241608: power_start: type=4 state=1 cpu_id=0    => First suspend SW trace in omap_sram_idle
 <idle>-0  [000]  62.241638: power_start: type=4 state=2 cpu_id=0    => ...
 <idle>-0  [000]  62.241669: power_start: type=4 state=3 cpu_id=0
 <idle>-0  [000]  62.241699: power_domain_target: name=neon_pwrdm state=1 cpu_id=0
 <idle>-0  [000]  62.241699: power_start: type=4 state=4 cpu_id=0
 <idle>-0  [000]  62.241699: clock_disable: name=uart3_fck state=0 cpu_id=0
 <idle>-0  [000]  62.241730: power_start: type=4 state=5 cpu_id=0
 <idle>-0  [000]  62.241730: clock_disable: name=uart1_fck state=0 cpu_id=0
 <idle>-0  [000]  62.241730: clock_disable: name=uart2_fck state=0 cpu_id=0
 <idle>-0  [000]  62.241760: power_start: type=4 state=6 cpu_id=0
 <idle>-0  [000]  62.241760: power_start: type=4 state=7 cpu_id=0
 <idle>-0  [000]  62.241760: power_start: type=4 state=8 cpu_id=0    => Last suspend SW trace in omap_sram_idle
 <idle>-0  [000]  62.311188: power_start: type=5 state=1 cpu_id=0    => First resume SW trace in omap_sram_idle
 <idle>-0  [000]  62.311188: power_start: type=5 state=2 cpu_id=0    => ...
 <idle>-0  [000]  62.311188: power_start: type=5 state=3 cpu_id=0
 <idle>-0  [000]  62.311188: power_start: type=5 state=4 cpu_id=0
 <idle>-0  [000]  62.311218: clock_enable: name=uart1_fck state=1 cpu_id=0
 <idle>-0  [000]  62.311310: clock_enable: name=uart2_fck state=1 cpu_id=0
 <idle>-0  [000]  62.311310: power_start: type=5 state=5 cpu_id=0
 <idle>-0  [000]  62.311340: clock_enable: name=uart3_fck state=1 cpu_id=0
 <idle>-0  [000]  62.311340: power_start: type=5 state=6 cpu_id=0
 <idle>-0  [000]  62.311432: power_start: type=5 state=7 cpu_id=0    => Last resume SW trace in omap_sram_idle
 <idle>-0  [000]  62.311462: power_end: cpu_id=0                     => Idle end
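The GPT12 dump line in the trace can be post-processed on the host. A small hypothetical helper (the regex mirrors the instrumentation output shown above):

```python
import re

# Matches the instrumentation dump, e.g.:
# *** GPT12 wake-up (HW wake-up, ASM restore, delta trace1-7): 183, 0, 244 us
GPT_RE = re.compile(r"GPT12 wake-up \(HW wake-up, ASM restore, "
                    r"delta trace1-7\): (\d+), (\d+), (\d+) us")

def parse_gpt12(line):
    """Return (hw_wakeup_us, asm_restore_us, trace1_7_us), or None."""
    m = GPT_RE.search(line)
    return tuple(int(g) for g in m.groups()) if m else None

line = ("[  62.311462] *** GPT12 wake-up (HW wake-up, ASM restore, "
        "delta trace1-7): 183, 0, 244 us")
hw, asm_restore, sw = parse_gpt12(line)
total_us = hw + asm_restore + sw  # overall wake-up time seen by GPT12
```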

Enable the system idle in OFF mode:
 * 1) echo 5 > /sys/devices/platform/omap/omap-hsuart.0/sleep_timeout
 * 2) echo 5 > /sys/devices/platform/omap/omap-hsuart.1/sleep_timeout
 * 3) echo 5 > /sys/devices/platform/omap/omap-hsuart.2/sleep_timeout


 * 1) echo 1 > /debug/pm_debug/enable_off_mode
 * 2) echo 1 > /debug/pm_debug/sleep_while_idle

Trace output:

 / # echo 1 > /debug/pm_debug/enable_off_mode
 / #
 sh-503    [000]  70.862366: power_domain_target: name=iva2_pwrdm state=0 cpu_id=0
 sh-503    [000]  70.862396: power_domain_target: name=mpu_pwrdm state=0 cpu_id=0
 sh-503    [000]  70.862396: power_domain_target: name=neon_pwrdm state=0 cpu_id=0
 sh-503    [000]  70.862396: power_domain_target: name=core_pwrdm state=0 cpu_id=0
 sh-503    [000]  70.862427: power_domain_target: name=cam_pwrdm state=0 cpu_id=0
 sh-503    [000]  70.862457: power_domain_target: name=dss_pwrdm state=0 cpu_id=0
 sh-503    [000]  70.862488: power_domain_target: name=per_pwrdm state=0 cpu_id=0
 sh-503    [000]  70.862488: power_domain_target: name=usbhost_pwrdm state=0 cpu_id=0
 / # [ 557.240020] *** GPT12 wake-up (HW wake-up, ASM restore, delta trace1-7): 1495, 915, 488 us   => Dump of GPT timing deltas
 <idle>-0  [000] 557.156769: power_start: type=1 state=1 cpu_id=0    => Idle start
 <idle>-0  [000] 557.156769: power_start: type=4 state=1 cpu_id=0    => First suspend SW trace in omap_sram_idle
 <idle>-0  [000] 557.156769: power_start: type=4 state=2 cpu_id=0    => ...
 <idle>-0  [000] 557.156830: power_start: type=4 state=3 cpu_id=0
 <idle>-0  [000] 557.156830: power_domain_target: name=neon_pwrdm state=0 cpu_id=0
 <idle>-0  [000] 557.156830: power_start: type=4 state=4 cpu_id=0
 <idle>-0  [000] 557.156860: clock_disable: name=uart3_fck state=0 cpu_id=0
 <idle>-0  [000] 557.156891: power_start: type=4 state=5 cpu_id=0
 <idle>-0  [000] 557.156891: clock_disable: name=uart1_fck state=0 cpu_id=0
 <idle>-0  [000] 557.156921: clock_disable: name=uart2_fck state=0 cpu_id=0
 <idle>-0  [000] 557.157013: power_start: type=4 state=6 cpu_id=0
 <idle>-0  [000] 557.157013: power_start: type=4 state=7 cpu_id=0
 <idle>-0  [000] 557.157898: power_start: type=4 state=8 cpu_id=0    => Last suspend SW trace in omap_sram_idle
 <idle>-0  [000] 557.236084: power_start: type=5 state=1 cpu_id=0    => First resume SW trace in omap_sram_idle
 <idle>-0  [000] 557.236145: power_start: type=5 state=2 cpu_id=0    => ...
 <idle>-0  [000] 557.236206: power_start: type=5 state=3 cpu_id=0
 <idle>-0  [000] 557.236267: power_start: type=5 state=4 cpu_id=0
 <idle>-0  [000] 557.236389: clock_enable: name=uart1_fck state=1 cpu_id=0
 <idle>-0  [000] 557.236450: clock_enable: name=uart2_fck state=1 cpu_id=0
 <idle>-0  [000] 557.236450: power_start: type=5 state=5 cpu_id=0
 <idle>-0  [000] 557.236481: clock_enable: name=uart3_fck state=1 cpu_id=0
 <idle>-0  [000] 557.236511: power_start: type=5 state=6 cpu_id=0
 <idle>-0  [000] 557.236572: power_start: type=5 state=7 cpu_id=0    => Last resume SW trace in omap_sram_idle
 <idle>-0  [000] 557.236602: power_end: cpu_id=0                     => Idle end

Results interpretation
The low power transition sequence is pictured as nested calls to functions:

The measured results (from the HW and SW traces) are mapped to the pictured states according to the following table:

PSI measurements results
Some timing measurements have been made by the TI PSI team. The following tables give the results for the sleep and wake-up latencies of the C-states:



Note: the Linux code has no C7/C8/C9 as in the table; its C7 is MPU OFF + CORE OFF, which is identical to C9 in the table.

A model of the energy spent in the C-states has been built from the measured numbers. Here is the graph of energy vs time:



Taking the minimum energy from the graph identifies the four energy-wise interesting C-states (C1, C3, C5, C9) and the threshold time above which each of those C-states becomes efficient:



Notes:
 * The measurements have been performed at OPP50.
 * No data has been measured for C9 (MPU OFF + CORE OFF); data from the HW and SW trace points is used to fill in the results.
 * The sys_offmode signal is not supported and so is not used for the measurements. A value of 8 ms is used in the table; from the T2 scripts page the value should be 11.5 ms. The measurement data and the threshold for C9 need to be corrected accordingly.
 * The sys_clkreq signal is not used, so a correction is needed. ToBeDone

HW and SW measurements results
Here are the results for full RET and full OFF modes:

Aggregated timings results
From the various sources of data the following figures are derived for all C-states (timings in us). The results are used in the cpuidle table (in arch/arm/mach-omap2/cpuidle34xx.c).

Notes:
 * The power-efficient C-states are identified as C1, C3, C5, C7
 * (1) To force the cpuidle algorithm to choose the power-efficient C-states, the other C-states are given a threshold value equal to that of the next power-efficient C-state
 * (2) The threshold value is derived from the intersection of C3 and C4 in the graph
 * (3) sys_clkoff is not supported; this value needs to be corrected with the correct SYSCLK on/off timings (1 ms for sysclk on, 2.5 ms for sysclk off)
 * (4) From the 'HW and SW measurements results' below and the T2 scripts page, this value is the sum of the HW and SW parts: 11500 + (915 + 488 + 30)
 * (5) The new threshold value is derived from the intersection of C5 and C9 in the graph. Since the sleep and wake-up values differ, C9 is offset in time and in energy by a constant factor (from the initial value of (3760 + 8794) to the new value of (4300 + 12933)), and the intersection gives the new threshold
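The threshold derivations in the notes above are break-even computations: modelling each C-state's energy over an idle period t as a line E(t) = E_transition + P_idle * t, the threshold is the intersection of two such lines. A sketch with purely hypothetical numbers (the energies and powers below are illustrative, not measured data):

```python
def break_even_us(e_shallow_uj, p_shallow_w, e_deep_uj, p_deep_w):
    """Idle time (us) above which the deeper C-state costs less energy.

    Solves e_shallow + p_shallow*t = e_deep + p_deep*t for t;
    with energies in uJ and powers in W, t comes out in us.
    """
    if p_deep_w >= p_shallow_w:
        raise ValueError("the deeper state must have a lower idle power")
    return (e_deep_uj - e_shallow_uj) / (p_shallow_w - p_deep_w)

# Hypothetical: entering the deeper state costs 300 uJ more,
# but it saves 20 mW while idle -> break-even around 15 ms.
threshold = break_even_us(100.0, 0.050, 400.0, 0.030)
```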

Results for individual power domains
Since cpuidle only manages the MPU (and its dependent power domains), the wake-up latency values of the other power domains must be measured separately. By adjusting the target states of the power domains (in /debug/pm_debug/xxxx_pwrdm/suspend), the following combinations have been measured. All values are in us:

HW and SW measurements results
The HW and SW tracers are used to measure the wake-up latencies of the power domains. The results are in the table below.
Notes:
 * sys_clkreq and sys_offmode are not supported, so only the SW restore timing values are relevant.

The significant power domain latencies are derived from the table as follows:

These figures are used in the code as the power domain wake-up latencies for RET and OFF, cf. arch/arm/mach-omap2/powerdomains3xxx_data.c.

ToDo

 * Measure the wake-up latencies for all power domains for OMAP3
 * Measure and add figures for OMAP4
 * Correct some numbers when sys_clkreq and sys_offmode are supported

C1 performance problem: analysis
A serious performance degradation has been noticed during DMA transfers from the NAND device; cf. http://marc.info/?l=linux-omap&m=133467316214021&w=2 for the detailed discussion and patches [1]->[6]. The C1 C-state has a very high latency and degrades the use case performance.

Setup

 * Beagleboard (OMAP3530) at 500MHz,
 * l-o master kernel + functional power states + per-device PM QoS. It has been verified that the changes on top of l-o master do not impact the performance.
 * The data transfer is performed using dd from a file in JFFS2 to /dev/null: 'dd if=/tmp/mnt/a of=/dev/null bs=1M count=32'.

Results
Here are the results on Beagleboard:
 * Without using DMA: 4.7MB/s,
 * Using DMA

This shows a serious performance issue with the C1 C-state, but also that patches [7]->[10] provide some solutions.

Notes:
 * Patches for [7] are at http://marc.info/?l=linux-omap&m=133587781712039&w=2
 * Patches for [8] are at http://marc.info/?l=linux-omap&m=133527749024432&w=2
 * Patch for [9] is at http://marc.info/?l=linux-omap&m=133656106811605&w=2
 * Patch for [10] is at http://marc.info/?l=linux-omap&m=133656106911606&w=2

Main contributors
Here are the contributors inside __omap3_enter_idle (averaged over 30 samples):



The main contributors are:
 * (140us) pwrdm_pre_transition and pwrdm_post_transition,
 * (105us) omap2_gpio_prepare_for_idle and omap2_gpio_resume_after_idle. This could be avoided if PER stays ON in the latency-critical C-states,
 * (78us) pwrdm_for_each_clkdm(mpu, core, deny_idle/allow_idle),
 * (33us estimated) omap_set_pwrdm_state(mpu, core, neon),
 * (11 us) clkdm_allow_idle(mpu). Is this needed?

The HW idle time is 6.5 us, which is negligible compared to the SW overhead required to reach the idle state.

Use case idle stats
Using only the cpuidle tracepoints, the average times in idle are (averaged over 60 samples):

Notes:
 * From the stats above, the average latencies in C1 (397us; 349us; 246us; 178us) exceed the idle duration without cpuidle (113us), hence the performance degradation.
 * The registers cache optimizes the low power mode transitions, but is not sufficient to obtain a big gain: a few unused power domains are still transitioning, which causes a big penalty in the idle path.
 * khilman's optimizations are really helpful; they also further reduce the registers cache accesses, as the statistics show.
 * The [1]+[8]+[9]+[10] combination brings the performance close to the non-CPUIDLE case (no IDLE, no omap_sram_idle, all pwrdms ON).

Registers cache accesses stats
The number of register accesses is shown in PM debug using a registers cache statistics patch. The debug log shows the large number of accesses in this optimized use case ([0]+[7]+[8]) and the cache efficiency:

 / # cat /debug/pm_debug/count
 usbhost_pwrdm (ON),OFF:719,OSWR:0,CSWR:578,INA:0,ON:1298,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0. Cache access 8279, hit 29, rate 0%
 sgx_pwrdm (OFF),OFF:1,OSWR:0,CSWR:0,INA:0,ON:1,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0. Cache access 8275, hit 26, rate 0%
 core_pwrdm (ON),OFF:19,OSWR:0,CSWR:20,INA:14,ON:54,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0,RET-MEMBANK2-OFF:0. Cache access 14960, hit 3966, rate 26%
 per_pwrdm (ON),OFF:33,OSWR:0,CSWR:20,INA:0,ON:54,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0. Cache access 18811, hit 7817, rate 41%
 dss_pwrdm (ON),OFF:719,OSWR:0,CSWR:578,INA:0,ON:1298,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0. Cache access 8279, hit 29, rate 0%
 cam_pwrdm (OFF),OFF:1,OSWR:0,CSWR:1,INA:0,ON:1,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0. Cache access 10907, hit 2657, rate 24%
 neon_pwrdm (ON),OFF:19,OSWR:0,CSWR:1271,INA:7,ON:1298,RET-LOGIC-OFF:0. Cache access 12611, hit 2885, rate 22%
 mpu_pwrdm (ON),OFF:19,OSWR:0,CSWR:1271,INA:7,ON:1298,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0. Cache access 18390, hit 7396, rate 40%
 iva2_pwrdm (OFF),OFF:1,OSWR:0,CSWR:1,INA:0,ON:1,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0,RET-MEMBANK2-OFF:0,RET-MEMBANK3-OFF:0,RET-MEMBANK4-OFF:0. Cache access 8281, hit 31, rate 0%
 usbhost_clkdm->usbhost_pwrdm (1)
 sgx_clkdm->sgx_pwrdm (0)
 per_clkdm->per_pwrdm (16)
 cam_clkdm->cam_pwrdm (0)
 dss_clkdm->dss_pwrdm (1)
 core_l4_clkdm->core_pwrdm (21)
 core_l3_clkdm->core_pwrdm (4)
 d2d_clkdm->core_pwrdm (0)
 iva2_clkdm->iva2_pwrdm (0)
 neon_clkdm->neon_pwrdm (0)
 mpu_clkdm->mpu_pwrdm (0)
 prm_clkdm->wkup_pwrdm (0)
 cm_clkdm->core_pwrdm (0)
 / #

Note that some power domains have a cache hit rate of 0% because they are unused (i.e. not controlled by any driver). Still, a lot of register accesses are performed in the idle path.
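The hit rates in the dump can be recomputed on the host from the raw access/hit counters. A hypothetical parser for the per-domain statistics lines (illustrative only, not part of the patches):

```python
import re

# Matches the per-domain statistics, e.g.
# "per_pwrdm (ON),OFF:33,... Cache access 18811, hit 7817, rate 41%"
CACHE_RE = re.compile(r"(\w+) \([A-Z]+\).*?Cache access (\d+), hit (\d+)")

def cache_stats(dump):
    """Map power domain name -> (accesses, hits, hit rate in %)."""
    return {m.group(1): (int(m.group(2)), int(m.group(3)),
                         100 * int(m.group(3)) // int(m.group(2)))
            for m in CACHE_RE.finditer(dump)}

sample = ("per_pwrdm (ON),OFF:33,OSWR:0,CSWR:20,INA:0,ON:54. "
          "Cache access 18811, hit 7817, rate 41%")
stats = cache_stats(sample)
```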

Here is the statistics patch:

 diff --git a/arch/arm/mach-omap2/pm-debug.c b/arch/arm/mach-omap2/pm-debug.c
 index ed9846e..632db47 100644
 --- a/arch/arm/mach-omap2/pm-debug.c
 +++ b/arch/arm/mach-omap2/pm-debug.c
 @@ -119,6 +119,9 @@ static int pwrdm_dbg_show_counter(struct powerdomain *pwrdm, void *user)
  			seq_printf(s, ",RET-MEMBANK%d-OFF:%d", i + 1,
  				pwrdm->ret_mem_off_counter[i]);
 
 +	seq_printf(s, ". Cache access %d, hit %d, rate %d%%",
 +			pwrdm->pwrdm_cache_access, pwrdm->pwrdm_cache_hit,
 +			(100 * pwrdm->pwrdm_cache_hit)/pwrdm->pwrdm_cache_access);
  	seq_printf(s, "\n");
 
  	return 0;
 diff --git a/arch/arm/mach-omap2/powerdomain.c b/arch/arm/mach-omap2/powerdomain.c
 index 537595c..1b80ee9 100644
 --- a/arch/arm/mach-omap2/powerdomain.c
 +++ b/arch/arm/mach-omap2/powerdomain.c
 @@ -711,10 +711,12 @@ static int pwrdm_cache_read(struct powerdomain *pwrdm, int index, int *value)
  	if (index >= PWRDM_CACHE_SIZE)
  		return -EINVAL;
 
 +	pwrdm->pwrdm_cache_access++;
  	if (!(pwrdm->cache_state & (1 << index)))
  		return -ENODATA;
 
  	*value = pwrdm->cache[index];
 +	pwrdm->pwrdm_cache_hit++;
 
  	return 0;
  }
 diff --git a/arch/arm/mach-omap2/powerdomain.h b/arch/arm/mach-omap2/powerdomain.h
 index 92386bd..a9eae1c 100644
 --- a/arch/arm/mach-omap2/powerdomain.h
 +++ b/arch/arm/mach-omap2/powerdomain.h
 @@ -172,6 +172,10 @@ struct powerdomain {
  	struct mutex lock;
  	int state;
  	int cache[PWRDM_CACHE_SIZE];
 +
 +	int pwrdm_cache_access;
 +	int pwrdm_cache_hit;
 +
  	long cache_state;
  	unsigned state_counter[PWRDM_MAX_FUNC_PWRSTS];
  	unsigned ret_logic_off_counter;

Device latency patches
PM QoS device constraint code patches

T2 scripts information page

Presentation slides: Fosdem/ELC 2012


--Jpihet 24 Apr 2012