What are the symptoms/effects of too high a tick rate in an RTOS?


I would be grateful if someone could offer an explanation of the effects of a high tick rate in an RTOS, or direct me to a resource that explains it clearly?

The context of the question: we are running µC/OS-II with a tick rate of 10000 (OS_TICKS_PER_SEC = 10000), well outside the recommended setting of 10 to 100. While investigating another issue I noticed this setting and flagged it as an anomaly. There is no detail in the µC/OS manual (that I can see) explaining why this range is recommended. We have 8 tasks (plus interrupts) running at different priorities, and I assume a higher tick rate means we switch to the highest priority task faster, in our case ensuring the user interface is addressed ahead of less important maintenance tasks.
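For reference, a minimal sketch of how that setting appears in our kernel configuration (the exact file layout varies by µC/OS-II port; the commented alternative reflects the manual's recommended range):

    /* os_cfg.h -- uC/OS-II kernel configuration header */
    #define OS_TICKS_PER_SEC  10000u  /* our current setting: one tick every 100 us */
                                      /* recommended range per the manual: 10 to 100 */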

From what I can see, the consensus recommends against setting the tick rate in an RTOS "too" high due to "overhead". It seems common to suggest the use of interrupts together with lower tick rates. Fair enough, but I am unclear on the detectable downsides as the tick rate increases. For example, the FreeRTOS documentation states: "The RTOS demo applications use a tick rate of 1000 Hz. This is used to test the RTOS kernel and is higher than would normally be required." It goes on to say that tasks of equal priority will be switched rapidly, which could lead to the kernel occupying a lot of processing time, which would be negative. I presume the intended speed-up from increasing the tick rate becomes a negative when the kernel consumes too much of the processor time. Maybe that is the answer I need. In our case the tasks all have different priorities, so I do not think it is (as?) applicable.

Ultimately, I am trying to determine whether our software is running at too high a tick rate, or how close to the threshold we are. I think the intended benefit during development was to stabilise the user interface. I am hoping the answer is not entirely empirically based!

The scheduler runs on every tick. If, for example, the scheduler takes 10 microseconds to run and a tick occurs every 10 ms, the scheduling overhead in the absence of other scheduling events is 0.1%; if a tick occurs every 100 µs, the overhead is 10%. In the extreme case, the tick rate could be so high that the processor spends all its time in the scheduler and never runs the tasks at all!
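As a quick sanity check on that arithmetic, a small C sketch (the 10 microsecond scheduler cost is the illustrative figure from above, not a measured value):

    #include <stdio.h>

    int main(void)
    {
        const double scheduler_us = 10.0;                     /* illustrative tick-processing cost */
        const double tick_period_us[] = { 10000.0, 100.0 };   /* 10 ms tick vs 100 us tick */

        for (int i = 0; i < 2; i++) {
            double overhead_pct = scheduler_us / tick_period_us[i] * 100.0;
            printf("tick every %8.0f us -> overhead %5.1f%%\n",
                   tick_period_us[i], overhead_pct);
        }
        return 0;   /* prints 0.1% for the 10 ms tick and 10.0% for the 100 us tick */
    }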

The actual scheduling overhead will of course depend on processor speed. A faster processor will be able to cope with a faster tick, but there is no benefit in running the tick faster than the application needs: it eats CPU time that could be used for useful work. The recommendation of 10 to 100 presumably relates to what is adequate for most systems, the aim being to run only as fast as necessary.

By spending more time in the scheduler than necessary, greater scheduling latency and jitter may occur for tasks scheduled on events other than timers or delays. If, for example, an interrupt occurs and its handler triggers a task, that task may be delayed whenever the interrupt occurs while the scheduler is busy processing a tick.

A faster tick rate does not make anything run faster; it increases the resolution of the timers and delays that may be used, but conversely reduces their range. A 16-bit timer at a 100 µs tick rate will roll over after 6.55 seconds, while with a 10 ms tick it will roll over after 10 minutes 55 seconds. If your timers are 32-bit, this is perhaps less of an issue.
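Those roll-over figures follow directly from the counter width; a small sketch, assuming an unsigned 16-bit tick counter:

    #include <stdio.h>

    int main(void)
    {
        const double counter_states = 65536.0;        /* 2^16 values of a 16-bit counter */
        const double tick_us[] = { 100.0, 10000.0 };  /* 100 us and 10 ms ticks */

        for (int i = 0; i < 2; i++) {
            double rollover_s = counter_states * tick_us[i] / 1e6;
            printf("tick %7.0f us -> roll-over after %7.2f s\n", tick_us[i], rollover_s);
        }
        return 0;   /* 6.55 s at 100 us; 655.36 s (10 min 55 s) at 10 ms */
    }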

You need to ask yourself what resolution (and possibly range) you need for timers and delays; it seems unlikely that you need 100 µs resolution if the UI is your "most important" task (although "importance" is an inappropriate method of priority allocation in a real-time system, and that is ringing alarm bells!).

If you need the higher resolution for just one task (signal sampling from an ADC at the Nyquist rate, for example), you would be better off using an independent hardware timer for that, perhaps? And if the tick was set fast in order to obtain a timely response to polled events, you would be better off arranging for such events to generate interrupts instead.
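A sketch of that dedicated-timer arrangement under µC/OS-II (OSSemPost, OSSemPend, OSIntEnter and OSIntExit are real µC/OS-II services; the BSP timer/ADC functions and all names here are hypothetical placeholders for your port):

    #include "ucos_ii.h"

    void   BSP_AdcStartConversion(void);  /* hypothetical BSP calls */
    INT16U BSP_AdcRead(void);
    void   ProcessSample(INT16U sample);  /* hypothetical application code */

    static OS_EVENT *AdcSem;              /* created elsewhere with OSSemCreate(0) */

    /* ISR for a hardware timer programmed at the ADC sample rate,
       entirely independent of OS_TICKS_PER_SEC. */
    void SampleTimerISR(void)
    {
        OSIntEnter();                     /* tell the kernel we are in an ISR */
        BSP_AdcStartConversion();
        OSSemPost(AdcSem);                /* wake the sampling task */
        OSIntExit();                      /* performs a context switch if needed */
    }

    void AdcTask(void *p_arg)
    {
        INT8U err;
        (void)p_arg;
        for (;;) {
            OSSemPend(AdcSem, 0, &err);   /* block until a sample is ready */
            ProcessSample(BSP_AdcRead());
        }
    }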

"I assume a higher tick rate means switching to the highest priority task faster."

Not faster, just possibly more frequently. The tick rate does not affect the context switch time, and a task waiting on a timer or delay will run when that timer/delay expires. When a tick occurs, timers and delays are decremented, and a context switch occurs when one expires. By having a faster tick, all you increase is the number of times the scheduler runs and decides to do nothing! If you set timers and delays to values that take the tick rate into account, changing the tick rate will not affect the timing of existing tasks.
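One way to take the tick rate into account, as suggested, is to specify delays in milliseconds and convert; a minimal sketch (MS_TO_TICKS is a hypothetical helper, while OSTimeDly and OS_TICKS_PER_SEC are real µC/OS-II names; OSTimeDly takes an INT16U, so the result must stay below 65536 ticks):

    #include "ucos_ii.h"

    /* Hypothetical helper: convert milliseconds to ticks, rounding up so a
       requested delay is never shortened by integer division. */
    #define MS_TO_TICKS(ms) \
        ((INT16U)(((INT32U)(ms) * OS_TICKS_PER_SEC + 999u) / 1000u))

    void DoMaintenanceWork(void);         /* hypothetical work function */

    void MaintenanceTask(void *p_arg)
    {
        (void)p_arg;
        for (;;) {
            DoMaintenanceWork();
            OSTimeDly(MS_TO_TICKS(50));   /* 50 ms regardless of the tick rate */
        }
    }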

