Dec 4, 2011

Higher l2arc_write_max is considered harmful

That post is completely wrong. It recommends tuning ZFS's L2ARC as follows:
  • Change the Record Size to a much lower value than 128 KB.
  • Change the l2arc_write_max to a higher value.
  • Set the l2arc_noprefetch=0.
All of these changes have harmful effects.

You should not set ZFS's record size unless you are using ZFS for a DBMS. ZFS automatically chooses a proper record size for each file, and setting it manually defeats that mechanism.

You shouldn't set l2arc_write_max to a higher value to speed up filling the L2ARC; it has the opposite effect. ZFS uses this parameter to decide the length of each fill cycle: it chooses a 1-second cycle if the amount of data written in the last cycle is at most half of this parameter, and a 200 ms cycle otherwise. A higher value therefore makes ZFS choose the 1-second cycle and slows the filling down.
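The cycle choice described above can be sketched as a tiny model (simplified and illustrative only, not the actual OpenZFS feed-thread code; the function name is made up):

```python
# Simplified model of the L2ARC feed-cycle choice described above.
L2ARC_FEED_SECS = 1.0     # normal feed interval: 1 second
L2ARC_FEED_MIN_MS = 200   # fast feed interval: 200 ms

def next_feed_interval(wrote, l2arc_write_max):
    """Return the delay (seconds) before the next L2ARC fill cycle.

    If the last cycle wrote more than half of l2arc_write_max, ZFS
    switches to the fast 200 ms cycle; otherwise it waits 1 second.
    """
    if wrote > l2arc_write_max / 2:
        return L2ARC_FEED_MIN_MS / 1000.0
    return L2ARC_FEED_SECS

# With the default write_max (8 MB), a 5 MB write triggers the fast cycle:
print(next_feed_interval(5 << 20, 8 << 20))   # 0.2
# Raising write_max to 64 MB makes the same 5 MB write look small,
# so ZFS falls back to the slow 1-second cycle:
print(next_feed_interval(5 << 20, 64 << 20))  # 1.0
```

So a larger l2arc_write_max raises the threshold for the fast cycle, which is exactly why it backfires.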

If you want to speed it up, set l2arc_headroom to a higher value instead. The proper value depends on your workload; the default of 2 is too small in many cases. A higher value allows ZFS to gather data up to l2arc_write_max each cycle and thus keep choosing the 200 ms cycle.
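Roughly speaking, l2arc_headroom bounds how much of the ARC the feed thread scans per cycle as a multiple of l2arc_write_max. The arithmetic below is an illustrative assumption based on that description, not the actual ZFS code:

```python
# Illustrative model: per-cycle ARC scan depth as a multiple of
# l2arc_write_max (an assumption for illustration, not ZFS source).
def scan_target_bytes(l2arc_write_max, l2arc_headroom):
    # Each fill cycle, the feed thread scans up to roughly
    # headroom * write_max bytes of ARC buffers for candidates.
    return l2arc_write_max * l2arc_headroom

WRITE_MAX = 8 << 20  # default l2arc_write_max: 8 MB

# The default headroom of 2 scans only about 16 MB of ARC per cycle:
print(scan_target_bytes(WRITE_MAX, 2) >> 20)  # 16
# headroom = 8 scans about 64 MB, making it much more likely that a
# full write_max worth of eligible data is found in every cycle:
print(scan_target_bytes(WRITE_MAX, 8) >> 20)  # 64
```

A deeper scan finds enough eligible data to keep each cycle's write above the half-of-write_max threshold, which is what keeps the fast cycle going.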

You shouldn't set l2arc_noprefetch to 0. If you do, ZFS tends to store prefetched data that is never used in the L2ARC. That data needlessly evicts cached blocks from the L2ARC and lowers its efficiency.
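A toy cache simulation shows the pollution effect. This is a generic LRU model, not ZFS code: interleaving never-reused "prefetched" blocks into a hot working set evicts the hot blocks before they can be re-read:

```python
from collections import OrderedDict

# Toy LRU cache, purely illustrative of cache pollution, not ZFS code.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0

    def access(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # refresh recency
            self.hits += 1
        else:
            self.store[key] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict the oldest entry

hot = list(range(8))  # 8 hot blocks, re-read repeatedly; cache holds 8

clean = LRUCache(8)
for _ in range(3):
    for b in hot:
        clean.access(b)

polluted = LRUCache(8)
for _ in range(3):
    for b in hot:
        polluted.access(b)
        polluted.access(("prefetch", b))  # prefetched, never reused

print(clean.hits, polluted.hits)  # prints: 16 0
```

Without pollution every hot block hits after the first pass; with the prefetched blocks interleaved, the working set no longer fits and every access misses.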

That post observes that the design of the L2ARC is based on the characteristics of the SSDs of its day, and that is actually true: the L2ARC expects rather low IOPS and avoids writing much data. You cannot, however, improve ZFS's performance by retuning the L2ARC parameters for today's incredibly fast SSDs. By design, the L2ARC cannot work effectively with such distorted parameters.
