AIX Tuning Points - CPU, MEM, VG + FAQ
2012.11.20 22:54
Source: http://www.ischo.net -- 조인상 // System Engineer
Writer : http://www.ischo.net -- ischo // System Engineer in Republic of Korea
1. CPU Tuning
1-1. CPU Scheduling Tuning
- No tuning is required by default.
- Tuning to reduce response time for CPU-polling processes:
# schedo -p -o smt_snooze_delay=-1
: An SMT thread enters snooze (sleep) state when there is no CPU load for the amount of time specified by smt_snooze_delay.
The value "-1" means "disable": SMT threads always stay awake.
On systems using a shared CPU pool, such as micro-partitions, the value -1 is not recommended, because the CPU then behaves as dedicated rather than shared.
- On POWER7 systems, smt_snooze_delay should be set to 0; other values can degrade performance.
1-2. CPU Folding Issue
- CPU folding: performance can be improved by concentrating the computing power of the physical cores on a subset of the virtual CPUs.
# schedo -p -o vpm_xvcpus=0 // enable CPU folding (enabled by default)
- However, DBMS vendors such as Oracle and Sybase recommend disabling it, because CPU folding can degrade DBMS performance.
# schedo -p -o vpm_xvcpus=-1 // disable CPU folding
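The current and persisted values of these tunables can be checked before changing them. A minimal sketch, assuming a root shell on AIX where schedo is available:

```shell
# Show current, default, and boot values of each tunable.
schedo -o smt_snooze_delay
schedo -o vpm_xvcpus

# -L prints full details (valid range, unit, whether a reboot is needed).
schedo -L smt_snooze_delay
```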
2. MEM Tuning
2-1. File system cache growing until free memory runs out
- Default values of the memory parameters in AIX 5.3:
: minperm% = 20
: maxperm% = 80
: maxclient% = 80
: strict_maxperm = 0
: strict_maxclient = 1
: lru_file_repage = 1
: page_steal_method = 0
- With heavy file system I/O, much of the memory is consumed by the file system cache, and applications may not have enough memory left.
To prevent this, memory can be secured by forcibly limiting the share of memory used as file system cache.
Based on experience, the values below are recommended:
# vmo -p -o maxclient%=20 -o maxperm%=20 -o minperm%=10 -o lru_file_repage=0
- From AIX 6.1 onward, the file cache and the working (computational) memory pool are managed separately.
Under this scheme, paging-space page-outs do not occur unless working memory uses more than 97% of total memory.
- If lru_file_repage is 1, page replacement favors whichever of the file cache and working memory is re-referenced most; if 0, working memory is given priority (file cache pages are stolen first).
- If page_steal_method is 0, the file cache and working memory are managed in one pool; if 1, each is managed in a separate pool.
# vmo -p -o maxclient%=90 -o maxperm%=90 -o minperm%=3 -o lru_file_repage=0
# vmo -r -o page_steal_method=1 (reboot required)
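Whether this tuning is needed, and whether it took effect, can be checked with standard commands. A sketch assuming an AIX shell (the grep patterns are illustrative):

```shell
# Share of real memory currently used for file pages ("numperm percentage"
# and "numclient percentage" in the vmstat -v output).
vmstat -v | grep -i "percentage"

# Effective values of the tunables discussed above.
vmo -a | grep -E "minperm%|maxperm%|maxclient%|lru_file_repage|page_steal_method"
```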
3. VG Issue
- VG Specifications

                 | Standard VG | BIG VG | Scalable VG
  MAX PVs        | 32          | 128    | 1024
  MAX LVs        | 255         | 511    | 4095
  MAX active VGs | 255         | 255    | 255
  LVM work speed | fast        | slow   | slow (depends on number of LVs)
* LVM work speed: the speed of LVM operations such as lvcreate; it does not mean file system I/O speed.
* To use raw devices in a Big VG, the mklv option -T O is required; it is an option intended for Oracle on Big VG layouts.
It is not required for Standard VG or Scalable VG.
- Unless you have a specific reason, it is better to use a Scalable VG.
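The VG type is fixed when the group is created. A minimal sketch of creating each type (disk and VG names are examples, assuming free hdisks):

```shell
mkvg    -y stdvg  hdisk1   # Standard VG
mkvg -B -y bigvg  hdisk2   # Big VG
mkvg -S -y scalvg hdisk3   # Scalable VG (recommended)

# Raw-device LV on a Big VG for Oracle: zero-offset option -T O
mklv -T O -y rawlv bigvg 10
```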
4. Network Parameter Tuning
4-1. Tuning Default Network Parameters
- Recommended network parameter values for systems under high network load:
# no -p -o tcp_recvspace=65536 -o tcp_sendspace=131072 -o udp_recvspace=655360 -o udp_sendspace=65536 -o rfc1323=1 -o tcp_nodelayack=1 -o tcp_nagle_limit=0
- For 10 Gbit Ethernet, even larger values are required:
# chdev -l en# -a tcp_recvspace=655360 -a tcp_sendspace=262144 -a tcp_nodelay=1 -a rfc1323=1
- When uniform broadcast performance is required, run the commands below to avoid excessive buffering on the Ethernet adapter:
# chdev -l ent# -a large_send=no -a chksum_offload=no -a tx_que_sz=4096 -a txdesc_que_sz=256 -P
# no -p -o udp_recvspace=42080 -o udp_sendspace=9216 -o tcp_nodelayack=1
(This tuning reduces buffering so that data is not pushed out in one large burst.)
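One thing worth checking after a chdev-based change: when Interface-Specific Network Options (ISNO) are enabled, per-interface attributes override the global no values. A sketch, with en0 as an example interface:

```shell
# ISNO: per-interface values override the global "no" tunables when enabled.
no -a | grep use_isno

# Attributes currently set on the interface.
lsattr -El en0 | grep -E "tcp_recvspace|tcp_sendspace|tcp_nodelay|rfc1323"

# UDP socket buffer overflows suggest udp_recvspace is still too small.
netstat -s | grep -i "socket buffer overflows"
```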
4-2. Missing Sessions Between Servers and L4 Switches/Firewalls
- Most transaction-processing applications use a connection pool, keeping existing connections in ESTABLISHED state while transferring data
(so each request does not have to go through the TCP open/close steps).
- The time from ESTABLISHED to CLOSE can be adjusted with the tcp_keepidle, tcp_keepcnt, and tcp_keepintvl parameters.
- When two servers communicate through a network device such as an L4 switch,
if the session timeout set on the L4 switch is shorter than the tcp_keepidle set on the server, the L4 switch closes the socket before the server does.
In this case, the server's tcp_keepidle value should be set lower than the L4 switch's timeout.
# no -p -o tcp_keepidle=1080 -o tcp_keepcnt=4 -o tcp_keepintvl=10
(defaults)
tcp_keepcnt = 8 -- number of keepalive probes to send
tcp_keepidle = 14400 -- idle time before probing (half-seconds)
tcp_keepintvl = 150 -- interval between probes (half-seconds)
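The keepalive tunables above are expressed in half-seconds, which is easy to misread. A quick sanity check of the defaults and of the proposed value, using plain POSIX shell arithmetic (the numbers are the ones from the text):

```shell
# AIX keepalive tunables are in half-second units; convert to seconds/hours.
echo "default tcp_keepidle:  $((14400 / 2)) s = $((14400 / 2 / 3600)) h"   # 7200 s = 2 h
echo "default tcp_keepintvl: $((150 / 2)) s"                               # 75 s
echo "proposed tcp_keepidle: $((1080 / 2)) s"                              # 540 s = 9 min
```

So the recommended tcp_keepidle=1080 makes the server probe idle connections after 9 minutes instead of the default 2 hours, which keeps the server inside typical L4/firewall idle timeouts.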
5. TimeZone
- A bug exists in the new time zone handling introduced in AIX 6.1.
- If you run into this error, you can set the time zone the same way as in AIX 5.3:
# smitty chtz_date
Standard Time ID (alphabets only) --> KORST
Standard Time Offset from CUT ([+|-]HH:MM:SS) --> -9
[ reboot required ]
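For reference, the smitty screen above amounts to configuring an old-style POSIX TZ string: a zone name followed by the number of hours UTC is ahead of local time, so Korea (UTC+9) is written as -9. A minimal sketch of the equivalent setting; KORST is the ID chosen above:

```shell
# Old-style POSIX TZ string: <name><offset>, where the offset is how far
# UTC is ahead of local time, so UTC+9 (Korea) is written as -9.
export TZ=KORST-9
date +%Z    # prints the zone name: KORST
```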
6. Misc. FAQs
- Does the SMT feature improve performance?
SMT can improve performance when CPU utilization (especially logical CPU usage) exceeds roughly 40-50%.
The performance improvement from SMT is up to about 60%, but it varies with application characteristics and system environment.
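Whether a workload is in the range where SMT pays off can be judged from per-logical-CPU utilization. A sketch using standard AIX monitoring commands (the interval and count values are examples):

```shell
# Per-logical-CPU utilization; with SMT each hardware thread shows up
# as its own logical CPU (1-second samples, 3 iterations).
sar -P ALL 1 3

# Utilization per SMT thread within each physical core.
mpstat -s 1 3
```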
- Why does the AIX OS alone take 12 GB of memory?
In fact, the kernel heap and other kernel segments are using that memory.
The IBM lab's description: it is mainly used for process and thread tables, I/O buffers, pinned code, kernel data structures, file system metadata, and RAS requirements.
: With many devices, heavy I/O, or large physical memory, kernel memory usage increases.
: It can be normal for the OS to occupy more than 12 GB of memory.
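The kernel's memory footprint can be inspected directly rather than estimated. A sketch using svmon (availability of the unit option varies by AIX level):

```shell
# Global memory breakdown: the "work" column of the "in use" row includes
# kernel segments, and "pin" shows pinned (unpageable) memory.
svmon -G

# On recent AIX levels the output can be printed in gigabytes.
svmon -G -O unit=GB
```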