Introduction
The purpose of this document is to describe how memory is used on HP-UX and the tools, both supported and unsupported, that are available to examine and report memory usage.
The "memory" line in the output of swapinfo...
- The "memory" line is infamously misleading and does not refer to actual physical memory use! Rather, it is the size of pseudo-swap, which is calculated to be 75% of the size of RAM (a.k.a. memory). As the name ("pseudo") implies, it does not exist as disk space. Pseudo-swap is enabled by default with the kernel parameter swapmem_on(5) set to 1. Don't worry about this line; look at the "total" line for total usage. It is also sometimes interesting to look at the device PCT USED as an indication of how much swapping has occurred since the box was last rebooted.
- To read more on pseudo-swap, refer to:
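Since pseudo-swap is defined as 75% of RAM, its expected size is easy to sanity-check with shell arithmetic; the RAM figure below is an assumed sample, not taken from this document:

```shell
# Pseudo-swap is 75% of RAM when swapmem_on is 1.
ram_mb=8192                          # assumed sample RAM size in Mb
pseudo_mb=$((ram_mb * 75 / 100))
echo "expected pseudoswap: ${pseudo_mb} Mb"   # 6144 Mb for 8192 Mb RAM
```

If the "memory" line's AVAIL differs wildly from this figure, swapmem_on is probably not at its default.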
swapinfo description and example...
- Use 'swapinfo -tm' to get a complete picture of swap usage (see below for an example and details). Pay particular attention to the "total" line, as it indicates how much swap space has actually been reserved. When this percentage gets near 100%, processes will not start up (unable to fork process) and new shared memory segments cannot be created.
- swapinfo -tm example and explanation:

             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE PRI  NAME
dev         288      83     205   29%       0       -   1  /dev/vg00/lvol2
reserve       -     141    -141
memory      102      41      61   40%
total       390     265     125   68%       -       0   -
- dev: the actual physical swap device(s).
  - The PCT USED column in the dev lines shows if swapping has actually occurred. In other words, it represents the value last attained during a previous period of swapping. This is analogous to the high-water mark that a flood leaves.
  - To check whether swapping is currently occurring, use 'vmstat -v 5 5' and see if 'po' (page outs) is sustained above 0.
- reserve: indicates how much of the swap device(s) has been set aside for memory should it need to be swapped.
- memory: indicative of how much pseudo-swap has been reserved. When present, this line indicates pseudo-swap is enabled (i.e. the swapmem_on kernel parameter is set to 1, which is the default). The size of pseudo-swap is calculated to be 75% of the size of RAM (a.k.a. memory). In other words, this line does not refer to actual physical memory use! Pseudo-swap was designed specifically for large memory systems on which actual swapping is never (or rarely) expected to occur, so there is less need to use actual physical disk space for swap. For more information, see swapmem_on(5), which reads:
In previous versions of HP-UX, system configuration required sufficient physical swap space for the maximum possible number of processes on the system. This is because HP-UX reserves swap space for a process when it is created, to ensure that a running process never needs to be killed due to insufficient swap. This was difficult, however, for systems needing gigabytes of swap space with gigabytes of physical memory, and those with workloads where the entire load would always be in core. This tunable was created to allow system swap space to be less than core memory. To accomplish this, a portion of physical memory is set aside as 'pseudo-swap' space. While actual swap space is still available, processes still reserve all the swap they will need at fork or execute time from the physical device or file system swap. Once this swap is completely used, new processes do not reserve swap, and each page which would have been swapped to the physical device or file system is instead locked in memory and counted as part of the pseudo-swap space.
- The PCT USED value shown in the "total" line indicates how much swap space has actually been reserved. When this percentage gets near 100%, processes will not start up (unable to fork process) and new shared memory segments cannot be created.
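As a sketch of watching that percentage, the awk below pulls the total-line PCT USED out of swapinfo-style text. A here-document with the sample figures from this section stands in for running 'swapinfo -tm' live, and the 90% warning threshold is an arbitrary choice:

```shell
# Extract PCT USED from the "total" line (field 5, with the % stripped).
pct=$(awk '$1 == "total" { sub(/%/, "", $5); print $5 }' <<'EOF'
dev      288   83  205  29%  0  -  1  /dev/vg00/lvol2
reserve    -  141 -141
memory   102   41   61  40%
total    390  265  125  68%  -  0  -
EOF
)
if [ "$pct" -ge 90 ]; then
    echo "WARNING: swap reservation at ${pct}% -- forks may start failing"
else
    echo "swap reservation at ${pct}%"
fi
```

On a live box, replace the here-document with the real command: `swapinfo -tm | awk ...`.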
A. Review Buffer Cache size
- Buffer cache is, by default, allowed to grow to 50% of RAM (see kernel parameter dbc_max_pct(5)). A buffer cache sweet spot is 400 Mb or 20% of memory, whichever is smaller; but of course, this may vary from system to system. To check the current size of the buffer cache, either run "sysdef | grep bufpages" (and multiply by 4096 to approximate the current size of buffer cache) or use glance's memory screen to see what size "BufCache" is.
- Note: Although buffer cache is dynamic in size, it decreases only under memory pressure, and then only very slowly. So the buffer cache often grows fairly quickly to dbc_max_pct and only decreases (slowly) when memory pressure is high.
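The sweet-spot rule above (the smaller of 400 Mb or 20% of memory) can be sketched as shell arithmetic; the RAM size is an assumed sample value:

```shell
# Buffer cache "sweet spot": min(400 Mb, 20% of RAM).
ram_mb=8192                        # assumed sample RAM size in Mb
spot_mb=$((ram_mb * 20 / 100))     # 20% of memory...
if [ "$spot_mb" -gt 400 ]; then    # ...capped at 400 Mb
    spot_mb=400
fi
echo "suggested buffer cache: ${spot_mb} Mb"
```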
Q: How much RAM (memory) does the system have?
A: Choose one of the following methods:
1. Use adb to query the kernel for the size of physical memory:
   a. 11.23  # echo phys_mem_pages/2d | adb /stand/vmunix /dev/kmem
      11.11  # echo phys_mem_pages/D | adb -k /stand/vmunix /dev/mem
      10.x   # echo physmem/D | adb -k /stand/vmunix /dev/mem
   b. Multiply the output of adb by 4096 to get the size of RAM in bytes.
2. Run: # dmesg | grep Phys
3. Check with glance: in the Memory Report, look at the value of Phys Mem.
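Step 1b above is plain arithmetic; a quick sketch with an assumed sample page count standing in for the adb output:

```shell
# Convert a kernel page count (4096-byte pages) to bytes and Mb.
pages=524288                       # assumed sample adb output
bytes=$((pages * 4096))
mb=$((bytes / 1024 / 1024))
echo "RAM: ${bytes} bytes (${mb} Mb)"   # 2147483648 bytes (2048 Mb)
```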
B. Monitoring Memory Usage
- There are 3 ways for memory to be allocated, all requiring an equivalent amount of swap:
  - plain memory, as allocated with the malloc(3C) library call.
  - shared memory, as allocated with the shmget(2) system call.
  - memory mapped files, as allocated with the mmap(2) system call.
- 1) plain memory
  - As allocated with malloc(3C); can be examined with ps, procsize, kmeminfo, and glance. See below for details on each of these utilities/tools.
  - Is used by a process' data space, stack space, and text space, limited by the kernel parameters maxdsiz(5), maxssiz(5), and maxtsiz(5) respectively for 32-bit apps, and maxdsiz_64bit(5), maxssiz_64bit(5), and maxtsiz_64bit(5) respectively for 64-bit apps.
a) Use ps(1) to report process memory usage (does NOT include shared memory or mmap'ed files), and sort(1) to see the largest memory users first:
# UNIX95= ps -eo vsz,ruser,pid,args | sort -rn | more
26332   ids   1685 ./idsagent -a
 5296  root   2178 /usr/sbin/stm/uut/bin/tools/monitor/fpl_em
 4760  root   2713 /opt/perf/bin/rep_server -t SCOPE /var/opt/perf/datafiles/loggl
 4068  root   1487 /opt/perf/bin/scopeux
 4052  root   1243 /opt/dce/sbin/rpcd
 3364  root   2715 /opt/perf/bin/alarmgen -svr 2714 -t alarmgen /var/opt/perf/data
 3180  root   1465 /opt/perf/bin/midaemon
 3148  root   1495 /usr/sbin/swagentd -r
- Then look at the 1st column in the output to see the amount of memory used by each process for data/text and stack. This value is in pages, so multiply by 4096 to determine the size in bytes. Anytime you see that the size (SZ) is a four-digit number, that is relatively large, so it is one to watch over time to see if it continues to grow and therefore may have a memory leak.
- If you see a process called mib2agt using an excessive amount of memory: this binary has a known memory leak, fixed in an 11.X patch called PHSS_27858 (ITRC ftp site download). This patch does NOT require a reboot. Furthermore, mib2agt can be killed and then restarted with "kill mib2agt_PID" and "/usr/sbin/mib2agt". But only restart it if you need to support SNMP requests (e.g. OpenView). If not needed, it can be configured to not start at bootup by modifying /etc/rc.config.d/SnmpMib2.
- Alternatively, to look at both the virtual size as well as the actual size:
# UNIX95=1 ps -efo vsz,sz,pid,args | grep -v grep | sort -rnk 1 | more
12252  627 2745 /opt/OV/bin/ovdbrun -c /var/opt/OV/share/databases/analysis/
 9060 1214 2362 /opt/omni/lbin/rds -d
 8808 1892 2677 /opt/hpwebjet-5.5/hpwebjetd
b) For information beyond data, stack, and text usage, you can use an unsupported utility called procsize, which breaks down memory by: UAREA, TEXT, DATA, STACK, Shared Memory (SHMEM), and Memory Mapped files (MMAP).
- procsize is available for download here:
- For example, look at the breakdown of memory usage, per process:
# ./procsize -fnc | more
  pid Comm      UAREA  TEXT  DATA STACK SHMEM    IO  MMAP  Total
 2916 getty   v     4     5     6     4     0     0   349    369
 2287 prm3d   v    68     6   671   513     0     0 37212  38471
 .
 .
 .
- NOTE: numbers in the output of procsize are counts of 4K pages, so multiply by 4096 to get a byte count.
- TRICK: Here's the command to sort by the 11th column (total memory usage):
# ./procsize -fcn | sort -rnk 11 | more
To save the top 50 memory users in a file:
# ./procsize -fcn | sort -rnk 11 | head -50 >> /tmp/procsize.log
c) Can also look at memory usage, by process, with an unsupported utility called kmeminfo.
# ./kmeminfo -user
kmeminfo (3.57)
libp4 (7.124): Opening /stand/vmunix /dev/kmem
Boot time: Mon Nov 25 12:01:58 2002
Dump time: Mon Jan  6 12:24:21 2003
-----------------------------------------------------------
Summary of user processes memory usage:
Process list sorted by resident set size ...
     proc       vas  p_pid  va_rss va_prss va_ucount command
0x0ab6180 0x1de3400   3185    3895    3865      7678 mxagent
0x0abca00 0x1c1b900   1538    2051    2040      8867 rbootd
0x0ac19c0 0x1f46f00   3184    1563    1533      7207 mxrmi
0x0abccc0 0x1d55800   2434    1236    1032      5364 rds
0x0acb3c0 0x2095200  12454     919     889      7415 kmeminfo
0x0ac2d00 0x1e49800   2635     527     471      8715 ns-slapd
0x0ac1f40 0x1df5a00   2684     380     365      3371 ns-admin
0x0abc480 0x1bbb000   1476     359     277      3349 dmisp
0x0ac4300 0x1e83600   2853     318     264      4237 hpwebjetd
d) Can use glance's process list or application list. For example:
PROCESS LIST                                                     Users=    5
                           User      CPU Util        Cum    Disk          Thd
Process Name   PID   PPID  Pri Name  (100% max)      CPU    IO Rate  RSS  Cnt
--------------------------------------------------------------------------------
pax           13819 13818  148 root   2.7/ 5.8     273.3  9.4/32.8  284kb   1
glance        14464  1822  158 root   2.1/ 3.1       3.0  0.0/ 2.1  4.3mb   1
scopeux        1715     1  127 root   1.7/ 0.2     518.4  1.5/ 0.0  4.1mb   1
swapper           0     0  127 root   1.5/ 0.8    2213.0  0.3/ 0.0   16kb   1
java          10095     1  168 root   1.0/ 2.7     348.7  0.0/ 4.2 42.0mb  28
vxfsd            35     0  138 root   0.2/ 0.1     289.4  1.9/ 1.3  352kb  16
APPLICATION LIST                                                 Users=    5
                        Num  Active   CPU  AvgCPU  Logl  Phys    Res     Virt
Idx Application       Procs  Procs   Util    Util    IO    IO    Mem      Mem
--------------------------------------------------------------------------------
  1 other                 2      0    0.0     0.0   0.0   0.0  804kb   19.3mb
  2 network              55      5    0.4     0.3   0.0   0.0 12.1mb   35.4mb
  3 memory_management     3      3    1.6     1.8   0.0   1.1   96kb    376kb
  4 other_user_root     101     34   52.3    43.1  60.9  65.2 109.9mb 614.0mb
- A trial version of Glance is available on the application CDs (usually on CD #2 or #3).
- Glance is not available for download.
- Glance product #'s:
  - 11.x s700: B3691AA B3699AA
    - Trial version: B3691AA_TRY B3699AA_TRY
  - 11.x s800: B3693AA B3701AA
    - Trial version: B3693AA_TRY B3701AA_TRY
- 2) shared memory as allocated with the shmget(2) system call.
  - a) Can look at shared memory usage with ipcs(1). For example:
# ipcs -mpb | more
IPC status from /dev/kmem as of Wed Mar 3 07:39:51 2004
T     ID     KEY        MODE       OWNER   GROUP     SEGSZ  CPID  LPID
Shared Memory:
m      0 0x41200007 --rw-rw-rw-    root    root        348   636   636
m      1 0x4e000002 --rw-rw-rw-    root    root      61760   636   638
m      2 0x41241878 --rw-rw-rw-    root    root       8192   636   638
m      3 0x000024ef --rw-rw-rw-    root    root       7712  1143  1137
m      4 0x30205f0d --rw-rw-rw-    root    root    1048576  1184  1226
m   1605 0x0c6629c9 --rw-r-----    root    root   19059552  1823 13457
m    606 0x49180013 --rw-r--r--    root    root      22908  1804  1903
m      7 0x06347849 --rw-rw-rw-    root    root      77384  1823  1903
m   7208 0x5e1c019c --rw-------    root    sys         512 19627 19627
m   3409 0x00000000 D-rw-------    root    root     213272  2198  2198
m     10 0x011c0082 --rw-------    www     other    100000  2203  2204
- TRICKS:
  - To total the shared memory usage, run:
# ipcs -mpb | sed -n '/^m/p' | \
  awk '{total+=$(NF-2)} END {printf("%d\n", total)}'
  - If the total is at or near 1.75 Gb or 2.75 Gb, then address it as a 32-bit limitation issue.
  - To find the processes, if still running, that last touched (LPID) the shared memory segments:
# ps -ef | `ipcs -mpb | sed -n '/^m/p' | \
  awk '{printf("%s ", $NF)} END {printf("\n")}' | \
  sed 's/ /\|/g' | sed 's/\|$//' | \
  awk '{printf("egrep -e %s\n", $0)}' | \
  sed 's/ \-e / \-e \"/' | sed 's/$/\"/'`
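The one-liner above works but is hard to follow. Below is a more readable sketch of the same idea (collect the LPIDs, then look each one up); the here-document stands in for live 'ipcs -mpb' output so the logic can be tried off an HP-UX box:

```shell
# Pull the LPID column (last field of each "m" line) from ipcs-style text.
lpids=$(awk '/^m/ { print $NF }' <<'EOF'
m  0 0x41200007 --rw-rw-rw- root root   348  636  636
m  1 0x4e000002 --rw-rw-rw- root root 61760  636  638
EOF
)
for pid in $lpids; do
    # -p selects by PID; output is empty if that process has exited.
    ps -p "$pid" -o pid= -o args= 2>/dev/null
done
```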
- b) Can look at shared memory usage with an unsupported tool called shminfo.
  - shminfo is available for download here:
    System: hprc.external.hp.com (192.170.19.51)
    Login: eh
    Password: spear9
    ftp://eh:spear9@hprc.external.hp.com/shminfo.README.txt
    ftp://eh:spear9@hprc.external.hp.com/shminfo.sh
# ./shminfo
Shared space from Window id 0 (global):
Space      Start                    End              Kbytes  Usage
Q2    0x00006fea.0x40000000-0x7fff0000              1048512  FREE
Q3    0x00000000.0x80000000-0x80001000                    4  SHMEM id=0
Q3    0x00000000.0x80001000-0x80002000                    4  OTHER
Q3    0x00000000.0x80002000-0x80102000                 1024  SHMEM id=201
Q3    0x00000000.0x80102000-0x81202000                17408  OTHER
Q3    0x00000000.0x81202000-0x8121b000                  100  SHMEM id=3602
Q3    0x00000000.0x8121b000-0x81eea000                13116  FREE
Q3    0x00000000.0x81eea000-0x81efd000                   76  SHMEM id=3
Q3    0x00000000.0x81efd000-0x82df0000                15308  OTHER
Q3    0x00000000.0x82df0000-0x82df6000                   24  SHMEM id=4004
Q3    0x00000000.0x82df6000-0x83aa6000                12992  OTHER
# ./shminfo -64bit
libp4 (7.91): Opening /stand/vmunix /dev/kmem
Loading symbols from /stand/vmunix
shminfo (3.7)
Global 64-bit shared quadrants:
===============================
Space      Start                                      End           Kbytes  Usage
Q1    0x09957000.0x0000000000000000-0x000003ffffffffff  4294967296  FREE
Q4    0x08343400.0xc000000000000000-0xc00003ffffffffff  4294967296  FREE
(note: SHMEM indicates shared memory and OTHER means memory mapped files.)
- TRICK:
  - If a particular shared memory segment is of interest, and you want to know which processes are attached to that shared memory segment, you can use shminfo -s id (where id is the shared memory identifier):
# ./shminfo -s 8010
libp4 (7.91): Opening /stand/vmunix /dev/kmem
Loading symbols from /stand/vmunix
shminfo (3.7)
Shmid 8010:
struct shmid_ds at 0xc84c10
Pseudo vas at 0x49d0ca80
Pseudo pregion at 0x4c2b6200
Shared region at 0x4c2b5ac0
Segment at 0x12f2400.0xc33ba000
Segment allocated out of "Global 32-bit quadrant 4"
Processes using this segment:
proc=0x4c19f040 (pid 3097 "httpd"): vas=0x49d0cd00, SHMEM preg=0x4c3062c0
proc=0x48d87040 (pid 3094 "httpd"): vas=0x49d0cbc0, SHMEM preg=0x4c2e2840
proc=0x49e3b040 (pid 3089 "httpd"): vas=0x4c262680, SHMEM preg=0x4c2baec0
- c) Can look at shared memory usage by process with an unsupported utility called procsize, which breaks down memory by various types, specifically: UAREA, TEXT, DATA, STACK, Shared Memory, and Memory Mapped files.
  - For example, look at the breakdown of memory usage, per process:
# ./procsize -fnc | more
  pid Comm      UAREA  TEXT  DATA STACK SHMEM    IO  MMAP  Total
 2916 getty   v     4     5     6     4     0     0   349    369
 2287 prm3d   v    68     6   671   513     0     0 37212  38471
 .
 .
 .
- TRICK: Here's the command to sort the processes by total memory usage, most to least:
# ./procsize -fcn | sort -rnk 11 | more
- TRICK: Here's the command to sort the processes by shared memory usage, most to least:
# ./procsize -fcn | sort -rnk 8 | more
- 3) memory mapped files as allocated with the mmap(2) system call.
  - a) There are no system commands that will report memory mapped file usage.
  - b) Can use an unsupported utility called shminfo to see use of memory mapped files. In the output of shminfo, memory mapped files are shown as "OTHER". See above for examples and the download site for shminfo.
  - c) Can use procsize to see which processes use memory mapped files. In the output of procsize, memory mapped files are shown under the "MMAP" column. See above for examples and the download site for procsize.
    - TRICK: Use the following to sort by the MMAP column:
# ./procsize -fcn | sort -rnk 10 | more
C. OS Memory Leaks/Hogs
- 1. Check for known OS memory leaks with an unsupported utility called kmeminfo. The Response Center can assist with analyzing the output of kmeminfo.
- 2. Checking for known memory hogs:
  - The JFS inode cache is sized by the vx_ninode kernel parameter. The default value of vx_ninode is determined by the size of RAM and is different for 11.00 vs. 11.11.
- For example:
  - On an 11.11 system with 8GB of memory, vx_ninode defaults to 256,000.
  - On an 11.0 system with 8GB of memory, vx_ninode defaults to 144,000.
- For most situations, a smaller value for vx_ninode is reasonable, say 20,000 for example. Lowering vx_ninode results in a large savings of memory.
- To see the size of the JFS (3.3 and above) inode cache:
# echo "vxfs_ninode/D" | adb -k /stand/vmunix /dev/mem
- To see how many JFS (3.3 and above) inodes are currently cached:
# echo "vx_cur_inodes/D" | adb -k /stand/vmunix /dev/mem
- To gauge* the size of a system's JFS inode cache by looking at the output of kmeminfo, use the following table to know which bucket/arena the JFS inode cache uses:

OS                  JFS version   arena/bucket*
11.11               3.5           vx_icache_arena
11.11               3.3           M_TEMP
11.00 32-bit        3.1           bucket[10]
11.00 64-bit        3.1           bucket[11]
11.00 32-bit/64-bit 3.3           bucket[10]

* NOTE: The JFS inode cache is one of the consumers of this bucket/arena.
D. Application Memory Leaks
- Use a tool to capture a baseline of memory use per process, then gather subsequent reports to see if there is a steady increase in memory use.
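One minimal way to implement the baseline-and-compare idea is sketched below with canned snapshots; on a live system each file would instead come from something like "UNIX95= ps -eo vsz,pid,args | sort -rn", and the /tmp paths are arbitrary choices:

```shell
# Baseline snapshot (vsz, pid, command) -- sample data for illustration.
cat > /tmp/mem.base <<'EOF'
26332 1685 ./idsagent
4068 1487 /opt/perf/bin/scopeux
EOF
# Later snapshot of the same format.
cat > /tmp/mem.now <<'EOF'
31000 1685 ./idsagent
4068 1487 /opt/perf/bin/scopeux
EOF
# First pass (NR==FNR) stores baseline sizes by PID; second pass
# prints any PID whose size grew between the two captures.
grown=$(awk 'NR == FNR { base[$2] = $1; next }
     ($2 in base) && $1 > base[$2] { print $2, base[$2] "->" $1, $3 }' \
    /tmp/mem.base /tmp/mem.now)
echo "$grown"
```

A steady climb across several such comparisons, rather than one jump, is what suggests a leak.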
- To check for 3rd party memory leaks, try Purify(TM) to troubleshoot this problem. NOTE: this is not an endorsement of Purify.
http://www.rational.com
E. 32-bit memory limitation, General Information
- For 32-bit applications, the maximum size of a single shared memory segment a process can attach is limited to 1 Gb (shmmax <= 0x40000000). The first 1 Gb resides in quadrant 3, and quadrant 4 has only 0.75 Gb reserved for shared memory and memory mapped files. So the total addressable by the default executable type is 1.75 Gb. An executable type SHMEM_MAGIC has been defined which adds the use of quadrant 2 for shared memory and memory mapped files. This additional 1 Gb results in a system-wide maximum of 2.75 Gb of shared memory and memory mapped file address space. See below for more details about SHMEM_MAGIC. (Note: for more on the 32-bit memory limitation, see "Understanding Shared Memory on PA-RISC Systems", ITRC Doc id RCMEMKBAN00000027.)
- If you see "out of memory" or "not enough space" when running an application, and there is plenty of free swap space, then the application may be requesting shared memory or may be mapping files to memory (with the shmget() and mmap() system calls respectively), and the problem may be due to 32-bit memory limitation/contention.
- Q: Is the application 32-bit or 64-bit?
- A: To determine if an application binary is 32-bit or 64-bit, use file(1):
# file /usr/bin/ksh
/usr/bin/ksh: PA-RISC1.1 shared executable dynamically linked
# file /stand/vmunix
/stand/vmunix: ELF-64 executable object file - PA-RISC 2.0 (LP64)
F. SHMEM_MAGIC
- SHMEM_MAGIC executables can address 2.75 Gb.
- For more details than explained below (about SHARE_MAGIC, EXEC_MAGIC, and SHMEM_MAGIC), see ITRC doc id rcfaxmemory001 ("Shared_magic Explained").
- SUMMARY
  - To get SHMEM_MAGIC, the executable needs to have been previously linked with EXEC_MAGIC ("ld -N") and can then be chatr'ed to get SHMEM_MAGIC (i.e. with the "chatr -M" option).
  - Check with the vendor's application support to see if their application supports SHMEM_MAGIC.
  - PATCHES: Unlike 11.0, which doesn't require patches for SHMEM_MAGIC, 10.20 needs patches. The 10.20 patches are:
    - PHKL_16750 (for s700) and PHKL_16751 (for s800)
      - These are LITS (Line-In-The-Sand) patches and will never be superseded.
    - PHSS_21110 (linker/ld patch)
      - Note: PHSS_21110 may be superseded. Please check for the latest patches at the IT Resource Center (ITRC) at the following web site: http://www.itrc.hp.com/
- Q: Is the application set up with SHMEM_MAGIC?
- A: To determine if a 32-bit application is set up with SHMEM_MAGIC, or is capable of SHMEM_MAGIC, use chatr(1):
# chatr /usr/bin/bdf | grep -i executable
         shared executable
         executable from stack: D (default)
# chatr /opt/oracle/bin/orasrv | grep executable
         normal SHMEM_MAGIC executable
         executable from stack: D (default)

Executable Type as Reported By chatr | Magic Type  | Capabilities
shared executable                    | SHARE_MAGIC | can ONLY address 1.75 Gb
normal executable                    | EXEC_MAGIC  | can be chatr'd (with -M option) to obtain SHMEM_MAGIC
normal SHMEM_MAGIC executable        | SHMEM_MAGIC | can address 2.75 Gb
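The table above maps chatr's wording to magic types; that mapping can be sketched as a small case statement. The sample chatr line is hard-coded here rather than taken from a live binary:

```shell
# First matching "executable" line from chatr output (assumed sample).
line="normal SHMEM_MAGIC executable"
case "$line" in
    *SHMEM_MAGIC*) magic="SHMEM_MAGIC (up to 2.75 Gb)" ;;
    shared*)       magic="SHARE_MAGIC (up to 1.75 Gb)" ;;
    normal*)       magic="EXEC_MAGIC (chatr -M can convert to SHMEM_MAGIC)" ;;
    *)             magic="unknown" ;;
esac
echo "$magic"
```

On HP-UX, `line` would come from `chatr binary | grep executable | head -1`.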
G. How much data space can an application get?
- Normally, 32-bit apps, as allowed by maxdsiz, can get up to ~940 Mb of data space, unless they have been linked with EXEC_MAGIC, in which case they can get upwards of 1.9 Gb of data space.
- Alternatively, chatr(1) may be used to make the third quadrant private for data space, thus enabling an additional 1 Gb quadrant to be used for data space. For example:
  chatr +q3p enable executable_name
- To determine what the executable is capable of, run:
# chatr executable_name
  - If it shows "shared executable", then it can only get to about 940 Mb.
  - If it shows "EXEC_MAGIC", then it can get upwards of 1.9 Gb of data space.
- maxdsiz: maximum size (in bytes) of the data segment/space for any user process.
H. Memory Windows
- There is a Memory Windows White Paper in PDF format; it is also available in an ASCII text version: /usr/share/doc/mem_wndws.txt
- There is a Memory Windows summary in the 11i release notes.
- Memory Windows are NOT supported by:
  - OpenView (see the non-support statement for HP-UX 11 Memory Windows in the OVO/UNIX 7.1 Release Notes.)
- Memory Windows are supported (contact vendor for details) by:
- Overview: Running without memory windows, HP-UX has limitations for shared resources on 32-bit applications. All applications in the system are limited to a total of 1.75GB of shared memory, 2.75GB if compiled as SHMEM_MAGIC. In a system with 16GB of physical memory, only 1.75GB can be used for shared resources! To address this limitation, a functional change was made (Memory Windows was introduced by patches at 11.0) to allow 32-bit processes to create unique memory windows for shared objects like shared memory. This allows cooperating applications to create 1GB of shared resources without exhausting the system-wide resource. Part of the virtual address space remains globally visible to all processes, so that shared libraries are accessible no matter what memory window they are in. The following customer-visible changes have been made for memory windows:
  - New kernel tunable, max_mem_window(5), to configure the number of memory windows a system can have.
  - New set of commands and files and their associated man pages.
- GOTCHA: The default (SHARE_MAGIC) executable's maximum size memory window is 1 gigabyte. Any consumption beyond 1 gigabyte consumes space from the 4th quadrant, which is shared across *ALL* processes in the system. This is important: any application within a memory window that uses more than 1 gigabyte of shared memory consumes quadrant 4 resources that are shared by all processes, no matter what memory window they occupy.
- Check with the vendor's application support to see if they support Memory Windows.
- Patches: Memory Windows was originally introduced with 11.0 patches PHKL_13810 (kernel) and PHCO_13811 (commands).
  - The current (at the time of writing) patches are PHKL_18543 and PHCO_23705.
  - 11.11 (11i) does not need patches for Memory Windows.
- Is the system already configured for Memory Windows?
  - Either test with setmemwindow(1M), for example:
# setmemwindow date
    - Memory Windows is not configured if you get nothing from the date(1) command (i.e. nothing comes back), or if you get an error like this:
Error(12), unable to set memory window(-1)
  - Or check whether the kernel parameter max_mem_window has been set, with:
# grep max_mem_window /stand/system
  - NOTE: If you do NOT see max_mem_window(5) as a configurable kernel parameter in SAM, then you can install the latest 11.0 SAM patch (SAM was first made aware of max_mem_window with PHCO_21187), or you can add max_mem_window to the system file manually and then generate a new kernel.
# ./memwin_stats -w
Entry  USER_KEY  KERN_KEY  QUAD2_AVAIL  QUAD3_AVAIL  PID  REFCNT
Memory Windows:
0      Global    0         262144       262144       0    357
1      Private   1         0            0            0    1
# ./memwin_stats -m
Shared Memory:
T     ID     KEY        MODE       OWNER  GROUP     UserKey     KernId
m      0 0x41200007 --rw-rw-rw-    root   root   2139031040 2139031040
m      1 0x4e000002 --rw-rw-rw-    root   root   2139031040 2139031040
m      2 0x41241878 --rw-rw-rw-    root   root   2139031040 2139031040
m      3 0x000024ef --rw-rw-rw-    root   root   2139031040 2139031040
m      4 0x30205f0d --rw-rw-rw-    root   root   2139031040 2139031040
m   1605 0x0c6629c9 --rw-r-----    root   root   2139031040 2139031040
m    606 0x49180013 --rw-r--r--    root   root   2139031040 2139031040
m      7 0x06347849 --rw-rw-rw-    root   root   2139031040 2139031040
m   7208 0x5e1c019c --rw-------    root   sys    2139031040 2139031040
m   3409 0x00000000 D-rw-------    root   root   2139031040 2139031040
m     10 0x011c0082 --rw-------    www    other  2139031040 2139031040
# ./memwin_stats -p 1226
Process Id (1226)
User Key:  -1
Kernel Id:  0
- Alternative ps command: you can use the UNIX95 options to look at both the virtual size as well as the actual size.
- Run:
# UNIX95=1 ps -efo vsz,sz,pid,args | grep -v grep | sort -rnk 1 | more
- For example:
# UNIX95=1 ps -efo vsz,sz,pid,args | grep -v grep | sort -rnk 1 | more
  VSZ   SZ  PID COMMAND
12252  627 2745 /opt/OV/bin/ovdbrun -c /var/opt/OV/share/databases/analysis/
 9060 1214 2362 /opt/omni/lbin/rds -d
 8808 1892 2677 /opt/hpwebjet-5.5/hpwebjetd
I. Memory Usage from "physmem", "swapinfo", "top", and "glance"
- How do I understand/resolve the differing results about memory usage from "physmem", "swapinfo", "top", and "glance"?
- Physical Memory
  - Can use dmesg to report physical memory (RAM) info. For example:
# dmesg | grep Phys
    Physical: 212992 Kbytes, lockable: 152792 Kbytes, available: 178188 Kbytes
  - Note:
    - This example system has 212992 Kbytes (about 208 Mb) of physical memory.
    - Lockable memory is used for:
      - Process images and overhead locked using the plock() system call (see HP-UX Reference entry plock(2)).
      - Shared memory segments locked with the SHM_LOCK command of the shmctl() system call (see HP-UX Reference entry shmctl(2)).
      - Miscellaneous dynamic kernel data structures used by the shared memory system and some drivers.
  - Can also report physical memory (RAM) size with adb:
11.x # echo phys_mem_pages/D | adb /stand/vmunix /dev/kmem
physmem:
physmem:        524288
10.x # echo physmem/D | adb -k /stand/vmunix /dev/kmem
physmem:
physmem:        524288
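Converting the dmesg "Physical" figure to Mb is simple arithmetic; a sketch using a sample line standing in for live "dmesg | grep Phys" output:

```shell
# Extract the Kbytes figure and divide by 1024 for Mb.
line="Physical: 212992 Kbytes, lockable: 152792 Kbytes, available: 178188 Kbytes"
kb=$(echo "$line" | sed 's/Physical: \([0-9]*\) Kbytes.*/\1/')
echo "physical memory: $((kb / 1024)) Mb"   # 208 Mb
```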
# swapinfo -tam
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE PRI  NAME
dev         288      59     229   20%       0       -   1  /dev/vg00/lvol2
reserve       -     146    -146
memory      102      45      57   44%
total       390     250     140   64%       -       0   -
- The "memory" line in the output of swapinfo is NOT physical memory; rather, it is pseudo-swap, which is calculated to be 75% the size of RAM.
System: opie1                                    Sat Dec 18 22:10:07 2004
Load averages: 3.39, 4.42, 4.54
193 processes: 155 sleeping, 38 running
Cpu states:
 LOAD   USER   NICE    SYS   IDLE  BLOCK  SWAIT   INTR   SSYS
 3.39   0.0%   0.0%   0.0% 100.0%   0.0%   0.0%   0.0%   0.0%
Memory: 515216K (441660K) real, 1537516K (1434024K) virtual, 1365132K free
           ^        ^                ^         ^                  ^
           1        2                3         4                  5
- The Memory line is not all of physical memory. The numbered fields are:
  1. Total physical memory in the system DEDICATED to text, data, or stack segments for all processes on the system.
  2. Total physical memory for runnable processes, as opposed to sleeping processes.
  3. Total memory dedicated to text, data, or stack segments for all processes on the system. Some of this is paged out to disk (that is, not all of it is in physical memory at the moment).
  4. Total memory for runnable processes, as opposed to sleeping or stopped processes.
  5. Physical memory the system considers to be unused and available to new processes. When this value is low, swapping is likely to occur.
- For more about top, see the top(1) man page. For example:
TTY    PID USERNAME PRI NI  SIZE   RES  STATE    TIME  %WCPU  %CPU COMMAND
?     2034 root     154 20 3936K 1084K  sleep 1066:16  14.88 14.85 X
?    18768 root     154 30  312K  724K  sleep   10:29   6.98  6.97 dtscreen
?     1818 root     152 20 3080K 1460K  run     18:29   1.13  1.13 ns-admin
CPU       Processor number on which the process is executing (only on multi-processor systems).
TTY       Terminal interface used by the process.
PID       Process ID number.
USERNAME  Name of the owner of the process. When the -u option is specified, the user ID (uid) is displayed instead of USERNAME.
PRI       Current priority of the process.
NI        Nice value ranging from -20 to +20.
SIZE      Total size of the process in kilobytes. This includes text, data, and stack.
RES       Resident size of the process in kilobytes. The resident size information is, at best, an approximate value.
STATE     Current state of the process. The various states are sleep, wait, run, idl, zomb, or stop.
TIME      Number of system and CPU seconds the process has consumed.
%WCPU     Weighted CPU (central processing unit) percentage.
%CPU      Raw CPU percentage. This field is used to sort the top processes.
COMMAND   Name of the command the process is currently running.
- SAM
  - For SAM's System Properties, see "Help" for descriptions of the values shown, which are similar to the descriptions given above for top(1).
============================================================================
SAM Areas: Performance Monitors: System Properties -> Memory screen:
============================================================================
System Properties (opie1)
  [ Processor | Memory | Operating System | Network | Dynamic ]
  Physical Memory:   3010.7 MB
  Real Memory:
      Active:      452098.5 KB
      Total:       516787.5 KB
  Virtual Memory:
      Active:     1450485.6 KB
      Total:      1538524.9 KB
  Free Memory Pages: 340696 at 4 KB/page
  Swap Space:
      Avail:  4096 MB
      Used:    674 MB
                                                   [ OK ]    [ Help ]
============================================================================
SAM Areas: Performance Monitors: System Properties -> Dynamic screen:
============================================================================
System Properties (opie1)
  [ Processor | Memory | Operating System | Network | Dynamic ]
  Processor:
      Active Processors:  1
  Memory:
      Real Active:      460138.1 KB
      Virtual Active:  1464187.7 KB
      Free Memory Pages: 339804 at 4 KB/page
      Swap Space:
          Used:   675 MB
          Free:  3421 MB
  Operating System:
      Unique Users Logged In:  1
                                                   [ OK ]    [ Help ]
B3690A GlancePlus C.03.05.00    10:25:25   raw   9000/735    Current  Avg  High
--------------------------------------------------------------------------------
Cpu  Util   S SN NARU          U                            |    95%  27%  95%
Disk Util                                                   |     0%   1%  19%
Mem  Util   S SU UB        B                                |    91%  91%  91%
Swap Util   U UR         R                                  |    77%  77%  77%
--------------------------------------------------------------------------------
MEMORY REPORT                                                    Users=    7
Event             Current  Cumulative  Current Rate  Cum Rate  High Rate
--------------------------------------------------------------------------------
Page Faults             1         791           0.1       4.8      164.7
Page In                 1         190           0.1       1.1       30.9
Page Out                0           1           0.0       0.0        0.1
KB Paged In          16kb       468kb           2.8       2.8      160.0
KB Paged Out          0kb         4kb           0.0       0.0        0.7
Reactivations           0           0           0.0       0.0        0.0
Deactivations           0           0           0.0       0.0        0.0
KB Deactivated        0kb         0kb           0.0       0.0        0.0
VM Reads                1          31           0.1       0.1       10.5
VM Writes               0           1           0.0       0.0        0.1

Total VM : 121.1mb   Sys Mem  : 13.8mb   User Mem: 91.3mb   Phys Mem: 144.0mb
Active VM:  73.7mb   Buf Cache: 26.4mb   Free Mem: 12.6mb
- Total VM: The total private virtual memory (in KBs unless otherwise
  specified) at the end of the interval. This is the sum of the virtual
  allocation of private data and stack regions for all processes.
- Active VM: The total virtual memory (in KBs unless otherwise specified)
  allocated for processes that are currently on the run queue or that have
  executed recently. This is the sum of the virtual memory sizes of the
  data and stack regions for these processes.
- Sys Mem: The amount of physical memory (in KBs unless otherwise
  specified) used by the system (kernel) during the interval. System memory
  does not include the buffer cache.
  - On HP-UX 10.20 and 11.0, this metric does not include some kinds of
    dynamically allocated kernel memory, which has always been reported in
    the GBL_MEM_USER* metrics.
  - On HP-UX 11i and beyond, this metric does include some kinds of
    dynamically allocated kernel memory.
- Buf Cache: The amount of physical memory (in KBs unless otherwise
  specified) used by the buffer cache during the interval. The buffer cache
  is a memory pool used by the system to stage disk I/O data for the driver.
- User Mem: The amount of physical memory (in KBs unless otherwise
  specified) allocated to user code and data at the end of the interval.
  User memory regions include code, heap, stack, and other data areas,
  including shared memory. This does not include memory for the buffer
  cache.
  - On HP-UX 10.20 and 11.0, this metric does include some kinds of
    dynamically allocated kernel memory.
  - On HP-UX 11i and beyond, this metric does not include some kinds of
    dynamically allocated kernel memory, which is now reported in the
    GBL_MEM_SYS* metrics.
  - Large fluctuations in this metric can be caused by programs which
    allocate large amounts of memory and then either release the memory or
    terminate. A slow, continual increase in this metric may indicate a
    program with a memory leak.
- Free Mem: The amount of memory not allocated (in KBs unless otherwise
  specified). As this value drops, the likelihood increases that swapping
  or paging out to disk will occur to satisfy new memory requests.
- Phys Mem: The amount of physical memory in the system (in KBs unless
  otherwise specified). Banks with bad memory are not counted.
  - Note that on some machines, the Processor Dependent Code (PDC) uses the
    upper 1MB of memory, and thus this metric reports less than the actual
    physical memory of the system. Thus, on a system with 256MB of physical
    memory, this metric and dmesg(1M) might only report 267,386,880 bytes
    (255MB). This is all the physical memory that software on the machine
    can access.
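The four physical-memory metrics (Sys Mem, User Mem, Buf Cache, Free Mem) partition Phys Mem, so a quick sanity check of a Glance memory report is to add them up. A minimal sketch using the figures from the example report above (awk is used only for portable floating-point arithmetic; the numbers are from this example system):

```shell
# Cross-check the Glance memory accounting from the report above:
# Sys Mem + User Mem + Buf Cache + Free Mem should roughly equal Phys Mem.
awk 'BEGIN {
  sys = 13.8; user = 91.3; buf = 26.4; freemem = 12.6; phys = 144.0  # MB
  sum = sys + user + buf + freemem
  printf "sum=%.1fmb phys=%.1fmb diff=%.1fmb\n", sum, phys, sum - phys
}'
# -> sum=144.1mb phys=144.0mb diff=0.1mb
```

The 0.1 MB discrepancy is just rounding in the displayed values; a large gap would suggest a misread metric.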
J. Troubleshooting Examples: "Not enough space", "out of memory", "Not
enough core" (a.k.a. HP-UX errno 12, ENOMEM)
- "call to mmap failed" when accompanied by "not enough space":
  - If there is no lack of available swap, then the cause is typically
    contention for and/or fragmentation of the 32-bit address space used by
    shared memory and memory-mapped files, which can be viewed using an
    unsupported utility called shminfo (example download).
- "Not enough space" (examples), "out of memory", or "Not enough core"
  (a.k.a. HP-UX errno 12, ENOMEM)
  - 1.) If the app/db is requesting shared memory, and the amount requested
    is more than the value of the shmmax(5) kernel parameter, then shmmax
    needs to be increased.
    - If it is determined that shmmax is not causing the failure, then the
      requested amount of shared memory could not be obtained due to lack
      of the requested amount of contiguous memory (i.e. memory is
      fragmented).
  - 2.) Whether or not the app/db is using shared memory, the problem may
    be caused by not enough free swap space. So, check to see if there is
    enough swap: use 'swapinfo -tm' and see how much total free space there
    is.
  - 3.) If the problem is with neither shmmax nor swap, then the cause is
    most likely either the data/stack kernel parameters OR shared memory
    configuration or contention/fragmentation.
    - To see if the problem is due to the data/stack kernel parameters, and
      to determine which one, you can use tusc to trace the system calls,
      see which system call is failing, and see the ERRNO.
    - a.) If malloc() is failing (its system call equivalent in tusc output
      is called "brk"), then check the data/stack kernel parameters...
      - The data/stack kernel parameters will halt processes when their
        stack or data grows near (or attempts to pass) the maximum defined
        by maxssiz and maxdsiz (respectively.)
        - NOTE: Prior to HP-UX 11.23 (a.k.a. HP-UX 11i Version 1.6), these
          kernel parameters (maxssiz and maxdsiz) are static, so changes to
          them require a reboot.
      - Normally, 32-bit apps can only get to ~940 MB of data space. 32-bit
        apps can get upwards of 1.9 GB of data space if the executable is
        compiled with EXEC_MAGIC (ld -N) or if chatr is used to enable
        third quadrant private (chatr +q3p enable executable_name). To
        determine what the executable is capable of, use 'chatr
        executable_name'; if it shows as 'shared executable', then it can
        only get to about 940 MB.
      - This is harder to determine and may need trial and error,
        increasing maxssiz and/or maxdsiz until the error stops.
        - If the process takes long enough to fail, then you can monitor
          its stack and data usage with procsize.
        - The default for maxssiz is 8 MB and the default for maxdsiz is
          64 MB; they may need to be doubled, tripled, or quadrupled to
          resolve the problem (i.e. unless the vendor recommends/knows good
          values, use trial-and-error.)
    - b.) If mmap() or shmget() is failing, then check for shared memory
      configuration or contention/fragmentation...
      - shared memory configuration or contention/fragmentation
        - Typically this is seen where other applications/dbs are using up
          shared memory to the extent that there is not any more left.
        - The options are, either:
          - reboot (which will defragment this memory),
          - or reduce memory use (by lowering the amount requested by the
            apps/dbs),
          - or temporarily shut down apps/dbs that are using the 32-bit
            address space,
          - or use Memory Windows (Memory Windows allows 32-bit processes
            to create private/unique memory windows for shared objects like
            shared memory.)
        - To view the existing memory usage, including fragmentation and
          the largest FREE memory segment, use shminfo.
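The swap check in step 2 above can be scripted by pulling the PCT USED field from the total line of 'swapinfo -tm'. This is a sketch, not a supported tool: the here-string reuses the sample swapinfo output from earlier in this document, and on a live HP-UX system you would pipe the real command output into the same awk.

```shell
# Extract the "total" PCT USED from swapinfo -tm output and flag
# near-exhaustion (fork()/shmget() start failing as this nears 100%).
# Sample text stands in for:  swapinfo -tm | awk ...
swapinfo_out='             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev         288      83     205   29%       0       -    1  /dev/vg00/lvol2
reserve       -     141    -141
memory      102      41      61   40%
total       390     265     125   68%       -       0    -'
pct=$(printf '%s\n' "$swapinfo_out" | awk '$1 == "total" { sub(/%/, "", $5); print $5 }')
echo "total swap reserved: ${pct}%"
if [ "$pct" -ge 90 ]; then
  echo "WARNING: swap nearly exhausted; expect ENOMEM (errno 12) failures"
fi
# -> total swap reserved: 68%
```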
- "Not enough space" examples:
  - /usr/lib/dld.sl:Call to mmap() failed - ZEROES /usr/lib/libdce.1
    /usr/lib/dld.sl:Not enough space
    /usr/sbin/sam[221]: 1067 Abort(coredump)
    - From attempting to run sam in TUI (text) mode with swap 99% used.
  - OBAM INTERNAL ERROR: Cannot fork: Not enough space
    sam: Error: The cpp(1) command failed on file: /usr/sam/lib/C/fal.ui.
    - From attempting to run sam in GUI (graphic) mode with swap 99% used.
  - sam: FATAL ERROR: Unable to load library
    "/usr/obam/lib/libIDMawrt.1": Not enough space
    - From attempting to run sam in TUI (text) mode with swap 99% used.
  - /usr/lib/dld.sl:Call to mmap() failed - BSS /usr/lib/libnsl.1
    /usr/lib/dld.sl:Not enough space
    sh: 2885 Abort(coredump)
    - Seen during login [as the root user] when the system had swap at 99%
      used.
Summary - Memory Reporting
- Download memory.tar.Z from the following ftp site:
  System: hprc.external.hp.com (192.170.19.51)
  Login: eh
  Password: spear9
  ftp://eh:spear9@hprc.external.hp.com/
- Extract it into a directory of your choosing (e.g. /tmp). For example:
  # uncompress memory.tar.Z
  # tar xvf memory.tar
- From the directory that you chose (e.g. /tmp), run the memory script at
  least twice: once after reboot (as a baseline), and then again when the
  memory issue is evident. Also run the memory script periodically (after
  at least a couple of hours, or maybe even days later). For example:
  # ./memory
  Running........Done.
  Memory Report file is: /tmp/memory.03231400.txt
  #
- Look for where all of the memory is being used. Work to understand the
  memory use, or reduce the memory use by applications/OS/databases.
  Compare the results of running these when the memory issue is evident
  *and* after a reboot (as a baseline.)
- Memory Report Examples (including output of swapinfo, ps, procsize,
  shminfo, and kmeminfo):
References -
- Documents
- Tools - Unsupported
- White Papers
  - HP-UX Memory Management white paper
    - On 11.00 and prior, version 1.3 is in /usr/share/doc, called
      mem_mgt.txt or mem_mgt.ps
# ./shminfo | more
libp4 (7.91): Opening /stand/vmunix /dev/kmem
Loading symbols from /stand/vmunix
shminfo (3.7)

Global 32-bit shared quadrants:
===============================
Space                Start        End       Kbytes  Usage
Q4  0x063a7c00.0xc0000000-0xc0005fff      24  OTHER
Q4  0x063a7c00.0xc0006000-0xc0006fff       4  SHMEM id=0
Q4  0x063a7c00.0xc0007000-0xc000dfff      28  OTHER
Q4  0x063a7c00.0xc000e000-0xc000ffff       8  SHMEM id=2
Q4  0x063a7c00.0xc0010000-0xc0291fff    2568  OTHER
Q4  0x063a7c00.0xc0292000-0xc0299fff      32  SHMEM id=1 locked
Q4  0x063a7c00.0xc029a000-0xc0309fff     448  OTHER
Q4  0x063a7c00.0xc030a000-0xc030ffff      24  SHMEM id=405
Q4  0x063a7c00.0xc0310000-0xc03aafff     620  OTHER
Q4  0x063a7c00.0xc03ab000-0xc03abfff       4  FREE
Q4  0x063a7c00.0xc03ac000-0xc03ddfff     200  OTHER
Q4  0x063a7c00.0xc03de000-0xc03dffff       8  FREE
Q4  0x063a7c00.0xc03e0000-0xc03f9fff     104  OTHER
Q4  0x063a7c00.0xc03fa000-0xc03fbfff       8  FREE
Q4  0x063a7c00.0xc03fc000-0xc07cdfff    3912  OTHER
Q4  0x063a7c00.0xc07ce000-0xc07cffff       8  FREE
Q4  0x063a7c00.0xc07d0000-0xc07e1fff      72  OTHER
Q4  0x063a7c00.0xc07e2000-0xc07e3fff       8  FREE
Q4  0x063a7c00.0xc07e4000-0xc07e8fff      20  OTHER
Q4  0x063a7c00.0xc07e9000-0xc07effff      28  FREE
Q4  0x063a7c00.0xc07f0000-0xc086efff     508  OTHER
Q4  0x063a7c00.0xc086f000-0xc086ffff       4  FREE
Q4  0x063a7c00.0xc0870000-0xc08a2fff     204  OTHER
Q4  0x063a7c00.0xc08a3000-0xc08a3fff       4  FREE
Q4  0x063a7c00.0xc08a4000-0xc08aefff      44  OTHER
Q4  0x063a7c00.0xc08af000-0xc08b3fff      20  FREE
Q4  0x063a7c00.0xc08b4000-0xc08bbfff      32  OTHER
Q4  0x063a7c00.0xc08bc000-0xc08bffff      16  FREE
Q4  0x063a7c00.0xc08c0000-0xc09aafff     940  OTHER
Q4  0x063a7c00.0xc09ab000-0xc09abfff       4  FREE
Q4  0x063a7c00.0xc09ac000-0xc09b0fff      20  OTHER
Q4  0x063a7c00.0xc09b1000-0xc09b3fff      12  FREE
Q4  0x063a7c00.0xc09b4000-0xc09b9fff      24  OTHER
Q4  0x063a7c00.0xc09ba000-0xc09bffff      24  FREE
Q4  0x063a7c00.0xc09c0000-0xc0a12fff     332  OTHER
Q4  0x063a7c00.0xc0a13000-0xc0a13fff       4  FREE
Q4  0x063a7c00.0xc0a14000-0xc0a1cfff      36  OTHER
Q4  0x063a7c00.0xc0a1d000-0xc0a1ffff      12  FREE
Q4  0x063a7c00.0xc0a20000-0xc0a2afff      44  OTHER
Q4  0x063a7c00.0xc0a2b000-0xc0a2bfff       4  FREE
Q4  0x063a7c00.0xc0a2c000-0xc0a35fff      40  OTHER
Q4  0x063a7c00.0xc0a36000-0xc0a37fff       8  FREE
Q4  0x063a7c00.0xc0a38000-0xc0a3efff      28  OTHER
Q4  0x063a7c00.0xc0a3f000-0xc0a3ffff       4  FREE
Q4  0x063a7c00.0xc0a40000-0xc0adefff     636  OTHER
Q4  0x063a7c00.0xc0adf000-0xc0adffff       4  FREE
Q4  0x063a7c00.0xc0ae0000-0xc0af1fff      72  OTHER
Q4  0x063a7c00.0xc0af2000-0xc0afffff      56  FREE
Q4  0x063a7c00.0xc0b00000-0xc0b21fff     136  OTHER
Q4  0x063a7c00.0xc0b22000-0xc0b23fff       8  FREE
Q4  0x063a7c00.0xc0b24000-0xc0b30fff      52  OTHER
Q4  0x063a7c00.0xc0b31000-0xc0b3ffff      60  FREE
Q4  0x063a7c00.0xc0b40000-0xc0b80fff     260  OTHER
Q4  0x063a7c00.0xc0b81000-0xc0b83fff      12  FREE
Q4  0x063a7c00.0xc0b84000-0xc0b8cfff      36  OTHER
Q4  0x063a7c00.0xc0b8d000-0xc0ba3fff      92  FREE
Q4  0x063a7c00.0xc0ba4000-0xc0bb0fff      52  OTHER
Q4  0x063a7c00.0xc0bb1000-0xc0bbffff      60  FREE
Q4  0x063a7c00.0xc0bc0000-0xc0c0bfff     304  OTHER
Q4  0x063a7c00.0xc0c0c000-0xc0c0ffff      16  FREE
Q4  0x063a7c00.0xc0c10000-0xc0c27fff      96  OTHER
Q4  0x063a7c00.0xc0c28000-0xc0c3ffff      96  FREE
Q4  0x063a7c00.0xc0c40000-0xc0c5afff     108  OTHER
Q4  0x063a7c00.0xc0c5b000-0xc0c5ffff      20  FREE
Q4  0x063a7c00.0xc0c60000-0xc0c87fff     160  OTHER
Q4  0x063a7c00.0xc0c88000-0xc0c8ffff      32  FREE
Q4  0x063a7c00.0xc0c90000-0xc0cc8fff     228  OTHER
Q4  0x063a7c00.0xc0cc9000-0xc0ccffff      28  FREE
Q4  0x063a7c00.0xc0cd0000-0xc0ce5fff      88  OTHER
Q4  0x063a7c00.0xc0ce6000-0xc0cfffff     104  FREE
Q4  0x063a7c00.0xc0d00000-0xc0e61fff    1416  OTHER
Q4  0x063a7c00.0xc0e62000-0xc0e6ffff      56  FREE
Q4  0x063a7c00.0xc0e70000-0xc0e9cfff     180  OTHER
Q4  0x063a7c00.0xc0e9d000-0xc0e9ffff      12  FREE
Q4  0x063a7c00.0xc0ea0000-0xc0eb2fff      76  SHMEM id=2203
Q4  0x063a7c00.0xc0eb3000-0xc0ebffff      52  FREE
Q4  0x063a7c00.0xc0ec0000-0xc0ed6fff      92  OTHER
Q4  0x063a7c00.0xc0ed7000-0xc0edffff      36  FREE
Q4  0x063a7c00.0xc0ee0000-0xc0efcfff     116  OTHER
Q4  0x063a7c00.0xc0efd000-0xc0efffff      12  FREE
Q4  0x063a7c00.0xc0f00000-0xc12fafff    4076  OTHER
Q4  0x063a7c00.0xc12fb000-0xc12fffff      20  FREE
Q4  0x063a7c00.0xc1300000-0xc14d5fff    1880  OTHER
Q4  0x063a7c00.0xc14d6000-0xc14dffff      40  FREE
Q4  0x063a7c00.0xc14e0000-0xc14f5fff      88  OTHER
Q4  0x063a7c00.0xc14f6000-0xc14fffff      40  FREE
Q4  0x063a7c00.0xc1500000-0xc1f92fff   10828  SHMEM id=4
Q4  0x063a7c00.0xc1f93000-0xc1f9ffff      52  FREE
Q4  0x063a7c00.0xc1fa0000-0xc1fc4fff     148  OTHER
Q4  0x063a7c00.0xc1fc5000-0xc1fcffff      44  FREE
Q4  0x063a7c00.0xc1fd0000-0xc1fe8fff     100  OTHER
Q4  0x063a7c00.0xc1fe9000-0xc1ffffff      92  FREE
Q4  0x063a7c00.0xc2000000-0xc2165fff    1432  OTHER
Q4  0x063a7c00.0xc2166000-0xc217ffff     104  FREE
Q4  0x063a7c00.0xc2180000-0xc21edfff     440  OTHER
Q4  0x063a7c00.0xc21ee000-0xc21fffff      72  FREE
Q4  0x063a7c00.0xc2200000-0xc25d1fff    3912  OTHER
Q4  0x063a7c00.0xc25d2000-0xc25fffff     184  FREE
Q4  0x063a7c00.0xc2600000-0xc2647fff     288  OTHER
Q4  0x063a7c00.0xc2648000-0xc267ffff     224  FREE
Q4  0x063a7c00.0xc2680000-0xc277cfff    1012  OTHER
Q4  0x063a7c00.0xc277d000-0xc277ffff      12  FREE
Q4  0x063a7c00.0xc2780000-0xc27b7fff     224  OTHER
Q4  0x063a7c00.0xc27b8000-0xc27bffff      32  FREE
Q4  0x063a7c00.0xc27c0000-0xc2896fff     860  OTHER
Q4  0x063a7c00.0xc2897000-0xc28bffff     164  FREE
Q4  0x063a7c00.0xc28c0000-0xc2956fff     604  OTHER
Q4  0x063a7c00.0xc2957000-0xc29f8fff     648  SHMEM id=41
Q4  0x063a7c00.0xc29f9000-0xc29fffff      28  FREE
Q4  0x063a7c00.0xc2a00000-0xc2bb8fff    1764  OTHER
Q4  0x063a7c00.0xc2bb9000-0xc2c9ffff     924  FREE
Q4  0x063a7c00.0xc2ca0000-0xc2cd3fff     208  OTHER
Q4  0x063a7c00.0xc2cd4000-0xc2efffff    2224  FREE
Q4  0x063a7c00.0xc2f00000-0xc30b6fff    1756  OTHER
Q4  0x063a7c00.0xc30b7000-0xc30bffff      36  FREE
Q4  0x063a7c00.0xc30c0000-0xc30f8fff     228  OTHER
Q4  0x063a7c00.0xc30f9000-0xc323ffff    1308  FREE
Q4  0x063a7c00.0xc3240000-0xc331afff     876  OTHER
Q4  0x063a7c00.0xc331b000-0xc462dfff   19532  SHMEM id=10006
Q4  0x063a7c00.0xc462e000-0xc5940fff   19532  SHMEM id=7
Q4  0x063a7c00.0xc5941000-0xc6c53fff   19532  SHMEM id=8
Q4  0x063a7c00.0xc6c54000-0xc7f66fff   19532  SHMEM id=9
Q4  0x063a7c00.0xc7f67000-0xc9279fff   19532  SHMEM id=10
Q4  0x063a7c00.0xc927a000-0xca58cfff   19532  SHMEM id=11
Q4  0x063a7c00.0xca58d000-0xcb89ffff   19532  SHMEM id=12
Q4  0x063a7c00.0xcb8a0000-0xccbb2fff   19532  SHMEM id=13
Q4  0x063a7c00.0xccbb3000-0xcdec5fff   19532  SHMEM id=14
Q4  0x063a7c00.0xcdec6000-0xcf1d8fff   19532  SHMEM id=15
Q4  0x063a7c00.0xcf1d9000-0xd04ebfff   19532  SHMEM id=16
Q4  0x063a7c00.0xd04ec000-0xd17fefff   19532  SHMEM id=17
Q4  0x063a7c00.0xd17ff000-0xd2b11fff   19532  SHMEM id=18
Q4  0x063a7c00.0xd2b12000-0xd3e24fff   19532  SHMEM id=19
Q4  0x063a7c00.0xd3e25000-0xd5137fff   19532  SHMEM id=20
Q4  0x063a7c00.0xd5138000-0xd8a70fff   58596  SHMEM id=21
Q4  0x063a7c00.0xd8a71000-0xdc86efff   63480  SHMEM id=22
Q4  0x063a7c00.0xdc86f000-0xe0760fff   64456  SHMEM id=23
Q4  0x063a7c00.0xe0761000-0xe4652fff   64456  SHMEM id=24
Q4  0x063a7c00.0xe4653000-0xe7f8bfff   58596  SHMEM id=25
Q4  0x063a7c00.0xe7f8c000-0xebe7dfff   64456  SHMEM id=26
Q4  0x063a7c00.0xebe7e000-0xefd6ffff   64456  SHMEM id=27
Q4  0x063a7c00.0xefd70000-0xefffffff    2624  FREE
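When reading shminfo output like the above, the two numbers that matter are the total FREE kilobytes and the largest single FREE block: a shmget() or mmap() request needing one contiguous chunk larger than the biggest FREE block fails even when the total FREE space looks sufficient. A sketch of summarizing that with awk (a few sample rows from the listing above stand in for real shminfo output):

```shell
# Sum FREE space and find the largest contiguous FREE block from
# shminfo-style rows (fields: quadrant, space.range, Kbytes, usage).
shminfo_rows='Q4 0x063a7c00.0xc03ab000-0xc03abfff     4 FREE
Q4 0x063a7c00.0xc2cd4000-0xc2efffff  2224 FREE
Q4 0x063a7c00.0xc30f9000-0xc323ffff  1308 FREE
Q4 0x063a7c00.0xefd70000-0xefffffff  2624 FREE'
printf '%s\n' "$shminfo_rows" | awk '$4 == "FREE" {
  total += $3
  if ($3 > max) max = $3
}
END { printf "FREE total: %d KB, largest block: %d KB\n", total, max }'
# -> FREE total: 6160 KB, largest block: 2624 KB
```

On a live system you would pipe the full ./shminfo output through the same awk; the "largest block" figure is the fragmentation ceiling for a single new shared memory segment.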