page.title=Overview of Android Memory Management
page.tags=ram,memory,paging,mmap
@jd:body
<div id="qv-wrapper">
<div id="qv">
<h2>In this document</h2>
<ol class="nolist">
<li><a href="#gc">Garbage collection</a></li>
<li><a href="#SharingRAM">Sharing Memory</a></li>
<li><a href="#AllocatingRAM">Allocating and Reclaiming App Memory</a></li>
<li><a href="#RestrictingMemory">Restricting App Memory</a></li>
<li><a href="#SwitchingApps">Switching Apps</a></li>
</ol>
<h2>See Also</h2>
<ul>
<li><a href="{@docRoot}training/articles/memory.html">Manage Your App's Memory</a>
</li>
<li><a href="{@docRoot}studio/profile/investigate-ram.html">Investigating Your RAM Usage</a>
</li>
</ul>
</div>
</div>
<p>
The Android Runtime (ART) and Dalvik virtual machine use
<a href="http://en.wikipedia.org/wiki/Paging" class="external-link">paging</a>
and <a href="http://en.wikipedia.org/wiki/Memory-mapped_files" class="external-link">memory-mapping</a>
(mmapping) to manage memory. This means that any memory an app
modifies&mdash;whether by allocating
new objects or touching mmapped pages&mdash;remains resident in RAM and
cannot be paged out. The only way to release memory from an app is to release
object references that the app holds, making the memory available to the
garbage collector.
There is one exception: any files
mmapped in without modification, such as code,
can be paged out of RAM if the system wants to use that memory elsewhere.
</p>
<p>
This page explains how Android manages app processes and memory
allocation. For more information about how to manage memory more efficiently
in your app, see
<a href="{@docRoot}training/articles/memory.html">Manage Your App's Memory</a>.
</p>
<!-- Section 1 #################################################### -->
<h2 id="gc">Garbage collection</h2>
<p>
A managed memory environment, like the ART or Dalvik virtual machine,
keeps track of each memory allocation. Once it determines
that a piece of memory is no longer being used by the program,
it frees it back to the heap, without any intervention from the programmer.
The mechanism for reclaiming unused memory
within a managed memory environment
is known as <i>garbage collection</i>. Garbage collection has two goals:
find data objects in a program that cannot be accessed in the future; and
reclaim the resources used by those objects.
</p>
<p>
Android's memory heap is a generational one, meaning that there are
different buckets of allocations that it tracks,
based on the expected life and size of an object being allocated.
For example, recently allocated objects belong in the <i>Young generation</i>.
When an object stays active long enough, it can be promoted
to an older generation, followed by a permanent generation.
</p>
<p>
Each heap generation has its own dedicated upper limit on the amount
of memory that objects there can occupy. Any time a generation starts
to fill up, the system executes a garbage collection
event in an attempt to free up memory. The duration of the garbage collection
depends on which generation of objects it's collecting
and how many active objects are in each generation.
</p>
<p>
Even though garbage collection can be quite fast, it can still
affect your app's performance. You don't generally control
when a garbage collection event occurs from within your code.
The system has a running set of criteria for determining when to perform
garbage collection. When the criteria are satisfied,
the system stops executing the process and begins garbage collection. If
garbage collection occurs in the middle of an intensive processing loop
like an animation or during music playback, it can increase processing time.
This increase can potentially push code execution in your app past the
recommended 16ms threshold for efficient and smooth frame rendering.
</p>
<p>
Additionally, your code flow may perform kinds of work that
force garbage collection events to occur
more often or make them last longer than normal.
For example, if you allocate multiple objects in the
innermost part of a for-loop during each frame of an alpha
blending animation, you might pollute your memory heap with a
lot of objects.
In that circumstance, the garbage collector executes multiple garbage
collection events and can degrade the performance of your app.
</p>
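<p>
For example, the following sketch of a custom view (the class and member names
are illustrative only) avoids this pattern by creating drawing objects once and
reusing them on every frame, rather than allocating them inside
<code>onDraw()</code>:
</p>
<pre class="prettyprint">
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.RectF;
import android.util.AttributeSet;
import android.view.View;

public class FadingBarView extends View {
    // Allocated once and reused on every frame, instead of being created
    // inside onDraw(), where they would become garbage each frame and
    // force more frequent garbage collection.
    private final Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final RectF mBar = new RectF();

    public FadingBarView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Reuse the preallocated objects; avoid new Paint() or new RectF() here.
        mBar.set(0, 0, getWidth(), getHeight() / 2f);
        canvas.drawRect(mBar, mPaint);
    }
}
</pre>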
<p>
For more general information about garbage collection, see
<a href="https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)"
class="external-link">Garbage collection</a>.
</p>
<!-- Section 2 #################################################### -->
<h2 id="SharingRAM">Sharing Memory</h2>
<p>
In order to fit everything it needs in RAM,
Android tries to share RAM pages across processes. It
can do so in the following ways:
</p>
<ul>
<li>
Each app process is forked from an existing process called Zygote.
The Zygote process starts when the system boots and loads common
framework code and resources
(such as activity themes). To start a new app process,
the system forks the Zygote process then
loads and runs the app's code in the new process.
This approach allows most of the RAM pages allocated for
framework code and resources to be shared across all app processes.
</li>
<li>
Most static data is mmapped into a process.
This technique allows data to be shared
between processes, and also allows it to be paged
out when needed. Examples of static data include:
Dalvik code (by placing it in a pre-linked <code>.odex</code>
file for direct mmapping), app resources
(by designing the resource table to be a structure
that can be mmapped and by aligning the zip
entries of the APK), and traditional project
elements like native code in <code>.so</code> files.
</li>
<li>
In many places, Android shares the same dynamic
RAM across processes using explicitly allocated
shared memory regions (either with ashmem or gralloc).
For example, window surfaces use shared
memory between the app and screen compositor, and
cursor buffers use shared memory between the
content provider and client.
</li>
</ul>
<p>
Due to the extensive use of shared memory, determining
how much memory your app is using requires
care. Techniques to properly determine your app's
memory use are discussed in
<a href="{@docRoot}studio/profile/investigate-ram.html">Investigating Your RAM Usage</a>.
</p>
<!-- Section 3 #################################################### -->
<h2 id="AllocatingRAM">Allocating and Reclaiming App Memory</h2>
<p>
The Dalvik heap is constrained to a
single virtual memory range for each app process. This defines
the logical heap size, which can grow as it needs to
but only up to a limit that the system defines
for each app.
</p>
<p>
The logical size of the heap is not the same as
the amount of physical memory used by the heap.
When inspecting your app's heap, Android computes
a value called the Proportional Set Size (PSS),
which accounts for both dirty and clean pages
that are shared with other processes&mdash;but only in an
amount that's proportional to how many apps share
that RAM. This PSS total is what the system
considers to be your physical memory footprint.
For more information about PSS, see the
<a href="{@docRoot}studio/profile/investigate-ram.html">Investigating Your RAM Usage</a>
guide.
</p>
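<p>
As a minimal sketch (the class name is illustrative), an app can read its own
PSS at runtime through the {@link android.os.Debug} class; the value is
reported in kilobytes and already includes the proportional share of pages
shared with other processes:
</p>
<pre class="prettyprint">
import android.os.Debug;

public class MemoryFootprint {
    /** Returns the total PSS of the calling process, in kilobytes. */
    public static int currentPssKb() {
        Debug.MemoryInfo memoryInfo = new Debug.MemoryInfo();
        // Fills memoryInfo with memory statistics for the calling process.
        Debug.getMemoryInfo(memoryInfo);
        return memoryInfo.getTotalPss();
    }
}
</pre>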
<p>
The Dalvik heap does not compact the logical
size of the heap, meaning that Android does not
defragment the heap to close up space. Android
can only shrink the logical heap size when there
is unused space at the end of the heap. However,
the system can still reduce physical memory used by the heap.
After garbage collection, Dalvik
walks the heap and finds unused pages, then returns
those pages to the kernel using madvise. So, paired
allocations and deallocations of large
chunks should result in reclaiming all (or nearly all)
the physical memory used. However,
reclaiming memory from small allocations can be much
less efficient because the page used
for a small allocation may still be shared with
something else that has not yet been freed.
</p>
<!-- Section 4 #################################################### -->
<h2 id="RestrictingMemory">Restricting App Memory</h2>
<p>
To maintain a functional multi-tasking environment,
Android sets a hard limit on the heap size
for each app. The exact heap size limit varies
between devices based on how much RAM the device
has available overall. If your app has reached the
heap capacity and tries to allocate more
memory, it can receive an {@link java.lang.OutOfMemoryError}.
</p>
<p>
In some cases, you might want to query the
system to determine exactly how much heap space you
have available on the current device&mdash;for example, to
determine how much data is safe to keep in a
cache. You can query the system for this figure by calling
{@link android.app.ActivityManager#getMemoryClass()}.
This method returns an integer indicating the number of
megabytes available for your app's heap.
</p>
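<p>
For example, here is a sketch of using that value to size an in-memory bitmap
cache (the class name and the one-eighth fraction are arbitrary choices for
illustration):
</p>
<pre class="prettyprint">
import android.app.ActivityManager;
import android.content.Context;
import android.graphics.Bitmap;
import android.util.LruCache;

public class BitmapCacheFactory {
    public static LruCache&lt;String, Bitmap&gt; create(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        int memoryClassMb = am.getMemoryClass();     // per-app heap limit, in MB
        int cacheSizeBytes = memoryClassMb * 1024 * 1024 / 8;
        return new LruCache&lt;String, Bitmap&gt;(cacheSizeBytes) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                return value.getByteCount();         // measure entries in bytes
            }
        };
    }
}
</pre>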
<!-- Section 5 #################################################### -->
<h2 id="SwitchingApps">Switching apps</h2>
<p>
When users switch between apps,
Android keeps apps that
are not in the foreground&mdash;that is, not visible to the user or running a
foreground service, such as music playback&mdash;in a
least recently used (LRU) cache.
For example, when a user first launches an app,
a process is created for it; but when the user
leaves the app, that process does <em>not</em> quit.
The system keeps the process cached. If
the user later returns to the app, the system reuses the process, thereby
making app switching faster.
</p>
<p>
If your app has a cached process and it retains memory
that it currently does not need,
then your app&mdash;even while the user is not using it&mdash;
affects the system's
overall performance. As the system runs low on memory,
it kills processes in the LRU cache
beginning with the process least recently used. The system also
accounts for processes that hold onto the most memory
and can terminate them to free up RAM.
</p>
<p class="note">
<strong>Note:</strong> When the system begins killing processes in the
LRU cache, it primarily works bottom-up. The system also considers which
processes consume more memory and thus provide the system
more memory gain if killed.
The less memory you consume while in the LRU list overall,
the better your chances are
to remain in the list and be able to quickly resume.
</p>
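<p>
One way to cooperate with this behavior is to release memory that your app does
not need while it is cached, for example by overriding
{@link android.content.ComponentCallbacks2#onTrimMemory(int) onTrimMemory()}
in your {@link android.app.Application} class. The helper method below is an
illustrative placeholder; a real app would clear its own caches there:
</p>
<pre class="prettyprint">
import android.app.Application;
import android.content.ComponentCallbacks2;

public class MyApplication extends Application {
    @Override
    public void onTrimMemory(int level) {
        super.onTrimMemory(level);
        // TRIM_MEMORY_UI_HIDDEN and the higher levels indicate that the UI is
        // no longer visible or that the process has moved into the LRU cache.
        if (level >= ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN) {
            releaseMemoryNotNeededInBackground();
        }
    }

    private void releaseMemoryNotNeededInBackground() {
        // Illustrative placeholder: clear in-memory caches, drop cached
        // bitmaps, and release other state that can be rebuilt later.
    }
}
</pre>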
<p>
For more information about how processes are cached while
not running in the foreground and how
Android decides which ones
can be killed, see the
<a href="{@docRoot}guide/components/processes-and-threads.html">Processes and Threads</a>
guide.
</p>