[systemd-bugs] [Bug 75566] New: tmpfs sizes, specifically for /sys/fs/cgroup
bugzilla-daemon at freedesktop.org
Thu Feb 27 01:32:51 PST 2014
https://bugs.freedesktop.org/show_bug.cgi?id=75566
        Bug ID: 75566
       Summary: tmpfs sizes, specifically for /sys/fs/cgroup
       Product: systemd
       Version: unspecified
      Hardware: Other
            OS: All
        Status: NEW
      Severity: normal
      Priority: medium
     Component: general
      Assignee: systemd-bugs at lists.freedesktop.org
      Reporter: michael+freedesktop at stapelberg.de
    QA Contact: systemd-bugs at lists.freedesktop.org
Classification: Unclassified
In http://bugs.debian.org/739574, a user is concerned that the combined
maximum size of all his tmpfs mounts is larger than the amount of RAM
actually available. Specifically, his machine has 4G of RAM and the
following tmpfs mounts:
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1,9G     0  1,9G   0% /dev/shm
tmpfs           1,9G  464K  1,9G   1% /run
tmpfs           1,9G     0  1,9G   0% /sys/fs/cgroup
tmpfs           5,0M     0  5,0M   0% /run/lock
tmpfs           100M     0  100M   0% /run/user
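(For reference, and assuming the 1,9G figures are the usual tmpfs default of
half the physical RAM, the worst case adds up to roughly

    1,9G + 1,9G + 1,9G + 5,0M + 100M ≈ 5,8G

which is indeed well above the 4G of installed RAM, if every mount were
filled to its limit.)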
Now, specifically for /sys/fs/cgroup, which contains no files and only seems to
be used for mounting cgroups in subdirectories, I suppose we could use a
small-ish size= parameter?
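For illustration, something along these lines would cap it. This is a sketch
only; the size=1m figure is an arbitrary assumption, not a tested value:

    # shrink the already-mounted cgroup hierarchy tmpfs to a 1M ceiling
    mount -o remount,size=1m /sys/fs/cgroup

Since tmpfs supports resizing via remount, this could even be applied on a
running system rather than only at mount time.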
What’s the story with regard to /dev/shm and /run? I see that on (older?)
Ubuntu, /dev/shm is a symlink to /run/shm. Why don’t we use the same setup in
systemd? Is there any benefit in having two separate tmpfs mounts?
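As I understand the Ubuntu arrangement (a sketch inferred from the symlink
above, not verified against their init scripts), it boils down to a single
tmpfs:

    # one tmpfs at /run; POSIX shared memory lives in a subdirectory of it
    mount -t tmpfs tmpfs /run
    mkdir -p /run/shm
    ln -s /run/shm /dev/shm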
And is the concern about RAM exhaustion actually a real one or are we missing
something? (Personally, I think it’s unlikely that some process will entirely
exhaust more than one tmpfs mount, but it _could_ happen, right?)