Applies to SUSE Linux Enterprise Real Time 12 SP5

3 Full Featured Cpuset Manipulation Commands

While basic shielding as described above is useful and covers a common use model for cset, there comes a time when more functionality is needed to implement your strategy. For such cases, cset provides two subcommands: set, which allows you to manipulate cpusets; and proc, which allows you to manipulate processes within those cpusets.

3.1 The set Subcommand

To do anything with cpusets, you must be able to create, adjust, rename, move, and destroy them. The set subcommand allows the management of cpusets in such a manner.

3.1.1 Creating and Destroying Cpusets with set

The basic syntax of set for cpuset creation is:

tux > cset set -c 1-3 -s my_cpuset1
cset: --> created cpuset "my_cpuset1"

This creates a cpuset named my_cpuset1 with a CPUSPEC of CPU1, CPU2 and CPU3. The CPUSPEC is the same concept as described in Section 2.2, “Setup and Teardown of the Shield”. The set subcommand also takes a -m/--mem option that lets you specify the memory nodes the set will use, and flags to make the CPUs and MEMs exclusive to the cpuset. If you are on a non-NUMA machine, leave the -m option out and the default memory node 0 will be used.
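
For example, on a hypothetical NUMA machine you could bind a new cpuset to memory node 1 in addition to CPUs 4 through 7. The cpuset name, CPU numbers and node number below are chosen purely for illustration, and the command output is omitted:

tux > cset set -c 4-7 -m 1 -s numa_set   # CPUs, memory node and name are illustrative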

As with shield, you can adjust the CPUs and MEMs with subsequent calls to set. If, for example, you want to adjust the my_cpuset1 cpuset to use only CPUs 1 and 3 (omitting CPU2), issue the following command.

tux > cset set -c 1,3 -s my_cpuset1
cset: --> modified cpuset "my_cpuset1"

cset will then adjust the CPUs that are assigned to the my_cpuset1 set to only use CPU1 and CPU3.

To rename a cpuset, use the -n/--newname option. For example:

tux > cset set -s my_cpuset1 -n super_set
cset: --> renaming "/cpusets/my_cpuset1" to "super_set"

This renames the cpuset called my_cpuset1 to super_set.

To destroy a cpuset, use the -d/--destroy option as follows.

tux > cset set -d super_set
cset: --> processing cpuset "super_set", moving 0 tasks to parent "/"...
cset: --> deleting cpuset "/super_set"
cset: done

This command destroys the newly created cpuset called super_set. When a cpuset is destroyed, all the tasks running in it are moved to the parent cpuset. The root cpuset, which always exists and always contains all CPUs, cannot be destroyed. You may also give the --destroy option a list of cpusets to destroy.
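
For example, assuming two scratch cpusets named set_a and set_b exist (the names are hypothetical), both can be destroyed with a single call; the output is omitted:

tux > cset set -d set_a set_b   # set_a and set_b are hypothetical cpuset names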

Note
Note: Information About the Mounted Cpuset File System

cset creates cpusets in a mounted cpuset file system. You do not need to know where that file system is mounted, although it is easy to find out (by default it is mounted on /cpusets). When you give the set subcommand a name for a new cpuset, it is created wherever the cpuset file system is mounted.
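
If you are curious where the cpuset file system is mounted on your machine, a standard way to check is to query the kernel's mount table (this is ordinary Linux tooling, not part of cset):

tux > grep cpuset /proc/mounts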

To create a cpuset hierarchy, you must give a path to the cset set subcommand. This path always begins with the root cpuset, for which the path is /. For example:

tux > cset set -c 1,3 -s top_set
cset: --> created cpuset "top_set"


tux > cset set -c 3 -s /top_set/sub_set
cset: --> created cpuset "/top_set/sub_set"

These commands create two cpusets: top_set and sub_set. The top_set cpuset uses CPU1 and CPU3, and it has a child cpuset, sub_set, which uses only CPU3. Once you have created a subset with a path, you do not need to specify the path to affect it, provided its name is unique. If the name is not unique, cset will complain and ask you to use the path. For example:

tux > cset set -c 1,3 -s sub_set
cset: --> modified cpuset "sub_set"

This command adds CPU1 to the sub_set cpuset for its use. Note that using the path in this case is optional.

If you attempt to destroy a cpuset which has sub-cpusets, cset will complain and refuse to do it unless you use the -r/--recurse and --force options. If you do use --force, then all the tasks running in the deletion target and in all of its sub-cpusets will be moved to the target’s parent cpuset, and the target together with its sub-cpusets will be destroyed.
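
The following is a sketch of such a forced recursive removal, using the top_set hierarchy created above; double-check the target before forcing, since all contained tasks are relocated to the parent. The output is omitted:

tux > cset set -d -r --force top_set   # destroys top_set together with its sub-cpusets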

Moving a cpuset from under a certain cpuset to a different location is not implemented.

3.1.2 Listing Cpusets with set

To list cpusets, use the set subcommand with the -l/--list option. For example:

tux > cset set -l
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
root         0-3 y        0 y       320   1    /
one          3 n          0 n       0     1    /one

This shows that there is currently one cpuset present called one. (Of course there is also the root set, which is always present.) The output shows that the one cpuset has no tasks running in it, while the root cpuset has 320 tasks running. The -X columns next to the CPUs and MEMs fields denote whether the CPUs and MEMs are marked exclusive to that cpuset. Note that the one cpuset has a subset, as indicated by the 1 in the Subs field. You can specify a cpuset to list with the set subcommand as follows:

tux > cset set -l -s one
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
one          3 n          0 n       0     1   /one
two          3 n          0 n       0     1   /one/two

This output shows that there is a cpuset called two inside cpuset one, and that it also has a subset. You can also ask for a recursive listing as follows:

tux > cset set -l -r
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
root         0-3 y        0 y       320   1    /
one          3 n          0 n       0     1    /one
two          3 n          0 n       0     1    /one/two
three        3 n          0 n       0     0    /one/two/three

This command lists all cpusets existing on the system, since it asks for a recursive listing beginning at the root cpuset. Incidentally, should you need to specify the root cpuset explicitly, you can use either root or /; just remember that the root cpuset cannot be deleted or modified.
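
For example, both of the following forms refer to the root cpuset explicitly in a listing; the output is identical in both cases and is omitted here:

tux > cset set -l -s root
tux > cset set -l -s /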

3.2 The proc Subcommand

Now that you know how to create, rename and destroy cpusets with the set subcommand, the next step is to manage threads and processes in those cpusets. The subcommand to do this is called proc, and it allows you to exec processes into a cpuset, move existing tasks between cpusets, and list tasks running in specified cpusets. For the following examples, let us assume a cpuset setup of two sets as follows:

tux > cset set -l
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
root         0-3 y        0 y       309   2    /
two          2 n          0 n       3     0    /two
three        3 n          0 n       10    0    /three

3.2.1 Listing Tasks With proc

Operation of the proc subcommand follows the same model as the set subcommand. For example, to list tasks in a cpuset, you need to use the -l/--list option and specify the cpuset by name or, if the name exists multiple times in the cpuset hierarchy, by path. For example:

tux > cset proc -l -s two
cset: "two" cpuset of CPUSPEC(2) with 3 tasks running
USER     PID   PPID  SPPr TASK NAME
-------- ----- ----- ---- ---------
root     16141 4300  Soth bash
root     16171 16141 Soth bash
root     16703 16171 Roth python ./cset proc -l two

This output shows us that the cpuset called two has only CPU2 attached to it and is running three tasks: two shells and the python command doing the listing. Note that cpusets are inherited: if a process is contained in a cpuset, then any children it spawns also run within that set. In this case, the python command to list set two was run from a shell already running in set two. This can be seen from the PPID (parent process ID) of the python command matching the PID of the shell.

Additionally, the SPPr field needs explanation. SPPr stands for State, Policy and Priority. You can see that the first two tasks are sleeping (S) and running at timeshare priority, marked as oth (for other). The last task is marked as running (R), also at timeshare priority (oth). If any of these tasks were running at real-time priority, the policy would be shown as f for FIFO or r for round robin, and the priority would be a number from 1 to 99. See below for an example.

tux > cset proc -l -s root | head -7
cset: "root" cpuset of CPUSPEC(0-3) with 309 tasks running
USER     PID   PPID  SPPr TASK NAME
-------- ----- ----- ---- ---------
root        1     0 Soth init [5]
root        2     0 Soth [kthreadd]
root        3     2 Sf99 [migration/0]
root        4     2 Sf99 [posix_cpu_timer]

This output shows the first few tasks in the root cpuset. Note that both init and [kthreadd] are running at timeshare priority; however, the [migration/0] and [posix_cpu_timer] kernel threads are running with a real-time policy of FIFO and a priority of 99. Incidentally, this output is from a system running the real-time Linux kernel, which runs some kernel threads at real-time priorities. And finally, note that you can use cset like any other Linux tool and include it in pipelines, as in the example above.

Taking a peek into the third cpuset called three, we see:

tux > cset proc -l -s three
cset: "three" cpuset of CPUSPEC(3) with 10 tasks running
USER     PID   PPID  SPPr TASK NAME
-------- ----- ----- ---- ---------
alext    16165     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    16169     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    16170     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    16237     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    16491     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    16492     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    16493     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    17243     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    17244     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    17265     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...

This output shows that a lot of beagled tasks are running in this cpuset, and it also shows an ellipsis (...) at the end of their listings. If you see this ellipsis, it means that the command line was too long to fit onto an 80-character screen. To see the entire command line, use the -v/--verbose flag:

tux > cset proc -l -s three -v | head -4
cset: "three" cpuset of CPUSPEC(3) with 10 tasks running
USER     PID   PPID  SPPr TASK NAME
-------- ----- ----- ---- ---------
alext    16165     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg --autostarted --indexing-delay 300

3.2.2 Execing Tasks with proc

To exec a task into a cpuset, use the proc subcommand with the -e/--exec option. Let’s exec a shell into the cpuset named two in our setup. First, we check what is running in that set:

tux > cset proc -l -s two
cset: "two" cpuset of CPUSPEC(2) with 0 tasks running

tux > cset proc -s two -e bash
cset: --> last message, executed args into cpuset "/two", new pid is: 20955

tux > cset proc -l -s two
cset: "two" cpuset of CPUSPEC(2) with 2 tasks running
USER     PID   PPID  SPPr TASK NAME
-------- ----- ----- ---- ---------
root     20955 19253 Soth bash
root     20981 20955 Roth python ./cset proc -l two

You can see that initially, two had nothing running in it. After the second command completes, we list two again and see that there are two tasks running: the shell which we execed and the python cset command that is listing the cpuset. The reason for the second task is that the cpuset property of a running task is inherited by all its children. Since we executed the listing command from the new shell, which was bound to cpuset two, the resulting process for the listing is also bound to cpuset two. Let’s test that by running a new shell without a prefixed cset command.

tux > bash


tux > cset proc -l -s two
cset: "two" cpuset of CPUSPEC(2) with 3 tasks running
USER     PID   PPID  SPPr TASK NAME
-------- ----- ----- ---- ---------
root     20955 19253 Soth bash
root     21118 20955 Soth bash
root     21147 21118 Roth python ./cset proc -l two

Here again we see that the second shell, PID 21118, has a parent PID of 20955 which is the first shell. Both shells, and the listing command, are running in the two cpuset.

Note
Note: Separating the Tool Options From the cset Command

cset follows the tradition of separating the tool’s own options from the options of the command to be execed with a double dash (--). This is not shown in the simple example above, but if the command you want to exec also takes options, separate them with the double dash as follows:

tux > cset proc -s myset -e mycommand -- -v

The -v will be passed to mycommand, and not to cset.

Tip
Tip: Execing a Shell Into the Shield Is Useful

Execing a shell into a cpuset is a useful way to experiment with running tasks in that cpuset since all children of the shell will also run in the same cpuset. Finally, if you misspell the command to be execed, the result may be puzzling. For example:

tux > cset proc -s two -e blah-blah
cset: --> last message, executed args into cpuset "/two", new pid is: 21655
cset: **> [Errno 2] No such file or directory

The result is that no new process is created, even though a new PID is output. The reason is that the cset process forked in preparation for the exec, but the command blah-blah could not be found, so there was nothing to execute.

3.2.3 Moving Tasks with proc

Although the ability to exec a task into a cpuset is fundamental, you will most likely be moving tasks between cpusets more often. Moving tasks is accomplished with the -m/--move and -p/--pid options to the proc subcommand of cset. The move option tells the proc subcommand that a task move is requested. The -p/--pid option takes an argument called a PIDSPEC (PID Specification). The PIDSPEC defines which tasks get operated on.

The PIDSPEC can be a single process ID, a list of process IDs separated by commas, or a list of process ID ranges, also separated by commas. For example:

--pid 1234

This PIDSPEC argument specifies that PID 1234 will be moved.

--pid 1234,42,1934,15000,15001,15002

This PIDSPEC argument specifies that only listed tasks will be moved.

-p 5000,5100,6010-7000,9232

This PIDSPEC argument specifies that tasks 5000, 5100 and 9232 will be moved along with any existing task with PID in the range 6010 through 7000 inclusive.

Note
Note: Information About the Range In a PIDSPEC

A range in a PIDSPEC does not need to have running tasks for every number in that range. In fact, it is not even an error if there are no tasks running in that range; none will be moved in that case. The range simply specifies to act on any tasks that have a PID or TID that is within that range.
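
As a hypothetical convenience, a comma-separated PIDSPEC can also be assembled with standard tools such as pgrep; myapp below is a made-up process name and two is the target cpuset. The output is omitted:

tux > cset proc -m -p $(pgrep -d, myapp) -t two   # myapp is a placeholder; pgrep -d, joins the matching PIDs with commas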

In the following example, we move the current shell into the cpuset named two with a range PIDSPEC and back out to the root cpuset with the Bash variable for the current PID.

tux > cset proc -l -s two
cset: "two" cpuset of CPUSPEC(2) with 0 tasks running


tux > echo $$
19253


tux > cset proc -m -p 19250-19260 -t two
cset: moving following pidspec: 19253
cset: moving 1 userspace tasks to /two
cset: done


tux > cset proc -l -s two
cset: "two" cpuset of CPUSPEC(2) with 2 tasks running
USER     PID   PPID  SPPr TASK NAME
-------- ----- ----- ---- ---------
root     19253 16447 Roth bash
root     29456 19253 Roth python ./cset proc -l -s two


tux > cset proc -m -p $$ -t root
cset: moving following pidspec: 19253
cset: moving 1 userspace tasks to /
cset: done


tux > cset proc -l -s two
cset: "two" cpuset of CPUSPEC(2) with 0 tasks running

Use of an appropriate PIDSPEC can thus be handy for moving tasks and groups of tasks. Additionally, there is one more option that helps with multi-threaded processes: the --threads flag. If this flag is used together with a proc move command and a PIDSPEC, and any task ID in the PIDSPEC belongs to a thread in a process container, then all the sibling threads in that process container are moved as well. This flag provides an easy mechanism to move all threads of a process by specifying just one thread of that process. In the following example, we move all the threads running in cpuset three to cpuset two by using the --threads flag.

tux > cset set two three
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
two          2 n          0 n       0     0    /two
three        3 n          0 n       10    0    /three


tux > cset proc -l -s three
cset: "three" cpuset of CPUSPEC(3) with 10 tasks running
USER     PID   PPID  SPPr TASK NAME
-------- ----- ----- ---- ---------
alext    16165     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    16169     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    16170     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    16237     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    16491     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    16492     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    16493     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    17243     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    17244     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext    27133     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...


tux > cset proc -m -p 16165 --threads -t two
cset: moving following pidspec: 16491,16493,16492,16170,16165,16169,27133,17244,17243,16237
cset: moving 10 userspace tasks to /two
[==================================================]%
cset: done


tux > cset set two three
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
two          2 n          0 n       10    0    /two
three        3 n          0 n       0     0    /three

3.2.3.1 Moving All Tasks From One Cpuset to Another

There is a special case for moving all tasks currently running in one cpuset to another. This can be a common use case, and when you need to do it, specifying a PIDSPEC with -p is not necessary so long as you use the -f/--fromset and the -t/--toset options.

In the following example, we move all 10 beagled threads back to cpuset three with this method.

tux > cset proc -l two three
cset: "two" cpuset of CPUSPEC(2) with 10 tasks running
USER      PID   PPID  SPPr TASK NAME
--------  ----- ----- ---- ---------
alext     16165     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -…
alext     16169     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext     16170     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext     16237     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext     16491     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext     16492     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext     16493     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext     17243     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext     17244     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
alext     27133     1 Soth beagled /usr/lib64/beagle/BeagleDaemon.exe --bg -...
cset: "three" cpuset of CPUSPEC(3) with 0 tasks running


tux > cset proc -m -f two -t three
cset: moving all tasks from two to /three
cset: moving 10 userspace tasks to /three
[==================================================]%
cset: done


tux > cset set two three
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
two          2 n          0 n       0     0    /two
three        3 n          0 n       10    0    /three

3.2.3.2 Moving Kernel Threads With proc

Kernel threads are special: cset detects tasks that are kernel threads and refuses to move them unless you also add the -k/--kthread option to your proc move command. Even if you include -k, cset will still refuse to move kernel threads that are bound to specific CPUs. The reason for this is system protection.

Several kernel threads, especially on the real-time Linux kernel, are bound to specific CPUs and depend on per-CPU kernel variables. If you move these threads to a CPU other than the one they are bound to, you risk at best a horribly slow system, and at worst a system hang. If you must move those threads (after all, cset needs to give the knowledgeable user access to the keys), then you also need to use the --force option.

Warning
Warning: Use --force With Care

Overriding a task move command with --force can have dire consequences for the system. Be sure of the command before you force it.

In the following example, we move all unbound kernel threads running in the root cpuset to the cpuset named two by using the -k option.

tux > cset proc -k -f root -t two
cset: moving all kernel threads from / to /two
cset: moving 70 kernel threads to: /two
cset: --> not moving 76 threads (not unbound, use --force)
[==================================================]%
cset: done

You will note that we used the fromset→toset facility of the proc subcommand and only specified the -k option (not the -m option). This has the effect of moving only the kernel threads.

Note that only 70 kernel threads were actually moved and 76 were not. The 76 kernel threads were not moved because they are bound to specific CPUs. Now, let’s move those kernel threads back to root.

tux > cset proc -k -f two -t root
cset: moving all kernel threads from /two to /
cset: ** no task matched move criteria
cset: **> kernel tasks are bound, use --force if ok


tux > cset set -l -s two
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
two          2 n          0 n       70    0    /two

cset refused to move the kernel threads back to root because it says that they are bound. Let’s check this with the taskset command.

tux > cset proc -l -s two | head -5
cset: "two" cpuset of CPUSPEC(2) with 70 tasks running
USER     PID   PPID  SPPr TASK NAME
-------- ----- ----- ---- ---------
root         2     0 Soth [kthreadd]
root        55     2 Soth [khelper]


tux > taskset -p 2
pid 2's current affinity mask: 4


tux > cset set -l -s two
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
two          2 n          0 n       70    0    /two

Of course, since the cpuset named two only has CPU2 assigned to it, once we moved the unbound kernel threads from root into two, their affinity masks were automatically changed to use only CPU2. This is evident from the taskset output: the affinity mask 4 is a bitmask (binary 100) that selects CPU2. To really move these threads back to root, we need to force the move as follows.

tux > cset proc -k -f two -t root --force
cset: moving all kernel threads from /two to /
cset: moving 70 kernel threads to: /
[==================================================]%
cset: done

3.2.4 Destroying Tasks

There actually is no cset subcommand or option to destroy tasks—it’s not really needed. Tasks exist and are accessible on the system as normal, even if they happen to be running in one cpuset or another. To destroy tasks, use the usual Ctrl+C method or the kill(1) command.
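
As a quick, hypothetical illustration, you could look up a task's PID with the proc subcommand and then terminate it with kill; the PID below is only an example:

tux > cset proc -l -s two
tux > kill 20955   # 20955 is a hypothetical PID taken from such a listing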

3.3 Implementing Shielding With set and proc

With the preceding material on the set and proc subcommands, we now have the background to implement the basic shielding model that the shield subcommand provides.

One may ask why we would want to do this, especially since shield already does it. The answer is that sometimes you need more functionality than shield provides to implement your shielding strategy. In those cases you must first stop using shield, since that subcommand would interfere with the further application of set and proc. However, you will still need to reproduce the functionality of shield to implement successful shielding.

Remember from the sections above describing shield that shielding involves at minimum three cpusets: root, which is always present and contains all CPUs; system, which is the non-shielded set of CPUs and runs unimportant system tasks; and user, which is the shielded set of CPUs and runs your important tasks. Remember also that shield moves all movable tasks into system and, optionally, moves unbound kernel threads into system as well.

Start by creating the system and user cpusets as follows. Let's assume that the machine is a four-CPU machine without NUMA memory features. The system cpuset should hold only CPU0, while the user cpuset should hold the rest of the CPUs.

tux > cset set -c 0 -s system
cset: --> created cpuset "system"


tux > cset set -c 1-3 -s user
cset: --> created cpuset "user"


tux > cset set -l
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
root         0-3 y        0 y       333   2    /
user         1-3 n        0 n       0     0    /user
system       0 n          0 n       0     0    /system

Now, we need to move all running user processes into the system cpuset.

tux > cset proc -m -f root -t system
cset: moving all tasks from root to /system
cset: moving 188 userspace tasks to /system
[==================================================]%
cset: done


tux > cset set -l
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
root         0-3 y        0 y       146    2   /
user         1-3 n        0 n       0      0   /user
system       0 n          0 n       187    0   /system

We now have the basic shielding setup. Since all user space tasks are running in system, anything that is spawned from them will also run in system. The user cpuset has nothing running in it unless you put tasks there with the proc subcommand as described above. If you also want to move movable kernel threads from root to system (to achieve a form of interrupt shielding on a real-time Linux kernel, for example), execute the following command as well:

tux > cset proc -k -f root -t system
cset: moving all kernel threads from / to /system
cset: moving 70 kernel threads to: /system
cset: --> not moving 76 threads (not unbound, use --force)
[==================================================]%
cset: done


tux > cset set -l
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
root         0-3 y        0 y       76    2    /
user         1-3 n        0 n       0     0    /user
system       0 n          0 n       257   0    /system

At this point, you have achieved the simple shielding model that the shield subcommand provides. You can now add other cpuset definitions to expand your shielding strategy beyond that simple model.
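
As a minimal illustration of using the shield just built, a task can be launched directly into the user cpuset; my_rt_app is a hypothetical program name and the output is omitted:

tux > cset proc -s user -e my_rt_app   # my_rt_app is a placeholder for your application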

3.4 Implementing Hierarchy With set and proc

One popular extended shielding model is based on hierarchical cpusets, each with diminishing numbers of CPUs. This model is used to create priority cpusets that allow assignment of CPU resources to tasks based on some arbitrary priority definition. The idea is that a higher priority task will get access to more CPU resources than a lower priority task.

The example provided here once again assumes a machine with four CPUs and no NUMA memory features. This base serves to illustrate the point well; however, note that if your machine has (many) more CPUs, then strategies such as this and others get more interesting.

We define a shielding setup as in the previous section, where a system cpuset holding only CPU0 takes care of unimportant system tasks. You will usually require this type of cpuset since it forms the basis of shielding. We modify the strategy to not use a user cpuset; instead, we create several new cpusets, each holding one more CPU than the previous one. These cpusets will be called prio_low with one CPU, prio_med with two CPUs, prio_high with three CPUs, and prio_all with all CPUs.

Note
Note: The Sense Behind Creating a prio_all Cpuset With All CPUs

You may ask: why create a prio_all with all CPUs when that is substantially the definition of the root cpuset? The answer is that it is best to keep a separation between the root cpuset and everything else, even if a particular cpuset duplicates root exactly. Usually, automation is built on top of a cpuset strategy, and in that automation it is best to avoid relying on invariant cpuset names such as root.

All of these prio_* cpusets can be created under root, in a flat way; however, it is advantageous to create them as a hierarchy. The reasoning for this is twofold: first, if a cpuset is destroyed, all its tasks are moved to its parent; second, one can use exclusive CPUs in a hierarchy.

If a cpuset has CPUs that are exclusive to it, then other cpusets may not use those CPUs unless they are children of that cpuset. This has more relevance to machines with many CPUs and more complex strategies.
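
The following is a rough sketch of that idea: a parent cpuset claims CPUs 1 through 3 exclusively while a child cpuset below it still uses CPU3. The cpuset names rt_pool and rt_high are made up, and the exact spelling of the exclusivity flag is an assumption (the text above only states that such flags exist), so verify it with cset set --help before relying on it:

tux > cset set -c 1-3 --cpu_exclusive -s rt_pool   # flag spelling assumed; verify with cset set --help
tux > cset set -c 3 -s /rt_pool/rt_high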

Now, we start with a clean slate and build the appropriate cpusets as follows.

tux > cset set -r
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
root         0-3 y        0 y       344   0    /


tux > cset set -c 0-3 prio_all
cset: --> created cpuset "prio_all"


tux > cset set -c 1-3 /prio_all/prio_high
cset: --> created cpuset "/prio_all/prio_high"


tux > cset set -c 2-3 /prio_all/prio_high/prio_med
cset: --> created cpuset "/prio_all/prio_high/prio_med"


tux > cset set -c 3 /prio_all/prio_high/prio_med/prio_low
cset: --> created cpuset "/prio_all/prio_high/prio_med/prio_low"


tux > cset set -c 0 system
cset: --> created cpuset "system"


tux > cset set -l -r
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
root         0-3 y        0 y       344   2    /
system       0 n          0 n       0     0    /system
prio_all     0-3 n        0 n       0     1    /prio_all
prio_high    1-3 n        0 n       0     1    /prio_all/prio_high
prio_med     2-3 n        0 n       0     1    /prio_all/prio_high/prio_med
prio_low     3 n          0 n       0     0    /prio_all/pr...rio_med/prio_low

Note
Note: Why -r/--recurse Is Needed in This Case

The option -r/--recurse is what makes the last command above list all the sets. If you execute that command without -r/--recurse, only the first level of cpusets below root is shown, so the prio_high, prio_med and prio_low cpusets would not appear.

The strategy is now implemented. To activate the shield, we move all user space tasks and all movable kernel threads into the system cpuset.

tux > cset proc -m -k -f root -t system
cset: moving all tasks from root to /system
cset: moving 198 userspace tasks to /system
cset: moving 70 kernel threads to: /system
cset: --> not moving 76 threads (not unbound, use --force)
[==================================================]%
cset: done


tux > cset set -l -r
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
root         0-3 y        0 y       76    2    /
system       0 n          0 n       268   0    /system
prio_all     0-3 n        0 n       0     1    /prio_all
prio_high    1-3 n        0 n       0     1    /prio_all/prio_high
prio_med     2-3 n        0 n       0     1    /prio_all/prio_high/prio_med
prio_low     3 n          0 n       0     0    /prio_all/pr...rio_med/prio_low

The shield is now active. Since the prio_* cpuset names are unique, you can assign tasks to them either via their simple names or via their full paths (as described in Section 3.2.2, “Execing Tasks with proc”).
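
For instance, a task could be started directly in prio_high by name, or an already running process could be moved into prio_med by PID; important_app and PID 4242 are hypothetical, and the output is omitted:

tux > cset proc -s prio_high -e important_app   # important_app is a placeholder
tux > cset proc -m -p 4242 -t prio_med          # 4242 is a hypothetical PID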

You may have noted that there is an ellipsis in the path of the prio_low cpuset in the listing above. This is done to fit the output onto an 80-character screen. To see the entire line, use the -v/--verbose flag as follows:

tux > cset set -l -r -v
cset:
Name         CPUs-X       MEMs-X    Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
root         0-3 y        0 y       76    2    /
system       0 n          0 n       268   0    /system
prio_all     0-3 n        0 n       0     1    /prio_all
prio_high    1-3 n        0 n       0     1    /prio_all/prio_high
prio_med     2-3 n        0 n       0     1    /prio_all/prio_high/prio_med
prio_low     3 n          0 n       0     0    /prio_all/prio_high/prio_med/prio_low