Alan Hargreaves' Blog

The ramblings of an Australian SaND TSC* Principal Field Technologist

Why you should Patch NTP

This story about massive DDoS attacks using monlist as an attack vector gives an excellent reason why you should apply the NTP patches listed on the Sun Security Blog.

Written by Alan

June 13, 2014 at 10:46 am

Posted in Uncategorized

Who is renicing these processes?

I was helping out a colleague with just such a call this morning. While the DTrace script I produced was not helpful in this particular case, I think it bears sharing anyway.

What we wanted was a way to find out why various processes were running with nice set to -20.  There are two ways in which a process can have its nice changed.

  • nice(2) – where it changes itself
  • priocntl(2) – where something else changes it

I ended up with the following script after a bit of poking around.

# dtrace -n '
syscall::nice:entry {
        printf("[%d] %s calling nice(%d)", pid, execname, arg0); }
syscall::priocntlsys:entry /arg2 == 6/ {
        this->n = (pcnice_t *)copyin(arg3, sizeof (pcnice_t));
        this->id = (procset_t *)copyin(arg1, sizeof (procset_t));
        printf("[%d] %s renicing %d by %d",
            pid, execname, this->id->p_lid, this->n->pc_val); }'


There is an assumption in there that p_lid is the PID I want, but in this particular case it turns out to be OK. Matching arg2 against 6 ensures that we only see priocntl() calls with the PC_DONICE command. I could also have had it check this->n->pc_op, but I can put up with the extra output.

So what happens when we have this running and then try something like

# renice -20 4147
dtrace: description 'syscall::nice:entry ' matched 2 probes
 0 508 priocntlsys:entry [4179] renice renicing 4147 by 0
 0 508 priocntlsys:entry [4179] renice renicing 4147 by -20

Which is exactly what we wanted. We see the renice command (pid 4179) modifying pid 4147.

Oh, why didn’t this help I hear you ask?

It turns out that in this instance the process in question was being started by init from /etc/inittab, and so it started with its nice value set to whatever init was running at. In this case that was -20.

Written by Alan

December 23, 2013 at 10:39 am

Posted in Uncategorized

A Solaris tmpfs uses real memory

That title may sound a little self-explanatory and obvious, but over the last two weeks I have had two customers tell me flat out that /tmp uses swap, and that I should therefore continue to look elsewhere for where their memory was being used.

This is likely because when you define /tmp in /etc/vfstab, you list the device being used as swap.

In the context of a tmpfs, swap means physical memory + physical swap. A tmpfs uses pageable kernel memory. This means that it will use kernel memory, but if required these pages can be paged to the swap device. Indeed if you put more data onto a tmpfs than you have physical memory, this is pretty much guaranteed.

If you are still not convinced try the following.

  1. In one window start up the command
    $ vmstat 2
  2. In another window make a 1gb file in /tmp.
    $ mkfile 1g /tmp/testfile
  3. Watch what happens in the free memory column in the vmstat.

There seems to be a misconception amongst some that a tmpfs is a way of stealing some of the disk we have allocated as swap to use as a filesystem without impacting memory. I’m sorry, this is not the case.


Written by Alan

February 28, 2013 at 9:38 am

Posted in Uncategorized

Using /etc/system on Solaris

I had cause to be reminded of this article I wrote for On#Sun almost ten years ago, and just noticed that I had not transferred it to my blog.

/etc/system is a file that is read just before the root filesystem is mounted. It contains directives to the kernel about configuring the system. Going into depth on this topic could span multiple books so I’m just going to give some pointers and suggestions here.

Warning, Danger Will Robinson

Settings here can affect initial array and structure allocations, and even such things as the module load path and where the root filesystem actually resides.

It is possible to render your system unbootable if you are not careful. If this happens, you might try booting with the '-a' option, which gives you the chance to tell the system not to load /etc/system.

Just because you find a set of values works well on one system does not necessarily mean that they will work properly on another. This is especially true if we are looking at different releases of the operating system, or different hardware.

You will need to reboot your system before these new values will take effect.

The basic actions that can be taken are outlined in the comments of the file itself so I won’t go into them here.

The most common action is to set a value. Any number of products make suggestions for settings here (e.g. Oracle, Veritas Volume Manager and Veritas Filesystem, to name a few). Setting a value overrides the system default.

A practice that I follow when working on this file is to place a comment explaining why and when I made a particular setting (remember that a comment in this file is prefixed by a '*', not a '#'). This is useful later down the track when I may have to upgrade the system. It could be that the setting no longer has the desired effect, and it is good to know why we originally made it.
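For example, an entry with its explanatory comment might look like this (the parameter, value and date here are purely illustrative, not a recommendation):

```
* 2013-01-22 alan: raised the shared memory limit for the Oracle
* install on this box, per the vendor's tuning guide.
* Revisit at the next OS upgrade.
set shmsys:shminfo_shmmax=4294967296
```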

I harp on this point but it is important.

Just because settings work on one machine does not make them directly transferable to another.

For example

set lotsfree=1024

This tells the kernel not to start running the page scanner (which pages memory out to disk) until free memory drops below 8MB (1024 x 8KB pages). While this setting may be fine on a machine with around 512MB of memory, it makes no sense on a machine with 10GB. Indeed, if such a machine is under memory pressure, by the time we get down to 8MB of free memory we have very little breathing space in which to recover before more memory is required. The end result is a system that grinds to a halt until it can free up some resources.

Oracle makes available the Solaris Tunable Parameters guide as a part of the documentation for each release of Solaris. It gives information about the default values and the uses of a lot of system parameters.

Written by Alan

January 22, 2013 at 10:22 am

Posted in Solaris, Work

The Importance of Fully Specifying a Problem

I had a customer call this week where we were provided a forced crashdump and asked to determine why the system was hung.

Normally when you are looking at a hung system, you will find a lot of threads blocked on various locks, and most likely very little actually running on the system (unless it’s threads spinning on busy wait type locks).

This vmcore showed none of that. In fact we were seeing hundreds of threads actively on cpu in the second before the dump was forced.

This prompted the question back to the customer:

What exactly were you seeing that made you believe that the system was hung?

It took a few days to get a response, but the response that I got back was that they were not able to ssh into the system and when they tried to login to the console, they got the login prompt, but after typing “root” and hitting return, the console was no longer responsive.

This description puts a whole new light on the “hang”. You immediately start thinking “name services”.

Looking at the crashdump, yes the sshds are all in door calls to nscd, and nscd is idle waiting on responses from the network.

Looking at the connections I see a lot of connections to the secure ldap port in CLOSE_WAIT, but more interestingly I am seeing a few connections over the non-secure ldap port to a different LDAP server just sitting open.

My feeling at this point is that we have either a non-responding LDAP server, or one that is responding slowly; the resolution is to investigate that server.


When you log a service ticket for a "system hang", it's great to get the forced crashdump first up, but it's even better to get a description of what you observed that made you believe the system was hung.

Written by Alan

June 3, 2012 at 9:19 am

Posted in Solaris, Uncategorized, Work

has moved (and changed address)

Over the last couple of hours the physical location of the server changed. The benefit is that the machine is now in the same building as the machines that we use to analyse your uploads, so getting the data onto those machines is now substantially faster.

What do I have to do to take advantage of this?

If you are using the DNS to look it up, then nothing; the DNS has changed over to the new address. However, if you are using the IP address, you need to start using the new one. We are still uploading from the old server for the moment, but it is a substantially slower link. The new address is

Written by Alan

September 27, 2011 at 8:29 pm

Posted in Uncategorized

What are these door things?

I recently had cause to pass on an article that I wrote for the now defunct Australian Sun Customer magazine (On#Sun) on the subject of doors. It occurred to me that I really should put this on the blog. Hopefully this will give some insight as to why I think doors are really cool.

Where does this door go?

If you have had a glance through /etc you may have come across some files with door in their name. You may also have noticed calls to door functions if you have run truss over commands that interact with the name resolver routines or password entry lookup.

The Basic Idea (an example)

Imagine that you have an application that does two things. First, it provides lookup function into a potentially slow database (e.g. the DNS). Second, it caches the results to minimise having to make the slower calls.

There are already a number of ways that we could call the cached lookup function from a client (e.g. RPCs & sockets), but these require that we give up the CPU and wait for a response from another process. Even for a potentially fast operation, it could be some time before the client is next scheduled. Wouldn't it be nice if we could complete the operation within our time slice? Well, this is what the door interface accomplishes.

The Server

When you initialise a door server, a number of threads are made available to run a particular function within the server. I’ll call this function the door function. These threads are created as if they had made a call to door_return() from within the door function. The server will associate a file and an open file descriptor with this function.

The Client

When the client initialises, it opens the door file and specifies the file descriptor when it calls door_call(), along with some buffers for arguments and return values. The kernel uses this file descriptor to work out how to call the door function in the server.

At this point the kernel gets a little clever. Execution is transferred directly to an idle door thread in the server process, which runs as if the door function had been called with the arguments that the client specified. As it runs in the server context, it has access to all of the global variables and other functions available to that process. When the door function is complete, instead of using return(), it calls door_return(). Execution is transferred back to the client with the result returned in a buffer we passed to door_call(). The server thread is left sleeping in door_return().

If we did not have to give up the CPU in the door function, then we have just gained a major speed increase. If we did have to give it up, then we didn’t really lose anything, as the overhead is only small.

This is how services such as the name service cache daemon (nscd) work. Library functions such as gethostbyname(), getpwent() and indeed any call whose behaviour is defined in /etc/nsswitch.conf are implemented with door calls to nscd. Syslog also uses this interface so that processes are not slowed down substantially by syslog calls. The door function simply places the request in a queue (a fast operation) for another syslog thread to look after and then calls door_return(). (That's a simplification of how syslog actually uses it.)

For further information see the section 3C man pages for door_create, door_info, door_return and door_call.
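The whole round trip can be sketched in C. This is Solaris/illumos-specific, the door path and message are invented for illustration, and for brevity the client and server halves share one process, where normally they would be separate programs:

```c
/* Minimal door sketch: compile on Solaris/illumos with: cc doors_demo.c */
#include <door.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stropts.h>    /* fattach(), fdetach() */
#include <unistd.h>

#define DOOR_PATH "/tmp/demo_door"      /* hypothetical rendezvous file */

/* The door function: runs on a server thread whenever a client
 * calls door_call() on our door descriptor. */
static void
door_func(void *cookie, char *argp, size_t arg_size,
    door_desc_t *dp, uint_t n_desc)
{
        char reply[128];

        (void) snprintf(reply, sizeof (reply), "server saw: %s", argp);
        /* Hand the result back; this thread then sleeps in door_return(). */
        (void) door_return(reply, strlen(reply) + 1, NULL, 0);
}

int
main(void)
{
        int did, fd;
        char rbuf[128];
        door_arg_t da;

        /* Server side: create the door and attach it to a file. */
        did = door_create(door_func, NULL, 0);
        (void) close(open(DOOR_PATH, O_CREAT | O_RDWR, 0644));
        if (fattach(did, DOOR_PATH) != 0) {
                perror("fattach");
                return (1);
        }

        /* Client side (in the same process only for brevity): open the
         * door file and invoke the door function through the kernel. */
        fd = open(DOOR_PATH, O_RDWR);
        da.data_ptr = "hello";
        da.data_size = sizeof ("hello");
        da.desc_ptr = NULL;
        da.desc_num = 0;
        da.rbuf = rbuf;
        da.rsize = sizeof (rbuf);
        if (door_call(fd, &da) != 0) {
                perror("door_call");
                return (1);
        }
        (void) printf("%s\n", da.data_ptr);

        (void) fdetach(DOOR_PATH);
        return (0);
}
```

Note that the reply lands in the rbuf we supplied, and that door_call() returns within the client's time slice when the door function does not block.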

Written by Alan

August 1, 2011 at 5:21 pm

