That title may sound a little self-explanatory and obvious, but over the last two weeks I have had two customers tell me flat out that /tmp uses swap, and that I should therefore continue to investigate where their memory is being used.
This is likely because when you define /tmp in /etc/vfstab, you list the device being used as swap.
In the context of a tmpfs, swap means physical memory + physical swap. A tmpfs uses pageable kernel memory. This means that it will use kernel memory, but if required these pages can be paged to the swap device. Indeed if you put more data onto a tmpfs than you have physical memory, this is pretty much guaranteed.
If you are still not convinced try the following.
- In one window, start up the command:
$ vmstat 2
- In another window, make a 1GB file in /tmp:
$ mkfile 1g /tmp/testfile
- Watch what happens in the free memory column of the vmstat output.
There seems to be a misconception amongst some that a tmpfs is a way of stealing some of the disk we have allocated as swap to use as a filesystem without impacting memory. I’m sorry, this is not the case.
I had cause to be reminded of this article I wrote for On#Sun almost ten years ago, and just noticed that I had not transferred it to my blog.
/etc/system is a file that is read just before the root filesystem is mounted. It contains directives to the kernel about configuring the system. Going into depth on this topic could span multiple books so I’m just going to give some pointers and suggestions here.
Warning, Danger Will Robinson
Settings in this file can affect initial array and structure allocation, and indeed such things as the module load path and where the root directory actually resides.
It is possible to render your system unbootable if you are not careful. If this happens, try booting with the '-a' option: you will be prompted for the name of the system file, and answering /dev/null tells the system not to load /etc/system.
Just because you find a set of values works well on one system does not necessarily mean that they will work properly on another. This is especially true if we are looking at different releases of the operating system, or different hardware.
You will need to reboot your system before these new values will take effect.
The basic actions that can be taken are outlined in the comments of the file itself so I won’t go into them here.
The most common action is to set a value. Any number of products make suggestions for settings in here (e.g. Oracle, Veritas Volume Manager and Veritas File System, to name a few). Setting a value overrides the system default.
A practice I follow when working on this file is to add a comment explaining why and when I made a particular setting (remember that a comment in this file is prefixed by a '*', not a '#'). This is useful later down the track when I may have to upgrade the system; the setting may no longer have the desired effect, and it is good to know why we originally made it.
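As a sketch of that convention (the date, initials, tunable and value here are purely illustrative):

```
* 2011-03-14 PH: raised per Oracle install guide for this DB release.
* Revisit whether this is still needed at the next OS upgrade.
set shmsys:shminfo_shmmax = 4294967296
```

Six months later, the comment tells the next administrator who made the change, why, and whether it is safe to question it.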
I harp on this point but it is important.
Just because settings work on one machine does not make them directly transferable to another.
Take, for example, a setting like:

set lotsfree = 1024

This tells the kernel not to start running the page scanner (which pages memory out to disk) until free memory drops below 8MB (1024 x 8KB pages). While this setting may be fine on a machine with around 512MB of memory, it makes no sense on a machine with 10GB. Indeed, if such a machine is under memory pressure, by the time we get down to 8MB of free memory we have very little breathing space in which to recover before memory runs out. The end result is a system that grinds to a halt until it can free up some resources.
Oracle makes available the Solaris Tunable Parameters guide as a part of the documentation for each release of Solaris. It gives information about the default values and the uses of a lot of system parameters.
I had a customer call this week where we were provided a forced crashdump and asked to determine why the system was hung.
Normally when you are looking at a hung system you will find a lot of threads blocked on various locks, and most likely very little actually running on the system (unless there are threads spinning on busy-wait locks).
This vmcore showed none of that. In fact, we were seeing hundreds of threads actively on CPU in the second before the dump was forced.
This prompted the question back to the customer:
What exactly were you seeing that made you believe that the system was hung?
It took a few days to get a response, but the answer that came back was that they were not able to ssh into the system, and that when they tried to log in on the console they got the login prompt, but after typing "root" and hitting return the console was no longer responsive.
This description puts a whole new light on the “hang”. You immediately start thinking “name services”.
Looking at the crashdump: yes, the sshd processes are all in door calls to nscd, and nscd is idle, waiting on responses from the network.
Looking at the connections, I see a lot of connections to the secure LDAP port in CLOSE_WAIT, but more interestingly I am also seeing a few connections over the non-secure LDAP port to a different LDAP server just sitting open.
My feeling at this point is that we have either a non-responding LDAP server or one that is responding slowly, and the resolution is to investigate that server.
When you log a service ticket for a "system hang", it's great to get the forced crashdump first up, but it's even better to also get a description of what you observed that made you believe the system was hung.
Over the last couple of hours the physical location of the supportfiles.sun.com server changed. The benefit is that the machine is now in the same building as the machines that we use to analyse your uploads, so getting the data onto those machines is now substantially faster.
What do I have to do to take advantage of this?
If you are using DNS to look it up, nothing: the DNS entry has been changed over to the new address. However, if you are using the IP address directly, you need to start using the new one. We are still accepting uploads on the old server for the moment, but it is on a substantially slower link. The new address is 220.127.116.11.
I recently had cause to pass on an article that I wrote for the now defunct Australian Sun Customer magazine (On#Sun) on the subject of doors. It occurred to me that I really should put this on the blog. Hopefully this will give some insight as to why I think doors are really cool.
Where does this door go?
If you have had a glance through /etc you may have come across some files with door in their name. You may also have noticed calls to door functions if you have run truss over commands that interact with the name resolver routines or password entry lookups.
The Basic Idea (an example)
Imagine that you have an application that does two things. First, it provides a lookup function into a potentially slow database (e.g. the DNS). Second, it caches the results, to minimise having to make the slower calls.
There are already a number of ways that we could call the cached lookup function from a client (e.g. RPCs and sockets), but these require that we give up the CPU and wait for a response from another process. Even for a potentially fast operation, it could be some time before the client is next scheduled. Wouldn't it be nice if we could complete the operation within our time slice? Well, this is what the door interface accomplishes.
When you initialise a door server, a number of threads are made available to run a particular function within the server; I'll call this function the door function. These threads are created as if they had made a call to door_return() from within the door function. The server will associate a file and an open file descriptor with this function.
When the client initialises, it opens the door file and specifies the file descriptor when it calls door_call(), along with some buffers for arguments and return values. The kernel uses this file descriptor to work out how to call the door function in the server.
At this point the kernel gets a little clever. Execution is transferred directly to an idle door thread in the server process, which runs as if the door function had been called with the arguments that the client specified. As it runs in the server context, it has access to all of the global variables and other functions available to that process. When the door function is complete, instead of using return(), it calls door_return(). Execution is transferred back to the client, with the result returned in a buffer we passed to door_call(). The server thread is left sleeping in door_return(), waiting to service the next call.
If we did not have to give up the CPU in the door function, then we have just gained a major speed increase. If we did have to give it up, then we didn’t really lose anything, as the overhead is only small.
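The flow described above can be sketched in C. This is Solaris-only code (it needs <door.h>), error handling is omitted, and the door path and payload format are invented for illustration:

```c
#include <door.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define DOOR_PATH "/tmp/lookup_door"    /* hypothetical rendezvous file */

/* The door function: executed by a server thread on behalf of the caller. */
static void
lookup(void *cookie, char *argp, size_t arg_size, door_desc_t *dp,
    uint_t n_desc)
{
        char result[128];

        /* ... consult the cache, falling back to the slow database ... */
        (void) snprintf(result, sizeof (result), "answer for %s", argp);

        /* Hand the result back; this thread then sleeps in door_return(). */
        (void) door_return(result, strlen(result) + 1, NULL, 0);
}

/* Server side: create the door and attach it to a file. */
static void
serve(void)
{
        int did = door_create(lookup, NULL, 0);

        (void) close(open(DOOR_PATH, O_CREAT | O_RDWR, 0644));
        (void) fattach(did, DOOR_PATH);
        (void) pause();                 /* the door threads do all the work */
}

/* Client side: open the door file and call through it. */
static void
ask(const char *name)
{
        char rbuf[128];
        door_arg_t arg;
        int fd = open(DOOR_PATH, O_RDONLY);

        arg.data_ptr = (char *)name;
        arg.data_size = strlen(name) + 1;
        arg.desc_ptr = NULL;
        arg.desc_num = 0;
        arg.rbuf = rbuf;
        arg.rsize = sizeof (rbuf);

        if (door_call(fd, &arg) == 0)
                (void) printf("%s\n", arg.rbuf);
        (void) close(fd);
}
```

Note that the client's door_call() runs the server's lookup() directly on an idle door thread, which is where the time-slice win comes from.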
This is how services such as the name service cache daemon (nscd) work. Library functions such as getpwent(), and indeed any call whose behaviour is defined in /etc/nsswitch.conf, are implemented with door calls to nscd.
Syslog also uses this interface so that processes are not slowed down substantially by syslog calls: the door function simply places the request on a queue (a fast operation) for another syslog thread to look after, and then calls door_return(). (Correction: that's not actually how syslog uses it.)
For further information, see the man pages for door_create, door_info, door_return and door_call (section 3C in current Solaris releases).
So start 95% of the performance calls that I receive. They usually continue something like:
I have gathered some *stat data for you (eg the guds tool from Document 1285485.1), can you please root cause our problem?
So, do you think you could?
Neither can I. Based on this, my answer inevitably has to be "no".
Given this kind of problem statement, I have no idea about the expectations, the boundary conditions, or even the application. The answer may as well be “Performance problems? Consult your local Doctor for Viagra”. It’s really not a lot to go on.
So, what kind of problem description is going to allow me to start work on the issue that is being seen? I don't doubt that there really is an issue; it just needs to be pinned down somewhat.
What behavior exactly are you expecting to see?
Be specific, and use business metrics: for example run-time, response-time and throughput.
This helps us define exit criteria.
Now, let’s look at the system that is having problems.
How is what you are seeing different? Use the same type of metrics.
The answers to these two questions take us a long way towards being able to work a call.
Even more helpful are answers to questions like
Has this system ever worked to expectation?
If so, when did it start exhibiting this behavior?
Is the problem always present, or does it sometimes work to expectation?
If it sometimes works to expectation, when are you seeing the problem? Is there any discernible pattern?
Is the impact of the problem getting better, worse, or remaining constant?
What kind of differences are there between when the system was performing to expectation and when it is not?
Are there other machines where we could expect to see the same issue (e.g. similar usage and load) but are not seeing it? Again, what are the differences?
Once we start to gather information like this, we build up a much clearer picture of exactly what we need to investigate, and what we need to achieve so that both you and I agree that the problem has been solved.
Please help get that figure of poorly defined problem statements down from its current 95% value.
I upgraded my internal Solaris 11 build last night and this morning noticed that I was getting error popups from thunderbird like:
SSL received a record that exceeded the maximum permissible length.
Searching the web didn't help me a lot, except for one page which suggested that I try telnetting to port 993 on the server to see what it looked like.
It was when I did this, and saw a complaint about imapd not being able to open libssl.so.0.9.8, that I twigged that this must be the build in which we moved to OpenSSL 1.0.
This meant that I needed to rebuild imapd. I had already done most of the work here.
The sad thing was that something else had also changed: some structure elements in a (DIR *) now have names different to those imapd was expecting. Adding -D__USE_LEGACY_PROTOTYPES__ to the EXTRACFLAGS macro in the top-level Makefile allowed the build to complete. After putting the new binary into place, thunderbird is happy talking to this server again.
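For the record, the change amounted to one macro; something like this (a reconstruction — the exact layout of the imapd top-level Makefile may differ):

```make
# Top-level Makefile (sketch): ask libc for the legacy prototypes and
# structure member names so the old imapd source compiles unchanged.
EXTRACFLAGS = -D__USE_LEGACY_PROTOTYPES__
```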
I also needed to rebuild proxytunnel. I think that's everything I had that linked against libssl.so.0.9.8.