Archive for the ‘Solaris’ Category
I had cause to be reminded of this article I wrote for On#Sun almost ten years ago, and just noticed that I had not transferred it to my blog.
/etc/system is a file that is read just before the root filesystem is mounted. It contains directives to the kernel about configuring the system. Going into depth on this topic could span multiple books so I’m just going to give some pointers and suggestions here.
Warning, Danger Will Robinson
Settings can affect initial array and structure allocation, and indeed such things as the module load path and where the root directory actually resides.
It is possible to render your system unbootable if you are not careful. If this happens, you might try booting with the ‘-a’ option, which gives you the chance to tell the system not to load /etc/system.
Just because you find a set of values works well on one system does not necessarily mean that they will work properly on another. This is especially true if we are looking at different releases of the operating system, or different hardware.
You will need to reboot your system before these new values will take effect.
The basic actions that can be taken are outlined in the comments of the file itself so I won’t go into them here.
The most common action is to set a value. Any number of products make suggestions for settings in here (e.g. Oracle, Veritas Volume Manager and Filesystem, to name a few). Setting a value overrides the system default.
A practice that I make when working on this file is to place a comment explaining why and when I make a particular setting (remember that a comment in this file is prefixed by a ‘*’, not a ‘#’). This is useful later down the track when I may have to upgrade a system. It could be that the setting may actually not have the desired effect and it would be good to know why we originally did it.
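For example, an entry following this practice might look like the one below (the date, the reason and the value are invented for illustration; shmsys:shminfo_shmmax is just a commonly tuned parameter):

```
* 2011-03-01: raised shared memory limit per the Oracle database
* installation guide for this release; revisit on the next OS upgrade.
set shmsys:shminfo_shmmax=4294967295
```

Note the ‘*’ comment prefix, and that the comment records both when and why the change was made.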
I harp on this point but it is important.
Just because settings work on one machine does not make them directly transferable to another.
Take, for example, a line like set lotsfree=1024 (the tunable being described here). This tells the kernel not to start running the page scanner (to start paging out memory to disc) until free memory drops below 8mb (1024 x 8k pages). While this setting may be fine on a machine with around 512mb of memory, it does not make sense for a machine with 10gb. Indeed, if that machine is under memory pressure, by the time we get down to 8mb of free memory we have very little breathing space left in which to recover before memory runs out. The end result is a system that grinds to a halt until it can free up some resources.
Oracle makes available the Solaris Tunable Parameters guide as a part of the documentation for each release of Solaris. It gives information about the default values and the uses of a lot of system parameters.
I had a customer call this week where we were provided a forced crashdump and asked to determine why the system was hung.
Normally when you are looking at a hung system, you will find a lot of threads blocked on various locks, and most likely very little actually running on the system (unless it’s threads spinning on busy wait type locks).
This vmcore showed none of that. In fact we were seeing hundreds of threads actively on cpu in the second before the dump was forced.
This prompted the question back to the customer:
It took a few days to get a response, but the response that I got back was that they were not able to ssh into the system and when they tried to login to the console, they got the login prompt, but after typing “root” and hitting return, the console was no longer responsive.
This description puts a whole new light on the “hang”. You immediately start thinking “name services”.
Looking at the crashdump, yes the sshds are all in door calls to nscd, and nscd is idle waiting on responses from the network.
Looking at the connections I see a lot of connections to the secure ldap port in CLOSE_WAIT, but more interestingly I am seeing a few connections over the non-secure ldap port to a different LDAP server just sitting open.
My feeling at this point is that we have either a non-responding LDAP server, or one that is responding slowly, the resolution being to investigate that server.
When you log a service ticket for a “system hang”, it’s great to get the forced crashdump first up, but it’s even better to get a description of what you observed that made you believe the system was hung.
I recently had cause to pass on an article that I wrote for the now defunct Australian Sun Customer magazine (On#Sun) on the subject of doors. It occurred to me that I really should put this on the blog. Hopefully this will give some insight as to why I think doors are really cool.
Where does this door go?
If you have had a glance through
/etc you may have come across some files with door in their name. You may also have noticed calls to door functions if you have run truss over commands that interact with the name resolver routines or password entry lookup.
The Basic Idea (an example)
Imagine that you have an application that does two things. First, it provides a lookup function into a potentially slow database (e.g. the DNS). Second, it caches the results to minimise having to make the slower calls.
There are already a number of ways that we could call the cached lookup function from a client (e.g. RPCs & sockets), but these require that we give up the cpu and wait for a response from another process. Even for a potentially fast operation, it could be some time
before the client is next scheduled. Wouldn’t it be nice if we could complete the operation within our time slice? Well, this is what the door interface accomplishes.
When you initialise a door server, a number of threads are made available to run a particular function within the server. I’ll call this function the door function. These threads are created as if they had made a call to
door_return() from within the door function. The server will associate a file and an open file descriptor with this function.
When the client initialises, it opens the door file and specifies the file descriptor when it calls
door_call(), along with some buffers for arguments and return values. The kernel uses this file descriptor to work out how to call the door function in the server.
At this point the kernel gets a little clever. Execution is transferred directly to an idle door thread in the server process, which runs as if the door function had been called with the arguments that the client specified. As it runs in the server context, it has access to all of the
global variables and other functions available to that process. When the door function is complete, instead of using
return(), it calls
door_return(). Execution is transferred back to the client with the result returned in a buffer we passed
door_call(). The server thread is left sleeping in door_return(), waiting for the next invocation.
If we did not have to give up the CPU in the door function, then we have just gained a major speed increase. If we did have to give it up, then we didn’t really lose anything, as the overhead is only small.
This is how services such as the name service cache daemon (nscd) work. Library functions such as
getpwent() and indeed any call whose behaviour is defined in /etc/nsswitch.conf are implemented with door calls to nscd.
Syslog also uses this interface so that processes are not slowed down substantially because of syslog calls. The door function simply places the request in a queue (a fast operation) for another syslog thread to look after and then calls door_return()
(that’s actually not how syslog uses it).
For further information see the man pages for door_create, door_info, door_return and door_call.
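To make this concrete, here is a minimal sketch of a door server and client. It is Solaris-specific; the /tmp/updoor path and the toy upper-casing service are invented for illustration, and client and server live in the one process purely to keep the sketch short (in real use, as with nscd, they are separate processes):

```c
#include <ctype.h>
#include <door.h>
#include <fcntl.h>
#include <stdio.h>
#include <stropts.h>
#include <unistd.h>

#define DOOR_PATH "/tmp/updoor" /* invented path for this example */

/*
 * The door function: an idle server thread runs this each time a
 * client calls door_call(). It upper-cases the string it is handed.
 */
static void
upcase_func(void *cookie, char *argp, size_t arg_size,
    door_desc_t *dp, uint_t n_desc)
{
	char buf[64];
	size_t i;

	for (i = 0; i < arg_size && i < sizeof (buf) - 1; i++)
		buf[i] = toupper((unsigned char)argp[i]);
	buf[i] = '\0';

	/* Hand the result back and go back to sleep in door_return(). */
	(void) door_return(buf, i + 1, NULL, 0);
}

int
main(void)
{
	int did, fd;
	door_arg_t da;
	char res[64];

	/* Server side: create the door and attach it to a file. */
	did = door_create(upcase_func, NULL, 0);
	(void) close(open(DOOR_PATH, O_CREAT | O_RDWR, 0644));
	(void) fattach(did, DOOR_PATH);

	/* Client side: open the door file and make the call. */
	fd = open(DOOR_PATH, O_RDONLY);
	da.data_ptr = "hello";
	da.data_size = sizeof ("hello");
	da.desc_ptr = NULL;
	da.desc_num = 0;
	da.rbuf = res;
	da.rsize = sizeof (res);
	if (door_call(fd, &da) == 0)
		(void) printf("%s\n", da.data_ptr);

	(void) fdetach(DOOR_PATH);
	return (0);
}
```

This is essentially the shape of the nscd arrangement described above, with a cache lookup in place of the toy upper-casing function. On older releases you may need to link with -ldoor.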
So start 95% of the performance calls that I receive. They usually continue something like:
I have gathered some *stat data for you (eg the guds tool from Document 1285485.1), can you please root cause our problem?
So, do you think you could?
Neither can I. Based on this, my answer inevitably has to be “No”.
Given this kind of problem statement, I have no idea about the expectations, the boundary conditions, or even the application. The answer may as well be “Performance problems? Consult your local Doctor for Viagra”. It’s really not a lot to go on.
So, what kind of problem description is going to allow me to start work on the issue that is being seen? I don’t doubt that there really is an issue; it just needs to be pinned down somewhat.
What behavior exactly are you expecting to see?
Be specific and use business metrics. For example “run-time”, “response-time” and “throughput”.
This helps us define exit criteria.
Now, let’s look at the system that is having problems.
How is what you are seeing different? Use the same type of metrics.
The answers to these two questions take us a long way towards being able to work a call.
Even more helpful are answers to questions like
Has this system ever worked to expectation?
If so, when did it start exhibiting this behavior?
Is the problem always present, or does it sometimes work to expectation?
If it sometimes works to expectation, when are you seeing the problem? Is there any discernible pattern?
Is the impact of the problem getting better, worse, or remaining constant?
What kind of differences are there between when the system was performing to expectation and when it is not?
Are there other machines where we could expect to see the same issue (e.g. similar usage and load), but are not? Again, differences?
Once we start to gather information like this, we build up a much clearer picture of exactly what we need to investigate, and what we need to achieve so that both you and I agree that the problem has been solved.
Please help get that figure of poorly defined problem statements down from its current 95% value.
I upgraded my internal Solaris 11 build last night and this morning noticed that I was getting error popups from thunderbird like:
SSL received a record that exceeded the maximum permissible length.
Searching the web didn’t help me a lot except for this one which suggested that I try telneting to port 993 on the server to see what it looked like.
It was when I did this and saw a complaint about imapd not being able to open libssl.so.0.9.8 that I twigged that this must have been the build where we moved to OpenSSL 1.0.
This meant that I needed to rebuild imapd. Well, I had already done most of the work here.
The sad thing was that it looks like something else changed, and some structure elements have names different to what imapd was expecting in a (DIR *). Adding -D__USE_LEGACY_PROTOTYPES__ to the EXTRACFLAGS macro in the top level Makefile allowed the build to complete. After putting the new binary into place, thunderbird is happy talking to this server again.
I also needed to rebuild proxytunnel. I think that’s all that I had that linked against libssl.so.0.9.8.
After an experience I had yesterday, I need to say a little more than I did in Nevada to OpenSolaris Sun Ray on SPARC (part 5 – Sun Ray Server 4.2).
It seems that I missed something.
Part of the configuration that is done at install time sets up a small LDAP server, but instead of pointing at localhost, it points at the machine name. In general this is not a problem. Unfortunately as I moved the disk image from one machine to another, changing the host information, I didn’t realise that it was still talking to the server on my lab machine that I had used to build the image.
This was not a problem until the other night when someone else booked that machine and installed something else on it. All of a sudden I could no longer get access to my Sun Ray sessions.
I spent a while trying to address the problem, but didn’t get very far (probably because I don’t have a lot of skills in the Sun Ray area).
I had noticed some blog postings about a new release of the Sun Ray software (5.2), which includes the 4.3 Sun Ray Server software that I had been hearing good things about with regard to Solaris 11.
I figured it was time to bite the bullet.
The first thing to do was to clone myself another boot environment so that if it did go really badly wrong I could go back and attempt to recover from the current broken point.
# beadm create Solaris11-sr5.2
# beadm activate Solaris11-sr5.2
Have to love ZFS root for instant clones.
I then rebooted into that new boot environment and removed the 4.2 software (I found the instructions for this are in the installation guide for 4.2).
# cd /opt/SUNWut/sbin
# ./utconfig -u
# cd /
# /opt/SUNWut/sbin/utinstall -u
Well that was pretty painless.
I had previously downloaded and unzipped the software so all I needed to do now was to run
and pretty much accept the defaults. This was an incredibly painless install in comparison to installing the previous version (well done folks), although in hindsight I should have stuck to the defaults a little more closely than I did, as I found that I couldn’t get the DTU to connect; indeed it would either hang or actually reboot the DTU.
Looking in /var/opt/SUNWut/log/messages, I saw the following
May 26 22:29:23 vesvi utauthd: [ID 355619 user.info] WatchIO UNEXPECTED: Connection from 10.191.128.12 is not allowed
May 26 22:29:23 vesvi utauthd: [ID 572381 user.info] WatchIO UNEXPECTED: 10.191.128.12 protocolError: networkNotAllowed
May 26 22:29:23 vesvi utauthd: [ID 303596 user.info] WatchIO UNEXPECTED: WatchIO.doRemove(null)
and it suddenly twigged that I’d answered the allow LAN connections question wrong.
Unfortunately I found that I can’t use
utadm to fix this as I don’t have the DHCP packages installed on this machine (I have to see if there is a bug logged on that), but if you look at my previous writeup I had to address exactly this before. You have to make allowLANConnections true in /etc/opt/SUNWut/auth.props
# Allow LAN Connections
# This parameter enforces the policy that only terminals on the
# private Sunray interconnect can attach to the server. Connection
# attempts from other network interfaces, including the local loopback
# interface, will be rejected.
#
allowLANConnections = true
Doing a cold restart of the software allowed me to start using my Sun Ray at home again
# /opt/SUNWut/sbin/utrestart -c
It finally got to me. I’ve got a nice USB audio adapter that I use at home on my Tecra M11, but I was only ever able to get firefox to use the builtin audio on Solaris 11. I could make it work under Virtual Box by importing it, but I have a nice sound setup in my office and I really wanted to use the Roland/Cakewalk UA-1G natively.
Searching the web found me lots of people asking the question and nothing in the way of answers.
I’d already tried
# cd /dev
# rm audio audioctl
# ln -s sound/1 audio
# ln -s sound/1ctl audioctl
but flash was still playing through the internal speakers.
The answer came when I ran pfiles on the firefox-bin process: I noticed that it had the dsp device for the internal audio controller open.
What I had forgotten was
# rm dsp
# ln -s dsp1 dsp
I went and started a youtube video and had to immediately halt it as the volume through the other device had been set WAY too high, but yea that’s all it took.
The creation of a script called audio that takes an argument of the device is then trivial, and left as an exercise for the reader (yes I’ve already written one).
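For anyone who doesn’t want to do the exercise, a minimal sketch of such a script might look like this (the function name and the optional directory argument are my own additions so it can be tried somewhere other than /dev; the script I actually wrote may differ):

```shell
#!/bin/sh
# audio - repoint the default audio device links at a given device.
# Usage: audio <device-number> [devdir]
# e.g. "audio 1" makes /dev/audio -> sound/1, /dev/audioctl -> sound/1ctl
# and /dev/dsp -> dsp1.  The second argument (default /dev) lets you
# exercise the script against a scratch directory instead.
switch_audio() {
	n=$1
	devdir=${2:-/dev}
	cd "$devdir" || return 1
	rm -f audio audioctl dsp
	ln -s "sound/$n" audio
	ln -s "sound/${n}ctl" audioctl
	ln -s "dsp$n" dsp
}

# When run as a script with an argument, do the switch for real.
if [ -n "$1" ]; then
	switch_audio "$@"
fi
```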
When you give your customers the list of “vulnerabilities” to take up with their vendor, can you please make sure of a couple of things?
- Actually identify the security vulnerability with a reference so we don’t have to try to interpret your vague description of it (a pointer to one of the sites that reports security vulnerabilities isn’t that hard, is it?)
- Verify that the system really is vulnerable. As I pointed out in an earlier blog, looking at the version label is not always enough to say that a version is vulnerable. Let alone the fact that sometimes even the best of tools get false positives.
One call I have been dealing with over the last few days identified that a customer was vulnerable to five different items. After working out what was really meant by three of them I was able to determine that they were vulnerabilities that we put patches out for back in 2003 and the customer had patches on the system that included these fixes. If the scanner software had probed the vulnerability it would have seen the product in question safe. Of the other two, “rexec” was commented out of /etc/inetd.conf and netstat -a showed nothing listening on port 512, and they actually did still have rshd running, which they needed to turn off.
Because of the vagueness of the descriptions I was given I had to spend quite some time researching three of those vulnerabilities to find exactly what they meant (not helped by how old they were).
You can probably imagine how pleased I was at having to spend time doing this research when I have other calls in my queue that really also needed attention, only to find out that it could all have been avoided.
I had a few support calls today and yesterday with folks asking us about their scanners reporting:
Found the PWS-SPyEye!env.a trojan !!!
against a lot of different files on Solaris ranging from database install executables to parts of a python patch.
I found a thread on the McAfee community site discussing this. It wasn’t only Solaris that was having the problem. A few people had run tests against files which had not been modified (and were stored on DVD) since before the time this trojan appeared, and those files were still being reported as infected.
I had another look this morning and it appears that these reports only occur on version 6282 of the virus definitions file, and that today’s file (version 6286) no longer shows these files as hits.
Before logging an Oracle support call if you see this, could you try updating the virus definitions file to at least version 6286?
Kudos to McAfee for sorting this out quickly.
It’s been a long path to get here, including a little experimenting with having an Ultra45 as the final destination box (the fact that it only had 1gb memory in it turned out to be a show stopper for any kind of desktop work).
And yes I know it’s not called OpenSolaris anymore, but I really wanted to stick with the title to keep these articles together.
Last Wednesday I bit the bullet and migrated back to my original hardware which was slightly better specced than what I had been using in the lab.
I did learn some things in this final step which will hopefully be beneficial if anyone ever has to do something like this again.
Cloning the boot disk
While I could have moved the 72gb disk I had in the lab machine directly into the target box, I was reluctant to do so as I did not have another 72gb disk to use as a mirror and I was under the (mistaken – see later) impression that the target had a pair of 36 gb disks in it.
As we had trouble sourcing a pair of 72gb disks, I sourced a pair of 142gb ones and put one of them into the second disk slot in the lab box.
You cannot hot swap disks in a Sun Blade 2000. There is a microswitch that powers down the machine when you take the side off. I discovered this by watching the fans spin down on side removal. Sigh.
After powering up and booting again, we need to add this disk as a mirror. It’s not important that it is larger than the disk I am mirroring; ZFS will only use what it needs on the larger disk to mirror the smaller. I also didn’t want to partition it to match sizes, as once I was done I wanted to grow the zpool to the entire available size.
Well actually I did adjust the partition tables, but only to give me the full disk on slice 0 (yes I could have used slice 2, but neatness counts).
OK, we add c6t2d0s0 as a mirror to rpool:
# zpool attach rpool c6t1d0s0 c6t2d0s0
and then we wait for it to resilver.
I also updated hosts so that it also had the address of the machine that I was going to move the disk to.
I was not sure about whether or not I could boot a detached zpool mirror or if I had to simply pull the disk and move it to the new machine.
Don’t detach the mirror before moving it from the source system to the target. The boot will fail with a message like “Failed to boot detached mirror”.
Move the disk back to the source machine and re-attach:
# zpool attach rpool c6t1d0s0 c6t2d0s0
and wait another few hours for resilvering.
This time on putting this disk into slot 1, the machine booted.
Brought it up single user and modified /etc/hostname.eri0 and /etc/nodename. Rebooted to be sure everything took. Why was it still coming up with the source machine name, and why can it not contact the local NIS server?
Current builds of Solaris 11 development have moved the nodename to be a property in SMF.
Looking at /lib/svc/method/identity-node we see both how to set this AND why /etc/nodename was not helpful.
/etc/nodename is only used if there is no SMF property for config/nodename in svc:/system/identity:node. When it is used here the startup method removes the file after using it. If the property exists, it will never look at that file again. To change this property you need to use svccfg.
# svccfg -s svc:/system/identity:node setprop config/nodename = astring: vesvi
Where vesvi was the name of my target machine.
The method also does a:
svcadm refresh svc:/system/identity:node
Which I did and then rebooted again for good measure to make sure the interfaces came up correctly.
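As a sanity check, the property can be read back with svcprop (vesvi being the nodename set above):

```
# svcprop -p config/nodename svc:/system/identity:node
vesvi
```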
Hmmmm, it still isn’t seeing the NIS servers. DOH! In our lab we have our routers advertise themselves. On the normal network, router addresses are handed out with DHCP. As I have a static address, …
Booted back to single user and added the router address to /etc/defaultrouter and things looked much better. Indeed it looks like Sun Ray had come up. I was worried that I would need to dig into the guts of that configuration as well, but it appears not (though at a later time I will go through my notes to verify this).
I mentioned earlier that I thought that the target machine only had a pair of 36gb disks in it. When I took them out I noticed that they were actually 72gb disks. *CLICK* when I originally migrated to this machine from my old Ultra 80 when it died, I had 36gb disks, I must have done the mirror trick there too. What I had forgotten to do was to grow the zpool.
# zpool set autoexpand=on rpool
and we now have a 142gb non-mirrored rpool.
The last major step was to put the other 142gb disk in the machine and set up the mirror. Before I did so I checked the current configuration:
  pool: rpool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c6t2d0s0  ONLINE       0     0     0

errors: No known data errors
Hang on, I said that I had put the disk into slot 1. Oh yes, c6t2d0s0 is the label on the disk; it just happened not to reflect the slot it was actually installed in. This could have made putting the other disk into c6t2d0s0 interesting.
On powering the machine down, I moved that disk into slot 2 and put the new disk into slot 1. It’s nice how ZFS really doesn’t care where you put the disks.
This time I booted from disk2 at OBP and it came up properly. Instead of working standing up at a vdu attached to the serial port of this machine, I went back to my desk and logged into a Sun Ray session on it.
Adding the other side of the mirror:
# zpool attach rpool c6t2d0s0 c6t1d0s0
and wait for the resilvering (which only took 44 minutes this time).
  pool: rpool
 state: ONLINE
  scan: resilvered 34.6G in 0h44m with 0 errors on Wed Mar  9 21:46:45 2011
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c6t1d0s0  ONLINE       0     0     0
            c6t2d0s0  ONLINE       0     0     0

errors: No known data errors
I’m now running on the original hardware with something much lighter than the old nevada build I had on it and it looks like I have all the services that I need.
I will say that after putting up with swapping whenever I wanted to do something on the Ultra45, the SB2000 with 4gb feels so much better.