Friday, July 19, 2013

Fresh 2008 R2 install: SQL Server Reporting Services steals ports 80 and 443

I can't believe I have the misfortune of having to use a Windows server for anything.

Fresh install of Windows 2008 R2 (again: SIGH), and ports 80 and 443 are in use. By what?? IIS, Web Deployment Agent, Skype - none of this crap is installed or running. netstat -abno shows something listening on PID 4, but "can not obtain ownership information". PID 4 is the kernel. What? What?

It's SQL Server Reporting Services, which registers its URLs with the kernel-mode http.sys driver rather than running its own user-mode listener - that's why the listener shows up as PID 4 (System). Stop the service and switch it to manual startup.
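From an elevated command prompt, something like this should do it (the service name is "ReportServer" on a default 2008 R2-era install, but check with sc query first; these are standard Windows service commands, not something I've re-verified on every SQL Server version):

```
net stop ReportServer
sc config ReportServer start= demand
```

`netsh http show urlacl` will list the URL reservations http.sys is holding, if you want to confirm the ports are actually free afterwards.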

Jerks. You're jerks, Windows.


Wednesday, August 8, 2012

Large disks in CentOS

I hardly ever have to deal with truly large RAIDs or disks in a typical server - that's what NFS and Isilon and suchlike are for, amirite? So here's how you do it.

Dealing with a large boot disk/RAID during install

If you've got a true RAID, you should be able to create a small (e.g. 10GB, or whatever you judge appropriate) RAID just for the system disk, and another for the data. Then you can specify that Linux be installed on the small disk, and the installer won't choke. Hooray!

Note that if you're dealing with HP products, the "ORCA", the RAID utility available during boot, is not sophisticated enough to carve out a small RAID this way. You will need to download the HP Offline Array Configuration Utility (the "ACU") and burn it to a CD or bootable USB key. If you're going the USB key route, note that the key has to be written with the HP USB Key utility, which for some reason will not write this particular utility from an image, only from a CD (or an image mounted as a CD). Have fun!

Or, you may have a single 3TB hard drive, or a couple of 3TB hard drives you'd like to spread a system across in a refurbished desktop tower. I don't have great notes on this and can't go into much detail - apologies. I can tell you that this is much less of a pain with the CentOS 6 installer than with the CentOS 5 installer. With CentOS 5, you want to use a utility like parted to manually create a boot partition and a system disk partition. Any large data partitions should be labelled "gpt". (I will go into a few more details about how to use parted in the next section.) Note that the installer will refuse to use more than 2TB of the boot disk, no matter what. Oh well. With CentOS 6 you have similar restrictions, but you don't have to prelabel the disks.

Adding large disks/RAIDs post-install

Run parted on the new RAID, like so:

parted /dev/sdc

... where sdc is the new device that you haven't seen before. If you are baffled about what the new device is called, check dmesg and/or just ls /dev and look for new, unused devices. Next, make the label:


(parted) mklabel gpt
(parted) print
Model: HP LOGICAL VOLUME (scsi)
Disk /dev/sdc: 33.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags


And now make the first partition. With disks up to a certain size, the trick to avoiding the dreaded "Warning: the resulting partition is not properly aligned for best performance" is to use "1" for the start and "-1" for the end. With more massive disks, however, this is apparently insufficient, due to the amount of metadata the disk needs. You can laboriously calculate by hand how big 34 sectors are for your disk, call that X, and set the start to X and the end to -X. Or you can make parted do it for you by using "0%" and "100%", then running align-check:


(parted) mkpart primary 1 -1                                            
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? cancel                                                     
(parted) mkpart                                                           
Partition name?  []? primary                                            
File system type?  [ext2]? ext4                                           
Start? 0%                                                                 
End? 100%                                                                 
(parted) align-check opt 1                                              
(parted) print                                                          
Model: HP LOGICAL VOLUME (scsi)
Disk /dev/sdc: 30.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  30.0TB  30.0TB               primary

(parted)


Next you'll expect to run mkfs, right? However:


# mkfs.ext4 /dev/sdc1
mke2fs 1.41.12 (17-May-2010)
mkfs.ext4: Size of device /dev/sdc1 too big to be expressed in 32 bits
using a blocksize of 4096.
# mkfs.ext4 -b 8192 /dev/sdc1
Warning: blocksize 8192 not usable on most systems.
mke2fs 1.41.12 (17-May-2010)
mkfs.ext4: 8192-byte blocks too big for system (max 4096)
Proceed anyway? (y,n) n


Oops! Even though ext4 on a 64-bit system should theoretically be able to support more than 16TB (the 32-bit limit), no stable version of ext4 does yet (there are hacks available). Also, CentOS at least does not support larger block sizes. So you will need to use XFS instead. If you're on CentOS 5 and don't have mkfs.xfs available, set enabled=1 in the [centosplus] section of /etc/yum.repos.d/CentOS-Base.repo, then yum install kmod-xfs xfsprogs. In case you missed it: if you're still running a 32-bit system for some horrible reason, do not try to have a single filesystem greater than 16TB. It may get created without errors, but you will start experiencing data loss after you fill up the first 16TB, since silent errors that result in data loss are awesome.
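That 16TB figure is just 2^32 block addresses times the 4096-byte block size; a quick arithmetic check (bash):

```shell
# 2^32 blocks * 4096 bytes/block = 2^44 bytes = 16 TiB
echo $(( 2 ** 32 * 4096 / 2 ** 40 ))   # prints 16 (TiB)
```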

# mkfs.xfs /dev/sdc1
meta-data=/dev/sdc1              isize=256    agcount=32, agsize=228924472 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=7325583104, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
#

You can now mount your new filesystem!

# mkdir /bigdisk
# mount /dev/sdc1 /bigdisk
# df -h /bigdisk
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdc1              28T  34M   28T   1% /bigdisk

And add the appropriate line to /etc/fstab:

/dev/sdc1      /bigdisk       xfs     defaults   0 0

Hooray!


Friday, June 29, 2012

Apache shows "welcome" message instead of directory index for virtual host root

By default on CentOS, Apache will show the "welcome" page instead of the directory index for the virtual host root, even if you've added "Options Indexes" in the root directory definition in httpd.conf or in the .htaccess file for that directory.

So annoying. Seriously guys.

The AMAZING SECRET is /etc/httpd/conf.d/welcome.conf - this overrides the index option for any virtual host root. You can either comment out all the lines or edit the LocationMatch regular expression to not match a given host.
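The blunt-instrument version of the fix, sketched on a scratch copy rather than the live file (the stock welcome.conf contents below are approximate; on the real box, edit /etc/httpd/conf.d/welcome.conf and then service httpd reload):

```shell
# Roughly what the stock welcome.conf contains; using a scratch copy here.
cat > welcome.conf.demo <<'EOF'
<LocationMatch "^/+$">
    Options -Indexes
    ErrorDocument 403 /error/noindex.html
</LocationMatch>
EOF

# Comment out every line, which disables the welcome-page override.
sed -i 's/^/#/' welcome.conf.demo
cat welcome.conf.demo
```

Keep in mind a later httpd package update may reinstall the stock file, so check it again after yum updates.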

Wednesday, June 13, 2012

force default gateway through eth0

On a CentOS 5.8 machine, I needed to keep the default gateway route through eth0, even though both eth0 and eth1 were receiving DHCP. It took a surprisingly long time to Google that in CentOS, you can simply use GATEWAYDEV=eth0 in /etc/sysconfig/network rather than mucking with /etc/sysconfig/network-scripts/, etc.

It is a little annoying that although CentOS will pick up the hostname from the first dhcp upped interface (eth0 in this case), it will set the default route through the last dhcp upped interface (eth1 in this case).
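For reference, the whole fix is one line in /etc/sysconfig/network (a sketch; NETWORKING and HOSTNAME are whatever your file already has):

```
NETWORKING=yes
HOSTNAME=myhost.example.com
GATEWAYDEV=eth0
```

A `service network restart` (or a reboot) applies it.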

Thursday, February 16, 2012

Fedora app start pauses on bean defs for DefaultExternalContentManager

I had copied a VM image running Fedora (note: the repository application, not the OS). The /data directory was pointing to NFS storage, so I made a duplicate of the other VM's directory for this new VM, and created a new blank MySQL database. I emptied out the data/resourceIndex directory, but forgot to also empty out the data/activemq directory. That caused the following mysterious problems:

The server/bin/fedora-rebuild.sh script was stalling just before loading the menu of options, and unable to continue, but not showing any errors.

If I tried starting up Fedora, server/log/fedora.log would pause at:
INFO 2012-02-14 14:51:55.210 [main] (Server) Loading bean definitions for org.fcrepo.server.storage.DefaultExternalContentManager

and tomcat/logs/catalina.out would pause at:
INFO: Deploying configuration descriptor fedora.xml

There were no actual errors.

So, note to self: when cloning a Fedora instance, you need to empty both the data/resourceIndex AND the data/activemq directories.
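The cleanup amounts to something like this (FEDORA_HOME is a hypothetical path - point it at the clone's actual install directory):

```shell
# FEDORA_HOME is hypothetical; point it at the clone's install.
FEDORA_HOME="${FEDORA_HOME:-/opt/fedora}"

# Both directories hold state copied from the source VM; both must be
# emptied, or fedora-rebuild.sh and startup hang without logging errors.
for d in resourceIndex activemq; do
    rm -rf "$FEDORA_HOME/data/$d"/* 2>/dev/null || true
done
```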

Wednesday, January 25, 2012

LogFormat for nginx in awstats

I wanted to run a large pre-existing nginx log through awstats. The log format given here didn't work for me; instead I used
%host - - %time1 %methodurl %code %bytesd %refererquot %uaquot

for logs that looked like:
host.host.host.host - - [20/Apr/2011:15:54:40 -0400] "GET / HTTP/1.1" 200 151 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_7; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.205 Safari/534.16"
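As a config fragment, the relevant awstats directives look like this (the file path and site name are examples; only the LogFormat line is the point):

```
# /etc/awstats/awstats.mysite.conf
LogFile="/var/log/nginx/access.log"
LogType=W
LogFormat="%host - - %time1 %methodurl %code %bytesd %refererquot %uaquot"
```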

Tuesday, December 6, 2011

HP DL140 with SATA disks: very slow I/O

An HP DL140 G3 I was repurposing originally came with SAS disks, but I decided (ha! ha!) to use inexpensive 2TB SATA disks instead, on the rebuild. I rebuilt using CentOS 5.7. When the system came up, iostat was consistently giving iowait in the low teens, even when the machine was more or less idle, and heavy writes showed speeds of about 7.6 MB/s.

WTF.

I updated the system ROM and storage controller firmware - no luck. The DL140 comes with this horrible arcane RAID software - it only supports a RAID 0 or RAID 1 config - so I tried both, and then neither (no RAID at all); nothing helped. (Yes, this was a pretty labor-intensive process.) The controller, at least, does properly identify the disks as SATA. Eventually I reinstalled CentOS with no hardware RAID, letting CentOS do a software RAID, because the performance seemed literally identical regardless.

Using HP's (RedHat) driver didn't seem to matter in any of these cases. Per this old thread, I found that I was already using ata_piix instead of the generic IDE driver. I believe I had exhausted all the DMA enabling/disabling options in the BIOS during the BIOS updating spree (but I suspect I do not fully understand everything in this thread).

Lsiutils couldn't control the RAID - naturally, because there was no RAID - but I was able to enable a write cache using sdparm instead. sdparm --set=WCE /dev/sda made the average I/O immediately leap up to around 20 MB/sec, which was suddenly usable. I experimented with schedulers a bit, and found that noop was a little more consistent in keeping the I/O above 21 MB/sec - cfq was sometimes down to 18.

This speed is still pretty crappy. I can only assume there's still some configuration setting that can make it better. There may ultimately be a hardware issue, too - I remember reading someplace that using 6 Gb/sec SATA disks with a motherboard that only supports 3 Gb/sec SATA is actually slower than using 3 Gb/sec SATA disks, though I can no longer find the reference.