Setting up an LVM LV quickly..

I’m playing with Proxmox right now and needed to set up LVM locally on each node, so I figured why not script it. I’ve ended up with this abomination of a script. You probably want to do better, but it’s a start:

set -x;
lvmid=`hostname`_localLVM; 
pvcreate /dev/sda; 
pvs; 
vgcreate vg_$lvmid /dev/sda; 
vgs;
lvcreate -n lv_$lvmid -l 100%FREE vg_$lvmid;
lvs; 
mkdir -p /local/vg_$lvmid-lv_$lvmid; 
mkfs.ext4 /dev/mapper/vg_$lvmid-lv_$lvmid;
echo /dev/mapper/vg_$lvmid-lv_$lvmid /local/vg_$lvmid-lv_$lvmid ext4 defaults 0 0 >> /etc/fstab; 
mount /local/vg_$lvmid-lv_$lvmid; 
df -h

Or on one line..

set -x;lvmid=`hostname`_localLVM; pvcreate /dev/sda; pvs; vgcreate vg_$lvmid /dev/sda; vgs; lvcreate -n lv_$lvmid -l 100%FREE vg_$lvmid;lvs; mkdir -p /local/vg_$lvmid-lv_$lvmid; mkfs.ext4 /dev/mapper/vg_$lvmid-lv_$lvmid;echo /dev/mapper/vg_$lvmid-lv_$lvmid /local/vg_$lvmid-lv_$lvmid ext4 defaults 0 0 >> /etc/fstab; mount /local/vg_$lvmid-lv_$lvmid; df -h
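
If you want something marginally less horrible, here’s a hedged sketch of the same steps with the target disk pulled out into a variable. It assumes bash, and that /dev/sda really is the disk you’re happy to wipe:

#!/usr/bin/env bash
set -euxo pipefail

disk=/dev/sda                                   # the disk to consume; change this
lvmid="$(hostname)_localLVM"
mapper="/dev/mapper/vg_${lvmid}-lv_${lvmid}"
mnt="/local/vg_${lvmid}-lv_${lvmid}"

pvcreate "$disk"
vgcreate "vg_${lvmid}" "$disk"
lvcreate -n "lv_${lvmid}" -l 100%FREE "vg_${lvmid}"
mkfs.ext4 "$mapper"
mkdir -p "$mnt"
echo "$mapper $mnt ext4 defaults 0 0" >> /etc/fstab
mount "$mnt"
df -h "$mnt"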

Hope this helps someone in the future.. maybe me!

Setting up a PowerBook G4 12-inch, 1.5GHz in 2020

Some time ago, I received a PowerBook G4 12-inch from a friend. As is healthy, the drive had been wiped, but not being one to keep too much old stuff around, I didn’t have install media for Leopard (10.5). It went on the back burner for a while, but I recently received (back) some old storage devices which had been on ice in the WA Wheatbelt for about 10 years, and felt I had the right stuff together to give it a go again.

  1. Downloaded the 10.5.4 Leopard installer from Archive.org
  2. Attached a 30GB PATA drive via a PATA-to-USB 2.0 dongle
  3. Using 10.15 Catalina, partitioned the drive with an Apple Partition Map into 10GB + 20GB partitions (see the diskutil sketch after these steps)
  4. Mounted the Leopard installer ISO downloaded earlier
  5. Used asr to restore the contents of the ISO to the 10GB partition:
bash-3.2$ sudo asr restore --source /Volumes/Mac\ OS\ X\ Install\ DVD/ --target /Volumes/Emptied/ --erase
	Validating target...done
	Validating source...done
	Erase contents of /dev/disk3s3 (/Volumes/Emptied)? [ny]: y
	Validating sizes...done
	Restoring  ....10....20....30....40....50....60....70....80....90....100
	Verifying  ....10....20....30....40....50....60....70....80....90....100
	Restored target device is /dev/disk3s3.
	Remounting target volume...done
  6. Tried to boot the PowerBook using the PATA drive on the USB dongle, which failed
  7. Moved the PATA drive from the USB dongle to the Sarotech Cutie FireWire caddy (originally purchased in Tokyo, 17 years ago), success
  8. Installed Leopard
  9. Rebooted, installed the 10.5.8 combo update
  10. Rebooted
  11. Everything works! Yay
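
For step 3, the partitioning from Catalina is a one-liner with diskutil. This is only a sketch: disk3 and the name of the second volume are assumptions, so check diskutil list first and adjust:

# find the identifier of the USB-attached PATA drive (disk3 is just an example)
diskutil list
# Apple Partition Map, two Journaled HFS+ volumes: ~10GB for the installer, the rest (~20GB) for everything else
sudo diskutil partitionDisk disk3 APM JHFS+ Emptied 10g JHFS+ Scratch R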

Quickly convert HEIC to PDF for expenses submission

Recently I had to convert a large number of photographs taken on my iPhone into PDFs for submission with my expense report. I turned to my old faithful ImageMagick (installed via Homebrew) and its mogrify command:

mogrify -resize 50% -format pdf -level 0%,100%,2.0 -type Grayscale -compress jpeg *.HEIC
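
mogrify writes one PDF per input image. If you need a single multi-page PDF instead, ImageMagick will do that too; this sketch assumes an ImageMagick 7 install where the binary is called magick (substitute convert on older installs), and expenses.pdf is just an example name:

magick *.HEIC -resize 50% -level 0%,100%,2.0 -type Grayscale -compress jpeg expenses.pdf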

Hope this helps!

FAS2240 controller into DS4243/DS4246/DS424x chassis

If you have a venerable old FAS2240 or FAS25xx, you can turn it into a disk shelf by swapping its controllers, which have an embedded IOM6 (IOM6E), for regular IOM3 or IOM6 modules, and adding it as a shelf to a different controller.

The opposite, however, is not always possible. Turning a disk shelf into a FAS2240 is not supported, but there are instances where it might be required or desirable, and I’ve seen people hit this a couple of times.

The FAS2240 PCM / IOM6E will work in any DS2246. However, while the DS4246 and DS4243 share an enclosure (the DS424), there are two revisions of it, and the FAS22xx/25xx controllers only work in the newer of the two, which has better cross-midplane ventilation. Placing them in the older version results in a “FASXXX is not a supported platform” message and a failure to boot.

The original version, the X558A (430-00048), doesn’t support the embedded PCM/IOM6Es, while the X5560 (430-00061) does. If the shelf shipped new after April 2012, it is probably the X5560; some earlier ones may be too.
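
If the shelf is already attached to a running system, ONTAP can show the shelf and module inventory as well. The exact fields vary by release, so treat these as a starting point rather than a definitive revision check:

::> storage shelf show
::> system node run -node <nodename> -command "sysconfig -a"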

Edit 2018-01-23: Chemten on reddit gives a really good hint:

“Just take the drives out of the first column and look at the board. 2008 stamp, no bueno for controller but ok for newer IOM. 2010 or later, takes anything.”

ONTAP 9.2 and the newest generation of platforms don’t support the out-of-band ACP connections on the DS4243/DS4246. With the newer ACP firmware, the DS4246/IOM6 can do in-band ACP, but that isn’t an option for the DS4243. Despite what the product brief and other documents say, the DS4243 can be upgraded to a DS4246 with an IOM6 swap, and even more interestingly, both can be converted into a DS424C with new IOM12/IOMC modules. This is officially a disruptive operation, but I’ve seen some people claim they have done it live without problems; YMMV. You might want to move to IOM12 anyway, since a SAS stack runs at the speed of its slowest module, and the new platforms (FAS8200/2600/9000/700s) use the new SFF-8644 SAS connectors instead of the QSFP connectors used previously. NetApp makes transition cables between the two, but I can assure you that you never have enough of them in the right lengths.

Converting a NetApp FAS22xx or FAS25xx or AFF system to Advanced Drive Partitioning

ONTAP 8.3 is out right now as a release candidate, which means you can install it, but systems aren’t shipping with it. If you’re installing a new one of these entry-level systems, consider carefully whether 8.3RC1 is the right choice. From my point of view it is, as you get much better storage efficiency. You should plan to do an upgrade from 8.3RC1 to 8.3 in Feb-Mar 2015, but with clustered ONTAP and the right client and share settings, that can be totally non-disruptive (even to CIFS). It’s worth noting, too, that for now at least, if you need to replace an ADP-partitioned drive, you will probably need to call support for assistance (but it’s a free call, and they love to learn new stuff).

If you want to convert straight out of the box, it’s pretty easy:

  1. Download the installable version of ONTAP 8.3 from support.netapp.com and put it in an HTTP-accessible location
  2. Press Ctrl-C at boot time to get to the boot menu, choose option 7 – install new software first, and enter the URL of the 8.3RC1 image on your web server
  3. Once 8.3 is installed on the internal boot device, boot into maintenance mode from the boot menu and unassign all drives from each node (commands sketched after this list) – this isn’t covered in NetApp’s current documentation, but it is required
  4. Reboot each node, and then choose option 4 – wipeconfig. Once the wipe is finished, the system will use ADP on the internal drives and be ready for setup
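
For step 3, the maintenance mode part looks roughly like this on each node. The exact prompts and wording differ a little between releases, so treat it as a sketch rather than a transcript:

*> disk show -a                # list all disks and their current ownership
*> disk remove_ownership all   # answer y to the confirmation prompt
*> halt                        # drops you back to the LOADER; reboot to the boot menu for option 4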

Setup of the FAS22xx and FAS25xx systems can be as easy or as complex as you’d like. They come with a USB key to run a GUI setup application, so you never need to touch the serial connection, but I’ve never used it, and the version on the key probably doesn’t support 8.3RC1, so I just do initial node and cluster setup from the CLI. Another great benefit of 8.3 is that OnCommand System Manager is now built in and served from the cluster management LIF, so there’s no need to keep a machine around with the right versions of Java and Flash installed.

Theoretically, you might be able to do a conversion to ADP with data in place by relocating all of the data to one node, unowning and reformatting the other one, then rejoining it to the cluster. I haven’t been in a position to try it, but if you have, I’d be interested to know how it went (email me@thisdomain). Some caveats on an ADP conversion: you need at least 4 disks per node for ADP – the 3 current root aggr drives, plus one spare. Ideally, relocate your data to the surviving node using vol move, not aggregate relocation. Then, once one node is converted, vol move everything to a data aggregate on the converted node, and do the same thing to the unconverted one.
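
The data relocation part is plain vol move; the SVM, volume and aggregate names here are placeholders:

::> volume move start -vserver svm1 -volume vol1 -destination-aggregate node01_data_aggr1
::> volume move show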

Disk slicing gives you a very valid option of a true active/passive configuration, something that was never really possible even with 7-Mode. The root and data partitions do not need to be assigned to the same node: you can assign all the data partitions to one node while leaving just enough root partitions on the passive node, or go for the traditional active/active layout of two data aggregates, one per node. There are some pretty big caveats around disk spares and the importance of quickly replacing failed drives, but on a FAS2520 it’s probably worth it.
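
Partition assignment is done per disk with storage disk assign. The -data flag is how I remember it working on 8.3-era systems, so treat it as an assumption and check the man page on your release; the disk and node names are placeholders:

::> storage disk assign -disk 1.0.11 -owner node01 -data true   # assigns just the data partition of a shared disk (flag assumed)
::> storage aggregate show-spare-disks                          # check where the spare partitions ended up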

I have started writing a post on active/active vs active/passive configs a couple of times, but put it off in favour of waiting until ADP was available. The basic thought is that you want a node to be able to take over for its HA partner without service degradation, so you want to keep each node below 50% utilization, which makes it much the same as running all workloads on a single system; but maybe you’ll accept some degradation in favour of getting more use out of the system. You have lots of choices. One thing to consider is processor affinity: with more processor cores there’s less need to schedule CPU time to volume operations, and running on more processors (i.e. both nodes) gives you access to more cores. But on a FAS2520, how many volumes are you likely to have?