Setting up a PowerBook G4 12-inch, 1.5GHz in 2020

Some time ago, I received a PowerBook G4 12-inch from a friend. As is healthy, the drive had been wiped, but being one not to keep too much old stuff, I didn’t have install media for Leopard (10.5). It went on the back burner for a while, but I recently received (back) some old storage devices which had been on ice in the WA Wheatbelt for about 10 years, and felt I had the right stuff together to give it a go again.

  1. Downloaded the 10.5.4 Leopard installer ISO from Archive.org
  2. Attached a 30GB PATA drive via a PATA-to-USB 2.0 dongle
  3. Using 10.15 Catalina, partitioned the drive with an Apple Partition Map into 10GB + 20GB partitions (see the Terminal sketch after this list)
  4. Mounted the Leopard installer ISO downloaded earlier
  5. Used asr to restore the contents of the ISO to the 10GB partition
bash-3.2$ sudo asr restore --source /Volumes/Mac\ OS\ X\ Install\ DVD/ --target /Volumes/Emptied/ --erase
	Validating target...done
	Validating source...done
	Erase contents of /dev/disk3s3 (/Volumes/Emptied)? [ny]: y
	Validating sizes...done
	Restoring  ....10....20....30....40....50....60....70....80....90....100
	Verifying  ....10....20....30....40....50....60....70....80....90....100
	Restored target device is /dev/disk3s3.
	Remounting target volume...done
  6. Tried to boot the PowerBook from the PATA drive on the USB dongle – failed
  7. Moved the PATA drive from the USB dongle to a Sarotech Cutie FireWire caddy (originally purchased in Tokyo, 17 years ago) – success
  8. Installed Leopard
  9. Rebooted, installed the 10.5.8 combo update
  10. Rebooted
  11. All works! Yay
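For reference, steps 3 and 4 are just a couple of Terminal commands on the Catalina machine. This is a rough sketch, assuming the external drive shows up as disk3 – check diskutil list first; the ISO filename and the second volume’s name are made up, but “Emptied” matches the asr output above:

diskutil list                                                   # find the external PATA drive's identifier
diskutil partitionDisk disk3 2 APM JHFS+ "Emptied" 10g JHFS+ "Scratch" R
hdiutil attach ~/Downloads/Mac_OS_X_10.5.4_Install_DVD.iso      # mounts the installer so asr can use it as a source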

A bit of VMware fun for a change!

I’ve got a VMware VCP – have had one for about 8 years (passed the exam 4 times now..) – but most of my day these days is spent dealing with storage. Recently, though, I had a family fix-it job: migrate a 17 year old laptop into a VM on a more recent Mac.

Nice and easy I sez! Just use the converter! Nekt minit..

[Screenshot: VMware Converter error dialog]

For the keyword lulz of anyone searching for this problem: “Error 1920.Service VMware vCenter Convert Standalone Server (vmware-converter-server) failed to start. Verify that you have sufficient privileges to start system services”

First I tried the obvious – checking that the local user was an Admin, running it as administrator, and trying VMware’s fix of creating a group called “Domain Admins”.. all with no dice.

Then I found someone suggesting how to start the agent manually, and when I did that, it complained that the certificate could not be verified.. which led me down another path: checking Properties in Windows, which led me to this KB entry on how to install the code-signing root certificate on Windows XP.. which led to a KB entry on another site, which led to a 404, which led to web.archive.org, saving the file as a PEM, adding the Certificates MMC snap-in, importing it, and finally, the service started up.

Nice and simple.. it only took someone with 19 years of VMware experience, working as a professional infrastructure admin, an hour to fix..

Edit: Oh ho, but there’s more! VMware Converter was then unable to send the image to the Mac – under the hood it’s still the same old converter that has been around for years, which means it saves the image over SMB to the Mac. Except Windows XP can’t talk to Catalina by default. Some will suggest upgrading to SP3 (a good idea, but I want to make minimal changes to this system image..), but that isn’t necessary – as outlined at this post, all you need to do is set HKLM\SYSTEM\CurrentControlSet\Control\Lsa\LmCompatibilityLevel to 3, from the default of 1 on SP3 or 0 on SP2.
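If you’d rather script it than click around regedit on the XP side, a single reg add from a command prompt should set the same value (a reboot afterwards is the safe bet):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 3 /f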

Quickly convert HEIC to PDF for expenses submission

Recently I had to convert a large number of photographs taken on my iPhone into PDFs for submission with my expense report. I turned to my old faithful, ImageMagick (installed via Homebrew), and its mogrify command:

mogrify -resize 50% -format pdf -level 0%,100%,2.0 -type Grayscale -compress jpeg *.HEIC
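If you’re on a current ImageMagick 7 from Homebrew, the same options hang off the magick front-end – something like:

magick mogrify -resize 50% -format pdf -level 0%,100%,2.0 -type Grayscale -compress jpeg *.HEIC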

Hope this helps!

How to wipe a partitioned ADP NetApp system

With ONTAP 9, there is now an “option 9” in the boot menu that allows you to re-initialise a system to/from ADP, like wipeconfig.

It is a three-step process to wipe an HA pair: the first step, option 9a, removes the existing partition information; the second, option 9b, repartitions and re-initialises the node; and finally, boot the node that was halted and wipe it (option 4) from its own boot menu.

*************************************************
* Advanced Drive Partitioning Boot Menu Options *
*************************************************
(9a) Unpartition disks and remove their ownership information.
(9b) Clean configuration and initialize node with partitioned disks.
(9c) Clean configuration and initialize node with whole disks.
(9d) Reboot the node.
(9e) Return to main boot menu.

The caveat is that one node has to be halted at the LOADER> prompt while you run 9a and 9b on its partner. That should be it!
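As a rough sketch of the sequence – node names here are made up, and as always, check the current docs before pointing this at real data:

::> system node halt -node cluster1-02 -inhibit-takeover true    # park node 2 at the LOADER> prompt
# reboot node 1 to its boot menu, then run option 9a followed by 9b
LOADER> boot_ontap menu
# once node 1 is initialised, boot node 2 to its own boot menu and wipe it with option 4
LOADER> boot_ontap menu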

Moving your Windows install to an SSD, breaking it, then fixing it.

I’d been putting this off long enough, but yesterday was the day! I was going to move our Windows install to an SSD.

And I did. But you shouldn’t.

If at all possible, do a fresh re-install of Windows on your new SSD, and move your data across. Download the Windows DVD Creator (also makes USB keys), or use the “Make a recovery disk” option in Windows to blatt the installer onto an 8GB USB key, and start fresh.

So why didn’t I do that? I like a challenge, and at this point I’m just being obstinate about not re-installing.

For a short period of time in 2009, my wife worked for a company in Canada that then got bought out by Microsoft. Like, a really short period of time – she “worked” for 3 weeks, then got 4 weeks severance… and her desk.. and her computer, which was a pretty speccy (for 2009) Dell, running Windows 7. It got case swapped, then we swapped the motherboard, then I moved it from a 750GB SATA HDD to a 2TB SATA SSHD (Hybrid HDD.. I wouldn’t recommend them frankly), and in the process moved from MBR to GPT and BIOS to uEFI, all without re-installing. At this point, almost 8 years later, we’re in a different country; the machine is on its third case, second motherboard and fourth graphics card, and it’s now running Windows 10, but it has never been reinstalled.

The first challenge – C:\ was a 750GB partition, and the new SSD was 500GB. Reboot into SysRescCD and use GParted to resize – except for some reason it couldn’t unmount the partition. Mess around for a few reboots, and eventually boot with the option to cache everything into memory, and we’re good – resized down to 450GB.

Next challenge – the rest of the source drive isn’t empty – there’s another 750GB scratch partition, as well as two Linux partitions. This means I can’t just copy the entire disk to the new one. But I do need the GPT, EFI boot partition, and Windows partition, and they’re all in the first 500GB. Cue “dd if=/dev/sda of=/dev/sdb bs=4096 count=115500000”. Then load up “gdisk”, delete the entries for partitions that don’t exist, and away we go.
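The gdisk clean-up is only a handful of keystrokes – roughly this, though your partition numbers will differ:

sudo gdisk /dev/sdb
p        # print the copied GPT – it still lists the scratch and Linux partitions that never got copied
d        # delete each stale entry (repeat for every partition beyond the Windows one)
w        # write the table – gdisk also recreates the backup GPT header at the new end of the disk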

And it works.. until I plug the old drive back in (even after deleting the old C: and EFI partitions with SysRescCD..). Then it stops booting.

At this point, I’m pretty sure the EFI partition and BCD are hosed, so eventually I find this article – http://www.hasper.info/repair-a-destroyed-windows-7-uefi-boot-sector/ – it works for Windows 10, thankfully, and now everything is back working again, speedy and on an SSD.
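For anyone who lands here with the same symptoms, the gist of that style of repair – a sketch from memory rather than the linked article verbatim – is to boot a Windows 10 install USB, open a command prompt, give the EFI System Partition (usually the small ~100MB one) a drive letter with diskpart, and rebuild the BCD onto it:

diskpart
list disk
select disk 0
list partition
select partition 1
assign letter=S
exit
bcdboot C:\Windows /s S: /f UEFI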

Most people’s Saturday nights don’t involve rewriting partition tables and fixing EFI. Perhaps the fact that mine does is a sign I’m in the right career right now, working for a SAN/NAS vendor..

Data recovery from Apple FileVault / Encrypted Disk Images

I had a message a few weeks ago from a random guy in Italy, who had found a post of mine about rewriting GPT tables on OSX, and wondered if I could help him recover data from an encrypted disk image that had screwed itself up when he tried to resize it, and now reported no valid partitions when he mounted it. That was an exciting exercise. Fortunately it’s easy to make a master copy, in case you screw it up further.

First problem: if OSX fails to find any filesystems after attaching an encrypted disk image, it detaches it again. The solution is to attach it from the command line – “hdiutil attach /path/to/file -nomount”

Peering in with gdisk, it was clear there wasn’t a valid partition there. We then tried recreating the GPT, using the image’s metadata to create an identical image and then creating the correct partitions at the correct offsets. That didn’t work, unfortunately, but it was fun to try.

Eventually we left the image attached without attempting to mount filesystems, and he was able to use photorec to recover most of the files out of there. So it wasn’t perfect, but it worked in the end – it was a fun challenge, and troubleshooting this over Facebook Messenger added to it.
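If you ever end up in the same boat, the rough sequence we settled on looks like this – the filename and disk identifier are illustrative, so check hdiutil’s output for the real device node, and always work on a copy:

cp Damaged.sparseimage recovery-copy.sparseimage     # master copy first – photorec only reads, but gdisk writes
hdiutil attach recovery-copy.sparseimage -nomount    # prompts for the passphrase and prints the /dev/diskN it attached
sudo gdisk /dev/disk4                                # peer at (or attempt to rebuild) the GPT
sudo photorec /d ~/recovered /dev/disk4              # carve files straight off the decrypted block device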

If you go to enough effort to find me on Facebook, and you’re halfway to a solution, I’m sometimes up for one.

Counterfeit Cisco SFPs

Probably not news to anyone, but there are counterfeit Cisco SFPs in the marketplace. It only matters for support purposes – all the SFPs are made to a standard, and beyond that it’s just a matter of what the label says. As they say, if it quacks like a duck..

So how can you tell real from fake? Short answer – you can’t easily. If you buy through legit channels, you’re good. If you buy from eBay, there’s a pretty good chance they’re fake. If it’s from AliExpress.. well, I wouldn’t expect real ones.

Going through some old photos, I found this picture from last year from an install job I did – all of these SFPs came directly from Cisco, and they all look different.

[Photo: Cisco SFP varieties]

FAS2240 controller into DS4243/DS4246/DS424x chassis

If you have a venerable old FAS2240 or FAS255x, you can turn it into a disk shelf by swapping the controllers with embedded IOM6 (IOM6E) for regular IOM3 or IOM6 modules, and adding it as a shelf to a different controller.

The opposite however is not always true. It is not supported to turn a disk shelf into a FAS2240, but there are instances where it might be required or desirable, and I’ve seen people hit it a couple of times.

The FAS2240 PCM / IOM6E will work in any DS2246. However, while the DS4246 and DS4243 share an enclosure (the DS424), there are two revisions of that enclosure, and the FAS22xx/25xx controllers only work in the newer one, which has better cross-midplane ventilation. Putting them in the older version results in a “FASXXX is not a supported platform” message and a failure to boot.

The original version, X558A (430-00048), doesn’t support the embedded PCM/IOM6Es, while the newer X5560 (430-00061) does. If the shelf was shipped new after April 2012, it is probably the X5560; some earlier ones may be as well.

Edit 2018-01-23: Chemten on reddit gives a really good hint:

“Just take the drives out of the first column and look at the board. 2008 stamp, no bueno for controller but ok for newer IOM. 2010 or later, takes anything.”

ONTAP 9.2 and the newest generation of platforms don’t support the out-of-band ACP connections on the DS4243/DS4246. With the newer ACP firmwares, the DS4246/IOM6 can do in-band ACP, but that isn’t an option for the DS4243. Despite what the product brief and other documents say, the DS4243 can be upgraded to a DS4246 with an IOM6 swap, and even more interestingly, both can be converted into a DS424C with new IOM12/IOMC modules. This is officially a disruptive operation, but I’ve seen some people claim they have done it live without problems. YMMV.

You might want to move to IOM12 because a SAS stack runs at the speed of its slowest module, and the new platforms (FAS8200/2600/9000/700s) use the new SFF-8644 SAS connector instead of the QSFP connectors used previously. NetApp makes transition cables between the two, but I can assure you that you never have enough of them in the right lengths.

ONTAP – Why and why not to have one LIF per NFS volume

LIFs, or logical interfaces, are the interfaces from the outside world to the storage of a NetApp system; there is a many-to-one relationship of LIFs to ports. From the early days of Clustered ONTAP, NetApp has advised having one LIF per VMware datastore. There are more general-purpose use cases for this as well.

But it’s not always worth it.

The justification for a 1:1 LIF-to-volume mapping has been to allow a volume to move between nodes and to move its LIF to the new node along with it, so that indirect access lasts no longer than a few moments.

Indirect access is when IP traffic comes into one node (for example N1), while the volume is on another node (say N2 – but it could be on another HA pair in another cluster). This means the data is pulled off disk on N2, goes over the cluster interconnect switching network, and then out of N1. This adds front end latency, and increases congestion on the cluster network, which in turn can delay cluster operations.

So, it seems like a good idea, right? If you only have three datastores for VMware, for example, the overhead of three extra IPs is minimal. But then, with only three datastores, how likely are you to move a third of your VMs from one node to the other? So that’s an argument against doing it. With 7 datastores it’s much more likely to come up, and 7 to 10 IPs still isn’t too bad. But if you have 50 datastores, they’re probably spread across more than two nodes, so allocating all those IPs and managing the mapping of datastores to LIFs becomes a lot of overhead.
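To make that concrete, with the 1:1 pattern a non-disruptive move is a pair of steps something like this (SVM, volume, LIF and port names are illustrative):

::> volume move start -vserver svm1 -volume ds03 -destination-aggregate n2_aggr1
::> network interface modify -vserver svm1 -lif ds03_lif -home-node cluster1-02 -home-port e0d
::> network interface revert -vserver svm1 -lif ds03_lif    # send the LIF to its new home so access stays direct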

Let’s have a look at WHY you might move a volume:

  1. Aggregate full – no more aggregates on original home node
  2. Controller CPU/IO high – balance workloads to another controller
  3. Equipment replacement – Moving off old equipment onto new equipment

In the third case, indirect access is OK because it is temporary, so there’s no need for additional LIFs there. For the other two cases, especially for VMware, there’s always the option of doing a Storage vMotion to move the VMs instead. For non-VM workloads it’s obviously a different scenario, so the decision to weigh up is: how often do you, as an admin, think you’ll need to move only one or two volumes at a time? There is also always the option of unmounting from a LIF on the source node and remounting from an IP on the destination.

So for my money – more than three datastores and fewer than ten, one LIF per datastore is probably fine. For anything else, I’d suggest just one NFS LIF per node (per SVM), and deal with preventing indirect access through other means. But I also don’t think it’s a “hard and fast” rule.

Selective LUN Mapping on ONTAP 8.3

We have a customer with a pretty kick-ass ONTAP environment that we built up last year – dual sites, each with 2x FAS8040 HA pairs in a cluster. This year we added an HA pair of AFF8080s with 48 x 3.84TB SSDs to each site, which included an upgrade to ONTAP 8.3.2.

We’re in the process of migrating these guys off older FAS3270s running ONTAP 8.2 – we did a bunch of migrations last year, and started again this year. Depending on the application, workloads, etc., we have a number of different migration methods, but we got caught out last week with some LUN migrations.

Turns out there is a new feature in ONTAP 8.3 which is turned on by default for new and migrated LUNs – Selective LUN Mapping. SLM reduces host failover time by only announcing paths from the HA pair hosting the LUN. But it’s only turned on for new LUNs – existing ones still show all 12 paths (2 per node). This is a bit of an odd choice to my mind – I think it should be optional if the system is already in production.
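You can see which LUN maps SLM has restricted by checking their reporting nodes – something like the below; older maps can be brought into line with lun mapping add-reporting-nodes / remove-reporting-nodes, but check the SAN admin guide for the exact flags before doing that in production:

::> lun mapping show -vserver svm1 -fields reporting-nodes

Maps created under SLM list just the owning HA pair; the older maps carry no restriction, which is why every node advertises paths and you see all 12.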

So our excellent tech working on the project, thinking it was a bug, called NetApp Support – and spent way too long being told to upgrade HUK, DSM and MPIO. Needless to say.. this didn’t work. Kinda disappointing. I’m told there’s a magic phrase you can use – “I feel this call isn’t progressing fast enough, can you please transfer me to the duty manager?”. Has this ever worked for you? Let me know in the comments 😉