By this time, I believe only a single original drive remains. Originally I had a spare when I filled the Drobo, which is why the first replacement was a 3TB. Since then, only 4s have been installed. Now I wait… and wait… and wait.
I have been experiencing an excess of failures with my own equipment this week, mostly aged drives whose working lifespans are winding down. Often the proverbial cobbler with no shoes, I rarely have time to maintain my own equipment. My usual quick fix is to throw more drives at the problem. Although that would certainly get me running again, it would be more of a stopgap than a real solution. I have a number of aged drives, and I wanted something that would make a difference not just today, but moving forward.

Enter the Drobo 5C. I have been a fan of Drobo for years; I have an 8-bay that holds my media library in its warm RAID-6 embrace. I had been eyeing the Drobo 5D for years, waiting for the price to sink or my need to rise. Turns out the 5C is incredibly priced, with only minor disadvantages compared to the 5D. One reason I love Drobo, and the reason it was perfect for this project, is its ability to expand in the future while operating at diminished capacity. I bought this enclosure with only two drives, starting with merely one 4TB drive and another 3TB. That got me going with about 2.7TB of usable space. Then I got to the task of offloading data from my healthy external drives. As each drive emptied into the Drobo's volume, it was then fed into the Drobo enclosure to continue expanding the capacity. Now I have over 8TB of usable storage with both failover protection (a single drive can fail and I lose nothing) and expandability. Its one big volume makes organizing and tidying a snap. That last bay will get a 4TB eventually, and it's doubtful the most recent 3TB I installed will still be working this time next year.
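For the curious, the capacity figures above follow a simple rule of thumb for single-drive redundancy: usable space is roughly the total raw capacity minus the largest drive, which is held in reserve so any one drive can fail without data loss. Here's a minimal sketch of that arithmetic; it's my own simplification, not Drobo's actual BeyondRAID algorithm:

```python
def usable_capacity_tb(drives_tb):
    """Rough usable space under single-drive redundancy: total raw
    capacity minus the largest drive (reserved for protection).
    A simplification of what Drobo's BeyondRAID actually does."""
    if len(drives_tb) < 2:
        return 0  # with one drive there's nothing to fail over to
    return sum(drives_tb) - max(drives_tb)

# Starting point: one 4TB drive and one 3TB drive.
print(usable_capacity_tb([4, 3]))        # → 3 (decimal TB; ~2.7 once formatted)

# After feeding emptied drives back in: four 4TBs plus the 3TB.
print(usable_capacity_tb([4, 4, 4, 4, 3]))  # → 15
```

The gap between the 3TB raw figure and the ~2.7TB I actually saw is the usual decimal-TB vs. binary-TiB difference, plus filesystem overhead.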
My doc was a bit confused about the age of this computer when he gave it to me to set up. We spoke about a five-year-old laptop; it turned out to be a beast from 2003: a Fujitsu Lifebook N Series. I've always said that Fujitsu must be run by a supervillain, or at the very least a rebel billionaire. They make industrial equipment and infrastructure while at the same time making laptops and other select home electronics, as if someone just wanted their ideal laptop and then, as an afterthought, sold it as a product. Don't get me wrong, I am not trying to disparage Fujitsu in any way; quite the opposite, in fact. I have relied on their hard drives for my most precious data, and their Lifebooks have always been some of the best out there. The fact that this 13+ year old laptop is operating with all original parts and a working battery is a testament to Fujitsu's commitment to quality.
My first clue was the XP sticker. I decided to go with Lubuntu, a lightweight variant of the popular Ubuntu Linux. Ubuntu is a wonderful distribution, especially for those new to Linux, but since it's built on top of Debian, it's not just for beginners. For years, I've used some of the older (still supported) Ubuntu versions on old machines. I hate to see working computers fail because of a lack of software support. Thanks to the good people behind Lubuntu, Ubuntu, and GNU/Linux, this is a thing of the past. This guy is running all the latest security and cryptographic technology, a fully modern web browser, and a full suite of productivity software compatible with the latest MS Office.
In truth, I haven't touched it in years. I haven't even touched Cydia recently. Sadly, all this work would only be useful to someone with an original or 3G iPhone, and Apple certainly doesn't support those devices anymore. Does anyone still use them? Unfortunately, my ISP insists that I remove the content; after 7 years of hosting it, they realized it violates their TOS. I should check the logs. I wonder if it will even be missed. People say the internet never forgets; sometimes it is quite the opposite. For nostalgia's sake, I left the instructions site up: http://cydia.be3n.com/ (at least that doesn't violate Dreamhost's TOS). For the record, much of my work continued support well into iOS 4.
. . . Maybe it will rise again on S3?
Times like these, you just have to wait. (and hope nothing else breaks)
At the device level, SSDs function entirely differently than conventional mechanical disks. As a result, the way operating systems traditionally use these devices leads to progressive performance degradation and even a shortened lifespan. Technology was needed to offset this failing. Enter TRIM. Apple introduced it in 2011, but believe it or not, even today Apple refuses to automatically enable TRIM for third-party SSDs. Not only that, but if you manually enable it yourself, it is then disabled during any OS update (e.g., 10.9.1 to 10.9.3). You can check your TRIM status in System Profiler/System Information under SATA by selecting your device. I switched my favorite utility from Chameleon SSD Optimizer to Trim Enabler, for two reasons: first, Chameleon has some compatibility issues; second, Trim Enabler has a feature to check on startup, which makes it easier to re-enable after a software update.
I found a great utility to enable TRIM on 10.6.8-10.9.5: Trim Enabler
Don't forget to re-enable it after each OS X System Update.
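If you'd rather check TRIM status from a script than dig through System Information, the same data is available from the `system_profiler` command. Here's a small sketch that looks for the "TRIM Support: Yes" line in its output; the parsing is my own, and the live check only runs on a Mac:

```python
import platform
import re
import subprocess

def trim_supported(report: str) -> bool:
    """Return True if any device in a `system_profiler
    SPSerialATADataType` text report lists 'TRIM Support: Yes'."""
    return bool(re.search(r"TRIM Support:\s*Yes", report))

if __name__ == "__main__" and platform.system() == "Darwin":
    # macOS only: same info System Information shows under SATA.
    out = subprocess.run(
        ["system_profiler", "SPSerialATADataType"],
        capture_output=True, text=True, check=True,
    ).stdout
    print("TRIM enabled" if trim_supported(out) else "TRIM disabled")
```

Handy to drop in a login script as a reminder after point updates silently switch TRIM back off.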
Technical Details from Wikipedia:
Because of the way that file systems typically handle delete operations, storage media (SSDs, but also traditional hard drives) generally do not know which sectors/pages are truly in use and which can be considered free space. Delete operations are typically limited to flagging data blocks as “not in use” in the file system. Contrary to, for example, an overwrite operation, a delete will therefore not involve a physical write to the sectors that contain the data. Since a common SSD has no knowledge of the file system structures, including the list of unused blocks/sectors, the storage medium remains unaware that the blocks have become available. While this often enables undelete tools to recover files from traditional hard disks, despite the files being reported as “deleted” by the operating system, it also means that when the operating system later performs a write operation to one of the sectors, which it considers free space, it effectively becomes an overwrite operation from the point of view of the storage medium. For traditional hard disks, this is no different from writing an empty sector, but because of how some SSDs function at the lowest level, an overwrite produces significant overhead compared to writing data into an empty page, potentially crippling write performance.
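The mismatch described above is easy to see in a toy model: a delete only flips the file system's "in use" flag, and the drive never hears about it unless the OS sends a TRIM for those sectors. This is my own illustration, not real firmware behavior:

```python
class Disk:
    """Toy model of the file-system view vs. the drive's view of
    which sectors hold data. Deletes are metadata-only unless the
    OS also sends a TRIM."""

    def __init__(self):
        self.fs_in_use = set()       # what the file system thinks is live
        self.ssd_holds_data = set()  # what the drive thinks is live

    def write(self, sector):
        self.fs_in_use.add(sector)
        self.ssd_holds_data.add(sector)

    def delete(self, sector, send_trim=False):
        self.fs_in_use.discard(sector)  # just a flag flip in FS metadata
        if send_trim:
            # TRIM tells the drive the sector is free for reuse.
            self.ssd_holds_data.discard(sector)

d = Disk()
d.write(3)
d.delete(3)  # no TRIM: the two views now disagree
print(3 in d.fs_in_use, 3 in d.ssd_holds_data)  # → False True
d.delete(3, send_trim=True)
print(3 in d.ssd_holds_data)                    # → False
```

That lingering "True" on the drive's side is exactly why a later write to that sector becomes an overwrite from the SSD's point of view.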
SSDs store data in flash memory cells that are grouped into pages, with the pages (typically 4 to 16 kB each) grouped together into blocks (typically 128 to 512 pages per block, e.g. totaling 512 kB per block in case of the 4/128 combination). NAND flash memory cells can only be directly written to when they are empty. If they are considered to contain data, the contents first need to be erased before a write operation can be performed reliably. In SSDs, a write operation can be done on the page-level, but due to hardware limitations, erase commands always affect entire blocks. As a result, writing data to SSD media is very fast as long as empty pages can be used, but slows down considerably once previously written pages need to be overwritten. Since an erase of the cells in the page is needed before it can be written again, but only entire blocks can be erased, an overwrite will initiate a read-erase-modify-write cycle: the contents of the entire block have to be stored in cache before it is effectively erased on the flash medium, then the overwritten page is modified in the cache so the cached block is up to date, and only then is the entire block (with updated page) written to the flash medium. This phenomenon is known as write amplification.
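The read-erase-modify-write cycle can also be sketched in a few lines. In this toy model (tiny page counts for readability; real drives use 128-512 pages per block, and this ignores the wear-leveling real firmware does), writing into empty pages costs one physical write each, while overwriting a single page forces the whole block to be rewritten:

```python
PAGES_PER_BLOCK = 4  # tiny for illustration; real blocks are much larger

class SSD:
    """Toy model of one flash block: pages are written individually,
    but only the whole block can be erased."""

    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK  # None == erased/empty
        self.physical_page_writes = 0

    def write_page(self, index, data):
        if self.pages[index] is None:
            # Fast path: empty page, write it directly.
            self.pages[index] = data
            self.physical_page_writes += 1
        else:
            # Overwrite: read the block to cache, modify the page,
            # erase the whole block, then rewrite every live page.
            cache = list(self.pages)
            cache[index] = data
            self.pages = [None] * PAGES_PER_BLOCK  # block-level erase
            for i, page in enumerate(cache):
                if page is not None:
                    self.pages[i] = page
                    self.physical_page_writes += 1

ssd = SSD()
for i in range(PAGES_PER_BLOCK):
    ssd.write_page(i, f"v0-{i}")   # 4 logical writes → 4 physical writes
ssd.write_page(0, "v1-0")          # 1 logical write → 4 physical writes
print(ssd.physical_page_writes)    # → 8
```

That single overwrite cost four physical page writes for one logical one: write amplification of 4x in this miniature block, and the reason TRIM-less SSDs slow down as their "empty" pages run out.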