
Synology NAS

As a young man, one of the things I wanted was a powerful home server - running Linux - on which I could create an AI. That desire to have raw computing power under the desk, or in the basement, decreased somewhat with the easy accessibility of servers in the lab or (now) in the office. But a man's heart wants what it wants.

I finally found my excuse when the spouse wondered out loud: wouldn't it be wonderful if we had speakers in different parts of the house and could stream our music collection to whichever set we wanted - especially a wireless speaker out on the patio, for parties? You don't need to ask me twice.

An email to a list where my gaming buddies hang out turned up a few important keywords: Sonos, Chromecast Audio, Synology and S3 bucket. I had two main aims: find a good home for our photos and, as the spouse wanted, stream music to speakers.

I debated more than you would think before laying out the capital for a home server. I weighed the cost of Glacier and Google Coldline storage, the cost of external USB hard drives, the cost of the Synology 216j + HDD (the cheapest of the Synology series), the cost of subscribing to streaming music services, and so on. For a while I was going to be a grown-up and rely only on internet streaming via the Chromecast Audio for the music, with external USB drives (as I do now) plus Google's Coldline storage for disaster recovery.

What clinched it was the hassle I ran into when I went to upload our photos to Coldline. Only a fraction of our photos fit on my laptop hard drive. I first tried tarring the photos on the USB drive itself and then uploading, but I kept getting read/write errors. Then I started tarring the directories onto my laptop drive. As I scrambled to delete files on my laptop to make space for each tar, I thought: I should spring for the server - then I can do all of this from the server, properly.

There were a few things cheaper than the Synology 216j - the 4TB Western Digital NAS intrigued me, for example - but I'm very, very glad that I went for the Synology.

Interface

It's the 21st century, and the interface to this headless server is a slick browser-based desktop. Usually I go: uh-oh, browser-based something, it's gonna be bad. But Synology's interface software is very good - I couldn't find any bugs in it, the settings layout is easy to get used to, and it gives a lot of control. At the same time, this is a real Linux box, and you can ssh into it and putter around with root access. This is one of the key things Synology gets right: they give you an easy-to-use configuration tool (the virtual desktop), but they don't lock the machine down and turn it into some kind of fake Linux system - they also just give you root access. It's your machine.

Just remember to go to User -> Advanced -> Enable user home, otherwise things feel a little weird and you get an error message whenever you log in, since your home directory is not initialized and writable until you do.

Setup

Another thing Synology gets right is configuration: both hardware and software setup are remarkably easy.

One gotcha: when I updated the operating system the first time, the web app counted down until the system restarted and everything was OK. The next time I updated, however, I lost the connection and thought I had bricked the system (I could see the three activity lights were on, but I could not raise the system via the desktop or via ssh). Then I thought to access it by IP address rather than by "diskstation" - which worked! It turns out that, since I had given the server a name, it was now going by "new-name" rather than the generic "diskstation".
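If this happens to you, one way to find the box again is to scan the local network for DSM's web interface, which listens on port 5000 by default. Something like the following, run from a laptop on the same network (nmap is not part of DSM, and the subnet is just an example - adjust it to whatever your router hands out):

nmap -p 5000 --open 192.168.1.0/24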

Getting rsync working was a little fiddly. I needed to set it up in Control Panel -> File Services -> rsync; there is another place to configure something related, but not the same, so there was some confusion. On top of this, I was trying to rsync into a shared folder on the Synology. The shared folder system is a little restrictive: to get security right, Synology decided that shared folders are owned by root, while sub-folders can be owned by other users. When rsync-ing I have to point at one of the sub-directories my user owns - if I point at the shared folder itself I get a very opaque binary error from rsync.
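For illustration, the working pattern looked roughly like this (user, hostname and paths here are made up - the point is that the destination is a sub-directory my user owns, one level below the shared folder):

rsync -avz ~/Pictures/ myuser@new-name:/volume1/photo/laptop-pictures/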

exFAT

As I mentioned before, all the photos and music were on an external 1TB hard drive. I simply connected it to the Synology and could copy the files over with plain Linux cp -r commands.
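For the curious, DSM auto-mounted the USB drive under /volumeUSB1 on my box (that mount point and the destination shared folders below are from my setup and may well differ on yours):

ls /volumeUSB1/usbshare
cp -r /volumeUSB1/usbshare/photos /volume1/photo/
cp -r /volumeUSB1/usbshare/music /volume1/music/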

The one thing I had to do was buy the Synology package for exFAT. It was only $4.00, and I wonder why they charge for it - possibly a license fee to Microsoft? This was needed because (and of course I had forgotten this) I had formatted the external hard drive carrying all our photos and music as exFAT, so that we could access it from both Windows and Mac computers. Would I go back in time and tell my former self to format the drives as FAT32 instead, so I could save four bucks? Nah. Installation was very pain free: the service started automatically, and the widget showing the USB device immediately went from a nonsensical 0/900GB used/total to a correct 300/900GB.


A little Linux machine

I don't want to belabor the point, but you have a little Linux box here (with a BusyBox userland) and all the usual Linux tricks just work. It even has htop. For example, I got curious about all the different filesystems at work and Stack Overflow told me about the wonderfully concise command df -T.
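For reference (the USB mount path is whatever DSM assigned on my box):

df -T                        # list mounted filesystems along with their types
df -T /volumeUSB1/usbshare   # just the external drive
htop                         # interactive process viewer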

Another example: one mistake I made when I ran the cp command was not to nohup it and put it in the background, so I was planning to leave my laptop on with the terminal open while the copy ran. But then I remembered disown, and it works just as it should.
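In case it helps anyone, the pattern is to kick the copy off in the background and then detach it from the shell, so it keeps going after you log out (paths, as before, are just examples from my setup):

cp -r /volumeUSB1/usbshare/music /volume1/music/ &
disown
# or, for a copy already running in the foreground: Ctrl-Z, then bg, then disown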

Google storage

After getting the data onto the Synology HDD, I wanted to tar it up and upload it to Google Coldline storage (the quest that started this whole journey, remember?). Installing gsutil on the Synology requires a little bit of tweaking. The recommended install (from the official page), which works fine on a Mac, fails on the Synology with

ERROR: (gcloud.components.update) The following components are unknown [gcloud-deps].

The alternative method listed on the Google storage help page is for Windows (there is a zip archive), but from a hint in this GitHub repo we see that there is also a tar.gz file, and the following commands work just fine:

curl -O https://storage.googleapis.com/pub/gsutil.tar.gz
tar xzvf gsutil.tar.gz
rm gsutil.tar.gz
gsutil/gsutil config

This is the standalone version and not the whole of gcloud, but that's all I'll need. It asks you to register it via the usual OAuth procedure (you go to the URL it supplies, Google gives you a code, which you paste back into the application, which then has permission to access your account).
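With that done, the upload itself is plain gsutil. As a sketch (bucket name, region and paths are placeholders, not what I actually used):

gsutil/gsutil mb -c coldline -l us-east1 gs://example-photo-archive
tar czf photos-2016.tar.gz -C /volume1/photo 2016
gsutil/gsutil cp photos-2016.tar.gz gs://example-photo-archive/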

Gsutil also supports standalone updating, which is very convenient (I was afraid I'd have to re-download the tar.gz file manually every so often):

gsutil/gsutil update

So, that's the photos taken care of - along with a lifelong dream of having a continuously running computer in the basement.

Chromecast Audio

I just needed to install Audio Station on the NAS and the DS Audio app on the Android phone. It seamlessly found all the audio I had saved onto the NAS and has a decent, if simple, interface. Getting the Chromecast to play was simple - just look in the settings and find the Chromecast. The only glitchy thing was that I had the phone connecting to the server via https, which the Chromecast refused. The app was very helpful and said to try without https, and it worked just fine then.

Prime photos

Just install the Cloud Sync app and you are off - auth is handled the grown-up way.
