home networks – part 1

This is the first post in a series about home networks and the wider internet.

This series will explain how I arrived at the design of our home network. It looks at some previous generations and why each had to change. There will be discussion of some of the technologies in use, and some how-to articles on key components.

A “home network” is very different from what it used to be. The network now has to extend beyond the home, and it has to include much more within the home than it ever used to. I am unsure of the terminology I should apply, but the home network is just a subset of something much broader. The current fashionable term for this is a “home area network” or a “home automation network” – a HAN. Cloud, mobility, VPN, the internet of things, privacy, security, multi-media, sharing, backup, archiving and automation are all topics that need to be addressed.

I strongly advocate that you should do no more than you must to build your HAN. If you cannot use out-of-the-box components, then I suggest you think again about your solution. Building it is one thing, and can be very, very rewarding. But the incessant attention needed to keep it running and up to date is another matter entirely. If you weigh the time spent upgrading, enhancing, maintaining and tinkering against the time spent actually using the thing, and you find a skew towards the first category, I suggest you reconsider your approach. If this is a hobby for you, you will take one decision. If you want the thing to just work, reliably, with minimal need for your personal intervention, you will take a different one. I will expand on this concept a little more, because it has driven some major decisions about my current approach.

I work in IT and have knowledge and experience that I completely ignored throughout the early iterations of the network. Did I stringently test? Did I keep development and production separate? Did I keep the configurations under version control? I am ashamed to admit I did not, but somehow avoided tragedy. I did have to spend an awful amount of time recovering systems, but complete disaster never eventuated, thanks to good luck, not good design. I will discuss this in a later post, but if you do not build a sandbox machine, you are mad. Note I say machine – not a partition, not a virtual machine, but an entirely separate machine that you can totally trash without affecting anything whatsoever. In this context a sandbox is a play machine you can do things to without affecting the use of your network. This is where you try out new facilities and new operating systems and, perhaps most importantly, where you do your disaster testing.

The technologies in our network, at the date of this post, include several flavours of Windows, Mac OS X, iOS devices, CentOS 6.7, Office 365 and SharePoint. The oldest device is a Windows XP digital audio workstation. There are some DLNA devices and some AirPlay devices. And some WeMo devices. And a home alarm system with IP cameras.

Your mileage may vary with each of these technologies, but I will attempt to keep much of the conceptual material at a level above the product-specific. In some cases I will descend into detail, most likely in the linux space, though I will try to avoid too much CentOS-specific content.

These articles are an attempt to repay the community that has helped me over the years. Google may well be your friend, but it can only help you find material that someone has posted somewhere. Thanks to all those folks who went to the trouble in the past to discuss and document their travails, findings and recommendations. I hope these articles of mine add some value to that community.

in the beginning

We had a PC connected to our phone line through a 9600bps dial-up modem. Any other devices were connected by “sneaker-net”.

In the mid-1990s, we got our cable internet service. It had a whopping 10Mbit/second speed, and a download limit measured in megabytes. Now having a few PCs, and a user community of myself, my wife, a daughter in primary school and a toddler, I decided to put in something that would leverage the great pipeline we had to the world.

The aim was to provide a secure place to store material, and to provide a useful experience on the internet. I wanted individual mail accounts for everyone, and a place to share important information. You may well stop and ask why I wanted an individual email account for a toddler, and the answer is simple. It seemed a really good idea at the time. You will find that answer, and its companion, “because I could”, recurring often throughout these articles.

The technology solution to our requirements was a Netgear home router, some cat-5 cabling, and an old Pentium-based PC repurposed with RedHat linux. None of which our ISP supported or wanted to know about. If your one PC was not directly connected to the cable modem, then you were on your own.

That RedHat decision is laughable now. Having owned a software house that developed unix software, I thought this linux thing might be useful. So I went out and bought a big, thick book, read it, and decided “I can do this”. The CD included with the book was a copy of the then free RedHat linux; I installed it on an old PC, tinkered for a while, and had a working server. Twenty years on I look back and find that my strategic choice of a platform to host the family’s digital assets was based entirely on the thickest book in the bookshop. Mind you, it might well have been the right answer, but the way I got there is, well, hardly a recommended approach.

So for many years, our home network centred upon this PC running RedHat. It offered samba for file sharing, with a private share for each person and a family share for common material. It and the other PCs all connected to the Netgear router, which gave some firewall protection and NAT. The network storage offered was around 500MB, and was occasionally backed up to CD.

This network would run for months at a time with no maintenance or disruption. It was safe, secure and fast. Our ISP handled email, and all internet connections originated outbound – the network was not easily accessible from outside, a good security feature.

But, things changed.

in the middle

What changed was a number of things. Not all at once, but in waves. Each one could have been accommodated, but taken together they forced a rethink. The changes were that RedHat moved their business model, my daughter wanted to use a Mac, Windows XP came out, and iTunes and large media libraries became the standard.

I decided to move to Fedora and take advantage of its more cutting-edge capabilities. It was frequently updated, community supported, and allowed me to (almost) lift and shift the existing capability.

I implemented our own mail server and a web server, and developed a much more robust backup mechanism, partly due to my wife’s needs for her own business. The backup mechanism uses rsync, and is rather like an Apple Time Capsule. It runs on its own server, and will be the subject of its own post shortly. Over time I also implemented Bonjour, some sophisticated printing capabilities, a VPN, and roaming profiles.
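
I will save the details for that post, but as a teaser, here is a minimal sketch of how an rsync-based, Time Capsule-like snapshot scheme can work. This is illustrative rather than the exact script I run, and the paths are hypothetical: each run creates a new dated snapshot directory, and rsync’s --link-dest option hard-links any file that has not changed since the previous snapshot, so every snapshot looks like a full backup while consuming space only for the deltas.

    #!/usr/bin/env python
    # A minimal sketch of an rsync snapshot backup in the Time Capsule
    # style. The paths are hypothetical examples; adjust to taste.
    import os
    import subprocess
    from datetime import datetime

    SOURCE = "/home/"                      # tree to back up (trailing slash: copy contents)
    DEST = "/backup/snapshots"             # where the snapshots live
    LATEST = os.path.join(DEST, "latest")  # symlink to the newest snapshot

    snapshot = os.path.join(DEST, datetime.now().strftime("%Y-%m-%d_%H%M%S"))

    cmd = ["rsync", "-a", "--delete", SOURCE, snapshot]
    if os.path.islink(LATEST):
        # Hard-link unchanged files against the previous snapshot.
        cmd.insert(1, "--link-dest=" + os.path.realpath(LATEST))

    subprocess.check_call(cmd)

    # Repoint "latest" at the snapshot we just made.
    if os.path.islink(LATEST):
        os.remove(LATEST)
    os.symlink(snapshot, LATEST)

Restoring is then just a file copy from the relevant snapshot directory, which is a large part of the appeal.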

I implemented iTunes across shared network drives, and linked the content to a DLNA server. We had shared family calendars and address books. We had close to an “any-device, anywhere, anytime” capability. Having learnt some lessons, I decided to do some professional-strength things, and built a development environment with a robust source control system and automated build and deploy capabilities.

Pretty sophisticated stuff.

But, things changed.

today

Well, actually, it was not so much a change in the landscape or the technology. That just continued to develop. The change was more my realisation that I had made something that was just too hard to look after. Its sophistication was not matched by its robustness. Further, a Fedora system has a lifespan of about 12 months, given their support policies. The nature of Fedora is to give people the chance to be on the leading edge. They did not offer maintenance releases for older versions, and upgrades, frankly, were reinstalls. If you want a system you can set and forget, but still receive critical updates for, then Fedora is not for you.

The idea of a major upgrade every year – skipping a release each time – was just too much. Worse, new versions had too many changes, and all required extensive refactoring of existing capabilities. I just did not want the responsibility, nor the impost on my time, for such work.

My ISP offers a pretty good service, but it is a domestic service, and it does have outages. When your single point of failure is your communications link, that is not a good place to be. Rest assured that the outage will coincide with high-urgency email usage. Not much fun when your laptop is humming and your servers are humming, but the bit in the middle, the communications link, is down.

As I said earlier, I work in IT, and intellectual property is a very important concept to me. Consequently, everything in our implementation is more or less legal. When I say “more or less”, I will admit to stripping DRM from media so I can use it as I choose – this does not mean distributing it, it means unfettered personal use. Why shouldn’t I be able to watch a bought DVD on my tablet? It also means that I found we were spending a lot of money on licences. Consider the cost of legal copies of Microsoft Office across many devices, even allowing for student discounts. Yes, I know there is an answer to that, and I will talk about it later. I am also a musician of sorts, and the idea of pirated music, or any other pirated content, is anathema. Our network and its content are predicated upon legality, or at least morality. But that comes at a cost: a direct financial cost that I had to take into account.

So I decided to simplify things as much as possible.

A couple of key decisions had to be made. I could have simplified things considerably by making the network homogeneous: all Apple, all Microsoft, or all linux. This was not possible because so much of what my user community (ie the family) uses is not cross-platform, nor were there reasonable, comparable alternatives. One example is our accounting software, which is Windows only. Converting to a new system is just too hard for no great benefit. User retraining? That was going to be fine for some, but a real pain for others, and the responsibility would still fall to me.

Another decision was whether I should in fact do this myself.

So where did I end up?

Importantly, I do not think I have “ended” anything. But I have formulated a strategy for this current iteration of the network. It is a work in progress, and it is my intention to document the journey in these posts. Before explaining the direction, it is worthwhile to reiterate the key themes.

  • Reduce cost of ownership, both financial cost and my time cost.
  • Protect privacy and security to at least a reasonable level. I am sure the CIA will be able to walk through what I have with little effort, and they are welcome to look at the photos of our pets. But the “script-kiddie” crackers will find it a tad more difficult.
  • Provide enterprise-class services for mail and content management.
  • Provide a simple-to-use media content repository.
  • Provide simple-to-use home automation services. This is a new requirement and one that fascinates me. This too will be the subject of a whole entry to itself.
  • Provide social media integration. My toddler is now a university student. If you have anyone of that age, or a teenager, or a tween who will use your home network, then I need say no more. This capability is table stakes today. You are irrelevant if you do not do it.
  • Eliminate subjective justification for adding features. I did a lot of things “because I could”. Poor rationale. I remind myself that good enough is good enough, standard capabilities do most of the job, and every piece of tinkering comes at a support or upgrade cost. I remind my customers that the cost of supporting something is four and a half times the cost of implementing it. Spend a couple of days putting in a tricky feature? Plan on a couple of weeks maintaining it. By all means add features if they make sense, but recognise that they add to your workload.

I have moved email to a hosted Exchange service from Microsoft. For a few dollars a month per user, I get Exchange in the cloud. It works perfectly with native OS X apps, with iOS devices and with Windows machines. Support time and cost to me? About nil. This is of course the Office 365 service, and as a corollary, I get an up-to-date Office suite across all my family devices for a subscription fee that is a tiny percentage of what I paid in licences in the past. I considered the iCloud offering, but it just doesn’t play well with Office clients.

I am moving our family website to this same Office 365. As part of the subscription you get SharePoint, and a public website. Is it great? Probably not. Is it good enough? Probably maybe. I am working on this now, and will document my experience in due course. Given Microsoft’s new position on the public website, I am not sure I am going to pursue this.

I am moving our shared content to SharePoint. I have built a family intranet, and it wasn’t that hard. I had never worked with this technology before, and I keep getting surprised. Some things are just so hard, and others are just so simple, with apparently nothing in between. Applying my philosophy of “no more than you have to”, I am finding that if something is hard, I should rethink what I want to do. So far so good – and this will be the subject of further posts. Before I stopped development and recast the strategy, I was very close to implementing OwnCloud, a wonderful product. But it would sit behind my domestic communications link, and that is what killed it.

I have upgraded the internal servers to CentOS. At this time they are running CentOS 6.7, after an absolutely painless set of upgrades from 6.0, which was my starting point. The CentOS maintenance timeframe is measured in years, quite different to Fedora, so I am happy about that. I checked my logs and found that I had restarted the servers twice in six months, mainly to pick up new kernels. As far as I am concerned, that is a win. I will in due course examine CentOS 7 closely, but the beauty is that I do not have to worry about it just yet. Why CentOS, you ask? Given my experience with RedHat and Fedora, I get what they are about, and can adapt most of my skills to that environment. The CentOS maintenance model suits what I need. The community support is excellent. Could it have been one of the other linux flavours? Probably, but I did not want the cost of learning to use them for what I assessed as limited benefit. CentOS is good enough.
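
If you want to run the same check, the reboot count is easy to pull from the logs. Here is a minimal sketch, assuming the standard last utility is available (it is on CentOS): “last reboot” prints one line per system boot recorded in the current wtmp file, so counting them is trivial.

    #!/usr/bin/env python
    # Count the reboots recorded in the current (unrotated) wtmp log,
    # using the standard `last` utility.
    import subprocess

    output = subprocess.check_output(["last", "reboot"]).decode()
    boots = [line for line in output.splitlines() if line.startswith("reboot")]
    print("%d reboots recorded in the current wtmp log" % len(boots))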

My next post will discuss designing your network, which is the most important starting place. To paraphrase the Cheshire Cat: if you don’t know where you are going, then it doesn’t matter which way you go – you will get somewhere if you walk long enough. Knowing approximately where you want to go makes the exercise so much easier, and the result more maintainable.

about the author

Keith Pfeiffer was born in the UK at an early age and migrated to Australia shortly thereafter. He has a passion for his technology career, literature, music performance, and of all things, Indian cuisine.
