From Mac to Freedom Laptop 3: Data Recovery

Leaving Apple’s Nursing Home

This series is about replacing a MacBook Air with an equally beautiful Freedom Software laptop. It is also about setting up a Freedom Software laptop for the kind of user who wants it to Just Work with the least possible involvement and no interest in how it works.

Part 1 of this series was about the rationale and the hardware.

Part 2 of this series was about choosing and configuring the software.

Part 3: Data Recovery

First I would recover all the generic user files. Most of the documents are in portable formats such as text, PDF, or Open Document Format (used with NeoOffice on the Mac, and with LibreOffice or OpenOffice on the new computer).

After that, I wanted to recover some important data from specific Mac apps:

  • Photos from iPhoto / Apple Photos
  • email from Apple Mail
  • references from Zotero
  • bookmarks from Safari

(Ordered from most to least important.)

Restore Data from TimeMachine or from SSD?

  • from a TimeMachine backup: put together a software solution, copy data off it
  • from the Mac’s SSD: extract the SSD from the Mac, buy a special adapter, copy data off it

From a TimeMachine backup

There was a recent TimeMachine backup, stored on our NAS. As I don’t have a working Mac to run TimeMachine on, I searched for Linux software able to read it. My searches led me to sparsebundlefs plus tmfs, used together to mount the backup as a directory tree. It took quite a few attempts at fighting the permissions system, especially the way FUSE filesystems deal with permissions, before I could see a list of top-level folders, one per snapshot, with one named “Latest” pointing at the latest snapshot. Inside was apparently a complete snapshot of the Mac’s disk filesystem.
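For reference, the chain of mounts involved looks roughly like this. This is a sketch from memory, run as root: the paths are examples, the partition number inside the image can differ, and tmfs’s exact arguments (including the FUSE permission options that caused me so much fighting) should be checked against its README.

```shell
# 1. Expose the sparsebundle's band files as one virtual disk image.
mkdir -p /mnt/bundle /mnt/hfs /mnt/tm
sparsebundlefs /mnt/nas/MyMac.sparsebundle /mnt/bundle

# 2. Map the partitions inside the image (read-only) and mount the
#    HFS+ partition -- usually the second one.
LOOP=$(losetup --find --show -r -P /mnt/bundle/sparsebundle.dmg)
mount -t hfsplus -o ro "${LOOP}p2" /mnt/hfs

# 3. Let tmfs resolve TimeMachine's hard-link structure, giving one
#    folder per snapshot plus "Latest".
tmfs /mnt/hfs /mnt/tm
```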

The TimeMachine backup was not encrypted. The connection from the TimeMachine app to its storage folder on the NAS had required a password, but the data itself was stored in the clear. (By comparison, some backup systems such as Borg encrypt data on the client side before sending it to the server, so that only the data’s owner can decrypt and read it.)
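To illustrate the comparison: creating a Borg repository with client-side encryption is a one-liner (the repository URL here is a placeholder):

```shell
# Create a Borg repository whose contents are encrypted on the client
# before anything is sent to the NAS; the NAS only ever sees ciphertext.
borg init --encryption=repokey-blake2 ssh://nas.local/volume1/backups/laptop.borg
```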

The sparsebundlefs + tmfs software appeared to have given access to all the files. When I started copying them, however, two issues arose. First, the extraction speed was initially terrible, around 0.2 MB/s, with the extractor running on one machine, sparsebundlefs remotely accessing the NAS TimeMachine storage through SSHFS, and rsync copying its output over to the new machine. I suspected the main problem was random access to the NAS’s moderately slow spinning disk, and that the secondary problem may have been the SSHFS access to it.

Rather than measure and diagnose the exact cause, I copied the TM backup folder over to a faster disk (still a spinning disk) on the extractor machine. This copying went much faster, presumably because it was mostly sequential reading. Using that copy directly on the extractor machine, so cutting out SSHFS too, the extraction process was then much faster.

Then the second problem struck. The extracted data was much larger than expected, too large for the disks on the extractor machine and the target machine. It turned out the extractor was not preserving symlinks. It was presenting every symlinked directory as a separate copy of the directory. I did not know what directories (and perhaps some files too) had originally been symlinked on the Mac, and I could no longer boot it to find out.

I could guess some of the symlinks, partly from prior knowledge, partly from using ‘du’ to spot directory trees of identical huge sizes, and partly from online sources where others have catalogued lists of symlinks, especially those created by migrations through successive versions of iPhoto to Photos. I confirmed each guess with ‘rsync --dry-run’, verifying whether the content of one directory was identical to the other. (‘diff -r’ works too but is slower, because it always reads the full file content, whereas rsync takes a shortcut if the file size and timestamp match.)

I ended up manually adding ‘exclude’ rules to my ‘rsync’ invocation. I excluded (in the home dir):

  • Applications/
  • Library/
    • except for “Library/Mail” and “Library/Mail Downloads”
  • Pictures/iPhoto Library*.migratedphotolibrary/
    • an old pre-migration folder that should have contained symlinks
  • Pictures/Photos Library*.photoslibrary/Originals/
    • which should have been a symlink to ‘Masters’

I also excluded a few other files and folders that held nothing interesting and would clutter or confuse the target. Here is the exclude list I used (not mentioning the ‘Library/Mail’ and ‘Library/Mail Downloads’ exceptions).

.android
.bash_sessions
.CFUserTextEncoding
.cups
.DS_Store
.lesshst
.mozilla
.ssh
Applications
Library
Pictures/iPhoto Library Test.migratedphotolibrary
Pictures/Photos Library Test.photoslibrary/Originals
Pictures/Photos Library Test.photoslibrary/resources
Public
Sites

(Your photo library would not have the word ‘Test’ in its name, by default. Mine did, caused by some manual repair by an Apple shop technician years ago.)

For additional speed in transferring a large amount of data to the new laptop, I copied a couple of chunks of it over on a USB memory stick, as rsync over the WiFi connection managed only 5 MB/s (~50 Mbps) even near the WiFi access point. It would have been a good idea to buy a USB-to-Ethernet adaptor for a task like this, which could have gone much faster.

More details on the TimeMachine storage format and on accessing it manually: the “Deep Dive” write-ups by Glenn ‘devalias’ Grant.

From the Mac’s SSD

Reading data directly from the Mac’s SSD would have saved me the time spent fiddling with the sparsebundlefs + tmfs software, and the time spent dealing with the directories that should have been symlinks but weren’t.

Apple used a non-standard SSD connector on some MacBook Air (and Pro) models. We can buy an adapter for the particular Mac model, to connect the SSD to a standard SATA connector, or to a USB-to-SATA adapter.

I ordered an SSD adapter. When it arrived, I got out my collection of security screwdriver bits (various sizes and odd shapes) and found I didn’t have the required tiny 5-pointed star (Pentalobe) bit. Dang.

I will order the special screwdriver because, even though I completed the data transfer, I do not want to sell or dispose of the broken Mac with the private data still on it. (It’s not encrypted. Next time it should be. And indeed I have set up the new computer with disk encryption.)

I have also heard that one can get low-level access to an internal drive through the Thunderbolt port. I have not investigated whether this is possible in my case.

Recover Photos from iPhoto / Apple Photos

The plain JPEG (etc.) files are found in the ‘Pictures/Photos Library.photoslibrary/Masters’ folder.

TODO: Find out if there were also metadata stored separately, e.g. photo album names and comments.

Recover Email from Apple Mail

For mail accounts using IMAP: the mail should be on the mail server. Don’t bother trying to recover anything from the local data.

For mail accounts using POP: the mail is stored only locally and we will want to recover it.

We find an “Apple Mail to dovecot mailbox converter” at https://github.com/pguyot/emlx_to_mbox.

I installed Erlang (as required) and ran it… and it did not work. Here is the output from a test run on a single message:

$ escript emlx_to_mbox.escript --single ~/tm-home/Library/Mail/V4/863E1A15-*/INBOX.mbox/233CA490-*/Data/0/0/1/Messages/100638.emlx 
emlx_to_mbox.escript:13: Warning: erlang:get_stacktrace/0 is deprecated and will be removed in OTP 24; use use the new try/catch syntax for retrieving the stack backtrace
escript: exception error: no case clause matching 
                 {ok,{http_header,0,<<"Return-Path">>,<<"Return-Path">>,
                                  <<"<LISTNAME-bounces+EMAIL=DOMAIN@mailman.DOMAIN>">>},
                     <<"Received: from [10.92.1.161] (HELO SERVER)\n  by SERVER (CommuniGate Pro SMTP 6.0.11)\n  with ESMTP id 399065899 for EMAIL@DOMAIN; Tue, "...>>}
  in function  emlx_to_mbox_escript__escript__1634__919879__991744__2:get_header_value/2 (emlx_to_mbox.escript, line 286)
  in call from emlx_to_mbox_escript__escript__1634__919879__991744__2:process_emlx_file/4 (emlx_to_mbox.escript, line 71)
  in call from escript:run/2 (escript.erl, line 758)
  in call from escript:start/1 (escript.erl, line 277)
  in call from init:start_em/1 
  in call from init:do_boot/3

I have not programmed in Erlang before. Maybe now would be a good time to start?
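In the meantime, the emlx format is simple enough that a rough extraction can be done without Erlang. A sketch, assuming (as the format is commonly described) that line 1 of an .emlx file holds the byte count of the raw RFC 822 message, which is followed by an Apple-specific XML plist that we can discard; the “MAILER-DAEMON” separator line is just a conventional mbox placeholder:

```shell
# Very rough .emlx -> mbox extraction: keep the message bytes, drop the
# trailing plist, prepend an mbox "From " separator line.
emlx_to_mbox() {
  for f in "$@"; do
    # First line = length in bytes of the message that follows.
    len=$(head -n 1 "$f" | tr -d '[:space:]')
    printf 'From MAILER-DAEMON Thu Jan  1 00:00:00 1970\n'
    tail -n +2 "$f" | head -c "$len"
    printf '\n'
  done
}
# usage: emlx_to_mbox Messages/*.emlx > recovered.mbox
```

A real converter would also escape “From ” lines inside message bodies; this is only a salvage sketch.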

Recover References from Zotero

Copying the Zotero folder to the Linux laptop Just Worked. Hooray!

Recover Bookmarks from Safari

TODO.

From Mac to Freedom Laptop 2: Software

Leaving Apple’s Nursing Home

This series is about replacing a MacBook Air with an equally beautiful Freedom Software laptop. It is also about setting up a Freedom Software laptop for the kind of user who wants it to Just Work with the least possible involvement and no interest in how it works.

Part 1 of this series was about the rationale and the hardware.

Part 2: Software

Our leaving Apple is, fortunately for us, much easier than for someone who has both feet firmly planted in Apple’s walled garden.

This particular laptop was being used more like an old-fashioned stand-alone computer than a portal to Apple services. Finding and switching to alternative Freedom Software apps will not be too much of a hurdle.

  1. What Operating System?
  2. What Replacement Apps?
  3. What to Configure
  4. Practise the Configuration

What Operating System?

We’re going for something mainstream and stable and familiar. We’re not going for 100% hard-core freedom such as LibreBoot and Trisquel, important though they are. While there are several good options, for me (long time Ubuntu fan) it’s probably going to be one of:

  • Ubuntu, in its default Gnome form — the most common, generic Linux; stable and widely known and supported.
  • Ubuntu, with customisations (e.g. a MacOS-like launcher).
  • Elementary OS — slick, MacOS-inspired style; based on Ubuntu; generally great for beginners, especially from MacOS; but perhaps too quirky and too niche (compared with Ubuntu) to be the best choice for this situation. (Here’s a review.)

Also considered:

  • PureOS — slick; made with great dedication to software and phone freedom; likely too new, quirky and niche (compared with Ubuntu) to be the best choice for this situation.
  • Ubuntu Budgie — MacOS-like style.
  • Ubuntu MATE — supports MacOS-like, Windows-like and Ubuntu-like layouts; but I’m wary it may not offer the best modern simplicity and future, being based on older Gnome2.

For Ubuntu customisations, we might be looking at just installing a MacOS-like dock such as Dash to Dock. My philosophy is not to care much about the non-functional visual elements of a design, such as its colour theme, icon style and type fonts. Where to find things (apps, settings, files) is more important. Aesthetics do matter too, and the MacOS dock, with its icons standing on a reflective surface, unfolding and bouncing, is undoubtedly beautiful. However, placed at the bottom of the screen it has a serious flaw: vertical screen space is tight on a laptop, and for tasks like writing a document, programming, or even just reading a web page, windows should use the maximum height available. Even with options to maximize a window or auto-hide the dock, docking at the side is a better arrangement.
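With Dash to Dock installed, the side placement can also be set from the command line. A sketch; these gsettings keys come from the Dash to Dock extension schema and may change between versions:

```shell
# Move the dock to the left edge and let it auto-hide, reclaiming
# vertical screen space (Dash to Dock extension settings).
gsettings set org.gnome.shell.extensions.dash-to-dock dock-position 'LEFT'
gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed false
gsettings set org.gnome.shell.extensions.dash-to-dock autohide true
```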

What Replacement Apps?

Replacement user apps:

  • email (Apple Mail)
    • –> MailSpring (FOSS desktop UI, like web mail UI, for IMAP mail accounts)
    • –> Thunderbird? (for recovering local-only (POP) mail from Apple Mail)
  • web browser (Safari)
    • –> Firefox
  • photos (Apple Photos)
    • –> Shotwell?
  • Zotero reference manager
    • –> Zotero also works in Linux

Configuration, Admin:

  • access to network-shared documents folder
    • –> SMB share: also works in Linux
  • backup (TimeMachine)
    • –> Borgmatic (efficient external backup)
    • –> Cronopete? (friendly UI but limited, internal?)
    • –> TimeShift? (for system configuration)

What to Configure

User Apps

  • MailSpring
    • gmail account
    • other account
  • Thunderbird?
  • Firefox
    • create Firefox account
    • plugin: Bitwarden
    • plugin: Adblock Plus
    • plugin: Zotero
  • Shotwell
  • Zotero

Admin

  • machine name, user name, …
  • remote admin
    • ssh
    • xdmcp or rdp or xspice?
  • backup: Borgmatic
  • backup: TimeShift?
  • online accounts (Ubuntu config)
    • Ubuntu One (needed for Livepatch)
    • Google account?
    • our self-hosted Nextcloud (calendar, address book, …)
  • external accounts (manually)
    • our shared documents folder (NFS/SMB share): open & bookmark
  • printer
    • install hplip (for HP printers), hplip-gui (esp. useful for scanner)
    • connect to our local and network printer(s)
  • preferences
    • lock / power
    • appearance, …
  • remove unwanted apps
    • games, favourites, …

Additional Apps Wanted

  • Bitwarden
  • syncthing?
    • share with phone
    • share among users
  • KDEConnect?

Practise the Configuration

The goal is to configure a laptop that is ready to use, not one where the user will need to configure everything bit by bit as they start using it.

We can practise setting everything up, before doing it for real. The practice run is going to involve lots of manual work and trial and error. We can document it, and try to automate parts of it, to help make the real configuration quick and smooth.

For the practice run, we can use dummy accounts wherever we connect to external systems. For example, create an external email account that is not using the user’s real email address or password. We don’t want the user’s real email to be spammed with all our configuration attempts, certainly not the trial and error.

When it comes to the final, real configuration, the user will have to be involved. Their real email account will receive various configuration emails, and they will have to create and store some new passwords. Depending on their attitude, the user will have to choose whether they want to be involved hands-on, choosing their own passwords and so on, or whether they want us to do it all for them. This is the part that needs to go quickly and smoothly.

What to use for a test environment?

  • additional user account on own system
  • virtual machine (VM)

We could practise some of the configuration inside an additional user account on our own laptop, assuming we use a similar enough operating system and don’t mind installing the required apps on it. That is a good start. Better would be to create a VM, in which we could practise every part of the setup, except for any hardware not present on the VM, such as perhaps a fingerprint reader or a printer.

We will surely need to fix and adjust the configuration afterwards. Remote admin access will be useful, if the user is willing. There are broadly speaking two levels to choose from, depending on the user’s relationship to the admin:

  • remote desktop (via e.g. VNC):
    • the user needs to be already logged in and accepts our connection at their discretion;
    • the user can watch what the admin is doing, though they might well not understand it;
    • the admin can thereby access the user’s data and account settings in the same way the user would;
    • the admin may access system configuration (via sudo).
  • remote log-in, by command-line (via SSH) and/or graphical (via e.g. XDMCP):
    • the user need not be involved nor see it;
    • the admin has the same control as if they had the machine physically, which means full access to the system configuration, and, depending on how the system was set up (encryption), likely also access to the user’s own data and account settings, to be used with the user’s consent, of course.

Switching from a MacBook Air to a Freedom Software Laptop

Leaving Apple’s Nursing Home

This article tells how we replaced a MacBook Air with a freedom-software laptop, aiming to keep it delightful to use and to carry about, while standing up in support of the principles of freedom of the users, freedom from the control and lock-in that Apple wields over its users, its subjects.

The Opportunity: the old MacBook Air dies.

The MacBook Air showing a “panic” message at switch-on and dotted lines across the screen

It’s terminal. The diagnosis is that the soldered-on RAM has failed. Technically speaking it could be repaired, but it’s not worth it. We need a new laptop.

This is the opportunity. We have to make an effort to replace this and set everything up again, one way or the other, so can we make the effort to switch to freedom software at the same time? Why should we?

The Choice: Apple or Freedom?

While we should choose our direction according to our values and principles, we all find it hard to see and evaluate the big picture.

Apple promises to sell us a world in which “our” computer systems do what we want and what we need, easily and quickly and beautifully. At first sight, that is indeed what their products look like. Only when we dive deeper into their ecosystem do we begin to learn how controlling they are. Devices we buy from Apple are not “ours”: they are tightly controlled by Apple. Apple restrict both what we are allowed to do (legal controls) and what we are able to do (practical controls). Let’s see an example of how this works out.

As long as we play along inside Apple’s walled garden, everything smells of roses. Now let’s try to message a friend who has not bought Apple, or share photos with them. Suddenly we hit the wall. Our friend is Outside, and Apple has locked the doors. But it’s OK, we say, they’re not blocking us: look, we just need to install and sign up to Facebook’s WhatsApp or Google’s Photos, because that’s what our friend is using. That seems to work. Why? Because Apple chooses to unlock the door for us to install those particular apps, according to agreements with those particular vendors. Apple only lets us install software from their own store, and they only let in software that conforms to strict Apple-centric rules. That’s very strongly enforced on iPhones, with MacOS moving swiftly in the same direction. The marketing message says this is all to protect us from nefarious cyber threats. Who could deny that there is a grain of truth behind that? Yet the unspoken reality is that they are mainly protecting their control over our digital life.

Besides, installing another app to meet a friend outside this garden only “works” in a crude way: it still does not allow us to invite our friend to meet us in our current messaging system. Instead we have to go and visit them in one of those separate, equally proprietary walled gardens, where we can’t share our photos and contacts and messages directly.

It’s not only Apple. Google and Microsoft are doing it too, while Apple and Amazon wield the tightest restrictions over their users. If you were not aware how bad it is, try reading up about how the vendors can remotely install and uninstall software on what they like to call “our” device.

The Future of Computers

Two of the most readable short articles illuminating this sad state of affairs are Your Phone Is Your Castle and The Future of Computers: The Neighborhood and The Nursing Home by Kyle Rankin. The author is the chief security officer of Purism, one of several small companies that are passionately contending to change the landscape by offering a digital life characterised by principles of freedom. Freedom in the sense that we the users are in ultimate control of our digital data systems, not the other way around. “As a social purpose company, Purism can prioritize its principles over profit. The mission to provide freedom, privacy, and security will always come first.”

Another player is /e/ Foundation (“Your data is YOUR data!”), bringing us de-Googled Android phones. These phones can run without any dependence on or control by Google: instead the user is in ultimate control. The irony of Android being marketed as an “open source” operating system is that only parts of it are open source, and people have had to expend a huge amount of effort to build replacements for Google’s proprietary parts. But the efforts of many volunteers over many years, now beginning to be augmented by some small companies including /e/, are paying off, and these alternatives exist. Read more in, for example, a late-2020 interview in The Register.

These companies are formed from small groups of people following their beliefs. Together they are building the next wave of the freedom software movement that is perhaps most widely known as the Linux world. Taking the idea far beyond freedom to re-use and re-mix just individual software programs, they are bringing freedom now to the world of connected digital services that we use to store our family memories and to communicate with one another.

Freedom Software Laptops

Back to laptops.

A few big-name manufacturers make a few of their models available to buy with Linux pre-installed. Sadly they hide rather than promote this option, seeming to consider it merely a necessity to satisfy certain business customers, and offering little beyond a basic default installation which could easily be done at home.

The best way to support freedom software, and to get a machine that is already properly set up for it, is to buy from one of the small companies that specialise in it.

A DuckDuckGo web search for “Linux laptops” found plenty of starting points, some articles listing the favourite mainstream laptops that people like to run Linux on, others listing the specialist companies that sell Linux laptops.

I ended up looking at both alternatives: buying a mainstream laptop, likely second-hand, or buying a new laptop from a specialist. The category I am looking for this time is slim, ultra-light or “ultrabook”, around 14″ screen size, to replace the feel of a MacBook Air.

The best-liked mainstream laptops this year seem to be, first, Dell’s XPS 13 series and, second, Lenovo’s ThinkPad X1 Carbon series. Each covers a wide range of specs.

Specialist Linux laptop vendors include System76 (such as their Lemur Pro), Purism (e.g. Librem 14), and Pine64 (e.g. Pinebook Pro), along with several more. Some make their own hardware, and others buy mainstream or OEM hardware and customise it. Most offer a choice of operating system, all based on well known open source OS’s (the GNU/Linux or *BSD families), sometimes customised or own-branded.

Then I found Laptop with Linux.com, a trading name of Comexr B.V. in the Netherlands. They sell a range of laptop styles, all based on the OEM brand “Clevo”, and have a lovely set of customisation options ranging from hardware components to setting up disk encryption, choosing installed applications and choosing my user login name. None of that is anything I couldn’t do at home, but it shows they go further than a basic default installation of the OS and it genuinely will save me some time and effort. For me, they offer the extra advantage of shipping with UK tax and duties already included.

Second-hand? Tempting. New? Sensible.

To begin with, I could not accept the cost of buying new, as machines I considered decent spec were available for hundreds of pounds less. Eventually, I re-balanced my assessment in favour of buying something that is intended to last for years, and I mean ten years. The hassle of changing from one computer to another, setting everything up and getting used to the differences, can be realistically valued at tens of hours. From that point of view, it made sense to buy something new and high spec so that it doesn’t seem too terrible after many years.

So it is that I am ordering the Clevo L141MU 14-inch Magnesium Laptop. I will go for a mid-to-high hardware spec, particularly focusing on speed because I want it to be pleasant to use, and mid-level RAM and SSD capacity because this is an upgradeable computer and the prices of those will come down. RAM in particular can be upgraded later with no hassle. Upgrading the SSD later would require externally copying its contents to the new one which might be an evening’s work.

It is even lighter than the MacBook Air it replaces, and just fractionally less thin.

Running an OpenWrt Router

I am running an OpenWrt open-source router, at last.

OpenWrt: Wireless Freedom

Dave kindly donated me the hardware three years ago, when I spent many happy and frustrating hours installing OpenWrt for the first time, bricking it, recovering by connecting a serial port inside it, and eventually finding the OpenWrt configuration interfaces at that time were just too complicated for me to navigate.

It sat on my desk ever since then, unused.

What changed?

The old noddy little router

This week, our noddy little ISP-provided router keeled over.

All I did was try to change its upstream DNS server addresses to point to AdGuard’s ad-blocking service. There was a simple web UI to enter the addresses but, after I did so, the web UI promptly and permanently died and would not come back. Its DNS gateway function and SSH access died too, while some functions, such as basic routing and port forwarding, continued. I tried power-cycling the router, of course, but avoided doing a factory reset, because then I would lose the port forwarding that provides access to my self-hosted services such as Matrix and contacts and calendar, and I could not be sure of reconfiguring everything. I was able to regain internet access temporarily by manually configuring each of our devices to use external DNS server addresses instead of the router’s local address.

Well, I didn’t like that router anyway. Its UI was slow and awkward, its features were very bare and its WiFi was weak. (It was a Sagemcom 2704N, also branded PlusNet and Technicolor.)

So it was that I took a second look at this TP-LINK TD-W8970 router.

A pleasant surprise awaited: I found that OpenWrt had just the previous week released a major update, a 2021 version, a year and a half after their previous 2019 version, and it looks much more polished. A quick in-place firmware upgrade, followed by many hours figuring out how to make and manage the configuration, resetting, starting again from defaults, and it’s now all working: ADSL WAN connection, wired and wireless networks, my port forwarding rules for my servers, and some bits of static DHCP and static DNS hostname entries.

Where the previous router had hung lopsided from one screw, to make a better impression and improve its chances of acceptance by the family I screwed it neatly to the wall and tidied the wires.

The Ordinary User May Appreciate…

TP-LINK TD-W8970 v1
  • ad-blocking
  • stronger WiFi signal now covering the whole house and garden
  • faster

None of these benefits seen by the ordinary user are unique to OpenWrt, of course.

Ad blocking was the trigger for this whole exercise. I had previously been considering self-hosting either Pi-hole or AdGuard Home. Recently I learned that the AdGuard DNS service is currently available free of charge, simply by setting it as the router’s DNS server address (or, less conveniently, by overriding the setting on individual devices). While less comprehensive and customisable than a self-hosted ad-blocking DNS server, for the time being the convenience and simplicity of this solution win.

The new router is faster in a few ways: faster WiFi connection speeds; faster access to self-hosted services such as backups enabled by gigabit ethernet (up from 100 Mbit) for the wired connection; and (probably) some faster software operations such as DNS where the previous router often seemed responsible for delays of several seconds.

The Self-Hoster Appreciates…

Configuration Example

Where OpenWrt shines is in the features I use for self-hosting services, and how I will be able to manage it over time.

Because it’s open-source software:

  • reassurance that the software cannot be abandoned at the whim of some company;
  • strong support for open and standard and modern protocols, e.g. mesh WiFi, encrypted DNS standards, standard Unix admin tools;
  • likely to be upgraded to add new features, support new security measures;
  • I can keep my configuration if I need to buy new or different hardware, because the same software runs on many devices;
  • many optional add-on features contributed by community members;

Because it’s software for professionals:

  • full IPv6 support, alongside IPv4;
  • strong WiFi features, e.g. multiple networks (trusted vs. guest);
  • strong network protocols support, e.g. tagged VLANs, switch control protocols;
  • configuration stored as text, so can be managed by external tools like Ansible and version control, and re-configured from scratch by one automated script (“configuration as code”, “infrastructure as code”);
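For example, the whole configuration lives under /etc/config as plain text, so a first step toward managing it externally is simply to snapshot it into version control (the hostname here is an assumption; OpenWrt routers answer to “openwrt.lan” by default):

```shell
# Snapshot the router's text configuration into a local git repository.
scp -r root@openwrt.lan:/etc/config router-config
cd router-config
git init
git add .
git commit -m "OpenWrt config snapshot $(date -I)"
```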

Things That Went Wrong

Bricking the device during initial installation

Part of the OpenWrt TD-W8970 installation instructions, which are in a linked forum post, advised me to use commands like “cat openwrt.image > /dev/mtdblock1” to install OpenWrt initially. What appears to have gone wrong is that this did not successfully write all of the image file to the flash memory; some blocks of flash remained blank. Then, when rebooting, the router just hung. I got in touch and was advised there are more reliable ways to do it. To recover, I had to buy a serial-to-USB adapter, open up the router, solder on a serial header, and use the serial port recovery method.

Some web sites would not load

At first, a few ordinary web sites failed to load.

According to a note near the end of the user guide “Dnsmasq DHCP server” page:

“If you use Adguard DNS … you need to disable [DNS] Rebind protection… If not, you can see lot of this log in system.log, and have lag or host unreachable issue.”

"daemon.warn dnsmasq[xxx]: possible DNS-rebind attack detected: any.adserver.dns"

I have read a lot more about this issue since then, to understand it better. I changed the setting, as suggested, and everything seems to work OK now.

I wish this issue would be explained more clearly, and with references. I am still not entirely comfortable that disabling the rebind protection is the best that could be done: it seems to me it would be better if we could accept just the “0.0.0.0” responses that this DNS sends while still protecting against any other local addresses.
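For the record, the change amounts to something like the following uci commands (a sketch; the AdGuard DNS resolver addresses should be checked against their current documentation):

```shell
# Use AdGuard DNS upstream instead of the ISP's resolvers, and disable
# dnsmasq's rebind protection as the user guide suggests.
uci set network.wan.peerdns='0'
uci -q delete network.wan.dns
uci add_list network.wan.dns='94.140.14.14'
uci add_list network.wan.dns='94.140.15.15'
uci set dhcp.@dnsmasq[0].rebind_protection='0'
uci commit
/etc/init.d/network restart
```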

WiFi Would Not Connect

After a while I decided to change the WiFi channel selection from 11 to Auto. Next day, our devices would not connect. Some of them would briefly attempt to connect and immediately disconnect, while others would not even show our WiFi network in their list.

It turned out the router had switched to channel 13. From what I have been able to learn, this is a valid channel to choose, although in the USA there are restrictions on the power level on channels 12 and 13. A lot of writers strongly advise only choosing among 1, 6, and 11. The rationale for this advice seems to originate from one particular study that may not be relevant in today’s common scenarios; some writers disagree and it’s not really clear. I wonder if the problem is that the firmware in many devices may not “like” connecting to channels above 11.

Whatever the precise cause, switching back to manually selected channel 11 seems to have solved the problem.

Struggles

It was far from a breeze to install, and far from a breeze to configure.

The OpenWrt web UI (LuCI)

LuCI is still not clear and helpful, although much improved. Examples:

  • understanding how to set upstream DNS (on WAN interface, in LAN interface, in DHCP settings, in all of these?);
  • same for how to set local domain name (3 places to choose) and what the consequences are.

Poor documentation

I struggled with the OpenWrt “user manual”. For example, many of its pages amount to “help for FOO: to accomplish FOO, I pasted the following text into the config files of some unspecified version of OpenWrt”, without explaining what exactly FOO was meant to accomplish, or its trade-offs and interactions.

Configuration as code

I discovered by accident that LuCI can show the commands corresponding to pending settings changes, if you click the misnamed “unsaved changes” button which appears after pressing “save”.

That’s a great start. It could be developed into something so much better: a real configuration-as-code methodology. Nowadays that should be promoted as the primary way to manage the router. Instead of just “backup” and “restore” there should be facilities such as diffing the current config against a backup and reverting selected differences. Tools should be promoted for managing the config externally, e.g. from a version control system or Ansible.
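Even today a crude version of this is possible from the command line, using `uci export` together with an external version-control checkout (the router address and paths here are illustrative):

```
# On the router: dump the whole UCI configuration as plain text
uci export > /tmp/config.txt

# On a workstation: pull the dump into a git checkout and review the drift
scp root@192.168.1.1:/tmp/config.txt router-config/config.txt
cd router-config
git diff                                # what changed since the last snapshot?
git commit -am "router config snapshot"
```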

Inconsistent defaults

When LUCI writes a config section, it changes settings that the user didn’t change. It seems to have its own idea about what a default config looks like, and this is different from the default config files supplied at start-up. This makes it difficult to manage the settings in version control. These spurious changes are shown in the LUCI pending changes preview. (It would be helpful if that preview included the option to revert selected changes, although that would not go far enough.)

How it should be done: The LUCI settings should always match the text config defaults, and that should be tested. This would come naturally when adopting configuration-as-code as the primary management method.

Finding what (A)DSL settings to use

Finding the settings to use for the ADSL connection was hard. My ISP, PlusNet, published a few basic settings (VPI/VCI, mux, username and password, etc.), but OpenWrt required other settings as well, and some of the published settings didn’t exactly match OpenWrt’s options.

The OpenWrt ISP Configurations page seems quite useful, but it says, for example, “Annex A, Tone A”, whereas LuCI doesn’t have an option named exactly “Annex A”: its options include “Annex A+L+M (all)”, “Annex A G.992.1”, etc. Nor does it have an option for “Tone A”, but instead “A43C+J43+A43”, “A43C+J43+A43+V43”, etc. This makes it really frustrating for anyone who is not a DSL expert: I did not know which of the available options would work and which would not. When on my first try it would not connect (showing some sort of authentication error), I did not know which settings could possibly be the cause.

After a lot of reading and experimentation I noticed that the generated text configuration corresponding to each LUCI option gave me a strong clue: the generated config for tone “A43C+J43+A43” used the option code value “a” whereas for tone “A43C+J43+A43+V43” it used the code value “av”. That strongly suggested I should select the former. And similarly for “Annex”.

Finally I came across a small comment between two example configurations in that same page, that said I must also delete the ATM bridge that was set up by default. The LUCI description of “ATM Bridges” says, “ATM bridges expose encapsulated ethernet in AAL5 connections as virtual Linux network interfaces which can be used in conjunction with DHCP or PPP to dial into the provider network.” Not great. That didn’t help me at all.

After changing settings as best I could, and deleting that ATM bridge, it then worked.
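Putting the clues together, the resulting text config looked something like this (a sketch from memory; the VPI/VCI, encapsulation and credentials are whatever your ISP publishes, and the section layout may differ between OpenWrt versions):

```
# /etc/config/network (DSL-related sections only)
config dsl 'dsl'
	option annex 'a'        # from the "Annex A G.992.x" option
	option tone 'a'         # from the "A43C+J43+A43" option

config interface 'wan'
	option proto 'pppoa'    # PPP over ATM; no ATM bridge section needed
	option encaps 'vc'      # VC-Mux
	option vpi '0'
	option vci '38'
	option username 'user@isp.example'
	option password 'secret'

# (the default "config atm-bridge 'atm'" section was deleted)
```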

How it should be made easier:

  • define a way of publishing a DSL configuration online as a structured code block (could be the OpenWrt config language, for a start);
  • make LUCI able to accept a whole DSL definition in a single cut-and-paste operation (a text config box);
  • start a database of these (encourage this to be maintained by the community; make it distributed);
  • add a “search in database(s)” function for these in LUCI.

Decoupling Identity in Matrix

For individuals, Matrix's identity scheme creates lock-in.
How to fix?

I love Matrix. I think it’s the way forward for libre/open (as in freedom) personal communications, with a real chance to free users from the lock-in of popular silo messaging systems like Whatsit, Facepalm and Twiddle. I run all my own messaging (except email) through Matrix, with bridges to the silos that my friends still use as well as SMS and IRC.

At present, unfortunately, there is a big obstacle to me recommending any friend or family member to sign up to Matrix: identity and server lock-in.

An Open system with lock-in? Ugh. What went wrong?

To use a silo, you register an account and either you are identified by your telephone number or you choose a username. (You can then set some account options, usually including a “display name” which you can change from time to time.) Now, what if at some point you dislike that silo’s rules or advertising or charging? You’re stuck. They deliberately designed the system so that nobody has any options other than continue or quit.

To use Matrix, you register an account on a server. You first need to choose a server, which is identified by its Internet domain name such as matrix.org, or mozilla.org, or my-own-server.my-name.me if you run your own server. You can find out which servers are available for public use. Some are free of charge and others require payment, similar to email services. Having chosen a server, you pick a username and are then identified globally as @username:servername. (You can also choose a display name.)

Matrix right now is great for an organization: running their own server on their own domain, they control their own rules and namespace for users, rooms and groups.

If you are a normal person, your default option is to register a username on matrix.org. (In principle there will be other public servers but there are hardly any so far.) Then, that username is tied to that server forever, or at least until the Matrix developers invent a way out.

This lock-in is different from a silo. At least with Matrix you can create a new account on another server, to get away if you don’t like the old one. What you can’t do (yet) is migrate your old account to the new one. Not in any way. See “Account Migration” below.

Bring Your Own Domain Name

One way to mitigate the account migration problem is to register an account under a server domain name that you control.

The point is that the user then controls their own domain name registration, which is held directly with a domain registrar, outside the control of any Matrix or other service provider. The user can keep their own domain and have it served by a new server in the future if the current server becomes unsuitable or unavailable.
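Matrix already has one building block for this, assuming you control example.com: “.well-known” delegation, where a small file served from your domain points at whichever homeserver currently serves it. The domain names below are placeholders. The file at https://example.com/.well-known/matrix/server tells other homeservers where federation traffic should go:

```
{ "m.server": "matrix.example.com:443" }
```

A companion file at /.well-known/matrix/client points clients at the right server. Moving to a new hosting provider later means changing these files, while your user ID stays @username:example.com.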

How feasible is this, today?

  • A geek with time and skills can register a domain name and run their own server.
  • A person with some time and effort and money can register a domain name and pay for a hosted matrix server. The cost and effort is broadly similar to setting up a new phone or internet or TV service. It does require some investment of thought and learning what it’s all about.
  • A normal Whatsit user is used to “free and easy”, and there is currently no such option for them.

Hosted servers come with significant limitations on customizing your server. For example, on modular.im (currently the main hosting option), AFAIK you cannot run the Whatsit bridge.

What can we do to improve things for the normal user?

  1. make it cheap (not necessarily free)
    • Build a server that can serve lots of different people’s personal-domain user accounts. (This may be called a “multi-tenant” server design.) @mfilipe:matrix.org mentioned this today on #matrix-dev:matrix.org.
    • Spread the word that it’s sensible to pay for a service so that you are not the product being sold, unlike the free silos.
  2. make it easy
    • Build services in which a new user can set up a domain name and a matrix server or account at that domain, and pay for both with one payment. (Major providers of some services like email offer this.)
    • For people migrating from a specific silo, offer ready-to-use setups (bridging) and messaging (intro, and suggestions for how to tell the silo friends about it) that are customised for that case.
    • Make it easier for geeks to run matrix servers for their friends and family.

Who should be doing this? Not necessarily the Matrix Foundation or New Vector (who make Riot and Modular.im among other Matrix things). They have limited resources and their own priorities. It’s an open-source system so anyone wanting these things should get involved and start making them.

Good places to discuss and get involved in the self-hosting side include #matrix-docker-ansible-deploy:devture.com and #matrix-self-host-onboarding:chat.weho.st .

Account Migration

It would be useful to be able to migrate an old account to a new one in ways like:

  • forward messages to the new address
  • inform all contacts of the new address
  • set up an auto-reply
  • copy account settings
  • copy message history
  • copy a list of contacts

I was thinking about what is possible in email, and what regrettably isn’t available. Migrating an email account is not at all simple, but most of the mini-features above are possible to some extent. One thing regrettably missing in the email system is a way to automatically inform senders to an old account that they should update your address and re-send to a new account. (Like an HTTP “redirect”.)

It would be useful to develop those kinds of mini-features for making the transition to a new matrix account smoother. That might be a feasible short-term mitigation.

However, there is a better long term solution: decoupling accounts from identity.

Decoupling Identity

[TODO: Write about decoupling identity.]

Just Another Proprietary Service

One of my favourite open source institutions is considering replacing their use of an open source tool with a proprietary service “donated” for “free” by its vendor.

It’s time I just said what I think: Encouraging open-source contributors to adopt another proprietary sponsored service is against the principles I want the institution to uphold.

Pootle is an open source tool that assists with human-language translation. Contributors to a project use it to write and contribute translations of open source software into their local language. As with many open source projects, it is under-resourced. Proprietary services look more attractive if we look at measures such as the immediate user experience and the maintenance burden.

Yet, when we ask contributors to use any “donated” proprietary service, we make those users and the FOSS community bear its cost in the domains of lock-in and advertising. I am disappointed to hear that my favourite institution is seriously considering this. (This is not about translation tools specifically; I feel the same about all the user-facing tools and services we use.)

Don’t get me wrong: I am not suggesting this goes against the institution’s policies, and of course there are hard-to-ignore benefits to choosing a proprietary service. I can’t imagine exactly how much pain it is trying to maintain this Pootle instance. On the other hand I do know first-hand the pain of maintaining a lot of other FOSS that I insist on using myself, and I sometimes wonder if I’d like to switch to a commercial this-or-that. At those times I remember how much I value upholding the open source principles, and I choose to stick with what is sometimes less immediately convenient but ultimately more rewarding.

Time after time I observe the FOSS community suffering from getting sucked into the traps of commercial interest like this. A FOSS project chooses to use a commercial service for its own convenience, and by doing so it feeds the commercial service, increasing familiarity with it and talk about it (forms of lock-in and advertising), decreasing the development going into competing FOSS services, and making it more likely that others will follow. I observe FOSS people tending to concentrate on the short-term benefit to their own project in isolation, even when they are peripherally aware that their field would benefit in the long run from working together with others on the tools and services that they all need.

What cultural process could have led the institution to this place?

“Current tools are poor… Let’s try another ‘free’ service to quickly overcome our problem.”

I feel like there’s a cultural step missing there. Where is the step that says,

“We are hundreds of open source developers needing a good translation service. Other open source developers are trying to develop good translation services for people like us. What a great fit! Let’s work together!”?

I would rather join and contribute to a new project group whose purpose is to provide an Open service (in this case for translation) for the institution’s projects to use, doing whatever development, customization, maintenance and IT infra work it needs depending on the state of the available open solutions.

To fill in the missing step, I feel we need to introduce a culture of speaking out at a membership level to say, “Here’s a challenge; who can volunteer to form a group to solve it?” and encouraging members to think of working together on communal service provision projects as a normal part of the institution’s activity.

By working closely with the FOSS people who want to provide a service that we need, our contribution to the upstream software projects would benefit others for the public good, and more generally we would foster mutually beneficial growth and normalization of adoption of FOSS technologies.

I’m not saying it isn’t hard to get the necessary contribution level to make a difference, or that folks haven’t tried before. (Some communal service projects are used in this institution, but they tend to be small scale in-house projects rather than collaborations with other FOSS projects.)

How can we drum up support for doing it the FOSS way?

My Own Video Calls

For a long time I’d been looking for free (libre) alternatives to Skype. I think I’ve found what I’m looking for.

The French seem to have a passion for liberty, and have done some good work in this area. I’d heard of Framasoft last year and then I listened to a talk from them at FOSDEM. [*]

They have taken many open-source software services in the realm of personal information management, such as shared calendar, address book, photo sharing, and blogging, and packaged them into “Frama-” branded services that they encourage French people to use. Some are very popular.

For video calling they have FramaTalk.

For each service, as well as running a server for free (gratis) public use, they also encourage anyone else to install and run the same software on one’s own server: there is a button leading to instructions for doing so, on their front page, under “Cultivez votre jardin” (“cultivate your garden”).

The open-source software behind FramaTalk is called Jitsi Meet. It sets up a WebRTC video call between the web browsers of two or more participants. To use it you just visit the server home page and choose an identifier (“le nom du salon”, the room name), which can be a random string such as the one it suggests, or any word you type (I chose “julian”, for example), and it appends that to the URL. Or you can go straight to that URL in the first place, as that’s all the home page does.

(Google Chrome and Mozilla Firefox already support WebRTC. If you are using Internet Explorer or Safari you will need a WebRTC plug-in such as Temasys. Safari automatically prompts you to download it, and then installing it is quick and easy. I haven’t seen what IE does. Apparently a website needs to be “tweaked” to use such a plug-in. I expect Framatalk has the tweak.)

When your browser requests the page from that address, the Jitsi Meet server running on framatalk.org serves the small web page that presents the user interface, and also reveals the addresses of other browsers that are currently connected to the same “room”. The actual video and audio and text chat then goes directly between your browser and the other participants’ browsers, not through the Framatalk server. Roughly speaking that’s how WebRTC works. As a consequence, the server can be very lightweight.

My next step is to install an instance of Jitsi Meet on my own server, so that instead of using framatalk.org I can talk at an address I own — talk.foad.me.uk for example.

I’ll write about that in another post.

Go on, give Framatalk a try!


* (The speaker had his slides in French and was talking English and saying how he wanted to invite the wider world to join in the efforts. I’m thinking of offering my help in providing an English translation of the text on some of their sites. Maybe.)

Click “Edit the Source Code”

How Open is the Source?

Each time I want to modify or fix some open-source application that I’m using in Ubuntu, it takes me a long time to find the correct source code, download and unpack it, install required dependencies, and figure out how to build and run it. I usually use either Ubuntu Software or Synaptic package manager to install an app, so can I also use those to install its source code? No, I can’t. I haven’t fully learnt to take advantage of the tools that are available to automate parts of this process, and I should. But nevertheless…
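For comparison, the closest thing Ubuntu offers today is on the command line, assuming deb-src entries are enabled in the apt sources (the package name here is just an example):

```
# Fetch and unpack the source package of an installed app
apt source gnome-calculator

# Install everything needed to build it
sudo apt build-dep gnome-calculator

# Build unsigned binary packages from the unpacked tree
cd gnome-calculator-*/
dpkg-buildpackage -us -uc -b
```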

This is how easy it should be for anyone to modify an open-source software application on an open-source OS:

  • While running the app, click “Edit the Source Code”. (This button appears in the launcher, perhaps, when it is available.)
  • Ubuntu downloads the source code and runs an IDE with the source code loaded.
  • At this point, I can edit the source code if I wish.
  • The IDE is configured so that clicking its “Build” button re-builds the software, and clicking its “Run” button runs the built version in place of (or alongside) the system-installed version.

The key is zero configuration effort for the user to get to this point: running the locally built version.

The power user will want to configure the choice of IDE and other details, but that’s secondary.