From Mac to Freedom Laptop 3: Data Recovery

Leaving Apple’s Nursing Home

This series is about replacing a MacBook Air with an equally beautiful Freedom Software laptop. It is also about setting up a Freedom Software laptop for the kind of user who wants it to Just Work with the least possible involvement and no interest in how it works.

Part 1 of this series was about the rationale and the hardware.

Part 2 of this series was about choosing and configuring the software.

Part 3: Data Recovery

First I would recover all the generic user files. Most of the documents are in portable formats such as text, PDF, or Open Document Format (used with NeoOffice on the Mac; LibreOffice or OpenOffice on the new computer).

After that, I wanted to recover some important data from specific Mac apps:

  • Photos from iPhoto / Apple Photos
  • email from Apple Mail
  • references from Zotero
  • bookmarks from Safari

(Ordered from most to least important.)

Restore Data from TimeMachine or from SSD?

  • from a TimeMachine backup: put together a software solution, copy data off it
  • from the Mac’s SSD: extract the SSD from the Mac, buy a special adapter, copy data off it

From a TimeMachine backup

There was a recent TimeMachine backup, stored on our NAS. As I don’t have a working Mac to run TimeMachine on, I searched for Linux software able to read it. My searches led to using sparsebundlefs plus tmfs to mount the backup as a directory tree. It took me quite some attempts at fighting the permissions system, especially the way that FUSE filesystems deal with permissions, until I could see a list of top level folders, one per snapshot, with one named “Latest” pointing at the latest snapshot. Inside there was apparently a complete snapshot of the Mac’s disk filesystem.
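For the record, the mounting procedure ended up looking roughly like this. It is a sketch from memory with example paths, assuming the NAS share is already mounted at /mnt/nas: sparsebundlefs presents the sparse bundle as a single flat ‘sparsebundle.dmg’ image, the HFS+ partition inside that image is loop-mounted read-only, and tmfs then resolves TimeMachine’s hard-link structure into an ordinary directory tree.

```shell
$ mkdir /tmp/bundle /tmp/hfs /tmp/tm

# Present the sparse bundle as one flat disc image
$ sparsebundlefs /mnt/nas/MyMac.sparsebundle /tmp/bundle

# Attach the image read-only, scanning its partition table;
# the HFS+ filesystem is typically the second partition
$ sudo losetup -Pfr --show /tmp/bundle/sparsebundle.dmg
/dev/loop0
$ sudo mount -t hfsplus -o ro /dev/loop0p2 /tmp/hfs

# Map the hard-link-based snapshots into a plain directory tree
$ tmfs /tmp/hfs /tmp/tm -ouid=$(id -u),gid=$(id -g)

$ ls /tmp/tm
```

The -ouid/-ogid options (and, if another user needs access, ‘allow_other’ plus ‘user_allow_other’ in /etc/fuse.conf) were the main levers in the permissions fight mentioned above.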

The TimeMachine backup was not encrypted. While the connection from the TimeMachine app to its storage folder on the NAS had required a password, the data inside was not encrypted. (By comparison, some backup systems such as Borg encrypt data on the client side before sending it to the server, so that only the data’s owner can decrypt and read it.)

The sparsebundlefs + tmfs software appeared to have given access to all the files. When I started copying them, however, two issues arose. First, the extraction speed was initially terrible: 0.2 MB/s, with the extractor running on one machine, sparsebundlefs remotely accessing the NAS TimeMachine storage through SSHFS, and rsync sending its output over to the new machine. I suspected the main problem was random access to the NAS’s moderately slow spinning disk, and a secondary problem may have been the SSHFS access to it.

Rather than measure and diagnose the exact cause, I copied the TM backup folder over to a faster disk (still a spinning disk) on the extractor machine. This copying went much faster, presumably because it was mostly sequential reading. Using that copy directly on the extractor machine, so cutting out SSHFS too, the extraction process was then much faster.

Then the second problem struck. The extracted data was much larger than expected, too large for the disks on the extractor machine and the target machine. It turned out the extractor was not preserving symlinks. It was presenting every symlinked directory as a separate copy of the directory. I did not know what directories (and perhaps some files too) had originally been symlinked on the Mac, and I could no longer boot it to find out.

I could guess some of the symlinks, partly from prior knowledge, partly by using ‘du’ to spot directory trees of identical huge sizes, and partly from online sources where I found lists of symlinks others have catalogued, especially those created by migrations through successive versions of iPhoto to Photos. I confirmed each guess by using ‘rsync --dry-run’ to verify whether the content of one directory was identical to the other. (‘diff -r’ works too, but is slower because it always reads the full file content, whereas rsync takes a shortcut if the file size and timestamp match.)

I ended up manually adding ‘exclude’ rules to my ‘rsync’ invocation. I excluded (in the home dir):

  • Applications/
  • Library/
    • except for “Library/Mail” and “Library/Mail Downloads”
  • Pictures/iPhoto Library*.migratedphotolibrary/
    • an old pre-migration folder that should have contained symlinks
  • Pictures/Photos Library*.photoslibrary/Originals/
    • which should have been a symlink to ‘Masters’

I also excluded a few other files and folders that held nothing interesting and would clutter or confuse the target. Here is the exclude list I used (not mentioning the ‘Library/Mail’ and ‘Library/Mail Downloads’ exceptions).

Pictures/iPhoto Library Test.migratedphotolibrary
Pictures/Photos Library Test.photoslibrary/Originals
Pictures/Photos Library Test.photoslibrary/resources

(Your photo library would not have the word ‘Test’ in its name, by default. Mine did, caused by some manual repair by an Apple shop technician years ago.)

For additional speed in transferring a large amount of data to the new laptop, I copied a couple of chunks of it over on a USB memory stick, as rsync over the WiFi connection was going at only 5 MB/s (~40 Mbps) even when near the WiFi access point. It would have been a good idea to buy a USB-to-Ethernet adaptor for a task like this, which could have gone much faster.

More details on the TimeMachine storage format and on manually accessing it: see the “Deep Dive” by Glenn ‘devalias’ Grant.

From the Mac’s SSD

Reading data directly from the Mac’s SSD would have saved me the time spent fiddling with the sparsebundlefs + tmfs software, and dealing with the directories that should have been symlinks but weren’t.

Apple used a non-standard SSD connector on some MacBook Air (and Pro) models. We can buy an adapter for the particular Mac model, to connect the SSD to a standard SATA connector, or to a USB-to-SATA adapter.

I ordered an SSD adapter. When it arrived, I got out my collection of security screwdriver bits (various sizes and odd shapes) and found I didn’t have the required tiny 5-pointed star shape (Apple’s “pentalobe”). Dang.

I will order the special screwdriver because, even though I completed the data transfer, I do not want to sell or dispose of the broken Mac with the private data still on it. (It’s not encrypted. Next time it should be. And indeed I have set up the new computer with disk encryption.)

I have also heard that one can get low level access to an internal drive through the Thunderbolt port. I have not investigated whether this is possible in my case.

Recover Photos from iPhoto / Apple Photos

The plain JPEG (etc.) files are found in the ‘Pictures/Photos Library.photoslibrary/Masters’ folder.

TODO: Find out if there were also metadata stored separately, e.g. photo album names and comments.

Recover Email from Apple Mail

For mail accounts using IMAP: the mail should be on the mail server. Don’t bother trying to recover anything from the local data.

For mail accounts using POP: the mail is stored only locally and we will want to recover it.

We find an “Apple Mail to dovecot mailbox converter” at

I installed Erlang (as required) and ran it… and it did not work. Here is the output from a test run on a single message:

$ escript emlx_to_mbox.escript --single ~/tm-home/Library/Mail/V4/863E1A15-*/INBOX.mbox/233CA490-*/Data/0/0/1/Messages/100638.emlx 
emlx_to_mbox.escript:13: Warning: erlang:get_stacktrace/0 is deprecated and will be removed in OTP 24; use use the new try/catch syntax for retrieving the stack backtrace
escript: exception error: no case clause matching 
                     <<"Received: from [] (HELO SERVER)\n  by SERVER (CommuniGate Pro SMTP 6.0.11)\n  with ESMTP id 399065899 for EMAIL@DOMAIN; Tue, "...>>}
  in function  emlx_to_mbox_escript__escript__1634__919879__991744__2:get_header_value/2 (emlx_to_mbox.escript, line 286)
  in call from emlx_to_mbox_escript__escript__1634__919879__991744__2:process_emlx_file/4 (emlx_to_mbox.escript, line 71)
  in call from escript:run/2 (escript.erl, line 758)
  in call from escript:start/1 (escript.erl, line 277)
  in call from init:start_em/1 
  in call from init:do_boot/3

I have not programmed in Erlang before. Maybe now would be a good time to start?
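In the meantime, the .emlx container itself is simple enough to unpack with standard tools: the first line holds the byte count of the raw message, the message follows, and an XML property list of Apple-specific metadata is appended after it. Here is a rough sketch, as a hypothetical helper; it ignores proper mbox ‘From ’-quoting of body lines, so treat the result with care:

```shell
# Print one Apple Mail .emlx message as an mbox entry on stdout.
emlx_to_mbox_entry() {
  # first line of the file = length in bytes of the raw message that follows
  msg_len=$(head -n 1 "$1" | tr -d '[:space:]')
  # length of that first line itself, including its newline
  hdr_len=$(head -n 1 "$1" | wc -c)
  printf 'From MAILER-DAEMON Thu Jan  1 00:00:00 1970\n'
  # skip the length line, emit exactly the message, drop the trailing plist
  tail -c +"$((hdr_len + 1))" "$1" | head -c "$msg_len"
  printf '\n'
}
```

Looping that over ‘Library/Mail/**/Messages/*.emlx’ and concatenating the output would give a crude mbox that Thunderbird can import.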

Recover References from Zotero

Copying the Zotero folder to the Linux laptop Just Worked. Hooray!

Recover Bookmarks from Safari


From Mac to Freedom Laptop 2: Software

Leaving Apple’s Nursing Home

This series is about replacing a MacBook Air with an equally beautiful Freedom Software laptop. It is also about setting up a Freedom Software laptop for the kind of user who wants it to Just Work with the least possible involvement and no interest in how it works.

Part 1 of this series was about the rationale and the hardware.

Part 2: Software

Leaving Apple is, fortunately for us, much easier than it would be for someone who has both feet firmly planted in Apple’s walled garden.

This particular laptop was being used more like an old-fashioned stand-alone computer than a portal to Apple services. Finding and switching to alternative Freedom Software apps will not be too much of a hurdle.

  1. What Operating System?
  2. What Replacement Apps?
  3. What to Configure
  4. Practise the Configuration

What Operating System?

We’re going for something mainstream and stable and familiar. We’re not going for 100% hard-core freedom such as LibreBoot and Trisquel, important though they are. While there are several good options, for me (long time Ubuntu fan) it’s probably going to be one of:

  • Ubuntu, in its default Gnome form — the most common, generic Linux; stable and widely known and supported.
  • Ubuntu, with customisations (e.g. a MacOS-like launcher).
  • Elementary OS — slick, MacOS-inspired style; based on Ubuntu; generally great for beginners, especially from MacOS; but perhaps too quirky and too niche (compared with Ubuntu) to be the best choice for this situation. (Here’s a review.)

Also considered:

  • PureOS — slick; made with great dedication to software and phone freedom; likely too new, quirky and niche (compared with Ubuntu) to be the best choice for this situation.
  • Ubuntu Budgie — MacOS-like style.
  • Ubuntu MATE — supports MacOS-like, Windows-like and Ubuntu-like layouts; but I’m wary it may not offer the best modern simplicity and future, being based on older Gnome2.

For Ubuntu customisations, we might be looking at just installing a MacOS-like dock such as Dash to Dock. My philosophy is not to care much about the non-functional visual elements of a design, such as its colour theme, icon style and type fonts. Where to find things (apps, settings, files) is more important. Aesthetics do matter too, and the MacOS dock, with its icons standing on a reflective surface, unfolding and bouncing, is undoubtedly beautiful. However, placed at the bottom of the screen it has a serious flaw: vertical screen space is tight on a laptop, and for tasks like writing a document or programming, or even just reading a web page, windows should use the maximum height available. Even with options to maximize a window or auto-hide the dock, side docking is a better arrangement.

What Replacement Apps?

Replacement user apps:

  • email (Apple Mail)
    • –> MailSpring (FOSS desktop UI, like web mail UI, for IMAP mail accounts)
    • –> Thunderbird? (for recovering local-only (POP) mail from Apple Mail)
  • web browser (Safari)
    • –> Firefox
  • photos (Apple Photos)
    • –> Shotwell?
  • Zotero reference manager
    • –> Zotero also works in Linux

Configuration, Admin:

  • access to network-shared documents folder
    • –> SMB share: also works in Linux
  • backup (TimeMachine)
    • –> Borgmatic (efficient external backup)
    • –> Cronopete? (friendly UI but limited, internal?)
    • –> TimeShift? (for system configuration)

What to Configure

User Apps

  • MailSpring
    • gmail account
    • other account
  • Thunderbird?
  • Firefox
    • create Firefox account
    • plugin: Bitwarden
    • plugin: Adblock Plus
    • plugin: Zotero
  • Shotwell
  • Zotero


  • machine name, user name, …
  • remote admin
    • ssh
    • xdmcp or rdp or xspice?
  • backup: Borgmatic
  • backup: TimeShift?
  • online accounts (Ubuntu config)
    • Ubuntu One (needed for Livepatch)
    • Google account?
    • our self-hosted Nextcloud (calendar, address book, …)
  • external accounts (manually)
    • our shared documents folder (NFS/SMB share): open & bookmark
  • printer
    • install hplip (for HP printers), hplip-gui (esp. useful for scanner)
    • connect to our local and network printer(s)
  • preferences
    • lock / power
    • appearance, …
  • remove unwanted apps
    • games, favourites, …

Additional Apps Wanted

  • Bitwarden
  • syncthing?
    • share with phone
    • share among users
  • KDEConnect?
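Most of the apps listed in this section can be installed in one go. A sketch for current Ubuntu; the package names are my assumptions and worth checking. MailSpring and Bitwarden are distributed as snaps, and Zotero ships as a tarball from its own site rather than as an Ubuntu package:

```shell
$ sudo apt update
$ sudo apt install thunderbird firefox shotwell hplip hplip-gui \
      borgmatic timeshift syncthing
$ sudo snap install mailspring
$ sudo snap install bitwarden
```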

Practise the Configuration

The goal is to configure a laptop that is ready to use, not one where the user will need to configure everything bit by bit as they start using it.

We can practise setting everything up before doing it for real. The practice run is going to involve lots of manual work and trial and error. We can document it, and try to automate parts of it, to help make the real configuration quick and smooth.

For the practice run, we can use dummy accounts wherever we connect to external systems. For example, create an external email account that is not using the user’s real email address or password. We don’t want the user’s real email to be spammed with all our configuration attempts, certainly not the trial and error.

When it comes to the final real configuration, the user will have to be involved. Their real email account will receive various configuration emails and they will have to create and store some new passwords. Depending on their attitude, the user can choose whether to be involved hands-on, choosing their own passwords and so on, or to have us do it all for them. This is the part that needs to go quickly and smoothly.

What to use for a test environment?

  • additional user account on own system
  • virtual machine (VM)

We could practise some of the configuration inside an additional user account on our own laptop, assuming we use a similar enough operating system and don’t mind installing the required apps on it. That is a good start. Better would be to create a VM; then we would be able to practise every part of the setup, except for any hardware that is not available in the VM, such as perhaps a fingerprint reader or a printer.

We will surely need to fix and adjust the configuration afterwards. Remote admin access will be useful, if the user is willing. There are broadly speaking two levels to choose from, depending on the user’s relationship to the admin:

  • remote desktop (via e.g. VNC):
    • the user needs to be already logged in and accepts our connection at their discretion;
    • the user can watch what the admin is doing, though they might well not understand it;
    • the admin can thereby access the user’s data and account settings in the same way the user would;
    • the admin may access system configuration (via sudo).
  • remote log-in, by command-line (via SSH) and/or graphical (via e.g. XDMCP):
    • the user need not be involved nor see it;
    • the admin has the same control as if they had the machine physically, which means full access to the system configuration, and, depending on how the system was set up (encryption), likely also access to the user’s own data and account settings, to be used with the user’s consent, of course.

Going to build a Matrix

I am leaving my latest job and moving on to my new passion, Matrix.

Recently I have become passionate about the need for modern communications systems that are Open in the sense of freedom to talk to anyone. The currently dominant silos like WhatsApp only let me talk to the friends who are willing and able to subscribe to that company’s terms and restrictions, with no way to get around them when they decide to display advertising to me or stop supporting my mother’s not-terribly-old iPhone. To me, that is like some historical feudal system in which I live as a tenant and I must obtain agreement from the lord of the estate if I want to invite any of my friends to visit me. We need and deserve better than that: an Open way to communicate with our friends and business contacts. Email served that role for the first part of the 21st century, and Matrix now serves that role for the era of instant messaging.

My software development in recent years has been mostly on the open source Subversion version-control system, and I have particularly enjoyed helping to create something so widely used and appreciated. Nowadays its popularity is eclipsed by Git in small to medium sized projects, while Subversion still enjoys a strong following in certain fields such as games development due to its strengths in versioning large data sets and simplicity of usage. Participating in the development of Open Source software has given me the greatest satisfaction in my professional life, and I intend to keep it that way. That is why developing the Matrix communication system is so exciting.

p.s. I am still contracting on Subversion support work, so get in touch if you need any bug fixing or problem diagnosis.

Just Another Proprietary Service

One of my favourite open source institutions is considering replacing their use of an open source tool with a proprietary service “donated” for “free” by its vendor.

It’s time I just said what I think: Encouraging open-source contributors to adopt another proprietary sponsored service is against the principles I want the institution to uphold.

Pootle is an open source tool that assists with human-language translation. Contributors to a project use it to write and contribute translations of open source software into their local language. As with many open source projects, it is under-resourced. Proprietary services look more attractive if we look at measures such as the immediate user experience and the maintenance burden.

Yet, when we ask contributors to use any “donated” proprietary service, we make those users and the FOSS community bear its cost in the domains of lock-in and advertising. I am disappointed to hear that my favourite institution is seriously considering this. (This is not about translation tools specifically; I feel the same about all the user-facing tools and services we use.)

Don’t get me wrong: I am not suggesting this goes against the institution’s policies, and of course there are hard-to-ignore benefits to choosing a proprietary service. I can’t imagine exactly how much pain it is trying to maintain this Pootle instance. On the other hand I do know first-hand the pain of maintaining a lot of other FOSS that I insist on using myself, and I sometimes wonder if I’d like to switch to a commercial this-or-that. At those times I remember how much I value upholding the open source principles, and I choose to stick with what is sometimes less immediately convenient but ultimately more rewarding.

Time after time I observe the FOSS community suffering from getting sucked into the traps of commercial interest like this. A FOSS project chooses to use a commercial service for its own convenience, and by doing so it feeds the commercial service, increasing familiarity with it and talk about it (forms of lock-in and advertising), decreasing the development going into competing FOSS services, and making it more likely that others will follow. I observe FOSS people tending to concentrate on the short-term benefit to their own project in isolation, even when they are peripherally aware that their field would benefit in the long run from working together with others on the tools and services that they all need.

What cultural process could have led the institution to this place?

“Current tools are poor… Let’s try another ‘free’ service to quickly overcome our problem.”

I feel like there’s a cultural step missing there. Where is the step that says,

“We are hundreds of open source developers needing a good translation service. Other open source developers are trying to develop good translation services for people like us. What a great fit! Let’s work together!”?

I would rather join and contribute to a new project group whose purpose is to provide an Open service (in this case for translation) for the institution’s projects to use, doing whatever development, customization, maintenance and IT infra work it needs depending on the state of the available open solutions.

To fill in the missing step, I feel we need to introduce a culture of speaking out at a membership level to say, “Here’s a challenge; who can volunteer to form a group to solve it?” and encouraging members to think of working together on communal service provision projects as a normal part of the institution’s activity.

By working closely with the FOSS people who want to provide a service that we need, our contribution to the upstream software projects would benefit others for the public good, and more generally we would foster mutually beneficial growth and normalization of adoption of FOSS technologies.

I’m not saying it isn’t hard to get the necessary contribution level to make a difference, or that folks haven’t tried before. (Some communal service projects are used in this institution, but they tend to be small scale in-house projects rather than collaborations with other FOSS projects.)

How can we drum up support for doing it the FOSS way?

Testing Svn Release Scripts

How can I test a release process?

I’m improving the scripting of the Apache Subversion release process. Preparing and publishing a release involves things like making a branch in the source repository, updating web pages, updating buildbot configurations, and (not yet automated) sending emails.

While modifying the release scripts, I needed a way to test them without making bogus “live” releases.

The source code, web pages, and buildbot config are all stored in Subversion repositories, so that was a good place to start.

Where I’ve got to so far:

  • scripted the creation of a set of local repositories that contain the minimum viable contents the release script expects to find;
  • passed the script a set of alternative URLs for the various repositories it is going to read and modify;
  • manually inspected what the script changed in those repositories.

My initial script to set up local dummy repos is ugly and slow. But that’s OK; the important thing is it’s enough to proceed with verifying the automation while I improve it.

#!/bin/bash
# Make a local svn repo suitable for testing our release procedures
# ('' etc.)

set -e

# Locations for the dummy repositories and working copy
BASE_DIR=/opt/svn/dummy-asf-repos
DUMMY_DIST_REPO_DIR=$BASE_DIR/dist-repo
DUMMY_DIST_REPO_URL=file://$DUMMY_DIST_REPO_DIR
DUMMY_SVN_REPO_DIR=$BASE_DIR/svn-repo
DUMMY_SVN_REPO_URL=file://$DUMMY_SVN_REPO_DIR
SVN_WC_DIR=$BASE_DIR/svn-wc

# Working copies of the real repositories, used as content sources
# (example paths; set these to suit)
SRC_TRUNK_WC=$HOME/src/svn/trunk
SRC_BRANCHES_WC=$HOME/src/svn/branches
SRC_SITE_WC=$HOME/src/svn-site

# export_with_props SRC_WC DST_WC
function export_with_props() {
  SRC_WC=$1
  DST_WC=$2
  PROPS_TO_ADD=$(mktemp)
  svn export "$SRC_WC" "$DST_WC"
  svn add --force --no-auto-props "$DST_WC"
  # remove automatically added mime-type props as some of them are not an exact replica
  svn pd -R svn:mime-type "$DST_WC"
  # add an exact replica of source props
  (cd "$SRC_WC" && svn diff --properties-only --old=^/@0 --new=.) > "$PROPS_TO_ADD"
  (cd "$DST_WC" && svn patch "$PROPS_TO_ADD")
  rm -f "$PROPS_TO_ADD"
}

set -x


### the 'dist.a.o' repo ###

if ! [ -d "$DUMMY_DIST_REPO_DIR" ]; then

  # create a repo
  svnadmin create "$DUMMY_DIST_REPO_DIR"

  # create skeleton dirs
  svn -m "Init" mkdir --parents $DUMMY_DIST_REPO_URL/{dev,release}/subversion

fi


### the 'svn.a.o' repo ###

if ! [ -d "$DUMMY_SVN_REPO_DIR" ]; then

  # create a repo
  svnadmin create "$DUMMY_SVN_REPO_DIR"

  # create skeleton dirs
  svn -m "Init" mkdir --parents $DUMMY_SVN_REPO_URL/subversion/{trunk,tags,branches,site}

  # check out
  svn checkout "$DUMMY_SVN_REPO_URL" "$SVN_WC_DIR"
  cd "$SVN_WC_DIR"

  # populate trunk
  rmdir subversion/trunk
  export_with_props "$SRC_TRUNK_WC" subversion/trunk
  svn revert -R subversion/trunk/{contrib,notes}/*
  svn ci -m "Add trunk" subversion/trunk

  # populate site
  export_with_props "$SRC_SITE_WC"/tools subversion/site/tools
  export_with_props "$SRC_SITE_WC"/publish subversion/site/publish
  svn revert -R subversion/site/publish/docs/{api,javahl}/*
  svn ci -m "Add site" subversion/site
  svn cp -m "Add site staging branch" ^/subversion/site/publish ^/subversion/site/staging

  # a mod on trunk
  echo hello > subversion/trunk/hello.txt
  svn add subversion/trunk/hello.txt
  svn ci -m "Trunk mod." subversion/trunk/hello.txt

  # make 1.13.x branch
  svn cp -m "Branch 1.13.x" ^/subversion/trunk ^/subversion/branches/1.13.x
  svn up --depth=immediates subversion/branches/1.13.x/
  sed 's/1\.12\.[0-9]*/1.13.0/' < "$SRC_BRANCHES_WC"/1.12.x/STATUS > subversion/branches/1.13.x/STATUS
  svn add subversion/branches/1.13.x/STATUS
  svn ci -m "Add 'STATUS' file."

fi


Make the dummy repos:

$ cd /opt/svn
$ rm -rf dummy-asf-repos/ &&

Tell ‘’ where the dummy repositories are:

$ export SVN_RELEASE_SVN_REPOS=file:///opt/svn/dummy-asf-repos/svn-repo/subversion
$ export SVN_RELEASE_DIST_REPOS=file:///opt/svn/dummy-asf-repos/dist-repo
$ export SVN_RELEASE_BUILDBOT_REPOS=file:///opt/svn/dummy-asf-repos/buildbot-repo

Start testing:

$ mkdir -p test-releasing/ && cd test-releasing/
$ build-env 1.13.0-alpha1
INFO:root:Creating release environment
$ ./ roll 1.13.0-alpha1 7
INFO:root:Rolling release 1.13.0-alpha1 from branch branches/1.13.x@7
$ ./ sign-candidates 1.13.0-alpha1
INFO:root:Signing /opt/svn/test-releasing/deploy/
$ ./ create-tag 1.13.0-alpha1 7
INFO:root:Creating tag for 1.13.0-alpha1
$ ./ post-candidates 1.13.0-alpha1 
INFO:root:Importing tarballs to file:///opt/svn/dummy-asf-repos/dist-repo/dev/subversion
# ... and so on

Inspect what the script changed in the dummy repositories:

$ svn log -v --limit=1 $SVN_RELEASE_SVN_REPOS 

r8 | julianfoad | 2019-10-02 15:40:18 +0100 (Wed, 02 Oct 2019) | 1 line
Changed paths:
   A /subversion/tags/1.13.0-alpha1 (from /subversion/branches/1.13.x:7)
   M /subversion/tags/1.13.0-alpha1/subversion/include/svn_version.h
Tagging release 1.13.0-alpha1

$ svn log -v --limit=1 $SVN_RELEASE_DIST_REPOS 

r2 | julianfoad | 2019-10-02 15:40:56 +0100 (Wed, 02 Oct 2019) | 1 line
Changed paths:
   A /dev/subversion/subversion-1.13.0-alpha1.tar.bz2
   A /dev/subversion/subversion-1.13.0-alpha1.tar.bz2.asc
   A /dev/subversion/subversion-1.13.0-alpha1.tar.bz2.sha512
   A /dev/subversion/subversion-1.13.0-alpha1.tar.gz
   A /dev/subversion/subversion-1.13.0-alpha1.tar.gz.asc
   A /dev/subversion/subversion-1.13.0-alpha1.tar.gz.sha512
   A /dev/subversion/
   A /dev/subversion/
   A /dev/subversion/
   A /dev/subversion/svn_version.h.dist-1.13.0-alpha1
Add Subversion 1.13.0-alpha1 candidate release artifacts

and so on.

Scripting Svn Releases

The Subversion release process had too many manual steps in it.

The manual effort required was becoming a burden now that we’re doing a “regular” release every six months, as well as a source of inconsistency and errors.

As part of an effort to improve stability and availability of Subversion releases, I’m scripting some of the manual steps to bring us closer to automated releases.

The headings below link to the descriptions in our community guide. An automated step or set of steps is indicated as “ <command>”; the rest are manual. Steps in strikethrough were previously manual, now automated.

Creating a new minor release branch

  • create-release-branch
    • Create the new release branch with a server-side copy
    • increment version in svn_version.h,,
    • add a section in CHANGES
    • Create a new STATUS file on the release branch
    • add the new branch to the buildbot config
  • write-release-notes
    • Create a template release-notes document
    • Commit it
  • Ask someone with appropriate access to add the A.B.x branch to the backport merge bot.

Rolling a release (repeat for both RC and final release)

  • Merge CHANGES file from trunk to release branch; set the release date in it.
  • Check works on the release branch.
  • roll
  • Test one or both of the tarballs
  • sign-candidates
  • create-tag
  • post-candidates
  • send an email to the dev@ list
  • adjust the topic on -dev to mention “X.Y.Z is up for testing/signing”
  • Update the issue tracker to have appropriate versions/milestones

The actual releasing (repeat for both RC and final release)

  • Uploading the release
    • move-to-dist
    • wait 24 hours
    • clean-dist
    • Submit the new release version number on
  • Announcing the release
    • write-announcement
      • send the announcement email
    • Update the topics in IRC channels , -dev
  • Update the website, any release:
    • write-downloads
      • edit the result in download.html
    • write-news … news.html
    • write-news … index.html
      • check date, text, URL; remove old news from index.html
  • Update the website, stable release X.Y.Z (not alpha/beta/rc):
    • List the new release in doap.rdf
    • List the new release in release-history.html
  • Update the website, new minor release X.Y.0:
    • Update the support levels in release-notes/index.html
    • Update supported_release_lines in
    • Remove “draft” warning from release-notes/X.Y.html
    • Create/update API docs: docs/api/X.Y, docs/javahl/X.Y
      • an example script is given in the doc
      • Update the links to the API docs in docs/index.html
    • Publish (commit or merge) these modifications

So quite a bit still to do…

Apache Subversion 1.12.2, 1.10.6, 1.9.12 released

Today I pushed the final “go” button, announcing Apache Subversion 1.12.2, 1.10.6, and 1.9.12. Thank you, everyone who helped with reporting, developing, reviewing and testing the changes that went into these releases.

Release-managing was a bit of a struggle this time around. While Subversion has moved into the “mature” phase of its life with possibly the most users ever, the number of active contributors has been dwindling, which is understandable. Recently it reached the point where the development community could no longer meet its self-imposed requirement for three members to approve each back-ported change. As the release was stalled even though we had fixes waiting that were otherwise ready to release, we — those of us who are still around to discuss such things — agreed to lower those requirements as a pragmatic move. Then I was able to merge in all the outstanding fixes in the queue. As a result, these releases have a few more changes in them than they otherwise would have, albeit small changes, as they are only patch releases.

Looking to future releases, it seems to me one of the most important issues we need to address at this stage of the project’s life is to streamline the release process, in two different ways — from the inside, making it easier for us to produce a release — and more importantly from the outside perspective, making it easier for end users and packagers to receive an upgrade. We are currently discussing this and other aspects of Subversion’s “community health”.

And when I say “we” — the Subversion project — that’s you too, if you want to be included. It might surprise some users to hear that there is no team dedicated to producing Subversion for you. There is just me, using a bit of my employer’s time, and a very few others using a bit of their personal time, and … potentially you or a bit of your employer’s time? If you care about Subversion continuing to be available, and you can offer any kind of help — perhaps with building and releasing it in one of the many forms it’s distributed these days — you could make all the difference.

Could you be a part of true, free, libre, open source software development? Please come and say hello in the channel (on IRC or on Matrix) or on the mailing lists.

Subversion 1.12 Released

I’m happy to announce the release of Apache Subversion 1.12.0.
Please choose the mirror closest to you by visiting:

This is a stable feature release of the Apache Subversion open source
version control system.

SHA-512 checksums are available at:

PGP Signatures are available at:

For this release, the following people have provided PGP signatures:

Julian Foad [4096R/1FB064B84EECC493] with fingerprint:
6011 63CF 9D49 9FD7 18CF  582D 1FB0 64B8 4EEC C493
Stefan Sperling [2048R/4F7DBAA99A59B973] with fingerprint:
8BC4 DAE0 C5A4 D65F 4044  0107 4F7D BAA9 9A59 B973
Johan Corveleyn [4096R/B59CE6D6010C8AAD] with fingerprint:
8AA2 C10E EAAD 44F9 6972  7AEA B59C E6D6 010C 8AAD
Stefan Fuhrmann [4096R/99EC741B57921ACC] with fingerprint:
056F 8016 D9B8 7B1B DE41  7467 99EC 741B 5792 1ACC

Release notes for the 1.12.x release series may be found at:

You can find the list of changes between 1.12.0 and earlier versions at:

Questions, comments, and bug reports to .

— from the announcement email

Svn: What’s in my Head

This week I took a break from coding to write about some non-coding ways I’ve been thinking about to modernize the Subversion project’s communications and reach, and encourage its community.

Here’s a snapshot of “What’s In My Head” copied from Subversion’s Wiki.

In a later post I write about some potential svn code developments in my head.


Ways to modernize the Subversion project’s communications and reach, and encourage its community.

Communication technology


  • keep using Open technologies
  • keep plain email the baseline standard of communication
  • integrate ‘forum’ and ‘mailing list’ forms of access
  • integrate long-form and chat-form if possible
  • integrate archives / logs, permalinks, searching

Email: Forums and Mailing Lists

Problem: Simple old mailing lists are inconvenient for new and occasional users. They have weak or no integration with their own archives, our issue tracker, etc.

Some proprietary forums, e.g. Google Groups, already attempt to mirror and integrate with our lists, with some degree of success, but we mainly ignore them.

We should seek to integrate better with forums. For a start: let users know that it’s an option for them (on our ‘mailing-lists’ web page); and see if we can take some further steps toward integration (such as unified permalinks).

For a longer term solution, I wonder if any suitable open source software exists, or if an ASF group might consider developing/adapting something.

Chat (IRC, Matrix)

Problem: The IRC chat systems we use are inconvenient for new and occasional users. They have weak integration with archives, issue tracker, etc.

Use Matrix as an upgrade path from IRC. That is my strong recommendation.

Matrix is Open, bridges to existing IRC channels, has good UI on mobile and desktop, is simple enough for newbies, has permalinks and can act as an archive, can be self-hosted.

Matrix is ready for immediate ad-hoc use by individual participants in our IRC channels, through the Freenode bridge operated by, and is being used in this way by at least two svn-dev members.

In future the ASF should run its own Matrix infrastructure: server, bridges, etc. The ASF would then control its own data, its own user accounts, and its own integrations with archives, commits, issue trackers, etc. Usability advantages: use ASF single-sign-on; install specialist bridges/integrations.

Examples of migration to Matrix in other IRC-based communities: Wikimedia, Drupal


Integrate archives / logs, permalinks, searching

Problem: Past communications are scattered across systems and storage locations, with no consistent archives or permalinks, so cross-referencing is difficult and non-permanent. Our issue tracker and wiki provide only links that are tied to their current provider technology.

The types of information include:

An important step is to develop a URL “permalink” scheme to refer to our various resources. These would be technology-agnostic URLs, all under, like “/issue/1234”.

A baby step is the ‘.message-ids.tsv’ file in our web site directory, holding a mapping from the haxx archive URLs used in our web pages to email message ids, with a script (in the commit log message) to generate it. There is, as yet, no automation that uses the mapping in any way.
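To illustrate how such a mapping could eventually be used, here is a hypothetical sketch. The two-column layout (archive URL, then message id) and the `/message-id/` permalink prefix are my assumptions, not the project’s actual format or scheme:

```python
import csv
import io

# Hypothetical sketch: load a two-column TSV mapping archive URLs to
# message ids, then rewrite any known archive URL into a permalink form.
# The column layout and the permalink prefix are assumptions.

def load_mapping(tsv_text):
    """Parse TSV rows of (archive_url, message_id) into a dict."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    return {row[0]: row[1] for row in reader if len(row) >= 2}

def to_permalink(url, mapping, prefix="/message-id/"):
    """Return a message-id permalink if the URL is known, else the URL unchanged."""
    msg_id = mapping.get(url)
    return prefix + msg_id if msg_id else url

sample = "https://svn.haxx.se/dev/archive-2019-04/0123.shtml\t<abc@example.com>"
mapping = load_mapping(sample)
print(to_permalink("https://svn.haxx.se/dev/archive-2019-04/0123.shtml", mapping))
```

Automation along these lines could run over the web site sources, leaving technology-specific archive URLs out of the published pages entirely.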

Initial tasks:

  • start documenting a URL-space map for our resources
  • populate one entry, e.g. “/issue/<number> → issue <number>”
  • implement some simple automated handling (e.g. redirects) for that
    • well, well… we already have this in our .htaccess which covers that exact case along with some aliases:
    • "RedirectMatch ^/issue[^A-Za-z0-9]?(\d+)$$1"
  • start using it: update existing direct links to point here instead; publicize it
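The matching behaviour of that redirect rule can be checked in isolation. A sketch, with the pattern taken from the .htaccess rule above and a placeholder target URL (the real target is not shown here):

```python
import re

# The pattern is the one from the project's .htaccess RedirectMatch rule;
# the target template is a placeholder, not the real redirect target.
ISSUE_RE = re.compile(r"^/issue[^A-Za-z0-9]?(\d+)$")
TARGET = "https://tracker.example.org/SVN-{num}"  # hypothetical

def resolve_permalink(path):
    """Map an '/issue/<number>' permalink to its target URL, or None."""
    m = ISSUE_RE.match(path)
    return TARGET.format(num=m.group(1)) if m else None

print(resolve_permalink("/issue/1234"))   # matches, with a separator
print(resolve_permalink("/issue1234"))    # the separator is optional
print(resolve_permalink("/issues/1234"))  # no match -> None
```

The optional `[^A-Za-z0-9]?` is what allows aliases such as “/issue-1234” or “/issue:1234” to resolve to the same issue.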

Deeper integration: A permalink URL should not merely redirect the user to its technology-specific target URL, but present the target in such a way that other inbound and outbound URLs also use the permalink form. With a big third-party system like Jira or Confluence the feasibility of that is going to depend entirely on whether the system has built-in support for that usage.

Software Distribution

Problem: Subversion packages are outdated or unavailable for many platforms, especially server/cloud environments (e.g. Docker) and mobile (e.g. Android).


What could we do?

  • reach out to individual maintainers and ask if they need any help or just a ping?
  • encourage companies to take on packaging (Assembla?)
  • make dedicated efforts to establish builds that may be important

Traditional OS / Desktop

(Windows, Linux, BSD, Solaris) lists the current package versions for many traditional Linux/BSD distributions.

Some companies maintain up to date package builds for several platforms, notably WANdisco & CollabNet.

Cloud / Server / Containers

(Docker, SNAP, VM, etc.)

On Docker Hub the most comprehensive svn server seems to be elleflorio/svn-server (http + svnserve). Next is garethflowers/svn-server (very simple; svnserve only). Neither seems to be an enterprise-grade installation.

There are no ‘subversion’ or ‘svn’ packages in the SNAP store.

Mobile (client)

(Android, iOS)

We should be able to have functional svn client apps on Android and iOS. The libraries and bindings might be the best focus. The command-line client won’t be wanted as often, but there is no reason it should not be available.

For Android, the only open-source client is OAsvn, based on svnkit 1.7.5. It works, but is unmaintained and primitive.

Small / Personal Servers

(Home NAS; Raspberry Pi, etc.; also personal server software platforms like Sandstorm, UBOS, Yunohost, etc.)

Integrations with IDEs etc.

(Visual Studio, NetBeans, IntelliJ, Xcode, etc.)


The Svn Book

The authors of the Svn Book no longer maintain it and have recently been offering to hand its ownership over to the Subversion project.

What should we do with it?

Man Pages and Help Text

The built-in help is halfway between a summary and a detailed reference. The Book contains an index which is a variation on this. We never got around to generating man pages or other formatted help.


  • Generate svn help and man pages from a common source: there is an old patch as a starting point
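As a rough illustration of the idea (this is not the old patch; the helper, its roff layout, and the abbreviated help text are all my assumptions), a common-source generator would need something like a help-text-to-roff step:

```python
# Hypothetical sketch: build a minimal man-page skeleton from a command's
# captured help text, assuming the first line is a one-line summary and
# the rest is descriptive body text. The sample help text is abbreviated,
# not svn's actual output.

def help_to_man(command, help_text, section="1"):
    """Build a minimal roff man page from a command's help text."""
    lines = help_text.strip().splitlines()
    summary = lines[0] if lines else ""
    body = "\n".join(lines[1:]).strip()
    return (
        f'.TH "{command.upper()}" "{section}"\n'
        f".SH NAME\n{command} \\- {summary}\n"
        f".SH DESCRIPTION\n{body}\n"
    )

sample_help = (
    "checkout (co): Check out a working copy from a repository.\n"
    "usage: checkout URL[@REV]... [PATH]"
)
print(help_to_man("svn-checkout", sample_help))
```

In a real common-source scheme the source would be structured data rather than captured help output, so that both `svn help` and the man pages could be rendered from it without lossy parsing.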

I Wish You’d Fix That Bug

Another comment saying “I can’t believe you haven’t fixed this bug yet!” A recent example in issue SVN-2507 prompted me to write this response to everyone who feels the same.

I understand your frustration. I’d love to fix this bug. I admit it’s our fault this problem exists, and I am embarrassed that we have not fixed it after so many years.

But who are we? Subversion is an open source project, developed by whoever wants or needs to develop it. “We” are myself, a part-time developer with my own priorities, and a small handful of other developers who contribute to Subversion development a few hours a week at most, with their own priorities. And … You.

How much do you and your company value a fix for this problem?

  • Can you allocate somebody to work on it for a few hours a week? The current developers, including myself, would be glad to help that person through all the stages.
  • Can your company offer X pounds or dollars for a freelancer to fix it, on one of the freelance-work websites? Ask us for help with estimating the cost.
  • Do you pay for any Subversion services? If so, ask your supplier. If you are a customer of Assembla, for example, then you can raise the priority on my personal priority list.
  • Can you personally make progress towards a fix? Think through the problem and discuss a proposal? Create some sort of mock-up? Interaction like that often gets other volunteers interested.

As things stand, to my regret, “we” still can’t fix this bug…

Until “we” includes “you”.

Thank you for using and supporting open source software. (I understand you personally may not be able to do any of these things. That’s alright. Even then, you are still supporting Subversion in other ways.)