Solar PV & Heat Pump

This page contains our notes about our proposed solar PV and air source heat pump installation.

  1. General
  2. Solar Photovoltaic (PV) System
  3. Air Source Heat Pump (ASHP) System
  4. Control and Monitoring

General

Our Household Energy Usage History

  • Year            Elec kWh   Gas kWh
    2018            4,500      16,000
    2019            4,500      16,000
    2020            5,500      16,000
    2021            5,500      17,000
    2022            4,500      13,000
    expected        ~5,000     ~16,000?
  • Years based on October to October billing. 2018/2019 are a 2-year average (2019 bill unavailable).

  • Why is our 2022 gas usage so low? Were we that frugal? Was the weather that mild?

Solar PV System

Our own roof measurements

  • 4 potentially useful roof surfaces
  • 2 larger roofs roughly south facing:
    • roof pitch: 45°
    • roof azimuth: 20° east of south (compass bearing 160°)
    • roof height: 27~30 tile rows
  • 2 smaller roofs roughly east facing:
    • roof pitch: 45°
    • roof azimuth: 20° north of east (compass bearing 070°)
    • roof height: 27~30 tile rows

Our own system planning exploration

  • Easy PV Project Report – Julian’s version 1
  • This plan is only illustrative, for discussion; not verified, not complete, not final.
  • We have no preference for the make and model of the panels, other than obviously preferring higher output, all else being equal.
  • This version of our plan shows a 6 kW inverter where 5 kW is probably enough; likewise other details may be inappropriate.
  • No shading is defined for any panels in this version. In reality some shading is likely.
  • This version of our plan shows panels that are slightly more elongated than usual, 1855 × 1029 mm (Jinko Tiger 410W N-Type Black Framed Mono). This type is shown just to illustrate one of the extremes of shape that might fit. The same pattern could work using any of the more common panel sizes around 1700 × 1100 mm, if sufficient top-to-bottom measurement is available on the roofs.

It’s currently looking to me like sixteen panels may be too much of a squeeze. Also, on the two more north-westerly roofs, the lower they reach the more shading they have.

Inverter

Aim: local home automation connection; avoid depending on manufacturer’s “cloud” service.

Tech specs relevant to solar inverters:

  • Communication protocol: (standard) SunSpec Modbus (per IEEE 1547) (explanation); (optional) a manufacturer’s own open protocol such as the Fronius Solar API (JSON).
  • Communication physical connection: any of Ethernet, WiFi, Zigbee, perhaps other.

Research:

  • Fronius seem to be the only inverter manufacturer that publicises a serious commitment to a local API with open protocols: Fronius Solar API info.
  • Home Assistant (preferred open home automation system) has good integration for Fronius inverters.
  • SolarEdge looks like the second-best make in this regard: local connection is available through hard-wired Ethernet (previously also WiFi, but that is no longer enabled in recent versions).
  • Home Assistant has some integration for local connection to SolarEdge and a few others; but neither SolarEdge nor others seem to promote or prioritise this from their end.

Choice:

  • Fronius Primo GEN24 Plus (page for installers | page for home owners)
  • probably 5.0 kW (available models: 3 to 6 kW)
  • with “full backup” add-on, because “why not?”: small additional cost for grid outage backup.

Some suppliers of Fronius Primo GEN24 Plus (just from a general DuckDuckGo search):

Assigning strings of panels to MPP Tracker inputs:

  • There are 4 potentially useful roof areas, two roughly south facing (-20°, or SSE), two roughly east facing (-110°, or ENE).
  • The two more north-westerly roofs will have some shading from the more south-easterly roofs, at times of low sun elevation. If this is significant, do we need to consider dual inverters to give each group of panels its own MPPT input? That is “option B”.
  • Option A: 1 x Fronius Primo GEN24 Plus 5.0 kW, with 2 MPPT inputs: all south-facing panels (10) to one, all east-facing panels (6) to the other.
  • Option B: 2 x Fronius Primo GEN24 Plus 3.0 kW, total 4 MPPT inputs: each group of panels to a separate MPPT input (5 south, 5 south, 3 east, 3 east).
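As a sanity check on either option, the nominal array sizes fall out of simple arithmetic. The figures below just restate the plan above (410 W panels, 10 south-facing + 6 east-facing) and are illustrative only:

```shell
# Nominal string and array capacity from the plan above: 410 W panels,
# 10 south-facing + 6 east-facing. Illustrative figures only.
awk 'BEGIN {
  w = 410                        # panel rating, watts
  south = 10 * w; east = 6 * w   # per-orientation capacity
  total = south + east
  printf "south: %.2f kWp, east: %.2f kWp, total: %.2f kWp\n",
         south/1000, east/1000, total/1000
  printf "DC/AC ratio on a 5.0 kW inverter: %.2f\n", total/5000
}'
```

So the full 16-panel array is about 6.6 kWp, and on a single 5.0 kW inverter the DC/AC ratio is about 1.3, which is within the commonly quoted oversizing range.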

Panels

Aim: maximise capacity, for the sake of future use and economy of scale.

Requirements:

  • No particular requirements on size, make, style, colour.

Research:

  • See my plan (shown above) made on Easy-PV
  • Consider whether 16 panels can fit.
  • Consider panels of more elongated shape (~1850 x 1050 mm) instead of the more common shape (~1700 x 1100 mm) if that helps.
  • Considered triangular panels: they seem to be expensive and hard to obtain (few suppliers).
  • Lichen growth, especially on non-south-facing panels: are some panel types more resistant than others? This is a concern because our roofs are quite high and difficult for cleaners to reach.

Battery

  • Battery sizing: still TBC whether we install PV + ASHP together, or PV first and ASHP later. Probably size for PV + ASHP. (Is that the case in which a smaller battery is appropriate?)
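For sizing with the ASHP included, a back-of-envelope estimate of the extra electricity is easy to sketch. The 16,000 kWh/yr gas figure comes from the usage table above; the boiler efficiency (85%) and heat pump SCOP (3.5) are assumptions of mine, not measured values:

```shell
# Rough extra electricity demand if the ASHP replaces the gas boiler.
# Assumed: 16,000 kWh/yr gas, 85% boiler efficiency, SCOP 3.5.
awk 'BEGIN {
  heat = 16000 * 0.85            # useful heat currently delivered, kWh/yr
  hp   = heat / 3.5              # heat pump electricity, kWh/yr
  printf "heat demand ~%.0f kWh/yr; ASHP electricity ~%.0f kWh/yr (~%.0f kWh/day average)\n",
         heat, hp, hp/365
}'
```

On these assumptions the ASHP roughly doubles our annual electricity use, and winter days would be well above the ~11 kWh/day average, which is the relevant figure for battery sizing.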

Installation requests

  • Battery and inverter in under-stair cupboard next to existing electric supply consumer unit.

Air Source Heat Pump (ASHP) System

Location

  • Considering the east-facing chimney breast above the garage, at the north-east corner of the house. Discussed with the surveyor. Ideal siting for connection to the existing heating pipes in the garage below, and access is good. The room is a potential bedroom, so vibration is a concern, but the walls are old and solid.

Control system

Aim: upgrade to better control than simple whole-house thermostat and time switch.

  • controlled from smart TRVs? (related to “zoned” control?)

Research/selection:

  • general introduction: https://www.theheatinghub.co.uk/best-smart-heating-controls-compatibility-guide
  • standards: OpenTherm?
  • compatibility: ???

Requests:

  • Replace existing simple time switch with new controller (TBD).

Concerns:

Other Installation Requests

  • Remove gas boiler flue completely and make good the roof. (Check: do we have spare tiles?) Aim: weatherproof.
  • Remove excess pipe runs from old boiler location in garage: route directly to new ASHP location. Aim: tidiness.
  • Site the hot tank for shortest possible outlet pipe run to domestic hot water supply. That would be south-west corner of garage. Aim: minimise time delay and waste in using domestic hot water.

Control and Monitoring

Open source home automation system:

Solar PV monitoring:

Heat pump monitoring:

Jellyfin Feature Requests: 26: Add support for two factor authentication: comment

Yes we need more powerful auth options but no we don’t want each service implementing its own sign-on and security code, we don’t want more user-visible complexity that’s specific to Jellyfin and works differently from our other self-hosted services. So…

Instead, help the whole self-hosting ecosystem by supporting standard external auth / SSO such as OIDC:

–> https://features.jellyfin.org/posts/230/support-for-oidc (support SSO / OIDC)

which can then provide all those great authentication options like 2FA.

Snake Oil

I have a cough and a cold. I followed a recipe for home-made cough mixture. Cider vinegar, honey, ginger, cinnamon, Cayenne pepper. Potent, and rather delicious to someone who enjoys the tastes of vinegar and spices and sweetness. It evokes childhood memories of the silky black-brown Galloway’s cough syrup.

These remedies smell nice and feel like they are doing good. I have no evidence that they are more effective than cheaper and simpler alternatives. Honey and lemon and ginger is good, or just gargling with salt water.

My home-made cough mixture is a light muddy brown and I can see the spice particles floating around in the small ramekin dish. Perhaps if I add black treacle and put it in a dark glass medicine bottle, like Galloway’s, it would have a stronger placebo effect and appear more valuable.

Clark Stanley's Snake Oil Liniment

Sellers of Snake Oil long ago touted their concoctions as effective remedies for anything, presumably asking a fortune for each little bottle, until the term became a by-word for a scam. We do not look kindly upon scammers.

Recently I noticed a file named “snakeoil” on my computer. My suspicions were raised at first sight, but it is not a virus. It is a digital certificate created by the operating system. Let’s take a quick look at digital certificates and why this one is so named.

When our computer or smartphone retrieves a web page securely (the ‘s’ in ‘https’), it checks that the response came from the genuine server rather than some imposter trying to trick us. The way it knows is by asking the server to verify a digital certificate: a statement saying this server really belongs to this web address, signed by a digital signature. It requires a different specific response each time it asks, so that a rogue server cannot get away with playing back some response it learnt from the real server.

Our software checks whether the digital signature was made by someone that our software providers trust. If there is no signature, or it is invalid (for example past its expiry date, or naming the wrong web address), or it cannot be traced to someone our software provider trusts, then the browser raises a big warning that it may be unsafe to proceed. You might have seen the warning occasionally. Usually it is not an imposter trying to trick us but merely a missing or out-of-date certificate, when the operator of a site has forgotten to update something properly.

These certificates for public servers are signed by well known authorities who have a track record of being trustworthy.

What happens if I generate a certificate for myself, and sign it with a digital signature I just made up? This is legitimate and easy to do, and is called a self-signed certificate. If I set up my own server with this certificate, other people’s browsers won’t trust it. It can still be useful to me, though, if I intend to use it just from my own devices. I trust it, and when my browser shows a big warning I can tell my browser to “accept the risk and continue” or I can configure it to trust that certificate. Some companies set up servers this way for internal use, and developers regularly set up test servers this way.
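Making such a certificate takes one openssl command; the file names and the “localhost” name here are just examples:

```shell
# Generate a self-signed certificate (valid 1 year) and its private key.
# The file names and "localhost" are illustrative.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout snakeoil.key -out snakeoil.crt \
    -days 365 -subj "/CN=localhost"

# Inspect it: issuer identical to subject is the tell-tale of self-signing.
openssl x509 -in snakeoil.crt -noout -subject -issuer
```

The inspection command shows that subject and issuer are identical, which is exactly what “signed by me, about me” means.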

If I try to convince members of the public to trust my self-signed certificate, however, I am like a snake oil salesman. “Look, use this server: its security certificate proves it belongs to google.com.” “Who are you kidding? That certificate is only signed by you. That proves nothing to me.”

The aptly named “snakeoil” file I came across is that self-signed certificate, provided for me so that I can use it for legitimate purposes if I want to. I just need to beware what it does and doesn’t prove.

I still have a cough. My little tub of Vicks VapoRub, another flashback to childhood, and the only one I have purchased in adulthood, no longer smells of anything to me today, having passed its use-by date just over ten years ago. I am pleased to find I can still buy a new tub of Vicks VapoRub for £4 at the supermarket. I wouldn’t mind some Galloway’s too, but apparently it has been discontinued.

BecoCleanse
a £10 bottle of sea water

Looking down Tesco’s long list of cold and cough remedies, one other product caught my eye, at £10 being the most expensive one sold. Covered in labels like “cold relief”, “congestion relief”, “nasal cleanse”, “super plus”, blah blah, it contains “all natural ingredients” and says a lot of pseudo-medical-sounding waffle about how gently and effectively it provides the exact kind of cleansing your nose needs. Then the punch line: this £10 bottle contains 135 ml of “100% pure sea water”. Worth every penny, I’m sure.

 

From Mac to Freedom Laptop 3: Data Recovery

Leaving Apple’s Nursing Home

This series is about replacing a MacBook Air with an equally beautiful Freedom Software laptop. It is also about setting up a Freedom Software laptop for the kind of user who wants it to Just Work with the least possible involvement and no interest in how it works.

Part 1 of this series was about the rationale and the hardware.

Part 2 of this series was about choosing and configuring the software.

Part 3: Data Recovery

First I would recover all the generic user files. Most of the documents are in portable formats such as text, PDF or Open Document Format (used with NeoOffice on the Mac; LibreOffice or OpenOffice on the new computer).

After that, I wanted to recover some important data from specific Mac apps:

  • Photos from iPhoto / Apple Photos
  • email from Apple Mail
  • references from Zotero
  • bookmarks from Safari

(Ordered from most to least important.)

Restore Data from TimeMachine or from SSD?

  • from a TimeMachine backup: put together a software solution, copy data off it
  • from the Mac’s SSD: extract the SSD from the Mac, buy a special adapter, copy data off it

From a TimeMachine backup

There was a recent TimeMachine backup, stored on our NAS. As I don’t have a working Mac to run TimeMachine on, I searched for Linux software able to read it. My searches led to using sparsebundlefs plus tmfs to mount the backup as a directory tree. It took quite a few attempts at fighting the permissions system, especially the way FUSE filesystems handle permissions, before I could see a list of top-level folders, one per snapshot, with one named “Latest” pointing at the latest snapshot. Inside was apparently a complete snapshot of the Mac’s disk filesystem.
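From memory, the mount pipeline had roughly this shape. Every path here is illustrative, and the exact options should be checked against the sparsebundlefs and tmfs READMEs; this is a sketch of the layering, not a tested recipe:

```shell
# 1. Expose the sparsebundle's band files as one flat disk image (FUSE).
sparsebundlefs /nas/backups/mac.sparsebundle /mnt/bundle

# 2. Loop-mount the HFS+ filesystem inside that image, read-only.
sudo mount -t hfsplus -o ro,loop /mnt/bundle/sparsebundle.dmg /mnt/hfs

# 3. tmfs resolves TimeMachine's hard-link scheme into ordinary directories.
tmfs /mnt/hfs /mnt/tm -ouid=$(id -u) -ogid=$(id -g)

ls /mnt/tm   # one folder per snapshot, plus "Latest"
```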

The TimeMachine backup was not encrypted. While the connection from the TimeMachine app to its storage folder on the NAS had required a password, the data inside was not encrypted. (By comparison, some backup systems such as Borg encrypt data on the client side before sending it to the server, so that only the data’s owner can decrypt and read it.)

The sparsebundlefs + tmfs software appeared to have given access to all the files. When I started copying them, however, two issues arose. First, the extraction speed was initially terrible, 0.2 MB/s, with the extractor running on one machine, sparsebundlefs remotely accessing the NAS TimeMachine storage through SSHFS, and rsync copying its output over to the new machine. I suspected the main problem was random access to the NAS’s moderately slow spinning disk, and the secondary problem may have been the SSHFS access to it.

Rather than measure and diagnose the exact cause, I copied the TM backup folder over to a faster disk (still a spinning disk) on the extractor machine. This copying went much faster, presumably because it was mostly sequential reading. Using that copy directly on the extractor machine, so cutting out SSHFS too, the extraction process was then much faster.

Then the second problem struck. The extracted data was much larger than expected, too large for the disks on the extractor machine and the target machine. It turned out the extractor was not preserving symlinks. It was presenting every symlinked directory as a separate copy of the directory. I did not know what directories (and perhaps some files too) had originally been symlinked on the Mac, and I could no longer boot it to find out.

I could guess some of the symlinks, partly from prior knowledge, partly from using ‘du’ to spot directory trees of identical huge sizes, and partly from online sources where I found lists of symlinks others have catalogued, especially those created by migrations through successive versions of iPhoto to Photos. I confirmed the guesses by using an ‘rsync --dry-run’ to verify whether the content of one directory was identical to the other for each of my guesses. (‘diff -r’ works too but is slower, because it always reads the full file content, whereas rsync takes a shortcut if the file size and timestamp match.)

I ended up manually adding ‘exclude’ rules to my ‘rsync’ invocation. I excluded (in the home dir):

  • Applications/
  • Library/
    • except for “Library/Mail” and “Library/Mail Downloads”
  • Pictures/iPhoto Library*.migratedphotolibrary/
    • an old pre-migration folder that should have contained symlinks
  • Pictures/Photos Library*.photoslibrary/Originals/
    • which should have been a symlink to ‘Masters’

I also excluded a few other files and folders that held nothing interesting and would clutter or confuse the target. Here is the exclude list I used (not mentioning the ‘Library/Mail’ and ‘Library/Mail Downloads’ exceptions).

.android
.bash_sessions
.CFUserTextEncoding
.cups
.DS_Store
.lesshst
.mozilla
.ssh
Applications
Library
Pictures/iPhoto Library Test.migratedphotolibrary
Pictures/Photos Library Test.photoslibrary/Originals
Pictures/Photos Library Test.photoslibrary/resources
Public
Sites

(Your photo library would not have the word ‘Test’ in its name, by default. Mine did, caused by some manual repair by an Apple shop technician years ago.)

For additional speed in transferring a large amount of data to the new laptop, I copied a couple of chunks of it over on a USB memory stick, as rsync over the WiFi connection was going at only 5 MB/s (~50 Mbps) even near the WiFi access point. It would have been a good idea to buy a USB-to-Ethernet adaptor for a task like this, which could have been much faster.

More details on TimeMachine storage format and manually accessing it: Deep Dive or here, by Glenn ‘devalias’ Grant.

From the Mac’s SSD

Reading data directly from the Mac’s SSD would have saved me time in fiddling with the sparsebundlefs + tmfs software, and in dealing with the directories that should have been symlinks but weren’t.

Apple used a non-standard SSD connector on some MacBook Air (and Pro) models. We can buy an adapter for the particular Mac model, to connect the SSD to a standard SATA connector, or to a USB-to-SATA adapter.

I ordered an SSD adapter. When it arrived, I got out my collection of security screwdriver bits (various sizes and odd shapes) and found I didn’t have the required tiny 5-pointed star shape. Dang.

I will order the special screwdriver because, even though I completed the data transfer, I do not want to sell or dispose of the broken Mac with the private data still on it. (It’s not encrypted. Next time it should be. And indeed I have set up the new computer with disk encryption.)

I have also heard that one can get low-level access to an internal drive through the Thunderbolt port. I have not investigated whether this is possible in my case.

Recover Photos from iPhoto / Apple Photos

The plain JPEG (etc.) files are found in the ‘Pictures/Photos Library.photoslibrary/Masters’ folder.

TODO: Find out if there were also metadata stored separately, e.g. photo album names and comments.

Recover Email from Apple Mail

For mail accounts using IMAP: the mail should be on the mail server. Don’t bother trying to recover anything from the local data.

For mail accounts using POP: the mail is stored only locally and we will want to recover it.

We find an “Apple Mail to dovecot mailbox converter” at https://github.com/pguyot/emlx_to_mbox.

I installed Erlang (as required) and ran it… and it did not work. Here is the output from a test run on a single message:

$ escript emlx_to_mbox.escript --single ~/tm-home/Library/Mail/V4/863E1A15-*/INBOX.mbox/233CA490-*/Data/0/0/1/Messages/100638.emlx 
emlx_to_mbox.escript:13: Warning: erlang:get_stacktrace/0 is deprecated and will be removed in OTP 24; use use the new try/catch syntax for retrieving the stack backtrace
escript: exception error: no case clause matching 
                 {ok,{http_header,0,<<"Return-Path">>,<<"Return-Path">>,
                                  <<"<LISTNAME-bounces+EMAIL=DOMAIN@mailman.DOMAIN>">>},
                     <<"Received: from [10.92.1.161] (HELO SERVER)\n  by SERVER (CommuniGate Pro SMTP 6.0.11)\n  with ESMTP id 399065899 for EMAIL@DOMAIN; Tue, "...>>}
  in function  emlx_to_mbox_escript__escript__1634__919879__991744__2:get_header_value/2 (emlx_to_mbox.escript, line 286)
  in call from emlx_to_mbox_escript__escript__1634__919879__991744__2:process_emlx_file/4 (emlx_to_mbox.escript, line 71)
  in call from escript:run/2 (escript.erl, line 758)
  in call from escript:start/1 (escript.erl, line 277)
  in call from init:start_em/1 
  in call from init:do_boot/3

I have not programmed in Erlang before. Maybe now would be a good time to start?

Recover References from Zotero

Copying the Zotero folder to the Linux laptop Just Worked. Hooray!

Recover Bookmarks from Safari

TODO.

From Mac to Freedom Laptop 2: Software

Leaving Apple’s Nursing Home

This series is about replacing a MacBook Air with an equally beautiful Freedom Software laptop. It is also about setting up a Freedom Software laptop for the kind of user who wants it to Just Work with the least possible involvement and no interest in how it works.

Part 1 of this series was about the rationale and the hardware.

Part 2: Software

Our leaving Apple is, fortunately for us, much easier than for someone who has both feet firmly planted in Apple’s walled garden.

This particular laptop was being used more like an old-fashioned stand-alone computer than a portal to Apple services. Finding and switching to alternative Freedom Software apps will not be too much of a hurdle.

  1. What Operating System?
  2. What Replacement Apps?
  3. What to Configure
  4. Practise the Configuration

What Operating System?

We’re going for something mainstream and stable and familiar. We’re not going for 100% hard-core freedom such as LibreBoot and Trisquel, important though they are. While there are several good options, for me (long time Ubuntu fan) it’s probably going to be one of:

  • Ubuntu, in its default Gnome form — the most common, generic Linux; stable and widely known and supported.
  • Ubuntu, with customisations (e.g. a MacOS-like launcher).
  • Elementary OS — slick, MacOS-inspired style; based on Ubuntu; generally great for beginners, especially from MacOS; but perhaps too quirky and too niche (compared with Ubuntu) to be the best choice for this situation. (Here’s a review.)

Also considered:

  • PureOS — slick; made with great dedication to software and phone freedom; likely too new, quirky and niche (compared with Ubuntu) to be the best choice for this situation.
  • Ubuntu Budgie — MacOS-like style.
  • Ubuntu MATE — supports MacOS-like, Windows-like and Ubuntu-like layouts; but I’m wary it may not offer the best modern simplicity and future, being based on older Gnome2.

For Ubuntu customisations, we might be looking at just installing a MacOS-like dock such as Dash to Dock. My philosophy is not to care much about the non-functional visual elements of a design, such as its colour theme, icon style and type fonts. Where to find things (apps, settings, files) is more important.

Aesthetics do matter too, and the MacOS dock, with its icons standing on a reflective surface, unfolding and bouncing, is undoubtedly beautiful. However, placed at the bottom of the screen it has a serious flaw: vertical screen space is tight on a laptop, and for tasks like writing a document or programming, or even just reading a web page, windows should use the maximum height available. Even with options to maximise a window or auto-hide the dock, docking at the side is a better arrangement.

What Replacement Apps?

Replacement user apps:

  • email (Apple Mail)
    • –> MailSpring (FOSS desktop UI, like web mail UI, for IMAP mail accounts)
    • –> Thunderbird? (for recovering local-only (POP) mail from Apple Mail)
  • web browser (Safari)
    • –> Firefox
  • photos (Apple Photos)
    • –> Shotwell?
  • Zotero reference manager
    • –> Zotero also works in Linux

Configuration, Admin:

  • access to network-shared documents folder
    • –> SMB share: also works in Linux
  • backup (TimeMachine)
    • –> Borgmatic (efficient external backup)
    • –> Cronopete? (friendly UI but limited, internal?)
    • –> TimeShift? (for system configuration)

What to Configure

User Apps

  • MailSpring
    • gmail account
    • other account
  • Thunderbird?
  • Firefox
    • create Firefox account
    • plugin: Bitwarden
    • plugin: Adblock Plus
    • plugin: Zotero
  • Shotwell
  • Zotero

Admin

  • machine name, user name, …
  • remote admin
    • ssh
    • xdmcp or rdp or xspice?
  • backup: Borgmatic
  • backup: TimeShift?
  • online accounts (Ubuntu config)
    • Ubuntu One (needed for Livepatch)
    • Google account?
    • our self-hosted Nextcloud (calendar, address book, …)
  • external accounts (manually)
    • our shared documents folder (NFS/SMB share): open & bookmark
  • printer
    • install hplip (for HP printers), hplip-gui (esp. useful for scanner)
    • connect to our local and network printer(s)
  • preferences
    • lock / power
    • appearance, …
  • remove unwanted apps
    • games, favourites, …

Additional Apps Wanted

  • Bitwarden
  • syncthing?
    • share with phone
    • share among users
  • KDEConnect?

Practise the Configuration

The goal is to configure a laptop that is ready to use, not one where the user will need to configure everything bit by bit as they start using it.

We can practise setting everything up before doing it for real. The practice run is going to involve lots of manual work and trial and error. We can document it, and try to automate parts of it, to help make the real configuration quick and smooth.

For the practice run, we can use dummy accounts wherever we connect to external systems. For example, create an external email account that is not using the user’s real email address or password. We don’t want the user’s real email to be spammed with all our configuration attempts, certainly not the trial and error.

When it comes to the final real configuration, the user will have to be involved. Their real email account will receive various configuration emails, and they will have to create and store some new passwords. Depending on their attitude, the user can choose whether they want to be hands-on in this, choosing their own passwords and so on, or whether they want us to do it all for them. This is the part that needs to go quickly and smoothly.

What to use for a test environment?

  • additional user account on own system
  • virtual machine (VM)

We could practise some of the configuration inside an additional user account on our own laptop, assuming we use a similar enough operating system and don’t mind installing the required apps on it. That is a good start. Better would be to create a VM, where we could practise every part of the set-up, except for any hardware that is not available in the VM, such as perhaps a fingerprint reader or a printer.

We will surely need to fix and adjust the configuration afterwards. Remote admin access will be useful, if the user is willing. There are broadly speaking two levels to choose from, depending on the user’s relationship to the admin:

  • remote desktop (via e.g. VNC):
    • the user needs to be already logged in and accepts our connection at their discretion;
    • the user can watch what the admin is doing, though they might well not understand it;
    • the admin can thereby access the user’s data and account settings in the same way the user would;
    • the admin may access system configuration (via sudo).
  • remote log-in, by command-line (via SSH) and/or graphical (via e.g. XDMCP):
    • the user need not be involved nor see it;
    • the admin has the same control as if they had physical possession of the machine: full access to the system configuration and, depending on how the system was set up (encryption), likely also access to the user’s own data and account settings (to be used with the user’s consent, of course).

Switching from a MacBook Air to a Freedom Software Laptop

Leaving Apple’s Nursing Home

This article tells how we replaced a MacBook Air with a freedom-software laptop, aiming to keep it delightful to use and to carry about, while standing up in support of the principles of freedom of the users, freedom from the control and lock-in that Apple wields over its users, its subjects.

The Opportunity: the old MacBook Air dies.

The MacBook Air showing a “panic” message at switch-on and dotted lines across the screen

It’s terminal. The diagnosis is that the soldered-on RAM has failed. Technically speaking it could be repaired, but it’s not worth it. We need a new laptop.

This is the opportunity. We have to make an effort to replace this and set everything up again, one way or the other, so can we make the effort to switch to freedom software at the same time? Why should we?

The Choice: Apple or Freedom?

While we should choose our direction according to our values and principles, we all find it hard to see and evaluate the big picture.

Apple promises to sell us a world in which “our” computer systems do what we want and what we need, easily and quickly and beautifully. At first sight, that is indeed what their products look like. Only when we dive deeper into their ecosystem do we begin to learn how controlling they are. Devices we buy from Apple are not “ours”; they are tightly controlled by Apple. Apple restrict both what we are allowed to do (legal controls) and what we are able to do (practical controls). Let’s see an example of how this works out.

As long as we play along inside Apple’s walled garden, everything smells of roses. Now let’s try to message a friend who has not bought Apple, or share photos with them. Suddenly we hit the wall. Our friend is Outside, and Apple has locked the doors. But it’s OK, we say, they’re not blocking us: look, we just need to install and sign up to Facebook’s WhatsApp or Google’s Photos, because that’s what our friend is using. That seems to work. Why? Because Apple chooses to unlock the door for us to install those particular apps, according to agreements with those particular vendors. Apple only lets us install software from their own store, and they only let in software that conforms to strict Apple-centric rules. That’s very strongly enforced on iPhones, with MacOS moving swiftly in the same direction. The marketing message says this is all to protect us from nefarious cyber threats. Who could deny that there is a grain of truth behind that? Yet the unspoken reality is that they are mainly protecting their control over our digital life.

Besides, installing another app to meet a friend outside this garden only “works” in a crude way: it still does not allow us to invite our friend to meet us in our current messaging system. Instead we have to go and visit them in one of those separate, equally proprietary walled gardens, where we can’t share our photos and contacts and messages directly.

It’s not only Apple. Google and Microsoft are doing it too, while Apple and Amazon wield the tightest restrictions over their users. If you were not aware of how bad it is, try reading up on how the vendors can remotely install and uninstall software on what they like to call “our” devices.

The Future of Computers

Two of the most readable short articles illuminating this sad state of affairs are Your Phone Is Your Castle and The Future of Computers: The Neighborhood and The Nursing Home by Kyle Rankin. The author is the chief security officer of Purism, one of several small companies that are passionately contending to change the landscape by offering a digital life characterised by principles of freedom. Freedom in the sense that we the users are in ultimate control of our digital data systems, not the other way around. “As a social purpose company, Purism can prioritize its principles over profit. The mission to provide freedom, privacy, and security will always come first.”

Another player is /e/ Foundation (“Your data is YOUR data!”), bringing us de-Googled Android phones. These phones can run without any dependence on or control by Google: instead the user is in ultimate control. The irony of Android being marketed as an “open source” operating system is that only parts of it are open source and people have had to expend a huge amount of effort to build replacements for Google’s proprietary parts. But now the huge efforts of many volunteers over many years, now beginning to be augmented by some small companies including /e/, are paying off and these alternatives exist. Read more in, for example, a late-2020 interview in The Register.

These companies are formed from small groups of people following their beliefs. Together they are building the next wave of the freedom software movement that is perhaps most widely known as the Linux world. Taking the idea far beyond the freedom to re-use and re-mix individual software programs, they are now bringing freedom to the world of connected digital services that we use to store our family memories and to communicate with one another.

Freedom Software Laptops

Back to laptops.

A few big-name manufacturers make a few of their models available to buy with Linux pre-installed. Sadly they hide rather than promote this option, seeming to consider it merely a necessity to satisfy certain business customers, and offering little beyond a basic default installation which could easily be done at home.

The best way to support freedom software, and to get a machine that is already properly set up for it, is to buy from one of the small companies that specialise in it.

A DuckDuckGo web search for “Linux laptops” found plenty of starting points, some articles listing the favourite mainstream laptops that people like to run Linux on, others listing the specialist companies that sell Linux laptops.

I ended up looking at both alternatives: buying a mainstream laptop, likely second-hand, or buying a new laptop from a specialist. The category I am looking for this time is slim, ultra-light or “ultrabook”, around 14″ screen size, to replace the feel of a MacBook Air.

The best-liked mainstream laptops this year seem to be Dell’s XPS 13 series first, and Lenovo’s ThinkPad X1 Carbon series second. Each series spans a wide range of specs.

Specialist Linux laptop vendors include System76 (such as their Lemur Pro), Purism (e.g. the Librem 14), and Pine64 (e.g. the Pinebook Pro), along with several more. Some make their own hardware, while others buy mainstream or OEM hardware and customise it. Most offer a choice of operating system, all based on well-known open source OSes (the GNU/Linux or *BSD families), sometimes customised or own-branded.

Then I found Laptop with Linux.com, a trading name of Comexr B.V. in the Netherlands. They sell a range of laptop styles, all based on the OEM brand “Clevo”, and have a lovely set of customisation options ranging from hardware components to setting up disk encryption, choosing installed applications and choosing my user login name. None of that is anything I couldn’t do at home, but it shows they go further than a basic default installation of the OS and it genuinely will save me some time and effort. For me, they offer the extra advantage of shipping with UK tax and duties already included.

Second-hand? Tempting. New? Sensible.

To begin with, I could not accept the cost of buying new, as second-hand machines of what I considered decent spec were available for hundreds of pounds less. Eventually I re-balanced my assessment in favour of buying something intended to last for years, and I mean ten years. The hassle of changing from one computer to another, setting everything up and getting used to the differences, can realistically be valued at tens of hours. From that point of view, it made sense to buy something new and high spec so that it will not seem too terrible after many years.

So it is that I am ordering the Clevo L141MU 14-inch Magnesium Laptop. I will go for a mid-to-high hardware spec, particularly focusing on speed because I want it to be pleasant to use, and mid-level RAM and SSD capacity because this is an upgradeable computer and the prices of those will come down. RAM in particular can be upgraded later with no hassle. Upgrading the SSD later would require externally copying its contents to the new one which might be an evening’s work.

It is even lighter than the MacBook Air it replaces, and only fractionally thicker.

Green Tariffs — Beware Greenwash, Choose Well

Many of us want to support green energy. We keep hearing we can play our part by switching to a “green tariff”, and we can.

Sadly there is a big trap that we need to learn to recognise: the “green tariffs” advertised by many suppliers are a deceptive marketing ploy known as “greenwash”.

Look for explanations and advice from independent organisations to identify the (very few) good green options. Price comparison and switching web sites can be misleading. Some good resources are:

The core of the explanation is this. When we ask our mixed-source energy supplier to switch to a “green tariff”, the main things that happen are trivial accounting exercises that do not result in any less brown or any more green energy being bought or sold or produced.

First, let’s be sure we understand that our electricity is supplied by pouring all the sources into the National Grid and delivering a portion of the mix to our home, like water from a tap on a shared pipe. There is no technical way to separate out which bit came from which source.

Therefore any claim like “we deliver 100% green energy to your home” is already misleading. The only thing we can potentially achieve by switching is to redirect the money from our bills away from brown sources and into green sources.

What really happens? When we switch to a “green tariff” from our mixed-source energy supplier, they may “allocate” to us (on paper) a portion of the existing green energy supply that is really shared among all their customers, thereby deeming the non-green-tariff customers a corresponding bit “browner”, not redirecting our bills towards green supplies, and not changing the overall supply or demand at all. Or they may claim “offsetting” or “matching”: cheaply buying up certificates proving that a green source generated that amount of energy. Indeed it did, but not for us, and not because of our switching.

They may round it off with talk of tree planting to help us forget about questioning the technicalities.

Companies’ marketing material, price comparison websites, staff on the phone, and even rules from energy regulator Ofgem aren’t helping customers understand…

Which?

The “green tariff” has been advertised for so long and so widely that it is hard to believe it does not mean what it ought to, hard to believe the industry has got away with such misrepresentation, but this has been going on for years and is still the case in 2021.

If we have already switched or were planning to do so, we might feel deceived. But there is something we can do.

What Can We Do?

The conclusion is simple. Search for articles like those linked above, which list the few suppliers that directly buy or produce renewable energy, investing their customers’ bills into increasing renewable generation. The way for a consumer to make a difference is to switch to one of those suppliers.


I first mentioned this issue years ago in some notes on Wind Energy when I lived in sight of a wind turbine and decided to make the switch.

Disclaimer: I have no connection to the industry besides being a customer and bond holder of Ecotricity.

Running an OpenWrt Router

I am running an OpenWrt open-source router, at last.

OpenWrt: Wireless Freedom

Dave kindly donated me the hardware three years ago, when I spent many happy and frustrating hours installing OpenWrt for the first time, bricking it, recovering by connecting a serial port inside it, and eventually finding the OpenWrt configuration interfaces at that time were just too complicated for me to navigate.

It had sat on my desk ever since, unused.

What changed?

The old noddy little router

This week, our noddy little ISP-provided router keeled over.

All I did was try to change its upstream DNS server addresses to point to AdGuard’s ad blocking service. There was a simple web UI to enter the addresses, but, after doing so, its web UI promptly and permanently died and would not come back. Its DNS gateway function and SSH access died too, while some functions such as its basic routing and port forwarding continued. I tried power-cycling the router, of course, but avoided doing a factory reset because then I would lose my port forwarding that provides access to my self-hosted services such as Matrix and contacts and calendar, and would not be sure I could reconfigure everything. I was able to regain internet access temporarily, by manually configuring each of our devices to use external DNS server addresses instead of the router’s local address.

Well, I didn’t like that router anyway. Its UI was slow and awkward, its features were very bare and its WiFi was weak. (It was a Sagemcom 2704N, also branded PlusNet and Technicolor.)

So it was that I took a second look at this TP-LINK TD-W8970 router.

A pleasant surprise awaited: OpenWrt had just the previous week released a major update, a 2021 version, a year and a half after their previous 2019 version, and it looks much more polished. A quick in-place firmware upgrade, followed by many hours figuring out how to make and manage the configuration, resetting, starting again from defaults, and it is now all working: ADSL WAN connection, wired, wireless, my port forwarding rules for my servers, and some static DHCP and static DNS hostname entries.
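For the record, the in-place upgrade itself is a single command from the router’s shell. This is a sketch only: the file name is illustrative, and flashing an image built for the wrong model or hardware revision can brick the device.

```
# Sketch only -- download the sysupgrade image for your exact model and
# revision first; the file name here is illustrative. '-v' is verbose.
sysupgrade -v /tmp/openwrt-sysupgrade.bin
```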

Where the previous router had hung lopsided from one screw, to make a better impression and improve its chances of acceptance by the family I screwed it neatly to the wall and tidied the wires.

The Ordinary User May Appreciate…

TP-LINK TD-W8970 v1
  • ad-blocking
  • stronger WiFi signal now covering the whole house and garden
  • faster

None of these benefits seen by the ordinary user are unique to OpenWrt, of course.

Ad blocking was the trigger for this whole exercise. I had previously been considering self-hosting either Pi-hole or AdGuard Home. Recently I learned that the AdGuard DNS service is currently available free of charge, simply by setting it as the router’s DNS server address (or, less conveniently, by overriding the setting on individual devices). While less comprehensive and customisable than a self-hosted ad-blocking DNS server, for the time being the convenience and simplicity of this solution wins.
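For anyone wanting to do the same on an OpenWrt router, the change is small: ignore the ISP-advertised DNS servers on the WAN interface and list the AdGuard resolvers instead. A sketch of the relevant excerpt, assuming AdGuard’s public ad-blocking addresses at the time of writing (check their site for current values):

```
# /etc/config/network (excerpt) -- sketch only
config interface 'wan'
        option peerdns '0'         # ignore the DNS servers advertised by the ISP
        list dns '94.140.14.14'    # AdGuard DNS (ad-blocking); addresses may change
        list dns '94.140.15.15'
```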

The new router is faster in a few ways: faster WiFi connection speeds; faster access to self-hosted services such as backups enabled by gigabit ethernet (up from 100 Mbit) for the wired connection; and (probably) some faster software operations such as DNS where the previous router often seemed responsible for delays of several seconds.

The Self-Hoster Appreciates…

Configuration Example

Where OpenWrt shines is in the features I use for self-hosting services, and how I will be able to manage it over time.

Because it’s open-source software:

  • reassurance that the software cannot be abandoned at the whim of some company;
  • strong support for open and standard and modern protocols, e.g. mesh WiFi, encrypted DNS standards, standard Unix admin tools;
  • likely to be upgraded to add new features, support new security measures;
  • I can keep my configuration if I need to buy new or different hardware, because the same software runs on many devices;
  • many optional add-on features contributed by community members;

Because it’s software for professionals:

  • full IPv6 support, alongside IPv4;
  • strong WiFi features, e.g. multiple networks (trusted vs. guest);
  • strong network protocols support, e.g. tagged VLANs, switch control protocols;
  • configuration stored as text, so can be managed by external tools like Ansible and version control, and re-configured from scratch by one automated script (“configuration as code”, “infrastructure as code”);

Things That Went Wrong

Bricking the device during initial installation

Part of the OpenWrt TD-W8970 installation instructions, which are in a linked forum post, advised using commands like “cat openwrt.image > /dev/mtdblock1” to install OpenWrt initially. What appears to have gone wrong is that this did not write all of the image file to the flash memory: some blocks remained blank, and when the router rebooted it simply hung. I got in touch and was advised there are more reliable ways to do it. To recover, I had to buy a serial-to-USB adapter, open up the router, solder on a serial header, and use the serial port recovery method.
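As I understand it now, a more reliable route uses OpenWrt’s own “mtd” utility, which erases each flash block before writing it, rather than a raw “cat” onto the block device. A sketch only, to be run on the router itself; the partition name and image path are illustrative and vary by device:

```
# Sketch only -- partition name ("firmware") and image path vary by device.
# 'mtd' erases each flash block before writing, unlike 'cat > /dev/mtdblockN'.
mtd -r write /tmp/openwrt.image firmware   # -r reboots after the write completes
```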

Some web sites would not load

At first, a few ordinary web sites failed to load.

According to a note near the end of the user guide “Dnsmasq DHCP server” page:

“If you use Adguard DNS … you need to disable [DNS] Rebind protection… If not, you can see lot of this log in system.log, and have lag or host unreachable issue.”

"daemon.warn dnsmasq[xxx]: possible DNS-rebind attack detected: any.adserver.dns"

I have read a lot more about this issue since then, to understand it better. I changed the setting, as suggested, and everything seems to work OK now.

I wish this issue were explained more clearly, and with references. I am still not entirely comfortable that disabling rebind protection outright is the best that could be done: it seems to me it would be better if we could accept just the “0.0.0.0” responses that this DNS service sends while still protecting against any other local addresses.
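For anyone hitting the same symptom, the suggested change amounts to a single dnsmasq option. A sketch of the relevant excerpt, which is what the LuCI toggle writes as far as I can tell; note it weakens rebind protection globally:

```
# /etc/config/dhcp (excerpt) -- sketch only; weakens DNS rebind protection
config dnsmasq
        option rebind_protection '0'
        # A narrower alternative keeps protection on but whitelists domains:
        # list rebind_domain 'example.com'
```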

WiFi Would Not Connect

After a while I decided to change the WiFi channel selection from 11 to Auto. Next day, our devices would not connect. Some of them would briefly attempt to connect and immediately disconnect, while others would not even show our WiFi network in their list.

It turned out the router had switched to channel 13. From what I have been able to learn, this is a valid channel to choose, although in the USA there are restrictions on the power level on channels 12 and 13. A lot of writers strongly advise only choosing among 1, 6, and 11. The rationale for this advice seems to originate from one particular study that may not be relevant in today’s common scenarios; some writers disagree and it’s not really clear. I wonder if the problem is that the firmware in many devices may not “like” connecting to channels above 11.

Whatever the precise cause, switching back to manually selected channel 11 seems to have solved the problem.

Struggles

It was far from a breeze to install, and far from a breeze to configure.

The OpenWrt web UI (LuCI)

LuCI is much improved, but still not clear and helpful. Examples:

  • understanding how to set upstream DNS (on WAN interface, in LAN interface, in DHCP settings, in all of these?);
  • same for how to set local domain name (3 places to choose) and what the consequences are.

Poor documentation

I struggled with the OpenWrt “user manual”. For example, many of its pages say basically “help for FOO: to accomplish FOO, I pasted the following text into the config files in some unspecified version of OpenWrt,” without explaining what exactly FOO was meant to accomplish and its trade-offs and interactions.

Configuration as code

I discovered by accident that LuCI can show the commands for the settings changes, if you click the mis-named “unsaved changes” button which appears after pressing “save”.

That’s a great start. It could be developed into something much better: a real configuration-as-code methodology. Nowadays that should be promoted as the primary way to manage the router. Instead of just “backup” and “restore” there should be facilities such as diffing the current config against a backup and reverting selected differences. Tools should be promoted for managing the config externally, from e.g. a version control system or Ansible.
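As a tiny illustration of the kind of workflow I mean, here is a sketch in plain shell: treat the config as text and diff the live version against a saved known-good copy. On a real OpenWrt box the live text would come from “uci export”; here both files are mocked up so the commands run anywhere.

```shell
#!/bin/sh
# Sketch of configuration-as-code: diff the live config against a backup.
# On OpenWrt the live text would come from `uci export`; these are mock-ups.
cat > /tmp/backup.conf <<'EOF'
config dnsmasq
	option rebind_protection '1'
EOF
cat > /tmp/current.conf <<'EOF'
config dnsmasq
	option rebind_protection '0'
EOF
# A non-zero exit status from diff signals drift from the known-good config.
if ! diff -u /tmp/backup.conf /tmp/current.conf > /tmp/config.diff; then
    echo "config drift detected:"
    cat /tmp/config.diff
fi
```

From there it is a short step to keeping the exported text in version control and reverting drift selectively.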

Inconsistent defaults

When LuCI writes a config section, it changes settings that the user didn’t change. It seems to have its own idea of what a default config looks like, and this differs from the default config files supplied at start-up. That makes it difficult to manage the settings in version control. These spurious changes are shown in the LuCI pending-changes preview. (It would be helpful if that preview included the option to revert selected changes, although that would not go far enough.)

How it should be done: the LuCI defaults should always match the text config defaults, and that should be tested. This would come naturally when adopting configuration as code as the primary management method.

Finding what (A)DSL settings to use

Finding settings to use for the ADSL connection was hard. My ISP, PlusNet, published a few basic settings (VPI/VCI, mux, username and password, etc.) but OpenWrt required other settings as well, and some of the published settings didn’t exactly match OpenWrt’s.

The OpenWrt ISP Configurations page seems quite useful, but it says for example “Annex A, Tone A” whereas LuCI doesn’t have an option named exactly “Annex A”: its options include “Annex A+L+M (all)”, “Annex A G.992.1”, etc., and it doesn’t have an option for “Tone A” but instead “A43C+J43+A43”, “A43C+J43+A43+V43”, etc. This is really frustrating if one is not a DSL expert: I did not know which of the available options would work and which would not. When on my first try it would not connect (showing some sort of authentication error), I did not know which settings could possibly be the cause.

After a lot of reading and experimentation I noticed that the generated text configuration corresponding to each LuCI option gave me a strong clue: the generated config for tone “A43C+J43+A43” used the option code value “a”, whereas for tone “A43C+J43+A43+V43” it used “av”. That strongly suggested I should select the former. And similarly for “Annex”.
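To make that concrete, the DSL section this corresponds to in /etc/config/network looks roughly like the sketch below. The option names are OpenWrt’s; treat the values as illustrative rather than a recipe, since the right ones depend on the ISP and the line:

```
# /etc/config/network (excerpt) -- sketch only; values depend on ISP and line
config dsl 'dsl'
        option annex 'a'    # corresponds to LuCI's "Annex A G.992.1"
        option tone 'a'     # corresponds to LuCI's "A43C+J43+A43"
```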

Finally I came across a small comment between two example configurations in that same page, saying that I must also delete the ATM bridge that was set up by default. The LuCI description of “ATM Bridges” says, “ATM bridges expose encapsulated ethernet in AAL5 connections as virtual Linux network interfaces which can be used in conjunction with DHCP or PPP to dial into the provider network.” Not great. That didn’t help me at all.

After changing settings as best I could, and deleting that ATM bridge, it then worked.

How it should be made easier:

  • define a way of publishing a DSL configuration online as a structured code block (could be the OpenWrt config language, for a start);
  • make LuCI able to accept a whole DSL definition in a single cut-and-paste operation (a text config box);
  • start a database of these (encourage this to be maintained by the community; make it distributed);
  • add a “search in database(s)” function for these in LuCI.

Seedvault Backup: Why not add a launcher icon?

Hooray for Seedvault Backup! At last there is an open-source backup solution that can be built in to de-Googled Android-based phones such as those running LineageOS or CalyxOS. I should write more about why this is a fantastic development, but not right now.

This article is about something else: the importance of keeping a very clean default launcher experience for the Ordinary User, and how surprising the habits of the Ordinary User may seem to us if we are a developer or a Power User.

This story starts with Seedvault Backup currently (mid-2021) being relatively hard to find and awkward to access in the Android system settings menus. A developer proposed making it easier to find and access by adding a launcher icon for it. This would certainly make it easier to find and access. But here is the long version of my response to that proposal.

TL;DR: In my opinion it is probably best NOT to add a launcher icon by default, because that is not in the best interest of the ordinary user in their ordinary every-day activities. Basically, my argument is that an Ordinary User will treat backup as a set-and-forget setting, and will want to ignore it for nearly all of its life, so we shouldn’t insert an icon for it among their user apps.

The rest of this comment is rather long. It is not meant to be a rant, just a fuller explanation of why I make this suggestion, and I thought I might as well write it all down as it may not all be obvious to everyone.

I understand, it would indeed be nice to make the Seedvault UI a bit easier to find and access. It would especially be nice during the time I (the user) am setting up and testing the backup procedure, when I need to access it again and again. Let’s come back to this later, towards the end of this long comment. But I am not the ordinary user and this is not the usual way of using my phone, so let’s first think about the ordinary user in their ordinary usage.

I have had the privilege of watching a real Ordinary User recently and this is what I found in real life. The “ordinary user” I speak of is quite surprising to us techies: it is a person who is focused on their own activities and does not care to spend time understanding or interfering with how their phone is working. In fact we might consider them to be neglecting their phone’s needs and health. This ordinary user does not want to customize their phone: they do not even add favourite apps to their home screen! They just find them in the launcher or in recents. This ordinary user does not care about keeping the software up to date, they only do so if a pop-up forces them to, otherwise they ignore it. This ordinary user does not even bother to dismiss their notifications! That’s an optional extra thing they don’t need to do, they don’t want to waste their brain space on it, they just want to do something particular with some particular app right now and then shut the phone.

A typical default launcher such as on LineageOS shows all the user apps, plus just one “Settings” icon. That seems an intentional and user-focused way to organize things. The ordinary user likes to get some help setting up the phone, and then forget about it. They don’t need or want to visit what they regard as low-level settings like these again, so they don’t need easy access to them. Additional icons added to the launcher are just clutter that get in the way of them finding their user apps.

I noticed that LineageOS-for-microG adds a second settings icon for microG settings. In a small way, that is already prioritising the interests of power users and developers and techies while starting to degrade experience for the ordinary user. Let us not copy that mistake.

Backup, and microG-settings, are far from the only system settings that some users might sometimes like shortcuts for. They are just two of many. It should certainly be made possible for a ROM maker or a power user to create shortcuts to settings and settings-related apps such as Seedvault, and also to particular activities within it, so that such developers or power users can make a page of settings shortcuts if they want to. That would make sense, for example, with a launcher that allows creating sub-groups of launcher icons. But while making such things possible, let us not underestimate the importance of getting the default experience right for ordinary users.

Now back to the topic of making Seedvault easier to access for those users and those times when that’s needed, a few minor suggestions:

  • Enhancement: I noticed that searching the system settings for “backup” finds Seedvault’s UI, while “seedvault” does not find it. Conversely, searching for “seedvault” does find Seedvault’s “App info” screen, whereas searching for “backup” does not. Let’s at least add “seedvault” to the searchable string used for its settings UI, and add “backup” to the searchable string used by its “App info”, to make these consistent and more discoverable.
  • Enhancement: when the “Seedvault: backup running” notification is showing, clicking this should open the Seedvault UI. It currently doesn’t do anything when I touch it.
  • Minor enhancement, of interest to power users: without adding it to the launcher, mark the UI activity as launchable, so that an “Open” button appears on its “App info” screen where currently there are only the Uninstall and Force Stop buttons.

Maybe we can think of more ways to make it findable when it’s needed without showing anything when it’s not needed.

(My versions: Seedvault 11-1.2, on LineageOS-for-microG 18.1 3-September-2021, on a OnePlus 6 “enchilada”)