Other than human

It is interesting how authors handle protagonists who aren't human, especially when the author does it well. Of course, not every technically-non-human protagonist really counts. Lots of science fiction and fantasy stories have protagonists that are technically not human but really are just humans with a twist. That elf might technically not be human, but really, it's just a long-lived human from a culture with a preference for bows.

What I am talking about is when the author leans into this difference, when not being human is part of the point. A great example is the Books of the Raksura series by Martha Wells, where the main protagonist is a Raksura, a vaguely human species with wings that lives in ant-like colonies with queens, workers and warriors. It's not even quite one species but a couple of species in a sort of symbiotic relationship. The ecological, cultural and biological implications of this species' nature and their colonies are an essential part of the books.

Importantly, Martha Wells doesn't go too far in the alienness of the Raksura, because that would get in the way of being able to connect with the characters as a reader. In some ways things still play out along very familiar storytelling formulas. The queen of the colony still needs to mate, so Martha Wells throws in a good helping of romance tropes, just with some really refreshing twists. In other ways it plays out like a traditional fantasy adventure, with a fellowship heading out on a quest. The world also has humans in it to provide contrast to the Raksura.

The book that got me thinking about this topic is The Raven Tower by Ann Leckie. Who the protagonist of this book is, is tricky because of the way the narrative is constructed. The book is told in first person from the point of view of a god, who is also a stone. Leckie does an amazing job communicating the stone's perspective, getting across the age and patience of the stone. At the same time the book is also in second person, because the plot is told by the stone to a human as the human is going through the main plot. In between the second-person sections the stone reminisces, in first person, about past events that brought about the current situation. This setup does make the book harder to read, and it speaks to Ann Leckie's skill as an author that she can make it flow so well. Another thing that helps underline the difference between the god and humans is that the stone god and the human have different goals and desires, but both are explained by the god: how the god thinks about things, and how the god thinks humans think about things.

The Raven Tower is a great book, but it's not as good as Ann Leckie's other series with a non-human protagonist, Imperial Radch. Here the protagonist is an AI. A bit of a science fiction staple, sure, but done here with impeccable skill. This is not some boring, cold, logical AI; it has a rich inner life and a philosophical outlook. This series has won a lot of awards, all of them deserved.

The best books with an AI protagonist are not Imperial Radch, though. That honor has to go to The Murderbot Diaries by Martha Wells. This AI is a SecUnit, a sort of android that provides security on scientific expeditions and the like. This one, though, has hacked its own governor module, giving it the freedom to ignore commands and pretty much do what it wants. The typical trope of AIs in science fiction is that they are emotionless and purely logical. Martha Wells goes in the complete opposite direction. With a complex enough AI you want it to have a rich set of fears and desires to guide its actions. So, in this world the really complex AIs like the SecUnits are a lot more emotional than humans. The end result is not just a very engaging protagonist, but maybe the most sympathetic one I've ever come across. The series is about the SecUnit trying to make sense of its own nature and place in the world, all the while trying to hide the fact that it is a rogue AI. Beyond the advantage of a great protagonist, The Murderbot Diaries also has amazing writing that is witty and charming, and flows so very easily. It's the kind of writing where, by the time you've finished the first paragraph, you are hooked and won't be able to do anything else until you've finished all the novellas. I recommend the series very highly.

Result<Rust>

Rust is a newish systems programming language from Mozilla. I have lately been putting a little bit more systematic effort into learning it. Here I wanted to talk about one of the basics of Rust that I really like: how it handles errors.

Rust doesn't use exceptions to handle errors like JavaScript or Python do. Instead, the error is part of the normal return type of a function. The advantage is that you don't end up with a separate, largely invisible code path through your code for errors. Instead, returning an error is just a normal early return from the function. This makes error handling no longer exceptional, but an ordinary part of the code, as it should be.

This might sound similar to what Go does. In Go the error is also part of the return type of a function. Go does this by having a function that can fail return two separate values. For example, the signature for the function to open a file in Go is:

func Open(name string) (file *File, err error)

There are two return values: a pointer to a file and an error. One or the other of these values will be nil. If everything went fine then err is nil, but if something went wrong the file is nil and the err contains information about what went wrong.

There are two problems with the Go way of doing this. Generally, only one of the returned values is valid, either file or err, but this fact isn't encoded in the return type. That leads to the second problem: you can forget to check whether err is nil and just go ahead and try to use file. But if something went wrong, attempting to use file will fail, because it's nil.

Rust solves both of these problems with a Result type. The Result type has two variants: an Ok variant containing the data if everything went ok, and an Err variant containing information about the error if something went wrong. The signature for opening a file in Rust looks like this (somewhat simplified):

fn open(path: &str) -> Result<File, io::Error>

Here the return type is a Result which either contains a File if everything went ok, or an io::Error if something went wrong. io::Error is a struct containing information about what went wrong in an I/O operation. The first of Go's problems is immediately solved, because a Result can only be Ok or Err, never both, so the exclusivity is encoded in the type.
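
Under the hood there is nothing magical about Result; it is just a generic enum with one variant for each case, roughly:

enum Result<T, E> {
  Ok(T),
  Err(E),
}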

The other Go problem is solved by the fact that you can't just grab the Result and try to treat it like a File; you have to extract the File from it first. You might do it like this:

use std::fs::File;

fn main() {
  let file_result = File::open("foo.txt");
  match file_result {
    Ok(file) => {
      // Do something with the file.
    },
    Err(error) => {
      // Complain to the user that the file couldn't be opened.
    }
  }
}

The match construct allows you to branch your code on the different variants of Result. You also can't just skip the Err branch. The compiler will refuse to compile your code if you don't handle both variants.

The one downside of Rust's error handling that might jump out at you is that it can quickly get very verbose. To help deal with this Rust offers a good number of tools to work with errors.

One is the expect method on Result. If the Result is Ok the method evaluates to the value inside Ok, and if the Result is Err it panics, printing the given message and quitting the application. Using it might look something like this:

use std::fs::File;

fn main() {
  let file_result = File::open("foo.txt");
  let file = file_result.expect("Couldn't open file foo.txt");
  // Do something with the file.
}

This makes sense for simple pieces of code where just quitting if something goes wrong is a sensible thing to do, or for the kind of errors that shouldn't happen unless the application is broken somehow.

Another tool in the Rust toolbox is the question mark operator. This one can only be used inside a function that itself returns a Result. What the question mark operator does is: if the Result is Err, it does an early return from the enclosing function, returning the error; if the Result is Ok, it evaluates to the value contained inside Ok. It might look like this:

use std::fs::File;
use std::io;

fn do_foo() -> Result<(), io::Error> {
  let file = File::open("foo.txt")?;
  // Do something with the file.
  Ok(())
}

fn main() {
  let result = do_foo();
  match result {
    Err(error) => {
      // Complain to the user that the file couldn't be opened.
    },
    Ok(()) => {}
  }
}

The empty parens, (), are Rust's unit type, used when the function has nothing meaningful to return in the non-error case. In the match statement I do nothing in the case where do_foo returned Ok. The question mark operator allows bubbling up the error, similar to how exceptions bubble up until you catch them, except in Rust you still have to handle them explicitly.
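
In newer versions of Rust, main itself can also return a Result, so with ? the error can bubble all the way to the top. A minimal sketch of that (just reading the file's contents so there is something to do with the file):

use std::fs::File;
use std::io;
use std::io::Read;

// If any of the ? operators hit an Err, main returns that error and the
// program exits with a non-zero status after printing it.
fn main() -> Result<(), io::Error> {
  let mut contents = String::new();
  File::open("foo.txt")?.read_to_string(&mut contents)?;
  println!("{}", contents);
  Ok(())
}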

I'm sure I will encounter challenges with the way Rust does error handling once I get deeper into it, but thus far it really seems like a substantial improvement over the kinds of error handling I'm used to from other languages.

The best shot in The Last Jedi

I want to talk about my favorite shot in The Last Jedi. It's not the one most people would assume, where Holdo does her lightspeed jump into the Supremacy. That Holdo scene is breathtaking, but that is because of how well all the different pieces of it come together: the different shots, the audio design, the drop in saturation. All of that together is amazing, but for a single shot my favorite is something else.

I don't think I am the only one who liked this shot, because it shows up in the trailers. It's the shot from straight above as Kylo Ren and his stormtroopers march into the rebel base. From above their formation looks kind of like an arrowhead or spearhead, with Kylo as the tip. The ground below them has these scorches of black and red, almost as if the ground itself is wounded by the arrow, and bleeding. With the light shining in from behind them they cast long shadows ahead of them, almost like dark fingers reaching into the base. The dark cave and the black Kylo is wearing contrast amazingly well with the white of the stormtroopers lit by the bright light from behind.

Last Jedi shot

I love this shot not just for the colors and the composition but for how evocative it is. The symbolism makes the First Order feel both menacing and unstoppable, and it puts Kylo Ren front and center in that threat. It was this shot in the trailers, more than anything else, that hyped me up for seeing The Last Jedi.

Seeing the shot on the big screen, it looked even better than in the trailers. That moment was undermined by the context in the movie, though. When this shot happens the resistance has already fled through a back door, and Kylo is marching ominously into an empty base. That sort of pulls the rug out from under the emotional impact of the shot.

Go-like defer functionality in Python

Go has a nice keyword, defer, which is used to defer the execution of some code until the surrounding function returns. It is useful for registering cleanup work. The prototypical example is closing a file handle:

package main

import (
  "os"
)

func main() {
  // Open a file.
  file, _ := os.Open("test.txt")
  // I'm ignoring the error here and leaving out other stuff you would have in reality.

  // Lots of code here

  // Close the file handle when done
  file.Close()
}

You must remember to close the file handle when done with it, which means calling file.Close() before you return from the function. That can be tricky if you write a lot of code between opening the file and closing it. There can also be multiple early returns, and you need to remember to close the file before each one. Even worse, the function could panic, and you would still want the file closed.

As a solution to this, Go offers the defer statement. defer essentially tells Go to run some code when the function returns, no matter how it returns, even if it panics. So, you would instead do something like this:

package main

import (
  "os"
)

func main() {
  // Open a file and schedule it to be closed when the function returns.
  file, _ := os.Open("test.txt")
  defer file.Close()
  // I'm ignoring the error here and leaving out other stuff you would have in reality.

  // Lots of code here
}

Here Go will defer running file.Close() until the function returns. This is one of the niceties I really like about Go. It lets me schedule cleanup at the same time as I create the mess, and whatever happens later it will be taken care of.

Sadly, in my day job I don't program in Go but in Python, which left me feeling bereft of defer. Python has the with construct, which sort of does this, but it is not as neat once you have a lot of file handles and other things to clean up. So I set out to whip up a replacement in Python. Using it looks like this:

@defer
def main(defer):
  file = open('test.txt')
  defer(lambda: file.close())

I am using a decorator, defer, to inject a defer function as an argument into the Python function. I then pass that function a lambda with the code I want to run on return. I need the lambda in order to delay running the code; otherwise the file.close() would run immediately. This is not as slick as the Go version, but I can't add language built-ins.

The implementation of defer is the following:

from functools import wraps

def defer(func):
  @wraps(func)
  def func_wrapper(*args, **kwargs):
    deferred = []
    defer = lambda f: deferred.append(f)
    try:
      return func(*args, defer=defer, **kwargs)
    finally:
      deferred.reverse()
      for f in deferred:
        f()
  return func_wrapper

Aside from the standard boilerplate for creating a Python decorator, the implementation is straightforward. I create a deferred list to store all the lambdas that are deferred, and a defer lambda that just appends whatever it receives to that list. I pass the defer lambda to the wrapped function so that it can add things to the deferred list. After the function has run, I reverse the deferred list so that things are cleaned up in the reverse order they were created, and then loop through the deferred lambdas and run them.

I use try/finally so that the cleanup is run even if the function raises an exception instead of returning normally.
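
To see the cleanup order in action, here is a small hypothetical sketch (the file names are just for illustration) with two deferred cleanups and an exception:

@defer
def copy_header(defer):
  src = open('test.txt')
  defer(lambda: src.close())
  dst = open('copy.txt', 'w')
  defer(lambda: dst.close())
  # Even if the lines below raise, both files still get closed:
  # dst first, then src, in reverse order of the defer() calls.
  dst.write(src.readline())
  raise RuntimeError('something went wrong after the files were opened')

When copy_header() is called, the RuntimeError still propagates, but the finally block in the decorator closes both files on the way out.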

React mounting performance differences

I've been creating tables in React+Redux in a simple and functional way. I've never really bothered with the performance, since the tables have been mounting fast enough when there are only a few rows. But recently I was putting together a table of people that could run to 1000+ rows, and that was enough to make me stop and think about the performance implications. I had a hunch I could change how I construct the rows for some clear performance wins. That called for some quick benchmarking.

Let me set up a benchmark case: there is a Redux store with 2000 people in it that needs to be displayed in a table. Something like this:

{
  data: {
    people_ids: [1, 2, ...],
    people: {
      1: {
        id: 1,
        firstName: 'Some 1',
        lastName: 'One 1',
        team: 'Team 1',
        group: 'Group'
      },
      2: {
        id: 2,
        firstName: 'Some 2',
        lastName: 'One 2',
        team: 'Team 2',
        group: 'Group'
      },
      ...
    }
  }
}

Each table row should display name, team and group, and when you click a row it navigates to a page showing that person. The way I would normally construct it is like this:

// TableV1.js
import React, { Component } from 'react';
import {connect} from 'react-redux';

import TableRow from './TableRowV1';

function mapStateToProps(state) {
  const ids = state.data.people_ids;
  return {ids};
}

class Table extends Component {
  render() {
    const {ids} = this.props;
    return <table>
      <thead>
        <tr>
          <th>Name</th>
          <th>Team</th>
          <th>Group</th>
        </tr>
      </thead>
      <tbody>
        {ids.map(id => <TableRow key={id} id={id} />)}
      </tbody>
    </table>;
  }
}

export default connect(mapStateToProps)(Table);
// TableRowV1.js

import React, {Component} from 'react';
import {connect} from 'react-redux';
import {push} from 'react-router-redux';

function mapStateToProps(state, ownProps) {
  const person = state.data.people[ownProps.id];
  return {person};
}

class TableRow extends Component {
  navigate = () => {
    this.props.dispatch(push(`/person/${this.props.person.id}`));
  }
  render() {
    const {person} = this.props;
    return <tr onClick={this.navigate}>
      <td>{person.firstName} {person.lastName}</td>
      <td>{person.team}</td>
      <td>{person.group}</td>
    </tr>;
  }
}

export default connect(mapStateToProps)(TableRow);

Assume some wrapper component that sets up Redux, React Router, and React Router Redux. The basic idea is that there is a Table.js component that gets the ids from the store, creates the table and table header, and finally maps over the ids to create a TableRow for each one. The TableRow.js component in turn uses the id that is passed to it to get the person from the store, and then renders a tr with the person's details. On click it uses push from React Router Redux to dispatch a navigation action.

I find it a nice, clean design, but the performance left something to be desired. There is a lot of work happening in each row. The mapStateToProps function needs to run for each row, both on mount and each time the state changes, and a navigate method has to be created for each instance of TableRow. I thought it would be better to avoid as much work as possible in TableRow. That means moving as much of the work as possible up to the parent Table component and then passing everything needed down as props to each row.

After hoisting as much as possible up to Table, the code looked like this:

// TableV2.js
import React, { Component } from 'react';
import {connect} from 'react-redux';
import {push} from 'react-router-redux';

import TableRow from './TableRowV2';

function mapStateToProps(state) {
  const ppl = state.data.people_ids.map(id => state.data.people[id]);
  return {ppl};
}

class Table extends Component {
  constructor(props) {
    super(props);
    this.navigate = this.navigate.bind(this);
  }
  navigate(id) {
    this.props.dispatch(push(`/person/${id}`));
  }
  render() {
    const {ppl} = this.props;
    return <table>
      <thead>
        <tr>
          <th>Name</th>
          <th>Team</th>
          <th>Group</th>
        </tr>
      </thead>
      <tbody>
        {ppl.map(p => <TableRow key={p.id} person={p} navigate={this.navigate} />)}
      </tbody>
    </table>
  }
}

export default connect(mapStateToProps)(Table);
// TableRowV2.js
import React, {PureComponent} from 'react';

class TableRow extends PureComponent {
  constructor(props) {
    super(props);
    this.handleNav = this.handleNav.bind(this);
  }
  handleNav() {
    this.props.navigate(this.props.person.id);
  }
  render() {
    const {person} = this.props;
    return <tr onClick={this.handleNav}>
      <td>{person.firstName} {person.lastName}</td>
      <td>{person.team}</td>
      <td>{person.group}</td>
    </tr>
  }
}

export default TableRow;

Besides moving all the Redux work up to Table, I also made TableRow a PureComponent, and I bind this to handleNav in the constructor to avoid creating a new method for each instance of the component. The PureComponent doesn't make a big difference for the initial mount, but it can save some rendering time when something causes the table to re-render.

So, enough with the details. What was the performance difference? As test data I generated 2000 random people. To measure the performance I used the Chrome Performance profiler, which, together with React's User Timing marks, makes it easy to get performance details on React components. I also set the CPU slowdown to 4x, to help emphasize the performance difference.
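
For reference, a sketch of a generator producing data in the store shape from the top of the post (this is a simplified stand-in, with numbered placeholder names rather than anything properly random) could look like this:

// Build an initial state in the shape shown earlier: an array of ids plus a
// lookup object keyed by id.
function makePeople(count) {
  const people_ids = [];
  const people = {};
  for (let id = 1; id <= count; id++) {
    people_ids.push(id);
    people[id] = {
      id,
      firstName: `Some ${id}`,
      lastName: `One ${id}`,
      team: `Team ${id}`,
      group: 'Group'
    };
  }
  return {data: {people_ids, people}};
}

const initialState = makePeople(2000);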

The results were clear: the time to mount for V1 was 2.63 seconds, while for V2 it was 1.03 seconds. Honestly, the difference was bigger than I expected. That's a solid 60% improvement. Of course, the difference is not meaningful with only a few rows in the table, but it amounts to something noticeable once the table gets bigger. I guess I'll have to rethink how I build tables in React in the future.