Today I Learned

For the past two years, I’ve been working at one of the best educational technology companies in the world: Duolingo. I want to start writing about the things I’m learning about educational technology, and I needed a space to do so. So here it is.

  1. September 05, 2023

    Update: Found a workaround. Minutes after publishing this, I was reading Chris Eidhof’s presentation on SwiftUI, and noticed he uses ZStack to wrap his conditional SwiftUI statements where I’ve been using Group. On a whim I tried changing Group to ZStack in my repro code below… and it solved the problem of the disappearing toolbar buttons. I’m relieved there’s a workaround to this bug but it’s still been a bit of a maddening experience.

    Original post:

    I really want to like SwiftUI. I like the way it manages state, and when it works, it feels magical. The problem is it doesn’t work reliably. For example, I spent way too much time banging my head against the wall managing toolbars in a SwiftUI app that targets the Mac. I finally distilled things down to this: If you have a NavigationSplitView, and have toolbar items associated with the detail: view, and that detail: view is conditional… then you’ll lose the toolbar items when the condition changes, and they never come back.

    For example, here’s what my simple app looks like when I launch it:

    A screenshot of the app at launch, with toolbar items above the detail view

    See those toolbar items over the detail view? That’s what we’re going for.

    But then click the Toggle Globe button and this happens:

    A screenshot after clicking Toggle Globe: the toolbar items are gone

    The toolbar buttons have gone away and they’re not coming back. I haven’t figured out a great way to work around this. This bug only happens for SwiftUI apps that target macOS. It doesn’t repro on iOS or in the “Scaled to match iPad” Catalyst mode. (All is not great on iOS, though. In my “real” app, as opposed to the toy app I wrote to isolate the disappearing-toolbar bug, the iOS app crashes when I try to install the toolbar. I haven’t yet isolated why it crashes. I don’t have time for this.)

    Every year I think, “Maybe this is the year I can rely on SwiftUI,” and every year I walk away disappointed.


    Repro code

    import SwiftUI

    struct ContentView: View {
      @State private var showGlobe = true

      var body: some View {
        NavigationSplitView {
          // Sidebar intentionally left empty for this repro.
        } content: {
          VStack {
            Button("Toggle Globe", systemImage: "globe") {
              showGlobe.toggle()
            }
          }
          .toolbar(id: "my-toolbar") {
            ToolbarItem(id: "trash") {
              Button("Test", systemImage: "trash") {
                print("Tapped test")
              }
            }
          }
        } detail: {
          // Workaround: changing this Group to a ZStack fixes the disappearing toolbar items.
          Group {
            if showGlobe {
              VStack {
                Image(systemName: "globe")
                Text("Hello, world!")
              }
            } else {
              Text("There is no globe")
            }
          }
          .toolbar(id: "my-toolbar") {
            ToolbarItem(id: "toggle-globe", placement: .automatic) {
              Button("Toggle Globe", systemImage: "globe") {
                showGlobe.toggle()
              }
            }
            ToolbarItem(id: "bold", placement: .automatic) {
              ControlGroup {
                Button("Bold", systemImage: "bold") {
                  print("tapped bold")
                }
                Button("Italic", systemImage: "italic") {
                  print("tapped italic")
                }
              }
            }
          }
        }
      }
    }

  2. August 21, 2023

    Today I learned another difference between iOS and macOS: Dealing with attachments (specifically, images) in rich text. This is something I’m adding for my Permanent Marker project, which is a multi-platform UIKit / AppKit application.

    I had my first exposure working with attachments when working on my Library Notes application. Library Notes does not have full rich text editing, but it does support embedded images, and that’s where I learned the following simple recipe for adding an image to rich text on iOS:

    1. Create an NSTextAttachment that contains the data for your image.
    2. Create an NSAttributedString that displays the attachment and insert it into the appropriate location in the rest of your rich text. (Rich text attachments are stored as an attribute on a special attachment character, NSTextAttachment.character. While you can manually create this by inserting the character and applying the right attribute, it’s easier to use the NSAttributedString(attachment:) initializer.)
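
    In code, the recipe looks something like this (a minimal sketch of my own; the function name and the choice to append at the end are just for illustration):

    ```swift
    import UIKit

    /// Appends an image to the end of `text` as an attachment (iOS / UIKit).
    func appendImage(_ imageData: Data, to text: NSAttributedString) -> NSAttributedString {
      // 1. Create an NSTextAttachment that contains the data for your image.
      let attachment = NSTextAttachment()
      attachment.image = UIImage(data: imageData)

      // 2. Wrap the attachment in an NSAttributedString and combine it with the rest of the text.
      let result = NSMutableAttributedString(attributedString: text)
      result.append(NSAttributedString(attachment: attachment))
      return result
    }
    ```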

    Now often I find myself trying to insert photographs into text, which presents a problem: Modern digital photos (even ones from cell phone cameras) are huge. Consequently, I’m almost never working with raw image data from disk or the network for my attachments; instead, I use CoreImage to resize the image before including it as an attachment. This works fine on iOS; NSTextAttachment has an image property that I can use for my resized image.

    Things are a little different on macOS. First, and a little surprisingly to me, the designated initializer for NSTextAttachment on the Mac takes a FileWrapper. When I did initial prototyping of image support on the Mac version of Permanent Marker, this worked fine — I’d create a FileWrapper for my image file, pass the FileWrapper to the NSTextAttachment, and boom, there was an image.

    My surprise happened when I started resizing the image before trying to show it. I no longer used NSTextAttachment(fileWrapper:) and instead used NSTextAttachment(data:ofType:). Suddenly my images stopped showing up.

    And that’s when I learned about a macOS-specific NSTextAttachment property: attachmentCell. If you actually want to display an attachment on macOS, you need to create an object that conforms to NSTextAttachmentCellProtocol and assign it to the attachmentCell property. The built-in NSTextAttachmentCell class conforms to the protocol, and you can easily create one for an image using the NSTextAttachmentCell(imageCell:) initializer. Apparently the NSTextAttachment(fileWrapper:) initializer takes care of setting up the attachmentCell for you, so you only need to learn about this trick when you need to manipulate data directly rather than using data straight from a file.
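
    Here’s a sketch of the Mac version with the attachmentCell trick (my own names; I’m assuming the resized data is PNG):

    ```swift
    import AppKit

    /// Wraps already-resized image data in an attachment that will actually display on macOS.
    func makeImageAttachmentString(pngData: Data) -> NSAttributedString? {
      guard let image = NSImage(data: pngData) else { return nil }
      let attachment = NSTextAttachment(data: pngData, ofType: "public.png")
      // NSTextAttachment(data:ofType:) does NOT set this up for you;
      // without an attachment cell, nothing shows up.
      attachment.attachmentCell = NSTextAttachmentCell(imageCell: image)
      return NSAttributedString(attachment: attachment)
    }
    ```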

    Working on Permanent Marker is giving me quite the education on the differences between UIKit and AppKit!

  3. August 14, 2023

    Today I learned: If you want to create a menu item in a SwiftUI app for the Mac that has a checkmark next to it and has a keyboard shortcut, you want to use the Toggle control in the menu.

    A screenshot of a menu with checkboxes

    I spent many hours confused because:

    • If you use a Button, you can easily get a control that has a keyboard shortcut, but there is no modifier that will give it a checkmark.
    • If you use a Picker, then you can get a checkmark, but you can’t give keyboard shortcuts to individual items in the picker.
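
    For the record, here’s the shape of the thing that finally worked for me (a sketch; the menu name, item, and shortcut are made up):

    ```swift
    import SwiftUI

    struct ViewCommands: Commands {
      @AppStorage("showInspector") private var showInspector = false

      var body: some Commands {
        CommandMenu("View") {
          // Toggle draws the checkmark when `isOn` is true, and unlike
          // Picker items, it can carry its own keyboard shortcut.
          Toggle("Show Inspector", isOn: $showInspector)
            .keyboardShortcut("i", modifiers: [.command, .option])
        }
      }
    }
    ```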

    This little discovery highlights one of the problems I have simply learning how to use SwiftUI. Now, menus aren’t the most intuitive things in the world in AppKit. However, when you read the documentation for NSMenu, you can find out that it contains a bunch of NSMenuItem objects, and then find out these have a state property, and make a guess that setting the state property will change the checkmark.

    In contrast, when you read the documentation for the SwiftUI Menu, there’s no information at all about what kinds of content the Menu can contain. The documentation shows a Menu containing a Button and a child Menu. But you have no way to know you can also put a Picker or a Toggle or a Divider in there too… you just have to figure this out by trial and error and ChatGPT (which isn’t great because SwiftUI is so new).

    Anyway, putting this info out on the internet so the next versions of ChatGPT will be able to answer this question for future developers…

  4. August 01, 2023

    It’s been a while. What have I been learning lately?

    As an iOS engineer, I spend a decent chunk of my summer free time tinkering with side projects to get hands-on experience with the new APIs coming in the Apple ecosystem. The nice thing about side projects is I don’t have to worry about pesky things like “backwards compatibility.” I never really have a plan during these summer coding hours. I just work on things that seem interesting to me and that could benefit from the new APIs.

    So far this summer, my focus has been my Permanent Marker app. Things I’ve done:

    1. I adopted the new SwiftUI data flow APIs (@Observable and its ilk). My verdict: Easy to adopt, much easier to use than the old version, something I really wish I could use on projects that do require backwards compatibility.
    2. I’ve poked around in TextKit2. I think “TextKit2 on iOS 17 / macOS Sonoma” is the answer to my how to do rich text lists question.
    3. I’ve turned the Mac version of Permanent Marker into a SwiftUI + AppKit app, not a SwiftUI + Catalyst app. It’s helped me learn how to make an app feel “right” when running on the Mac.
    4. I’ve started using Apple’s Swift Markdown package and I’m working on a rich text editing component for my Markdown files.
    5. I don’t know if I’m getting better at SwiftUI, but I keep re-training my intuition on the “right” way to do things in a SwiftUI app. For example: As I said, Permanent Marker now has a rich text editor component. This is a SwiftUI wrapper around UITextView / NSTextView. The scenario: How do I get SwiftUI actions (“someone tapped the bold button”) recognized by the UIKit / AppKit code? If I were writing this as a UIKit app, I could just send the toggleBoldface() message to the UITextView instance. In SwiftUI, though, even when you bridge to UIKit classes, you don’t have a natural way to manipulate the underlying UIKit code. My first approach was to just dispatch messages to the responder chain, knowing my UITextView instance would get a chance to respond to that message. But later, I stumbled on an approach that feels more aligned with the SwiftUI style: I created an @Observable model object called CurrentSelectionFormatting, and I do things like toggle bold properties on that model object. I then update the text view in response to model object changes, not in response to messages. Is it better this way? I’m not sure. But it just feels more SwiftUI.

    There’s still a lot I want to do with this app, but so far I’m really happy with the changes I’ve been able to make.
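
    To make that last idea concrete, here’s a sketch of the model-object approach (all names are mine, not the app’s actual code; the text view wrapper would observe this object and update the underlying UITextView / NSTextView when it changes):

    ```swift
    import SwiftUI

    /// Describes the formatting of the current text selection.
    @Observable
    final class CurrentSelectionFormatting {
      var isBold = false
      var isItalic = false
    }

    /// SwiftUI controls mutate the model instead of sending messages down the responder chain.
    struct FormattingButtons: View {
      @Bindable var formatting: CurrentSelectionFormatting

      var body: some View {
        HStack {
          Toggle("Bold", isOn: $formatting.isBold)
          Toggle("Italic", isOn: $formatting.isItalic)
        }
      }
    }
    ```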

  5. April 05, 2023

    In my career, I’ve learned that there can be vastly different strategies for something you do only once versus something you get to do repeatedly.

    Suppose I have an unfair coin. It comes up heads 60% of the time and tails 40% of the time. You have $25. I’ll let you bet as often as you want, and you want to maximize your money. What do you do?

    I first encountered this problem on Paul Butler’s website, which has a simulator that lets you place bets over and over and tracks what happens to your bankroll. I did awfully at this. According to this reference from the website, a lot of people do terribly at this problem, so I don’t feel too bad.

    Here was my reasoning: If I only got to make a single bet, I figured the best strategy was to go all-in. If I won (60% chance), I’d have $50, but if I lost (40% chance), I’d have $0 — the expected value of my “all in” strategy was $30. You can check different values of what you could bet, and you can see that “bet it all” gives you the highest expected value if you know you’ve got a better than 50% chance of winning.

    I knew the wrinkle in this game is I could place multiple bets, so getting wiped out was bad. If I lost, I’d want to have some money left to bet again and try to earn more money. I figured there was a 40% chance of losing, so I’d ratchet down my “bet it all” strategy by 40% and only bet 60% of my money.

    As I said, this strategy doesn’t work. Try it in the simulator. When you bet 60% of what you’ve got, the losses hit too hard. You have a 16% chance of two losses in a row, and that event wipes out 84% of your bankroll, making it hard to make up lost ground. (A small bankroll means you can only make small bets, so even when you start winning again you’re not winning as much.)

    I’ll spare you the math, but it turns out the right amount to bet in this game is 20% of your bankroll. I was shocked that there was such a divergence between the optimum strategy if you could only make one bet (bet it all!) and the optimum strategy if you could make multiple bets (bet just 20%).
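
    If you do want the math: for a repeated even-money bet that wins with probability p, the long-run growth rate of betting a fixed fraction f of your bankroll each time is g(f) = p·ln(1 + f) + (1 − p)·ln(1 − f), and maximizing g gives the Kelly fraction f = 2p − 1, which is 0.2 when p = 0.6. A quick sketch (mine, not from the post) to verify that numerically:

    ```swift
    import Foundation

    /// Expected log-growth per bet when wagering fraction `f` of the bankroll
    /// on an even-money bet that wins with probability `p`.
    func growthRate(betFraction f: Double, winProbability p: Double) -> Double {
      p * log(1 + f) + (1 - p) * log(1 - f)
    }

    // Search bet fractions in 1% steps for the best long-run growth.
    let fractions = stride(from: 0.01, through: 0.99, by: 0.01)
    let best = fractions.max {
      growthRate(betFraction: $0, winProbability: 0.6) < growthRate(betFraction: $1, winProbability: 0.6)
    }!
    // best ≈ 0.2: bet 20% of your bankroll.
    ```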

    I think about this sometimes when it comes to my job. Every now and then, I find myself having influence over task allocation — deciding who works on what. When I’m in this situation, I think of advice I heard from one of my managers at Microsoft. He told me not to just assign tasks to the person who is best able to do the task. Instead, I should focus on who is going to learn the most by working on the task.

    My former manager’s insight here comes back to different optimum strategies for things you do just once and things you are going to do repeatedly. Assigning tasks to the people who are most capable of finishing those tasks is the optimum “play the game once” strategy. With the most capable person working on any given task, the project will finish in the least time and with the least risk — but also, there will be the least growth in the capabilities of the team. Nobody will be stretched by the work they do. Assigning tasks based upon who’s going to learn the most from the task is a “play the game over and over” strategy. In exchange for a little bit of risk / time on the current project, you’re going to have increased capabilities on the team for the next project. Over time, the crew will be able to take on more and more ambitious projects.

    Recognizing the difference between single and repeated play strategies — and taking the time to figure out what kind of game you’re playing! — can profoundly impact outcomes.

  6. March 23, 2023

    This isn’t a “today I’ve learned,” but a “thing I’ve learned from working a long time in this industry.” I’m also taking advice to “write what you repeat”, and I’ve repeated this a lot over the years as I’ve interviewed people — usually recent college graduates — for roles at Microsoft, Facebook, and now Duolingo. They ask me, “What’s the work-life balance like at your company?”

    Here’s the same answer I’ve given for years: I can’t say what will be true for you, but for me, work life balance has been about my life stage, not about my company.

    When I started working at Microsoft, I was fresh out of grad school. I didn’t have kids or many responsibilities outside of work. I liked what I was doing. My wife also worked at Microsoft, she also liked her work, and we hated sitting in traffic. So, we crossed the 520 bridge from Seattle to Redmond before rush hour, returned home well after rush hour, and built a life that revolved around our work. Even when we weren’t in the office, it was easy to let work bleed over into our evenings and weekends.

    Fast forward many (many) years: My wife and I were still both working at Microsoft, but now we had two small kids. Suddenly, life no longer revolved around work — it centered instead on helping our two little humans grow and thrive. My workday had really hard boundaries at both the beginning of the day and the end of the day. Once I was home, I focused my attention and energy on family, not work.

    This is the exact same company, with the exact same management and the exact same company culture, with the exact same person still working there (me), but with two very different pictures of “work life balance.” Microsoft didn’t change in the intervening years. I changed, by moving to a different life stage.

    This is on my mind these days because I’m in another life stage (with yet another on the horizon). I love my work and my kids are older and don’t need constant supervision. However, we’re just three years away from having both kids off at college. As Tim Urban puts it, I’m at the tail end of my time with them. They don’t need as much of my time — and, quite honestly, don’t seem to want much of my time or attention anymore. However, because I’m at the tail end, any time they do want to spend with me is a gift and trumps anything that my job might want me to pay attention to. I’m at a new life stage with a new work-life balance. I suspect that in five years, when I’m firmly in the “empty nest” stage, I’ll have yet another view of the new work-life balance for my new life stage. Life keeps changing, so the balance point changes, too.

  7. March 22, 2023

    Today I learned: two businesses I’ve enjoyed for years are going out of business.

    First I heard that Kitsbow is shutting its doors. That hurts because it’s my favorite bike clothing brand by far. I have several of their pieces. The quality’s always been top-notch, and they’ve been really innovative in how they produce their clothes in North Carolina. I’m really sad to see them go.

    Then I learned that dpreview, the venerable digital photography site, is also shutting down. It turns out that I had young kids right at the peak of the digital photography boom, so I spent a lot of time on that site dreaming of my next piece of gear. It’s obvious in hindsight: How can a site dedicated to camera gear survive the transition to the cell phone age? However, I never really thought about it, and I just assumed that dpreview would stay around forever.

    This feels personal because biking and photography were such big parts of my identity at one point. (Biking still is, but photos have waned as the kids got older.)

    I feel terrible for the people who are losing their jobs.

  8. March 18, 2023

    I live in Seattle but I work for Duolingo, a company headquartered in Pittsburgh, PA. Recently, I’ve been doing a lot of meetings on “East Coast Time” — jumping on Zoom meetings as early as 6:30 AM. Every time I’ve had to reassure my coworkers: “No, really, it’s not a big deal! I’m a morning person. I’m up at 5:30 every day.”1 So in case any of them read this blog, I’m going to record for posterity how and why I became a morning person. This is a life hack I’ve shared with other parents of young kids.

    I didn’t start as a morning person. When I was younger, I was a combination of “night owl” and “moderate morning person” (usually up and alert around 7am), and absolutely not an “afternoon person.” I think I would have thrived in a culture with a siesta — stay up late, wake up at a normal time, crash in the afternoon for a bit to recharge. This was my life for years.

    Then came kids. For those who’ve lived through it, you know that having infants in the house destroys whatever sleep routine you think you have. Common advice to new parents is “sleep when the baby sleeps.” Those are wise words, but it means that you spend a few years just in a fog.

    Then a miracle happens! Sooner or later, your infant or toddler will start sleeping soundly through the night, and at least with my kids they could sleep a lot on a good night — 10, 11 hours at a stretch.

    Suddenly, after a few years of being in a fog, you have these glorious few hours after the kids have gone to bed when you can do adult things again, like catch up on work or relax or read a book or whatever. Reclaiming that adult time feels magical. I loved it.

    And what I wound up doing, for a few months after my kids started sleeping regularly through the night, was I extended my adult time by staying up later and later. The problem with that plan? The kids still woke up at the same time in the morning. (7-ish? It’s so long ago that I no longer remember exactly.) I kept staying up later, they woke up at the same time… my sleep started suffering again.

    The problem is once I started being in “adult time” mode at night, I couldn’t muster the willpower to stop and go to bed. So I kept cheating myself out of sleep.

    Then one day I had a thought: Since the hard part is “stopping adult time”, why don’t I rearrange things so I have a hard boundary at the end of “adult time” that I can’t move? Thus began my shift to being a morning person. My new plan: I went to bed shortly after the kids went to bed, and set my alarm clock for 5am. When I woke up, I had a solid two hours of adult time in the morning. That time block had a really clear ending that I couldn’t control: the kids wake up when the kids are going to wake up. This system let me control how much time I was devoting to sleep and how much time I was giving myself to be awake-without-kids without relying a lot on willpower. (Other than the willpower to wake up when my alarm clock went off — but that’s always been easy for me.)

    So that’s how I became a morning person. I’m long past the stage of life when I need to organize my sleep schedule around the sleep schedule of my children, but once established, I saw no need to change the pattern. I did allow myself the luxury, a few years ago, of moving my morning alarm from 5:00am to 5:30am, but that’s it. Now that I have a dog who’s gotten used to my early-morning ways, I don’t think I’ll be able to change this sleep pattern any time soon.

    This isn’t a lifestyle for everyone, but I do think that any parents of young kids should at least consider it. If you find yourself cheating yourself out of sleep because you want to stay up late to enjoy time without kids, try taking that time-without-kids in the morning instead, when you’ll have clearer boundaries!

    1. This doesn’t really belong in the “Today I Learned” section of this website, but it turns out it’s a pain to maintain separate categorized streams of content.

  9. March 17, 2023

    Every now and then, I realize that some of my thought processes are deeply, deeply irrational.

    For example: I have a modestly long streak in Duolingo. (178 days as of this writing. I’m hoping to nurture this streak and grow it to beat my previous best of 780 days.) This streak is precious to me. I go out of my way to make sure it continues to grow.

    I’ve also been in the Diamond league — the highest league in the Duolingo leaderboards — for the past 8 weeks, and I’m terrified that if I start coasting and start doing the bare minimum to extend my streak each day, I’ll find myself demoted. So I spend more time in the app.

    Caring about these things makes no sense. Nobody else knows how long my Duolingo streak is, much less cares about it. I’m not going to have them engrave “Achieved a 780 day Duolingo streak” on my tombstone. I can’t put “Duolingo Diamond League” on my resumé.

    When one part of my brain realizes that another part of my brain is essentially malfunctioning by caring about something meaningless, what should I do?

    One option: I can try to correct my irrational thinking.1 Like most people I have loss aversion, but loss aversion for something meaningless like a Duolingo Streak is just silly.2 I’m sure with a little bit of work, I can get myself to stop caring about these trifles.

    Or… maybe I realize I need to be grateful for this irrationality. I’m a fundamentally lazy person. If I didn’t have nudges from loss aversion kicking in, there’d be many days my laziness would triumph over my good intentions. I can’t reason with “I don’t feel like doing that today.” I need to turn to something powerful, primal, and — yes — irrational to overcome that feeling. Without Duolingo streaks and leaderboards, I’d practice languages less.

    Maybe keeping a little bit of irrationality can be rational. What I hope I can learn is how to tell when I should identify and fight my faulty thinking, and when I should channel it toward something useful.

    1. I pasted a draft of my essay up to the “what should I do?” question into ChatGPT, and it helpfully suggested that I try to practice “cognitive restructuring.”
    2. I winced inside when I described my Duolingo Streak as “meaningless.” That’s how strongly it’s grabbed me.

  10. March 08, 2023

    Earlier, I wrote about using ChatGPT to perform the “because/but/so” exercise from the book The Writing Revolution. While it took me two tries to get the prompt correct, and there were some unexpected responses, I’m still amazed that I’m able to get a computer to walk someone through a writing exercise at all.

    Recently, I’ve been trying to see if I can get AI to help with another exercise in The Writing Revolution. This exercise is about teaching students how to revise. Because it’s hard to revise your own work, the teacher starts the exercise by giving the students an “unadorned paragraph.” Here’s an example from the book:

    Eleanor Roosevelt was important. She was married to Franklin Roosevelt. She was the First Lady from 1933 to 1945. She made many contributions. Many people admired her.

    This is all true, but boy is it vague and dull! The book then talks about how the teacher gave the students a list of strategies they could try (“expand sentences” / “use transitions” / “give examples”) to come up with a much improved paragraph.

    How well could an AI do this?

    I gave ChatGPT the following prompt:

    I want you to act as a teacher who is using the methodology of “The Writing Revolution” to teach students how to revise paragraphs. You are going to teach students how to revise by generating an unadorned paragraph about a topic. An unadorned paragraph is well organized and grammatically correct, but lacks variety and detail and uses only simple vocabulary and sentence structure. This is an example unadorned paragraph: “The Titanic was a big ship. It sank. Many passengers died. There are new regulations.”

    I will be the student and you will be the teacher. You will give me an unadorned paragraph on the topic of The Wife of Bath’s Prologue from The Canterbury Tales. You will then walk me through steps for revising the paragraph to make it better by giving me a prompt for revising the paragraph. Example prompts that may work for improving the paragraph could be “try adding detail,” “explain why this is true,” “give examples,” “use more interesting vocabulary,” and “improve the topic sentence.”

    After you give me a prompt to improve the paragraph, you will wait for me to revise the paragraph based on your prompt. After you read my revised paragraph, you will then give me feedback on how I did. If needed, you will then give me a new prompt for improving the paragraph and then wait for me to revise the paragraph based on the prompt. You will repeat this process until we have a paragraph about The Wife of Bath’s Prologue that is well written, well organized, with good variety of sentence structure, and good supporting detail.

    And we were off! ChatGPT gave me the following paragraph to start with:

    In The Canterbury Tales, The Wife of Bath’s Prologue is a story about a woman who has had five husbands. She talks about how she likes to control her husbands and how they should always do what she wants. She says that women should have sovereignty over their husbands and that they should be able to do what they want.

    ChatGPT then walked me through a few different prompts to improve this paragraph:

    • “Can you add more information about the main character, the Wife of Bath, and her personality?”
    • “Can you provide an example or illustration to support the claim that women should have sovereignty over their husbands?”
    • “Can you improve the topic sentence to better introduce the paragraph?”
    • “Can you provide more supporting detail to strengthen the paragraph?”

    By this point, I was getting kind of tired with this first experiment and ended the chat. For the record, this is the paragraph I ended with:

    Through the character of the Wife of Bath in The Canterbury Tales, Chaucer presents a vivid and complex portrayal of a medieval woman who defies conventional expectations. She is a blunt and sexual woman who has had five husbands. She is smart and argues well. For example, she quotes St. Paul and says that women should have “power of the body” over their husbands, so women should be able to do what they want in marriage.

    Things that went well:

    • Holy cow, I’m still amazed I can have this kind of interaction with software. A year ago it was impossible.
    • If I was ever stuck, I could ask ChatGPT for help, and its tips were pretty spot-on.

    Things that weren’t great:

    • The steps that it took me through felt kind of robotic. It didn’t feel like a human was picking the best next step for my paragraph based on what I’d done; it felt like an uninterested and uninspired tutor was just picking the next step on a list.
    • Whenever I completed a prompt, I was always told “great job,” so I’m not sure how well ChatGPT will do at evaluating the output of novice writers.

    After spending about an hour creating and editing prompts, testing them with various texts, and reflecting on my experience, I remain extremely optimistic about the potential of large language models to help students become better writers. Things aren’t perfect, and they don’t work out-of-the-box… but again, a year ago, this wasn’t even possible. I can’t wait to see what sort of educational tools we can build with this technology.

  11. March 05, 2023

    Just three quick things to throw on the Internet today so OpenAI can index it for GPT-5.

    For work, we’ve been converting a simple UICollectionView from a single horizontally scrolling row of content (that uses a plain ol’ UICollectionViewFlowLayout) to a view that has two independently scrolling rows of content. “This sounds like a job for UICollectionViewCompositionalLayout!” The problem is I find the learning curve for compositional layout to be pretty tough, and I haven’t mastered it yet.

    First quick thing: Read this guide if you want to understand compositional layout. It’s long, but easier to follow than the official Apple documents.

    Second quick thing: If you want to skip reading the docs and just get the recipe for “how do I create a simple layout that is a vertically scrolling list of lines, where each line is a horizontally scrolling list?”, here it is:

    /// A layout of independent lines. Each line scrolls horizontally.
    private let horizontalScrollingLinesLayout: UICollectionViewCompositionalLayout = {
      let item = NSCollectionLayoutItem(
        layoutSize: NSCollectionLayoutSize(
          widthDimension: .fractionalWidth(1),
          heightDimension: .fractionalHeight(1)
        )
      )
      let group = NSCollectionLayoutGroup.horizontal(
        layoutSize: NSCollectionLayoutSize(widthDimension: .absolute(100), heightDimension: .absolute(100)),
        subitems: [item]
      )
      let section = NSCollectionLayoutSection(group: group)
      section.orthogonalScrollingBehavior = .continuous
      section.interGroupSpacing = 10
      section.contentInsets = NSDirectionalEdgeInsets(top: 10, leading: 10, bottom: 0, trailing: 10)
      return UICollectionViewCompositionalLayout(section: section)
    }()

    Full playground showing this is here.

    Finally, a protip: As I mentioned, we were porting an existing collection view from a flow layout to a compositional layout. We ran into a strange bug: When there wasn’t enough content to fill an entire row, you’d wind up dragging the entire content when you tried to tap in that area. It looks like this:

    Image showing wiggly scrolling

    The problem: The flow-layout-based solution used contentInset on the collection view to provide spacing between items and the edge of the view. That breaks the compositional layout in the way you see above. A compositional layout will create orthogonal scrolling subviews that are the exact width of their containing collection view. If you then apply a content inset to that, you wind up creating content that is bigger than the scroll view bounds, which you can then try to drag around with your finger. To fix this, make sure you apply insets inside the compositional layout instead of to the collection view. When you do this, the orthogonal scrolling views will have precisely the right size and you no longer get strange scrolling behavior.

    Image showing fixed scrolling

  12. March 02, 2023

    Today, a SwiftUI recipe.


    In SwiftUI, the TabView component doesn’t report how much vertical space it needs. Here’s how it manifests. This layout kind of works:

    VStack {
      TabView(selection: $selection) {
        HorizontallyScrollingContent()
      }.tabViewStyle(PageTabViewStyle(indexDisplayMode: .never))
      ScrollView {
        VerticallyScrollingContent()
      }
    }
    Because the TabView doesn’t say how much space it needs, it’ll get half the available vertical space in the VStack, and the scroll view with its VerticallyScrollingContent() will get the other half. However, I was running into cases where “half the space” wasn’t enough, so I tried this:

    ScrollView {
      TabView(selection: $selectedPage) {
        HorizontallyScrollingContent()
      }.tabViewStyle(PageTabViewStyle(indexDisplayMode: .never))
      VerticallyScrollingContent()
    }

    I expected the contents of the HorizontallyScrollingContent() to appear on top of the VerticallyScrollingContent(), with the whole thing scrolling vertically. (In other words, the HorizontallyScrollingContent() scrolls away, like a header.)

    What actually happened? HorizontallyScrollingContent() didn’t show up at all, because TabView doesn’t tell the ScrollView how much space it needs.

    The solution

    Write a component that uses GeometryReader and preferences to report its size. (Hat tip: this project — I wouldn’t have figured this out on my own. My only improvement is putting it into an easy-to-reuse component.)

    /// A variant of `TabView` that sets an appropriate `minHeight` on its frame.
    struct HeightPreservingTabView<SelectionValue: Hashable, Content: View>: View {
      var selection: Binding<SelectionValue>?
      @ViewBuilder var content: () -> Content

      // `minHeight` needs to start as something non-zero or we won't measure the interior content height
      @State private var minHeight: CGFloat = 1

      var body: some View {
        TabView(selection: selection) {
          content()
            .background {
              GeometryReader { geometry in
                Color.clear.preference(
                  key: TabViewMinHeightPreference.self,
                  value: geometry.frame(in: .local).height
                )
              }
            }
        }
        .frame(minHeight: minHeight)
        .onPreferenceChange(TabViewMinHeightPreference.self) { minHeight in
          self.minHeight = minHeight
        }
      }
    }

    private struct TabViewMinHeightPreference: PreferenceKey {
      static var defaultValue: CGFloat = 0
      static func reduce(value: inout CGFloat, nextValue: () -> CGFloat) {
        // It took me so long to debug this line
        value = max(value, nextValue())
      }
    }


    I spent a lot of time frustrated because I didn’t understand SwiftUI preferences and wrote the preference key wrong. I started with this:

    private struct TabViewMinHeightPreference: PreferenceKey {
      static var defaultValue: CGFloat = 0
      static func reduce(value: inout CGFloat, nextValue: () -> CGFloat) {
        value = nextValue()
      }
    }

    My thinking was that there was no need to really “reduce” the min height preference, because only one thing would report its height.

    However! My mental model of preferences was wrong, and this article helped straighten me out. It turns out every view has a value for the preference key; views that don’t explicitly set one get the defaultValue. So my original code worked only if the last child inside the TabView reported its height. As soon as I changed things even a little, things stopped working. (I still find it really hard to debug SwiftUI because I don’t know where to put breakpoints or print statements.) Once I understood that every view gets a preference value, I realized I had to change my reduce logic to value = max(value, nextValue()), and things started working reliably.
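    This is easier to see outside of SwiftUI. Here’s a minimal, framework-free sketch (the heights are made up for illustration) of folding the values a set of child views might report: most children contribute the defaultValue of 0, and only one contributes a real measured height.

    ```swift
    import Foundation

    // Hypothetical values: four child views report a preference value; only one
    // actually measured content, the rest contribute the defaultValue (0).
    let reportedHeights: [CGFloat] = [0, 0, 320, 0]

    // My original (buggy) reduce kept only the last value, so a trailing 0 wins.
    let lastWins = reportedHeights.reduce(CGFloat(0)) { _, next in next }

    // The fixed reduce takes the max, so the one real measurement survives.
    let maxWins = reportedHeights.reduce(CGFloat(0)) { value, next in max(value, next) }

    print(lastWins)  // 0.0
    print(maxWins)   // 320.0
    ```

    With last-value semantics, whether the code works depends entirely on which child happens to be folded last; with max, the order stops mattering.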

  13. February 26, 2023

    Ten years ago today, I made the following Facebook post:

    Rode my bike to work today for the first time in about 10 years. It’s a pretty easy commute to downtown, so I hope to do this more often.

    Fast forward a decade, and “I hope to do this more often” looks like quite the understatement. By 2014, I’d committed to bike commuting. Rain or shine, heat or freezing weather — if I was in the office, I was there by bike.1

    What are some things I’ve learned in ten years of bike commuting?

    1. I am my own worst enemy. While I’ve been hit by cars twice (one super-minor in 2014, one slightly more serious in 2018), my worst crashes have come from my own mistakes. (Those mistakes are almost always biking too fast downhill for the road conditions.)

    2. Bike commuters are the only people who think, “Wow, the weather is great. I’m going to make my commute longer.”

    3. Related to the above, one of the little joys of bike commuting is it’s easy to stop and take photos when the scenery justifies it.

    4. The hardest part is starting. This is still true for me ten years later, and it’s still true no matter how many times I’ve experienced this: When the weather is bad, or I’m tired, or for whatever reason just don’t feel like biking: If I can just get myself on the bike and pedaling, I feel great five minutes later. (I’ve often used this mental trick when I’m feeling too worn out to bike to work. I’ll tell myself, “It’s OK, I’ll just bike to work slowly.” And after five minutes, I’m biking at my normal pace.) The inverse is also often true. There have been times I’ve just not been able to muster the mental energy to bike to work and work from home instead. Often, an hour or two later, I feel bad because I didn’t ride.

    5. I’m a gear junkie and I’ve used bike commuting as an excuse to buy way too many jackets and bags.

    6. Fundamentally, I’m a lazy person. Since 2014, if I’ve gone to the office, I’ve almost certainly gotten there by bike. I’ve lost weight and controlled my cholesterol through the regular exercise. However, for two years of the pandemic, I didn’t have an office to commute to. Nothing stopped me from riding my bike just for exercise during those two years! I tried. But for me, my generic willpower isn’t as strong as this simple thought process: “Well, I need to get to the office today. And the way I get to the office is on my bike. So, I guess I’m biking today.”

    I’m so happy to have an office to commute to again.

    A scene from my commute in the fall of 2016 A scene from my commute in the fall of 2016

    1. One thing I don’t do is bike in the snow — I don’t think the risk of slipping is worth it. This is relevant today because I’d been looking forward to celebrating 10 years of bike commuting with the 2023 Cascade Bicycle Club Chilly Hilly, but we’ve had a rare late-February snowfall.

  14. February 21, 2023

    Here’s a quick little today-I-learned about the new RegexBuilder framework in Swift. I haven’t paid that much attention to it so far, as it requires iOS 16 / macOS 13. It also turns out that I’m pretty good at writing cryptic regular expression syntax, so I haven’t felt a huge need to change.

    However, it turns out the new RegexBuilder can do a cool trick: It can transform a matched substring into some other type for you.

    Recently, I’ve been dusting off my Captain’s Log project. The core of that app is just a text file with a bunch of lines that look like this:

    2023-02-16 📖 Read (20.0 min)

    I parsed that line with this regular expression:

    private let lineRegex = try! NSRegularExpression(pattern: #"^(\d{4}-\d{2}-\d{2}) (.*?)(\(.*\))?$"#, options: [])

    And part of parsing involved transforming data from one type to another. For example, I don’t want to deal with the string 2023-02-16, I want to deal with a Day struct that contains a year/month/day. So in my parsing logic, I have to check to make sure I can build a valid Day from the string, like so:

       guard
         let result = lineRegex.matches(in: line, options: [], range: NSRange(location: 0, length: line.utf16.count)).first,
         // It's not enough to parse the string; it needs to be a valid Day
         let day = Day(line[result.range(at: 1)])
       else { return nil }

    Now I admit that the regex ^(\d{4}-\d{2}-\d{2}) (.*?)(\(.*\))?$ was easier to write than read, and I first wrote it over two years ago, so when I was looking to add some features to the project I also thought I’d try the new RegexBuilder to see if it would make the regular expression easier to read and maintain. And that’s when I discovered a cool trick: RegexBuilder lets you put the string matching and data transformation in one place, where it’s much easier to read and maintain. For example, I now have the following code:

    private enum LogEntryRegex {
      static let day = Regex {
        TryCapture {
          Regex {
            Repeat(count: 4) { One(.digit) }
            "-"
            Repeat(count: 2) { One(.digit) }
            "-"
            Repeat(count: 2) { One(.digit) }
          }
        } transform: { dateString in
          Day(dateString)
        }
      }
    }

    Now, together in one place, I get to say that “a day regex is supposed to parse a string of this particular format and produce a Day struct.” If it can’t make the Day, it doesn’t parse. When I match something against this regex, the resulting output is a Day struct, not a substring.
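    To illustrate the TryCapture-plus-transform trick in a self-contained way, here’s a sketch with a hypothetical Day type standing in for mine. The key property: if the transform returns nil, the whole match fails.

    ```swift
    import Foundation
    import RegexBuilder

    // Hypothetical stand-in for a Day struct: fails to initialize for bad input.
    struct Day: Equatable {
      var year: Int, month: Int, day: Int
      init?<S: StringProtocol>(_ string: S) {
        let parts = string.split(separator: "-").compactMap { Int($0) }
        guard parts.count == 3, (1...12).contains(parts[1]) else { return nil }
        (year, month, day) = (parts[0], parts[1], parts[2])
      }
    }

    // The capture only succeeds if the transform produces a non-nil Day.
    let dayRegex = Regex {
      TryCapture {
        Regex {
          Repeat(count: 4) { One(.digit) }
          "-"
          Repeat(count: 2) { One(.digit) }
          "-"
          Repeat(count: 2) { One(.digit) }
        }
      } transform: { Day($0) }
    }

    let line = "2023-02-16 📖 Read (20.0 min)"
    if let match = line.firstMatch(of: dayRegex) {
      let day = match.output.1  // a Day, not a Substring
      print(day.year)  // 2023
    }
    ```

    Because the transform is part of the capture, string matching and data validation live in one place: a string like “2023-99-16” is rejected at match time rather than producing a half-parsed value.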

    This is definitely something I’ll remember on any projects that do a lot of text processing!

  15. February 19, 2023

    When my kids were in middle school, I spent a lot of time helping them revise their writing assignments for their Humanities class.1 I found this process humbling. It reminded me that many things that I take for granted about writing need to be taught, and I had no idea how to teach them. I turned to the book The Writing Revolution for help. While I wasn’t able to directly use much of the content of that book with my own kids, I really liked how the authors broke down writing into simple steps that could be practiced with guidance. A year ago, I briefly toyed with the idea of developing software to help students write better. Working at Duolingo, I’ve seen how technology can help people learn by encouraging them to practice new skills every day. Perhaps I could build software that used some of the exercises from The Writing Revolution and helped kids with daily practice in writing? However, I couldn’t even figure out where to start! Sure, I could write software that gave kids writing prompts, but I had no idea how to write software that would evaluate if kids did a good job responding to those prompts. How can software understand and evaluate all of the intricacies of language? I quickly shelved my “work on software to improve writing” idea.

    Fast forward to 2023 and the widespread availability of Large Language Models like ChatGPT. Suddenly, “software that understands language” is within reach. Is it possible to use software like ChatGPT to coach students through writing exercises? To test this idea, I tried to see if I could get ChatGPT to walk me through the “because/but/so” exercise from The Writing Revolution. The idea behind “because/but/so” is that students need to practice crafting complex sentences, and this exercise helps students by giving them some structure. The way the exercise works is the teacher starts with a sentence “stem,” such as “The Ancient Egyptians built the pyramids.” (Ideally the sentence stem comes from material that the class is studying.) Then, you get the students to elaborate on the idea in the sentence stem with the words “because/but/so”:

    • The Ancient Egyptians built the pyramids because…
    • The Ancient Egyptians built the pyramids, but…
    • The Ancient Egyptians built the pyramids, so…

    This gives the students practice in writing sentences that connect different ideas.

    I gave ChatGPT the following prompt to start our “because/but/so” session:2

    I want you to act as a teacher who is using the methodology of “The Writing Revolution” to teach middle school students how to write well. You are going to lead the students through the “because / but / so” exercise. You will generate a sentence stem from a topic we have been studying in school and end the stem with one of the words “because”, “but”, or “so”. You will then wait for the student to finish the sentence and give helpful and constructive feedback on how the student performed.

    For example, you could give the student the prompt, “The planets in our solar system orbit the sun because…” If the student responds “The planets in our solar system orbit the sun because gravity keeps the smaller objects in orbit around the larger object,” you could compliment the student on the sentence for showing an understanding of science. If the student writes “The planets in our solar system orbit the sun because the sun is in the center,” you could suggest, in a kind way, that the sentence is repetitive.

    Please generate five prompts suitable for middle school students on the topic of The American Revolution. After each prompt, wait for me to reply and evaluate my sentence before giving the next prompt.

    This (mostly) worked! ChatGPT would give me a prompt and give me feedback on how I did. Here’s an example where I did a good job completing the prompt:

    ChatGPT Great, let’s get started with the prompts on the topic of The American Revolution: The American colonies wanted independence from Great Britain because…

    Me The American colonies wanted independence from Great Britain because they felt it was unfair that they did not have the same rights.

    ChatGPT Great job! Your sentence shows a clear understanding of the American colonies’ desire for independence and the reasons behind it. Keep up the good work!

    ChatGPT does a pretty good job detecting nonsense.

    ChatGPT (prompt) Many colonists were angry about the tax on tea, so…

    Me Many colonists were angry about the tax on tea, so monkeys cannot fly.

    ChatGPT I appreciate your effort, but your sentence doesn’t make sense in relation to the prompt. Remember, the “because/but/so” exercise is intended to help you connect ideas and express your thoughts more clearly. Let’s try again: Many colonists were angry about the tax on tea, so what happened next? Can you finish the sentence with a logical outcome or consequence?

    If I was too vague, ChatGPT told me so.

    ChatGPT (prompt) The American victory at the Battle of Saratoga was significant because…

    Me The American victory at the Battle of Saratoga was significant because the Americans won.

    ChatGPT That’s a good start, but let’s try to expand your sentence by explaining why the American victory at Saratoga was significant. Can you think of any consequences or outcomes of this battle that had an impact on the Revolutionary War? You can use “so” to help connect the ideas.

    At this point I should admit that I haven’t studied the American Revolution in a very long time and remembered nothing about the Battle of Saratoga. ChatGPT can help!

    Me I don’t remember the Battle of Saratoga. Can you help?

    ChatGPT Sure, I’d be happy to help. The Battle of Saratoga was fought in September and October of 1777 during the Revolutionary War. It was a turning point in the war because it was the first major victory for the Americans, and it convinced France to enter the war on the side of the Americans. This support from France was crucial to the ultimate success of the American Revolution. So, with this information in mind, can you think of a sentence that explains why the American victory at Saratoga was significant?

    What if I draw the wrong conclusion from this helpful information?

    Me The American victory at the Battle of Saratoga was significant because it convinced the French to help the British.

    ChatGPT Actually, the opposite is true. The American victory at Saratoga convinced the French to enter the war on the side of the Americans, which helped turn the tide of the war in the Americans’ favor.

    This session wasn’t perfect. There was one prompt where I thought I was being too vague but ChatGPT thought I did a good enough job. (Who knows? Maybe it would have been a good enough job for a middle school student.) And after I drew the wrong conclusion about the significance of the Battle of Saratoga, ChatGPT seemed to lose patience with me and it completed the sentence on my behalf. Here’s ChatGPT’s full response to my last “the French helped the British” sentence:

    ChatGPT Actually, the opposite is true. The American victory at Saratoga convinced the French to enter the war on the side of the Americans, which helped turn the tide of the war in the Americans’ favor. So, let’s try again:

    ChatGPT The American victory at the Battle of Saratoga was significant because it convinced the French to enter the war on the side of the Americans, which helped the Americans ultimately win the war.

    ChatGPT Great job! You’ve shown an understanding of the significance of the Battle of Saratoga and its impact on the outcome of the Revolutionary War.

    At the end of the five prompts, I asked ChatGPT for feedback. It sounds like a helpful coach:

    Me As a teacher, how would you evaluate my effort as a student? What have I done well and where should I work to improve?

    ChatGPT Overall, I think you did a great job! You demonstrated a good understanding of the prompts and were able to generate sentences using the “because/but/so” exercise. You also took feedback constructively and adjusted your responses accordingly, which is a sign of a good learner.

    One area where you could improve is in providing more specific and accurate information when responding to the prompts. For example, in prompt #5, you initially responded with a simple sentence stating that the Americans won, but you were not aware of the significance of the Battle of Saratoga. That’s understandable, as we all have areas where we need to improve, but it’s important to take the time to learn and research in order to provide accurate and complete responses.

    Overall, though, you did a great job and I’m confident that with continued practice and effort, you will become an excellent writer!

    While this session wasn’t perfect, it was amazingly good. One year ago, I thought it was impossible to create software that could coach a human through a writing exercise. Today, not only is it possible, it was only about 20 minutes of work figuring out how to craft the right ChatGPT prompt. While some worry that ChatGPT means “the end of High School English”, I’m excited: I think we are at the beginning of a new era of software that can turbocharge students’ ability to learn to write.

    1. When I was at William & Mary, I worked in their Writing Resources Center. This gave me a lot of practice walking the fine line between “coaching people to write better” while still letting their work be their own. (It was much harder coaching middle school kids!)
    2. It took me two attempts to get a prompt that worked the way I wanted. The first time, ChatGPT role-played both the teacher and the student… it didn’t wait for me to respond to its prompts. This experience gives me more empathy for the emerging discipline of prompt engineering.

  16. January 16, 2023

    Today I haven’t learned something: I haven’t learned a good strategy to write a better rich-text-editing experience for Library Notes and other projects of mine.

    Doing syntax-highlighting-as-you-type for plain text content has a lot of advantages, but it is pretty geeky. I’m trying to figure out what’s involved with building a full-on rich text editor instead: One where you make text bold by applying a “bold” attribute to existing text rather than adding **bold** delimiters to the text.

    Where I’m currently stuck is figuring out how I’m supposed to handle lists. What I want are lists that behave like best-in-class rich text editors:

    • Bulleted and numbered lists have a hanging indent
    • The text input caret doesn’t ever land in the list delimiter. In other words, if the caret is at the start of one element in the list and you try to move the caret to the left, the caret will move to the last character in the previous list entry rather than somewhere in the list delimiter.
    • I want this to work in UIKit (UITextView instead of NSTextView).

    Things I’ve investigated:

    • Use an NSTextList on the NSParagraphStyle attribute for an attributed string. However, this doesn’t render anything on a UITextView that is using TextKit 1. For a UITextView that uses TextKit 2, it will render list delimiters as expected. However, the experience of moving the caret through the document becomes really buggy. TextKit 2 currently has a reputation of being buggy, so I don’t want to waste more time here.
    • I’ve tried overriding UITextInput methods, like position(from:offset:) to see if I can “skip” positions that would land inside the list delimiters. However, while this method gets called as I expect (on Catalyst, anyway), it’s not preventing the caret from landing inside the list delimiter.
    • Do all of this in the layout manager, somehow. I currently have a custom layout manager in Library Notes to handle rendering quotes, but I’m not sure how I can use a layout manager to display list delimiters when they’re not present in the text.

    There’s remarkably little info about how to make lists work in UITextView on the internet. So, time to do some Rubber Duck Debugging and then take a break.

  17. January 06, 2023

    As you might tell from Library Notes, reading is one of the great pleasures of my life. As a techno-optimist, one of the great disappointments of my life is that the computer revolution has not done more to improve readers’ lives.

    That’s why I’m excited to give Readwise Reader a try. I love the Readwise team’s mission: “Improve the practice of reading through software by an order of magnitude.” Their flagship product, Readwise, helps you manage the highlights and annotations you’ve made in ebooks. I’d probably be a big user of this service if I hadn’t already built Library Notes. (I tried Readwise and loved how seamless the integration with my Kindle library was. If you’re a heavy Kindle user, I recommend giving Readwise a try. However, I’m going to stick with Library Notes because I want to make sure that all of my notes stay on my own computers, forever.)

    Readwise Reader goes a step further than Readwise: It’s a complete digital reading experience that integrates content and annotations. Some things I love about what they’ve done:

    • Their reader handles all “modern” content: Blog posts, twitter threads, PDFs, newsletters, and epubs. (Sorry, Kindle.) Even YouTube?! I haven’t tried their YouTube integration and I can’t envision what that’s like, but it’s certainly true that there’s a ton of interesting educational content on YouTube these days.
    • They have all of the great annotation tools from Readwise integrated into their reading experience.
    • They’ve added science-fictiony features like “Ghostreader,” which uses GPT-3 to help you do things like summarize passages and generate flashcard content.

    Speaking of GPT-3, I’ve been meaning to learn how the new wave of generative AI products work. When I came across “Transformers from Scratch”, I thought it would be a great testing ground for Reader. Here are my impressions of Reader after using it for this initial article:

    • The app’s reading experience does what it needs to do: Gets out of the way and lets me focus on content.
    • On the iPad at least, where I did my reading, the highlighting experience was a bit finicky. Often the highlight wouldn’t start on the precise word I intended, and I couldn’t find handles to adjust the highlighted range. I had to delete the highlight and start again.
    • I loved being able to follow links in the article I was reading to original PDFs and other helpful tutorials and add those to my reading list. This is a great feature for doing research. My “learn AI” reading list is already growing.
    • Ghostreader feels like it could be really useful. When I was reading “Transformers from Scratch,” I came to this section, and it seemed like important information for my brain to really internalize. I asked Ghostreader to generate a flashcard for the section, and it produced the following:

    Q: What are three practical considerations when implementing transformers?

    A: 1. Computers are especially good at matrix multiplications. 2. Each step needs to be differentiable. 3. The gradient needs to be smooth and well conditioned.

    Not bad!

    I plan to keep using Readwise Reader as I try to teach myself more about modern AI. I think the Readwise team is building a great tool to help readers’ lives.

  18. December 04, 2022

    To help me learn SwiftUI, I develop Permanent Marker primarily with that framework.

    One of the first problems I had to solve: How do I handle loading / editing / saving files in SwiftUI? Here are the constraints I had:

    • Loading and saving files are async operations.
    • I don’t want to save on every keystroke. Instead, I want to autosave at periodic intervals.
    • However, when I’m done editing a file, I want to save any outstanding changes right away (rather than waiting for the autosave timer).

    I’ve put together a sample app that shows the main parts of my solution. The core idea is a class I call FileBuffer. A FileBuffer manages:

    • The in-memory copy of the file contents
    • A flag isLoading that is true if the in-memory copy of the file has not yet been loaded from disk.
    • A flag isDirty that is true if the in-memory copy of the file contents have changed, and therefore needs to be saved back to disk.
    • FileBuffer manages autosaving dirty file contents at periodic intervals…
    • …while also exposing a save() method that saves the file contents right now.

    Here are the key parts of FileBuffer. First, note its declaration: this is a @MainActor ObservableObject because its primary job is to communicate “truth” to UI elements.

    final class FileBuffer: ObservableObject, Identifiable {
      // ...
    }

    Each FileBuffer publishes three properties, only one of which (text) is settable. The isDirty and isLoading properties change as side effects of other operations inside of FileBuffer.

      /// The in-memory copy of the file.
      /// This is a computed property! More details later.
      var text: String { get set }
      /// If true, this buffer contains changes that have not yet been saved.
      @Published private(set) var isDirty = false
      /// If true, the contents of the buffer have not yet been read from disk
      @Published private(set) var isLoading = true

    When you first create a FileBuffer, isLoading starts as true. Once the contents of the file have been loaded from disk, isLoading becomes false and remains false for the remainder of the lifetime of the FileBuffer.

    isDirty becomes true any time you make a change to text, and stays true until those changes have been saved to disk.

    Speaking of text, let’s take a look at how that is implemented:

      /// The actual file contents. The stored property is private and is exposed through the computed property ``text``
      private var _text = ""

      /// Gets/sets the in-memory copy of the file contents.
      /// Setting the in-memory copy of the file contents sets ``isDirty`` to `true` and makes sure that autosave will run some time in the future.
      var text: String {
        get {
          assert(!isLoading, "Shouldn't read the value of `text` until it is loaded.")
          return _text
        }
        set {
          assert(!isLoading, "Shouldn't write the value of `text` until it is loaded.")
          _text = newValue
          isDirty = true
          createAutosaveTaskIfNeeded()
        }
      }

    Basically, the computed property text is responsible for three things:

    1. Validity checking: You shouldn’t be accessing text until the file contents have been loaded.
    2. Maintaining isDirty: Any time you change text, isDirty needs to get set to true.
    3. Ensuring that autosave will run after changes get made to text.

    What is the “autosave task”? It’s an example of a technique I’ve been using in my apps that support Swift Structured Concurrency — to my brain, it’s the most natural way to say, “Run a function exactly once at some point in the future.” Here’s what that code looks like:

      private(set) var autosaveTask: Task<Void, Never>?

      /// Creates an autosave task, if needed.
      /// The autosave task will save the contents of the buffer at a point in the future.
      /// This lets you batch up saves versus trying to save on each keystroke.
      private func createAutosaveTaskIfNeeded() {
        guard autosaveTask == nil else { return }
        autosaveTask = Task {
          try? await Task.sleep(until: .now + .seconds(5), clock: .continuous)
          try? await save()
          autosaveTask = nil
        }
      }

    Here’s how it works.

    • The private autosaveTask property serves as a flag to know if autosave has been scheduled to run in the future. If it’s nil, then there’s no autosave; if it’s non-nil, the autosave will run. While I don’t take advantage of this here, in this pattern I use a Task? instead of a Bool for this flag so you can write something like _ = await autosaveTask?.value to wait until the current task completes.
    • The first thing the autosave task does is sleep for some duration. I picked a fairly long one in this test code to make it easier to see delays.
    • After waiting, the task runs save() and clears the autosave task.

    The final outcome of this work: As you type away in a document, repeatedly setting the text property and changing the in-memory copy of the file, the first change will create an autosave task. Subsequent changes within the autosave window will see that the task exists, so won’t create a new task. Finally, after the delay, the FileBuffer will save its contents to disk. The next change that happens to text will create a new autosave task.
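    The coalescing behavior can be demonstrated in a condensed, self-contained sketch of the same pattern (hypothetical names, an in-memory counter standing in for the file system, and a 50ms delay instead of 5 seconds):

    ```swift
    import Foundation

    // A condensed sketch of the autosave-task pattern: many rapid edits
    // within the autosave window produce exactly one save.
    actor SketchBuffer {
      private(set) var saveCount = 0
      private var autosaveTask: Task<Void, Never>?

      func textDidChange() {
        guard autosaveTask == nil else { return }  // an autosave is already scheduled
        autosaveTask = Task.detached { [weak self] in
          try? await Task.sleep(nanoseconds: 50_000_000)  // the autosave window
          await self?.performSave()
        }
      }

      func performSave() {
        saveCount += 1
        autosaveTask = nil
      }
    }

    let buffer = SketchBuffer()
    for _ in 0..<100 { await buffer.textDidChange() }  // 100 edits in quick succession
    try? await Task.sleep(nanoseconds: 200_000_000)    // wait out the autosave window
    print(await buffer.saveCount)
    ```

    The first edit schedules the task; the other 99 see the non-nil task and return immediately, so all 100 changes are batched into a single save.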

    save() is an interesting method. I got it wrong two times while working on this sample. This was my first attempt:

      func save() async throws {
        guard isDirty else { return }
        try await FakeFileSystem.shared.saveFile(_text, filename: filename)
        isDirty = false
      }

    Simple and elegant! If isDirty is false, there are no changes to save. Otherwise, save the changes and set isDirty to false. It turns out this code is also buggy. There is a race condition. Can you see it? (As an aside, I still haven’t fully internalized “running code on a single actor does not mean there are no race conditions.” I keep making mistakes like this.)

    Here’s the race condition:

    1. Change text to some value, like “version 1.” This sets isDirty to true.
    2. Call save(). You see isDirty is true, so you continue.
    3. You get to the point where you await saveFile(), and this operation suspends until the save completes.
    4. (This is the part I always forget can happen.) While waiting for the operation in Step 3 above to complete, change text to some new value, like “version 2.” This sets isDirty to true.
    5. The operation in Step 3 completes, and you resume executing save() after the await statement, setting isDirty to false. This is the bug. The value of text is “version 2”, and this hasn’t been saved to disk yet, so isDirty should be true. Since we set it to false, we’ll never save the string “version 2” to disk (unless something comes along and makes another change).
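    The interleaving above can be reproduced deterministically in a small, framework-free sketch (hypothetical names; a stored continuation stands in for the slow disk write so we can pause the save mid-flight):

    ```swift
    import Foundation

    // A deterministic reproduction of the race: the save suspends until we
    // explicitly let it finish, so we can edit the buffer mid-save.
    actor RacyBuffer {
      private(set) var text = ""
      private(set) var isDirty = false
      private(set) var savedText: String?
      private var pendingSave: CheckedContinuation<Void, Never>?

      func setText(_ newValue: String) {
        text = newValue
        isDirty = true
      }

      // The buggy save: clears `isDirty` only after the suspension point.
      func buggySave() async {
        guard isDirty else { return }
        let snapshot = text
        await withCheckedContinuation { pendingSave = $0 }  // "disk I/O" suspends here
        savedText = snapshot
        isDirty = false  // BUG: clobbers the dirty flag set during the await
      }

      var isSaving: Bool { pendingSave != nil }
      func finishSave() {
        pendingSave?.resume()
        pendingSave = nil
      }
    }

    let buffer = RacyBuffer()
    await buffer.setText("version 1")
    let saveTask = Task { await buffer.buggySave() }
    while !(await buffer.isSaving) { await Task.yield() }  // wait until the save suspends
    await buffer.setText("version 2")                      // the edit arrives mid-save
    await buffer.finishSave()
    await saveTask.value
    // "version 2" was never written, yet isDirty is now false: the edit is lost.
    print(await buffer.isDirty)
    print(await buffer.savedText ?? "nil")
    ```

    At the end, savedText is “version 1” while text is “version 2”, and isDirty is false, so nothing will ever save the newer contents.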

    This was my first attempt to fix the race condition:

      func save() async throws {
        guard isDirty else { return }
        isDirty = false
        try await FakeFileSystem.shared.saveFile(_text, filename: filename)
      }

    This code looks wrong to me. “Surely,” my brain says, “you don’t want to set isDirty to false until you’ve saved the file?” However, waiting until the save finishes opens the door to the race condition described above. Setting isDirty = false before saving means that, when the code suspends in the await statement, any future changes to text will properly set isDirty back to true and we won’t overwrite that when we resume from the await. It fixes the race. However, this code creates a new bug. What happens if the saveFile() call fails? We’ve set isDirty = false, but we didn’t actually save the contents to disk, so isDirty should be true at the end of the function.

    This leads to my third and hopefully final version of this function:

      func save() async throws {
        guard isDirty else { return }
        isDirty = false
        do {
          try await FakeFileSystem.shared.saveFile(_text, filename: filename)
        } catch {
          // If there was an error, we need to reset `isDirty`
          isDirty = true
          throw error
        }
      }

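    The same clear-state-before-you-await pattern applies to any actor-isolated operation that suspends. Here is a minimal, self-contained sketch of the hazard and the fix (all names here are invented for illustration, not code from my app):

    ```swift
    // An actor's methods are free to interleave at every `await`, so state
    // consumed by an async operation should be snapshotted and cleared
    // *before* suspending, just like `isDirty` above.
    actor LogBuffer {
      private var pending: [String] = []

      func append(_ line: String) {
        pending.append(line)
      }

      func flush() async {
        // Snapshot and clear before the suspension point. Any `append`
        // that runs while we're suspended lands in a fresh `pending`
        // array and survives for the next flush.
        let batch = pending
        pending.removeAll()
        try? await Task.sleep(nanoseconds: 10_000_000) // simulate slow I/O
        print("wrote \(batch.count) lines")
      }
    }
    ```
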
    At this point, FileBuffer contains enough logic to connect files to SwiftUI. Here is an example of how to use a FileBuffer:

    /// Creates a `TextEditor` that can edit the contents of a `FileBuffer`
    struct FileEditor: View {
      @ObservedObject var buffer: FileBuffer
      var body: some View {
        Group {
          // (1)
          if buffer.isLoading {
            // Nothing to edit until the file finishes loading
          } else {
            // (2)
            TextEditor(text: $buffer.text)
          }
        }
        .navigationTitle((buffer.isDirty ? "• " : "") + buffer.filename)
        // (3)
        .onDisappear {
          Task {
            try? await buffer.save()
          }
        }
        // (4)
        .id(buffer.filename)
      }
    }

    A quick guide to understanding this code:

    1. Remember to check the isLoading property on the buffer so you don’t attempt to read or write invalid contents!

    2. If you know the buffer has loaded, you can get a binding to the in-memory copy of the file with $buffer.text. Making changes through this binding will create an auto-save task that will ensure the changes get written at some later point in time.

    3. However, when we are done with this view, we want to save its contents immediately, rather than waiting for the auto-save task to run.

    4. If you forget the .id(buffer.filename) line, then the .onDisappear block might not run! Without this line, switching from one file to another could reuse the same FileEditor instance. An instance doesn’t “disappear” if it’s reused. The .id(buffer.filename) causes SwiftUI to treat FileEditors for different files as different View instances, which means .onDisappear will run.

      Incidentally, this is one of those SwiftUI cases where the order of modifiers matters. The code above works. This code doesn’t:

      .id(buffer.filename)
      .onDisappear {
        Task {
          try? await buffer.save()
        }
      }

      This is another one of those things I often get wrong! My mental model is that all of the view modifiers are setting properties on some object, whereas what really happens is each view modifier creates a new View with a new property. In the broken code above, the .id modifier creates a new View with the id property set, and then the .onDisappear modifier creates yet another new View with an onDisappear block. That “onDisappear” view doesn’t have an id property tied to the filename, so the “onDisappear” View doesn’t actually disappear when the filename changes, so the “onDisappear” block doesn’t run. (At least I think this is what’s happening. I don’t know if my SwiftUI mental model is the best.)

    I’m not sure this is the best way to work with files in SwiftUI, but it works for me. As you can see, there are some surprisingly tricky issues to work through. I hope this writeup helps others who are working on editing files in SwiftUI!

    (A sample working SwiftUI app with all of the code referenced here is available at

  19. November 28, 2022

    In my last post, I hinted I’ve started a new project that I’m calling Permanent Marker. What’s it all about?

    The main idea of Permanent Marker is to bring git from the world of writing code to the world of writing English. As programmers, we rely on git for version control. This is important for writing English, too! The most terrifying thing that can happen when writing is losing your work. Permanent Marker aims to be a writing environment where this fear doesn’t exist: It uses git under the covers to maintain a version history of all of your writing.

    The main data structure for Permanent Marker is “a bunch of Markdown files in a git repository.” There isn’t anything particularly innovative about that. For years, tools like VSCode (for programming) and Ulysses (for English) have provided excellent experiences working with collections of files in a directory hierarchy. What I’m exploring with Permanent Marker is how to integrate git into the writing workflow, as opposed to the programming workflow.

    Here are the main ideas I’m playing with for bringing git to writing:

    • All writing starts in an unpublished state. I represent this as a “scratch” branch created in the git repository holding the content.
    • As you edit a file, it’s automatically saved every 5 seconds and committed to the scratch branch. To prevent an explosion of tiny commits, I will amend the previous commit in the repository if it’s to the same file and the commit isn’t “too old”. The goal is that an author, without thinking, will be able to keep distinct versions every N minutes of every file she works on.
    • When a file is sufficiently “done”, you can either publish it (which moves it directly from the scratch branch to the main branch of the repository) or open it for feedback. Whereas all unpublished files live in the same “scratch” branch, I create a different branch for each file in the “feedback” state so reviewers just see the changes to the file that needs feedback.

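    The amend-or-commit decision in the second bullet can be sketched in a few lines of shell. This is just an illustration of the idea, not Permanent Marker’s actual implementation (which drives git from Swift):

    ```shell
    # Auto-save sketch: commit the file, but fold the commit into the
    # previous one when the previous commit was an auto-save of the same
    # file and is less than five minutes old.
    autosave() {
      file="$1"
      git add -- "$file"
      last_msg=$(git log -1 --pretty=%s 2>/dev/null || echo "")
      last_time=$(git log -1 --pretty=%ct 2>/dev/null || echo 0)
      age=$(( $(date +%s) - last_time ))
      if [ "$last_msg" = "autosave: $file" ] && [ "$age" -lt 300 ]; then
        git commit --quiet --amend --no-edit   # fold into the previous commit
      else
        git commit --quiet -m "autosave: $file"
      fi
    }
    ```
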
    By far, this work of thinking through how to model writing workflows in git has been the most interesting part of this project. I’ve gained more than just an in-depth understanding of git from this project, too. Like any self-respecting developer, I maintain this website as a “collection of markdown files in a git repository” that’s published with a static site generator. Permanent Marker, even in its rough state, helps me edit and maintain this site.

  20. November 23, 2022

    This fall, I’ve been interviewing a lot of college students for internships at Duolingo. (We’re hiring!) A repeated question I’ve gotten from these students, as the interview wraps up, is if I have any advice for them as they look to start their career in tech. (Maybe it’s my gray hair that suggests I’ve got a valuable perspective here.)

    Each time I’ve given the same answer: Don’t expect learning to stop when you leave school. This industry is constantly reinventing itself. The technology you’ll be working on in 15 years probably doesn’t exist yet. To build a long career in tech, you need to be comfortable constantly learning new things.

    Personal programming projects have been my most effective way to keep learning in the decades since I left the University of Washington. My laptop is littered with projects I’ve started and mostly abandoned. Only one, Library Notes, has grown to the point where another human can use it. All of the other projects linger, unknown and unsung, as dusty Git repositories. I accumulate them like hidden scars. Only I know or care they exist. Yet I’ve been able to build a 24 year career with the experience these projects gave me. Some notable examples:

    • As a young Microsoft program manager, I wrote a stress test for Transactional NTFS. My hands-on experience helped me be a better program manager for that project.
    • When the iPad was released in 2010, I bought my first Mac, learned Objective-C, and wrote a simple app that turned the iPad into a digital picture frame that downloaded photos from Flickr. (Remember that site?) This project gave me the experience to pivot from “Microsoft program manager” to “iOS developer.”
    • When I was working at Facebook, I learned about spaced repetition and wrote apps for my kids, who were then in middle school, to help them with spelling tests and Spanish vocabulary. This experience with educational technology led me to Duolingo.

    I’m writing this post using my latest personal project: An iOS/Mac app I’m calling Permanent Marker. Permanent Marker is a simple writing app for editing plain text files stored in a Git repository. The project makes it a tiny bit easier for me to update content on this website. But more importantly: it makes it much easier for me to learn SwiftUI and how to programmatically work with Git.

    College is long behind me, but learning never ends.

  21. August 30, 2022

    Nevermind! The previous bug about clipping images in a SwiftUI-optimized-for-Mac toolbar is fixed in macOS Ventura Beta 6.

    I thought I had been up-to-date on my Beta builds when I wrote the prior post. Here’s how I discovered I wasn’t:

    1. I wanted to see if this problem reproduces on macOS Monterey. When I did this, I discovered that on Monterey, there’s no attempt to create a Mac-style toolbar at all from SwiftUI Catalyst. Note this is different from the .borderedProminent bug I wrote about yesterday — that bug also happens on Monterey.
    2. Filing a Radar on bugs in new functionality seems more valuable than filing a Radar on bugs that Apple already decided are OK to ship. So, I prepared to file a Radar…
    3. And that’s what made me think, “Let’s make sure I’m up-to-date first.” And I wasn’t. (In my defense, System Settings said I was up-to-date, but it also said it hadn’t checked in over a week. I saw on the Downloads page that there was a new version after Settings had checked. I forget what I had to jiggle to get System Settings to properly refresh.)

    Anyway, it’s awesome seeing bugs get fixed in Beta builds. Feels like receiving a gift.

  22. August 29, 2022

    I’ve had so much fun creating a Mac version of Library Notes in Catalyst that I’ve started a couple of other multi-platform projects. Along the way I’ve encountered some bugs and come up with at least one workaround that I will now share with you, Gentle Reader.

    Developing for the Mac: So many choices

    If you’re an iOS engineer, like me, venturing into Mac land for the first time, be aware that there are at least three ways to make this journey without going full AppKit.

    1. SwiftUI targeting the Mac SDK: In this mode, you write SwiftUI code, and under the hood that SwiftUI code will create native Mac (AppKit) controls. This route works if your UI is 100% SwiftUI. You don’t get the escape route of creating a UIViewRepresentable to manage a UIView with SwiftUI, because your app doesn’t have access to UIKit at all.
    2. SwiftUI targeting Mac Catalyst, “optimize for Mac” mode: In this mode, you’re writing SwiftUI, but under the hood your app is using the iOS SDK and will use Mac Catalyst to run on the Mac. Catalyst will try to make your UI controls look more Mac-like.
    3. SwiftUI targeting Mac Catalyst, “scaled to match iPad” mode: In this mode, you’re writing SwiftUI, it uses the iOS SDK, it uses Mac Catalyst to run on the mac, but the controls will look like iOS controls. (This is most noticeable with buttons and navigation bars.)
    4. UIKit, not SwiftUI, with the different Catalyst modes: Replay options (2) and (3) above, but this time substitute “UIKit” for “SwiftUI.”

    I’ve outlined the different modes because many of the issues I’ve run into only affect one of them: SwiftUI code that uses Mac Catalyst to run on the Mac in “optimize for Mac” mode. You’d think that this would be the easiest way for an iOS Engineer to write apps that look like native Mac apps, but beware these sharp edges.

    Bug: .borderedProminent doesn’t work in “optimize for Mac”

    Mac “push buttons” have borders. According to the Human Interface Guidelines, you should use a filled button for the primary action in a view. SwiftUI provides an easy way to get this: Apply the .borderedProminent style.

    The problem? This works for SwiftUI-targeting-iOS apps, and SwiftUI-targeting-Mac apps, and SwiftUI-targeting-Catalyst-in-iOS-mode apps, but not SwiftUI-targeting-Catalyst-in-optimized-mode apps. For just that mode, the button doesn’t get filled in.

    This seems to be a SwiftUI bug and not a Catalyst bug. If you write UIKit code, and create a UIButton and use UIButton.Configuration.borderedProminent() to create a button configuration, you’ll get a button that shows up in your “optimize for Mac” Catalyst app appropriately filled in.

    That, then, is the workaround for this bug. If you’re writing a Mac app using Mac Catalyst and choose “optimize for Mac” for your UI mode, you cannot use the SwiftUI Button View for any button you want to display in the prominent “filled” style. Instead, you need to use UIViewRepresentable to create a UIButton and explicitly give it the UIButton.Configuration.borderedProminent() configuration.

    Bug: Toolbars clip their toolbar buttons

    Update: This was a bug in macOS Ventura that was fixed in Beta 6.

    This is another one that appears to be SwiftUI + Mac Catalyst + “optimize for Mac” specific. In this mode, the toolbar clips its buttons. Instead of this:

    A sample app with a proper Mac toolbar

    you get this:

    A sample app with a broken Mac toolbar

    Note that the top & bottom of the toolbar icon are clipped.

    While I haven’t coded this yet, I suspect the answer is going to be the same as above: Use UIKit to manage your toolbars if you want a Mac-style toolbar in your Catalyst app.

  23. August 21, 2022

    Today I learned that a UISplitViewController behaves differently in a Mac Catalyst app when it is the root view controller of a window versus when it is contained inside another view controller. If you want Mac-style toolbar behaviors, make it the root of the window.

    Library Notes uses a UISplitViewController for its main screen. When I first wrote this app, I was deep into the “composition instead of inheritance” philosophy, and I used view controller containment to avoid subclassing UISplitViewController. I created a class called NotebookViewController to manage the UISplitViewController. NotebookViewController creates the split view controller & adds it as a child view controller that completely fills its view.

    Running on iOS, it looks like a normal UISplitViewController filling the whole screen.

    Now that I’m working on porting Library Notes to the Mac using Catalyst, though, I noticed something: Even when I tell Xcode that I want to optimize for the Mac interface, I still get iOS-style bar button items displayed in the navigation bar instead of Mac-style buttons displayed in the toolbar. My UI looked like this:

    UISplitViewController with iOS-style buttons

    I suspected that Mac Catalyst did something different if the window’s rootViewController is a UISplitViewController, so I rewrote NotebookViewController to be a UISplitViewController rather than contain a UISplitViewController. Sure enough, after that simple change, my UI looked like this:

    UISplitViewController with a Mac-style toolbar

    In retrospect, this makes perfect sense. Mac Catalyst will adapt iOS components to a Mac interface based upon what those components are. Something to keep in mind if, like me, you are an iOS developer venturing into Mac-land for the first time.
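
    For anyone hitting the same thing, the change amounts to this kind of scene setup (a sketch only; `SidebarViewController` and `DetailViewController` are placeholder names, not classes from Library Notes):

    ```swift
    import UIKit

    // Sketch: in the scene delegate, make the split view controller itself
    // the window's root instead of burying it inside a container controller.
    final class SceneDelegate: UIResponder, UIWindowSceneDelegate {
      var window: UIWindow?

      func scene(_ scene: UIScene,
                 willConnectTo session: UISceneSession,
                 options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene else { return }
        let splitViewController = UISplitViewController(style: .doubleColumn)
        splitViewController.setViewController(SidebarViewController(), for: .primary)
        splitViewController.setViewController(DetailViewController(), for: .secondary)
        let window = UIWindow(windowScene: windowScene)
        window.rootViewController = splitViewController // the root, not a child
        window.makeKeyAndVisible()
        self.window = window
      }
    }
    ```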

  24. August 14, 2022

    tl;dr: If you’re trying to use UIDocumentPickerViewController or UIDocumentBrowserViewController from a Mac Catalyst app and always get stopped in the debugger with the message “this class is not key value coding-compliant for the key cell”, just temporarily disable breakpoints and continue. Everything will work.

    I’m embarrassed how much time I lost on this problem.

    When I first worked on getting Library Notes ready to submit to the App Store, I figured I should also try this new-fangled “Mac Catalyst” technology and get a version of the app that runs on the Mac, too.

    In my notes at the time, I wrote:

    Update Jan 14, 2021 — The Catalyst app crashes on launch with an error about an NSView not being key-coding compliant for “cell”, and I have no idea how to debug further. So, I’m just going to ignore making a Mac app for now.

    I didn’t pursue the Mac version of this project because, at the time, it worked just as well for me as an iPad / iPhone app.

    Fast forward 18 months, and I’m getting ready to go on a long series of back-to-back business and personal trips, and I didn’t want to bring both my iPad Pro and Mac. Suddenly it really bothered me that I didn’t have a Mac version of Library Notes that I could use to continue to update my reading notes while on my trip. There was also a brand-new version of Xcode, and a lot of hubbub about “desktop-class iPad apps” at WWDC. Surely, this problem about an NSView not being key-coding compliant for “cell” is fixed with the new developer tools, right?


    I fire up the Xcode project, set it to target Mac Catalyst, hit Run in Xcode, and almost immediately see the error message:

    Thread 1: “[<NSView 0x14363e4f0> valueForUndefinedKey:]: this class is not key value coding-compliant for the key cell.”

    The frustrating thing was that this error doesn’t originate in my code. As near as I can tell, it comes from using UIDocumentBrowserViewController in a Catalyst app on a Mac with a Touch Bar. I only found one other person on Twitter who had this problem. Google searches turned up nothing. I tried:

    • Refactoring my code to use UIDocumentPickerViewController instead of UIDocumentBrowserViewController. Same problem as soon as I bring up the picker.
    • I created a custom UIDocumentPickerViewController subclass and used that instead. I tried manipulating every Touch Bar hook in UIViewController to see if I could make the problem go away. No luck.
    • Creating a new sample app that just brought up a UIDocumentBrowserViewController. This one worked, so I systematically started looking for differences between Library Notes and the test app. I changed random Info.plist properties, target SDK versions; really, anything I could think of. This took about a day.
    • Finally, I decided to run my broken Catalyst app in Instruments to see if there’s any code running at app start that I’ve forgotten about. “Maybe,” I thought to myself, “something running at app start is putting things in a funny state?” Imagine my shock when the app worked when connected to Instruments.
    • This is what led me to discover that everything works when I run the Release build. Something related to optimizations?
    • No! Even dumber. Everything works if I don’t have breakpoints enabled when running the app in the debugger. It’s not enough to just hit “continue” when you run into this problem. It will just happen again. However, if you get this message in the debugger, just disable breakpoints then click Continue. The document picker will work.

    This one problem set back the Mac version of Library Notes by over a year. (Facepalm.)

  25. July 23, 2022

    Recently I’ve been exploring using “files in a Git repository” as the main storage for iOS and Mac apps. I’ve got two little projects using this.

    The key technology that enables this approach is libgit2, which is a C language implementation of the core git methods. There are at least two popular ways to use libgit2 from iOS / Mac. The first, ObjectiveGit, is Objective-C bindings to the C API. The second, SwiftGit2, is a set of Swift bindings.

    Being me, I wound up going with neither of these libraries. Things that made me shy away:

    1. Neither project uses Swift Package Manager, which I use exclusively in my personal projects.
    2. Neither project has been updated recently. SwiftGit2 links against version 1.1 of libgit2 (the library is now, at the time of this writing, at version 1.5). ObjectiveGit is worse, linking against version 0.28.1!

    So I’ve approached git integration to Swift iOS/Mac apps from first principles and created two projects:

    1. static-libgit2 is a Swift package that exposes the libgit2 C API through the Clibgit2 module. This project is a modification of LibGit2-On-iOS and follows the same basic strategy:

      1. Use build scripts to build libgit2 and its dependencies (libssh2, openssl) and create a single xcframework for all of the necessary SDK and architecture variations.
      2. Create a Package.swift file to let projects include the xcframework through Swift Package Manager.

      static-libgit2 is pretty stable, and if you want to just use the C APIs in a Swift app, it gives you what you want. It’s ready for public consumption now.

      import Clibgit2
      import SwiftUI

      struct ContentView: View {
          var body: some View {
              // (Body reconstructed for illustration: read the version of
              // the linked libgit2 through the C API.)
              var major: Int32 = 0, minor: Int32 = 0, rev: Int32 = 0
              git_libgit2_version(&major, &minor, &rev)
              return Text("libgit2 \(major).\(minor).\(rev)")
          }
      }

    2. AsyncSwiftGit is much more experimental and much less stable. It’s my attempt to write Swift wrappers around the C API that uses the new concurrency features of Swift 5.5. For example, instead of passing in C-compatible callback functions when fetching changes from a remote repository, I can write:

      for try await progress in repository.fetchProgressStream(remote: "origin", credentials: credentials) {
          // do something with `progress` here
      }

      This is “more experimental and less stable” because I’m still figuring out the right way to use Swift concurrency, the best way to design wrappers around a C API, etc. This one is not yet ready for public consumption.

    Overall, though, I’ve been really impressed with how fun and reliable it is to use git as the main storage system for personal programming projects! I predict I’ll be using it more and more.

  26. October 22, 2021

    …the lesson is clear: a central challenge to improving the way we learn is finding a way to interrupt the process of forgetting.

    (Make it Stick: Brown & Roediger)

    I started my love affair with books in elementary school. By the time I entered high school, though, I noticed I was forgetting most of what I’d read. Sometimes I couldn’t even remember that I had read a book at all, much less remember what the book was about. This started a mild obsession with figuring out ways to remember more of what I’ve read. I wrote about books in my journal; I kept a running bibliography of books for a few years; I cataloged and reviewed books on LibraryThing.

    My Library Notes app is my current system to help me remember what I’ve read. While the app has only been on the App Store for a few weeks, I’ve been using this app for about three years. I’ve noticed that I use Library Notes differently for different kinds of books. Basically, there are four “levels” to how involved I am with a book, and I can use Library Notes for all four levels:

    1. For hundreds of books, I just use Library Notes as a book cataloging app. I just want a record that the book is in (or has been in) my personal library. (Alas, I buy more books than I read!) Title, author, cover image: That’s all I want. Because Library Notes can scan the book’s ISBN barcode and look up bibliographic information & cover images from the Internet, the cataloging process is fairly streamlined.
    2. For a lot of fiction books I read for fun, I add a little more information: A quick blurb about the book and a star rating. There’s no formula for how much I write about each book, but recently I’ve been happy with the following pattern: I write the names of the main characters and the rough plot arc. I’ll probably forget everything else about the book within a year, but this is enough to help me recommend books to friends: I sort my library by star rating, and then I can say, “Oh yeah! Have you read anything by Tana French? She’s great…” It doesn’t matter that I can’t remember the plot of In the Woods — I remember that I loved the book and that my friends will probably love it, too.
    3. For books I really want to remember, I use a technique I first learned from The Well-Educated Mind, by Susan Wise Bauer: I write down an outline of the book in my notes. This is much more intensive than writing a quick blurb when I’m done with the book. When I’m in this mode, I’m going back-and-forth between the book and Library Notes after each chapter, creating a chapter-by-chapter summary of what I’ve read. However, I’ve noticed two things: First, probably because of the work I’ve put into creating the outline, these books remain much more firmly lodged in my brain in the first place. Second, if I do need a refresher on what’s in the book, rereading the outline brings back a lot more detail than reading my quick “character-and-plot-arc” blurb. I save this for “serious” reading.
    4. Outlines are great. However, the science is clear: If you really want to cement something in your brain, the best techniques are active recall and spaced repetition. For the most interesting & challenging works I read, I spend time to create active recall prompts in my notes (either question & answer prompts, or fill-in-the blank prompts). I can then use Library Notes’ review mode to quiz myself on the prompts.

    Library Notes isn’t an app for everyone, but I’m really happy with how it scales from “simple cataloging” to “advanced memory tool with active recall and spaced repetition.” It’s been a great companion on my reading journey. If you think it’s something that would help you, you can get it on the App Store now. It’s software made for the love of books, not to be a business, and is now and will always be free.

  27. August 21, 2021

    I’ve recently extracted another module out of Grail Diary: KeyValueCRDT.

    It turns out designing a file format that works in an era of cloud document storage is hard! Cloud documents and mobile devices make it really easy for people to make conflicting changes to the same document. It’d be nice to provide a better experience for people than a “pick which version of the file to keep” dialog box.

    The key to avoiding the “pick the version of the file to keep” dialog is making your file format a Conflict-Free Replicated Data Type (CRDT). With a CRDT, you can reliably merge changes made from multiple devices rather than forcing a person to pick which file version to keep.

    My goal with KeyValueCRDT is to provide a CRDT implementation that can work as a file format for a wide range of applications. There are more details about the API on the GitHub page, but here’s the bullet-point summary:

    • KeyValueCRDT uses SQLite for its storage, for all of the reasons listed in SQLite As An Application File Format.
    • The data model is a key-value store.
    • Values can be text, JSON, or arbitrary data blobs. Text values are indexed using FTS5 for fast full-text search.
    • At its core, KeyValueCRDT is an observed-remove set and provides multi-value register semantics. When you read a key from the database, you may get multiple values returned if there were conflicting updates to that key.
    • In addition to the underlying database operations, the module provides a UIDocument subclass that lets you integrate with the iOS document ecosystem (including iCloud documents). The module also provides a command-line tool (kvcrdt) to allow you to inspect and manipulate the database from scripts.

    Currently I use KeyValueCRDT for the document format for Grail Diary, and I hope it will be a useful format for other applications as well.

  28. June 28, 2021

    Yes, async / await is going to be great. However, Xcode 13’s DocC documentation compiler is currently the most inspiring feature for me. For the past several days I’ve been pulling out the building blocks of Grail Diary into separate packages and revamping the documentation. Often, when trying to write the documentation, I’ve realized that the APIs themselves are awkward, so I’ve refactored those as well. While this work hasn’t done much to make Grail Diary feel different when using it, I’m feeling awesome because the foundation of the program is getting more solid.

    Since the new documentation toolchain is in beta, I’m isolating this work in an xcode13 branch across the following repositories:

    • SpacedRepetitionScheduler for recommending times to review prompts in a spaced-repetition system
    • BookKit for utility routines for dealing with different book web services, like Google Books, Open Library, Goodreads, and LibraryThing.
    • TextMarkupKit for parsing and formatting text as you type.

  29. June 23, 2021

    Yesterday, I released TextMarkupKit. This is the core text processing code that I use for Grail Diary — it handles all of the text parsing, formatting, etc.

    If you’ve ever wanted to build an iOS app that does automatic formatting of plain text as you type, check out TextMarkupKit. It might be exactly what you need.

  30. May 22, 2021

    In my quest to make Grail Diary a great app for book lovers, I’ve just finished adding a feature I’ve wanted for a while. In a stroke of marketing genius, I’m calling it Random Quotes. It does exactly what it says: It scans through your book notes for five random quotes and shows them, nicely formatted, on a single page. Want to see a different selection of quotes? Just hit the Shuffle button.

    The goal here is perusability. When you flip through your book notes, you get to revisit the books in your mind. It’s like dropping in on old friends. Random Quotes tries to make this easy and fun.

    The feature’s only a few hours old but I’ve gotten a lot of joy from hitting the Shuffle button!

    Random Quote Screenshot

  31. May 12, 2021

    Once upon a time, I was going to take what I learned writing the custom syntax-highlighting editing component of Grail Diary and turn it into a stand-alone tutorial on text editing. I ran out of time to work on that after writing one item: An overview of how you can take a custom data structure for text editing (a piece table) and give it a natural API by conforming to Swift Collection. I don’t want this material to die, so I’ve moved it over here.

    The Theory

    What’s so hard about editing text? Let’s ignore for the moment the problems with even storing Unicode text, with its encodings, multi-byte characters, etc. If you put those considerations aside, the abstract model for a text file is an array of characters. An array is about as simple a data structure as you can get. What’s the problem?

    The answer, of course, is that changing things in an array can be expensive. Appending to or removing from the end of an array is cheap. Any other operation, though, means copying elements to make room for the new element (or to remove existing elements). And of course in a text editor, you want to make changes all throughout the text, not just at the end. That’s kind of the point. If your editor’s main data structure for text is “an array of characters”, it’s doing a ton of memory copying on every keystroke whenever the cursor is anywhere but the very end of the file.

    So we need something better. But what?

    One option is to store the file as a linked list of lines, and each line is an array of characters. You still need to do copying as you insert and remove characters, but you’re now only copying characters on the same line instead of all characters to the end of the file. If you’re implementing a source code editor, where you can assume that lines are all of a reasonable maximum length, you can get far with this approach.


    Next up in sophistication is a data structure known as a gap buffer. The main idea behind a gap buffer is that edits to a text file aren’t randomly distributed throughout the text file — they exhibit a lot of locality. If you insert a character at offset 42 in the file, the next insertion is much more likely to be at offset 43 than any other location, and the next deletion is more likely to be at offset 42 than any other location. Basically, where the cursor is is where edits are likely to be. A gap buffer makes edits at the cursor really cheap, but you pay a cost to move the cursor.

    A gap buffer does this by storing the text in an array that’s much larger than what’s needed to store the text. This gives you a lot of free space inside the array (the “gap”), and the key insight is you can pay a cost to move the gap to the location of the cursor to make insertions and deletions at the cursor really cheap.

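
    Here’s a toy gap buffer in Swift to make the idea concrete (a sketch of my own, ignoring growth and Unicode concerns):

    ```swift
    // Toy gap buffer: the unused space (the "gap") sits at the cursor, so
    // inserting at the cursor is O(1); moving the cursor pays to shift
    // characters across the gap.
    struct GapBuffer {
      private var storage: [Character]
      private var gapStart: Int
      private var gapEnd: Int

      init(capacity: Int = 64) {
        storage = Array(repeating: " ", count: capacity)
        gapStart = 0
        gapEnd = capacity
      }

      /// Inserts a character at the cursor. (Growing a full gap is omitted.)
      mutating func insert(_ character: Character) {
        storage[gapStart] = character
        gapStart += 1
      }

      /// Moves the cursor, shifting characters from one side of the gap
      /// to the other.
      mutating func moveCursor(to position: Int) {
        while gapStart > position {
          gapEnd -= 1
          gapStart -= 1
          storage[gapEnd] = storage[gapStart]
        }
        while gapStart < position {
          storage[gapStart] = storage[gapEnd]
          gapStart += 1
          gapEnd += 1
        }
      }

      /// The logical contents: everything outside the gap.
      var text: String {
        String(storage[0..<gapStart] + storage[gapEnd...])
      }
    }
    ```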

    While you can implement world-class editors with a gap buffer, for Scrap Paper we’re going to use a third approach, called a Piece Table. Remember how we said that appending to the end of an array is cheap? The piece table exploits that by keeping two arrays. One read-only array contains the original file contents. The second append-only array contains all characters inserted at any time during the editing session. Finally, the piece table tells you how to build the file as a sequence of “pieces” from the different arrays.


    Just as with the gap buffer, a piece table works efficiently because most edits to a text file are localized. When you insert character after character into the same spot, you’ll end up with a pretty compact representation of the “pieces” constructed from the two arrays. For example, I edited this file in a version of Scrap Paper that recorded all of the changes that I made to the text file (backspaces and all). At the end of my editing session of 2276 individual edits, I had 48 pieces representing the contents of the file.

    One more bit of theory: String, NSString, and unicode

    I glossed over the challenges of representing text earlier. It’s now time to pay a little attention to that.

    1. The Swift String struct and the Objective-C NSString class made different engineering choices about how to store and model strings. Swift models its strings as a collection of “characters” (extended grapheme clusters) and encodes them in UTF-8. The NSString class, in contrast, does not expose individual Unicode characters, and it uses UTF-16 encoding internally.
    2. The TextKit classes are from the NSString era.
    3. Since we will be interfacing a lot with TextKit, we’re going to use the NSString convention and model our text as an array of UTF-16 code points.
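    A quick playground example shows the difference between the two models. The thumbs-up emoji is a single Character in Swift but occupies two UTF-16 code units, which is exactly the kind of detail we need to get right when we model text as UTF-16:

    import Foundation

    let text = "a👍"
    print(text.count)                 // 2: Swift counts Characters (grapheme clusters)
    print(text.utf16.count)           // 3: "a" is one UTF-16 code unit; "👍" is two
    print((text as NSString).length)  // 3: NSString measures length in UTF-16 code units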

    Let’s build a Piece Table!

    With the theory out of the way, it’s time to do some building.

    /// A piece table is a range-replaceable collection of UTF-16 values. At the storage layer, it uses two arrays to store the values:
    /// 1. Read-only *original contents*
    /// 2. Append-only *addedContents*
    /// It constructs a logical view of the contents from an array of slices of contents from the two arrays.
    public struct PieceTable {
      /// The original, unedited contents
      private let originalContents: [unichar]

      /// All new characters added to the collection.
      private var addedContents: [unichar]

      /// Identifies which of the two arrays holds the contents of the piece
      private enum PieceSource {
        case original
        case added
      }

      /// A contiguous range of text stored in one of the two contents arrays.
      private struct Piece {
        /// Which array holds the text.
        let source: PieceSource

        /// Start index of the text inside the contents array.
        var startIndex: Int

        /// End index of the text inside the contents array.
        var endIndex: Int

        /// True if the piece contains no characters.
        var isEmpty: Bool { startIndex == endIndex }
      }

      /// The logical contents of the collection, expressed as an array of pieces from either `originalContents` or `addedContents`
      private var pieces: [Piece]

      /// Initialize a piece table with the contents of a string.
      public init(_ string: String) {
        self.originalContents = Array(string.utf16)
        self.addedContents = []
        self.pieces = [Piece(source: .original, startIndex: 0, endIndex: originalContents.count)]
      }
    }

    This code defines the stored properties we need for a piece table:

    • originalContents is the read-only copy of the characters from the file we are trying to edit.
    • addedContents is an append-only array of all characters added during an edit session.
    • pieces describes the logical contents of the file as a series of contiguous characters from either originalContents or addedContents.

    Conforming to Collection

    To make PieceTable feel Swift-y, we’re going to make it conform to a few standard protocols. First: Collection — this will let users read characters from a piece table as easily as reading characters from an array. In Swift, a Collection is a data structure whose elements can be accessed by an index. If you’ve used arrays in Swift, you’ve used a collection.

    The Collection protocol is big. While it contains over 30 methods, most of those have default implementations. To create a custom Collection, this is all you need to implement:

    // The core methods of a Collection.
    // Everything here should have O(1) complexity.
    protocol Collection {
      associatedtype Element
      associatedtype Index: Comparable
      var startIndex: Index { get }
      var endIndex: Index { get }
      func index(after position: Index) -> Index
      subscript(position: Index) -> Element { get }
    }

    If your only exposure to Collection has been through arrays, you may have assumed that the index needs to be an integer. Not so! The Collection protocol gives implementations a ton of flexibility about the index type. You can use any type so long as:

    1. You can efficiently return the index of the first element of the collection
    2. You can efficiently return the index that means “you’ve moved past the last element of the collection”
    3. Given an index, you can efficiently return the next index in the collection.

    For our piece table, we are going to need a custom index type. To find a character in the piece table, we will use two values: the index of the piece in the pieces array, and the index of the character within the contents array. With this information, we can easily figure out the character at an index (use the piece index to find the correct contents array, then return the character at the correct character index). It’s a tiny bit more complicated to figure out the index that comes after a given index, because you have to consider two cases: if the current index represents a character at the end of a piece, you have to move to the next piece; otherwise you move to the next character in the current piece.

    With this overview, here is the minimal code to have a piece table conform to Collection:

    extension PieceTable: Collection {
      public struct Index: Comparable {
        let pieceIndex: Int
        let contentIndex: Int

        public static func < (lhs: PieceTable.Index, rhs: PieceTable.Index) -> Bool {
          if lhs.pieceIndex != rhs.pieceIndex {
            return lhs.pieceIndex < rhs.pieceIndex
          }
          return lhs.contentIndex < rhs.contentIndex
        }
      }

      public var startIndex: Index { Index(pieceIndex: 0, contentIndex: pieces.first?.startIndex ?? 0) }
      public var endIndex: Index { Index(pieceIndex: pieces.endIndex, contentIndex: 0) }

      public func index(after i: Index) -> Index {
        let piece = pieces[i.pieceIndex]
        // Check if the next content index is within the bounds of this piece...
        if i.contentIndex + 1 < piece.endIndex {
          return Index(pieceIndex: i.pieceIndex, contentIndex: i.contentIndex + 1)
        }
        // Otherwise, construct an index that refers to the beginning of the next piece.
        let nextPieceIndex = i.pieceIndex + 1
        if nextPieceIndex < pieces.endIndex {
          return Index(pieceIndex: nextPieceIndex, contentIndex: pieces[nextPieceIndex].startIndex)
        } else {
          return Index(pieceIndex: nextPieceIndex, contentIndex: 0)
        }
      }

      /// Gets the array for a source.
      private func sourceArray(for source: PieceSource) -> [unichar] {
        switch source {
        case .original:
          return originalContents
        case .added:
          return addedContents
        }
      }

      public subscript(position: Index) -> unichar {
        let sourceArray = self.sourceArray(for: pieces[position.pieceIndex].source)
        return sourceArray[position.contentIndex]
      }
    }
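    Assuming the code above compiles as shown, this conformance means all of the read-only Collection API now works on a piece table. For example:

    let table = PieceTable("Hello, world")
    // Iteration, `map`, `count`, etc. all come from Collection for free.
    let codeUnits = Array(table)
    let roundTripped = String(utf16CodeUnits: codeUnits, count: codeUnits.count)
    // roundTripped == "Hello, world"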

    Conforming to RangeReplaceableCollection

    We can now iterate through the contents of a PieceTable. However, we don’t have a way to modify the contents of the PieceTable. To add this capability, we are going to make PieceTable conform to RangeReplaceableCollection. This protocol has a single required method, replaceSubrange(_:with:). If you implement this method, you get a ton of other APIs for free.

    For our implementation of replaceSubrange, we have to do two high-level jobs:

    1. Append the new characters to the end of addedContents. Remember, in a piece table, we only ever add characters — never delete — and they always get added to the end of the array. This is the easy part.
    2. The hard part: Update pieces to reflect the new contents of the file. The performance of the piece table will depend on how many entries are in pieces, so we need to take care to avoid creating unneeded items.

    This implementation manages the complexity of updating the pieces array by creating a stand-alone change description that contains the new piece table entries. When constructing the change description, the implementation adheres to two rules to minimize the size of the pieces array:

    1. No empty pieces! If an edit creates a Piece with no characters, it’s removed.
    2. If it is possible to coalesce two neighboring pieces into one, do it.

    Here is the code that adds conformance to RangeReplaceableCollection:

    extension PieceTable: RangeReplaceableCollection {
      /// Creates an empty piece table. (`RangeReplaceableCollection` requires an empty initializer.)
      public init() {
        self.init("")
      }

      /// This structure holds all of the information needed to change the pieces in a piece table.
      /// To keep the final `pieces` array as compact as possible, we use the following rules when appending pieces:
      /// 1. No empty pieces -- if you try to insert something empty, we just omit it.
      /// 2. No consecutive adjoining pieces (where replacement[n].endIndex == replacement[n+1].startIndex). If we're about to store
      ///   something like this, we just "extend" replacement[n] to encompass the new range.
      private struct ChangeDescription {
        private(set) var values: [Piece] = []

        /// The smallest index of an existing piece added to `values`
        var lowerBound: Int?

        /// The largest index of an existing piece added to `values`
        var upperBound: Int?

        /// Adds a piece to the description.
        mutating func appendPiece(_ piece: Piece) {
          // No empty pieces in our replacements array.
          guard !piece.isEmpty else { return }
          // If `piece` starts where the last value ends, just extend the last value.
          if let last = values.last, last.source == piece.source, last.endIndex == piece.startIndex {
            values[values.count - 1].endIndex = piece.endIndex
          } else {
            // Otherwise, stick our new piece into the replacements.
            values.append(piece)
          }
        }
      }

      /// If `index` is valid, then retrieve the piece at that index, modify it, and append it to the change description.
      private func safelyAddToDescription(
        _ description: inout ChangeDescription,
        modifyPieceAt index: Int,
        modificationBlock: (inout Piece) -> Void
      ) {
        guard pieces.indices.contains(index) else { return }
        var piece = pieces[index]
        modificationBlock(&piece)
        description.lowerBound = description.lowerBound.map { Swift.min($0, index) } ?? index
        description.upperBound = description.upperBound.map { Swift.max($0, index) } ?? index
        description.appendPiece(piece)
      }

      /// Update the piece table with the changes contained in `changeDescription`
      mutating private func applyChangeDescription(_ changeDescription: ChangeDescription) {
        let range: Range<Int>
        if let minIndex = changeDescription.lowerBound, let maxIndex = changeDescription.upperBound {
          range = minIndex ..< maxIndex + 1
        } else {
          range = pieces.endIndex ..< pieces.endIndex
        }
        pieces.replaceSubrange(range, with: changeDescription.values)
      }

      /// Replace a range of characters with `newElements`. Note that `subrange` can be empty (in which case it's just an insert point).
      /// Similarly `newElements` can be empty (expressing deletion).
      /// Also remember that characters are never really deleted.
      public mutating func replaceSubrange<C, R>(
        _ subrange: R,
        with newElements: C
      ) where C: Collection, R: RangeExpression, unichar == C.Element, Index == R.Bound {
        let range = subrange.relative(to: self)
        // The (possibly) mutated copies of entries in the piece table
        var changeDescription = ChangeDescription()
        // We might need to coalesce the contents we are inserting with the piece *before* the edit in the
        // piece table. Allow for this by inserting the unmodified piece table entry that comes before
        // the edit.
        safelyAddToDescription(&changeDescription, modifyPieceAt: range.lowerBound.pieceIndex - 1) { _ in
          // No modification
        }
        // Keep the characters from the piece containing the start of the edit, up to the edit itself.
        safelyAddToDescription(&changeDescription, modifyPieceAt: range.lowerBound.pieceIndex) { piece in
          piece.endIndex = range.lowerBound.contentIndex
        }
        if !newElements.isEmpty {
          // Append `newElements` to `addedContents`, build a piece to hold the new characters, and
          // insert that into the change description.
          let index = addedContents.endIndex
          addedContents.append(contentsOf: newElements)
          let addedPiece = Piece(source: .added, startIndex: index, endIndex: addedContents.endIndex)
          changeDescription.appendPiece(addedPiece)
        }
        // Keep whatever remains of the piece containing the end of the edit.
        safelyAddToDescription(&changeDescription, modifyPieceAt: range.upperBound.pieceIndex) { piece in
          piece.startIndex = range.upperBound.contentIndex
        }
        applyChangeDescription(changeDescription)
      }
    }
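    To see what that single method buys us, here’s a quick example (assuming the code above): append, insert, and removeSubrange are all synthesized from replaceSubrange:

    var table = PieceTable("Hello, world")
    table.append(contentsOf: "!".utf16)
    table.removeSubrange(table.startIndex ..< table.index(table.startIndex, offsetBy: 5))
    table.insert(contentsOf: "Howdy".utf16, at: table.startIndex)
    let text = String(utf16CodeUnits: Array(table), count: table.count)
    // text == "Howdy, world!"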

    Does it make a difference?

    For large file sizes, yes!

    I gathered a trace of all of the edits I made to a text buffer for a couple of minutes of editing a file. I then replayed that trace on an NSTextStorage object and on a PieceTable, timing how long it took to perform all of the edits on files of different sizes. This is the result:

    DDDB752B EBA1 4DD0 84AA 36D78CA12529

    For the NSTextStorage, the time to perform the edits increases linearly with the file size. For PieceTable, however, the time to perform the edits is independent of file size; PieceTable operations will get slower as the complexity of the edit history increases.

  32. May 09, 2021

    Since deciding to focus Grail Diary on book notes, I’ve changed the app’s navigation model to be reading-focused instead of notes-focused.


    Next up is extending the Review Mode. Currently review mode is all about spaced repetition and active recall. However, part of the value of having your book notes all in one place is perusability — I want a review mode that makes it easy to revisit your favorite quotes. What I implemented is a mode that quizzes you on the material you are likely to forget. (I still want a spaced repetition mode! I just also want another one that’s about revisiting your favorite quotes.)

    Finally, it’s time to look for an audience for Grail Diary. I’ve long been conflicted about whether I should put Grail Diary on the App Store. I don’t want to turn Grail Diary into a side hustle. That doesn’t sit right with me for some reason. However, this past week I re-read a post by Brent Simmons where he writes, “This is the age of writing iOS apps for love.” That struck a chord. I’ve been working on this app for years because I love books, I love writing apps, and Grail Diary makes my experience of reading better. Writing an app for love is why I’ve decided it’s worth the work of finding an audience for Grail Diary and putting it on the App Store. I know there won’t be a huge audience for this app, but I also know there is a niche somewhere. And just like my writing gets better when I get feedback, Grail Diary will become better when I find the right audience and start getting feedback. That’s my goal.

  33. April 30, 2021

    I last wrote:

    When I was working on Grail Diary, I was confused about whether I was writing a digital Commonplace Book (specifically designed for storing quotes and other things you want to remember about the books you read) or whether I was writing a general-purpose notes app with a spaced-repetition feature…

    I’ve been thinking about this a lot for the past few days, and I’ve made a decision: Grail Diary is going to be an app for taking notes about the books you read. That’s currently what I use it for, and I want to start evolving the app to make it as great as possible for any other book lover.

    Book notes are important enough to deserve a dedicated application! For a book lover, your book notes will be among the most important things you create in your life. Your notes help books become a part of you.

    I designed Grail Diary around three factors that make book notes different:

    • Permanent value: If you are a book lover, you can build up a reading log and notes over the course of decades. My log and notes go back over twenty years. Grail Diary uses simple plain-text markup so your notes will be readable forever, by almost any application.
    • Personal ownership: Your book notes are yours. You shouldn’t lose them when a software company goes out of business. Grail Diary works with simple files! No account, no sign-up, and you can move or copy your files wherever you want. You can use file synchronization services like iCloud Drive or Dropbox to keep your content in sync across multiple devices.
    • Perusability: The joy of writing book notes is rereading them! It’s like meeting old friends again. Grail Diary already has features to help you get reacquainted with what you’ve put in your notes, and I’ve got ideas for many more.

    Even if Grail Diary never makes it beyond “personal project,” I feel better with focus. I’m trying to decide if I want to really polish the app and put it on the app store, or if I just want to keep it as an open-source app for the motivated techies to benefit from. On the one hand, getting my own app back on the app store would be a good ultralearning project. On the other hand, I’m kind of scared to take that step. If I release the app to the app store and people don’t like it… can my ego handle it?

    That’s a decision for another day.

  34. April 25, 2021

    Three years ago today, I was reading the book Factfulness by Hans Rosling, and I wrote in my journal that I’d like a program that would add Anki-like spaced repetition to the notes I was making about the book. I wrote:

    I’m thinking of maintaining a simple text file with Markdown syntax, one bulleted line per “fact” I want to remember from a book. Markdown-underline something like this and it becomes a phrase that gets elided for an Anki card.

    That idea turned into Grail Diary, which I still use today to take notes about the books I read. The project has been a huge personal success. I write personal programs as a way to teach myself things, and by working on Grail Diary I cemented knowledge into my brain about piece tables, incremental packrat parsing, spaced repetition, sqlite, and iCloud document storage. I also have 760 prompts about the 66 books I’ve read in the past 3 years, and by regularly reviewing those cards I’ve remembered the material I’ve read these past three years way better than what I’d read for the prior 44. It was also the start of my journey into educational technology, which led me to leave Facebook and join Duolingo.

    Of all of my side projects, Grail Diary feels like it’s got the most potential to be useful for someone other than myself. However, it’s got one huge problem at its core: When I was working on Grail Diary, I was confused about whether I was writing a digital Commonplace Book (specifically designed for storing quotes and other things you want to remember about the books you read) or whether I was writing a general-purpose notes app with a spaced-repetition feature. As a result, it’s this strange mishmash of features. I doubt anyone else would understand why the software in its current form behaves the way it does.

    Who knows… maybe by the time Grail Diary turns 5, I’ll have picked “Commonplace Book” or “General-purpose notes app” as the primary identity for the project and it will have another user. Time will tell!

  35. April 24, 2021

    I’m awed by Scott Young’s MIT Challenge for its simplicity and audacity. In 2011, he gave himself one year to complete the full four-year curriculum in Computer Science from MIT. MIT made most of its course material available for free online, including the tests and the answer keys, so Young could work at his own pace, at his own home, and not spend any tuition money on this experiment. He successfully finished this project in 2012. In 2019, he published the book Ultralearning, a book that helps people plan big learning projects of their own.

    Young writes that for any ultralearning project, you should budget about 10% of your time on metalearning — making a plan to identify what you need to learn and how you will go about learning it. Furthermore, he advises you to break down what you need to learn into three buckets:

    1. New concepts that you need to understand
    2. New facts that you need to memorize
    3. New skills that you need to practice and acquire

    You do this because the techniques to efficiently learn things are different for the different categories. I’m shocked I’d never thought about this before! After reading this, I understand my own learning shortcomings much better than I did before. I love love love learning that falls in the “understanding new concepts” category. I gravitated to subjects like math, physics, and computer science that are rich in first-principles conceptual understanding. However, the learning tools that help me pick up new concepts don’t help me pick up new skills or memorize things, so I struggled in subjects like foreign languages and art. I wish I’d had this book back in my high school years to know that I needed to use different tools to learn different things.

    Ultralearning also makes another important argument. As much as possible, you should structure your learning project around doing the thing you’re trying to learn how to do and you need a way to get feedback on how you are doing. Do you want to learn a language so you can speak to locals when you travel? Then you should be speaking to locals as much as you can as early as you can. (The reaction of the native speakers gives you real-time feedback!) If you want to learn jazz guitar, you need to spend a lot of time playing guitar.

    The book devotes a few pages arguing against the effectiveness of my employer, Duolingo, because it is a very indirect way to learn a language. At the same time, though, Young writes of the importance of using drills to isolate and improve specific skills for your learning project. Someone trying to improve at tennis will do more than play games; forehand / backhand / serving drills will help you isolate and improve the building-block skills for the game faster than you can from games alone. Duolingo plays a similar role for serious language learners. It’s not a substitute for talking to native speakers, but the app does help you drill on vocabulary and grammar. (Also! Duolingo provides way more than app-based translation exercises. You can use it to find groups to practice speaking and listening. You can use Duolingo Podcasts to practice understanding native speakers. And perhaps most importantly, Duolingo offers learners of all levels motivation to keep learning — the hardest part of learning a new language.)

    Anyone who is interested in learning, and in particular anyone who is interested in self-directed learning, should read Ultralearning. You’ll find a ton of helpful material. For those interested in educational technology, Ultralearning suggests two areas where technology seems uniquely positioned to help people learn faster and better: providing material to practice with and providing feedback. All of the influential educational software I can think of — from Duolingo to Anki to Khan Academy to experimental efforts like the “mnemonic medium” — delivers in both of these dimensions. However, I think because Ultralearning already assumes its reader is highly motivated to learn, it doesn’t say much about one of the most interesting contributions of educational technology: successful technology makes learning fun and contains mechanisms to help sustain motivation over time.

  36. April 21, 2021

    Since I lamented about the black-box nature of performance engineering in SwiftUI two days ago, I spent some time familiarizing myself with the SwiftUI tools inside of Instruments. While I’ve made some headway, I’ve also hit a wall.

    First, some context. As I mentioned earlier, Captain’s Log is a simple habit-tracking app that I want to use as a playground for experimenting with streaks, streak freezes, and the psychology of motivation. It’s currently a document-based app that stores its data in a plain text file. Right now there are only two screens. The main screen shows a day’s status on each habit and a calendar to help visualize how each streak is going. Then, there’s a second screen to tweak any additional details for completing a habit. (For example, to prepare for exiting quarantine, I’m trying to ride my bike at least a little every day. When I record a bike ride in Captain’s Log, I track how long and how far I ride.)

    Streak Visualization

    While performance isn’t terrible, there are noticeable lags in several interactions. For the past two evenings, I wanted to eliminate one of the lags: There is a noticeable delay processing keystrokes in the edit form, with the first keystroke being the worst.

    Here’s what I learned after a few days poking around with the SwiftUI tools in Instruments.

    1. My pre-SwiftUI Instruments workflow of just looking at heaviest stack frame in Time Profiler and optimizing that doesn’t work in this case. The heaviest stacks are all deep in SwiftUI library code that I don’t understand.
    2. When debugging UI glitches, the Core Animation tool is really helpful. All of the places where I noticed the UI lagging, like typing characters, were visible in the Core Animation track as “long transaction commits.” For instance, before any performance optimizing, there’d be a 135ms Core Animation commit when processing the first keystroke in my edit form. Having these sections called out in the track let me focus specifically on what was happening at these problematic times.
    3. Paul Hudson pointed out that you can use the Hide System Libraries option in Time Profiler to quickly find bottlenecks in your code instead of the SwiftUI code. This helped! I found a couple of places where I was doing excessive calendar math when drawing the streak visualization view. However, unnecessary calendar math was only about 25% of the CPU use during the long transaction commits — the rest is SwiftUI library code. With my optimizations I got the long commit down to 100ms. Better, but still way too long for processing a keystroke.
    4. The SwiftUI View Body tool showed that my view bodies aren’t that heavy. Most compute in 2-4 microseconds. In the span of a 100ms core animation commit, I spend 0.6ms computing view bodies for my own views and a total of 2ms computing all view bodies. 98% of the time is spent somewhere else.
    5. But here’s what I don’t understand. The SwiftUI View Properties tool shows that my main FileDocument struct changes on each keystroke. And I assume because the FileDocument changes, SwiftUI recomputes everything that depends on that document (basically, all the UI in the app). On every keystroke. I don’t understand this at all. Inspecting my code, it doesn’t look like the file document should be changing on each keystroke (the text fields are backed by a separate String binding independent of the FileDocument until you tap Done). I wrote some custom Binding code and set breakpoints everywhere I could think to validate that the document is not changing on each keystroke. In spite of that, SwiftUI is convinced that it needs to recompute everything that depends on this document every time I enter a new character in a text field.

    I don’t know how to debug this further. This is exactly what I meant when I wrote earlier about the inherent tension between declarative UI frameworks and performance tuning. I’ve described what I want in my UI (“a monthly calendar with line segments showing how long I’ve maintained different streaks”). There are probably things I can do to make that “what” even more efficient. However, code I don’t understand and don’t have access to has decided that the “how” of making my app work involves recomputing that calendar on each keystroke. I don’t know how I can make the app responsive without the ability to influence that “how.”

    Since Captain’s Log is a toy app meant for me to learn, I’m just going to leave things as is and hope that Apple provides better performance guidance and tools at WWDC 2021.

  37. April 19, 2021

    Yesterday, I wrote about streaks and motivation. To let me experiment with streaks and streak freezes, I’ve started work on a simple habit-tracking app. One of my personal goals is to become proficient in SwiftUI, so I used SwiftUI for this project.

    The good news: My project, Captain’s Log, is done “enough” for me to use it. It’s also pleasingly compact (1200 lines of code). However, I’m now wrestling with performance. This is the slowest app I’ve written in a long time, and in spite of working as an iOS performance engineer at Facebook for years, I have no idea how to make this simple program faster. The time profiler tool in Instruments shows me deep, incomprehensible stacks that have nothing obvious to do with my SwiftUI code. The new View Body and View Properties tools are a little more helpful. For instance, one performance problem I have is the app takes too long to process keystrokes. Using the new tools, it looks like my central document property updates on each keystroke, and this causes most of the app to redraw. However, I can’t figure out why this property is updating on each keystroke, nor can I tell if I’ve broken some intelligent View diffing that’s supposed to be happening. I feel stuck.

    When Apple introduced SwiftUI, they explained the difference between imperative and declarative programming with a sandwich shop metaphor. If you walk into a sandwich shop and say, “I’d like an avocado toast,” that’s like declarative programming. You’re describing what you want and you let the server figure out how to make it. To get an avocado toast imperative-style, you’d need to tell the server individual steps instead. (“First, I want you to get a slice of bread. Next, toast it for 2 minutes. Then, get a properly ripe avocado. Mush some avocado and spread it on the toast…“)

    I love this metaphor! It shows the promise of declarative frameworks — and also hints at why performance problems might be inherently harder to solve with them. Suppose I order an avocado toast at brunch, and the server disappears. 20 minutes pass. 30 minutes. Where’s my food? Since I don’t know the steps that the server takes to fulfill my order, there’s no way to figure out why things are taking so long. This seems to be the state of performance tuning in SwiftUI: You, sitting at a table, alone & hungry, wondering where your food is.

    Clearly, if I’m going to become proficient with SwiftUI, I’m going to need to learn some new performance skills. Paul Hudson has the best performance tuning guide I’ve found so far, and my next project is to see if I can use this to make Captain’s Log pleasantly snappy.

    Always new skills to learn!

  38. April 18, 2021

    Inside Duolingo, we have a saying: The hardest part about learning a new language is staying motivated. I didn’t appreciate this aspect of effective educational technology before I started working here. The best educational software will not only have great content: It will have mechanisms that help learners stay motivated to keep learning.

    Streaks are one of the most important mechanisms that Duolingo uses to keep people motivated. Streaks encourage people to do an activity a little bit every day by counting the number of consecutive days you’ve done something you care about (spent time studying a language, got some exercise, wrote in your journal, etc). Skipped a day? Your streak counter resets.

    While tons of apps use streaks, Duolingo adds one twist that, as far as I know, is unique: The streak freeze. As you use the app, you earn the ability to buy streak freezes. Each streak freeze protects your streak for one full day of inactivity. Imagine: You’ve been studying Spanish dutifully every morning before breakfast for a month. But then one day you wake up feeling a little sick, sleep in a bit to recover… and since your routine was disrupted, you forget to practice Spanish that day. Most apps will say that you broke your 30-day streak, and the streak counter will reset the next time you practice. With Duolingo, though, if you had a streak freeze active for your account, your sick day would use up that streak freeze but your streak continues.

    Streak freezes dramatically increase the length of the streak you can build. Suppose you’ve got a 99% success rate at remembering to practice on Duolingo each day. Without streak freezes, you could expect your streaks to average around 100 days before they get broken. Impressive, yes! However, if you keep your account equipped with two streak freezes, you have to miss three days in a row to break your streak. With just a little bit of care, you can keep that streak going indefinitely. (If you didn’t take care and let chance dictate your streak length: that same 1% chance of forgetting gave you 100-day streaks in a world with no streak freezes. With streak freezes, left entirely to chance, you could expect your streak to last almost 30 years.)

    Longer streak lengths tap into two motivation centers in learners’ brains.

    1. Loss aversion: It just hurts so much to lose something you “own.” If you have a long streak, you’ll want to keep it. Each day your streak gets longer, your brain realizes it gets harder to replace if it breaks… so you care that much more about keeping it going.
    2. Identity: At some point, after practicing a language and caring for a long streak, it stops being something you do and starts being part of who you are. “I’m a person who practices languages at least a little every day.” As Angela Duckworth writes in Grit, once you make an activity part of your sense of identity, it becomes much easier to stick with because your brain stops doing cost-benefit calculations.

    I’m not surprised that so many apps try to use streaks as a motivational tool — it’s a simple concept that’s simple to implement in almost any program. Streak freezes, on the other hand, require much more design and programming work.