Today I Learned

For the past five years, I’ve been working at one of the best educational technology companies in the world: Duolingo. I want to start writing about the things I’m learning about educational technology and I needed a space to do so. So here it is. (You can get this in your email via Substack.)

  1. April 15, 2024

    The music was loud, the lights were flashing, young bodies slid around each other like water preparing to boil. The woman across from me took my hand. She wore a tight outfit with a low cut top. She was younger than me by decades but old enough to be in the night club. Old enough to work in the night club, in fact; she was one of the roaming waitresses of Cabo’s El Squid Roe, dressed like a sexy sports referee with a whistle around her neck and a bottle of something bright red that she’d placed on the railing next to us. She turned my hand palm side up, produced a pen from somewhere, and started drawing on my palm. Stroke, stroke, stroke, stroke, circle… it was a tic-tac-toe board, with an O in the corner. She leaned her mouth to my ear so I could hear her. “If you win,” she said, “I’ll give you two free shots.”

    Let me pause and tell you that it was really clear that I didn’t belong in that night club. I’d parked myself by a rail with a good view of the action, but I wasn’t near anyone else. While the dance mix of Taylor Swift’s Love Story bounced off the metal walls, I stood still. I wore a bright orange Patagonia jacket, zipped up, because I’d been worried it was going to get cold since the sun had set. Perhaps this waitress was just being a good hostess, finding the person uncomfortable at the party and trying to make him feel welcome. Or maybe she hoped two free shots would loosen me up and get me to buy more drinks.

    I looked down at the tic-tac-toe board on my palm, closed my hand, and shook my head. She leaned in to my ear again. “Do you have babies here?” she asked, and followed my gaze over to the knot of people dancing in front of the bar. Babies. While they’d hate to be described that way — they were high school seniors! — yeah, I guess to us parents, they were still babies. I nodded back at the waitress. Realizing she wasn’t going to get me to drink, she smiled and walked off, looking for the next person she could loosen up with sweet, cheap alcohol. I returned to watching my babies jump around.


    This is a story about that uncomfortable parenting moment when you realize your child is growing up and making different choices than you would make. Perhaps making different mistakes.

    When my wife and I decided to join dozens of other families from our kids’ high school for a spring break trip to Cabo, we had no idea what we were getting into. We agreed to this because a warm-weather beach-resort vacation sounded like a great way to chase away the Seattle gray. I know for our teens, it’s much more fun to do stuff with friends than family, so taking them to a resort filled with schoolmates seemed like a wonderful idea. They could go off and have fun, we could relax and read by the pool. Dear reader, you will laugh at our naïveté: We really didn’t consider that we were traveling to a country where the drinking age is 18.

    There is a school of parenting that believes that teens will drink no matter what you do, so the prudent approach is to let them drink in a safe and supervised environment. Parents from that school of thought organized the Cabo trip. I can see the logic, and maybe I would have quickly adjusted to it if this was about letting the kids have beer with their tacos. But that’s not why people come to Cabo, I’ve learned. People come to Cabo to party. If you’ve met me in person, you won’t be surprised to know that I’ve never participated in “party culture.” It’s not a matter of careful cost-benefit analysis. I don’t weigh the cost of tomorrow’s hangover versus the fun of tonight’s party, because nothing about loud drunken revelry seems fun. I don’t like crowds. How are you supposed to talk to people? It’s too loud! And “dancing” that is really “jumping around” — I don’t get it. I have a fear of letting my brain get foggy when I’m around others.

    I realize I may be the outlier here. The urge to cast aside all inhibitions and just party seems as old as civilization: The Romans had their Saturnalia, the Middle Ages had the Lord of Misrule, and the 21st century has the El Squid Roe night club in Cabo San Lucas.

    An image of El Squid Roe night club in Cabo San Lucas.

    This is where we were bringing the seniors for one final event for their spring break trip. Responsible parents planned everything: Transportation to and from the club. A battalion of chaperones. Making sure that the club management knew what was up: a class of high school seniors was coming and might not know their limits.

    In spite of all these preparations, though, I found myself increasingly nervous as the Squid Roe trip approached. This wasn’t just my son’s first trip to a night club, it was my first trip, too. I had no idea what to expect but could easily imagine how things could go wrong. All I know about wild parties comes from TV and movies. No screenwriter adds a party to a script to have it be a fun, uneventful, and somewhat boring evening. Something dramatic always happens: The alcohol-fueled fight, the trashed rooms, the drug slipped into a girl’s drink, the arrival of the cops.

    In this case, at least, real life was more boring than TV. The kids had a fantastic time. When we brought them back to the resort at midnight, it appeared that only one or two of them took things to “you’ll regret things in the morning” excess. (One girl had trouble walking. One boy was belligerent with his friends.)

    In spite of my initial fears, even I had a good time. The people-watching at El Squid Roe was excellent! As the night went on, better and more experienced dancers than our 18-year-olds joined the club. The most memorable example: For about fifteen minutes, an older mixed-race couple put on a sultry, sexy dance show nearby. When they danced, I found my attention split between watching the dancers and watching the kids watch the dancers. Some kids, jaws literally dropping, openly stared with such intensity that I thought they were filing away slightly misguided life lessons of “this is how adults behave.” Others filmed the dancers on their phones, surely destined for TikTok. The dancers were a hit with the crowd.

    When that couple was leaving, the woman stopped by me and leaned into my ear. “Do you have daughters here?” she asked. (Why assume that parents only want to keep daughters protected?) I tried to explain how I had a son but was here with a group of high school students. “That’s great,” she said. “It’s good to have fun but it’s important to be safe. You’re a good dad.”

    I don’t know if that’s the right lesson for this story, but it did feel irrationally cool to earn the sultry stranger’s stamp of approval at the end of the night.

  2. March 19, 2024

    I was an iOS engineer at Facebook from 2012 until 2019. I joined a few months after the Facebook IPO, when pundits worried whether Facebook would survive the transition from the web to mobile computing. In fact, I probably got my job precisely because pundits (and company leadership) worried that Facebook might not survive the transition from the web to mobile computing. In 2011, realizing that the company was staffed with backend and web engineers in a world that was increasingly shifting to smartphones, Facebook set up specialized recruiting pipelines for iOS and Android engineers. In the summer of 2012, I’d been a full-time developer on the now-defunct Urbanspoon restaurant review and recommendation iPhone app for about nine months. That might not sound like a lot of experience… but it was nine months more iPhone development experience than most of the engineers at Facebook at the time.1 It was enough to get me an interview at Facebook. Since the company was busy vacuuming up all of the iPhone engineers it could find, I somewhat amazingly got a job offer. I started in October.

    A collection of cameras on a shelf

    I never worked for Instagram, but I always loved walking by their desks. Their section of the campus was always the best decorated.

    Every engineer at Facebook started in “bootcamp,” which could last anywhere from six to nine weeks. Bootcamp gave incoming engineers time to learn the tools and processes used at the company; it was a time to learn and to make sure you were ready to be productive when you joined a team. Bootcamp was also a deliberate attempt to maintain company culture in its “hypergrowth” phase.2 I worked in Facebook Seattle, which was hiring a lot of ex-Microsoft and ex-Amazon engineers. Bootcamp tried to make sure that Facebook Seattle felt like Facebook and not an outpost of Microsoft. Bootcamp also served to match incoming engineers with available positions — kind of like sorority rush. Most engineers who started at Facebook didn’t know what team they would be joining when they signed their offer letters. Instead, during bootcamp, incoming engineers heard about teams with openings and were encouraged to try working on a few different teams to find a good fit. Unlike most incoming engineers, though, I was “pre-allocated” — when I started, I knew I would be joining the Seattle outpost of the Facebook for iPhone team. Still, everyone encouraged me to take my time in bootcamp. “It’s the best time to see the scope of everything happening inside the company,” my first manager advised me. “When you leave bootcamp, you’ll be heads-down on our team’s problems. Take advantage of this time.”

    Many years later, I was a bootcamp mentor, and I passed along that advice to new hires. I also remember some simple things I, and other bootcamp mentors, did to subtly communicate Facebook’s engineering culture to new hires. For example, we made sure to show new people the vending machines throughout campus, stocked with things like keyboards, phone chargers, noise-cancelling headphones… if there was something you needed to be more productive, there was a good chance a vending machine had it, and you could walk over and get it with just a swipe of your badge. The subtext: If the company can spend money to make you more productive, it will.

    A Facebook vending machine

    A Facebook vending machine. Source: The Atlantic.

    More importantly: we tried to make sure that folks pushed a change to production within the first week or two. Most people found that terrifying. Brand new to the company, learning the ropes… and making a change that’s potentially visible to a billion people? What if you make a mistake and a billion people see it? Our job as mentors was to make sure that people didn’t let that fear stop them. Facebook took the “move fast” part of engineering extremely seriously. Chuck Rossi, head of release engineering when I was there and an absolute company legend, gave a talk to our bootcamp class. He told us how important it was for engineers to be available to support and debug any issues that might arise when deploying their code to production, and to take all reasonable steps to verify their work before it was deployed. But if there are bugs? “So what?” he said. “It’s not like we’re trying to land a spacecraft on the moon.” Everything in Facebook’s move-fast engineering culture depended on speed, and we needed new engineers to understand that. Sometimes you shipped code with bugs, yes, but Facebook’s engineering tools made it really easy and fast to ship bug fixes, too.

    I have enough self-confidence to avoid imposter syndrome.3 My first year at Facebook tested me, though. My first two changes to Facebook for iPhone needed to be reverted for one bug or another; not the strong start I was hoping for. For my first nine months at the company, I didn’t feel like I was fully productive; every change I made was a struggle, with lots of back-and-forth iterations with code reviewers. Several engineers I worked with left me humbled because they were clearly better than I was: They wrote higher-quality code faster, and they could see farther ahead and anticipate issues better than I could.

    While my first year was humbling, I was clearly learning. All of that code review feedback that the better and faster engineers were leaving on my work? It was a very detailed set of instructions for how I too could be a better and faster engineer. It helped, and I improved. My first years at Facebook strongly influenced how I think about code reviews. While some teams emphasize code reviews as a way to prevent bugs, I now believe code reviews do two things that are even more important. First, code reviews are an excellent way for peers to teach each other — this is the value I got as a new Facebook engineer. I also came to rely on code reviews as a great communication tool. Facebook had a “promiscuous” code review culture: On the teams I was on, you were expected to at least “cc” the members of your team on all of your changes. This made reviews a great way to communicate precisely what was changing in the code and to keep team members up-to-date.

    Another thing that kept me going: pure ambition. If you’ve never worked at a Big Tech company and wonder what it’s like, I think ambition is the most defining characteristic. A small example: I remember in one of the regular Friday Q&A sessions, Mark Zuckerberg explained to the company how one of the obstacles to Facebook’s continued growth was that a large chunk of the world did not have a reliable, inexpensive connection to the internet. So, Facebook was going to try to fix that. He told us the company was working on a solar-powered drone that had the wingspan of a 737, weighed a mere 900 pounds, and could stay aloft for months at a time to provide an internet connection to people living below. They had a model of the drone in one of the buildings for a while, a gigantic gray V; it was simultaneously big and imposing, yet delicate-looking because it was unnaturally thin. I never worked on the project (called Aquila, Latin for “eagle”), but I recognized and loved the ambition.

    Most people working at Facebook in those years did not work on internet-delivery drones. However, I think people at Facebook in 2012-2019, myself included, had similarly outsized ambition for leaving a mark on culture. Working at Facebook gave us front-row seats to a changing world. 2012 was the first year that a majority of Americans owned a smartphone. Now, 92% of us carry a supercomputer in our pockets. Social media was undeniably popular in 2012: Facebook had a billion monthly active users when I joined the company. When I left, a billion and a half people used the service every single day. Everyone at Facebook in that era played a part in the twin trends of the rise of mobile computing and social media. At the time, it felt awesome. Feeling humbled by brilliant coworkers was a small price to pay.

    “At the time it felt awesome” — you can see the shift coming in the narrative. Reflecting on my time at Facebook, I now think it’s more fun to be striving for success than to actually be successful. In the “striving” phase you get to keep daydreaming of how awesome it will be when you finally achieve your goals, and you get to wildly celebrate all of the milestones along the way. Once you have achieved success, though, you become a target, and you’re constantly playing defense. For Facebook: the more popular it became, the bigger a target it became for spammers, scammers, liars, and just plain evil people. 2012 Facebook felt like it was in the “striving” phase. The company was popular, but also mocked as being trivial: People posting cat photos and pictures of their lunch. Fun, but unimportant. By 2016 it felt like the company was firmly in the “wild success” category. By this point it was clear that Facebook had fended off challengers like Google+ and Twitter, and the company had reached the “1 billion daily active users” milestone. The twin revolutions of mobile phones and social media had changed the entire world, not just rich and developed countries. I remember reading a 2016 article by Craig Mod in The Atlantic that covered how farmers in Myanmar used Facebook. Everyone in the country was getting phones, every phone had Facebook, and Facebook was how everyone found out what was going on in the world. But Facebook’s “wild success” brought tragedy and atrocity in this case. The military in Myanmar promoted hatred against the minority Rohingya on Facebook and used online hatred to fuel their 2017 genocide. In 2018, a United Nations report stated, “The role of social media is significant. Facebook has been a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the Internet.”

    Stories like that helped bring my time at Facebook to an end. While I was able to stay absorbed in technical problems during the day, my conscience was uneasy each evening. In 2018, when I found out Duolingo was hiring for their new Seattle office, it was a pretty easy decision to reach out and apply. I still miss 2012-2016 Facebook, though; it was a magical time with magical people and one of the highlights of my career.


    1. Considering that the iPhone SDK was only four years old at that point, I guess nine months of professional iPhone development experience was pretty substantial.
    2. Hypergrowth: There were around 4,600 employees when I started in 2012, and about 44,000 when I left in 2019.
    3. I’ll let you decide how much of that self-confidence is earned versus unearned.

  3. March 06, 2024

    I knew Duolingo was a different kind of tech company the day I interviewed there in December 2018. Before COVID, interviews were in person, which meant I had to fly from Seattle to Pittsburgh to interview. That was the first sign that things were different: Pittsburgh?! Duolingo isn’t just outside the West Coast tech bubble, it’s not even in the same time zone. At the time I worked for Facebook, whose Menlo Park headquarters was like Disneyland for computer nerds, and if you made the hour-long drive (traffic!) to the Instagram San Francisco offices you’d pass one gleaming tech giant campus after another. Duolingo, in contrast, was tucked away on the third floor of an old furniture showroom in a city known more for steel mills and stone churches.

    The “Do Your Duolingo” Super Bowl spot. Seriously, what kind of company would do this at the Super Bowl?

    True, once you got inside the Duolingo office, it looked like a tech company: Open floor plan, foosball tables. The interview was much like a normal tech interview; I spent hours answering technical questions on whiteboards and laptops. While Duolingo employees spent most of the time asking the questions, I was also “interviewing” the company to see if it was the kind of place I wanted to work. I saw two things that day that made me want to give Duolingo a try if they gave me an offer. First, at 12:30, Chef Rick rang a bell, and everyone in the office, including myself and my “lunch buddy,” headed to the cafeteria to have lunch together. The lunch was a full hour. No hurried eating at desks. People who wrapped up early and didn’t want to talk would do things like head to the aforementioned foosball tables, or perhaps go into one of the conference rooms to solve the day’s New York Times crossword together. Second, I happened to be interviewing on the last day before Duolingo’s winter break, when they close the office for two weeks in December to let everyone start the new year refreshed. My impression from these two facts: Duolingo seemed to be the kind of place that fundamentally respected the humanity of the people who worked there. It hadn’t let tech-bro-hustle-culture take over.

    While it may not be tech-bro, Duolingo does make money. I remember a company Q&A with Duolingo’s CEO, Luis von Ahn, that happened during a celebration of the company’s first 10 years. Someone asked Luis if there was anything he wished he’d done differently in Duolingo’s history. His answer: “I wish we’d figured out how to make money sooner.” He went on to explain that it was only after they’d found and proven they had a successful business model that they could focus on growing and hiring the artists, designers, curriculum specialists, learning scientists, TikTok stars, and engineers needed to build the zany yet effective product the world now loves.1 If Duolingo had cracked the problem of the business model sooner, then perhaps we’d be even further along making it fun to learn on your phone.

    Duolingo’s business model is perhaps the most innovative and important thing about the company. There are plenty of EdTech companies in the world and plenty of language learning companies. There are plenty of nonprofits trying to improve the world through better education. Other companies do “gamification” more aggressively than we do. Duolingo’s found a unique winning formula: Develop educational material that people find valuable, give it away to reach the most people and do the most good, but also make it so fun and engaging that people fall in love with the product and happily pay so they can use it more. One thing I’ve come to appreciate in my time working here is just how magical it feels to work somewhere that has figured out a good way to make money.

    Another thing working at Duolingo has taught me: I seriously undervalue “fun.” Because I’ve always loved learning, I forget that not everybody has the same intrinsic motivation to study new things. I’m pretty sure that if I had tried to start a company like Duolingo, my product would have been serious, felt serious, would have attracted a few dozen people who take learning as seriously as I do, and would have failed miserably. Learning can be fun and effective, but you need to pay attention to both the “fun” and “effective” parts. I naturally pay attention to “effective,” and I love how being at Duolingo has surrounded me with coworkers who are great at the “fun” part.

    I won’t pretend Duolingo is perfect. It’s not easy having an owl for a boss; it’s kind of nasty when the Big Green One throws a pellet. However, I just celebrated my fifth anniversary at Duolingo, and I feel lucky to have found a place to work that lines up so well with my values. Now, go do your Duolingo!


    1. It’s important to get this right; in 2011-2012, I worked for a small company that had a great product, strong brand, fun team to work with… but it was too hard to make money, and that company no longer exists. RIP, Urbanspoon.

  4. February 26, 2024

    I don’t remember many of the books I read in the 1990s, but Beloved still haunts me.

    Cover images of Beloved and Frederick Douglass

    I hope you haven’t read Toni Morrison’s Beloved yet, so you can experience it for the first time. I realize this is unlikely. In the twenty-five years since I first read the book, it’s apparently become a staple in high school English classes. But if somehow you haven’t read it, I encourage you to pick it up, and do your best to avoid spoilers beforehand.1 Be warned, though, it’s not an easy read. It is a novel about slavery, so there is almost nothing happy in the book. It’s also written in a challenging style. The plot is nonlinear, jumping from character to character and time to time. Many things are hinted at rather than being explicitly stated. As a result, it demands careful attention just to figure out what’s going on. Your reward for being a careful reader is an emotional gut-punch when you piece together the “Misery” at the center of the book. I know this doesn’t sound like much of a recommendation — but Morrison’s approach to writing about slavery creates a story that will stick in your brain for the rest of your life.

    If you have read Beloved, then I encourage you to reread it at some point. My second reading of Beloved was a different experience because I already understood the main arc of the story. I did not need to spend my attention trying to figure out the basics of what was going on. Rereading, I was surprised to see how Morrison lays out everything in the first five pages. The key plot points, the major themes, the most important characters: It’s all there, quick as can be — just not as clear as can be in your first encounter with the novel. Freed from needing to devote most of my attention to piecing together the plot, I was able to wrestle with this question: Why does Morrison write her book in a way that makes the reader work so hard just to figure out what’s happening? Was this just a cheap trick to hide the main plot development of the story and reveal it later for maximum dramatic impact? You could imagine Morrison writing a different novel: A simple, straightforward, linear narrative that walks the reader clearly through the events that happen to her characters. You would read about the dehumanizing and brutal treatment of slaves by enslavers, the characters’ risky escapes to freedom, and how life unfolds first in a world governed by the Fugitive Slave Act and then after Emancipation.

    In fact, you don’t have to really imagine this book, you can read it: The Narrative of the Life of Frederick Douglass, An American Slave covers much of the same territory as Beloved. Douglass describes how the institution of slavery destroyed the family life of the enslaved, the brutal treatment of the enslaved, the soul-destroying inhumanity of the entire system, and Douglass’s escape to freedom. Douglass’s Narrative asks for empathy from its readers but does not make its readers work to piece together simple facts.

    While the two books cover much of the same material, they were written for very different audiences, and I think the different audiences explain their different structures. Douglass’s book was published in 1845, when slavery was still very much alive and a large share of the white population — his book’s intended audience — viewed enslavement as the necessary and right way to provide strong guidance to people of an inferior race that could not otherwise run their own lives. His book’s goal was to either convince or remind people of the shared humanity of black and white in 19th century America. By making his narrative easy to follow, Douglass was able to reach the widest possible audience, and he could use what he wrote to prove to a skeptical audience that a former slave could match or exceed anybody in intelligence and eloquence. Morrison is writing for people seven generations later, when slavery has passed out of living memory, when the biggest danger is that we let ourselves forget this part of our history, and when she can be confident that readers believe that the enslaved are fully human. This last bit lets Morrison do something that Douglass could not: She can try to remind people that the barbaric conditions of slavery would break a normal human being, and trust that her readers will know that a traumatized human is still a human. She seems to know that Douglass could not have this level of trust in his readers; in the only part of Beloved told from a white character’s point of view, she shows how the character Schoolteacher, a slaveowner, views being traumatized by savage treatment as something characteristic of animals, not people:

    Schoolteacher had chastised that nephew, telling him to think – just think – what would his own horse do if you beat it beyond the point of education… Suppose you beat the hounds past that point thataway. Never again could you trust them in the woods or anywhere else.

    Douglass couldn’t talk about this part of how enslavement affected the enslaved without inviting this kind of dehumanizing comparison to animals in the minds of his audience. Morrison can. Every character in the novel is broken in some way by slavery, and she shows us that broken narrators will produce a broken narrative. This is what Morrison wants us to experience as readers. She trusts that we will view this as a sign that her narrators are fully human and not that they are sub-human. The difficult nonlinear storytelling and hints-instead-of-explicit-narration are not tricks to hide the point of Beloved — they are the point of Beloved.

    I write all of this knowing that you have probably read Beloved once, because it is so widely taught in schools. However, it is also one of the most widely challenged and banned books from school curricula and libraries because its themes are so unpleasant to think about.2 This is ironic, because one of the major themes of the book is how people will sacrifice memory to protect themselves from pain. One of the earliest lines of dialog in the book comes when the grandmother “Baby Suggs” complains to her daughter-in-law Sethe that she only remembers one minor detail about all of Baby Suggs’s eight children that had been taken from her because of slavery. Sethe’s insightful response: “That’s all you let yourself remember.”

    I understand why people want to avoid thinking about the topics raised by Beloved. I empathize with parents who want to protect their children from the brutality of the world. I want my kids to live in a world free of violence and oppression, too. However, I also believe we must remember the mistakes of the past so we can be vigilant against repeating them. I believe the memory and condemnation of past wrongs is an important tool to help us improve the way we treat our fellow humans. Let yourself read or re-read Beloved; let your high school children read Beloved. Let yourself remember.


    1. Amazingly, in this era where we “spoiler alert!” everything we talk about, if you Google Beloved and read anything about it, you’ll find articles that reveal, like it’s no big deal, the key plot point that the book carefully unfolds. So: if you haven’t read Beloved, don’t Google it. Just get a copy and start reading.
    2. The fight over teaching Beloved in my home state of Virginia likely cost former governor Terry McAuliffe his job in 2021. Though I’ve lived in Washington for decades, I think of myself as a Virginian, so this feels personal.

  5. February 19, 2024

    At Duolingo, we just finished our twice-yearly performance review cycle. Duolingo has a fairly well-honed performance system that operates like those at many other Big Tech companies. Every employee has a “ladder level.” For the common roles in the company, like engineering, there is a thorough guide to what’s expected at each level along different dimensions. For example, engineers at Duolingo have expectations around technical ability, ability to ship, impact, and communication / leadership. As you advance in your career from one level to the next, the expectations increase. While no system is perfect, I appreciate the work that Duolingo does to clarify expectations and demystify promotions.1

    A man appears to be hanging from the gutter of his house with his ladder fallen off to the side.

    Be careful on that ladder.

    Since ladder levels work so well to help motivate people to advance in their careers, I wondered: What would it look like to create “ladder levels” for being a good human being? I’m no philosopher and I’ve spent maybe an hour total repackaging uncontroversial self-help clichés… but if you have also just finished a performance cycle and want a reminder that there’s more to living well than getting that next promotion at work, for your enjoyment I present the Dewey Ladder Levels for Living (version 1.0):

    Level 1 (“Hot mess”)
    • Wealth: You depend upon others
    • Health: You take your health for granted
    • Community: You are searching for community and belonging
    • Purpose: You do what others tell you to do

    Level 2 (“Junior adult”)
    • Wealth: You are self-sufficient
    • Health: You realize that your actions can impact your health but don’t make significant changes
    • Community: You are a member of a community
    • Purpose: You are questioning what others tell you

    Level 3 (“Adulting”)
    • Wealth: You support others
    • Health: You regularly engage in activities to support your health
    • Community: You are a leader in your community
    • Purpose: You have identified your own purpose

    Level 4 (“Killing it”)
    • Wealth: You employ others
    • Health: Healthy habits are seamlessly integrated into your life
    • Community: You are a pillar of your community
    • Purpose: You are living your purpose

    While I didn’t put a ton of thought into creating this, I do want to share a few things that went through my head while I did so.

    • Every Big Tech company I’ve worked at has a “terminal level,” which sounds much worse than it is — it’s the minimum level that every employee is expected to get to.2 Not making it to at least the terminal level within a certain timeframe is a sign of a performance problem, and once you make it to the terminal level, it’s OK (from the company’s perspective) if you never get another promotion.3 Without really planning for it, I wound up with the same rough structure in my Ladder Levels for Living. In my mind, everyone should strive to get to the “Adulting” level.

    • I’ve been in many performance and career development conversations with people over the years, and I’ve had this conversation repeatedly: There comes a point in your career where growth comes less from developing your skills and more from helping develop the skills of the people around you. I don’t think I nailed this in my first draft of Ladder Levels for Living, but it was going through my head: I tried to create levels that have a point where growth is less inward-focused (“have I gotten better?”) and becomes more outward-focused (“have I helped others get better?”).

    I found this an amusing exercise that gave me a little perspective at the end of the company performance review cycle. I also admit I got kind of uncomfortable writing all of this down… I realized I’m not doing well in some of the dimensions that I know are important for living well, and I’m also scared of trying and failing to improve them. So, if you try a similar exercise of creating your own ladder levels and find yourself a little uncomfortable: I see you.


    1. We’re hiring.
    2. In my era at Microsoft, it was Level 64; at Facebook it was E5; at Duolingo it’s Senior Software Engineer.
    3. Each company’s ladder goes beyond the terminal level because tech companies are staffed with overachievers and it helps to give us something to strive for.

  6. February 12, 2024

    I remember standing in the kitchen of our old house about fifteen years ago. My kids were roughly 3 and 1. It was nighttime, and I was exhausted, the kind of exhausted you only experience after you’ve been chronically sleep deprived for months because you have an infant in the house. Our youngest son Patrick was supposed to be sleeping, I needed him to be sleeping because I needed to be sleeping, and the tricks I’d learned for getting our older son to sleep weren’t working on him. They rarely worked on him. And then I remember a flash of profound insight that I carry with me to this day: Oh, of course.

    Phrased that way it doesn’t make sense, much less sound profound. “Of course?”

    But here are all of the thoughts that went into that phrase. If you had asked me, before I had children, whether I expected my two children to be the same, I would have answered, “Of course not.” I knew that. So did it make sense to expect that all of the techniques I picked up parenting my first child would work with my second? No, of course not. Yet without realizing it, I had just been assuming that my kids were the same, that what worked with the oldest would work with the youngest. With that context, the full thought that went through my head was closer to: I have two different kids. Of course I have to adjust what I’m doing.

    My “oh, of course” thought was immediately followed by this one: “This is the difference between knowing something intellectually versus knowing something emotionally.” My 20-year-old self, single and without children, could tell you that you shouldn’t expect your kids to be the same. I knew that intellectually. But I didn’t know how helpless it felt to be struggling with frayed nerves, desperate for sleep, and realizing that the wisdom I thought I had earned from keeping one child alive for three years was worth a lot less than I thought because “my kids weren’t the same.” That’s emotional knowledge.

    “Emotional” might not be the right word, but it’s close. When I talk about “knowing something emotionally,” I’m talking about something primal that seems grounded in the body instead of in the brain. It’s the difference between thinking, “If I skip eating breakfast, I’m going to be starving at lunch” and actually being starving at lunch. The first is an abstract thought in your brain; you could ignore it if you wanted. The second is a demanding reality coming from your body that fights back when you try to ignore it. “If I have more than one child, they’ll be different” is an abstract thought. “I’m crazed and at my wits’ end because I can’t figure out how to take care of my infant, and everything I learned from raising my toddler is useless” was an unignorable feeling grounded in my chest.


    That night in the kitchen was not the only time I have been surprised at my emotional response to what I thought were dry facts, just the first time I noticed it. What particularly embarrasses me are the times I realize that someone’s been trying to explain something that they know emotionally, but I just didn’t get it at the time. For example: in 2017, I took a business trip to Beijing, and I had a day by myself before meetings started. I wandered through the tourist areas on my own, like I’ve done in countless cities… but this time I stood out as a foreigner and was repeatedly approached for one scam after another — strangers constantly in my personal space, trying to get me to go one place or another, trying to get me to ride in their rickshaw… After extracting myself from the infamous “Tea House” scam, I was surprised at how strongly and suddenly I felt: “I like being new places and seeing new things, but I don’t like being a target. This isn’t fun any more. I’m just going to spend the day in my hotel room.” That’s what I did. At the start of that bad day, I knew intellectually that I would not look like most people on the streets of Beijing, but I did not know what that would feel like.

    I wish I knew better ways to predict how I would respond to situations, and I wish I knew how to understand others’ knowledge at an emotional level, not just an intellectual level. But so far, my emotional knowledge obeys three fairly limiting laws. First: my emotions can be felt in the moment (“I’m uncomfortable because I’m out of my element and repeatedly targeted by strangers”). I can’t always predict what things will feel like, and even as an adult I find myself occasionally floored by the realization, I’ve never felt this way before. (Parenthood brings lots of those.) Second: After I’ve experienced something, I can remember how it felt (“I was uncomfortable wandering the streets of Beijing alone”). Finally, I can make guesses that some other situations might feel like things I remember (“Maybe women catcalled on the street feel like I did when I was alone in Beijing”). I’ve found no way to short circuit the process and have other people give me the whoa, I’ve never felt this way before sensation — I can’t pick up others’ emotional knowledge.

    Because of this, I try to be humble about my ability to predict how I’ll feel about a new situation, and try to be patient with myself when I get it wrong. I try, but don’t always succeed, to bring that same humility and patience to other people’s emotions. Just as I can’t always predict how I will react to a situation until it happens, I know intellectually that different people can have different emotional responses to the exact same situation. But it can still feel jarring to have a situation provoke one response in me and a different response in someone else. I’m slowly getting better at recognizing when I’m tempted to be drawn into this fight: “You are not emotionally responding to this issue in a way I feel is appropriate! I’m going to try to convince you to feel the same way I do about this!” Because I am so full of humility, I call this The Dewey Truce. When the heart of a disagreement is about different emotional responses to something, I try to remember that my energy is best spent trying to understand the other person’s emotions (however flawed my empathy may be) rather than trying to convince him or her that my response is “correct.”

    When I was younger, I guess I just didn’t understand how emotions work, and since my job is mostly explaining things to computers, that gap wasn’t too obvious. Nothing I had read or heard prepared me for how the intellectual knowledge of “my kids are different” would at different times feel terrifying, frustrating, and delightful. I keep experiencing whoa, I’ve never felt this way before as I wander into new stages of my life — and this is one of the joys I’ve been finding about growing older. My life feels richer now because I can remember “all of the feels” I’ve picked up along the way.

  7. February 04, 2024

    We are currently living through a machine learning revolution. Over the past ten years or so, for more and more problems, it’s become practical to teach computers how to solve problems rather than program them to solve problems. I suspect that advances in machine learning will transform the craft of programming; what the next generation of computer professionals does 25 years from now may look very little like what computer programmers do today.

    Computers are just machines that use electricity to solve lots of math problems really fast. We take it for granted today that computers can be “programmed,” that you don’t have to design and build a new machine for each new problem you want to solve. However, people had to figure that out! In 1936, Alan Turing figured out that a very simple programmable hypothetical machine (the “Turing Machine”) could compute anything that it was possible to compute. In 1945, John von Neumann proposed one of the first practical designs for a programmable digital computer. Today, the phone in your pocket is pretty close to a “universal computing device” that just needs a new program to unlock entirely new capabilities. (Remember the early iPhone slogan “There’s an app for that”?)

    Because computers are “fast math machines,” programming a computer is basically about turning real world problems into solvable math problems. Take the photos app on your phone as an example. The programmers who created that app needed a way to represent colors with numbers, needed a way to mathematically describe an entire image from individual points of color, and finally needed to make sure that the math problems they were describing were solvable with the available computing hardware. Programming requires deep understanding: How do light, images, and human perception work? What are the capabilities and limitations of the hardware I am programming? Getting to this level of understanding about anything is one of the thrills I talked about when I described why I am still a programmer.
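
    To give a flavor of what “turning real world problems into math problems” means, here’s a toy sketch in Python (my own illustration, with made-up values; a real photos app is far more sophisticated): a tiny image represented as a grid of red/green/blue numbers, plus the standard luminance formula for converting those numbers to grayscale.

    ```python
    # A digital image as numbers: each pixel is an (R, G, B) triple, 0-255.
    # This 2x2 "image" is hypothetical data, purely for illustration.
    image = [
        [(255, 0, 0), (0, 255, 0)],      # red pixel, green pixel
        [(0, 0, 255), (255, 255, 255)],  # blue pixel, white pixel
    ]

    def to_grayscale(pixel):
        """Collapse one RGB pixel to a single brightness value using the
        standard Rec. 601 luminance weights (eyes are most sensitive to green)."""
        r, g, b = pixel
        return 0.299 * r + 0.587 * g + 0.114 * b

    # A "convert photo to black and white" feature is just math over the grid.
    gray = [[to_grayscale(p) for p in row] for row in image]
    print(gray)  # [[76.245, 149.685], [29.07, 255.0]]
    ```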


    This picture of my dog was brought to you by MATH.

    However: Just like someone invented the notion of a universal computing device, what if it was possible to create a universal algorithm? That’s the promise of machine learning. Machine learning first took off with problems that we didn’t know how to solve with conventional programming. For example, consider the problem of recognizing if there’s a dog in an image. While we know the math behind representing colors, nobody knows a simple mathematical function that can take all of the colors in an image as input and produce “dog or not a dog” as its output. If you can’t turn the problem into a math problem, you can’t program a computer to solve the problem. However, researchers discovered you could create a mathematical algorithm that takes tons of images as input, along with labels of whether the images contained dogs, and then outputs an algorithm that determines if an image contains a dog. It’s a crazy trick, but it works in this case: We don’t know how to write the algorithm that solves the “is this a picture of a dog” problem, but we can create an algorithm that can make an algorithm to solve it. This is the main machine learning breakthrough: With enough examples, computers can figure out the math on their own.1
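
    To make “an algorithm that makes an algorithm” concrete, here’s a minimal sketch using Python and scikit-learn (my choice of library, with toy stand-in features instead of real image data): we hand the computer labeled examples, and training derives a classification rule that we never wrote by hand.

    ```python
    # Machine learning in miniature: derive a dog-vs-cat rule from examples.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical stand-in features for real image data: [weight_kg, barks?]
    X = [[30, 1], [25, 1], [40, 1], [4, 0], [5, 0], [3, 0]]
    y = [1, 1, 1, 0, 0, 0]  # the labels we supply: 1 = dog, 0 = cat

    model = DecisionTreeClassifier().fit(X, y)  # "training"
    print(model.predict([[28, 1], [4, 0]]))     # [1 0]: learned, not programmed

    # The output of training is itself an algorithm, a rule the computer derived:
    print(export_text(model, feature_names=["weight_kg", "barks"]))
    ```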

    I deliberately use the term machine learning instead of artificial intelligence. Artificial intelligence invites too many hard-to-settle philosophical debates about the nature of intelligence and focuses the conversation on the current quality of the output of programs such as ChatGPT and DALL-E. My focus is on the way humans program computers in the first place: Did someone write out, step-by-step, how to solve a problem, or did we let the computer derive its algorithm from examples? Debate if you will whether the output from ChatGPT or DALL-E exhibits intelligence — it is unambiguous that they were created via machine learning and not conventional programming.

    For two reasons, I expect machine learning will become a more widespread technique for programming computers. The first is as close to a sure bet as exists in this industry: Hardware will become more powerful and cheaper, making it practical to apply machine learning in more and more cases. I’m less confident in my second reason: I think we’re close to solving the biggest obstacle to the widespread use of machine learning, the “training data problem.” Machine learning requires a lot of data and a lot of computing resources. Training GPT-4 supposedly cost upwards of $100 million — this is not an expense you could afford on most software projects. However, you can take a general-purpose machine learning model and fine-tune it to solve the problem you face with significantly less time, data, and hardware than it took to create the general-purpose model. Fine-tuning a general-purpose machine learning model could become a key activity in most software projects. Plus, state-of-the-art machine learning models are becoming useful for a wide variety of problems without further fine-tuning. A common task when faced with a simple programming problem is to see if ChatGPT can already solve the problem for you.
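
    As a sketch of what fine-tuning can look like in practice, here’s a minimal example using PyTorch and torchvision (my choice of framework, with random tensors standing in for a real dataset). It reuses a network pretrained on millions of generic images and retrains only a small new final layer, which is why it needs far less time, data, and hardware than training from scratch.

    ```python
    # Fine-tuning sketch: reuse a general-purpose pretrained model.
    import torch
    import torchvision.models as models

    # Load a model pretrained on ImageNet and freeze its general knowledge.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False
    # Replace the final layer with a small new one for our specific task.
    model.fc = torch.nn.Linear(model.fc.in_features, 2)  # e.g., dog / not dog

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    # Hypothetical stand-in for a small labeled dataset of 224x224 RGB images.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 2, (8,))
    for _ in range(3):  # a few quick steps, not a $100 million training run
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    ```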

    The increasing practicality of machine learning makes me wonder how much longer the craft of programming as I know it will remain relevant. Will it remain a valuable skill to break apart problems in the real world into simple math problems that can be easily solved by computer? Or will computers’ increasing ability to “figure out the math from examples” render conventional programming ability as valuable as darkroom skills after the invention of digital photography? We’re already firmly in an in-between world, the dawn of the era of the “cyborg programmer”: Computers cannot yet easily write most programs on their own, but machine learning techniques are making human programmers more productive through things like ChatGPT and GitHub Copilot. I don’t see the pace of innovation in machine learning slowing. Things that used to be impossible or impractical for a computer will become both possible and commonplace. If I were starting my career in software today, I would make sure to have a firm grounding in machine learning.


    1. If you’ve heard people talk about “training a machine learning model,” this is what they’re talking about. Producing an algorithm via machine learning has a couple of drawbacks compared to figuring out the math on your own. First, the model won’t be perfect. We’ve all seen computer image recognition and face recognition make mistakes, for example. Second, we often don’t understand how or why the model works.

  8. January 23, 2024

    Dear youth of today,

    It’s OK if you don’t know what you want to do with your life. For example: When I was young, I resisted computers as long as I could.

    A bit unusually for someone of my generation, I was exposed to computers and programming at a young age, enough to know programming came easily to me. But because it came easily, it was also a little boring. Also, if you’re younger than Generation X, you probably don’t remember there was a time before computers became entangled with every aspect of society and culture, a time when computers were clearly separate from “normal life.” Growing up in the 1980s, most households didn’t have computers, because why would you? They were novelty items, maybe something your parents used at work… but you talked to your friends on a landline, looked up movie times in the newspaper, discovered new music from the radio and purchased the songs you couldn’t live without from a record store, did your reading in paper books that you found in a library by searching through a card catalog. Yes, programming came easily to me; but why would I want to spend my days working on computers?1

    Instead, the dream of my early teenage years was to grow up and be a writer. I’d had my nose in books for many of my formative years (mostly fantasy & sci-fi), and the thought of spending my days creating worlds in my brain and then bringing them to life for others — wow, that’s still an intoxicating vision for me! (Gives me chills to this day.) When the time finally came to pick a major in college (at nineteen! we ask people to decide their future when they’re so young), I was torn between knowing I’m good at computers but wanting to write. Incapable of making a decision, I majored in both computer science and English.

    I definitely grew as a reader thanks to majoring in English. My pre-college reading diet of fantasy & science fiction taught me to use books for escapism and as a way to imagine what kinds of worlds we might be able to build someday. In the years I haunted the English department in Tucker Hall at William & Mary, I read, absorbed, and was influenced by so many things that hadn’t been part of my earlier reading diet: the King James Bible, Milton, Zora Neale Hurston, Gabriel García Márquez, Shakespeare, Boethius, Tolstoy, Marlowe, Edith Wharton, Walt Whitman, Adrienne Rich, Harriet Jacobs, Frederick Douglass. I learned to love how reading old books connected me to the past, and I learned to love reading books that let me touch the minds of authors of different backgrounds than mine. Being an English major taught me how to use books to make my world bigger.

    However, the dream of being a writer slipped away from me in phases. While still in college, I realized I wasn’t good enough at writing fiction. I then thought I could combine computers and writing by becoming a professor (they have to write a lot, right?), so I headed to grad school in computer science at the University of Washington. While there, I still wondered: Maybe there was a niche for me as a writer who could understand technical topics well and then explain them to a mass audience? I applied for an internship at The Economist to be a science writer (didn’t get it) and read all the materials I could find about the National Association of Science Writers. The thing that first lured me away from the University of Washington was a job opportunity to be a technical writer for Microsoft.

    While I earned a paycheck as a writer for a few months, that technical writing job killed my dream of being a professional writer for good. I could no longer avoid these facts: While I enjoyed writing, it was hard for me, and what I could produce wasn’t different enough from all the other writers in the world. In contrast, with the way my brain works, everything about software is easy. That technical writing job also taught me that I didn’t want to write about things that other people got to invent: I wanted to play a part in inventing the things. I switched jobs in Microsoft from being a technical writer to being a program manager… and here I am, 25 years later, still doing the computer thing.

    This isn’t the path I’d planned for my life, but I’m really happy with how my career has turned out. As I wrote about two weeks ago, it’s nothing short of magic. I have no regrets about my meandering path, either. The time I spent as an English major, the effort to find a science writing job, the time I spent as a writer for Microsoft: If I hadn’t spent all that time early on trying and failing, I don’t think I could have come to peace with my home in Big Tech. I would have always wondered, “what if?” Now, I still get to live a life enriched by books, and since I don’t need to write for a living, I can focus on writing for fun.

    T.S. Eliot once said, “Some editors are failed writers, but so are most writers.” In that spirit, I can proudly sign off,

    —A failed writer


    1. Also, if you meet a Gen Xer who’s in the software industry, like myself, rest assured that we did not get here because society lured us with promises of riches and status. For those of us who grew up in the 80s, this is what we thought would await us if we ventured too close to the computer lab.

  9. January 16, 2024

    I read a lot, and most of the books I read slide through my consciousness without leaving a clear mark on my mind. Every now and then, though, I read something that makes me stop and say, Whoa. I’ve never thought of it that way before. One of those books was Annie Duke’s Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts. In this book, Duke draws on her successful career as a professional poker player to explain different decision-making tools. She devotes an entire chapter to “mental time travel,” and this is the example that blew my mind:

    Imagine you are standing on a narrow strip of concrete on the shoulder of the highway. Behind you is your car, hazard lights flashing. The rear tire on the driver’s side is shredded. It is fully dark and the drizzle has turned into a cold, heavy downpour. You’ve called roadside assistance, twice, and both times (after long hold times) spoken with operators who’ve told you someone will arrive “as soon as they get there after responding to your call.” You decide to change the tire yourself, only to discover you have no jack. You’re soaked to the skin and cold. How does it feel? It likely feels like the worst moment of your life… You are miserable and you can’t imagine feeling any other way.

    Yeah, I’d say that would be pretty miserable. But then she continues:

    That’s how it feels in the moment. But if the flat tire had happened a year ago, do you think it would have an effect on your happiness today, or your overall happiness over the past year? Not likely. It likely wouldn’t cause your overall happiness to tick up or down. It would probably have faded to a funny story (or a story you try to make sound funny) told at cocktail parties.

    That’s the power of mental time travel: Projecting your brain forward in time can help you keep perspective and realize that many of the things that seem urgent today will probably fade to insignificance in ten months or ten years. Duke recommends practicing mental time travel in both directions: Not just imagining the consequences of things on different future time horizons, but also looking back into the past. After reading this passage, I looked into my own past and asked myself, What things did I do ten months ago that have a major effect on my life today? What about ten years ago?

    About ten months ago, I obsessed over online reviews of different espresso machines, comparison shopping to try to get the “right” one. Did the espresso machine brand I wound up picking have a major effect on my life today? Not really. Maybe I should remember that the next time I start thinking a kitchen gadget will change my life.

    About twelve years ago, I rebooted my career. I switched from being a program manager at Microsoft to being an iOS developer at a small company. Does that decision have a major effect on my life today? Absolutely — it’s clearly the best decision I made in that era, and I’m still reaping the rewards today.

    But it’s not the best decision I’ve ever made, not by a long shot. For that, I have to go back 23 years. I can draw a line from every good thing in my life today back to my wife Molly and our marriage. We met as grad students in the Computer Science department at the University of Washington. Our relationship wasn’t love at first sight. I was kind of obnoxious to her at our first meeting. Molly politely but clearly turned down my romantic overtures after our disastrous first date. I’m still not sure what caused her to change her mind about me. She told me once that one thing she liked about me back then is that I didn’t care about sports, so maybe that was it?

    Molly rescued me from my… questionable fashion choices. I’ve always been a bit of an absent-minded professor type with poor impulse control; our household finances would be in shambles if Molly hadn’t taken over the bank account. Everything in my life is easier because I have a partner I completely trust to share burdens with. Everything in my life is more rewarding because Molly and my kids are by my side to share the good times with.

    Here’s the crazy thing. It’s when I do mental time travel from today, reflecting back on my life, that it’s so obvious that marrying Molly is the single best decision I’ve made. But then? When I started dating Molly, I remember thinking that I didn’t want to get married. Only old people were married, and I wasn’t ready to be old.

    One limit of my mental time travel is that I can’t remember what changed. After I started writing this essay, I re-read my journals from those years, and they’re frustratingly short on insight. In late 1998 I made a note that an ex-girlfriend thought it would be funny if I got married because it would mean I’d gotten over my “phobia” (her words). Around a year later, I was writing about how Molly and I were shopping for a ring. I have no recollection and no record of what changed in me during the intervening time. Maybe I just grew up? Maybe I was smart enough to realize I’d found someone special and it would be crazy to let her drift away? Because my memory is terrible, I don’t trust that I’ll ever know.

    However, marriage is a lot more than the first “hey, let’s get hitched!” decision. Every day since our wedding, Molly and I have faced a new choice: Do we stay married? Or put another way: Do we keep living our lives so we’re worth being married to? Do we continue to make changes in our lives so we can be better partners to each other? Of course it hasn’t always been easy; our lives have changed so much since those days in grad school, and we could not predict the stresses that careers and parenting would have on our lives. Because marriage is an ongoing effort, though, it doesn’t matter that I can’t clearly remember what finally changed in my young brain to convince me to say I do in October 2000. What matters is that I’ve remembered, in each of the 8,487 days since our wedding day, that I was lucky to find my life partner when I was young and that I better not screw things up. (And I’m lucky that she has forgotten in each of those 8,487 days that she can do so much better than me!)

    PS, it’s Molly’s birthday today. If you see her, tell her Happy Birthday and remind her she’s the best thing that’s happened to me.

  10. January 11, 2024

    Once upon a time, when I was a brash and headstrong program manager in the Windows division of Microsoft, I got used to being the youngest person in the room in whatever meeting I was in. That era lasted a few years, then gradually turned into an even longer period when I was “about the same age” as most of the people I was working with. But that period is behind me now, too. For some reason, the software industry favors youth, and now with my gray hair and reading glasses I’m often decades older than the people I’m working with. I’ve started to realize that I’m a bit of an outlier for persisting in this industry for so long, and an idea has been tickling the back of my brain for a while: starting a series of blog posts called, “I’m an old person in the software industry. Ask me anything!”

    So here’s my first stab at it, and I’m going to answer a question that someone asked me at the end of a Duolingo interview. I’d given the candidate a quick outline of my career (13 years at Microsoft as a program manager, then a mid-career shift to becoming an iOS engineer, which led to a year at a startup, 6.5 years at Facebook, and now 5 years at Duolingo). He asked me, quite bluntly: “Why are you still a programmer?”

    The candidate wasn’t from the United States, and the impression I got is that the programming jobs in his country were poorly paid, low-status jobs that you took as a stepping stone to something better, like becoming a manager. So that’s one factor in my answer to the question: I’m still a programmer because in the US tech industry, I can afford to be. I’m not in the position of painters and poets, those who are torn between a job they love and a job that can support their families. If, like me, you love programming, you can do it your whole life.

    But why do I love programming? Two reasons. First, being a programmer is as close as you can come to being a real magician in this universe. This isn’t the magic of smoke, mirrors, and sleight of hand. This is magic like people dreamed of in the old days: You make things happen just by saying the right words in the right order. Through your willpower and incantations you direct supernatural demon forces to do your bidding. This isn’t a tortured analogy; the electricity that flows through our computers is pretty close to a demon force, and we control it merely through the words that we write in the form of computer code. Willing something into existence through words alone is an amazing feeling.

    The second reason I love programming goes even deeper into my personality. One of the greatest pleasures I get is when I understand how something works. That’s why my original career goal was to teach: I thought helping students understand things would multiply the joy I got from understanding things the first time. I abandoned teaching as a career when I realized I liked building software way more than I liked being an academic. In a way, though, my teaching aspiration just morphed a little. Instead of explaining concepts to people, I spend my days “explaining” concepts to computers. Writing the precise instructions needed to get a computer to do anything provides the same kind of feedback of “do I really understand this” as trying to explain it to another human.

    My career in software has had its downsides. I’m jealous of people who get to work in hospitals. There are times it strikes me how trivial everything I do is, especially compared to the doctors, nurses, and technicians who actually help people. I’m jealous of people who have practical skills like carpentry. I have remarkably little ability to manipulate the physical world, and sometimes it feels like the only thing I can do well is think and type.

    I’ve made my peace with those downsides, though. Instead, every day I am amazed I get paid to understand how things work and then explain what I’ve learned to a computer. It’s a thrill, and I don’t know if I’ll ever get tired of it.

  11. September 05, 2023

    Update: Found a workaround. Minutes after publishing this, I was reading Chris Eidhof’s presentation on SwiftUI, and noticed he uses ZStack to wrap his conditional SwiftUI statements where I’ve been using Group. On a whim I tried changing Group to ZStack in my repro code below… and it solved the problem of the disappearing toolbar buttons. I’m relieved there’s a workaround to this bug but it’s still been a bit of a maddening experience.

    Original post:

    I really want to like SwiftUI. I like the way it manages state, and when it works, it feels magical. The problem is it doesn’t work reliably. For example, I spent way too much time banging my head against the wall managing toolbars in a SwiftUI app that targets the Mac. I finally distilled things down to this: If you have a NavigationSplitView, and have toolbar items associated with the detail: view, and that detail: view is conditional… then you’ll lose the toolbar items when the condition changes, and they never come back.

    For example, here’s what my simple app looks like when I launch it:

    [Screenshot: the app at launch, with toolbar items above the detail view]

    See those toolbar items over the detail view? That’s what we’re going for.

    But then click the Toggle Globe button and this happens:

    [Screenshot: after clicking Toggle Globe, the toolbar items are gone]

    The toolbar buttons have gone away and they’re not coming back. I haven’t figured out a great way to work around this. This bug only happens for SwiftUI apps that target MacOS. It doesn’t repro on iOS or in the “Scaled to match iPad” Catalyst mode. (All is not great on iOS, though. In my “real” app — as opposed to the toy app I wrote to isolate the disappearing-toolbar bug — the iOS app crashes when I try to install the toolbar. I haven’t yet isolated why it crashes. I don’t have time for this.)

    Every year I think, “Maybe this is the year I can rely on SwiftUI,” and every year I walk away disappointed.

    rdar://FB13106004

    Repro code

    import SwiftUI
    
    struct ContentView: View {
      @State private var showGlobe = true
      var body: some View {
        NavigationSplitView {
          Text("sidebar")
        } content: {
          VStack {
            Text("content")
            Button("Toggle Globe", systemImage: "globe") {
              showGlobe.toggle()
            }
          }
          .padding()
          .toolbar(id: "my-toolbar") {
            ToolbarItem(id: "trash") {
              Button("Test", systemImage: "trash") {
                print("Tapped test")
              }
            }
          }
          .navigationTitle("Content")
        } detail: {
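      // Workaround (see the update at the top of this post): swapping this
      // Group for a ZStack keeps the toolbar items from disappearing.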
          Group {
            if showGlobe {
              VStack {
                Image(systemName: "globe")
                  .imageScale(.large)
                  .foregroundStyle(.tint)
                Text("Hello, world!")
              }
              .padding()
            } else {
              Text("There is no globe")
            }
          }
          .toolbarRole(.editor)
          .toolbar(id: "my-toolbar") {
            ToolbarItem(id: "toggle-globe", placement: .automatic) {
              Button("Toggle Globe", systemImage: "globe") {
                showGlobe.toggle()
              }
            }
            ToolbarItem(id: "bold", placement: .automatic) {
              ControlGroup {
                Button("Bold", systemImage: "bold") {
                  print("tapped bold")
                }
                Button("Italic", systemImage: "italic") {
                  print("tapped italic")
                }
              }
            }
          }
        }
      }
    }

  12. August 21, 2023

    Today I learned another difference between iOS and MacOS: Dealing with attachments (specifically, images) in rich text. This is something I’m adding for my Permanent Marker project, which is a multi-platform UIKit / AppKit application.

    I got my first exposure to attachments while working on my Library Notes application. Library Notes does not have full rich text editing, but it does support embedded images, and that’s where I learned the following simple recipe for adding an image to rich text on iOS:

    1. Create an NSTextAttachment that contains the data for your image.
    2. Create an NSAttributedString that displays the attachment and insert it into the appropriate location in the rest of your rich text. (Rich text attachments are stored as an attribute applied to a special attachment character, NSAttachmentCharacter (U+FFFC). While you can manually insert that character and apply the right attribute yourself, it’s easier to use the NSAttributedString(attachment:) initializer.)
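    Here’s that recipe as a minimal Swift sketch (the function and parameter names are mine, and I’m assuming you already have the image bytes in hand):

    import UIKit

    /// Inserts an image into rich text using the two-step recipe above.
    func insertImage(data: Data, into text: NSMutableAttributedString, at location: Int) {
      // 1. Create an NSTextAttachment that contains the data for the image.
      let attachment = NSTextAttachment(data: data, ofType: "public.jpeg")

      // 2. Wrap the attachment in an NSAttributedString and insert it.
      let attachmentString = NSAttributedString(attachment: attachment)
      text.insert(attachmentString, at: location)
    }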

    Often I find myself inserting photographs into text, which presents a problem: Modern digital photos (even ones from cell phone cameras) are huge. Consequently, I’m almost never working with raw image data from disk or the network for my attachments; instead, I use CoreImage to resize the image before including it as an attachment. This works fine on iOS; NSTextAttachment has an image property that I can use for my resized image.

    Things are a little different on MacOS. First, and a little surprisingly to me, the designated initializer for NSTextAttachment on the Mac takes a FileWrapper. When I did initial prototyping of image support on the Mac version of Permanent Marker, this worked fine — I’d create a FileWrapper for my image file, pass the FileWrapper to the NSTextAttachment, and boom, there was an image.

    My surprise happened when I started resizing the image before trying to show it. I no longer used NSTextAttachment(fileWrapper:) and instead used NSTextAttachment(data:ofType:). Suddenly my images stopped showing up.

    And that’s when I learned about a MacOS-specific NSTextAttachment property: attachmentCell. If you actually want to display an attachment on MacOS, you need to create an object that conforms to NSTextAttachmentCellProtocol and assign it to the attachmentCell property. The built-in NSTextAttachmentCell class conforms to the protocol, and you can easily create one for an image using the NSTextAttachmentCell(imageCell:) initializer. Apparently the NSTextAttachment(fileWrapper:) initializer takes care of setting up the attachmentCell for you, so you only need to learn about this trick when you need to manipulate data directly rather than using data straight from a file.
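    In code, the MacOS fix looks roughly like this (a sketch, assuming imageData holds the already-resized image bytes):

    import AppKit

    let attachment = NSTextAttachment(data: imageData, ofType: "public.jpeg")
    if let image = NSImage(data: imageData) {
      // Without an attachment cell, the attachment renders as nothing on MacOS.
      attachment.attachmentCell = NSTextAttachmentCell(imageCell: image)
    }
    let attachmentString = NSAttributedString(attachment: attachment)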

    Working on Permanent Marker is giving me quite the education on the differences between UIKit and AppKit!

  13. August 14, 2023

    Today I learned: If you want to create a menu item in a SwiftUI app for the Mac that has a checkmark next to it and has a keyboard shortcut, you want to use the Toggle control in the menu.

    A screenshot of a menu with checkboxes

    I spent many hours confused because:

    • If you use a Button, you can easily get a control that has a keyboard shortcut, but there is no modifier that will give it a checkmark.
    • If you use a Picker, then you can get a checkmark, but you can’t give keyboard shortcuts to individual items in the picker.
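    Here’s the working version as a minimal sketch (the showInspector state is hypothetical):

    import SwiftUI

    @main
    struct DemoApp: App {
      // Hypothetical app state backing the menu item.
      @AppStorage("showInspector") private var showInspector = false

      var body: some Scene {
        WindowGroup { Text("Hello") }
          .commands {
            CommandMenu("View") {
              // A Toggle renders as a checkable menu item, and it can take a shortcut.
              Toggle("Show Inspector", isOn: $showInspector)
                .keyboardShortcut("i", modifiers: [.command, .option])
            }
          }
      }
    }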

    This little discovery highlights one of the problems I have simply learning how to use SwiftUI. Now, menus aren’t the most intuitive things in the world in AppKit. However, when you read the documentation for NSMenu, you can find out that it contains a bunch of NSMenuItem objects, and then find out these have a state property, and make a guess that setting the state property will change the checkmark.

    In contrast, when you read the documentation for the SwiftUI Menu, there’s no information at all about what kinds of content the Menu can contain. The documentation shows a Menu containing a Button and a child Menu. But you have no way to know you can also put a Picker or a Toggle or a Divider in there too… you just have to figure this out by trial and error and ChatGPT (which isn’t great because SwiftUI is so new).

    Anyway, putting this info out on the internet so the next versions of ChatGPT will be able to answer this question for future developers…

  14. August 01, 2023

    It’s been a while. What have I been learning lately?

    As an iOS engineer, I spend a decent chunk of my summer free time tinkering with side projects to get hands-on experience with the new APIs coming in the Apple ecosystem. The nice thing about side projects is I don’t have to worry about pesky things like “backwards compatibility.” I never really have a plan during these summer coding hours. I just work on things that seem interesting to me and that could benefit from the new APIs.

    So far this summer, my focus has been my Permanent Marker app. Things I’ve done:

    1. I adopted the new SwiftUI data flow APIs (@Observable and its ilk). My verdict: Easy to adopt, much easier to use than the old version, something I really wish I could use on projects that do require backwards compatibility.
    2. I’ve poked around in TextKit2. I think “TextKit2 on iOS 17 / MacOS Sonoma” is the answer to my how to do rich text lists question.
    3. I’ve turned the Mac version of Permanent Marker into a SwiftUI + AppKit app, not a SwiftUI + Catalyst app. It’s helped me learn how to make an app feel “right” when running on the Mac.
    4. I’ve started using Apple’s Swift Markdown package and I’m working on a rich text editing component for my Markdown files.
    5. I don’t know if I’m getting better at SwiftUI, but I keep re-training my intuition on the “right” way to do things in a SwiftUI app. For example: As I said, Permanent Marker now has a rich text editor component. This is a SwiftUI wrapper around UITextView / NSTextView. The scenario: How do I get SwiftUI actions (“someone tapped the bold button”) recognized by the UIKit / AppKit code? If I were writing this as a UIKit app, I could just send the toggleBoldface() message to the UITextView instance. In SwiftUI, though, even when you bridge to UIKit classes, you don’t have a natural way to manipulate the underlying UIKit objects. My first approach was to dispatch messages to the responder chain, knowing my UITextView instance would get a chance to respond to that message. But later, I stumbled on an approach that feels more aligned with the SwiftUI style (sketched below): I created an @Observable model object called CurrentSelectionFormatting, and I do things like toggle bold properties on that model object. I then update the text view in response to model object changes, not in response to messages. Is it better this way? I’m not sure. But it just feels more SwiftUI.
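    Here’s a rough sketch of that pattern (CurrentSelectionFormatting is the real model name; everything else is simplified for illustration):

    import SwiftUI
    import UIKit

    @Observable
    final class CurrentSelectionFormatting {
      var isBold = false
    }

    struct FormattingToolbar: View {
      @Bindable var formatting: CurrentSelectionFormatting
      var body: some View {
        // The SwiftUI side never talks to UIKit directly; it just mutates the model.
        Toggle("Bold", isOn: $formatting.isBold)
      }
    }

    struct RichTextEditor: UIViewRepresentable {
      var formatting: CurrentSelectionFormatting

      func makeUIView(context: Context) -> UITextView { UITextView() }

      func updateUIView(_ textView: UITextView, context: Context) {
        // The text view reacts to model changes instead of receiving messages.
        let base = UIFont.preferredFont(forTextStyle: .body)
        let descriptor = formatting.isBold
          ? (base.fontDescriptor.withSymbolicTraits(.traitBold) ?? base.fontDescriptor)
          : base.fontDescriptor
        textView.typingAttributes[.font] = UIFont(descriptor: descriptor, size: 0)
      }
    }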

    There’s still a lot I want to do with this app, but so far I’m really happy with the changes I’ve been able to make.

  15. April 05, 2023

    In my career, I’ve learned that there can be vastly different strategies for something you do only once versus something you get to do repeatedly.

    Suppose I have an unfair coin. It comes up heads 60% of the time and tails 40% of the time. You have $25. I’ll let you bet as often as you want, and you want to maximize your money. What do you do?

    I first encountered this problem on Paul Butler’s website, which has a simulator that lets you place bets over and over and tracks what happens to your bankroll. I did awfully at this. According to this reference from the website, a lot of people do terribly at this problem, so I don’t feel too bad.

    Here was my reasoning: If I only got to make a single bet, I figured the best strategy was to go all-in. If I won (60% chance), I’d have $50, but if I lost (40% chance), I’d have $0 — the expected value of my “all in” strategy was $30. You can check different values of what you could bet, and you can see that “bet it all” gives you the highest expected value if you know you’ve got a better than 50% chance of winning.

    I knew the wrinkle in this game was that I could place multiple bets, so getting wiped out was bad. If I lost, I’d want to have some money left to bet again and try to earn more money. I figured there was a 40% chance of losing, so I’d ratchet down my “bet it all” strategy by 40% and only bet 60% of my money.

    As I said, this strategy doesn’t work. Try it in the simulator. When you bet 60% of what you’ve got, the losses hit too hard. You have a 16% chance of two losses in a row, and that event leaves you with just 16% of your bankroll (you keep 40% twice in a row: 0.4 × 0.4 = 0.16), making it hard to make up lost ground. (A small bankroll means you can only make small bets, so even when you start winning again you’re not winning as much.)

    I’ll spare you the math, but it turns out the right amount to bet in this game is 20% of your bankroll. I was shocked that there was such a divergence between the optimum strategy if you could only make one bet (bet it all!) and the optimum strategy if you could make multiple bets (bet just 20%).
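    (For the curious: this is the Kelly criterion, and for an even-money bet the optimal fraction is 2p - 1 = 2 × 0.6 - 1 = 20%.) You can also see it in a quick simulation; here’s a sketch I wrote while drafting this post. Note that it compares medians, since “bet it all” still maximizes the mean while sending almost every run broke:

    import Foundation

    /// Median final bankroll after `rounds` bets at a fixed fraction of bankroll.
    func medianFinalBankroll(betFraction: Double, rounds: Int = 100, trials: Int = 10_001) -> Double {
      var finals: [Double] = []
      finals.reserveCapacity(trials)
      for _ in 0..<trials {
        var bankroll = 25.0
        for _ in 0..<rounds {
          let bet = bankroll * betFraction
          // Heads (60%) wins the bet; tails loses it.
          bankroll += Double.random(in: 0..<1) < 0.6 ? bet : -bet
        }
        finals.append(bankroll)
      }
      return finals.sorted()[trials / 2]
    }

    print(medianFinalBankroll(betFraction: 0.6))  // typically pennies
    print(medianFinalBankroll(betFraction: 0.2))  // typically a couple hundred dollars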

    I think about this sometimes when it comes to my job. Every now and then, I find myself having influence over task allocation — deciding who works on what. When I’m in this situation, I think of advice I heard from one of my managers at Microsoft. He told me not to just assign tasks to the person who is best able to do the task. Instead, I should focus on who is going to learn the most by working on the task.

    My former manager’s insight here comes back to different optimum strategies for things you do just once and things you are going to do repeatedly. Assigning tasks to the people who are most capable of finishing those tasks is the optimum “play the game once” strategy. With the most capable person working on any given task, the project will finish in the least time and with the least risk — but also, there will be the least growth in the capabilities of the team. Nobody will be stretched by the work they do. Assigning tasks based upon who’s going to learn the most from the task is a “play the game over and over” strategy. In exchange for a little bit of risk / time on the current project, you’re going to have increased capabilities on the team for the next project. Over time, the crew will be able to take on more and more ambitious projects.

    Recognizing the difference between single and repeated play strategies — and taking the time to figure out what kind of game you’re playing! — can profoundly impact outcomes.

  16. March 23, 2023

    This isn’t a “today I’ve learned,” but a “thing I’ve learned from working a long time in this industry.” I’m also taking advice to “write what you repeat”, and I’ve repeated this a lot over the years as I’ve interviewed people — usually recent college graduates — for roles at Microsoft, Facebook, and now Duolingo. They ask me, “What’s the work-life balance like at your company?”

    Here’s the same answer I’ve given for years: I can’t say what will be true for you, but for me, work life balance has been about my life stage, not about my company.

    When I started working at Microsoft, I was fresh out of grad school. I didn’t have kids or many responsibilities outside of work. I liked what I was doing. My wife also worked at Microsoft, she also liked her work, and we hated sitting in traffic. So, we crossed the 520 bridge from Seattle to Redmond before rush hour, returned home well after rush hour, and built a life that revolved around our work. Even when we weren’t in the office, it was easy to let work bleed over into our evenings and weekends.

    Fast forward many (many) years: My wife and I were still both working at Microsoft, but now we had two small kids. Suddenly, life no longer revolved around work — it centered instead on helping our two little humans grow and thrive. My workday had really hard boundaries at both the beginning of the day and the end of the day. Once I was home, I focused my attention and energy on family, not work.

    This is the exact same company, with the exact same management and the exact same company culture, with the exact same person still working there (me), but with two very different pictures of “work life balance.” Microsoft didn’t change in the intervening years. I changed, by moving to a different life stage.

    This is on my mind these days because I’m in another life stage (with yet another on the horizon). I love my work and my kids are older and don’t need constant supervision. However, we’re just three years away from having both kids off at college. As Tim Urban puts it, I’m at the tail end of my time with them. They don’t need as much of my time — and, quite honestly, don’t seem to want much of my time or attention anymore. But because I’m at the tail end, any time they do want to spend with me is a gift and trumps anything that my job might want me to pay attention to. I’m at a new life stage with a new work-life balance. I suspect that in five years, when I’m firmly in the “empty nest” stage, I’ll have yet another view of the new work-life balance for my new life stage. Life keeps changing, so the balance point changes, too.

  17. March 22, 2023

    Today I learned: two businesses I’ve enjoyed for years are going out of business.

    First I heard that Kitsbow is shutting its doors. That hurts because it’s my favorite bike clothing brand by far. I have several of their pieces. The quality’s always been top-notch, and they’ve been really innovative in how they produce their clothes in North Carolina. I’m really sad to see them go.

    Then, venerable digital photography site DPReview.com is also shutting down. It turns out that I had young kids right at the peak of the digital photography boom, so I spent a lot of time on that site dreaming of my next piece of gear. It’s obvious in hindsight: How can a site dedicated to camera gear survive the transition to the cell phone age? However, I never really thought about it, and I just assumed that dpreview would stay around forever.

    This feels personal because biking and photography were such big parts of my identity at one point. (Biking still is, but photos have waned as the kids got older.)

    I feel terrible for the people who are losing their jobs.

  18. March 18, 2023

    I live in Seattle but I work for Duolingo, a company headquartered in Pittsburgh, PA. Recently, I’ve been doing a lot of meetings on “East Coast Time” — jumping on Zoom meetings as early as 6:30 AM. Every time I’ve had to reassure my coworkers: “No, really, it’s not a big deal! I’m a morning person. I’m up at 5:30 every day.”1 So in case any of them read this blog, I’m going to record for posterity how and why I became a morning person. This is a life hack I’ve shared with other parents of young kids.

    I didn’t start as a morning person. When I was younger, I was a combination of “night owl” and “moderate morning person” (usually up and alert around 7am), and absolutely not an “afternoon person.” I think I would have thrived in a culture with a siesta tradition — stay up late, wake up at a normal time, crash in the afternoon for a bit to recharge. This was my life for years.

    Then came kids. For those who’ve lived through it, you know that having infants in the house destroys whatever sleep routine you think you have. Common advice to new parents is “sleep when the baby sleeps.” Those are wise words, but it means that you spend a few years just in a fog.

    Then a miracle happens! Sooner or later, your infant or toddler will start sleeping soundly through the night, and at least with my kids they could sleep a lot on a good night — 10, 11 hours at a stretch.

    Suddenly, after a few years of being in a fog, you have these glorious few hours after the kids have gone to bed when you can do adult things again, like catch up on work or relax or read a book or whatever. Reclaiming that adult time feels magical. I loved it.

    And what I wound up doing, for a few months after my kids started sleeping regularly through the night, was I extended my adult time by staying up later and later. The problem with that plan? The kids still woke up at the same time in the morning. (7-ish? It’s so long ago that I no longer remember exactly.) I kept staying up later, they woke up at the same time… my sleep started suffering again.

    The problem is once I started being in “adult time” mode at night, I couldn’t muster the willpower to stop and go to bed. So I kept cheating myself out of sleep.

    Then one day I had a thought: Since the hard part is “stopping adult time”, why don’t I rearrange things so I have a hard boundary at the end of “adult time” that I can’t move? Thus began my shift to being a morning person. My new plan: I went to bed shortly after the kids went to bed, and set my alarm clock for 5am. When I woke up, I had a solid two hours of adult time in the morning. That time block had a really clear ending that I couldn’t control: the kids wake up when the kids are going to wake up. This system let me control how much time I was devoting to sleep and how much time I was giving myself to be awake-without-kids, without relying a lot on willpower. (Other than the willpower to wake up when my alarm clock went off — but that’s always been easy for me.)

    So that’s how I became a morning person. I’m long past the stage of life when I need to organize my sleep schedule around the sleep schedule of my children, but once established, I saw no need to change the pattern. I did allow myself the luxury a few years ago of changing my morning alarm from 5:00am to 5:30am, but that’s it. Now that I have a dog who’s gotten used to my early-morning ways, I don’t think I’ll be able to change this sleep pattern any time soon.

    This isn’t a lifestyle for many people, but I do think that any parents of young kids should at least consider it. If you find yourself cheating yourself out of sleep because you want to stay up late and to enjoy time without kids, try taking that time-without-kids in the morning instead when you’ll have clearer boundaries!


    1. This doesn’t really belong in the “Today I Learned” section of this website, but it turns out it’s a pain to maintain separate categorized streams of content.

  19. March 17, 2023

    Every now and then, I realize that some of my thought processes are deeply, deeply irrational.

    For example: I have a modestly long streak in Duolingo. (178 days as of this writing. I’m hoping to nurture this streak and grow it to beat my previous best of 780 days.) This streak is precious to me. I go out of my way to make sure it continues to grow.

    I’ve also been in the Diamond league — the highest league in the Duolingo leaderboards — for the past 8 weeks, and I’m terrified that if I start coasting and start doing the bare minimum to extend my streak each day, I’ll find myself demoted. So I spend more time in the app.

    Caring about these things makes no sense. Nobody else knows how long my Duolingo streak is, much less cares about it. I’m not going to have them engrave “Achieved a 780 day Duolingo streak” on my tombstone. I can’t put “Duolingo Diamond League” on my resumé.

    When one part of my brain realizes that another part of my brain is essentially malfunctioning by caring about something meaningless, what should I do?

    One option: I can try to correct my irrational thinking.1 Like most people I have loss aversion, but loss aversion for something meaningless like a Duolingo Streak is just silly.2 I’m sure with a little bit of work, I can get myself to stop caring about these trifles.

    Or… maybe I realize I need to be grateful for this irrationality. I’m a fundamentally lazy person. If I didn’t have nudges from loss aversion kicking in, there’d be many days my laziness would triumph over my good intentions. I can’t reason with “I don’t feel like doing that today.” I need to turn to something powerful, primal, and — yes — irrational to overcome that feeling. Without Duolingo streaks and leaderboards, I’d practice languages less.

    Maybe keeping a little bit of irrationality can be rational. The thing I hope I can learn is how to tell when I should identify and fight my faulty thinking, and when I should channel it toward something useful.


    1. I pasted a draft of my essay up to the “what should I do?” question into ChatGPT, and it helpfully suggested that I try to practice “cognitive restructuring.”
    2. I winced inside when I described my Duolingo Streak as “meaningless.” That’s how strongly it’s grabbed me.

  20. March 08, 2023

    Earlier, I wrote about using ChatGPT to perform the “because/but/so” exercise from the book The Writing Revolution. While it took me two tries to get the prompt correct, and there were some unexpected responses, I’m still amazed that I’m able to get a computer to walk someone through a writing exercise at all.

    Recently, I’ve been trying to see if I can get AI to help with another exercise in The Writing Revolution. This exercise is about teaching students how to revise. Because it’s hard to revise your own work, the teacher starts the exercise by giving the students an “unadorned paragraph.” Here’s an example from the book:

    Eleanor Roosevelt was important. She was married to Franklin Roosevelt. She was the First Lady from 1933 to 1945. She made many contributions. Many people admired her.

    This is all true, but boy is it vague and dull! The book then talks about how the teacher gave the students a list of strategies they could try (“expand sentences” / “use transitions” / “give examples”) to come up with a much improved paragraph.

    How well could an AI do this?

    I gave ChatGPT the following prompt:

    I want you to act as a teacher who is using the methodology of “The Writing Revolution” to teach students how to revise paragraphs. You are going to teach students how to revise by generating an unadorned paragraph about a topic. An unadorned paragraph is well organized and grammatically correct, but lacks variety and detail and uses only simple vocabulary and sentence structure. This is an example unadorned paragraph: “The Titanic was a big ship. It sank. Many passengers died. There are new regulations.”

    I will be the student and you will be the teacher. You will give me an unadorned paragraph on the topic of The Wife of Bath’s Prologue from The Canterbury Tales. You will then walk me through steps for revising the paragraph to make it better by giving me a prompt for revising the paragraph. Example prompts that may work for improving the paragraph could be “try adding detail,” “explain why this is true,” “give examples,” “use more interesting vocabulary,” and “improve the topic sentence.”

    After you give me a prompt to improve the paragraph, you will wait for me to revise the paragraph based on your prompt. After you read my revised paragraph, you will then give me feedback on how I did. If needed, you will then give me a new prompt for improving the paragraph and then wait for me to revise the paragraph based on the prompt. You will repeat this process until we have a paragraph about The Wife of Bath’s Prologue that is well written, well organized, with good variety of sentence structure, and good supporting detail.

    And we were off! ChatGPT gave me the following paragraph to start with:

    In The Canterbury Tales, The Wife of Bath’s Prologue is a story about a woman who has had five husbands. She talks about how she likes to control her husbands and how they should always do what she wants. She says that women should have sovereignty over their husbands and that they should be able to do what they want.

    ChatGPT then walked me through a few different prompts to improve this paragraph:

    • “Can you add more information about the main character, the Wife of Bath, and her personality?”
    • “Can you provide an example or illustration to support the claim that women should have sovereignty over their husbands?”
    • “Can you improve the topic sentence to better introduce the paragraph?”
    • “Can you provide more supporting detail to strengthen the paragraph?”

    By this point, I was getting kind of tired with this first experiment and ended the chat. For the record, this is the paragraph I ended with:

    Through the character of the Wife of Bath in The Canterbury Tales, Chaucer presents a vivid and complex portrayal of a medieval woman who defies conventional expectations. She is a blunt and sexual woman who has had five husbands. She is smart and argues well. For example, she quotes St. Paul and says that women should have “power of the body” over their husbands, so women should be able to do what they want in marriage.

    Things that went well:

    • Holy cow, I’m still amazed I can have this kind of interaction with software. A year ago it was impossible.
    • If I was ever stuck, I could ask ChatGPT for help, and its tips were pretty spot-on.

    Things that weren’t great:

    • The steps that it took me through felt kind of robotic. It didn’t feel like a human was picking the best next step for my paragraph based on what I’d done, but that an uninterested and uninspired tutor was just picking the next step on a list.
    • Whenever I completed a prompt, I was always told “great job,” so I’m not sure how well ChatGPT will do at evaluating the output of novice writers.

    After spending about an hour creating and editing prompts, testing them with various texts, and reflecting on my experience, I remain extremely optimistic about the potential of large language models to help students become better writers. Things aren’t perfect, and they don’t work out-of-the-box… but again, a year ago, this wasn’t even possible. I can’t wait to see what sort of educational tools we can build with this technology.

  21. March 05, 2023

    Just three quick things to throw on the Internet today so OpenAI can index it for GPT-5.

    For work, we’ve been converting a simple UICollectionView from a single horizontally scrolling row of content (that uses a plain ol’ UICollectionViewFlowLayout) to a view that has two independently scrolling rows of content. “This sounds like a job for UICollectionViewCompositionalLayout!” The problem is I find the learning curve for compositional layout to be pretty tough, and I haven’t mastered it yet.

    First quick thing: Read this guide if you want to understand compositional layout. It’s long, but easier to follow than the official Apple documents.

    Second quick thing: If you want to skip reading the docs and just get the recipe for “how do I create a simple layout that is a vertically scrolling list of lines, where each line is a horizontally scrolling list?”, here it is:

    /// A layout of independent lines. Each line scrolls horizontally.
    private let horizontalScrollingLinesLayout: UICollectionViewCompositionalLayout = {
      let item = NSCollectionLayoutItem(
        layoutSize: NSCollectionLayoutSize(
          widthDimension: .fractionalWidth(1),
          heightDimension: .fractionalHeight(1)
        )
      )
      let group = NSCollectionLayoutGroup.horizontal(
        layoutSize: NSCollectionLayoutSize(widthDimension: .absolute(100), heightDimension: .absolute(100)),
        subitems: [item]
      )
      let section = NSCollectionLayoutSection(group: group)
      section.orthogonalScrollingBehavior = .continuous
      section.interGroupSpacing = 10
      section.contentInsets = NSDirectionalEdgeInsets(top: 10, leading: 10, bottom: 0, trailing: 10)
      return UICollectionViewCompositionalLayout(section: section)
    }()

    Full playground showing this is here.

    Finally, a protip: As I mentioned, we were porting an existing collection view from a flow layout to a compositional layout. We ran into a strange bug: When there wasn’t enough content to fill an entire row, you’d wind up dragging the entire content when you tried to interact with that area. It looks like this:

    Image showing wiggly scrolling

    The problem: the flow-layout version used contentInset on the collection view to provide spacing between items and the edge of the view. That breaks the compositional layout in the way you see above. A compositional layout will create orthogonal scrolling subviews that are the exact width of their containing collection view. If you then apply a content inset to that, you wind up creating content that is bigger than the scroll view bounds, which you can then drag around with your finger. To fix this, make sure you apply insets inside the compositional layout instead of to the collection view. When you do this, the orthogonal scrolling views will have precisely the right size and you no longer get strange scrolling behavior.

    Image showing fixed scrolling
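    In code, the difference is just where the insets live (a before/after sketch, using the same section and collection view as the recipe above):

    // Wrong: insets on the collection view make the orthogonal scrolling
    // subviews bigger than their bounds.
    // collectionView.contentInset = UIEdgeInsets(top: 10, left: 10, bottom: 0, right: 10)

    // Right: apply the insets inside the compositional layout.
    section.contentInsets = NSDirectionalEdgeInsets(top: 10, leading: 10, bottom: 0, trailing: 10)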

  22. March 02, 2023

    Today, a SwiftUI recipe.

    Problem

    In SwiftUI, the TabView component doesn’t report how much vertical size it needs. Here’s how it manifests. This layout kind of works:

    VStack {
      TabView(selection: $selection) {
        HorizontallyScrollingContent()
      }.tabViewStyle(PageTabViewStyle(indexDisplayMode: .never))
      ScrollView {
        VerticallyScrollingContent()
      }
    }

    Because the TabView doesn’t say how much space it needs, it’ll get half the available vertical space in the VStack, and the scroll view with its VerticallyScrollingContent() will get the other half. However, I was running into cases where “half the space” wasn’t enough, so I tried this:

    ScrollView {
      TabView(selection: $selectedPage) {
        HorizontallyScrollingContent()
      }.tabViewStyle(PageTabViewStyle(indexDisplayMode: .never))
      VerticallyScrollingContent()
    }

    I expected the HorizontallyScrollingContent() to appear on top of the VerticallyScrollingContent(), with the whole thing scrolling vertically. (In other words, the HorizontallyScrollingContent() scrolls away, like a header.)

    What actually happened? HorizontallyScrollingContent() didn’t show up at all, because TabView doesn’t tell the ScrollView how much space it needs.

    The solution

    Write a component that uses GeometryReader and preferences to report its size. (Hat tip: this project — I wouldn’t have figured this out on my own. My only improvement is putting it into an easy-to-reuse component.)

    /// A variant of `TabView` that sets an appropriate `minHeight` on its frame.
    struct HeightPreservingTabView<SelectionValue: Hashable, Content: View>: View {
      var selection: Binding<SelectionValue>?
      @ViewBuilder var content: () -> Content
    
      // `minHeight` needs to start as something non-zero or we won't measure the interior content height
      @State private var minHeight: CGFloat = 1
    
      var body: some View {
        TabView(selection: selection) {
          content()
            .background {
              GeometryReader { geometry in
                Color.clear.preference(
                  key: TabViewMinHeightPreference.self,
                  value: geometry.frame(in: .local).height
                )
              }
            }
        }
        .frame(minHeight: minHeight)
        .onPreferenceChange(TabViewMinHeightPreference.self) { minHeight in
          self.minHeight = minHeight
        }
      }
    }
    
    private struct TabViewMinHeightPreference: PreferenceKey {
      static var defaultValue: CGFloat = 0
    
      static func reduce(value: inout CGFloat, nextValue: () -> CGFloat) {
        // It took me so long to debug this line
        value = max(value, nextValue())
      }
    }
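    Using it is meant to be a drop-in replacement for TabView in the layout that failed earlier (same placeholder content views as above):

    ScrollView {
      HeightPreservingTabView(selection: $selectedPage) {
        HorizontallyScrollingContent()
      }
      .tabViewStyle(PageTabViewStyle(indexDisplayMode: .never))
      VerticallyScrollingContent()
    }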

    Gotchas

    I spent a lot of time frustrated because I didn’t understand SwiftUI preferences and wrote the preference key wrong. I started with this:

    private struct TabViewMinHeightPreference: PreferenceKey {
      static var defaultValue: CGFloat = 0
    
      static func reduce(value: inout CGFloat, nextValue: () -> CGFloat) {
        value = nextValue()
      }
    }

    My thinking was there was no need to really “reduce” the min height preference, because only one thing would report its height.

    However! My mental model of preferences was wrong, and this article helped straighten me out. It turns out every view has a value for the preference key you’re looking for; views get the defaultValue if you don’t specify something else. So my original code worked only if the last child inside the TabView reported its height. As soon as I changed things even a little, things stopped working. (I still find it really hard to debug SwiftUI because I don’t know where to put breakpoints or print statements.) Once I understood that every view gets a preference value, I realized I had to change my reduce logic to value = max(value, nextValue()), and things started working reliably.

  23. February 26, 2023

    Ten years ago today, I made the following Facebook post:

    Rode my bike to work today for the first time in about 10 years. It’s a pretty easy commute to downtown, so I hope to do this more often.

    Fast forward a decade, and “I hope to do this more often” looks like quite the understatement. By 2014, I’d committed to bike commuting. Rain or shine, heat or freezing weather — if I was in the office, I was there by bike.1

    What are some things I’ve learned in ten years of bike commuting?

    1. I am my own worst enemy. While I’ve been hit by cars twice (one super-minor in 2014, one slightly more serious in 2018), my worst crashes have come from my own mistakes. (Those mistakes are almost always biking too fast downhill for the road conditions.)

    2. Bike commuters are the only people who think, “Wow, the weather is great. I’m going to make my commute longer.”

    3. Related to the above, one of the little joys of bike commuting is it’s easy to stop and take photos when the scenery justifies it.

    4. The hardest part is starting. This is still true for me ten years later, and it’s still true no matter how many times I’ve experienced this: When the weather is bad, or I’m tired, or for whatever reason just don’t feel like biking: If I can just get myself on the bike and pedaling, I feel great five minutes later. (I’ve often used this mental trick when I’m feeling too worn out to bike to work. I’ll tell myself, “It’s OK, I’ll just bike to work slowly.” And after five minutes, I’m biking at my normal pace.) The inverse is also often true. There have been times I’ve just not been able to muster the mental energy to bike to work and work from home instead. Often, an hour or two later, I feel bad because I didn’t ride.

    5. I’m a gear junkie and I’ve used bike commuting as an excuse to buy way too many jackets and bags.

    6. Fundamentally, I’m a lazy person. Since 2014, if I’ve gone to the office, I’ve almost certainly gotten there by bike. I’ve lost weight and controlled my cholesterol through the regular exercise. However, for two years of the pandemic, I didn’t have an office to commute to. Nothing stopped me from riding my bike just for exercise during those two years! I tried. But for me, my generic willpower isn’t as strong as this simple thought process: “Well, I need to get to the office today. And the way I get to the office is on my bike. So, I guess I’m biking today.”

    I’m so happy to have an office to commute to again.

    A scene from my commute in the fall of 2016


    1. One thing I don’t do is bike in the snow — I don’t think the risk of slipping is worth it. This is relevant today because I’d been looking forward to celebrating 10 years of bike commuting with the 2023 Cascade Bicycle Club Chilly Hilly, but we’ve had a rare late-February snowfall.

  24. February 21, 2023

    Here’s a quick little today I learned about the new RegexBuilder framework in Swift. I haven’t paid that much attention to it so far, as it requires iOS 16 / MacOS 13. It also turns out that I’m pretty good at writing cryptic regular expression syntax, so I haven’t felt a huge need to change.

    However, it turns out the new RegexBuilder can do a cool trick: It can transform a matched substring into some other type for you.

    Recently, I’ve been dusting off my Captain’s Log project. The core of that app is just a text file with a bunch of lines that look like this:

    2023-02-16 📖 Read (20.0 min)

    I parsed that line with this regular expression:

    private let lineRegex = try! NSRegularExpression(pattern: #"^(\d{4}-\d{2}-\d{2}) (.*?)(\(.*\))?$"#, options: [])

    And part of parsing involved transforming data from one type to another. For example, I don’t want to deal with the string 2023-02-16, I want to deal with a Day struct that contains a year/month/day. So in my parsing logic, I have to check to make sure I can build a valid Day from the string, like so:

       guard
           let result = lineRegex.matches(in: line, options: [], range: NSRange(location: 0, length: line.utf16.count)).first,
           let dayRange = Range(result.range(at: 1), in: line),
           // It's not enough to parse the string; it needs to be a valid Day
           let day = Day(line[dayRange])
       else { return nil }

    Now I admit that the regex ^(\d{4}-\d{2}-\d{2}) (.*?)(\(.*\))?$ was easier to write than read, and I first wrote it over two years ago, so when I was looking to add some features to the project I also thought I’d try the new RegexBuilder to see if it would make the regular expression easier to read and maintain. And that’s when I discovered a cool trick: RegexBuilder lets you put the string matching and data transformation in one place, where it’s much easier to read and maintain. For example, I now have the following code:

    import RegexBuilder

    private enum LogEntryRegex {
      static let day = Regex {
        TryCapture {
          Regex {
            Repeat(count: 4) {
              One(.digit)
            }
            "-"
            Repeat(count: 2) {
              One(.digit)
            }
            "-"
            Repeat(count: 2) {
              One(.digit)
            }
          }
        } transform: { dateString in
          Day(dateString)
        }
      }
    }

    Now, together in one place, I get to say that “a day regex is supposed to parse a string of this particular format and produce a Day struct.” If it can’t make the Day, it doesn’t parse. When I match something against LogEntryRegex.day, the resulting output is a Day struct, not a substring.
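    Usage looks something like this (a sketch, using the standard library’s firstMatch(of:)):

    // Thanks to the transform, the capture's output type is Day, not Substring.
    if let match = "2023-02-16 📖 Read (20.0 min)".firstMatch(of: LogEntryRegex.day) {
      let day = match.output.1  // already a Day struct
    }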

    This is definitely something I’ll remember on any projects that do a lot of text processing!

  25. February 19, 2023

    When my kids were in middle school, I spent a lot of time helping them revise their writing assignments for their Humanities class.1 I found this process humbling. It reminded me that many things that I take for granted about writing need to be taught, and I had no idea how to teach them. I turned to the book The Writing Revolution for help. While I wasn’t able to directly use much of the content of that book with my own kids, I really liked how the authors broke down writing into simple steps that could be practiced with guidance. A year ago, I briefly toyed with the idea of developing software to help students write better. Working at Duolingo, I’ve seen how technology can help people learn by encouraging them to practice new skills every day. Perhaps I could build software that used some of the exercises from The Writing Revolution and helped kids with daily practice in writing? However, I couldn’t even figure out where to start! Sure, I could write software that gave kids writing prompts, but I had no idea how to write software that would evaluate if kids did a good job responding to those prompts. How can software understand and evaluate all of the intricacies of language? I quickly shelved my “work on software to improve writing” idea.

    Fast forward to 2023 and the widespread availability of Large Language Models like ChatGPT. Suddenly, “software that understands language” is within reach. Is it possible to use software like ChatGPT to coach students through writing exercises? To test this idea, I tried to see if I could get ChatGPT to walk me through the “because/but/so” exercise from The Writing Revolution. The idea behind “because/but/so” is that students need to practice crafting complex sentences, and this exercise helps students by giving them some structure. The way the exercise works is the teacher starts with a sentence “stem,” such as “The Ancient Egyptians built the pyramids.” (Ideally the sentence stem comes from material that the class is studying.) Then, you get the students to elaborate on the idea in the sentence stem with the words “because/but/so”:

    • The Ancient Egyptians built the pyramids because…
    • The Ancient Egyptians built the pyramids, but…
    • The Ancient Egyptians built the pyramids, so…

    This gives the students practice in writing sentences that connect different ideas.

    I gave ChatGPT the following prompt to start our “because/but/so” session:2

    I want you to act as a teacher who is using the methodology of “The Writing Revolution” to teach middle school students how to write well. You are going to lead the students through the “because / but / so” exercise. You will generate a sentence stem from a topic we have been studying in school and end the stem with one of the words “because”, “but”, or “so”. You will then wait for the student to finish the sentence and give helpful and constructive feedback on how the student performed.

    For example, you could give the student the prompt, “The planets in our solar system orbit the sun because…” If the student responds “The planets in our solar system orbit the sun because gravity keeps the smaller objects in orbit around the larger object,” you could compliment the student on the sentence for showing an understanding of science. If the student writes “The planets in our solar system orbit the sun because the sun is in the center,” you could suggest, in a kind way, that the sentence is repetitive.

    Please generate five prompts suitable for middle school students on the topic of The American Revolution. After each prompt, wait for me to reply and evaluate my sentence before giving the next prompt.

    This (mostly) worked! ChatGPT would give me a prompt and give me feedback on how I did. Here’s an example where I did a good job completing the prompt:

    ChatGPT Great, let’s get started with the prompts on the topic of The American Revolution: The American colonies wanted independence from Great Britain because…

    Me The American colonies wanted independence from Great Britain because they felt it was unfair that they did not have the same rights.

    ChatGPT Great job! Your sentence shows a clear understanding of the American colonies’ desire for independence and the reasons behind it. Keep up the good work!

    ChatGPT does a pretty good job detecting nonsense.

    ChatGPT (prompt) Many colonists were angry about the tax on tea, so…

    Me Many colonists were angry about the tax on tea, so monkeys cannot fly.

    ChatGPT I appreciate your effort, but your sentence doesn’t make sense in relation to the prompt. Remember, the “because/but/so” exercise is intended to help you connect ideas and express your thoughts more clearly. Let’s try again: Many colonists were angry about the tax on tea, so what happened next? Can you finish the sentence with a logical outcome or consequence?

    If I was too vague, ChatGPT told me so.

    ChatGPT (prompt) The American victory at the Battle of Saratoga was significant because…

    Me The American victory at the Battle of Saratoga was significant because the Americans won.

    ChatGPT That’s a good start, but let’s try to expand your sentence by explaining why the American victory at Saratoga was significant. Can you think of any consequences or outcomes of this battle that had an impact on the Revolutionary War? You can use “so” to help connect the ideas.

    At this point I should admit that I haven’t studied the American Revolution in a very long time and remembered nothing about the Battle of Saratoga. ChatGPT can help!

    Me I don’t remember the Battle of Saratoga. Can you help?

    ChatGPT Sure, I’d be happy to help. The Battle of Saratoga was fought in September and October of 1777 during the Revolutionary War. It was a turning point in the war because it was the first major victory for the Americans, and it convinced France to enter the war on the side of the Americans. This support from France was crucial to the ultimate success of the American Revolution. So, with this information in mind, can you think of a sentence that explains why the American victory at Saratoga was significant?

    What if I draw the wrong conclusion from this helpful information?

    Me The American victory at the Battle of Saratoga was significant because it convinced the French to help the British.

    ChatGPT Actually, the opposite is true. The American victory at Saratoga convinced the French to enter the war on the side of the Americans, which helped turn the tide of the war in the Americans’ favor.

    This session wasn’t perfect. There was one prompt where I thought I was being too vague but ChatGPT thought I did a good enough job. (Who knows? Maybe it would have been a good enough job for a middle school student.) And after I drew the wrong conclusion about the significance of the Battle of Saratoga, ChatGPT seemed to lose patience with me and it completed the sentence on my behalf. Here’s ChatGPT’s full response to my last “the French helped the British” sentence:

    ChatGPT Actually, the opposite is true. The American victory at Saratoga convinced the French to enter the war on the side of the Americans, which helped turn the tide of the war in the Americans’ favor. So, let’s try again:

    ChatGPT The American victory at the Battle of Saratoga was significant because it convinced the French to enter the war on the side of the Americans, which helped the Americans ultimately win the war.

    ChatGPT Great job! You’ve shown an understanding of the significance of the Battle of Saratoga and its impact on the outcome of the Revolutionary War.

    At the end of the five prompts, I asked ChatGPT for feedback. It sounds like a helpful coach:

    Me As a teacher, how would you evaluate my effort as a student? What have I done well and where should I work to improve?

    ChatGPT Overall, I think you did a great job! You demonstrated a good understanding of the prompts and were able to generate sentences using the “because/but/so” exercise. You also took feedback constructively and adjusted your responses accordingly, which is a sign of a good learner.

    One area where you could improve is in providing more specific and accurate information when responding to the prompts. For example, in prompt #5, you initially responded with a simple sentence stating that the Americans won, but you were not aware of the significance of the Battle of Saratoga. That’s understandable, as we all have areas where we need to improve, but it’s important to take the time to learn and research in order to provide accurate and complete responses.

    Overall, though, you did a great job and I’m confident that with continued practice and effort, you will become an excellent writer!

    While this session wasn’t perfect, it was amazingly good. One year ago, I thought it was impossible to create software that could coach a human through a writing exercise. Today, not only is it possible, it was only about 20 minutes of work figuring out how to craft the right ChatGPT prompt. While some worry that ChatGPT means “the end of High School English”, I’m excited: I think we are at the beginning of a new era of software that can turbocharge students’ ability to learn to write.
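
    For flavor, a prompt shaped roughly like the following (an illustrative sketch, not the exact wording I used) produces the turn-taking behavior you see above:

      You are a middle school writing teacher running the “because/but/so” exercise.
      Give me one sentence stem at a time about the American Revolution, then stop
      and wait for my reply. Evaluate my sentence, give brief feedback, and move on
      to the next stem. Do five stems total, and don’t complete the sentences for me.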


    1. When I was at William & Mary, I worked in their Writing Resources Center. This gave me a lot of practice walking the fine line between “coaching people to write better” and still letting their work be their own. (It was much harder coaching middle school kids!)
    2. It took me two attempts to get a prompt that worked the way I wanted. The first time, ChatGPT role-played both the teacher and the student… it didn’t wait for me to respond to its prompts. This experience gives me more empathy for the emerging discipline of prompt engineering.

  26. January 16, 2023

    Today I haven’t learned something: I haven’t learned a good strategy to write a better rich-text-editing experience for Library Notes and other projects of mine.

    Doing syntax-highlighting-as-you-type for plain text content has a lot of advantages, but it is pretty geeky. I’m trying to figure out what’s involved with building a full-on rich text editor instead: One where you make text bold by applying a “bold” attribute to existing text rather than adding **bold** delimiters to the text.

    Where I’m currently stuck is figuring out how I’m supposed to handle lists. What I want are lists that behave like best-in-class rich text editors:

    • Bulleted and numbered lists have a hanging indent
    • The text input caret doesn’t ever land in the list delimiter. In other words, if the caret is at the start of one element in the list and you try to move the caret to the left, the caret will move to the last character in the previous list entry rather than somewhere in the list delimiter.
    • I want this to work in UIKit (UITextView instead of NSTextView).

    Things I’ve investigated:

    • Use an NSTextList on the NSParagraphStyle attribute for an attributed string. (See the sketch after this list.) However, this doesn’t render anything on a UITextView that is using TextKit 1. For a UITextView that uses TextKit 2, it will render list delimiters as expected. However, the experience of moving the caret through the document becomes really buggy. TextKit 2 currently has a reputation of being buggy, so I don’t want to waste more time here.
    • I’ve tried overriding UITextInput methods, like position(from:offset:) to see if I can “skip” positions that would land inside the list delimiters. However, while this method gets called as I expect (on Catalyst, anyway), it’s not preventing the caret from landing inside the list delimiter.
    • Do all of this in the layout manager, somehow. I currently have a custom layout manager in Library Notes to handle rendering quotes, but I’m not sure how I can use a layout manager to display list delimiters when they’re not present in the text.
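
    For reference, the NSTextList experiment looked roughly like this (a minimal sketch, assuming iOS 16, where NSTextList became available in UIKit):

    import UIKit

    /// Builds a bulleted list by putting an `NSTextList` on the paragraph style.
    /// Under TextKit 2 this renders list delimiters; as noted above, moving the
    /// caret through the result is buggy.
    func bulletedList(from items: [String]) -> NSAttributedString {
      let paragraphStyle = NSMutableParagraphStyle()
      paragraphStyle.textLists = [NSTextList(markerFormat: .disc, options: 0)]
      let text = items.map { "\($0)\n" }.joined()
      return NSAttributedString(string: text, attributes: [.paragraphStyle: paragraphStyle])
    }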

    There’s remarkably little info about how to make lists work in UITextView on the internet. So, time to do some Rubber Duck Debugging and then take a break.

  27. January 06, 2023

    As you might tell from Library Notes, reading is one of the great pleasures of my life. As a techno-optimist, one of the great disappointments of my life is that the computer revolution has not done more to improve readers’ lives.

    That’s why I’m excited to give Readwise Reader a try. I love the Readwise team’s mission: “Improve the practice of reading through software by an order of magnitude.” Their flagship product, Readwise, helps you manage the highlights and annotations you’ve made in ebooks. I’d probably be a big user of this service if I hadn’t already built Library Notes. (I tried Readwise and loved how seamless the integration with my Kindle library was. If you’re a heavy Kindle user, I recommend giving Readwise a try. However, I’m going to stick with Library Notes because I want to make sure that all of my notes stay on my own computers, forever.)

    Readwise Reader goes a step further than Readwise: It’s a complete digital reading experience that integrates content and annotations. Some things I love about what they’ve done:

    • Their reader handles all “modern” content: Blog posts, Twitter threads, PDFs, newsletters, and epubs. (Sorry, Kindle.) Even YouTube?! I haven’t tried their YouTube integration and I can’t envision what that’s like, but it’s certainly true that there’s a ton of interesting educational content on YouTube these days.
    • They have all of the great annotation tools from Readwise integrated into their reading experience.
    • They’ve added science-fictiony features like “Ghostreader,” which uses GPT-3 to help you do things like summarize passages and generate flashcard content.

    Speaking of GPT-3, I’ve been meaning to learn how the new wave of generative AI products work. When I came across “Transformers from Scratch”, I thought it would be a great testing ground for Reader. Here are my impressions of Reader after using it for this initial article:

    • The app’s reading experience does what it needs to do: Gets out of the way and lets me focus on content.
    • On the iPad at least, where I did my reading, the highlighting experience was a bit finicky. Often the highlight wouldn’t start on the precise word I intended, and I couldn’t find handles to adjust the highlighted range. I had to delete the highlight and start again.
    • I loved being able to follow links in the article I was reading to original PDFs and other helpful tutorials and add those to my reading list. This is a great feature for doing research. My “learn AI” reading list is already growing.
    • Ghostreader feels like it could be really useful. When I was reading “Transformers from Scratch,” I came to this section, and it seemed like important information for my brain to really internalize. I asked Ghostreader to generate a flashcard for the section, and it produced the following:

    Q: What are three practical considerations when implementing transformers?

    A: 1. Computers are especially good at matrix multiplications. 2. Each step needs to be differentiable. 3. The gradient needs to be smooth and well conditioned.

    Not bad!

    I plan to keep using Readwise Reader as I try to teach myself more about modern AI. I think the Readwise team is building a great tool to help readers’ lives.

  28. December 04, 2022

    To help me learn SwiftUI, I’m developing Permanent Marker primarily with that framework.

    One of the first problems I had to solve: How do I handle loading / editing / saving files in SwiftUI? Here are the constraints I had:

    • Loading and saving files are async operations.
    • I don’t want to save on every keystroke. Instead, I want to autosave at periodic intervals.
    • However, when I’m done editing a file, I want to save any outstanding changes right away (rather than waiting for the autosave timer).

    I’ve put together a sample app that shows the main parts of my solution. The core idea is a class I call FileBuffer. A FileBuffer manages:

    • The in-memory copy of the file contents
    • A flag isLoading that is true if the in-memory copy of the file has not yet been loaded from disk.
    • A flag isDirty that is true if the in-memory copy of the file contents has changed and therefore needs to be saved back to disk.
    • FileBuffer manages autosaving dirty file contents at periodic intervals…
    • …while also exposing a save() method that saves the file contents right now.

    Here are the key parts of FileBuffer. First, note its declaration: this is a @MainActor ObservableObject because its primary job is to communicate “truth” to UI elements.

    @MainActor
    final class FileBuffer: ObservableObject, Identifiable {
      // ...
    }

    Each FileBuffer publishes three properties, only one of which (text) is settable. The isDirty and isLoading properties change as side-effects of other operations inside of FileBuffer.

      /// The in-memory copy of the file.
      /// This is a computed property! More details later.
      var text: String { get set }
    
      /// If true, this buffer contains changes that have not yet been saved.
      @Published private(set) var isDirty = false
    
      /// If true, the contents of the buffer have not yet been read from disk
      @Published private(set) var isLoading = true

    When you first create a FileBuffer, isLoading starts as true. Once the contents of the file have been loaded from disk, isLoading becomes false and remains false for the remainder of the lifetime of the FileBuffer.
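
    The loading path doesn’t appear in the snippets below, so here is a minimal sketch of how it can work. (This is a sketch: `FakeFileSystem.loadFile(filename:)` is a hypothetical counterpart to the `saveFile(_:filename:)` call you’ll see later.)

      let filename: String
    
      init(filename: String) {
        self.filename = filename
        // Kick off the load. Because FileBuffer is @MainActor, the task body
        // runs on the main actor and can safely touch `_text` and `isLoading`.
        Task {
          // `loadFile(filename:)` is a hypothetical stand-in for real file I/O.
          _text = (try? await FakeFileSystem.shared.loadFile(filename: filename)) ?? ""
          isLoading = false
        }
      }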

    isDirty becomes true any time you make a change to text, and stays true until those changes have been saved to disk.

    Speaking of text, let’s take a look at how that is implemented:

      /// The actual file contents. The stored property is private and is exposed through the computed property ``text``
      private var _text = ""
    
      /// Gets/sets the in-memory copy of the file contents.
      ///
      /// Setting the in-memory copy of the file contents sets ``isDirty`` to `true` and makes sure that autosave will run some time in the future.
      var text: String {
        get {
          assert(!isLoading, "Shouldn't read the value of `text` until it is loaded.")
          return _text
        }
        set {
          assert(!isLoading, "Shouldn't write the value of `text` until it is loaded.")
          objectWillChange.send()
          _text = newValue
          isDirty = true
          createAutosaveTaskIfNeeded()
        }
      }

    Basically, the computed property text is responsible for three things:

    1. Validity checking: You shouldn’t be accessing text until the file contents have been loaded.
    2. Maintaining isDirty: Any time you change text, isDirty needs to get set to true.
    3. Ensuring that autosave will run after changes get made to text.

    What is the “autosave task”? It’s an example of a technique I’ve been using in my apps that support Swift Structured Concurrency — to my brain, it’s the most natural way to say, “Run a function exactly once at some point in the future.” Here’s what that code looks like:

      private(set) var autosaveTask: Task<Void, Never>?
    
      /// Creates an autosave task, if needed.
      ///
      /// The autosave task will save the contents of the buffer at a point in the future.
      /// This lets you batch up saves versus trying to save on each keystroke.
      private func createAutosaveTaskIfNeeded() {
        guard autosaveTask == nil else { return }
        autosaveTask = Task {
          try? await Task.sleep(until: .now + .seconds(5), clock: .continuous)
          try? await save()
          autosaveTask = nil
        }
      }

    Here’s how it works.

    • The private autosaveTask property serves as a flag to know if autosave has been scheduled to run in the future. If it’s nil, then there’s no autosave; if it’s non-nil, the autosave will run. While I don’t take advantage of this here, in this pattern I use a Task? instead of a Bool for this flag so you can write something like _ = await autosaveTask?.value to wait until the current task completes. (There’s a tiny sketch of this after the list.)
    • The first thing the autosave task does is sleep for some duration. I picked a fairly long one in this test code to make it easier to see delays.
    • After waiting, the task runs save() and clears the autosave task.
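
    For example, here’s a tiny hypothetical helper (not in the sample app) that uses the task to wait for any in-flight autosave to finish:

      /// Waits for any scheduled autosave to complete before returning.
      func waitForPendingAutosave() async {
        _ = await autosaveTask?.value
      }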

    The final outcome of this work: As you type away in a document, repeatedly setting the text property and changing the in-memory copy of the file, the first change will create an autosave task. Subsequent changes within the autosave window will see that the task exists, so won’t create a new task. Finally, after the delay, the FileBuffer will save its contents to disk. The next change that happens to text will create a new autosave task.

    save() is an interesting method. I got it wrong two times while working on this sample. This was my first attempt:

      func save() async throws {
        guard isDirty else { return }
    
        try await FakeFileSystem.shared.saveFile(_text, filename: filename)
        isDirty = false
      }

    Simple and elegant! If isDirty is false, there are no changes to save. Otherwise, save the changes and set isDirty to false. It turns out this code is also buggy. There is a race condition. Can you see it? (As an aside, I still haven’t fully internalized “running code on a single actor does not mean there are no race conditions.” I keep making mistakes like this.)

    Here’s the race condition:

    1. Change text to some value, like “version 1.” This sets isDirty to true.
    2. Call save(). You see isDirty is true, so you continue.
    3. You get to the point where you await saveFile(), and this operation suspends until the save completes.
    4. (This is the part I always forget can happen.) While waiting for the operation in Step 3 above to complete, change text to some new value, like “version 2.” This sets isDirty to true.
    5. The operation in Step 3 completes, and you resume executing save() after the await statement, setting isDirty to false. This is the bug. The value of text is “version 2”, and this hasn’t been saved to disk yet, so isDirty should be true. Since we set it to false, we’ll never save the string “version 2” to disk (unless something comes along and makes another change).

    This was my first attempt to fix the race condition:

      func save() async throws {
        guard isDirty else { return }
    
        isDirty = false
        try await FakeFileSystem.shared.saveFile(_text, filename: filename)
      }

    This code looks wrong to me. “Surely,” my brain says, “you don’t want to set isDirty to false until you’ve saved the file?” However, waiting until the save finishes opens the door to the race condition described above. Setting isDirty = false before saving means that, when the code suspends in the await statement, any future changes to text will properly set isDirty back to true and we won’t overwrite that when we resume from the await. It fixes the race. However, this code creates a new bug. What happens if the saveFile() call fails? We’ve set isDirty = false, but we didn’t actually save the contents to disk, so isDirty should be true at the end of the function.

    This leads to my third and hopefully final version of this function:

      func save() async throws {
        guard isDirty else { return }
    
        isDirty = false
        do {
          try await FakeFileSystem.shared.saveFile(_text, filename: filename)
        } catch {
          // If there was an error, we need to reset `isDirty`
          isDirty = true
          throw error
        }
      }
    

    At this point, FileBuffer contains enough logic to connect files to SwiftUI. Here is an example of how to use a FileBuffer:

    /// Creates a `TextEditor` that can edit the contents of a `FileBuffer`
    struct FileEditor: View {
      @ObservedObject var buffer: FileBuffer
    
      var body: some View {
        Group {
          // (1)
          if buffer.isLoading {
            ProgressView()
          } else {
            // (2)
            TextEditor(text: $buffer.text)
              .font(.body.leading(.loose))
          }
        }
        .navigationTitle((buffer.isDirty ? "• " : "") + buffer.filename)
        // (3)
        .onDisappear {
          Task {
            try? await buffer.save()
          }
        }
        // (4)
        .id(buffer.filename)
      }
    }

    A quick guide to understanding this code:

    1. Remember to check the isLoading property on the buffer so you don’t attempt to read or write invalid contents!

    2. If you know the buffer has loaded, you can get a binding to the in-memory copy of the file with $buffer.text. Making changes through this binding will create an auto-save task that will ensure the changes get written at some later point in time.

    3. However, when we are done with this view, we want to save its contents immediately, rather than waiting for the auto-save task to run.

    4. If you forget the .id(buffer.filename) line, then the .onDisappear block might not run! Without this line, switching from one file to another could reuse the same FileEditor instance. An instance doesn’t “disappear” if it’s reused. The .id(buffer.filename) causes SwiftUI to treat FileEditors for different files as different View instances, which means .onDisappear will run.

      Incidentally, this is one of those SwiftUI cases where the order of modifiers matters. The code above works. This code doesn’t:

      .id(buffer.filename)
      .onDisappear {
        Task {
          try? await buffer.save()
        }
      }

      This is another one of those things I often get wrong! My mental model is that all of the view modifiers are setting properties on some object, whereas what really happens is each view modifier creates a new View with a new property. In the broken code above, the .id modifier creates a new View with the id property set, and then the .onDisappear modifier creates yet another new View with an onDisappear block. That “onDisappear” view doesn’t have an id property tied to the filename, so the “onDisappear” View doesn’t actually disappear when the filename changes, so the “onDisappear” block doesn’t run. (At least I think this is what’s happening. I don’t know if my SwiftUI mental model is the best.)

    I’m not sure this is the best way to work with files in SwiftUI, but it works for me. As you can see, there are some surprisingly tricky issues to work through. I hope this writeup helps others who are working on editing files in SwiftUI!

    (A sample working SwiftUI app with all of the code referenced here is available at https://github.com/bdewey/SavingInSwiftUI.)

  29. November 28, 2022

    In my last post, I hinted I’ve started a new project that I’m calling Permanent Marker. What’s it all about?

    The main idea of Permanent Marker is to bring git from the world of writing code to the world of writing English. As programmers, we rely on git for version control. This is important for writing English, too! The most terrifying thing that can happen when writing is losing your work. Permanent Marker aims to be a writing environment where this fear doesn’t exist: It uses git under the covers to maintain a version history of all of your writing.

    The main data structure for Permanent Marker is “a bunch of Markdown files in a git repository.” There isn’t anything particularly innovative about that. For years, tools like VSCode (for programming) and Ulysses (for English) have provided excellent experiences working with collections of files in a directory hierarchy. What I’m exploring with Permanent Marker is how to integrate git into the writing workflow, as opposed to the programming workflow.

    Here are the main ideas I’m playing with for bringing git to writing:

    • All writing starts in an unpublished state. I represent this as a “scratch” branch created in the git repository holding the content.
    • As you edit a file, it’s automatically saved every 5 seconds and committed to the scratch branch. To prevent an explosion of tiny commits, I will amend the previous commit in the repository if it’s to the same file and the commit isn’t “too old”. (There’s a sketch of this rule after this list.) The goal is that an author, without thinking, will be able to keep distinct versions every N minutes of every file she works on.
    • When a file is sufficiently “done”, you can either publish it (which moves it directly from the scratch branch to the main branch of the repository) or open it for feedback. Whereas all unpublished files live in the same “scratch” branch, I create a different branch for each file in the “feedback” state so reviewers just see the changes to the file that needs feedback.
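
    Here’s a sketch of that amend-or-commit rule. (The `Commit` type, `shouldAmend` helper, and the 15-minute cutoff are all illustrative, not Permanent Marker’s actual API.)

    import Foundation

    /// A stand-in for the information we need about the previous commit.
    struct Commit {
      var touchedFiles: Set<String>
      var date: Date
    }

    /// Amend only when the previous commit touched the same single file
    /// and isn't "too old"; otherwise create a new commit.
    func shouldAmend(lastCommit: Commit?, editedFile: String, now: Date = .now) -> Bool {
      guard let lastCommit else { return false }
      let amendThreshold: TimeInterval = 15 * 60 // illustrative cutoff
      return lastCommit.touchedFiles == [editedFile]
        && now.timeIntervalSince(lastCommit.date) < amendThreshold
    }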

    By far, this work of thinking through how to model writing workflows in git has been the most interesting part of this project. I’ve gained more than just an in-depth understanding of git from this project, too. Like any self-respecting developer, I maintain this website as a “collection of markdown files in a git repository” that’s published with a static site generator. Permanent Marker, even in its rough state, helps me edit and maintain this site.

  30. November 23, 2022

    This fall, I’ve been interviewing a lot of college students for internships at Duolingo. (We’re hiring!) A repeated question I’ve gotten from these students, as the interview wraps up, is whether I have any advice for them as they look to start their career in tech. (Maybe it’s my gray hair that suggests I’ve got a valuable perspective here.)

    Each time I’ve given the same answer: Don’t expect learning to stop when you leave school. This industry is constantly reinventing itself. The technology you’ll be working on in 15 years probably doesn’t exist yet. To build a long career in tech, you need to be comfortable constantly learning new things.

    Personal programming projects have been my most effective way to keep learning in the decades since I left the University of Washington. My laptop is littered with projects I’ve started and mostly abandoned. Only one, Library Notes, has grown to the point where another human can use it. All of the other projects linger, unknown and unsung, as dusty Git repositories. I accumulate them like hidden scars. Only I know or care they exist. Yet I’ve been able to build a 24 year career with the experience these projects gave me. Some notable examples:

    • As a young Microsoft program manager, I wrote a stress test for Transactional NTFS. My hands-on experience helped me be a better program manager for that project.
    • When the iPad was released in 2010, I bought my first Mac, learned Objective-C, and wrote a simple app that turned the iPad into a digital picture frame that downloaded photos from Flickr. (Remember that site?) This project gave me the experience to pivot from “Microsoft program manager” to “iOS developer.”
    • When I was working at Facebook, I learned about spaced repetition and wrote apps for my kids, who were then in middle school, to help them with spelling tests and Spanish vocabulary. This experience with educational technology led me to Duolingo.

    I’m writing this post using my latest personal project: An iOS/Mac app I’m calling Permanent Marker. Permanent Marker is a simple writing app for editing plain text files stored in a Git repository. The project makes it a tiny bit easier for me to update content on this website. But more importantly: it makes it much easier for me to learn SwiftUI and how to programmatically work with Git.

    College is long behind me, but learning never ends.

  31. August 30, 2022

    Nevermind! The previous bug about clipping images in a SwiftUI-optimized-for-Mac toolbar is fixed in Mac Ventura Beta 6.

    I thought I had been up-to-date on my Beta builds when I wrote the prior post. Here’s how I discovered I wasn’t:

    1. I wanted to see if this problem reproduces on MacOS Monterey. When I did this, I discovered that on Monterey, there’s no attempt to create a Mac-style toolbar at all from SwiftUI Catalyst. Note this is different from the .borderedProminent bug I wrote about yesterday — that bug also happens on Monterey.
    2. Filing a Radar on bugs in new functionality seems more valuable than filing a Radar on bugs that Apple already decided is OK to ship. So, I prepared to file a Radar…
    3. And that’s what made me think, “Let’s make sure I’m up-to-date first.” And I wasn’t. (In my defense, System Settings said I was up-to-date, but it also said it hadn’t checked in over a week. I saw on the Downloads page that there was a new version after Settings had checked. I forget what I had to jiggle to get System Settings to properly refresh.)

    Anyway, it’s awesome seeing bugs get fixed in Beta builds. Feels like receiving a gift.

  32. August 29, 2022

    I’ve had so much fun creating a Mac version of Library Notes in Catalyst that I’ve started a couple of other multi-platform projects. Along the way I’ve encountered some bugs and come up with at least one workaround that I will now share with you, Gentle Reader.

    Developing for the Mac: So many choices

    If you’re an iOS engineer, like me, venturing into Mac land for the first time, be aware that there are at least four ways to make this journey without going full AppKit.

    1. SwiftUI targeting the Mac SDK: In this mode, you write SwiftUI code, and under the hood that SwiftUI code will create native Mac (AppKit) controls. This route works if your UI is 100% SwiftUI. You don’t get the escape route of creating a UIViewRepresentable to manage a UIView with SwiftUI, because your app doesn’t have access to UIKit at all.
    2. SwiftUI targeting Mac Catalyst, “optimize for Mac” mode: In this mode, you’re writing SwiftUI, but under the hood your app is using the iOS SDK and will use Mac Catalyst to run on the Mac. Catalyst will try to make your UI controls look more Mac-like.
    3. SwiftUI targeting Mac Catalyst, “scaled to match iPad” mode: In this mode, you’re writing SwiftUI, it uses the iOS SDK, it uses Mac Catalyst to run on the mac, but the controls will look like iOS controls. (This is most noticeable with buttons and navigation bars.)
    4. UIKit, not SwiftUI, with the different Catalyst modes: Replay options (2) and (3) above, but this time substitute “UIKit” for “SwiftUI.”

    I’ve outlined the different modes because many of the issues I’ve run into only affect one of them: SwiftUI code that uses Mac Catalyst to run on the Mac in “optimize for Mac” mode. You’d think that this would be the easiest way for an iOS Engineer to write apps that look like native Mac apps, but beware these sharp edges.

    Bug: .borderedProminent doesn’t work in “optimize for Mac”

    Mac “push buttons” have borders. According to the Human Interface Guidelines, you should use a filled button for the primary action in a view. SwiftUI provides an easy way to get this: Apply the .borderedProminent style.

    The problem? This works for SwiftUI-targeting-iOS apps, and SwiftUI-targeting-Mac apps, and SwiftUI-targeting-Catalyst-in-iOS-mode apps, but not SwiftUI-targeting-Catalyst-in-optimized-mode apps. For just that mode, the button doesn’t get filled in.

    This seems to be a SwiftUI bug and not a Catalyst bug. If you write UIKit code, and create a UIButton and use UIButton.Configuration.borderedProminent() to create a button configuration, you’ll get a button that shows up in your “optimize for Mac” Catalyst app appropriately filled in.

    That, then, is the workaround for this bug. If you’re writing a Mac app using Mac Catalyst and you choose “optimize for Mac” for your UI mode, you cannot use the SwiftUI Button View for any button you want to display in the prominent “filled” style. Instead, you need to use UIViewRepresentable to create a UIButton and explicitly give it the UIButton.Configuration.borderedProminent() configuration.
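
    Here’s a sketch of that workaround (the action-forwarding details are mine; adapt as needed):

    import SwiftUI
    import UIKit

    /// Wraps a UIKit button so the prominent "filled" style renders correctly
    /// in a Catalyst app running in "optimize for Mac" mode.
    struct ProminentButton: UIViewRepresentable {
      let title: String
      let action: () -> Void

      func makeUIView(context: Context) -> UIButton {
        var configuration = UIButton.Configuration.borderedProminent()
        configuration.title = title
        return UIButton(configuration: configuration, primaryAction: UIAction { _ in action() })
      }

      func updateUIView(_ uiView: UIButton, context: Context) {
        uiView.configuration?.title = title
      }
    }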

    Bug: Toolbars clip their toolbar buttons

    Update: This was a bug in MacOS Ventura that was fixed in Beta 6.

    This is another one that appears to be SwiftUI + Mac Catalyst + “optimize for Mac” specific. In this mode, the toolbar clips its buttons. Instead of this:

    A sample app with a proper Mac toolbar

    you get this:

    A sample app with a broken Mac toolbar

    Note that the top & bottom of the toolbar icon are clipped.

    While I haven’t coded this yet, I suspect the answer is going to be the same as above: Use UIKit to manage your toolbars if you want a Mac-style toolbar in your Catalyst app.

  33. August 21, 2022

    Today I learned that a UISplitViewController behaves differently in a Mac Catalyst app when it is the root of a window versus when it is wrapped inside another view controller. If you want Mac-style toolbar behaviors, make it the root of a window.

    Library Notes uses a UISplitViewController for its main screen. When I first wrote this app, I was deep into the “composition instead of inheritance” philosophy, and I used view controller containment to avoid subclassing UISplitViewController. I created a class called NotebookViewController to manage the UISplitViewController. NotebookViewController creates the split view controller & adds it as a child view controller that completely fills its view.

    Running on iOS, it looks like a normal UISplitViewController that fills the whole screen.

    Now that I’m working on porting Library Notes to the Mac using Catalyst, though, I noticed something: Even when I tell Xcode that I want to optimize for the Mac interface, I still get iOS-style bar button items displayed in the navigation bar instead of Mac-style buttons displayed in the toolbar. My UI looked like this:

    UISplitViewController with iOS-style buttons

    I suspected that Mac Catalyst did something different if the window’s rootViewController is a UISplitViewController, so I rewrote NotebookViewController to be a UISplitViewController rather than contain a UISplitViewController. Sure enough, after that simple change, my UI looked like this:

    UISplitViewController with a Mac-style toolbar

    In retrospect, this makes perfect sense. Mac Catalyst will adapt iOS components to a Mac interface based upon what those components are. Something to keep in mind if, like me, you are an iOS developer venturing into Mac-land for the first time.
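
    In scene-delegate terms, the fix amounts to this (an illustrative sketch, not Library Notes’ actual code):

    import UIKit

    final class SceneDelegate: UIResponder, UIWindowSceneDelegate {
      var window: UIWindow?

      func scene(
        _ scene: UIScene,
        willConnectTo session: UISceneSession,
        options connectionOptions: UIScene.ConnectionOptions
      ) {
        guard let windowScene = scene as? UIWindowScene else { return }
        let window = UIWindow(windowScene: windowScene)
        // NotebookViewController is now a UISplitViewController subclass, so
        // Catalyst sees a split view controller at the window's root and
        // produces a Mac-style toolbar.
        window.rootViewController = NotebookViewController(style: .doubleColumn)
        self.window = window
        window.makeKeyAndVisible()
      }
    }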

  34. August 14, 2022

    tl;dr: If you’re trying to use UIDocumentPickerViewController or UIDocumentBrowserViewController from a Mac Catalyst app and always get stopped in the debugger with the message “this class is not key value coding-compliant for the key cell”, just temporarily disable breakpoints and continue. Everything will work.

    I’m embarrassed how much time I lost on this problem.

    When I first worked on getting Library Notes ready to submit to the App Store, I figured I should also try this new-fangled “Mac Catalyst” technology and get a version of the app that runs on the Mac, too.

    In my notes at the time, I wrote:

    Update Jan 14, 2021 — The Catalyst app crashes on launch with an error about an NSView not being key-coding compliant for “cell”, and I have no idea how to debug further. So, I’m just going to ignore making a Mac app for now.

    I didn’t pursue the Mac version of this project because, at the time, it worked just as well for me as an iPad / iPhone app.

    Fast forward 18 months, and I’m getting ready to go on a long series of back-to-back business and personal trips, and I didn’t want to bring both my iPad Pro and Mac. Suddenly it really bothered me that I didn’t have a Mac version of Library Notes that I could use to continue to update my reading notes while on my trip. There was also a brand-new version of Xcode, and a lot of hubbub about “desktop-class iPad apps” at WWDC. Surely, this problem about an NSView not being key-coding compliant for “cell” is fixed with the new developer tools, right?

    Wrong.

    I fire up the Xcode project, set it to target Mac Catalyst, hit Run in Xcode, and almost immediately hit the error message:

    Thread 1: “[<NSView 0x14363e4f0> valueForUndefinedKey:]: this class is not key value coding-compliant for the key cell.”

    The frustrating thing was that this error doesn’t originate in my code. As near as I can tell, it comes from using UIDocumentBrowserViewController in a Catalyst app on a Mac with a Touch Bar. I only found one other person on Twitter who had this problem. Google searches turned up nothing. I tried:

    • Refactoring my code to use UIDocumentPickerViewController instead of UIDocumentBrowserViewController. Same problem as soon as I bring up the picker.
    • I created a custom UIDocumentPickerViewController subclass and used that instead. I tried manipulating every Touch Bar hook in UIViewController to see if I could make the problem go away. No luck.
    • Creating a new sample app that just brought up a UIDocumentBrowserViewController. This one worked, so I systematically started looking for differences between Library Notes and the test app. I changed random Info.plist properties, target SDK versions; really, anything I could think of. This took about a day.
    • Finally, I decided to run my broken Catalyst app in Instruments to see if there’s any code running at app start that I’ve forgotten about. “Maybe,” I thought to myself, “something running at app start is putting things in a funny state?” Imagine my shock when the app worked when connected to Instruments.
    • This is what led me to discover that everything works when I run the Release build. Something related to optimizations?
    • No! Even dumber. Everything works if I don’t have breakpoints enabled when running the app in the debugger. It’s not enough to just hit “continue” when you run into this problem. It will just happen again. However, if you get this message in the debugger, just disable breakpoints then click Continue. The document picker will work.

    This one problem set back the Mac version of Library Notes by over a year. (Facepalm.)

  35. July 23, 2022

    Recently I’ve been exploring using “files in a Git repository” as the main storage for iOS and Mac apps. I’ve got two little projects using this.

    The key technology that enables this approach is libgit2, which is a C language implementation of the core git methods. There are at least two popular ways to use libgit2 from iOS / Mac. The first, ObjectiveGit, is Objective-C bindings to the C API. The second, SwiftGit2, is a set of Swift bindings.

    Being me, I wound up going with neither of these libraries. Things that made me shy away:

    1. Neither project uses Swift Package Manager, which I use exclusively in my personal projects.
    2. Neither project has been updated recently. SwiftGit2 links against version 1.1 of libgit2 (the library is now, at the time of this writing, at version 1.5). ObjectiveGit is worse, linking against version 0.28.1!

    So I approached adding git integration to Swift iOS/Mac apps from first principles and created two projects:

    1. static-libgit2 is a Swift package that exposes the libgit2 C API through the Clibgit2 module. This project is a modification of LibGit2-On-iOS and follows the same basic strategy:

      1. Use build scripts to build libgit2 and its dependencies (libssh2, openssl) and create a single xcframework for all of the necessary SDK and architecture variations.
      2. Create a Package.swift file to let projects include the xcframework through Swift Package Manager.

      static-libgit2 is pretty stable, and if you want to just use the C APIs in a Swift app, it gives you what you want. It’s ready for public consumption now. (There’s a small sketch of calling the C API after this list.)

      import Clibgit2
      import SwiftUI
      
      struct ContentView: View {
          var body: some View {
              Text(LIBGIT2_VERSION)
                  .padding()
          }
      }
    2. AsyncSwiftGit is much more experimental and much less stable. It’s my attempt to write Swift wrappers around the C API that uses the new concurrency features of Swift 5.5. For example, instead of passing in C-compatible callback functions when fetching changes from a remote repository, I can write:

      for try await progress in repository.fetchProgressStream(remote: "origin", credentials: credentials) {
          // do something with `progress` here
      }

      This is “more experimental and less stable” because I’m still figuring out the right way to use Swift concurrency, the best way to design wrappers around a C API, etc. This one is not yet ready for public consumption.
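
    As promised above, here’s a small sketch of calling the libgit2 C API directly from Swift. (git_libgit2_init and git_repository_open are real libgit2 functions; the wrapper itself is illustrative and skips proper error reporting.)

    import Clibgit2

    /// Opens an existing git repository, returning an opaque `git_repository` pointer.
    func openRepository(atPath path: String) -> OpaquePointer? {
      git_libgit2_init()
      var repository: OpaquePointer?
      // libgit2 functions return 0 (GIT_OK) on success; details about
      // failures are available from git_error_last().
      guard git_repository_open(&repository, path) == 0 else { return nil }
      return repository
    }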

    Overall, though, I’ve been really impressed with how fun and reliable it is to use git as the main storage system for personal programming projects! I predict I’ll be using it more and more.

  36. October 22, 2021

    …the lesson is clear: a central challenge to improving the way we learn is finding a way to interrupt the process of forgetting.

    (Make it Stick: Brown & Roediger)

    I started my love affair with books in elementary school. By the time I entered high school, though, I noticed I was forgetting most of what I’d read. Sometimes I couldn’t even remember that I had read a book at all, much less remember what the book was about. This started a mild obsession of figuring out ways to remember more of what I’ve read. I wrote about books in my journal; I kept a running bibliography of books for a few years; I cataloged and reviewed books on LibraryThing.

    My Library Notes app is my current system to help me remember what I’ve read. While the app has only been on the App Store for a few weeks, I’ve been using this app for about three years. I’ve noticed that I use Library Notes differently for different kinds of books. Basically, there are four “levels” to how involved I am with a book, and I can use Library Notes for all four levels:

    1. For hundreds of books, I just use Library Notes as a book cataloging app. I just want a record that the book is in (or has been in) my personal library. (Alas, I buy more books than I read!) Title, author, cover image: That’s all I want. Because Library Notes can scan the book’s ISBN barcode and look up bibliographic information & cover images from the Internet, the cataloging process is fairly streamlined.
    2. For a lot of fiction books I read for fun, I add a little more information: A quick blurb about the book and a star rating. There’s no formula for how much I write about each book, but recently I’ve been happy with the following pattern: I write the names of the main characters and the rough plot arc. I’ll probably forget everything else about the book within a year, but this is enough to help me recommend books to friends: I sort my library by star rating, and then I can say, “Oh yeah! Have you read anything by Tana French? She’s great…” It doesn’t matter that I can’t remember the plot of In the Woods — I remember that I loved the book and that my friends will probably love it, too.
    3. For books I really want to remember, I use a technique I first learned from The Well-Educated Mind, by Susan Wise Bauer: I write down an outline of the book in my notes. This is much more intensive than writing a quick blurb when I’m done with the book. When I’m in this mode, I’m going back-and-forth between the book and Library Notes after each chapter, creating a chapter-by-chapter summary of what I’ve read. However, I’ve noticed two things: First, probably because of the work I’ve put into creating the outline, these books remain much more firmly lodged in my brain in the first place. Second, if I do need a refresher on what’s in the book, rereading the outline brings back a lot more detail than reading my quick “character-and-plot-arc” blurb. I save this for “serious” reading.
    4. Outlines are great. However, the science is clear: If you really want to cement something in your brain, the best techniques are active recall and spaced repetition. For the most interesting & challenging works I read, I spend time to create active recall prompts in my notes (either question & answer prompts, or fill-in-the-blank prompts). I can then use Library Notes’ review mode to quiz myself on the prompts.

    Library Notes isn’t an app for everyone, but I’m really happy with how it scales from “simple cataloging” to “advanced memory tool with active recall and spaced repetition.” It’s been a great companion on my reading journey. If you think it’s something that would help you, you can get it on the App Store now. It’s software made for the love of books, not to be a business, and is now and will always be free.

  37. August 21, 2021

    I’ve recently extracted another module out of Grail Diary: KeyValueCRDT.

    It turns out designing a file format that works in an era of cloud document storage is hard! Cloud documents and mobile devices make it really easy for people to make conflicting changes to the same document. It’d be nice to provide a better experience for people than a “pick which version of the file to keep” dialog box.

    The key to avoiding the “pick the version of the file to keep” dialog is making your file format a Conflict-Free Replicated Data Type (CRDT). With a CRDT, you can reliably merge changes made from multiple devices rather than forcing a person to pick which file version to keep.

    My goal with KeyValueCRDT is to provide a CRDT implementation that can work as a file format for a wide range of applications. There are more details about the API on the GitHub page, but here’s the bullet-point summary:

    • KeyValueCRDT uses SQLite for its storage, for all of the reasons listed in SQLite As An Application File Format.
    • The data model is a key-value store.
    • Values can be text, JSON, or arbitrary data blobs. Text values are indexed using FTS5 for fast full-text search.
    • At its core, KeyValueCRDT is an observed-remove set and provides multi-value register semantics. When you read a key from the database, you may get multiple values returned if there were conflicting updates to that key. (There’s a toy sketch of these semantics after this list.)
    • In addition to the underlying database operations, the module provides a UIDocument subclass that lets you integrate with the iOS document ecosystem (including iCloud documents). The module also provides a command-line tool (kvcrdt) to allow you to inspect and manipulate the database from scripts.
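
    To make the multi-value register idea concrete, here’s a self-contained toy version (this is not KeyValueCRDT’s actual implementation, just a sketch of the semantics): every write is tagged with a “dot” (writer plus per-writer counter), and merging keeps any value the other side hasn’t observed-and-removed.

    import Foundation

    struct Dot: Hashable {
      let replica: String
      let counter: Int
    }

    struct MultiValueRegister {
      private(set) var values: [Dot: String] = [:]
      /// The highest counter this register has seen from each replica.
      private var observed: [String: Int] = [:]

      /// A local write observes (and removes) all current values.
      mutating func set(_ value: String, from replica: String) {
        let counter = (observed[replica] ?? 0) + 1
        observed[replica] = counter
        values = [Dot(replica: replica, counter: counter): value]
      }

      mutating func merge(_ other: MultiValueRegister) {
        // Keep our values that the other side either still has or hasn't seen yet...
        var merged = values.filter { dot, _ in
          other.values[dot] != nil || dot.counter > (other.observed[dot.replica] ?? 0)
        }
        // ...and add the other side's values that we haven't observed-and-removed.
        for (dot, value) in other.values where dot.counter > (observed[dot.replica] ?? 0) {
          merged[dot] = value
        }
        values = merged
        observed.merge(other.observed) { Swift.max($0, $1) }
      }

      /// After concurrent, conflicting writes, a read returns all surviving values.
      var currentValues: [String] { Array(values.values) }
    }

    // Two devices edit the same key while offline, then sync:
    var phone = MultiValueRegister()
    var laptop = phone
    phone.set("edited on iPhone", from: "phone")
    laptop.set("edited on Mac", from: "laptop")
    phone.merge(laptop)
    print(phone.currentValues.sorted()) // ["edited on Mac", "edited on iPhone"]: no lost update

    Reading both values back is exactly what lets an app show a merge UI instead of the “pick which version of the file to keep” dialog.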

    Currently I use KeyValueCRDT for the document format for Grail Diary, and I hope it will be a useful format for other applications as well.

  38. June 28, 2021

    Yes, async / await is going to be great. However, Xcode 13’s DocC documentation compiler is currently the most inspiring feature for me. For the past several days I’ve been pulling out the building blocks of Grail Diary into separate packages and revamping the documentation. Often, when trying to write the documentation, I’ve realized that the APIs themselves are awkward, so I’ve refactored those as well. While this work hasn’t done much to make Grail Diary feel different when using it, I’m feeling awesome because the foundation of the program is getting more solid.

    Since the new documentation toolchain is in beta, I’m isolating this work in an xcode13 branch across the following repositories:

    • SpacedRepetitionScheduler for recommending times to review prompts in a spaced-repetition system
    • BookKit for utility routines for dealing with different book web services, like Google Books, Open Library, Goodreads, and LibraryThing.
    • TextMarkupKit for parsing and formatting text as you type.

  39. June 23, 2021

    Yesterday, I released TextMarkupKit. This is the core text processing code that I use for Grail Diary — it handles all of the text parsing, formatting, etc.

    If you’ve ever wanted to build an iOS app that does automatic formatting of plain text as you type, check out TextMarkupKit. It might be exactly what you need.

  40. May 22, 2021

    In my quest to make Grail Diary a great app for book lovers, I’ve just finished adding a feature I’ve wanted for a while. In a stroke of marketing genius, I’m calling it Random Quotes. It does exactly what it says: It scans through your book notes for five random quotes and shows them, nicely formatted, on a single page. Want to see a different selection of quotes? Just hit the Shuffle button.

    The goal here is perusability. When you flip through your book notes, you get to revisit the books in your mind. It’s like dropping in on old friends. Random Quotes tries to make this easy and fun.

    The feature’s only a few hours old but I’ve gotten a lot of joy from hitting the Shuffle button!

    Random Quote Screenshot

  41. May 12, 2021

    Once upon a time, I was going to take what I learned writing the custom syntax-highlighting editing component of Grail Diary and turn it into a stand-alone tutorial on text editing. I ran out of time to work on that after writing one item: An overview of how you can take a custom data structure for text editing (a piece table) and give it a natural API by conforming to Swift Collection. I don’t want this material to die, so I’ve moved it over here.

    The Theory

    What’s so hard about editing text? Let’s ignore for the moment the problems with even storing Unicode text, with its encodings, multi-byte characters, etc. If you put those considerations aside, the abstract model for a text file is an array of characters. An array is about as simple a data structure as you can get. What’s the problem?

    The answer, of course, is that changing things in an array can be expensive. Appending to or removing from the end of an array is cheap. Any other operation, though, means copying elements to make room for the new element (or to remove existing elements). And of course in a text editor, you want to make changes all throughout the text, not just at the end. That’s kind of the point. If your editor’s main data structure for text is “an array of characters”, it’s doing a ton of memory copying on every keystroke whenever the cursor is anywhere but the very end of the file.

    So we need something better. But what?

    One option is to store the file as a linked list of lines, and each line is an array of characters. You still need to do copying as you insert and remove characters, but you’re now only copying characters on the same line instead of all characters to the end of the file. If you’re implementing a source code editor, where you can assume that lines are all of a reasonable maximum length, you can get far with this approach.

    [Figure: a file stored as a linked list of lines]

    Next up in sophistication is a data structure known as a gap buffer. The main idea behind a gap buffer is that edits to a text file aren’t randomly distributed throughout the text file — they exhibit a lot of locality. If you insert a character at offset 42 in the file, the next insertion is much more likely to be at offset 43 than at any other location, and the next deletion is more likely to be at offset 42 than at any other location. Basically, edits are likely to happen wherever the cursor is. A gap buffer makes edits at the cursor really cheap, but you pay a cost to move the cursor.

    A gap buffer does this by storing the text in an array that’s much larger than what’s needed to store the text. This gives you a lot of free space inside the array (the “gap”), and the key insight is you can pay a cost to move the gap to the location of the cursor to make insertions and deletions at the cursor really cheap.

    [Figure: a gap buffer]
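
    To make the idea concrete, here’s a toy gap buffer (a sketch for illustration, not Scrap Paper’s actual code):

    import Foundation

    /// A toy gap buffer over UTF-16 code units. The gap sits at the cursor,
    /// so inserting there is O(1) until the gap runs out of room.
    struct GapBuffer {
      private var storage: [unichar]
      private var gapStart: Int
      private var gapEnd: Int

      init(_ string: String, capacity: Int = 1024) {
        let characters = Array(string.utf16)
        storage = characters + Array(repeating: 0, count: max(capacity - characters.count, 0))
        gapStart = characters.count
        gapEnd = storage.count
      }

      /// Moves the gap so it begins at `position`. This is the cost you pay
      /// when the cursor moves.
      mutating func moveGap(to position: Int) {
        while gapStart > position {
          gapStart -= 1
          gapEnd -= 1
          storage[gapEnd] = storage[gapStart]
        }
        while gapStart < position {
          storage[gapStart] = storage[gapEnd]
          gapStart += 1
          gapEnd += 1
        }
      }

      /// Inserts a character at the gap: O(1) while the gap has room.
      mutating func insert(_ value: unichar) {
        precondition(gapStart < gapEnd, "A real implementation would grow the storage here.")
        storage[gapStart] = value
        gapStart += 1
      }
    }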

    While you can implement world-class editors with a gap buffer, for Scrap Paper we’re going to use a third approach, called a Piece Table. Remember how we said that appending to the end of an array is cheap? The piece table exploits that by keeping two arrays. One read-only array contains the original file contents. The second append-only array contains all characters inserted at any time during the editing session. Finally, the piece table tells you how to build the file as a sequence of “pieces” from the different arrays.

    [Figure: a piece table]

    Just as with the gap buffer, a piece table works efficiently because most edits to a text file are localized. When you insert character after character into the same spot, you’ll end up with a pretty compact representation of the “pieces” constructed from the two arrays. For example, I edited this file in a version of Scrap Paper that recorded all of the changes that I made to the text file (backspaces and all). At the end of my editing session of 2276 individual edits, I had 48 pieces representing the contents of the file.

    One more bit of theory: String, NSString, and unicode

    I glossed over the challenges of representing text earlier. It’s now time to pay a little attention to that.

    1. The Swift String struct and the Objective-C NSString class made different engineering choices about how to store and model strings. Swift models its strings as an array of “characters” and encodes those characters in UTF-8. The NSString class, in contrast, does not expose individual Unicode characters, and it uses UTF-16 encoding internally.
    2. The TextKit classes are from the NSString era.
    3. Since we will be interfacing a lot with TextKit, we’re going to use the NSString convention and model our text as an array of UTF-16 code points.

    Let’s build a Piece Table!

    With the theory out of the way, it’s time to do some building.

    /// A piece table is a range-replaceable collection of UTF-16 values. At the storage layer, it uses two arrays to store the values:
    ///
    /// 1. Read-only *original contents*
    /// 2. Append-only *addedContents*
    ///
    /// It constructs a logical view of the contents from an array of slices of contents from the two arrays.
    public struct PieceTable {
      /// The original, unedited contents
      private let originalContents: [unichar]
    
      /// All new characters added to the collection.
      private var addedContents: [unichar]
    
      /// Identifies which of the two arrays holds the contents of the piece
      private enum PieceSource {
        case original
        case added
      }
    
      /// A contiguous range of text stored in one of the two contents arrays.
      private struct Piece {
        /// Which array holds the text.
        let source: PieceSource
    
        /// Start index of the text inside the contents array.
        var startIndex: Int
    
        /// End index of the text inside the contents array.
        var endIndex: Int
    
        /// True if the piece contains no characters. (The editing code later in this post uses this to avoid storing empty pieces.)
        var isEmpty: Bool { startIndex >= endIndex }
      }
    
      /// The logical contents of the collection, expressed as an array of pieces from either `originalContents` or `addedContents`
      private var pieces: [Piece]
    
      /// Initialize a piece table with the contents of a string.
      public init(_ string: String) {
        self.originalContents = Array(string.utf16)
        self.addedContents = []
        self.pieces = [Piece(source: .original, startIndex: 0, endIndex: originalContents.count)]
      }
    }

    This code defines the stored properties we need for a piece table:

    • originalContents is the read-only copy of the characters from the file we are trying to edit.
    • addedContents is an append-only array of all characters added during an edit session.
    • pieces describes the logical contents of the file as a series of contiguous characters from either originalContents or addedContents.

    Conforming to Collection

    To make PieceTable feel Swift-y, we’re going to make it conform to a few standard protocols. First: Collection — this will let users read characters from a piece table as easily as reading characters from an array. In Swift, a Collection is a data structure that contains elements that can be accessed by an index. If you’ve used arrays in Swift, you’ve used a collection.

    The Collection protocol is big. While it contains over 30 methods, most of those have default implementations. To create a custom Collection, this is all you need to implement:

    // The core methods of a Collection.
    // Everything here should have O(1) complexity.
    protocol Collection {
      associatedtype Element
      associatedtype Index: Comparable
      var startIndex: Index { get }
      var endIndex: Index { get }
      func index(after position: Index) -> Index
      subscript(position: Index) -> Element { get }
    }

    If your only exposure to Collection has been through arrays, you may have assumed that the index needs to be an integer. Not so! The Collection protocol gives implementations a ton of flexibility about the index type. You can use any type so long as:

    1. You can efficiently return the index of the first element of the collection
    2. You can efficiently return the index that means “you’ve moved past the last element of the collection”
    3. Given an index, you can efficiently return the next index in the collection.

    For our piece table, we are going to need a custom index type. To find a character in the piece table, we will use two values: The index of the piece in the pieces array, and then the index of the character within the contents array. With this information, we can easily figure out the character at an index (use the piece index to find the correct contents array, then return the character at the correct character index). It’s a tiny bit more complicated to figure out the index that comes after a given index, because you have to consider two cases: If the current index represents a character at the end of a piece, you have to move to the next piece; otherwise you move to the next character in the current piece.

    With this overview, here is the minimal code to have a piece table conform to Collection:

    extension PieceTable: Collection {
      public struct Index: Comparable {
        let pieceIndex: Int
        let contentIndex: Int
    
        public static func < (lhs: PieceTable.Index, rhs: PieceTable.Index) -> Bool {
          if lhs.pieceIndex != rhs.pieceIndex {
            return lhs.pieceIndex < rhs.pieceIndex
          }
          return lhs.contentIndex < rhs.contentIndex
        }
      }
    
      public var startIndex: Index { Index(pieceIndex: 0, contentIndex: pieces.first?.startIndex ?? 0) }
      public var endIndex: Index { Index(pieceIndex: pieces.endIndex, contentIndex: 0) }
    
      public func index(after i: Index) -> Index {
        let piece = pieces[i.pieceIndex]
    
        // Check if the next content index is within the bounds of this piece...
        if i.contentIndex + 1 < piece.endIndex {
          return Index(pieceIndex: i.pieceIndex, contentIndex: i.contentIndex + 1)
        }
    
        // Otherwise, construct an index that refers to the beginning of the next piece.
        let nextPieceIndex = i.pieceIndex + 1
        if nextPieceIndex < pieces.endIndex {
          return Index(pieceIndex: nextPieceIndex, contentIndex: pieces[nextPieceIndex].startIndex)
        } else {
          return Index(pieceIndex: nextPieceIndex, contentIndex: 0)
        }
      }
    
      /// Gets the array for a source.
      private func sourceArray(for source: PieceSource) -> [unichar] {
        switch source {
        case .original:
          return originalContents
        case .added:
          return addedContents
        }
      }
    
      public subscript(position: Index) -> unichar {
        let sourceArray = self.sourceArray(for: pieces[position.pieceIndex].source)
        return sourceArray[position.contentIndex]
      }
    }
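
    A quick usage sketch: once PieceTable conforms to Collection, the standard Swift iteration machinery just works.

    let table = PieceTable("Hello, piece table!")
    // `Array(table)` and `table.count` come for free with Collection conformance.
    print(String(utf16CodeUnits: Array(table), count: table.count))
    // Prints "Hello, piece table!"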

    Conforming to RangeReplaceableCollection

    We can now iterate through the contents of a PieceTable. However, we don’t have a way to modify the contents of the PieceTable. To add this capability, we are going to make PieceTable conform to RangeReplaceableCollection. This protocol has a single required method, replaceSubrange(_:with:). If you implement this method, you get a ton of other APIs for free.

    For our implementation of replaceSubrange, we have to do two high-level jobs:

    1. Append the new characters to the end of addedContents. Remember, in a piece table, we only ever add characters — never delete — and they always get added to the end of the array. This is the easy part.
    2. The hard part: Update pieces to reflect the new contents of the file. The performance of the piece table will depend on how many entries are in pieces, so we need to take care to avoid creating unneeded items.

    This implementation manages the complexity of updating the pieces array by creating a stand-alone change description that contains the new piece table entries. When constructing the change description, the implementation adheres to two rules to minimize the size of the pieces array:

    1. No empty pieces! If an edit creates a Piece with no characters, it’s removed.
    2. If it is possible to coalesce two neighboring pieces into one, do it.

    Here is the code that adds conformance to RangeReplaceableCollection:

    extension PieceTable: RangeReplaceableCollection {
      /// This structure holds all of the information needed to change the pieces in a piece table.
      ///
      /// To create the most compact final `pieces` array as possible, we use the following rules when appending pieces:
      ///
      /// 1. No empty pieces -- if you try to insert something empty, we just omit it.
      /// 2. No consecutive adjoining pieces (where values[n].endIndex == values[n+1].startIndex). If we're about to store
      ///   something like this, we just "extend" values[n] to encompass the new range.
      private struct ChangeDescription {
    
        private(set) var values: [Piece] = []
    
        /// The smallest index of an existing piece added to `values`
        var lowerBound: Int?
    
        /// The largest index of an existing piece added to `values`
        var upperBound: Int?
    
        /// Adds a piece to the description.
        mutating func appendPiece(_ piece: Piece) {
          // No empty pieces in our replacements array.
          guard !piece.isEmpty else { return }
    
          // If `piece` starts where the last piece in `values` ends, just extend that last piece.
          if let last = values.last, last.source == piece.source, last.endIndex == piece.startIndex {
            values[values.count - 1].endIndex = piece.endIndex
          } else {
            // Otherwise, append the new piece to `values`.
            values.append(piece)
          }
        }
      }
    
      /// If `index` is valid, then retrieve the piece at that index, modify it, and append it to the change description.
      private func safelyAddToDescription(
        _ description: inout ChangeDescription,
        modifyPieceAt index: Int,
        modificationBlock: (inout Piece) -> Void
      ) {
        guard pieces.indices.contains(index) else { return }
        var piece = pieces[index]
        modificationBlock(&piece)
        description.lowerBound = description.lowerBound.map { Swift.min($0, index) } ?? index
        description.upperBound = description.upperBound.map { Swift.max($0, index) } ?? index
        description.appendPiece(piece)
      }
    
      /// Update the piece table with the changes contained in `changeDescription`
      private mutating func applyChangeDescription(_ changeDescription: ChangeDescription) {
        let range: Range<Int>
        if let minIndex = changeDescription.lowerBound, let maxIndex = changeDescription.upperBound {
          range = minIndex ..< maxIndex + 1
        } else {
          range = pieces.endIndex ..< pieces.endIndex
        }
        pieces.replaceSubrange(range, with: changeDescription.values)
      }
    
      /// Replace a range of characters with `newElements`. Note that `subrange` can be empty (in which case it's just an insert point).
      /// Similarly `newElements` can be empty (expressing deletion).
      ///
      /// Also remember that characters are never really deleted.
      public mutating func replaceSubrange<C, R>(
        _ subrange: R,
        with newElements: C
      ) where C: Collection, R: RangeExpression, unichar == C.Element, Index == R.Bound {
        let range = subrange.relative(to: self)
    
        // The (possibly) mutated copies of entries in the piece table
        var changeDescription = ChangeDescription()
    
        safelyAddToDescription(&changeDescription, modifyPieceAt: range.lowerBound.pieceIndex - 1) { _ in
          // No modification
          //
          // We might need to coalesce the contents we are inserting with the piece *before* this in the
          // piece table. Allow for this by inserting the unmodified piece table entry that comes before
          // the edit.
        }
        safelyAddToDescription(&changeDescription, modifyPieceAt: range.lowerBound.pieceIndex) { piece in
          piece.endIndex = range.lowerBound.contentIndex
        }
    
        if !newElements.isEmpty {
          // Append `newElements` to `addedContents`, build a piece to hold the new characters, and
          // insert that into the change description.
          let index = addedContents.endIndex
          addedContents.append(contentsOf: newElements)
          let addedPiece = Piece(source: .added, startIndex: index, endIndex: addedContents.endIndex)
          changeDescription.appendPiece(addedPiece)
        }
    
        safelyAddToDescription(&changeDescription, modifyPieceAt: range.upperBound.pieceIndex) { piece in
          piece.startIndex = range.upperBound.contentIndex
        }
    
        applyChangeDescription(changeDescription)
      }
    }
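
    Here is a quick sanity check of the conformance, again leaning on the hypothetical string initializer: an insertion is a replacement of an empty range, and a deletion is a replacement with an empty collection.

    var table = PieceTable("the quick fox")  // hypothetical initializer, as before

    // Insertion: replace an empty range.
    let insertionPoint = table.index(table.startIndex, offsetBy: 10)
    table.replaceSubrange(insertionPoint ..< insertionPoint, with: Array("brown ".utf16))

    // Deletion: replace a range with an empty collection.
    table.replaceSubrange(table.startIndex ..< table.index(table.startIndex, offsetBy: 4), with: [unichar]())

    let units = Array(table)
    print(String(utf16CodeUnits: units, count: units.count))  // "quick brown fox"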

    Does it make a difference?

    For large file sizes, yes!

    I gathered a trace of all of the edits I made to a text buffer during a couple of minutes of editing. I then replayed that trace on an NSTextStorage object and on a PieceTable, timing how long it took to perform all of the edits on files of different sizes. This is the result:

    [Chart: time to replay the edit trace vs. file size, NSTextStorage vs. PieceTable]

    For the NSTextStorage, the time to perform the edits increases linearly with the file size. For PieceTable, however, the time to perform the edits is independent of file size; PieceTable operations will get slower as the complexity of the edit history increases.
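
    For the curious, the replay loop was morally equivalent to this sketch. TraceEdit and timeReplay are illustrative stand-ins, not my actual trace format:

    import Foundation

    /// One recorded edit: replace the characters at `range` (integer offsets) with `text`.
    struct TraceEdit {
      let range: Range<Int>
      let text: [unichar]
    }

    /// Replays `edits` against `table` and returns the elapsed wall-clock time.
    /// (A real harness would convert offsets to indexes more cleverly; walking
    /// from startIndex like this adds its own linear cost.)
    func timeReplay(_ edits: [TraceEdit], on table: inout PieceTable) -> TimeInterval {
      let start = Date()
      for edit in edits {
        let lower = table.index(table.startIndex, offsetBy: edit.range.lowerBound)
        let upper = table.index(lower, offsetBy: edit.range.count)
        table.replaceSubrange(lower ..< upper, with: edit.text)
      }
      return Date().timeIntervalSince(start)
    }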

  42. May 09, 2021

    Since deciding to focus Grail Diary on book notes, I’ve changed the app’s navigation model to be reading-focused instead of notes-focused.

    [Screenshot: the new reading-focused navigation]

    Next up is extending Review Mode. Currently, review mode is all about spaced repetition and active recall: the mode I implemented quizzes you on the material you are most likely to forget. However, part of the value of having your book notes all in one place is perusability, so I want a review mode that makes it easy to revisit your favorite quotes. (I still want the spaced repetition mode! I just also want another one that’s about revisiting your favorite quotes.)

    Finally, it’s time to look for an audience for Grail Diary. I’ve long been conflicted about whether I should put Grail Diary on the App Store. I don’t want to turn Grail Diary into a side hustle; that doesn’t sit right with me for some reason. However, this past week I re-read a post by Brent Simmons where he writes, “This is the age of writing iOS apps for love.” That struck a chord. I’ve been working on this app for years because I love books, I love writing apps, and Grail Diary makes my experience of reading better. Writing an app for love is why I’ve decided it’s worth the work of finding an audience for Grail Diary and putting it on the App Store. I know there won’t be a huge audience for this app, but I also know there is a niche somewhere. And just like my writing gets better when I get feedback, Grail Diary will become better when I find the right audience and start getting feedback. That’s my goal.

  43. April 30, 2021

    I last wrote:

    When I was working on Grail Diary, I was confused about whether I was writing a digital Commonplace Book (specifically designed for storing quotes and other things you want to remember about the books you read) or a general-purpose notes app with a spaced-repetition feature…

    I’ve been thinking about this a lot for the past few days, and I’ve made a decision: Grail Diary is going to be an app for taking notes about the books you read. That’s currently what I use it for, and I want to start evolving the app to make it as great as possible for any other book lover.

    Book notes are important enough to deserve a dedicated application! For a book lover, your book notes will be among the most important things you create in your life. Your notes help books become a part of you.

    I designed Grail Diary around three factors that make book notes different:

    • Permanent value: If you are a book lover, you can build up a reading log and notes over the course of decades. My log and notes go back over twenty years. Grail Diary uses simple plain-text markup so your notes will be readable forever, by almost any application.
    • Personal ownership: Your book notes are yours. You shouldn’t lose them when a software company goes out of business. Grail Diary works with simple files! No account, no sign-up, and you can move or copy your files wherever you want. You can use file synchronization services like iCloud Drive or Dropbox to keep your content in sync across multiple devices.
    • Perusability: The joy of writing book notes is rereading them! It’s like meeting old friends again. Grail Diary already has features to help you get reacquainted with what you’ve put in your notes, and I’ve got ideas for many more.

    Even if Grail Diary never makes it beyond “personal project,” I feel better with focus. I’m trying to decide whether I want to really polish the app and put it on the App Store, or whether I just want to keep it as an open-source app for motivated techies to benefit from. On the one hand, getting my own app back on the App Store would be a good ultralearning project. On the other hand, I’m kind of scared to take that step. If I release the app to the App Store and people don’t like it… can my ego handle it?

    That’s a decision for another day.

  44. April 25, 2021

    Three years ago today, I was reading the book Factfulness by Hans Rosling, and I wrote in my journal that I’d like a program that would add Anki-like spaced repetition to the notes I was making about the book. I wrote:

    I’m thinking of maintaining a simple text file with Markdown syntax, one bulleted line per “fact” I want to remember from a book. Markdown-underline something like this and it becomes a phrase that gets elided for an Anki card.

    That idea turned into Grail Diary, which I still use today to take notes about the books I read. The project has been a huge personal success. I write personal programs as a way to teach myself things, and by working on Grail Diary I cemented knowledge into my brain about piece tables, incremental packrat parsing, spaced repetition, SQLite, and iCloud document storage. I also have 760 prompts about the 66 books I’ve read in the past 3 years, and by regularly reviewing those prompts I’ve remembered the material far better than what I’d read in the prior 44. It was also the start of my journey into educational technology, which led me to leave Facebook and join Duolingo.

    Of all of my side projects, Grail Diary feels like it’s got the most potential to be useful for someone other than myself. However, it’s got one huge problem at its core: when I was working on Grail Diary, I was confused about whether I was writing a digital Commonplace Book (specifically designed for storing quotes and other things you want to remember about the books you read) or a general-purpose notes app with a spaced-repetition feature. As a result, it’s this strange mishmash of features. I doubt anyone else would understand why the software in its current form behaves the way it does.

    Who knows… maybe by the time Grail Diary turns 5, I’ll have picked “Commonplace Book” or “General-purpose notes app” as the primary identity for the project and it will have another user. Time will tell!

  45. April 24, 2021

    I’m awed by Scott Young’s MIT Challenge for its simplicity and audacity. In 2011, he gave himself one year to complete the full four-year MIT curriculum in Computer Science. MIT made most of its course material available for free online, including the tests and the answer keys, so Young could work at his own pace, at his own home, without spending any tuition money on the experiment. He successfully finished the project in 2012. In 2019, he published Ultralearning, a book that helps people plan big learning projects of their own.

    Young writes that for any ultralearning project, you should budget about 10% of your time on metalearning — making a plan to identify what you need to learn and how you will go about learning it. Furthermore, he advises you to break down what you need to learn into three buckets:

    1. New concepts that you need to understand
    2. New facts that you need to memorize
    3. New skills that you need to practice and acquire

    You do this because the techniques for learning efficiently differ across the three categories. I’m shocked I’d never thought about this before! After reading this, I understand my own learning shortcomings much better than I did before. I love love love learning that falls in the “understanding new concepts” category. I gravitated to subjects like math, physics, and computer science that are rich in first-principles conceptual understanding. However, the learning tools that help me pick up new concepts don’t help me pick up new skills or memorize things, so I struggled in subjects like foreign languages and art. I wish I’d had this book back in high school, so I’d have known I needed different tools to learn different things.

    Ultralearning also makes another important argument: as much as possible, you should structure your learning project around doing the thing you’re trying to learn how to do, and you need a way to get feedback on how you are doing. Do you want to learn a language so you can speak to locals when you travel? Then you should be speaking to locals as much as you can, as early as you can. (The reaction of native speakers gives you real-time feedback!) If you want to learn jazz guitar, you need to spend a lot of time playing guitar.

    The book devotes a few pages to arguing against the effectiveness of my employer, Duolingo, because it is a very indirect way to learn a language. At the same time, though, Young writes of the importance of using drills to isolate and improve specific skills for your learning project. Someone trying to improve at tennis will do more than play games; forehand, backhand, and serving drills isolate and improve the building-block skills for the game faster than games alone. Duolingo plays a similar role for serious language learners. It’s not a substitute for talking to native speakers, but the app does help you drill vocabulary and grammar. (Also! Duolingo provides way more than app-based translation exercises. You can use https://events.duolingo.com to find groups for practicing speaking and listening. You can use Duolingo Podcasts to practice understanding native speakers. And perhaps most importantly, Duolingo offers learners of all levels motivation to keep learning — the hardest part of learning a new language.)

    Anyone who is interested in learning, and in particular anyone interested in self-directed learning, should read Ultralearning. You’ll find a ton of helpful material. For those interested in educational technology, Ultralearning suggests two areas where technology seems uniquely positioned to help people learn faster and better: providing material to practice with, and providing feedback. All of the influential educational software I can think of — from Duolingo to Anki to Khan Academy to experimental efforts like the “mnemonic medium” — delivers in both of these dimensions. However, because Ultralearning assumes its reader is already highly motivated to learn, it doesn’t say much about one of the most interesting contributions of educational technology: successful technology makes learning fun and contains mechanisms to help sustain motivation over time.

  46. April 21, 2021

    Since lamenting the black-box nature of performance engineering in SwiftUI two days ago, I’ve spent some time familiarizing myself with the SwiftUI tools inside Instruments. While I’ve made some headway, I’ve also hit a wall.

    First, some context. As I mentioned earlier, Captain’s Log is a simple habit-tracking app that I want to use as a playground for experimenting with streaks, streak freezes, and the psychology of motivation. It’s currently a document-based app that stores its data in a plain text file. Right now there are only two screens. The main screen shows a day’s status on each habit and a calendar to help visualize how each streak is going. Then, there’s a second screen to tweak any additional details for completing a habit. (For example, to prepare for exiting quarantine, I’m trying to ride my bike at least a little every day. When I record a bike ride in Captain’s Log, I track how long and how far I ride.)

    [Screenshot: streak visualization]

    While performance isn’t terrible, there are noticeable lags in several interactions. For the past two evenings, I’ve been trying to eliminate one of those lags: a noticeable delay processing keystrokes in the edit form, with the first keystroke being the worst.

    Here’s what I learned after a few days poking around with the SwiftUI tools in Instruments.

    1. My pre-SwiftUI Instruments workflow of just looking at the heaviest stack frame in Time Profiler and optimizing that doesn’t work in this case. The heaviest stacks are all deep in SwiftUI library code that I don’t understand.
    2. When debugging UI glitches, the Core Animation tool is really helpful. All of the places where I noticed the UI lagging, like typing characters, were visible in the Core Animation track as “long transaction commits.” For instance, before any performance optimizing, there’d be a 135ms Core Animation commit when processing the first keystroke in my edit form. Having these sections called out in the track let me focus specifically on what was happening at these problematic times.
    3. Paul Hudson pointed out that you can use the Hide System Libraries option in Time Profiler to quickly find bottlenecks in your code instead of the SwiftUI code. This helped! I found a couple of places where I was doing excessive calendar math when drawing the streak visualization view. However, unnecessary calendar math was only about 25% of the CPU use during the long transaction commits — the rest is SwiftUI library code. With my optimizations I got the long commit down to 100ms. Better, but still way too long for processing a keystroke.
    4. The SwiftUI View Body tool showed that my view bodies aren’t that heavy. Most compute in 2-4 microseconds. In the span of a 100ms core animation commit, I spend 0.6ms computing view bodies for my own views and a total of 2ms computing all view bodies. 98% of the time is spent somewhere else.
    5. But here’s what I don’t understand: the SwiftUI View Properties tool shows that my main FileDocument struct changes on each keystroke. I assume that because the FileDocument changes, SwiftUI recomputes everything that depends on that document (basically, all the UI in the app) on every keystroke. I don’t understand this at all. Inspecting my code, it doesn’t look like the file document should be changing on each keystroke (the text fields are backed by a separate String binding, independent of the FileDocument, until you tap Done). I wrote some custom Binding code (like the sketch after this list) and set breakpoints everywhere I could think of to validate that the document is not changing on each keystroke. In spite of that, SwiftUI is convinced that it needs to recompute everything that depends on this document every time I enter a new character in a text field.
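
    For reference, the custom Binding code was a logging wrapper along these lines. loggingBinding is a name I’m using for illustration; it is not part of SwiftUI:

    import SwiftUI

    /// Wraps a binding so every write is logged, which makes it easy to confirm
    /// (via the print, or a breakpoint inside `set`) whether a value actually
    /// changes on each keystroke.
    func loggingBinding<Value>(_ binding: Binding<Value>, label: String) -> Binding<Value> {
      Binding(
        get: { binding.wrappedValue },
        set: { newValue in
          print("\(label) written at \(Date())")
          binding.wrappedValue = newValue
        }
      )
    }

    // Usage: TextField("Distance", text: loggingBinding($distanceText, label: "distance"))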

    I don’t know how to debug this further. This is exactly what I meant when I wrote earlier about the inherent tension between declarative UI frameworks and performance tuning. I’ve described what I want in my UI (“a monthly calendar with line segments showing how long I’ve maintained different streaks”). There are probably things I can do to make that “what” even more efficient. However, code that I don’t understand and don’t have access to has decided that the “how” of making my app work involves recomputing that calendar on each keystroke. I don’t know how to make the app responsive without the ability to influence that “how.”

    Since Captain’s Log is a toy app meant for me to learn, I’m just going to leave things as is and hope that Apple provides better performance guidance and tools at WWDC 2021.

  47. April 19, 2021

    Yesterday, I wrote about streaks and motivation. To let me experiment with streaks and streak freezes, I’ve started work on a simple habit-tracking app. One of my personal goals is to become proficient in SwiftUI, so I used SwiftUI for this project.

    The good news: My project, Captain’s Log, is done “enough” for me to use it. It’s also pleasingly compact (1200 lines of code). However, I’m now wrestling with performance. This is the slowest app I’ve written in a long time, and in spite of working as an iOS performance engineer at Facebook for years, I have no idea how to make this simple program faster. The time profiler tool in Instruments shows me deep, incomprehensible stacks that have nothing obvious to do with my SwiftUI code. The new View Body and View Properties tools are a little more helpful. For instance, one performance problem is that the app takes too long to process keystrokes. Using the new tools, it looks like my central document property updates on each keystroke, and this causes most of the app to redraw. However, I can’t figure out why this property is updating on each keystroke, nor can I tell if I’ve broken some intelligent View diffing that’s supposed to be happening. I feel stuck.

    When Apple introduced SwiftUI, they explained the difference between imperative and declarative programming with a sandwich shop metaphor. If you walk into a sandwich shop and say, “I’d like an avocado toast,” that’s like declarative programming. You’re describing what you want and you let the server figure out how to make it. To get an avocado toast imperative-style, you’d need to tell the server individual steps instead. (“First, I want you to get a slice of bread. Next, toast it for 2 minutes. Then, get a properly ripe avocado. Mush some avocado and spread it on the toast…“)

    I love this metaphor! It shows the promise of declarative frameworks — and also hints at why performance problems might be inherently harder to solve with them. Suppose I order an avocado toast at brunch, and the server disappears. 20 minutes pass. 30 minutes. Where’s my food? Since I don’t know the steps that the server takes to fulfill my order, there’s no way to figure out why things are taking so long. This seems to be the state of performance tuning in SwiftUI: You, sitting at a table, alone & hungry, wondering where your food is.

    Clearly, if I’m going to become proficient with SwiftUI, I’m going to need to learn some new performance skills. Paul Hudson has the best performance tuning guide I’ve found so far, and my next project is to see if I can use this to make Captain’s Log pleasantly snappy.

    Always new skills to learn!

  48. April 18, 2021

    Inside Duolingo, we have a saying: The hardest part about learning a new language is staying motivated. I didn’t appreciate this aspect of effective educational technology before I started working here. The best educational software will not only have great content: It will have mechanisms that help learners stay motivated to keep learning.

    Streaks are one of the most important mechanisms that Duolingo uses to keep people motivated. Streaks encourage people to do an activity a little bit every day by counting the number of consecutive days you’ve done something you care about (spent time studying a language, got some exercise, wrote in your journal, etc.). Skipped a day? Your streak counter resets.

    While tons of apps use streaks, Duolingo adds one twist that, as far as I know, is unique: the streak freeze. As you use the app, you earn the ability to buy streak freezes. Each streak freeze protects your streak for one full day of inactivity. Imagine: You’ve been studying Spanish dutifully every morning before breakfast for a month. But then one day you wake up feeling a little sick, sleep in a bit to recover… and since your routine was disrupted, you forget to practice Spanish that day. Most apps will say that you broke your 30-day streak, and the streak counter will reset the next time you practice. With Duolingo, though, if you had a streak freeze active for your account, your sick day would use up that streak freeze, but your streak would continue.

    Streak freezes dramatically increase the length of the streak you can build. Suppose you’ve got a 99% success rate at remembering to practice on Duolingo each day. Without streak freezes, you could expect your streaks to average around 100 days before they get broken. Impressive, yes! However, if you keep your account equipped with two streak freezes, you have to miss three days in a row to break your streak. With just a little bit of care, you can keep that streak going indefinitely. (If you didn’t take care and let chance dictate your streak length: that same 1% chance of forgetting gave you 100-day streaks in a world with no streak freezes. With two streak freezes always equipped, three consecutive misses is roughly a one-in-a-million event, so left entirely to chance you could expect your streak to last for thousands of years.)
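
    If you want to check that arithmetic yourself, here is a small Monte Carlo sketch (my own illustration, not Duolingo code). It assumes freezes get re-bought on every active day, so a streak breaks only after freezes + 1 consecutive missed days:

    import Foundation

    /// Simulates one streak. Each day has a `missProbability` chance of being skipped;
    /// a skipped day consumes a freeze, and freezes are re-bought on every active day,
    /// so the streak breaks only after `freezes + 1` consecutive misses.
    func simulateStreak(missProbability: Double, freezes: Int) -> Int {
      var activeDays = 0
      var consecutiveMisses = 0
      while consecutiveMisses <= freezes {
        if Double.random(in: 0 ..< 1) < missProbability {
          consecutiveMisses += 1
        } else {
          consecutiveMisses = 0
          activeDays += 1
        }
      }
      return activeDays
    }

    let trials = 100
    let total = (0 ..< trials).reduce(0) { sum, _ in
      sum + simulateStreak(missProbability: 0.01, freezes: 2)
    }
    print("Average streak: \(total / trials) days")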

    Longer streak lengths tap into two motivation centers in learners’ brains.

    1. Loss aversion: It just hurts so much to lose something you “own.” If you have a long streak, you’ll want to keep it. Each day your streak gets longer, your brain realizes it gets harder to replace if it breaks… so you care that much more about keeping it going.
    2. Identity: At some point, after practicing a language and caring for a long streak, it stops being something you do and starts being part of who you are. “I’m a person who practices languages at least a little every day.” As Angela Duckworth writes in Grit, once an activity becomes part of your sense of identity, it’s much easier to stick with, because your brain stops doing cost-benefit calculations.

    I’m not surprised that so many apps try to use streaks as a motivational tool — it’s a simple concept that’s simple to implement in almost any program. Streak freezes, on the other hand, require much more design and programming work.