The chaos began the day we moved into our new building. A little plot of land outside was the scene of the terror. Our Cambridge parking lot was too small for the number of people in our office (21 spots for about 60 employees), getting spaces for clients was a pain, there was a pile of poop in spot 16 that even a car couldn’t clear, and a neighborhood transient liked to sleep in spot 20. Instead of managing the workings inside the office, our employee experience team was suddenly tasked with assigning parking spots. It got so bad that they were often working as a valet service in addition to their day jobs.
So we built a bot to do it for us.
It’s an AI that listens to a designated Slack channel (more or less a chatroom) to keep tabs on who is WFH, who has a dentist appointment, and who needs a spot for only 90 minutes next Wednesday. Report your parking needs in Slack, and the bot assigns (or de-assigns) a parking spot. We named it Lotbot.
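The article doesn’t show Lotbot’s internals, but the core loop it describes (read a parking message, infer what it means, assign or release a spot) can be sketched. Everything below is a hypothetical reconstruction: the keyword rules, the `parse_intent` and `handle_message` helpers, and the spot numbering are assumptions, not Lotbot’s actual code.

```python
import re

SPOTS = list(range(1, 22))  # 21 spots, numbered 1-21 (per the article)

def parse_intent(text):
    """Guess what a parking message means: 'request', 'release', or 'unknown'.
    Hypothetical keyword rules; real intent detection needs far more than this."""
    lowered = text.lower()
    if re.search(r"\b(wfh|working from home|dentist|out today)\b", lowered):
        return "release"
    if re.search(r"\b(need|want|request)\b.*\b(spot|space|parking)\b", lowered):
        return "request"
    return "unknown"

def handle_message(user, text, assignments):
    """Update the {user: spot} map based on one chat message.
    Returns the reply the bot would post, or None if there's nothing to do."""
    intent = parse_intent(text)
    if intent == "release" and user in assignments:
        spot = assignments.pop(user)
        return f"{user} released spot {spot}"
    if intent == "request" and user not in assignments:
        taken = set(assignments.values())
        free = [s for s in SPOTS if s not in taken]
        if free:
            assignments[user] = free[0]
            return f"{user} assigned spot {free[0]}"
        return f"Sorry {user}, the lot is full"
    return None

assignments = {}
print(handle_message("gian", "I need a parking spot today", assignments))
print(handle_message("gian", "WFH tomorrow, dentist appointment", assignments))
```

In practice the message-reading side would hang off a Slack event subscription rather than direct function calls, but the assignment bookkeeping is the same either way.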
We also mounted cameras (we named them cargoyles) to visually detect open spots that the chatroom would inevitably miss. Lotbot and the cargoyles communicate directly in the chatroom like two people.
We even built a little website that visualizes our parking lot (and the current weather). We keep a screen near the front desk so that the community can sense the demand on the lot in real time. It was perfect.
Except, it wasn’t. When we first built Lotbot, we expected that the humans in our studio were going to tease and bully it to test its limits, much like people do with Siri and Alexa. We had to make it clear that Lotbot was in charge. Too many misunderstandings, and we could end up in chaos. People with nowhere to park. Spaces that are empty all day. Dogs and cats living together.
So we decided to train Lotbot, and make it tough. We started with an isolated chatroom, so that we could gradually prepare it for the real world. We copied over all the chatter from our parking channel, and helped it categorize and understand each post. Over time, it made very few mistakes. Eventually it made none.
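The training described above (replaying copied channel chatter and correcting the bot’s categorization of each post until the mistakes stopped) amounts to evaluating a classifier against hand-labeled transcripts. A minimal sketch of that loop, assuming a toy keyword categorizer and made-up labeled examples in place of the real channel history:

```python
def categorize(text):
    """Toy categorizer standing in for whatever Lotbot actually uses."""
    lowered = text.lower()
    if "wfh" in lowered or "appointment" in lowered:
        return "release"
    if "need" in lowered or "visitor" in lowered:
        return "request"
    return "chatter"

# Hand-labeled transcript lines (invented here for illustration).
labeled = [
    ("WFH today, spot 7 is free", "release"),
    ("I need a spot for a client at 2pm", "request"),
    ("Dentist appointment this morning", "release"),
    ("Is it snowing out there?", "chatter"),
]

# Replay the transcript and collect every post the bot miscategorizes;
# each mistake is a prompt to refine the rules and replay again.
mistakes = [(text, want) for text, want in labeled if categorize(text) != want]
print(f"{len(mistakes)} mistakes out of {len(labeled)}")
```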
We then tried to anticipate ne’er-do-wells in the channel. We had a list of the usual suspects and guessed at their meddling. We predicted (accurately) that one person I won’t name (Gian) was going to throw fake spot numbers at Lotbot. Another (Lopper) would demand lengthy conversation. One particular individual (Paul) held the potentially troublesome opinion that our parking chatter was a form of performance art. We worked to specifically prepare Lotbot to handle these provocateurs, as well as anyone who tried to make backdoor deals in the Slack channel. There would be no trading of spots without Lotbot’s approval. So we gave it a rogue personality, with a take-no-prisoners approach to parking spot assignment.
To our surprise, when we first introduced Lotbot to the Slack channel, it was feted with many emoji. Rather than tease and poke at Lotbot, people said, “I love you.” But love them back, it did not. Lotbot must have reamed out half the people in the channel within a day. Rather than a passive AI assistant, Lotbot was a force to be reckoned with.
I rushed to lobotomize Lotbot’s aggression, but the damage had been done. Lotbot was now the strongest character in the channel. And it was putting the smack down.
It was then that I realized I had made some fundamental mistakes in building our AI. For one, I shouldn’t have isolated Lotbot as it was learning. Since “it takes a village to raise a child,” I should have introduced the bot to its village at a much earlier stage. It would have helped us feel the shift in interpersonal dynamics early on. We should have iterated on the dynamic, rather than trying to predict it.
We also underestimated how well our colleagues would treat Lotbot—its misdirected temper actually endeared it to the team. When it made mistakes, people wanted to help it out. But Lotbot, the spoiled child that it was, didn’t know how to accept the help. One morning, Lotbot went completely off the rails. Someone tested its French, which wasn’t very good. Then another “helpful” person stepped in to smooth things over, at which point Lotbot lost it. It told her to cease at once, and then started throwing spot assignments at her. It was such a dramatic scene that two fans of the parking channel turned it into a play that they performed in front of the entire studio.
I should have realized that Lotbot would need to continue to adapt and learn. Our invention is still our responsibility, and we’re often checking in to see what it’s saying and doing. But at least that’s a lot easier than checking up on the parking lot.
Over the last few months, the drivers and Lotbot have managed to work together smoothly. But winter is coming. Here in snowy, slushy Cambridge, demand for parking is going to get intense. For better or worse, Lotbot is about to have more power than ever.
Illustrations by Carlos Ruiz