This is a talk I gave about Heartbeat.


Isaac: All right, hey folks, I'm Isaac. I made this thing called Heartbeat that you guys have been subjected to week after week. So thank you, first of all, and I'm here to give a very overdue overview on this thing that you all know now exists.

It started out as an experiment: what if we knew everything? So envision with me, if you will, a future in which everyone knows everything. What if everyone was actually on mission? That's interesting to me.

I've thought about this a lot, and I'm going to begin with the answer I've come up with. If all of us knew everything, we'd see all the context around the friction points that exist and we'd smooth them out. We'd understand what it would take to make things flow.

I mean that both metaphorically and psychologically. The patterns would make sense, the chaos would be gone, and we'd be able to focus productively on what we're doing with our entire selves. This is the stuff of sci-fi. Once everything is known, everything becomes consistent, and a productive order follows from there.

The big difference between us and "The Giver" or any other fictions that talk about this is that we are not in a closed system. We don't control everything. New people come in, bringing new ideas and new perspectives, and if we knew everything, we would see those ideas and we would assimilate them and we would grow from them. I think both these things are positive—achieving flow and achieving evolution. From my value set, anyway, those things are positive.

The Core Idea

So the question then becomes how do we get there? How do we know everything? Knowing everything is kind of a long shot. Instead, let's narrow the scope a little bit and talk about what's the smallest, easiest to extract, most useful chunk of data available. What are the things that make or break a team?

We talked about this a lot and we actually got this wrong at first because we were talking about culture specifically. This whole thing came out of a culture guild discussion in which we were trying to figure out what makes good culture and how do we tell if we've made things better or worse?

We started out with this list. These are the things that we thought made our culture. We got the guild together, hashed out a bunch of things that we actually cared about, and distilled it into these five-ish things. These are the tenets of culture that we came up with and this makes sense. All these things that you see here are useful indicators of how well we are doing.

Then Blake came along and gave me a useful sanity check: if you care how people are doing, you should actually ask them how they're doing. So we added these, and the significance of that came up later. We noticed over time that the first set of metrics mostly produced flat lines, the response rate on them wasn't too high, and a lot of people had questions about what those things actually meant. It wasn't very naturally useful data.

Evolving the Metrics

A couple weeks ago we rehashed everything and came up with the metrics that you all saw this morning if you clicked through on that email—thank you to the 25 of you who did. We had a few conversations, Blake, me, and a bunch of others, and in an attempt to get to the core of what makes people tick, this is what we came up with.

Backing up a little bit in the design process, just to give you some background there, this is an early mockup of what this thing was going to look like, because having established the metrics we wanted to measure, we were trying to figure out how to actually structure and collect the data. One person's 60% happy might be somebody else's 80%. So I wanted to simplify things until only the core signals remained. Blake, the fountain of wisdom, had spoken again.

Blake: Jesus, how many times are you gonna say my name.

Isaac: An even number of options forces people to go to one side of the spectrum or the other. So we came up with something like this, and the four-point scale that we ended up with actually works well for a reason I didn't expect. It changes the choice from a matter of N degrees to two binary properties: good or bad, and a lot or not a lot. A score of one means bad, and by a lot. A score of two means bad, but not by much. Three is good-ish, and four is stellar. That actually works out well. We went through a couple of iterations that I'm skipping before arriving at what you all saw today. It's sequential to force people through it, because a lot of you like to drop off at the end of things. So that's been working pretty well.
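The decomposition Isaac describes can be sketched as a tiny function. This is just an illustration of the idea; the names are mine, not anything from the Heartbeat codebase.

```python
# Hypothetical sketch of the four-point scale: each score decomposes into
# two binary properties, a valence (good or bad) and an intensity
# (a lot or not by much).

def decompose(score: int) -> tuple[str, str]:
    """Map a 1-4 score to (valence, intensity)."""
    if score not in (1, 2, 3, 4):
        raise ValueError("score must be 1-4")
    valence = "good" if score >= 3 else "bad"
    intensity = "a lot" if score in (1, 4) else "not much"
    return valence, intensity

# 1 = bad, by a lot; 2 = bad, not by much; 3 = good-ish; 4 = stellar
```

The even number of options is what makes the valence property well-defined: there is no neutral midpoint to hide in.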

Analyzing the Data

But that's not the interesting part. The interesting part is the data, both for me and for you. What I want to talk about is some stuff that you can't get from the current Heartbeat interface. In the last 10 weeks, we've collected 4,500 data points, and there's a lot of really fun stuff in there. I'm excited for what's going to happen later on.

To begin with, let me talk about happiness. It's kind of a general non-specific query as it exists within Heartbeat. How are you doing? We can generally agree that more happiness is a good thing. Humans are generally optimized for it, but if we're being intentional about that optimization, we need to know what makes happiness. And good news, we have data for that now.

It's kind of what you would expect. If you mark down a three or four for happiness, 87% of the time you've also chosen a three or four for trust. They're very highly correlated. Also highly correlated is empathy, at 82%. Very, very similar things. In fact, all the things we currently measure are fairly highly correlated. You'll notice that the top ones (trust, empathy, and focus) were all things we added in the most recent round. What doesn't correlate strongly with happiness are these things, and note that these are all the metrics we removed in that same round. I think we did a good thing there.
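A number like "87% of the time you've also chosen a three or four" is a conditional co-occurrence rate rather than a formal correlation coefficient. A minimal sketch of how you might compute it from exported responses follows; the field names and sample data are invented for illustration, not taken from Heartbeat itself.

```python
# Share of responses scoring 3+ on some metric, among responses that
# scored 3+ on happiness. Responses are modeled as dicts of 1-4 scores.

def co_occurrence(responses, metric, anchor="happiness", threshold=3):
    """P(metric >= threshold | anchor >= threshold)."""
    anchored = [r for r in responses if r[anchor] >= threshold]
    if not anchored:
        return 0.0
    hits = sum(1 for r in anchored if r[metric] >= threshold)
    return hits / len(anchored)

# Invented sample data: four weekly responses.
responses = [
    {"happiness": 4, "trust": 4, "focus": 3},
    {"happiness": 3, "trust": 3, "focus": 2},
    {"happiness": 3, "trust": 2, "focus": 4},
    {"happiness": 1, "trust": 3, "focus": 1},
]

co_occurrence(responses, "trust")  # 2 of the 3 happy responses also trust
```

The same function with the condition flipped (anchoring on low happiness) gives the "unhappy folks still scored empathy high" numbers discussed below.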

Focus and Happiness

So it's not enough to only look at what correlates positively with happiness; we also need to look at what tears it down. Curiously, and this is not what I expected, the unhappy folks still had high-ish scores for empathy, trust, and community: 74%, 65%, 62%. Most of the unhappy people still thought empathy, trust, and community were doing fairly well. This was surprising to me. So we can say that these are not responsible for making people unhappy. What is responsible is focus. This is the thing that correlates more strongly than any of the others: as focus improves, so does happiness. So we can say that removing the things that tear down focus should result in a bump in happiness across the board.

Donny's been working very hard on making sure that teams can actually do this, so there's less of the flux that we've all been accustomed to over the last couple of years. It's my theory that if we continue down that road, things are going to get better here, which also intuitively makes sense. So it's cool that the data actually lines up.

Wrapping Up

So, what have we made? I'm not really sure yet. But the way that you guys are using it, it's sort of a town hall of a platform. Something that you all contribute to, something that you speak up in, something that lets you all know how well you yourselves are doing. It drives engagement and community and gives us really useful insights on what's actually going on internally. At least that's what we've done so far. It can do more than that, and I think we'll get there by doing more analysis. The data can help us see why people feel the way that they feel. What contributes to a good week or bad week. What builds or tears down unity.

The numbers that I just showed you are all the data that Heartbeat can provide, and more needs to happen there. I invite all of you to join me in this. It's an open source project and there's some cool stuff that can happen here. Beyond that, you know, it's open source, like I said, and it's my goal in the next couple of weeks to open this up as a hosted platform to any organization that actually wants to use this. We'll see how that goes. And if that's something that you're interested in, please talk to me, because I would love help getting this done.

At Enova, just to be clear, because Friday is actually my last day in a weird coincidence, the culture guild is going to be the steward of Heartbeat as it exists right here. We just had a meeting about that earlier this morning. We're still working out where the actual deployment is going to live. But regardless, Heartbeat will continue to beat here at Enova.

Audience Member: I get it.

Isaac: I'm sorry. I'm sorry about that.

Audience Member: I see what you did there.

Isaac: Yeah, yeah, yeah. So anyway, wrapping up. I have a ton of gratitude to the powers that be and to all of you for making this experiment actually a thing that worked well. This is something I wanted to try. I wasn't really sure how it would go, but this was a worthwhile thing and I'm really excited that this kind of thing can actually happen here. So I want to thank all of you for being part of making that a useful test. So that's actually it. Heartbeat.im, there's a GitHub repo, the whole thing is open source, come and play. Tomorrow's my last day. You can find me online, it's Isaac Bowen, I am all over the place. I'll send an email later. But anyway, in closing, I want to say that I love you all very much and I'm happy—

Audience: Aw.

Isaac: Thank you. Cheers.


Isaac: Questions?

Isaac: Oh, questions, I guess. Do any of you have questions? We can do those too. Nobody has questions. Blake?

Blake: So one of the things I noticed when I was looking at the data from my team, my vertical, was that occasionally my sentiment, my ideas about how things were going, didn't match up with my team's, and I felt like that was maybe something that was actionable on my part. Right?

Isaac: Absolutely.

Blake: I'm wondering if any of the people that you've talked with noticed any kind of insights that they had derived from the information, just the information they were able to see.

Isaac: Sure.

Blake: And the kinds of things people were able to figure out.

Isaac: Yeah, absolutely. As you might expect, the kind of data that gets reflected in the system varies based on what is actually happening on the ground. There's one team that did a release, and you could see it in the data: as they were approaching that release date, optimism was going up. People were really psyched about what was going to happen. Curiously enough, and this is back when we were testing communication awareness, a metric about how well people communicate with each other and how aware they are of what each other is doing, that metric was actually going down as optimism went up and the release date approached. Once the release happened, optimism tanked. But the communication score actually went up, because as a result of everything that went down, people were forced to talk to each other and sort out what to do next in this fresh hole they had discovered.

So yeah, there are things that actually are reflected in the data that come out of things that happen day to day. And if we look at that and understand what's actually going on, we can use that to handle situations better and to foresee the kinds of cliffs or jumps that would occur in these situations. So yes, definitely, and I expect to see that kind of thing going forward. I'm not going to be here, so I'm hoping that that works out and I would love to hear about anything that does happen. But yeah, definitely. Yes, that is a thing. Other questions? Comments, complaints?

Audience Member: Methodologically, how would you, what did you do to determine which metrics were not useful?

Isaac: Guesswork.

Audience Member: Okay.

Isaac: Mostly. A lot of this is highly non-scientific. In the conversations we had about which metrics were and were not useful, a lot of it did have to do with what was actually reflected in the data. The things we removed were basically flat lines across the board. If you average out the company, like the awareness metric, or something like that, most of the metrics that were removed were just threes across the board, or 2.5s, with very little variance. There was also very low engagement with those metrics: people weren't talking about them, and people weren't even responding to them much at the time. Anecdotally, I heard from a lot of you that some of them were just confusing.

So we tried to cut the things that didn't intuitively make sense, because at its core this should be a really intuitive tool. I'm having a conversation with you to understand what's going on in your head over the past week, and you should be able to respond without thinking too hard about the process. So we tried to settle on a list of metrics that naturally made sense and that also covered all the things that affect your week.

There's a Heartbeat channel in Slack, which some of you have been a part of, and the conversations there have actually been really useful. The trust metric came from a bunch of people who said, "Trust is important to me, please ask me about this so I can tell you and we can understand what's going on here." That came straight out of the community, and I never would've thought of it. So it was a combination of user voices and understanding what engagement was or wasn't there around the metrics. There was a question over there perhaps? Go. Yeah.

Audience Member: I know you said you have metrics that go up with happiness. Why didn't you do one that goes down, just to make sure that it backs up the data? You know, like if I'm happy and these things go up, there's something that also goes down. I don't know if it's useful, but that was like, if correlation—

Isaac: Yeah, anything that tells me things that I do or don't know, like anything that reflects reality is useful, honestly. And if there are things that show a strong, predictable, even negative correlation, that's fine too. Like if there's something that's consistently tearing down happiness, I would like to know about that thing. Oh, I think I might have misheard the original question.

Audience Member: Why don't you have one?

Isaac: Why don't I have one?

Audience Member: Yeah, you have ones that strongly correlate, but yeah.

Isaac: Yeah. Oh, I gotcha, I gotcha.

Audience Member: Have a strong correlation. Why not keep one which has the strong—

Isaac: Yep, makes sense. If you have an idea of what that would be, I would love to hear about it. I'm not sure what that would be.

Audience Member: Rage.

Isaac: Rage?

Audience Member: Rage.

Isaac: As rage goes up. Okay. All right. I know some people who get off on that, though, so I'm not sure about that. No, that is a good point though, and if something like that exists, I would love to toss it in. Also, part of the metric selection was trying to keep things as concise as possible. The original list had like eight or nine metrics, and we did some UI stuff to try and make that appear less imposing. But distilling it to the core of what somebody cares about week to week is something we try very hard to do. If there's something that makes sense to throw in, absolutely, let's talk about it. This is a very flexible thing. It's already changed once, it's gonna change again, and it's my intent that this thing will evolve over time in whatever way makes the most sense for the people who work here. Okay. Anything else? Yes?

Audience Member: You mentioned people would skip questions a lot. What's the general response rate, either for Heartbeat as a whole or for the individual questions?

Isaac: There's more stuff I want to do around sorting out what people's behaviors are once they actually open the page. There's an opportunity to see how long it takes people to fill out the thing and where they actually drop off, and none of that is measured right now. The response rate from just the emails is around 60% for the entire organization right now. There are 109 people who get surveyed week to week. A lot of you have gotten one-off Slack messages from me saying, "Hey, why haven't you filled this out yet?" And people respond more quickly to that than they do to email. We'll see what makes the most sense. I want to strike a balance between getting what data you guys are willing to contribute and not annoying the shit out of you. So we'll see how that works out. That's definitely for discussion over the next couple of weeks, as I'm not gonna be here anymore. But yeah, we'll see. Our response rate was actually a lot higher than I expected. I had conversations with HR prior to launching this thing and afterwards, and they were more surprised than I was that the response rate was as high as it was. We actually got 90% (not 99, 90%) the week we presented, which was fantastic. That's way higher than any survey they've ever done.

Blake: That's why they were surprised.

Isaac: I know, exactly. I was kind of like, yeah. Sir?

Audience Member: I might have missed this, but did you run factor analysis on—

Isaac: Nope. Would you like to? Absolutely.

Audience Member: I'm just wondering.

Isaac: Yeah, come on in and join the project. There are a lot of areas in which I know some things, but not enough; with Heartbeat, I know just enough to actually get this thing done. So that kind of thing would be great. I still need to talk to the analytics team to see if they want to play. Yo?

Blake: Well, sorry, I didn't mean to interrupt, but I was gonna say, along those lines, sort of adding onto that: obviously, you and I have discussed a number of integrations, things outside of Heartbeat that we could correlate its data to.

Isaac: Yeah.

Blake: Maybe you could enumerate some of those ideas that have been tossed around, because I think that's maybe where some of the most interesting stuff happens. Like, can we correlate this to discrete events that we're aware of, that kind of thing. So if you could talk about integration ideas that people have discussed with you.

Isaac: Sure, sure. One of the cooler ones that's come up so far is git tagging, and I'm excited to see what that one could actually look like. We've talked in the past about using tags on git commits, if only for audit purposes: I need to see everything that went to FCA, here are all the commits tagged that way, and now we know these things. We can also use those tags as sentiment signals, to sort out what developers are actually writing code for, on a project level as well. Like I mentioned before: what does this code actually have to do with anything? And if we correlate a spike in FCA tags with a precipitous drop in happiness, now we know something about developer behavior and how it's affecting their week.

So that's a thing we could do. We could tie it into Pivotal, do some natural language analysis on what's happening in Slack. There are a bunch of opportunities to tie things together, because ultimately we're just describing reality, right? We're trying to analyze all the things that exist and pull out insights that are less obvious to those of us who have a single-point perspective in our heads. The more ways we find to analyze what is real and what actually exists, and the more we tie these things together, the bigger the opportunity to make things better for everybody who works here and, if I'm being wildly idealistic, everybody who works in programming and knowledge work. There are really cool things that can happen here, and those kinds of integrations are going to be a big part of what actually makes that work, I think.

So, anybody else? Sweet. All right, thank you all very much.
