Come Join us at the Quantified Self 2015 Conference!

We’re excited to be sponsoring the Quantified Self 2015 Conference & Expo, June 18-20 in San Francisco. If you haven’t been to a QS Conference before, they’re awesome (more on that below). This year, we’re running a collaborative data tracking project that should be a lot of fun. We’ll be exploring the combined digital activities of people who have opted in to a special QS 2015 group, looking for interesting statistics and visualizations that show how the group as a whole spends its time at a conference like Quantified Self 2015. These reports will be shown on a live-updating display at our exhibitor table, and group members will receive a special report showing how their individual time contributed to the larger group.

If you’re attending the conference, we’d love for you to join the experiment! Sign up now.

You should consider attending if you are in the area. It’s an amazing gathering of passionate self-trackers from around the world who have come to share their stories about what has and hasn’t worked for them as they’ve tried to improve their lives through data. Check out the trailer for this year’s conference:

We’ve got free tickets to the Expo! (while supplies last!)

This year, the conference is changing up its usual format and turning the last day into a public exposition, which will be a great way for people who are more casually interested in the Quantified Self to learn more. The Expo will be a day of “how-tos,” with packed sessions on how to track, learn, and reach personal goals using methods emerging from the Quantified Self movement.

We have a limited number of free passes to share with the first 50 people who register. Please follow this link to register, and use the code rescuetimefree. Just make sure you swing by our table and say hello!

Can’t make it to QS 2015? Here are some videos!

The conference videos usually go up a few weeks after they are filmed. In the meantime, here are some of our favorite talks from previous years.

David El Achkar on Tracking Time
David uses a homegrown, spreadsheet-based system for tracking his time. It’s intensive, but he is able to learn some really interesting things about himself.

Laurie Frick: Experiments in Self-tracking
Laurie is an amazing artist whose work is based on her self-tracking experiments (she currently has a show at this gallery in NYC, if you happen to be on that side of the country). Here she is talking about her process and how her self-tracking experiments inform her art.

Paul LaFontaine: Heart Rate Variability and Flow
Paul examines his heart-rate variability to understand his work efficiency, especially getting into a state of flow, where he’s absolutely absorbed and focused on what he’s working on.

Steven Jonas: Memorizing My Daybook
Steven experimented with spaced repetition to boost his memory with some impressive results.

Robby Macdonnell: Tracking 8,300 Screen Hours
Finally, (and a bit of a shameless plug) here is a video of me talking about what I’ve learned from several years tracking my time with RescueTime.

Can we skip the “Quantified” part and communicate directly to senses and emotions?

Several weeks ago, I stumbled on this video of Linda Stone speaking about what she calls the Essential Self, which is a way of thinking about personal data and how people should interact with it at a sensory and emotional level. I was really intrigued by the idea. Essential Self technologies are, in her words:

Passive, ambient, non-invasive technologies are emerging as tools to help support our Essential Self. Some of these technologies work with light, music, or vibration to support “flow-like” states.  We can use these technologies as “prosthetics for feeling” — using them is about experiencing versus tracking. Some technologies support more optimal breathing practices. Essential Self technologies might connect us more directly to our limbic system, bypassing the “thinking mind,” to support our Essential Self.

This is a somewhat different perspective from that of the Quantified Self movement, which emphasizes analysis of and reflection on personal data. I’m generally on Team QS in this regard. Numbers are good, right? The more data you have about something, the more opportunities you have to understand yourself at a deeper level. Right?!

Still, there’s something I really like about the idea of bypassing the analysis and skipping straight to the benefits that hopefully result from Quantified Self-flavored reflection. Digging through ever-growing piles of data searching for meaning has its drawbacks. Mainly, not everyone wants to be a data scientist. It can be daunting to learn how to think about your life in such a clinical context, both from a skills perspective (learning statistical analysis) and simply because it can feel really unnatural to think of yourself as a bunch of rows in a spreadsheet when that obviously represents only a sliver of who you actually are. Also, I LIVE this stuff, and I still find it difficult to carve out the time to dive into my personal datasets and do some proper exploration (although it’s one of the most satisfying things when I do manage to find the time). I think this is one of the reasons many self-tracking products fail to stick with people. They’re neat, but not enough to justify the effort of continuing to use them.

In many ways, I see the ideas around the Essential Self (as far as I understand them) as a progression of the Quantified Self, or at least something that is layered on top of QS. They attempt to sidestep the analysis and focus on creating a meaningful connection with the user at a purely emotional or sensory level. I think it’s an exciting idea, and really starts sounding like the future. You’re not building tools that people use to methodically figure things out. You’re giving them something that feels like super powers.

Here are some examples:

  • You sleep better than your co-workers because f.lux helps you avoid disrupting your circadian rhythms while you work.
  • You have a magical sense of direction because you wear a North Paw anklet.
  • Your posture is fantastic thanks to the Lumoback you’re wearing that nudges you to sit up straight.

While watching that video, my brain started racing with thoughts about RescueTime in this context. Could I have an ambient sense of how my work day is going without constantly disrupting myself to check some numbers? Often, the exercise of pausing what I’m doing (however briefly), checking my stats, and then interpreting what they mean is counterproductive to the state of flow that I’m in.

With an Essential Self perspective in mind, I hacked together an alternative that uses a colored LED to keep me persistently aware of how productive my online activities have been. It fades between bright blue for productive activities and red for distracting ones. Here’s what it looks like:



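The color fade itself is only a few lines of logic. Here’s a minimal sketch of the mapping, with a hypothetical productivity score in the range -1.0 (distracting) to 1.0 (productive); the score source and the actual LED driver are stand-ins you’d swap for your own:

```python
# Sketch of the color-mapping idea: blend an RGB LED between blue
# (productive) and red (distracting) based on a productivity score.
# The score range and LED interface are illustrative assumptions,
# not the exact code driving the light in the photo.

def score_to_rgb(score):
    """Map a productivity score in [-1.0, 1.0] to an (r, g, b) tuple.

    -1.0 (fully distracting) -> bright red, 1.0 (fully productive)
    -> bright blue, with a linear fade in between.
    """
    # Clamp, then normalize to [0.0, 1.0]: 0 = distracting, 1 = productive
    t = (max(-1.0, min(1.0, score)) + 1.0) / 2.0
    red = int(round(255 * (1.0 - t)))
    blue = int(round(255 * t))
    return (red, 0, blue)
```

A loop would poll the score every few minutes and write `score_to_rgb(score)` out to the LED.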
It’s a neat first attempt, but I don’t think it totally succeeds. There are a few reasons why.

The experience of a real-time monitor felt a little like having a personal trainer. That’s really awesome sometimes, but imagine if you had a personal trainer staring over your shoulder at all times. I felt an uneasy pressure when the light would fade to red.

It was too “right now,” and ignored earlier parts of my day. I oddly found myself resenting the red light, especially later in the day after I’d already gotten a lot of work done. I think the problem was that the interval was too short; perhaps it should take the overall productivity pulse for the current day as some sort of weighting mechanism.
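That weighting idea could be as simple as blending the last interval’s score with the day-so-far average. A sketch, with the weight purely a guess to be tuned:

```python
# Blend the most recent interval's productivity score with the running
# score for the whole day, so one distracted stretch late in a productive
# day doesn't swing the light all the way to red. The 0.6 weight is an
# arbitrary starting point, not a measured value.

def blended_pulse(recent_score, daily_score, daily_weight=0.6):
    """Weighted average of the day-so-far score and the latest interval.

    Both scores are assumed to be in [-1.0, 1.0].
    """
    return daily_weight * daily_score + (1.0 - daily_weight) * recent_score
```

With this, a fully distracting interval (-1.0) at the end of a fully productive day (1.0) still leaves the light on the blue side.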

The red light feels like a slap on the wrist. I’m not big on things that wag a finger in my face when I’m doing a bad job; I much prefer positive reinforcement. I may experiment with other color schemes that prioritize communicating a state of focus, perhaps using brightness instead of color.

The good news is that some of those objections can be addressed with a relatively simple design iteration. So I’ll keep investigating and see if I can make it feel better.

But in a way, this still seems like QS-style reporting. I’m swapping colors for numbers, but I haven’t fundamentally ventured outside of the realm of what most Quantified Self apps attempt to do. One thought I’m curious to explore is seeing if I can pulse the light in a way that encourages a calm breathing pattern when in a state of focus (addressing another idea from Linda Stone, email apnea). In that case, the light would become something that not only informs you about a state of focus, but actively takes a role in supporting you while you’re in it.
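For the breathing idea, the light’s brightness could follow a slow sine wave paced at a calm breathing rate. A minimal sketch (the six-breaths-per-minute rate is a common relaxation-breathing figure, not something from the post):

```python
# Pulse the LED's brightness on a slow sine wave to pace breathing.
# The rate of 6 breaths/minute is an assumed calm-breathing target.

import math

def breathing_brightness(t_seconds, breaths_per_minute=6.0):
    """Brightness in [0.0, 1.0] at time t, following one slow sine cycle
    per breath: dark at the start of each cycle, fully bright halfway."""
    cycle = 60.0 / breaths_per_minute    # seconds per breath (10s at 6 bpm)
    phase = (t_seconds % cycle) / cycle  # position within current breath
    return (1.0 - math.cos(2 * math.pi * phase)) / 2.0
```

A driver loop would call this with the current time and scale the LED’s output by the result whenever the focus state is detected.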

This is still very much a nights and weekends project for me, but I think it’s an interesting idea and wanted to share. What do you think about an ambient monitor to help you stay focused and productive? Or what about technology’s ability to communicate with you directly at an emotional or sensory level? Have you seen any other examples of this that you really like? I’d love to hear your thoughts in the comments.

Build it and they will come? Performant Search Part 2: The Technology Sauce For Better Spaghetti

Our job was to find a long-term, scalable solution to the problem of finding the activities that match your keyword search. This post covers the technology involved. Read about the product features and new capabilities here.

It turns out that search in RescueTime is a surprisingly complicated problem, for the simple fact that a typical search engine prioritizes ranked relevance: it’s OK for the engine to stream results to you, and it’s OK for the search to be incomplete, as long as the ranking bubbles the best matches to the top. It’s sometimes even OK for it to be probabilistic or best-guess in nature. Generally speaking, you are looking for something small (some words) in something large (a web page).

Our challenge is that while the user experience semantically matches “search,” what you really need is a complete result set, combining all relevant reporting filters, of activities that match your requested search expression. It should produce repeatable results, assuming no new data has arrived. It should be updatable in real time as your new data comes in (roughly every 3 minutes). It should be OK for every record or zero records to match; there can be no cap on the number of matches. All this for about 100-400 logs of time data per user per day, across many tens of thousands of users. The longer a user is with us, the larger the list of activities to match against, just for that user; the list of unique activities across our users is well over 1 billion. We should be able to completely rebuild indexes of activities at will, in near real time, to support application improvements. Yet cost needs to scale at worst linearly as a fraction of the rest of our products’ demands, and speed needs to remain constant or better.

All of these characteristics eventually killed our previous attempts to use industry-standard search models, based first on Lucene/Solr and later on Sphinx. Both held up for a period of time, but both were fundamentally flawed solutions because of the assumptions they make: they expect a relatively static, ranked-match, document-based search model. Shoehorning them into our needs required operational spaghetti and some pretty ugly reporting query structures.

Our old search platform may have looked like this and required similar engineer attention, but it didn’t sound as good.

Enter MySQL fulltext Boolean search. First, there is the critical advantage of being *inside* and native to our database platform in a way that even Sphinx with its plugin can’t be. This allows for more integrated and simpler reporting queries: no longer does the step of matching your search expression have to be isolated from the reporting query that depends on it (Sphinx could have done this, sort of, with the plugin, but not really the same). Second, in Boolean search mode, MySQL provides an unlimited result set (no cap on results). Additionally, there is less duplication of supporting data, since it can operate entirely in the same instance as the source data; this value is not to be underestimated, given all the inherent caching it leverages. Operationally, it is far easier to dynamically and programmatically add, destroy, and rebuild indexes, since to the operator they behave like tables with normal indexes.

But for performance, the most critical option it offered was a way to fluidly and transparently provide per-user-account search indexes, which lets our performance remain consistent despite constant multi-dimensional growth (new users plus existing users’ accruing time data). This isolation-of-index approach would have been theoretically possible with the other options, but horribly unwieldy and in need of huge amounts of supporting operational code. Second, it provides a clear way to constrain the size of keyword indexes: we know from your requested report that you could only possibly care about activities that fall within the particular time range you requested, and this is valuable both for index partitioning and for the submitted report query itself, especially in the amount of memory that must be held to build final results. A huge benefit of this known maximum scope for the searchable data is that at any time we can intelligently but programmatically throw away or update whatever dynamic table and index intersects the requested scope, rather than the entire source tables, and rebuild it in real time for a minor speed penalty (< 0.01 sec vs. 0.1 to 3 sec for your search). Any subsequent search request that remains a subset of the most recently persisted scope can just reuse the current index at the < 0.01 sec speed. We can play with permitted scope expansion to tune for speed. Furthermore, any sharding of accounts across new instances allows the search data to naturally follow, or be rebuilt inline with, the same scaling decision that drove the sharding to begin with; there is no separate stack that has to follow the shard logic.
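The reuse-or-rebuild decision described above can be sketched in a few lines. This is an illustrative model of the logic, not RescueTime’s actual code; the names and tuple shapes are hypothetical:

```python
# Illustrative sketch of the per-account index scope decision: reuse the
# persisted index when the requested time range is a subset of what's
# already indexed, otherwise rebuild over an expanded scope so future
# subset requests hit the fast path. Names are hypothetical.

def plan_search_index(requested, persisted):
    """Return ('reuse' | 'rebuild', scope) for a requested (start, end)
    time range, given the persisted index scope (or None if no index)."""
    if persisted is not None and persisted[0] <= requested[0] and requested[1] <= persisted[1]:
        # Fast path: request is fully covered by the existing index
        return ("reuse", persisted)
    if persisted is None:
        return ("rebuild", requested)
    # Expand to the union of old and new scope before rebuilding
    new_scope = (min(requested[0], persisted[0]), max(requested[1], persisted[1]))
    return ("rebuild", new_scope)
```

The “permitted scope expansion” tuning mentioned above would govern how aggressively the rebuild branch widens `new_scope` beyond the strict union.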

Check out some example code for sneaking a search result into a series of joins rather than having it hang out in the WHERE clause. Here it can be treated just like any other join, and like any other filter on your pivot reporting.

-- check dynamically_maintained_text_and_things_with_controllable_scope
-- if it exists and is up to date, continue;
-- else, create it if needed and
-- push missing scope by intersecting request scope with existing
-- index is maintained by table definition

SELECT * FROM things
INNER JOIN other_things ON = other_things.thing_id
-- begin search acting like a join
INNER JOIN (
    SELECT thing_id
    FROM dynamically_maintained_text_and_things_with_controllable_scope AS things_search_data
    WHERE MATCH (things_search_data.a_text_field, things_search_data.another_text_field)
        AGAINST ('search words here -there' IN BOOLEAN MODE)
) things_result_table
    ON things_result_table.thing_id = other_things.thing_id
-- end search acting like a join
WHERE other_things.limiting_characteristic = a_factor;

We’re using a table alias for the search data there because it allows you to chain multiple searches, one after the other (a search inside a search), against the same source data by adjusting the table alias.

Engineers looking to provide high-performance, easily maintainable search capabilities should carefully consider MySQL’s fulltext search options if they are on a recent version of the product. This is especially true if you are trying to manage a search index of billions of tiny items based on database text columns that mutate very quickly, and would benefit from programmatically created and maintained indexes. For example, it occurs to me that Twitter could use this model in much the same way we do to provide powerful real-time search indexes for targeted subsets of accounts or hashtags.

In our case, we were able to reduce our search-related platform costs by about 60-70% and significantly reduce our operational pain, all while delivering vastly improved performance and eliminating that “well, it works now, but we’ll just re-architect next year” problem.

Our spaghetti search code and virtual spaghetti cabling have now been refactored into something tastier, ready for your consumption.

creative commons credit to Eun Byeol on flickr

Obama racks up karma and blows up servers with his Reddit AMA

President Obama made an appearance on Reddit this last Wednesday with an AMA (Ask Me Anything), answering questions from users for thirty minutes. At the peak of the event, over 198,000 users were attempting to view the President’s AMA.

In just a couple of hours, the President gained over 17K points of comment karma. Based on stats from the RescueTime user base, he also had a pretty significant impact on Reddit’s server load (and other users’ page load times).

We saw that many users spent more time viewing Reddit’s heavy load outage page than they actually spent on the AMA page itself.

Reddit's Heavy Load Outage Page

We looked at the aggregate users’ Reddit time, broken down between viewing the outage page and successfully viewing the AMA page. Here’s what that data looks like:

Graph of time spent on Outage page vs Obama's AMA page

The folks at Reddit posted a blog entry today giving some other great statistics about the event. They added a total of 60 servers to try to keep up with demand, but still had problems because their load balancers couldn’t keep up.

Overall, this was a great opportunity for Reddit and the community at large. I would love to see ongoing AMA sessions with the President on a monthly basis (regardless of who that ends up being!). Perhaps as an addition to the weekly address?

How we use RescueTime, at RescueTime.

We built RescueTime because we thought it should be easier to make sure we’re spending our time the way we want to. It has opened up a whole new world of data for us, and we wanted to share some of the ways we make use of it around the office.

Forming a baseline lets us read the pulse of our team at a glance.

RescueTime lets us see how much time we’re spending on the computer, without having to keep time sheets or manual logs. By categorizing different applications and websites, we can get a pretty good sense of how much time we’re spending on productive stuff vs. unproductive stuff. That gets really cool when you have enough data for patterns to jump out. It also makes it really easy to see when something weird (not necessarily bad) happens.

Take the month of April, for example:

It’s clear that something is very different about the first week, and that something odd happened on the last day of the month. It turns out two of us were out of town during the first week, and on the last day we were scrambling to hit some end-of-the-month deadlines.

Working 9-5? Not us.

UPDATE: Here’s a post from our CEO explaining the “5-hr productive rule” in much better detail.

We completely got rid of set working hours. After looking at a couple of months of our data, we decided that 5 hours of productive time per day is a pretty good average. We set up a RescueTime alert to let us know when we’ve reached that 5-hour mark, and use that instead of a set hourly schedule. This flexibility works out really well for us (especially considering we’re a semi-distributed team). We still make sure there are a few hours each day when we’re all available at the same time, but beyond that it’s up to each person to decide when they want to work.

Meetings at exactly the right time.

We use RescueTime’s efficiency report and comparison report to figure out what times of day we’re most productive, and then never schedule a meeting on top of them. Since meetings can be a bit of a distraction anyway, we try to reserve them for the times of day when we’re already a little scattered to begin with.

Unfortunately, it’s not totally homogeneous; some people are more focused in the mornings and others in the afternoon. Having that extra context is still a huge help, though. (For example, I won’t go near a meeting on Tuesday afternoons, which is when I’m most focused.)

It’s not just for team-wide decisions, either.

Those are a few ways that RescueTime impacts our entire team. Individually, we use our RescueTime data in all sorts of ways. Here are a few highlights:

Robby (Product Development / Design):

“I use the time reports along with some metrics from other systems to figure out how long it takes me to do certain things. For instance, I can pretty easily tell that I spend just over 11 minutes on each customer support request I deal with (on average). I’d really like to bring that number down, and it’s easier to do now that I have a visible baseline.”

Joe (CEO / does a little bit of everything):

“I find it invaluable being able to know how long I’ve been working on a specific task. By being able to search for an individual document, like “linux_extended_info_grabber_sqlite.cpp”, I can see that I’ve spent 4 hours and 16 minutes so far this week coding that feature. In that same search, I can see how much time others in my group have spent on that same document. Being able to look back at this type of data is amazingly powerful for me. It helps me estimate times better, judge overall effort, and make better business decisions.”

Mark (Chief Architect):

“I find value in surprisingly specific ways. For example, sometimes I will compare time spent in terminal applications versus code editors to confirm or dispute the emotional feeling that I’m dragging and thrashing due to too much incremental testing (evinced by excessive terminal/shell time). And, probably unlike others, I’ll sometimes notice that my communications/email category is taking too little of my time, and force myself to re-engage some lagging communication efforts.”

Jason (Sales / Marketing):

“I love having the offline time prompt. It motivates me to keep working, and it allows me to enter valuable time spent on the phone or on Skype with customers. By categorizing all of my time, even unproductive time, it gives me a clear picture of how I’m performing that day versus my best day, and how many hours a day I average.”

It’s probably worth noting that anything in this post could be applied in either a team setting or for a single individual.

How are you using RescueTime?

Google Doodle Strikes Again! 5.35 million hours strummed

Happy birthday to guitar legend Les Paul, who would have been 96 today!

Google launched its Les Paul Tribute Doodle on Thursday, which allowed users to record and play back friends’ recordings, or just play along. Almost immediately you could hear the rumblings on Twitter and Facebook about this great musician’s effect on music and on many aspiring musicians; lots of people were doing their best Les Paul impersonations while Google entertained the masses. As one Twitter user said, “he’s the REAL guitar hero!”

Here’s what we saw on Thursday:

Les Paul Doodle by Google

We immediately saw tweets coming in for the trending tag #lespaul at a rate of 20 tweets/sec or more while writing this article. With every major online publication commenting on and recommending the Les Paul Doodle, traffic was way up and people kept talking all day!

Les Paul Google Doodle

Along with the excited buzz surrounding the Les Paul Doodle, there were plenty of tweets about the loss in productivity it was causing. We saw hundreds of tweets like this:

Tweet about Productivity

Last year we reported on the effect of Google’s playable Pac-Man Doodle, so as a follow-up we ran the numbers to see if the Les Paul Doodle consumed a similar amount of users’ time.

We looked at ~18,500 random RescueTime users who spent time on Google search pages. In previous time periods, users spent a very consistent 4.5 minutes (+/- 3 seconds) actively using Google search. When Google released the Les Paul Doodle, however, the average user spent 26 seconds MORE on Google search than in previous time periods. Users spent an average of 36 more seconds on last year’s Pac-Man Doodle, so you would think the Les Paul Doodle had less impact. Wrong. According to Wolfram Alpha and Alexa, Google’s daily unique visitor count is up to 740 million, versus 505 million last year.

  • Google’s Les Paul Doodle consumed an additional 5,350,789 hours of time, versus the 4,819,352 hours consumed by the Pac-Man Doodle
  • $133,769,725 is the dollar tally, if the average Google user has a COST of $25/hr (note that cost is 1.3-2.0× pay rate)
  • Users did not spend much more total time at their computers than in previous periods, but they did spend 10% more time on Google’s website than they typically would, meaning that the extra time at Google was stolen from other computer use

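For the curious, the arithmetic behind those bullets checks out in a few lines. The visitor counts above are rounded figures, so the computed hours land near (not exactly on) the reported totals, while the dollar tally follows exactly:

```python
# Back-of-the-envelope check of the Doodle totals, using the figures
# quoted in the post. Visitor counts are rounded approximations, so
# expect the hour totals to be close rather than exact.

SECONDS_PER_HOUR = 3600

# 740M daily visitors x 26 extra seconds each -> ~5.34M hours
approx_les_paul_hours = 740_000_000 * 26 / SECONDS_PER_HOUR

# Reported 5,350,789 hours x $25/hr cost -> the dollar tally
dollar_tally = 5_350_789 * 25

print(round(approx_les_paul_hours))  # within ~0.2% of the reported hours
print(dollar_tally)
```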
We are already looking forward to the next interactive Doodle and wondering how it will stack up.

About the data:
RescueTime provides a time management tool to allow individuals and businesses to track their time and attention to see where their days go (and to help them get more productive!). We have hundreds of millions of man hours of second-by-second attention data from hundreds of thousands of users around the world, tracking both inside and outside the browser. The data for this report was compiled from roughly 18,500 randomly selected Google users.

About our software:
If you want to see how productive you are vs the rest of our users, you should check out our service. We offer both individual and group plans (pricing starts at FREE).

Does the World Cup matter?

Evidently there are plenty of hooligans in my neighborhood looking for an excuse to start drinking and yelling at a TV around noon in my favorite pub. This was a little surprising to me, since I live in a yuppie downtown Seattle neighborhood full of software geeks and otherwise respectable people.

Now that it’s all over, I decided to see if there was a broader trend in RescueTime’s data. Time spent on the computer dropped about 4%, and productive time dropped a full 10%, here in the US on the day of our first game vs. England. More people than usual checked the news, which managed to grab a 5% bump despite the drop in total time. Evidently no one was watching the game on their computers, since Entertainment (including sports) stayed flat.

The effect was even more pronounced in the UK. Productive time dropped 13% and total time dropped 7%, and instead of reading about the upset in the news like their American counterparts, the English were apparently watching it live, with a 5% bump in Entertainment.

All that’s interesting, but that game took place on a Saturday, when most people aren’t supposed to be working anyway. When the US squeaked out a tie in the final minutes of their next match, against Slovenia on Friday, our American users spent a little more time than normal on the news, but it wasn’t enough to cause a significant change in productivity.

Here is a graph of all the days of the World Cup compared to a typical week* to help see whether there was a real trend here.

It’s obvious that productive time was consistently down during the entire World Cup. The US’s game dates are circled in red. Interestingly, you can see that after we were eliminated by Ghana, things picked up a bit, but still didn’t quite make it back to normal. This might be because we have more international users than US-based ones. Total time spent on computers was down 4% and productive time was down 3% over all the working days in the tournament.

There are a couple of other interesting points in that graph, particularly the 18% drops in productivity over Father’s Day and the Fourth of July weekend. People seemed to come back pretty slowly after the 4th, and didn’t manage to get back into full swing until the end of the week.

When you look at it from RescueTime’s perspective, it’s pretty clear that the World Cup does matter.

*A typical week is the average from the 28 days before the World Cup began (Memorial Day was tossed out).

RescueTime provides a time management tool to allow individuals and businesses to track their time and attention to see where their days go (and to help them get more productive!). We have hundreds of millions of man hours of second-by-second attention data from hundreds of thousands of users around the world, tracking in real time both inside and outside the browser.