(Firefox users: upvote this if you want support on Firefox for Android: https://bugzilla.mozilla.org/show_bug.cgi?id=908224)
We just pushed an update for our Android app that adds the ability to report on time you spend browsing on your phone or tablet. Get it here:
To do this, we needed to use Android’s Accessibility services, which require an elevated privilege you will need to manually enable. Our app will walk you through this when you tap the “Enable website logging” option, but here is a brief explanation of the process:
1) Open the RescueTime app and tap the settings button (the gear icon). Tap the “Enable website logging” option. If the service still needs to be enabled, this will automatically take you to the system Accessibility Settings screen.
2) Find RescueTime in the Services list on the Accessibility Settings screen and select it. On older devices you may already see an on/off switch for RescueTime here; just select On and you are done.
3) On newer devices, tapping it opens a screen for enabling the RescueTime service, with a description of the service. Tap “On” to enable it. This automatically signals RescueTime to begin looking for site info in browsers.
4) Achieve success! Supported browsers are: the stock Android browser (just called “Browser”), the Nexus series stock browser (a version of Chrome), Chrome (the version in the app store), Chrome Beta, and Dolphin. Not supported: Firefox and Dolphin Mini.
A really quick sunset, the kind you see in the tropics. REALLY quick. I’m thinking: tomorrow. In this case, it’s the kind of sunset after which no new data will be accepted from the old client apps.
We have new plugins for both Firefox and Chrome that replace the old. They have been out for quite a while now, and the old one has been de-listed for a long time. Here’s where the new one is (links to extension galleries):
I imagine this affects no actual person, only zombie systems that enjoy harassing our site, but if you are a person, or a sensitive “good” zombie, currently using the old plugin, please switch to the new one.
If you are an old plugin user, you can follow these steps and keep your old data:
1) Open the full dashboard on our site from the plugin: https://www.rescuetime.com/dashboard
2) Click “Settings” (top right), set an email address for yourself, and add a password
3) Delete the old plugin from your add-ons/extensions list
4) Add the new one https://www.rescuetime.com/browser-plugin and register using that email address
To all Kindle Fire users: just a quick post to let you all know you can now get RescueTime for Android directly from the Amazon Appstore without having to go through alternative marketplace hoops.
RescueTime for Android works by noting how long you spend in your mobile apps and phone calls, reporting back to you your efficiency score, top distractions and categories right on your mobile device. There is a handy stopwatch tool for manually tracking things like meetings and exercise, and you can set a productivity score for each activity as you log it.
If you also have the RescueTime desktop application installed, you’ll be able to see your mobile time right alongside your other logged time:
Here’s the listing: RescueTime for Kindle Fire
Of course, users of non-Kindle Android devices can still get RescueTime for Android from the Google Play store.
After many requests from customers, especially those outside the US, we are happy to announce that we now support subscribing to our premium service using PayPal.
The PayPal payment option is available at signup or from the billing page for existing accounts. You can upgrade from a free plan or convert from credit card payment; all account transitions should be supported.
Teams can pay using PayPal as well. Because the subscription is managed by PayPal and requires your approval for each change, adding or removing seats takes a few more steps than with a regular credit card, but any plan is supported.
New users: Get RescueTime using PayPal
Existing users: Upgrade RescueTime using PayPal
Our job was to find a long-term, scalable solution to the problem of finding the activities that match your keyword search. This post covers the technology involved. Read about product features and new capabilities here.
It turns out that search in RescueTime is a surprisingly complicated problem, for the simple fact that a typical search prioritizes ranked relevance: it’s OK for the engine to stream results to you, and it’s OK for the search to be incomplete, as long as the ranking bubbles up the best matches. It’s even OK, sometimes, for it to be probabilistic or best-guess in nature. Generally speaking, you are looking for something small (some words) in something large (a web page).
Our challenge is that while the user experience semantically matches “search”, what you really need is a complete result set, combining all relevant reporting filters, of activities that match your requested search expression. It should produce repeatable results, assuming no new data has arrived. It should be updatable in real time as your new data comes in (roughly every 3 minutes). It should be fine if every record matches or zero records match; there can be no cap on the number of matches. All this, for about 100-400 logs of time data per user per day across many tens of thousands of users. The longer a user is with us, the larger the list of activities to match against, just for that user. The list of unique activities across our users is well over 1 billion. We need to be able to completely rebuild activity indexes at will, in near real time, to support application improvements. Yet cost needs to scale at worst linearly as a fraction of the rest of our products’ demands, and speed needs to remain constant or better.
All of these requirements eventually killed our previous attempts to use industry-standard search models, first Lucene/Solr and then Sphinx. Both held up for a period of time, but both were fundamentally mismatched in the assumptions they make: they expect a relatively static, ranked-match, document-based search model. Shoehorning them into our needs required operational spaghetti and some pretty ugly reporting query structures.
Our old search platform may have looked like this and required similar engineer attention, but it didn’t sound as good.
Enter MySQL fulltext Boolean search. First, there is the critical advantage of being *inside* and native to our database platform in a way that even Sphinx with its plugin can’t match. This allows for more integrated and simpler reporting queries: the step of matching your search expression no longer has to be isolated from the reporting query that depends on it (Sphinx could approximate this with its plugin, but not quite). Second, in Boolean search mode, MySQL provides an unlimited result set (no cap on matches). Additionally, there is less duplication of supporting data, since the search can operate entirely in the same instance as the source data; this value is not to be underestimated, for all the inherent caching it leverages. Operationally, it is far easier to dynamically and programmatically add, destroy, and rebuild indexes, since to the operator they behave like tables with normal indexes.
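To make this concrete, here is a minimal sketch of Boolean-mode fulltext search. The table and column names are hypothetical, not our actual schema; note that in MySQL of this era, FULLTEXT indexes require MyISAM tables (InnoDB support arrived in MySQL 5.6):

```sql
-- Hypothetical activities table with a fulltext index over its text columns.
CREATE TABLE activities (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255),
  detail TEXT,
  FULLTEXT KEY ft_activities (name, detail)
) ENGINE=MyISAM;

-- Boolean mode returns every matching row, with no relevance cap:
-- bare words default to OR, '+' requires a word, '-' excludes one.
SELECT id, name
FROM activities
WHERE MATCH (name, detail)
      AGAINST ('piranha -wikipedia' IN BOOLEAN MODE);
```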
But for performance, the most critical option it offered was a way to fluidly and transparently provide per-user-account search indexes, which lets our performance remain consistent despite constant multi-dimensional growth (new users plus existing users’ accruing time data). This isolation-of-index solution would have been theoretically possible with the other options, but horribly unwieldy and in need of huge supporting operational code. Second, it provides a clear way to constrain the size of keyword indexes: we know from your requested report that you can only possibly care about activities that fall in the particular time range you requested, and this helps both in index partitioning and in the submitted report query itself, especially in the amount of memory that must be held to build the final results.

A huge benefit of this known maximum scope for the searchable data is that at any time we can intelligently but programmatically throw away or update whatever dynamic table plus index intersects the requested scope, rather than the entire source tables, and rebuild it in real time for a minor speed penalty (0.1 to 3 sec for a rebuilding search vs < 0.01 sec otherwise). Any subsequent search request that remains a subset of the most recently persisted scope can simply reuse the current index, at the < 0.01 sec speed. We can play with permitted scope expansion to tune for speed. Furthermore, any sharding of accounts across new instances allows the search data to naturally follow, or be rebuilt in line with, the same scaling decision that drove the sharding to begin with; there is no separate stack that has to follow the shard logic.
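The scoped-index idea can be sketched like this (all names here are illustrative; the real scope bookkeeping lives in application code). A per-account, time-bounded table is created or refreshed on demand, and its fulltext index is rebuilt simply by repopulating the table, because the index is part of the table definition:

```sql
-- Rebuild the search scope for one account and one requested time range.
DROP TABLE IF EXISTS search_scope_account_42;

CREATE TABLE search_scope_account_42 (
  things_id INT NOT NULL,
  a_text_field VARCHAR(255),
  another_text_field VARCHAR(255),
  FULLTEXT KEY ft_scope (a_text_field, another_text_field)
) ENGINE=MyISAM;

-- Populate only the intersection of the request with this account's data;
-- any later request whose range is a subset of this one reuses the table.
INSERT INTO search_scope_account_42 (things_id, a_text_field, another_text_field)
SELECT id, a_text_field, another_text_field
FROM source_activities
WHERE account_id = 42
  AND logged_at >= '2012-10-01'
  AND logged_at <  '2012-11-01';
```

Widening the permitted scope (say, always materializing a whole month even when a week is requested) trades a slightly slower rebuild for more subsequent cache hits.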
Check out some example code for sneaking a search result into a series of joins rather than leaving it hanging out in the WHERE clause. Here it can be treated just like any other join, and like any other filter on your pivot reporting.
-- check dynamically_maintained_text_and_things_with_controllable_scope:
--   if it exists and is up to date, continue;
--   else, create it if needed, and
--   push any missing scope by intersecting the requested scope with the existing one
-- (the fulltext index itself is maintained by the table definition)
SELECT * FROM things
INNER JOIN other_things ON things.id = other_things.thing_id
-- begin search acting like a join
INNER JOIN (
  SELECT things_id
  FROM dynamically_maintained_text_and_things_with_controllable_scope AS things_search_data
  WHERE MATCH (things_search_data.a_text_field, things_search_data.another_text_field)
        AGAINST ('search words here -there' IN BOOLEAN MODE)
) AS things_result_table
  ON things_result_table.things_id = other_things.thing_id
-- end search acting like a join
WHERE other_things.limiting_characteristic = a_factor;
We’re using the table alias for the search data there because it allows you to chain multiple searches, one after the other (a search inside a search), against the same source data by adjusting the table alias.
Engineers looking to provide high-performance, easily maintainable search capabilities should carefully consider MySQL fulltext search options if they are on a recent version of MySQL. This is especially true if you are trying to manage a search index of billions of tiny items based on database text columns that mutate very quickly, and can benefit from programmatically created and maintained indexes. For example, it occurs to me that Twitter could use this model in very much the same way we do to provide some powerful realtime search indexes for targeted subsets of accounts or hashtags.
In our case, we were able to reduce our search-related platform costs by about 60-70% and significantly reduce our operational pain, while delivering vastly improved performance and eliminating that “well, it works now but we’ll just re-architect next year” problem.
Our spaghetti search code and virtual spaghetti cabling has now been re-factored into something tastier, ready for your consumption.
Build it and they will come? Performant Search brings Flexible Reports Part 1: Key Word Filtering works!
Posted: November 7, 2012
Our job was to find a long-term, scalable solution to the problem of Searchable Time. This post discusses our search capability and some ways to use it, now that we have reliable and speedy access to this feature. A follow-up post will present the technology chosen, for those interested.
RescueTime has three features that depend on what we are calling “search”. I will present two of them here: using keywords and expressions as a reporting filter with the “Search” field, and the Custom Report module (the third is “hints” in the project time-entry interface).
I’ve been putting “search” in quotes (though I’ll stop that affectation now) because what we’re doing here is a bit different than a traditional Google-style search. We’re giving you a way to see a view of your RescueTime history across any span of time you choose, pivoted on your perspective of interest, e.g. Categories or Activity Details or Productivity, for any activity we find that matches your search request. A “Custom Report” is just a way to save a search query for repeated use. But what does this all mean?
If you take a moment to think about it, this filtering can be very powerful. If you pick a good set of keywords, possibly with some tweaking via logical expressions (more on that later), you can get a fascinating view across your history, regardless of category, productivity, or other classification, focused in high resolution on a particular project, client, or other theme that might appear in many different applications and websites. How much time did you spend dealing with “John”? Or, what is my pattern of time spent in a console versus my text editor (“terminal iterm aquamacs sublime vim”)?
Consider your document names, folder names, email addresses, chat identities, and websites as potential members of a search expression when building these reports. The search engine also understands logical AND and NOT and nesting. The default relationship between words is OR.
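For illustration, a few hypothetical expressions (the exact terms are made up, and the supported syntax may evolve):

```text
john                             any activity mentioning "john"
terminal iterm aquamacs vim      words default to OR: any one matches
piranha NOT wikipedia            project time, minus Wikipedia reading
piranha AND (email OR chat)      nesting narrows to project communication
```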
Let’s consider another example: How much did the last mini-release cost us?
You’ve got a team working on a project codenamed “Piranha”. This name appears in code filenames and directories, or Eclipse project names. It appears, with a little discipline, in your email subjects. And your support ticketing and requirements tracking system. And your marketing material’s files and web pages. And your internal chat group. And your meetings entered via the offline time tracker. You get the idea: we can give a total time cost of this project, with 0 (zero) data entry across your entire organization. Well, plus any time your team spent learning about piranhas on Wikipedia (pick smart project names for best results, and use logical operators to help out, e.g. “piranha NOT wikipedia NOT vimeo”). You can then save this as a Custom Report for ongoing metrics, and side-by-side comparison with other ongoing custom reports.
Thank you to all our customers for sticking with us and giving feedback during the iteration of this slightly magical tool. We think search is finally fully operational.
I couldn’t resist a quick post noting The Onion’s entrée into at-the-desk analytics…
Unfortunately, since he wasn’t running RescueTime he couldn’t prove it or share it with posterity. How many of you have bested this stellar performance today?