Indaba in 2011: What we learned on the way to 3,000 completed jobs

Jonathan Eyler-Werve

January 09, 2012

This post was written for Global Integrity, as part of the Indaba fieldwork platform site. Original post here. (CC/by)

The Indaba fieldwork platform went live in September 2010 with the launch of fieldwork for the Global Integrity Report 2010. That project was published the following spring, and a dozen more projects have launched on the system since then. To date, roughly 3,000 assignments have been completed on Indaba.

During the past year, Indaba has moved from a good idea to a field-tested tool with strong user validation. Every organization that has completed a project on Indaba (Global Integrity, Transparency International UK, Publish What You Fund) is scaling up its use with new projects. I can’t think of a better metric for success than that.

Here’s a review of the last year, with some insight into our learning process and findings. We’re not big on glossy sales pitches — this post includes some fairly brutal self-analysis of our strengths and weaknesses. Your feedback is always welcome at info@getindaba.org. Thanks as always for your support and interest.

On a personal note, after ten wonderful years, I’ll be retiring from service at Global Integrity in 2012 to take a sabbatical and pursue projects in Chicago. You can follow my next steps at eylerwerve.com.

I leave the Indaba project in the capable hands of Monika Shepard, Nathaniel Heller and others at Global Integrity and our growing network of partners and contributors. Working with the Indaba team has been the highlight of a great run — thank you!

Jonathan Eyler-Werve

Director of Technology and Innovation

Global Integrity

 

Our process

Our thinking is evolving fast as Global Integrity settles into running a small tech startup within the larger organization. One lesson learned has been to resist grandiose feature expansion in order to focus on a lean, iterative, and user-oriented approach to upgrades. This was a good year for that: our Early Adopters program gave us a structured dialogue through which to constantly challenge our hypotheses about market fit.

While most of our initial ambitions remain intact, our focus is much sharper. In particular, we are very clear on two things:

1) Scorecards (blending text and datasets) with a workflow are what people can’t get elsewhere. This is Indaba’s core value proposition.

2) Our project install process is too slow. This needs to be less complex for project managers (fewer decisions to make) and less work for admins (data entry chores shared with more users; lower risk of harm from misconfigured settings; faster launch, test, and adjust cycles).

How we learn from users

We took the following steps to gather user input:

  • Field contributor surveys that captured three years of baseline experience from our previous-generation fieldwork tools and compared that to data from Indaba users.
  • On-site trainings and Q/A with partner project managers in the Philippines, Mexico, Kenya, UK, US, and other locations.
  • Several dozen webcast trainings and Q/A sessions with diverse populations of field contributors (for the record, American journalists are the most skeptical users).
  • On-site, multi-day project design sessions with partner organizations in Atlanta; London; Manila; New York; Port Moresby; and Washington, DC.
  • A day-long visioning session on future features, with internal Global Integrity admins and project managers in one room proposing and ranking options.
  • Year-round capture of requested features into system documentation.
  • Sharing an open plan office with Global Integrity project managers who are using Indaba every day.

Raymond June reviews data on Irish public policy from a Hawaii Internet cafe. (image: cc by/sa Raymond June)

 

What we learned (or confirmed) by talking to users

  • People like Indaba’s scorecards. Creating a structured, indicator-based analysis with peer review and workflow attached is the function that no other tool can deliver to users.
  • Project scale matters: Building a giant wooden deck? Get a nail gun. Hanging a picture? Get a hammer. Google Forms is a hammer. Indaba is (metaphorically) a nuclear-powered, auto-feeding, large-bore nail gun. It works best for large, repetitive projects.
  • Managing text document workflow at small to medium scale (fewer than 30 docs) isn’t the most valuable use of Indaba. If your project is small enough that its files fit in one folder, consider running it via email attachments, which is inherently more flexible, at some cost to security, stability, and manageability.
  • There’s little current interest in using Indaba to purely manage file uploads (photos, video, audio, “blob storage”). However…
  • There’s lots of interest in attaching files to a survey data point. For example: an Indaba researcher can now attach PDFs of legislation to individual scores in a policy scorecard, providing a reference document that is more permanent than a government website that might go offline at any time. We’re not aware of other Web-based tools that do this, although tools such as DocumentCloud can be useful in more journalistic projects. (A rough sketch of the underlying idea follows this list.)
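
To make that last point concrete, here is a minimal sketch of what “a file attached to a data point” could look like as a data structure. The names (Attachment, IndicatorScore, and the example fields) are hypothetical illustrations, not Indaba’s actual schema.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Attachment:
        """A supporting file pinned to a single indicator score."""
        filename: str          # e.g. "access_to_information_bill.pdf"
        content_type: str      # e.g. "application/pdf"
        storage_url: str       # where the uploaded copy lives
        note: str = ""         # why this file supports the score

    @dataclass
    class IndicatorScore:
        """One data point in a scorecard: score, comment, references."""
        indicator_id: str
        score: Optional[int]   # None until the researcher submits
        comment: str = ""
        references: str = ""   # the free-text "reference" field
        attachments: List[Attachment] = field(default_factory=list)

    # Usage: a researcher attaches the legislation that justifies a score,
    # so the evidence survives even if the government site goes offline.
    point = IndicatorScore(indicator_id="example-23", score=75)
    point.attachments.append(Attachment(
        filename="access_to_information_bill.pdf",
        content_type="application/pdf",
        storage_url="https://example.org/uploads/access_to_information_bill.pdf",
        note="Bill text as published at time of scoring",
    ))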

Does it work? — Observations on user experience

  • The “Fieldwork Manager” (runs fieldwork for remote teams and managers) emphasized usability and stability in our initial build. It has proved highly successful with end users.
  • The “Designer” (used by Indaba admins to configure projects) emphasized lowest possible cost over usability in our initial build. It has proven difficult to use and needs serious help before we can scale up the number of project deployments. It’s slow and is a source of risk, as misconfigured projects can impact system stability. This is partially addressed in Indaba 2012.
  • Trouble tickets from field staff are increasingly uncommon. We’re seeing one to two trouble reports a month from a user base of ~500 people active in any given month. Roughly 3,000 assignments have been completed on the platform, generating only ~30 trouble tickets from field contributors.
  • Internet censorship is a problem we can work around. Yemen’s state-owned telecom classifies Indaba as pornography. Timor Leste’s telecom blocks access for reasons unknown. In most cases the free Hotspot Shield VPN circumvents the local blocking.
  • More than half of our trouble tickets are resolved by updating field contributors to a supported web browser. We are becoming more comfortable requiring field contributors to update browsers, as the security considerations around 10-year-old browsers (Internet Explorer 6) make it a worthwhile battle to have with field contributors, regardless of Indaba’s needs.
  • Project managers outside of the Global Integrity office (i.e. Global Integrity’s local partners and outside groups using Indaba) have offered consistently positive feedback. We would actually prefer a bit more pushback, but mostly they just tell us they “love it.” As an interesting control case, Global Integrity Executive Director Nathaniel Heller facilitated a feedback session in early December 2011 between project managers at a large international NGO and their field contributors. The teams had recently completed a pilot research project, fielding 75 indicators across multiple government ministries in three countries. They used Survey Monkey to gather their pilot data, and the uniform feedback was, “We hated Survey Monkey.”
  • Project managers are at times overwhelmed by the options available at project launch. We are working to create simplified options based on the choices made on previous projects (“Do you want chocolate or vanilla?” instead of “Configure these 31 flavors of variables…”).
  • Global Integrity managers have reviewed ~50,000 scorecard data points on the platform with good results. They have a number of usability and feature requests, which are reflected in the upcoming Indaba 2012 build.
  • Global Integrity managers are less enthusiastic about using Indaba for editing text documents, as noted above. Recent workflows involved field contributors submitting text via Indaba, an offline editing process, and then inserting final text back into Indaba for peer review, approval, and publishing. This reflects the reality that mature text editing and versioning tools are widely available (Microsoft Word and Google Docs are pretty good at this), and we are not willing to replicate that functionality in Indaba.
  • Both of the initial outside groups to complete projects on Indaba (Publish What You Fund and TI-UK) are using Indaba for new projects within weeks of completing their original ones. Global Integrity continues to use Indaba for all data collection projects.
  • The publishing tools are less rigorously tested than the rest of the system, because there is a 4- to 12-month lag time between starting and publishing a project. This will change over the next six months as the first wave of projects is published. Global Integrity has successfully published two projects through Indaba to the web (the Global Integrity Report: 2010 and the Kenya City Integrity Report) with no significant challenges.

What we learned from other people’s projects

In addition to talking with our own users, we have been in contact with others developing tools in the same space.

On the input side: mobile data collection is well understood by Nokia Data Gathering, Ushahidi, Citivox, and Frontline SMS. We’re in contact with these groups and could, if needed, build links to Frontline SMS or Nokia Data Gathering as the first step of a workflow that combined mobile input with Web-based submission and review of data. We currently believe we should not build custom mobile input tools for Indaba, since it’s possible to link to very good existing tools.
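
To illustrate, here is a minimal sketch of the receiving end of such a bridge, assuming the SMS gateway can forward incoming messages to a URL, as many SMS tools can. The endpoint, field names, and the save_draft_submission() stub are all hypothetical; this is a sketch of the idea, not an integration Indaba ships.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    def save_draft_submission(sender: str, text: str) -> None:
        # Stand-in for whatever would file the message as a draft
        # assignment awaiting review in the Web-based workflow.
        print(f"draft from {sender}: {text}")

    class SmsWebhook(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length).decode("utf-8")
            fields = parse_qs(body)  # many gateways POST form-encoded data
            sender = fields.get("from", ["unknown"])[0]
            text = fields.get("message", [""])[0]
            save_draft_submission(sender, text)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), SmsWebhook).serve_forever()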

On the output side, we’re talking to CKAN.org and the Open Knowledge Foundation about hosted data stores. Our users frequently request a place to host, distribute, and visualize the completed datasets they already have in hand. We’re undecided as to whether to support this use case (it’s not a big expansion from what we already have) or send them elsewhere so we can focus on managing field teams. Creating a bridge between the Indaba database and a CKAN-like network of databases is an attractive long-term goal, depending on demand.
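
As a rough illustration of what such a bridge could look like, here is a minimal sketch that pushes a completed dataset’s metadata to a CKAN instance through CKAN’s action API (package_create). The instance URL, API key, and dataset fields are placeholders, and this is a sketch of the idea rather than anything we have built.

    import json
    import urllib.request

    def publish_to_ckan(ckan_url: str, api_key: str, dataset: dict) -> dict:
        """Create a dataset record on a CKAN instance via its action API."""
        req = urllib.request.Request(
            f"{ckan_url}/api/3/action/package_create",
            data=json.dumps(dataset).encode("utf-8"),
            headers={"Content-Type": "application/json", "Authorization": api_key},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8"))

    # Hypothetical usage: push a finished scorecard's metadata to a data store.
    if __name__ == "__main__":
        result = publish_to_ckan(
            ckan_url="https://demo.ckan.org",   # placeholder instance
            api_key="YOUR-API-KEY",
            dataset={
                "name": "example-integrity-scorecard-2011",  # unique, lowercase
                "title": "Example Integrity Scorecard 2011",
                "notes": "Indicator scores exported from a completed fieldwork project.",
            },
        )
        print(result["result"]["id"])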

We learned about other projects while presenting at events organized by the World Wide Web Foundation, Aspiration, the Transparency and Accountability Initiative, and others.

Note to readers: if you run a project that we should be aware of, please contact us!

New features added since launch

While we have not emphasized expanding the feature set this year, we have made incremental updates based on user feedback and our original road map.

  • Automation of daily/weekly/monthly system backups.
  • 24/7 monitoring and alarms for HTTP availability and application response (see Uptime section).
  • File attachments embedded into scorecards in the Fieldwork Manager, functioning similarly to the “reference” text field.
  • Support for “Not Applicable” score formats.
  • A “Force Exit” feature which gives managers better options for handling an assignment that has been abandoned in progress by a field contributor.
  • Research into how to improve the display of dynamic visualizations. (The best approach appears to be rendering with Flash on the server side and serving cached image files to the Web. Flash has mature charting tools but is not desirable for end users; HTML5 or jQuery are options, but we’re late adopters by nature. A sketch of the cached-image approach follows this list.)
  • Partner user branding integrated into Fieldwork Manager.
  • Improved attachments support in text tools.
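
As promised in the visualization item above, here is a minimal sketch of the “render once, serve a cached image” pattern, using Python and matplotlib purely as a stand-in for whichever charting backend (Flash or otherwise) ends up doing the rendering. The cache path, project names, and scores are invented.

    import os
    import matplotlib
    matplotlib.use("Agg")  # headless rendering, suitable for a server
    import matplotlib.pyplot as plt

    CACHE_DIR = "chart_cache"

    def cached_score_chart(project_id: str, scores: dict) -> str:
        """Render a bar chart of indicator scores to a cached PNG.

        If the image already exists it is reused, so the expensive rendering
        step runs once per project rather than once per page view.
        """
        os.makedirs(CACHE_DIR, exist_ok=True)
        path = os.path.join(CACHE_DIR, f"{project_id}.png")
        if os.path.exists(path):
            return path
        fig, ax = plt.subplots(figsize=(6, 3))
        ax.bar(list(scores.keys()), list(scores.values()))
        ax.set_ylabel("Score")
        ax.set_title(f"Scorecard summary: {project_id}")
        fig.tight_layout()
        fig.savefig(path)
        plt.close(fig)
        return path

    # Invented example data; a real build would read scores from the database.
    print(cached_score_chart("example-2011", {"Access to Info": 75, "Elections": 60, "Budgets": 82}))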

System reliability

Uptime for the first 14 months has exceeded our best hopes. Based on our monitors, which run once a minute (a sketch of this kind of check follows the list below):

  • Our total unplanned downtime was less than ten minutes.
  • We had two network availability errors, each resolved within five minutes.
  • Server/application stack has had no unplanned outages, though one planned server reboot was not communicated effectively, leading to approximately 4 hours of downtime.
  • We had one incident of limited functionality (scorecards were unavailable) lasting six hours. This was due to a database error in which a misconfigured “delete” button affected database records. The Indaba ops team recovered full functionality without data loss within six hours, despite the error occurring at 5pm on a Friday.
  • Hurricane Irene damaged hardware associated with the Indaba development environment but no data was lost and the storm did not affect system performance.
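
For the curious, here is a minimal sketch of what a once-a-minute HTTP availability check amounts to. The URL, thresholds, and alert() stub are placeholders, not our actual monitoring setup.

    import time
    import urllib.request

    CHECK_URL = "https://www.example.org/"   # placeholder, not the real endpoint
    TIMEOUT_SECONDS = 10
    INTERVAL_SECONDS = 60                    # once a minute, as described above

    def alert(message: str) -> None:
        # Stand-in for whatever raises the alarm (email, SMS, pager, ...).
        print(f"ALERT: {message}")

    def check_once() -> None:
        started = time.time()
        try:
            with urllib.request.urlopen(CHECK_URL, timeout=TIMEOUT_SECONDS) as resp:
                elapsed = time.time() - started
                if resp.status != 200:
                    alert(f"unexpected status {resp.status}")
                elif elapsed > 5:
                    alert(f"slow response: {elapsed:.1f}s")
        except Exception as exc:
            alert(f"unreachable: {exc}")

    if __name__ == "__main__":
        while True:
            check_once()
            time.sleep(INTERVAL_SECONDS)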

Staffing

We continue to work with and recommend Open Concept Systems as our engineering and operations contractors.

Monika (Kerdeman) Shepard joined the Indaba team within Global Integrity in August 2011. She is focused on staffing, project design, user support, and community building. Monika contributed to Indaba user stories while at the World Resources Institute and has been highly successful in bridging the gap between user desires and system realities.

Jonathan Eyler-Werve is transitioning out of Global Integrity at the end of 2011 to take a sabbatical and explore projects in his hometown of Chicago. He joined Global Integrity in 2002 and has been working remotely since 2005.
