
Goodbye/Hello!

I’ve spent the last five years working on the Google Civics team alongside some of the most talented engineers in the business.  During that time we built and launched products that have helped hundreds of millions of users participate in the civic process.  Among my favorite Google projects are:

  • The ever more ambitious US election products for the 2012, 2014 and 2016 cycles.
  • Launching “Who Represents Me” information in the Google Civic Information API
  • A pilot initiative that provided geo-targeted municipal information (like school cancellations, suspension of parking rules or city council meeting announcements) to Google Now users in Los Angeles.
  • Election results features for the UK, Mexico, Spain, Brazil, France and Canada*

I’ve learned so much from the smart, kind, playful people I’ve worked with at Google, and I’m immensely grateful for the opportunity to work on projects that push the envelope of what can be done.  It’s been a privilege, but the time has come to say “goodbye”. 

And also “hello”.  The first week of March, I start a new role at Facebook as the product manager for local news.  I have long been interested in the ways that technology can help us identify shared values and build better mechanisms for making decisions about our communities.  I’m wary of becoming infected with tech-savior syndrome, but it’s clear that our platforms play a role in maintaining strong democracies that serve all of us well.  Our global society is complex and diverse. Defeating tribalism, fostering informed public debate and making space for compromise is necessary for good governance. Of all the shared institutions supporting our civic infrastructure, trusted, local journalism may be the most threatened.  We need access to accurate sources of information and a healthy public square to make decisions in the best interest of the people.  

Last week, Facebook publicly committed to focus on building features that will improve our ability to create the world we want for each other.  I am honored to be asked to join this effort, and I look forward to serving the users, journalists and publishers in my new role.

I start the new gig in two weeks.  Until then, you can find me knee deep in moving boxes.  I’ll be packing up my family for a move to New York.  If you’re a friend in DC, Reid and I would love to say goodbye in person.  If you aren't yet a friend, it's not too late!  We will be hanging out with friends at the Copycat Co. starting at 7:30 on Wednesday March 1.  I hope to see you before we leave.  


An Open Letter to Zack Exley and Becky Bond in Response to Their Article

Dear Zack and Becky,

I am writing in response to your article, "Hillary Clinton’s Vaunted GOTV Operation May Have Turned Out Trump Voters."  I have several issues with the article, most of them regarding the substance of your argument, but I also take issue with the style of your critique.  I review my concerns here in the hope that they contribute to an improved post-election retrospective.

1.  At the start of your argument, you state that "Anecdotal evidence points to anywhere from five to 25 percent of contacts were inadvertently targeted to Trump supporters." Later you say, "Volunteers reported as many as 30% of the replies they received from voters they were urging to get out were Trump supporters."  I do not believe that anecdotal data is sufficient to support your conclusion that micro-targeting should not be used for creating GOTV universes.  As a movement, we take pride in our fact-based approach to both policy and campaigning.  If you don't have the data to back up your accusations, then you shouldn't go public with your critique.

2. Even if, for the sake of argument, we assume that your opening premise is true, the increased contact rate with Trump supporters demonstrates correlation, not causation.  In order to demonstrate that a strategy based in "big organizing" beats out "small organizing," you'd need to present a comparison proving that real IDs beat a model-based GOTV universe.  You do not present this comparison.

3.  You should not need to tear down someone else's work in order to build credibility for your own conclusion.  If the "big organizing model" was so successful for you during the primaries, your argument would be more powerful if you published metrics showing that success.  How good were your IDs at predicting turnout for Senator Sanders?  Even if you didn't use a comparison model at the time, it wouldn't be hard to ask an agency to run a historical snapshot of your data through their model and compare your IDs against it.  It's ironic that you cite a "lack of actual data" as the reason for your skepticism of micro-targeting in an article that cites no data to support its own conclusion.

4.  Given the weight you both carry within the progressive movement, I worry that you've set your philosophy of big organizing, a philosophy that I support, up against the science of data.  Models aren't just used for defining a GOTV universe.  We also depend on that same math to help us listen properly to constituents when we have millions and millions of messages that couldn't possibly be read by a small staff.  Models help us understand what kinds of issues are important to a voter so that we can send a nurse more detailed information about the health care policy she cares about without inundating every single supporter with every detail about everything.  Models help us understand whether a voter might prefer to be contacted via the phone, email or text.  Models help us serve our supporters better.  Don't make math the enemy.

5.  Finally, I take issue with the meanness of this article.  I will always support after-action examination of our work with an eye to determining how we can improve.  However, your thesis did not require you to attack the work of our colleagues and friends, and you chose to do so anyway.  Your critique wasn't data-driven.  And you chose to publish three days after a devastating loss.  I can forgive a lot, but I struggle to forgive unkindness.

No doubt there will be a lot of public and private conversations about the 2016 campaign moving forward. I'd like to remind those of you who read this that truly productive post-mortems are NOT about highlighting failures with the purpose of assigning blame.  They are about reviewing the work and the outcome in service of improvement. The conversation needs to be constructive, not destructive.  And, most importantly, our conclusions should be based in fact.


Describing the data pipeline: A vocabulary for city data analysis

Recently, I was lucky enough to attend the What Works Cities Summit, a gathering of city leaders dedicated to using data to inform their policy-making. As I walked through the conference, I tried to identify whether each new city official I met was a strategist or practitioner. Strategists include mayors, city planners, police chiefs—officials seeking to use data-driven metrics in decision-making. Practitioners focus on gathering data from noisy and imperfect real-world sources and creating usable products, like dashboards and reports, which decision-makers use to inform their work.

The city data practitioners I met were mostly self-taught. The typical career path involved starting as the database manager, munging data in the basement in the late ’80s or early ’90s. As technology and data became more important in city government, these individuals were elevated and tasked with generating the reports upon which the strategists were depending.

I recognized many of the problems practitioners were tackling. There were questions about data authority, accuracy, objectivity, and coverage, all questions I encounter in my work at Google. It was clear that practitioners had developed many inventive solutions to these challenges, but the lack of a shared vocabulary hampered their ability to discuss best practices. I saw many conversations during the conference in which it took skilled practitioners ten or fifteen minutes of discussion to even reach the point where they could meaningfully share their experiences. I started as a self-taught data munger, and I identify with the practitioners I met. However, at Google I now work in a community of data analysts and data scientists who have developed a formal framework and vocabulary for describing the problems involved in this work.

The basic framework I use to describe the data pipeline consists of four steps:


1. Ingestion refers to the process of finding data and importing it into a database, even if that’s just a spreadsheet. Ingestion can involve reformatting existing files or it can mean, in the worst-case scenario, manually transcribing data from paper to a digital format.

2. Munging and wrangling refers to the often arduous task of getting data ready for analysis. Kind of a catch-all bucket of work, data munging involves untangling all the tricky knots that inevitably form when data is not well cared for. Munging can involve parsing fields that need to be separated, correcting spelling mistakes, dealing with missing data, normalizing data, and ensuring consistency of format throughout a dataset.

3. Computation, analysis, and modeling refers to the work involved in taking cleaned-up data and generating metrics upon which decision makers rely. In the most sophisticated data science shops, this would include building predictive models that correlate data with outcomes. It can also be as simple as writing a basic formula in a spreadsheet.

4. Reporting, the final step, is an often overlooked part of the data analysis pipeline. Helping humans understand the lessons they can draw from analyzed data, as well as the strengths and weaknesses of the data behind each metric, makes decisions based on that data more likely to produce improved performance.

Each step of this process is associated with common challenges, sources of error, and approaches to efficient handling. Adopting this framework and vocabulary would help city analysts share lessons learned, identify best practices, and form a stronger community.
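To make the framework concrete, here is a minimal sketch of the four steps in Python.  The spreadsheet of service requests, its file name, and its column names are all invented for illustration; the point is only to show where ingestion ends, munging begins, and so on.

```python
# A minimal, hypothetical walk through the four pipeline steps using pandas.
import pandas as pd

# 1. Ingestion: import the raw data into a queryable structure.
raw = pd.read_csv("service_requests.csv")  # illustrative file name

# 2. Munging and wrangling: normalize formats, fix obvious problems,
#    and handle missing values.
raw["opened"] = pd.to_datetime(raw["opened"], errors="coerce")
raw["closed"] = pd.to_datetime(raw["closed"], errors="coerce")
raw["category"] = raw["category"].str.strip().str.lower()
clean = raw.dropna(subset=["opened", "closed"]).copy()

# 3. Computation, analysis, and modeling: derive the metric decision
#    makers rely on, here the median days to close a request, by category.
clean["days_to_close"] = (clean["closed"] - clean["opened"]).dt.days
summary = clean.groupby("category")["days_to_close"].median()

# 4. Reporting: present the result along with a note about how much data
#    was dropped, so readers can judge how much to trust the metric.
dropped = len(raw) - len(clean)
print(summary.sort_values())
print(f"Note: {dropped} of {len(raw)} records lacked usable dates and were excluded.")
```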

Cross-posted from the What Works Cities blog


Using Data to Create Better Public Policy: A Literature Review

Over the past 18 months, I've become interested in projects that use data, particularly aggregated, anonymized location data from cell phones, to create indicators of population well-being.  Public-private data partnerships offer us the opportunity to obtain feedback on the effectiveness of policy changes in near real-time, and more cheaply than existing policy measurement programs.  The same data used by the private sector to gain insights into customers and markets can be used to define public vulnerabilities and measure potential solutions.  This kind of research and data science will significantly improve our ability to create effective public policy.

The articles and books below are ones I found interesting or useful during my research.  I will update this post as I find new information.


Socioeconomic Indicators

Title:  On The Relationship Between Socio-Economic Factors and Cell Phone Usage

Authors: Frias-Martinez V. and Virseda J.

In 2012, researchers created a method of approximating socioeconomic indicators from calling patterns, based on the expense of calls, the reciprocity of communications, the physical distance between individuals and their contacts, and the geographical areas around which people move.

 

Title:  Network Diversity and Economic Development

Authors: Eagle N., Macy M., Claxton R.

In 2010, researchers showed that the diversity of an individual's relationships correlates with the economic development of communities. They validated an essential assumption: that more diverse ties correlate with better access to social and economic opportunities.  This finding could be particularly relevant to policy makers interested in combating poverty.

 

Title:  Ubiquitous Sensing for Mapping Poverty in Developing Countries

Authors: Smith C., Mashhadi A., Capra L.

Researchers used cell phone call records to map poverty levels in Côte d’Ivoire.

 
Map of poverty estimates based on the diversity of connections between mobile phone antennas.

 


Title:  Finger on the Pulse: Identifying Deprivation using Transit Flow Analysis.

Authors: Smith C., Quercia D., Capra L.

A map of London colored by an index of deprivation based on geolocation data from London’s Rail System.  


Title: On the relationship between socio-economic factors and cell phone usage

Authors: Vanessa Frias-Martinez and Jesus Virseda

Researchers approximated census variables from cell phone records.


Title: Prediction of Socioeconomic Levels Using Cell Phone Records

Authors: Victor Soto, Vanessa Frias-Martinez, Jesus Virseda, Enrique Frias-Martinez

Information derived from the aggregated use of cell phone records can be used to identify the socioeconomic levels of a population, with correct prediction rates of over 80% for an urban population of around 500,000 citizens.


Title: The Hidden Image of the City: Sensing Community Well-Being from Urban Mobility

Authors: Neal Lathia, Daniele Quercia, Jon Crowcroft

The authors test whether urban mobility, as measured by public transport fare collection sensors, is a viable proxy for the well-being of a city’s communities. They validate this hypothesis by examining the correlation between London’s urban flow of public transport and census-based indices of the well-being of London’s census areas. They find that not only are the two correlated, but a number of insights into the flow between areas of varying social standing can be uncovered with readily available transport data. For example, deprived areas tend to preferentially attract people living in other deprived areas, suggesting a segregation effect.


Title: Poverty on the Cheap: Estimating Poverty Maps Using Aggregated Mobile Communication Networks

Authors: C Smith-Clarke

The authors outlined and tested a methodology for estimating poverty levels and validated their results. They note significant data collection issues: “Difficulty in obtaining data currently prevents us from establishing the global applicability of our work.”


Title: Estimating Migration Flows Using Online Search Data

Author: Global Pulse

This study explored whether online search data could be analyzed to understand migration flows and produce a proxy for migration statistics, using Australia as a case study.


Title: Analysing Seasonal Mobility Patterns Using Mobile Phone Data

Author: Global Pulse

This project quantified seasonal mobility of populations in different regions of Senegal, based on analysis of anonymised mobile phone activity data.


Title: Using Mobile Phone Data and Airtime Credit Purchases to Estimate Food Security

Author: Global Pulse

This study assessed the potential use of mobile phone data as a proxy for food security. Results showed high correlations between airtime credit purchases and survey results referring to consumption of several food items. In addition, models based on anonymised mobile phone calling patterns and airtime credit purchases were shown to accurately estimate multidimensional poverty indicators.


Transportation Health Indicators

 

Title: AllAboard: a System for Exploring Urban Mobility and Optimizing Public Transport Using Cellphone Data

Authors: Berlingerio M., Calabrese F., Di Lorenzo G., Nair R., Pinelli F., and Sbodio M.

Researchers at IBM’s AllAboard project used call record data mapped against 85 bus routes in Abidjan, Côte d’Ivoire’s largest city, to suggest a solution to the city’s congestion.

 

Title: Sustainable Urban Transportation: Performance Indicators and Some Analytical Approaches

Authors: Black, J., Paez, A., and Suthanaya, P.

 


Public Health Indicators

Title: Quantifying the Effect of Human Mobility on Malaria

Authors: Wesolowski A., Eagle N., Tatem A., Smith D., Noor A., Snow R., Buckee C. (2012)

In 2012, using anonymized cell phone locations, researchers were able to model malaria transmission routes in Kenya.

 

Title: An Agent-Based Model of Epidemic Spread using Human Mobility and Social Network Information.

Authors: Enrique Frias-Martinez, Graham Williamson, Vanessa Frias-Martinez

Following the 2009 H1N1 flu epidemic, researchers used cell phone data to measure the impact of government policies designed to limit the movement of residents in affected areas.


Post-Crisis Well Being Indicators

Title: Improved Response to Disasters and Outbreaks by Tracking Population Movements with Mobile Phone Network Data: A Post-Earthquake Geospatial Study in Haiti

Authors: Bengtsson L., Lu X., Thorson A., Garfield R., Schreeb J.

Using cell phone data, researchers studied anonymized population movements in Haiti before and after the 2010 earthquake.  This information provided a more accurate accounting of displaced populations than the estimates produced by the Haitian Civil Protection Agency.

 

Title: Using Mobile Phone Activity for Disaster Management During Floods

Author: Global Pulse

This project combined the analysis of mobile phone activity data with remote sensing data during severe flooding in the Mexican state of Tabasco as a method to inform emergency management response.


Data and Privacy Design

Title: k-Anonymity: A Model for Protecting Privacy.

Author: Sweeney L.

Researchers created a model called k-anonymity and a set of policies for releasing private data with scientific guarantees that individuals who are the subjects of the data cannot be re-identified.

 

Title: ℓ-Diversity: Privacy Beyond k-Anonymity

Authors: Ashwin Machanavajjhala, Johannes Gehrke, Daniel Kifer, Muthuramakrishnan Venkitasubramaniam

Researchers identify vulnerabilities in the model developed in the study above and propose a new privacy definition called ℓ-diversity.

 

Title: Mapping the Risk-Utility Landscape of Mobile Data for Development & Humanitarian Action

This project assessed the impact that aggregating mobile data to protect privacy has upon the utility of the data for transportation planning and pandemic control and prevention. The proposed methodology allows for determining what level of data aggregation is the minimum required to adequately protect individual privacy while preserving its value for policy planning and crisis response.


Books

 

The Responsive City

The New Science of Cities  A state-of-the-art academic summary of modelling how cities function, with an emphasis on the flow of people.

Architecture: A Modern View  Richard Rogers’ manifesto, in brief: buildings should be sustainable, and that implies outliving the purposes they were designed for. Technology can help keep purposes flexible. The same approach should apply at the city scale. The social consequences of a project shouldn’t be viewed in isolation from the project.

A Manifesto for Sustainable Cities  Building ‘sustainable’ cities from scratch by definition isn’t sustainable - you need a path that evolves existing cities in a sustainable direction.

Happy City: Transforming Our Lives Through Urban Design  After decades of unchecked sprawl, more people than ever are moving back to the city. Dense urban living has been prescribed as a panacea for the environmental and resource crises of our time. But is it better or worse for our happiness? Are subways, sidewalks, and tower dwelling an improvement on the car-dependence of sprawl?

Social Physics (Sandy Pentland)  From one of the world’s leading data scientists, a landmark tour of the new science of idea flow, offering revolutionary insights into the mysteries of collective intelligence and social influence.

Against the smart city (The city is here for you to use)  "A cogent debunking of the smart city. Adam Greenfield breaks down the term with wit and clarity, exposing that the smart city may be neither very smart nor very city at all. An insightful, timely and refreshing read that will make you rethink the city of tomorrow."

The Atlas of the Real World: Mapping the Way We Live  Digitally modified maps or cartograms depict the areas and countries of the world not by their physical size, but by their demographic importance on a vast range of subjects, from basic data on population, health, and occupation to how many toys we import and who’s eating the most vegetables.

Nudge: Improving Decisions About Health, Wealth and Happiness  We are all susceptible to biases that can lead us to make bad decisions that make us poorer, less healthy and less happy. And, as Thaler and Sunstein show, no choice is ever presented to us in a neutral way. By knowing how people think, we can make it easier for them to choose what is best for them, their families and society. Using dozens of eye-opening examples the authors demonstrate how to nudge us in the right directions, without restricting our freedom of choice. Nudge offers a unique new way of looking at the world for individuals and governments alike.

Programming Collective Intelligence  Programming Collective Intelligence takes you into the world of machine learning and statistics, and explains how to draw conclusions about user experience, marketing, personal tastes, and human behavior in general -- all from information that you and others collect every day. Each algorithm is described clearly and concisely with code that can immediately be used on your website, blog, Wiki, or specialized application.

Beyond Transparency  Cross-disciplinary survey of the open data landscape, in which practitioners share their own stories of what they’ve accomplished with open data.


Courses

https://www.coursera.org/course/techcity

This course focuses on how technology is used to engage with the public to support decision-making, and on the creative ways that everyday people are using technology to improve their cities.  Students will examine tools for analyzing the city, then explore the fascinating ways that cities are using real-time technology.  You'll hear from technological innovators and thought leaders about all of these topics, and you will get to engage with a topic you are most interested in to create a project in your own city.

 


Research groups/people

CASA at UCL

Sandy Pentland at MIT

Senseable City Lab at MIT

Urban Design Studies Unit at U Strathclyde

Renaud Lambiotte and Vincent Blondel at U Namur & UCSB


Misc

Benthem Crouwel - Five archetypes for a changing world - exhibition at the Architektur Galerie Berlin

Shows that even though the ‘names’ of public spaces are the same as in the 19th century, their meaning and vocation have changed.


Hey Uncle Sam, Eat Your Own Dogfood

It’s been five years since Tim O’Reilly published his screed on Government as Platform. In that time, we’ve seen “civic tech” and “open data” gain in popularity and acceptance. The Federal Government has an open data platform, data.gov. And so too do states and municipalities across America. Code for America is the hottest thing around, and the healthcare.gov fiasco made fixing public technology a top concern in government. We’ve successfully laid the groundwork for a new kind of government technology. We’re moving towards a day when, rather than building user-facing technology, the government opens up interfaces to data that allow the private sector to create applications and websites that consume public data and surface it to users.

However, we appear to have stalled out a bit in our progress towards government as platform. It’s incredibly difficult to ingest the data in these portals into successful commercial products. The kaleidoscope of data formats in open data portals like data.gov might politely be called ‘obscure’, and perhaps more accurately, ‘perversely unusable’. Some of the data hasn’t been updated since first publication and is simply too stale to use. If documentation exists, most of the time it’s incomprehensible.

Measuring impact is hard, but I feel safe in saying that the number of real people who have benefited from the open data released in these portals is smaller than we would have liked. Data.gov cheerleaders will point to the sheer quantity of data available now which was never available before, and I applaud that work. However, if we want to move beyond releasing data for data’s sake, we need to focus on data quality over data quantity.

The real problem behind our data quality issues is that the people who have the power to fix the data don’t have an incentive to understand the problem or improve it. Government officials are lovely people who work hard in under-resourced offices. Although many of them believe deeply in transparency and citizen engagement, these portals tend to generate additional burdens that get in the way of their primary functions. When data is stale or inaccurate, someone has to take the time to update it or fix it. It is difficult for any one group to see beyond the limits of their own projects. The real trick is to align incentives. What we actually need is for Uncle Sam to start dogfooding his own open data.

For those of you who aren’t familiar with the term, dogfooding is engineers’ slang for using their own products. So, for example, Google employees use Gmail and Google Drive to organize their own work. The term also applies to engineering teams that consume their own public APIs to access internal data. Dogfooding helps teams deeply understand their own work from the same perspective as external users. It also provides a keen incentive to make products work well.

Dogfooding is the golden rule of platforms. And currently, open government portals are flagrantly violating this golden rule. I’ve asked around, and I can’t find a single example of a government entity consuming the data it publishes. (I’m sure there are examples, but it’s not a common occurrence.) For the most part, inter-agency data sharing doesn’t happen at all. When it does happen, the agencies set up non-public back channels to push and pull information. If the CFPB needs Census data, they call up their counterpart and ask for it directly. If the data isn’t in the right format, or if it lacks coverage and more recent information exists, the CFPB will ask the Census to fix it directly. The upshot of all of this is that the public never benefits from the improved data. Also, if data is important to other agencies, that is a good signal that it will be important to third-party developers; if it hasn’t already been released, it should be considered for inclusion in the open data effort. Because the data portals aren’t a method of inter-agency data flow, the portal operators are currently missing these signals.

Tim O’Reilly likes to offer up Amazon Web Services as the model for the concept of government as platform. It’s a great model. But Amazon didn’t start off as a platform. According to Steve Yegge, Jeff Bezos realized Amazon needed to become a platform after Amazon was already a large company and Jeff then issued a Big Mandate:

His Big Mandate went something along these lines:

1) All teams will henceforth expose their data and functionality through service interfaces.

2) Teams must communicate with each other through these interfaces.

3) There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.

4) It doesn’t matter what technology they use. HTTP, Corba, Pubsub, custom protocols — doesn’t matter. Bezos doesn’t care.

5) All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

6) Anyone who doesn’t do this will be fired.

7) Thank you; have a nice day!

(If you haven’t yet read Steve Yegge’s Platform Rant, you should go read it now. It’s highly entertaining, and it levies at Google the same criticisms I’m making here.) Amazon slowly, painfully transformed from product to platform, and in the process created a world-class developer platform. If we want to continue to see government develop as a platform, officials will have to learn to treat each other as they would external developers: with the same support, data, interfaces, and documentation.

Don’t get me wrong, I’m a realist. I don’t think President Obama could realistically micro-manage the Federal Government the way Jeff Bezos can manage Amazon. But the dogfooding principle can be applied as a pilot and expanded over time. If I could offer some unsolicited advice to the new US CTO, Megan Smith, I would suggest she focus on coaxing, encouraging, bartering with, and otherwise chivvying government agencies into committing to use the data they publish as their sole source for internal purposes.

If government as platform is where we’re headed, let’s make sure all of us, governmental and non-governmental actors, are using the same platform to build our technologies. Aligning the incentives of public officials with third party developers will help ensure we all have access to the high quality data we need to build technology for everyone. It’s time for Uncle Sam to start eating his own dogfood.

Suggestions for a Dogfood Pilot in the Government

 

  1. Identify the data most often shared through intra-agency and inter-agency transfers. This data could be currently shared as emailed spreadsheets or through sophisticated APIs. The current method of sharing isn’t relevant.
  2. Work with a cross-functional team of public officials and outside stakeholders to prioritize the data most likely to be valuable in products built for public consumption.
  3. Work with agency officials to create service interfaces for the prioritized data. It doesn’t matter what technology is used, but the service must be designed so that the interface could be exposed to the outside world (see the sketch after this list).
  4. After the service interface is finished, issue a directive that all interprocess communication must be through the service interfaces. No back-channeling of data allowed.
  5. Make the service interfaces and documentation available to the developer community.
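As a purely illustrative sketch, and not a description of any real agency's system, a minimal externalizable service interface for a dataset that two agencies currently trade by email might look something like this in Python (using Flask; the dataset, field names, and endpoint path are all hypothetical):

```python
# A hypothetical read-only service interface for a shared dataset.
# Everything here (dataset, fields, path) is invented for illustration;
# the point is that the same endpoint an agency consumes internally
# could later be exposed to outside developers unchanged.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the agency's authoritative data store.
FACILITY_INSPECTIONS = [
    {"facility_id": "A-100", "inspected_on": "2014-06-02", "result": "pass"},
    {"facility_id": "B-220", "inspected_on": "2014-06-03", "result": "fail"},
]

@app.route("/v1/inspections")
def list_inspections():
    """Return inspection records, optionally filtered by result."""
    result = request.args.get("result")
    records = [r for r in FACILITY_INSPECTIONS
               if result is None or r["result"] == result]
    return jsonify({"count": len(records), "results": records})

if __name__ == "__main__":
    app.run()
```

Once the sister agency and outside developers are pulling from the same documented endpoint, the portal operators get the signal described above: if the data goes stale or the format breaks, the agencies that depend on it feel it first.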


The Three Levers of Civic Engagement

Earlier this month, I had the chance to speak at the Personal Democracy Forum in New York. The conference explores the ways in which the internet is changing politics and governance.  This year's theme, Save the Internet: The Internet Saves, gave us the chance to delve into the recent revelations that the internet is enabling mass surveillance, the threats to net neutrality, and the ways the internet is transforming civil society and government.

In my talk, I presented a framework for evaluating new civic technologies.  Below you can find my slides and speaker notes from the occasion.


The Three Levers of Civic Engagement

For the past ten years, I’ve seen all sorts of civic tools come and go, and all of them were supposed to be the next great thing in civics. But if you really look at the metrics, like the number of users, number of return visitors, or time spent using the tool, and compare them to the metrics of products designed to help people shop or hail a cab, it’s clear we haven’t begun to tap the internet’s potential to change the way we make decisions and collectively allocate resources within our communities.

Over time, I have come to believe that our inability to attract mass usage for our products is, in part, due to a willful ignorance of how users are motivated to civic action. To build the technologies that will transform government and governance, we need to do a better job of designing and building tools for real world users.

People still need to make dinner.

Or, in other words, we need to start treating our users as rational actors.  The people we want to use our tools to influence politics and governance are very busy, and there are plenty of ways they can spend their time aside from on our platforms.

The internet isn’t magic.  

We all wish users would suddenly become passionate about the intricate policy decisions that impact their lives, but they aren’t going to develop that interest just because we invented the internet.

As creators of campaigns and civic engagement tools, we need to be much more honest with ourselves, and each other, about the motivations and interests of “real people.”  Just to be clear, none of us here in this room are real people for the purposes of civic engagement. Today I’m going to lay out a framework that we can use to talk about digital civic engagement and whether real people will (or won’t) interact with those tools.

Right after law school, I took a job as a community organizer.  I was responsible for empowering constituents to find shared interests and to help them work together to impact the decisions being made about their communities.  My day-to-day involved working with church leadership, holding house meetings, organizing one-on-ones and yes, I also did my share of door knocking and phone banking.  It was as a community organizer that I first learned about leadership and power, and how to use action to achieve specific outcomes.

And it was very hard work.  The people in my community cared about their neighborhood and the decisions being made by local representatives, but they also had two jobs.  They had mothers with health problems.  They had roofs with leaks.  They needed to pick the kids up from soccer.  They had very busy lives.  Much of my time was spent convincing people that the work they could do on behalf of their community was worth the time they would sacrifice from other very important life tasks.  

My time as an organizer has informed my work as a product manager.  When designing tools to help people take action online, I never forget how hard it was for my friends in Milwaukee to find time to work on local issues.  

The numbers back up my anecdotal evidence.  A 2013 Pew study, Civic Engagement in the Digital Age, found that, in the previous 12 months, only 22% of respondents had attended a political meeting on local, town, or school affairs.  18% had contacted their elected representative about an issue that was important to them.  17% had signed a petition.  In the California state primary last Tuesday, only 13.5% of California’s eligible voters voted.

Game Theory and Civic Engagement Tech

A mathematical equation buried in a 50-year-old academic paper might not seem the most logical place to start a conversation about human motivation and community action. But it’s what got me started.

Last year, I stumbled across a paper written in 1968 by William Riker and Peter Ordeshook.  In it, William and Peter (I think of them as first-name-basis friends now) apply game theory and mathematics to elections to explain voting behavior.  The paper “The Calculus of Voting” has had a profound impact on my thinking about civic technology.

And here’s the equation.  PB + D > C.  For this talk, I’m going to adapt the equation to capture the relative importance of various factors influencing a citizen to take a civic action.  So basically, I’m going to use this to explain why people will or won’t use your awesome new civic engagement tool.

Probability

P is the perceived probability that an action will change the outcome of a civic decision.  Or, for example, the likelihood that writing a letter to your Congressional representative will change the outcome of a floor vote.

Benefit

B is the benefit the citizen as an individual will receive if the civic decision swings in their favor.  As an example, if I have a pre-existing condition, the passage of the Affordable Care Act would be a very real and measurable benefit to me.  Or, if I had a lot of money invested in the stock market, a change to the tax code that cut the capital gains tax would increase my net worth.

Duty

The good professors William Riker and Peter Ordeshook referred to D as Civic Duty.  But I think it's something more than that.  D represents feelings of goodwill, and the sense of being part of something bigger than yourself.  D is the pleasure people get from wearing an "I Voted" sticker and the pride they feel in being a good citizen.  It is the sum total of enjoyment a person gets from being part of a community's civic life.

Cost

C is the time, effort and financial cost of taking a civic action.

In the 2012 general election, 129 million people completed a ballot. The likelihood that your vote was going to decide the outcome of that election was so close to zero, it makes no difference. Essentially P = 0. Zero times B equals zero. So the equation becomes 0 + D > C. As applied to the 2012 election, people didn’t vote because they thought they were going to be the deciding vote between Governor Romney and President Obama. They voted because their D, or their sense of civic duty, outweighed the C, or the cost of finding a polling place, making sure they were registered to vote, making sure they had the right ID and taking the time to go and vote.

However, at the local level, your chance of influencing a decision creeps upward. Let’s say your local city council is considering funding a dog park across the street from your house, and you own a dog. Going to a city council meeting and making a passionate argument might sway a vote. The P value is high. However, the cost of this particular civic action is also pretty high. You need to figure out that the city council is thinking about funding a dog park, identify when the city council is meeting, take time to travel to the meeting, and then (eek) public speaking! In this case, the higher P value along with the benefit to you as an individual dog owner might help you overcome the inhibitions presented by the cost of the civic action.
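To make the arithmetic explicit, here is a toy sketch of those two scenarios.  All of the numbers are invented for illustration; they are arbitrary "utility" values, not estimates of anything real.

```python
# A toy illustration of the calculus of civic action: act when P*B + D > C.
# All numbers below are invented; the units are arbitrary utility values.

def will_take_action(p, b, d, c):
    """Return True if expected benefit plus duty outweighs the cost."""
    return p * b + d > c

# Presidential election: P is effectively zero, so the decision reduces
# to whether civic duty (D) outweighs the cost of voting (C).
print(will_take_action(p=0.0000001, b=10_000, d=5, c=3))  # True: duty carries it

# Local dog park vote: P is much higher, but so is the cost of finding the
# agenda, traveling to the meeting, and speaking in public.
print(will_take_action(p=0.05, b=200, d=5, c=12))  # True: P*B helps clear the bar
```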

One of the mistakes we make as a community interested in building civic engagement tools is that we assume the old world rules no longer apply because internet = magic.  But the internet isn't magic.  


Users have and will continue to act in a mostly rational way online, just as they did offline.  Our users are people with lots of demands on their time, and the likelihood that their civic action will change the outcome of a civic decision is close to zero.  It’s going to be hard to get them to pay attention to us.

And so, as technologists who are building the next generation of civic tools, we are left with these three levers to influence user behavior.  All of our work falls into these three categories: probability, cost, and duty.

Lower the Cost

The cost lever is about ensuring we connect users with the information they need to be involved. 

Many of us build tools that reduce the cost of civic engagement.  Google's mission is all about organizing information and making it accessible, so this is our sweet spot.  Here in the US, in partnership with the Voting Information Project, my team makes information about elections available to users.  We want users to be able to find where to vote, who they can vote for, and how they can vote.  And because we know not all people use Google, we make this information available to other developers through the Civic Information API so they can surface it on their platforms and lower the cost of voting for their users.

We also provide this kind of information to voters abroad through election portals.  For example, we built a hub for the Indian election earlier this year which featured, among other things, the number of felonies with which each candidate was charged.  This is a piece of high-demand information that Indian voters find useful.

Increase the Duty

The duty lever is all about increasing a user's sense of civic duty, and it’s the kind of thing campaigns do very well.  The Obama 2012 campaign was particularly good at using images and video to create these positive feelings of belonging.

This picture was tweeted by @BarackObama on election night 2012 with the caption “Four more years.” It was retweeted over 775,000 times, and held the record for the most retweeted tweet ever until March 2014. (See here for more on Obama director of social media Laura Olin and the most viral photo of the campaign.) In part, Facebook, Twitter and Tumblr are effective engagement tools because they make use of this lever. All of the social media platforms are used by campaigns to increase a user’s sense of civic duty and thus their likelihood of taking action.

We’ve also seen successful “social pressure” experiments like the one run by the Democratic National Committee, which sent registered voters a letter telling them that “our records indicate that you voted in the 2008 election” and thanking them for their “good citizenship.”  This mailer increased voter turnout by 2.5 percentage points.

Increase the Probability

At first glance, it looks like the probability lever cannot be moved except through very local civic actions. Right now, at the federal level, the likelihood that a civic action, like calling Congress or writing a letter to Congress, will change the outcome of a vote is basically zero.

However, the internet allows us to measure online actions in a way that we never could with offline actions.  Community organizers have always known that impact is made when groups of people take action together.  If we can help our users find each other online and take action in small groups, then maybe we can start to shift “P.”

Here’s another way to think about it. In campaigns, there are individuals called kingmakers or bundlers who each pledge to raise millions of dollars for a campaign. But when you look at the way fundraising is structured, it turns out that a donor who raised a million dollars didn’t actually bring one million unique dollars in the door. That donor went out and found 10 friends who each pledged to raise $100,000. And those friends each found 10 friends who pledged to raise $10,000. It’s all a giant pyramid game.

We can do the same thing with civic action. If users take impactful actions in coordination with others, they scale their efforts and increase the likelihood that they have an impact. If these actions take place online, we can bundle a user’s actions in the same way that we bundle donations, shifting the probability that their work will change civic outcomes.

Users are Rational Actors

Too often, I’ve sat in this room and seen presentations about tools designed for the way that we wish users would behave.  Yes, it would be nice if suddenly the whole world got passionately interested in the nuances of policy decisions, but that’s not going to happen just because we invented the internet.  Our users still have dinner to make.

To the funders in this room, I challenge you to think critically about the consumer-facing civic projects you’re funding.  Are you funding products for rational users?

To the technologists in this space, we need to spend more time thinking about the ways we can reduce the cost of civic action and increase the probability that an action will impact an outcome.  This means continued investment in standardizing and scaling acquisition of information for the data layer that supports our tools. We have the technology to create personalized, localized civic engagement tools.  But we don't have the organized data to power those products.  To build a tool that lets me know when the city council is considering an issue about my neighborhood that will impact my life, I need an integrated, accurate, real-time dataset of laws, city council agendas, city council meeting minutes, city council votes, city council members and city council jurisdictions.  It also means looking beyond the federal layer of government for potential civic tools.  We have a better chance at truly transforming governance at the local level than we ever had at changing things at the federal level.  

And to the academics, we need research that is focused on the right ways to move these three levers. What motivates our friends and neighbors to action? What is the information they need to participate in the civic process? What actions are the most impactful, and how do we demonstrate this to our users? 

We’ve seen the internet transform all sorts of industries— from news and media to banking and transportation. But we haven’t yet seen this kind of disruption in our civic vertical. I’m certain the internet will fundamentally change the way we make decisions and allocate resources in our communities. However, if we want internet scale usage of the next generation of civic tools, we need to better understand real world people and how they will act in the civic space. 


Some End Notes

Unfortunately, because of time constraints, I was not able to really go into any of the complexities and potential weaknesses of this framework.  In future posts, I hope to spend more time exploring what this could mean for civic tech product design.  However, until then, I leave you with two additional important notes.

  1. In individual-level studies, citizens' perceptions of an election's closeness have little effect on their decisions about whether or not to vote.  (Ferejohn, J.A., & Fiorina, M.P. (1975). Closeness counts only in horseshoes and dancing. American Political Science Review.)  Some have interpreted this to mean that P is not an important factor in the calculation we make when deciding whether or not to vote.  However, others have argued that this proves we are all REALLY BAD AT MATH.  (Darmofal, D. (2010). Reexamining the Calculus of Voting: A Social Cognitive Perspective on the Turnout Decision. Political Psychology.)  Or, in other words, that users consistently overestimate the impact of civic actions.  In the talk, I did not necessarily clarify the difference between the perceived probability and the actual probability of impact.
  2. Although I have defined B in terms of pure self-interest, some of our users care about others and take actions that are not motivated by self-interest. An individual's altruism may also play a role in the calculus.  (For more, see Fowler, J. (2006). Altruism and Turnout. The Journal of Politics.)


Google and Diversity

Matthew Stempeck's visualization of Google's recently released diversity numbers.

Here are my thoughts on Google's diversity numbers:

  • The numbers are not great, not great at all. I really hate that.
  • In the two years I've worked here, Google has talked openly and transparently about diversity issues and their goals for improving.
  • (I'm sure it hasn't always been this way, and that we lost a lot of great co-workers and potential co-workers over misogyny and racism.  I mourn for those we lost.)
  • I think Google handles diversity better than any other company or organization in which I've ever worked-- including "progressive" organizations. Obama for America was the worst organization I've worked at in terms of talking about and addressing diversity issues. Here's a picture of the Obama 2012 tech team.
  • What Google says publicly is not the limit of what they tell us internally. Leadership's openness about their failure makes me trust their decisions.
  • By talking about the numbers publicly, Google is demonstrating a commitment to making a change.
  • As a technologist, the way I can tell if things are improving over time is by measuring them. Publicizing these numbers ensures Google is accountable to its employees and the external community.
  • The categories they break the numbers out into are dumb. Google knows gender isn't binary and that Asian isn't an ethnicity. There are good, non-public reasons the numbers are tracked this way that have nothing to do with lack of awareness or sensitivity.
  • I can feel the support and the intentional way Google is trying to ensure I achieve my potential in my day-to-day work through special mentoring, repeated and consistent messaging from my bosses that I am valued, respected and included, and through programs aimed at ensuring I am supported as a lady in the field. I have never had this anywhere else.
  • I'm really proud of the work my colleagues are doing to encourage students of color to pursue technical degrees, including Charles Pratt who sits next to me here in DC. Charles developed a new curriculum at Howard and taught CS for a year. He has been incredibly successful in preparing his students for entry into the field of technology.
  • Google has changed its policies to ensure they recruit and retain women. 
  • Google has given 40 million dollars to organizations working to bring computer science education to women and girls. Google literally puts its money where its mouth is.

I'm not saying life is perfect at Google, but I am saying I am proud to be a part of this company. Although the results are not yet where they need to be, I think Google's leadership is aggressively pursuing a data-driven strategy to ensure we attract highly talented women and colleagues from other under-represented communities.

Also, you should come and work with me.

Edited to add:  I love Matt Stempeck's visualization of Google's diversity numbers.


It’s Not My Job to Fix Your Pipeline Problem

I spend a lot of time, money and energy building my own network, including extra effort devoted to connecting with technical women and communities of color. It drives me nuts when folks ask me for help recruiting diverse candidates for job openings.

By asking me for free recruiting favors, organizations undervalue my own work in building the community. (There are people who get paid a lot to recruit; maybe these organizations should go hire one of them?) It also transfers the responsibility for fixing the pipeline problem from the people with the power and money to solve it to those negatively impacted by the bias. When conferences ask me for speaker referrals, and I fail to come through, the companies still pat themselves on the back telling themselves, “We tried, but it was Anthea’s failure to surface appropriate candidates that prevented us from achieving a diverse candidate pool.” Then they post lineups like this.

However! If a company is taking steps — aside from JUST asking me for assistance — to address pipeline problems, I am very willing to devote my time and energy to helping. I love to support organizations that run paid internship programs, fund programs like Black Girls Code or in some other way demonstrate an interest in building diversity within the ecosystem.

If organizations aren’t committing resources of some kind (manpower or money) to addressing the problem, I don’t believe they are actually interested in structural changes. My recruiting assistance may help my community get entry-level positions, but the fundamental power dynamic within the organization won’t change. When they ask me for free advice without incurring any additional costs, they demonstrate an interest in mitigating the risk of a negative press cycle, not in solving a problem.

So many of these organizations think it’s enough to just ask representatives of the communities they are trying to reach for help. In some cases it’s almost framed like a favor — “Oh, so you want a diverse lineup? Sure, why don’t you send me a list of 25 people I might consider qualified?” (And no, we can’t get you Sheryl Sandberg, David Drummond or Marissa Mayer.) This usually works because so many of our community really do want to help. Including me! It’s just time to frame it with some real talk.

So: Under the following conditions, I will offer recruiting help to organizations interested in diversifying candidates:

  1. The person asking for the favor has materially helped me in my own work for free,
  2. The request comes from a tech lady or hacker/maker of color,
  3. I am paid to help with recruiting, or
  4. The organization can demonstrate that they are devoting resources— either money or staffing— to address the pipeline problem themselves.

I want to do everything I can to change the ratio and promote diversity — but if you ask for my help, you’d better be sure that you are, too.

Originally published on Medium

Edit: Want to prove to me that you’re serious about fixing the pipeline problem? My good friend Katrin Verclas has written “10 Tips for Solving Your Pipeline Problem.”


I Am Spartaca

It happens, or it has happened, to all of us. A promotion denied, a struggle to be heard in the meeting, a hand on a thigh.

The boss is out with all the dudes from the office, and somehow he forgot to invite us.

We’ve watched a man with less experience and less education get promoted ahead of us. We’ve wondered: “Is it because I’m not as good, or is it because he’s a guy?” Even now, we don’t know.

The VC only invests in hot chicks.

We’ve felt a combination of angry and hopeless when the idea we presented gets repeated by a male colleague, praised, and implemented.

Worse: Some of us have been raped. Some of us have been hit.

Our friends have witnessed the harassment, the belittlement, the slow erasing. They didn’t say anything because they were too afraid of jeopardizing their own position, or they just didn’t know what to do.

Our colleagues undermined us, because they needed the trust and support of the boss. They needed to be one of the boys, and they obtained admission to the group by denying what they saw around them.

We didn’t say anything either because it was too dangerous. Because we couldn’t afford to lose our jobs; because a public accusation would turn into a “he said, she said”; because someday we wanted that career-making project.

But mostly we didn’t say anything because we didn’t think we were strong enough to stand up alone.

So we’re going to stand together.

We want a career in tech. We love what we do. We want to stay.

And though we still can’t tell you what happened—because we can’t prove anything, and we can’t name names—we’re going to tell you that it did happen.

We wear this badge to say so, and to let every woman in tech know that she’s not alone.

I am Spartaca.

Wear the Badge

I knew she was left out. I saw that guy being an ass. I could see the hurt in her face.

I helped when I could. I tried to make her laugh on the bad days. I made sure she got home safe.

I was sad when she left.

She was smart. She was passionate. She was good at her job.

Should I have done more?

I know now that I could have helped. I didn’t know what to do.

We, your allies, wear this badge to say that we saw it happen. That we want you here. Every woman in tech should know that she’s not alone.

I am Spartaca.

Wear the Badge

Enough. It’s time for this cycle of insult, indignity and injustice to end.

There is no need to suffer in silence. You do not bear these burdens alone. Together, we stand up and speak out about all these ways in which our friends, colleagues and sisters have been mistreated. An affront to one is an affront to us all.

We wear this badge because we stand together for a better, safer, and more equal community, and to say to every woman in tech: You are not alone. I am with you.

I am Spartaca.

 

Originally posted on Medium


What's the Goal of Civic Engagement Work?

Is the goal of our civic engagement work a higher rate of participation?  

It seems not.  The goal is to create a government that is more representative of the interests of the people.  We want to decrease the influence of power and money in politics.

Is increasing the number of people who are voting, contacting their representatives, commenting at civic meetings and participating in the decision-making process the best way to get to a more representative government?  Are there other ways, and if so, how do we analyze the effectiveness of those ways?


Control Theory and Government Inefficiency

When talking about “government inefficiency,” I am always brought back to my college control theory classes. In control theory, you learn that to make a system more stable, you must give up performance. Likewise, if you want a high-performance system, you have to narrow the conditions in which it will be stable (a race car needs a dedicated track). Government is intended to be stable, so if the control theory model fits, it is not going to be as efficient, but it is also a lot less likely to fail when disturbances vary or increase.

Reddit

h/t Jon Henke
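
To make that trade-off concrete, here is a toy sketch of my own (not from the original comment): a proportional controller pushing a simple integrator toward a setpoint through a measurement delay.  The plant, gains, and delays are invented purely for illustration.  The aggressive, high-gain loop reaches the target in a few steps under nominal conditions, while the conservative loop takes much longer; but add one extra step of delay and the aggressive loop oscillates with growing amplitude while the conservative one still settles.

    # Toy illustration of the stability vs. performance trade-off described above.
    # The plant, gains, and delays are invented for demonstration only.

    def simulate(gain, delay, steps=60, setpoint=1.0):
        """Integrator plant x[t+1] = x[t] + u[t], driven by a proportional
        controller that only sees the error from `delay` steps ago."""
        x = [0.0]
        errors = [0.0] * delay  # no measurements before the loop starts
        for _ in range(steps):
            errors.append(setpoint - x[-1])
            u = gain * errors[-1 - delay]  # act on stale information
            x.append(x[-1] + u)
        return x

    for gain in (0.2, 0.8):
        nominal = simulate(gain, delay=1)
        stressed = simulate(gain, delay=2)  # conditions shift: a bit more lag
        # steps needed to first reach 90% of the setpoint under nominal conditions
        rise = next(t for t, v in enumerate(nominal) if v >= 0.9)
        print(f"gain={gain}: reaches 90% of target in {rise} steps; "
              f"final value with extra delay = {stressed[-1]:+.1f}")

The low gain is sluggish but keeps working when the environment changes; the high gain is fast only as long as conditions stay inside its narrow envelope, which is exactly the race-car-on-a-dedicated-track point.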

Comment

Comment

Civic Innovation and Behavioral Science

I’m obsessed with the design principles Deena Rosen wrote in her capacity as UX director for Opower.  The principles describe how Deena uses design and behavioral science to build tools that encourage people to make choices that reduce their impact on the environment.  

Civic Innovation needs to learn from these kinds of efforts and adopt practices grounded in behavioral science.  The internet and most of the consumer-facing technology industry are optimized for entertainment and commerce.  People seek out and want to engage with products that meet a need or entertain them in some fashion.  By contrast, engaging in civic activity isn’t actually rational behavior.  The likelihood that one person’s actions will change the outcome of a governmental decision is so small that it might as well be zero.  

Our users have busy lives.  The kids need to be picked up from day care.  Someone needs to get the car’s oil changed.  Great Aunt Gertrude is back in the hospital, and it would be nice to go visit her.  If a user’s question to her representative or comment on a budgeting plan is very unlikely to impact the outcome, why would she take the time to ask the question or submit the comment?

Be honest: if civic innovation weren’t your profession, how engaged would you be in the day-to-day activities of your local government?

Collectively, though, we are all better off if we participate in our communities and provide oversight and feedback to those who represent us.  This challenge of relevance is one Deena faces at Opower as well.  Individually, turning down my thermostat won’t reduce the damage caused by global climate change, but collectively a reduction in the amount of energy we use to heat our houses can make a big difference.  

At Opower, Deena’s team uses normative comparison, social proof, loss language, defaults, and user commitment to encourage user behavior that’s good for us as a community. The Civic Engagement community could do this as well!  There are well-known tactics we could adopt to encourage users to do the right thing.
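
For instance, here is a deliberately simplified sketch of what one of those tactics, normative comparison paired with loss framing, might look like when applied to civic participation instead of energy use.  The message format and numbers are hypothetical; Opower’s actual reports are far more carefully designed and tested.

    # Hypothetical sketch: framing a resident's civic activity against their
    # neighbors', the way home energy reports frame household energy use.
    # Thresholds and wording are invented for illustration.

    def civic_comparison(your_actions: int, block_median: int) -> str:
        if your_actions >= block_median:
            return (f"Nice work: you took {your_actions} civic actions this year, "
                    f"more than the typical household on your block ({block_median}).")
        gap = block_median - your_actions
        # Loss-style framing: emphasize what the resident is missing out on.
        return (f"Households on your block took part in {block_median} civic actions "
                f"this year; you took part in {your_actions}. "
                f"You're {gap} behind your neighbors.")

    print(civic_comparison(your_actions=2, block_median=5))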

Except, there’s a big problem.  The Civic Innovation ecosystem doesn’t actually agree on the desired outcomes.  We don’t agree on the metrics for success.  Are we trying to increase the amount of interaction between residents and their elected representatives?  Are we trying to increase the flexibility of opinions tolerated by the electorate?  Are we trying to maximize the amount of information users consume before they form an opinion and take an action?

Without agreement on these metrics, we’ll be blocked from using the kinds of techniques that will help us overcome the problems associated with getting users to care about civic issues in the collective.  It’s time to get real about forming a theory of change.

Comment

Comment

Measuring the Heartbeat of Civic Health

Last night, I got a chance to hang out with Christina Hollenback, an amazingly talented community organizer in New York.  A year ago she left her job in DC as the director of the Generational Alliance and moved to Harlem, where she’s focused on organizing at the hyper-local, block level.  For example, she’s currently helping musicians organize CSA-style projects that allow the neighborhood to support the artists in their area by paying in advance for musical performances and recordings.

My gut tells me that the work she’s doing in Harlem is fantastically important and more impactful than all of her electoral and issue organizing, the many meetings she attended in DC, and the events she planned for Congressmen and Senators.  By creating ways for community members to come together and support each other, she’s building the social fabric upon which people fall back when hard times hit.  She’s also building the kinds of relationships between people who have different socio-economic and political backgrounds that allow us to better understand each other.  I would hypothesize that her work is what we need if we want to see a depolarized government.  But I have no way to prove that.  

What community metrics are correlated with a healthy government?  Does the strength of a community’s social fabric, its interconnectedness, impact the effectiveness of the decision making bodies that set policy for that community?

The crisis response ecosystem looks to resilience as a measurement of whether a community can bounce back from a disaster.  Practitioners and researchers in this field recognize that disaster-resilient communities use personal and community strengths to recover from calamity and have strong social networks.  They spend a lot of time looking at the connection between the strength of the social fabric and better outcomes for individuals post-disaster.  One of the core elements of social capital, trust, is seen as vital to preparedness efforts.

I think in terms of data, and the different kinds and qualities of connections between residents are the base layer of all civic innovation.  These shifting currents of trust influence and underpin all the public and charitable work in a place, flowing from the doorstep all the way up to the Federal government.  This dataset is also particularly hard to collect and validate.  After spending time with Christina, I’m even more sure that the tools we build to make collaborative decisions or change the way we govern ourselves need to be designed with a firm understanding of, and a data-driven approach to, the kind of offline community strength Christina builds.  We need to pay attention to civic resilience and start to effectively measure what it takes to increase it.
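
To ground that, here is a toy sketch of how one facet of interconnectedness might be quantified if we had data on which residents know or help each other.  The ties below are invented; real social-fabric data would have to come from surveys, membership rolls, event attendance, and similar sources.

    # Toy sketch: two simple network measures of a neighborhood's social fabric.
    # "density" = share of possible ties that actually exist;
    # "closure" = share of open triangles that are closed (do my friends know each other?).
    from itertools import combinations

    ties = {("ana", "bo"), ("bo", "cam"), ("ana", "cam"), ("cam", "dee")}  # invented data
    residents = sorted({person for tie in ties for person in tie})

    def connected(a, b):
        return (a, b) in ties or (b, a) in ties

    possible_pairs = len(list(combinations(residents, 2)))
    density = len(ties) / possible_pairs

    closed = open_triads = 0
    for a, b, c in combinations(residents, 3):
        present = sum(connected(*pair) for pair in ((a, b), (b, c), (a, c)))
        if present >= 2:               # at least an open triangle
            open_triads += 1
            closed += present == 3     # the triangle is fully closed

    print(f"tie density: {density:.2f}, triangle closure: {closed / open_triads:.2f}")

Whether measures like these actually correlate with healthier decision-making bodies is exactly the open question; the point is only that the question is measurable.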

Comment

Comment

Continued: Following the Conversation: Can Silicon Valley Change the World?

Over the weekend, Steve Johnson wrote an article that continued the discussion of the political belief systems dominating Silicon Valley and what that does and doesn’t mean for the rest of the world.  

"Learning from Los Gatos."  

"Yes, people who work in the tech sector today (particularly around the web and social media) believe in the power of decentralized systems and less hierarchical forms of organization. But that does not mean they are greed-is-good market fundamentalists."

Comment

Comment

Following the Conversation: Can Silicon Valley Change the World?

Recently, the chattering class has turned its attention to Silicon Valley’s efforts to impact the political and civic worlds in which it exists.  As someone who works at a company founded in Silicon Valley, on a team that is working to improve civic discourse, I find the conversation absolutely fascinating.  

I’ve compiled a list of the pieces I find most interesting.  

George Packer has a long story (pay-walled) out this week in the New Yorker about Silicon Valley and the impact it is (and isn’t) having on the American civic and political discourse.

"The industry’s splendid isolation inspires cognitive dissonance, for it’s an article of faith in Silicon Valley that the technology industry represents something more utopian, and democratic, than mere special-interest groups."

Catherine Bracy wrote about the issue months ago, and I think she has a better handle on the issues involved.  She explores “the tension between Silicon Valley’s impact on democracy and its utter lack of interest in or understanding of the institutions and systems of government its companies do business in.”

Hamish McKenzie responded to George Packer in PandoDaily, calling his piece “kind of unfair.”

Today, the New York Times printed a piece, Lessons for Silicon Valley from Capitol Hill. 

Comment

Comment

Operational excellence is not a sustainable competitive advantage.
In a business context this means it is foolhardy to believe that your company will maintain a superior position vis-à-vis your competitors because you are “excellent” (e.g. competent, efficient, smart) at doing what you do. There are a lot of smart people in the world. If you make enough money off being excellent, other smart, competent people will come along and copy what you are doing, and whoops, there goes your advantage.
Hallie Montoya Tansey: What I learned in my MBA Strategy class, as applied to campaigns. 

Comment

Comment

Everyblock

The impact of Everyblock goes far beyond the traffic to the site itself. Everyblock is one of those ideas that bent the world in a new way when it came around. One of those ideas that felt both so obvious and so ingenious simultaneously, that it looked *easy* when it was anything but. Back when it launched in 2008, the idea of arcane civic data being of use to regular citizens didn’t really exist. The idea of geolocation-based information gathering didn’t really exist. The idea of (shudder) “hyperlocal” information at the street-level didn’t really exist. And yet today, five years later, these ideas are commonplace thanks in large part to Everyblock proving that they were possible and vital.
daniel sinker: We’re all living in an Everyblock world 

Comment

Comment

Some Thoughts on Investing in Data Infrastructure for Civic Technology

Any civic technology project is made up of two parts. 

  1. The application that surfaces data and contextual information to the user and perhaps allows the user to interact with that data in different ways.
  2. The dataset, often published by the government or sourced through scraping, hand collection and crowdsourcing, which the application queries.

The application piece is important.  It’s the public face of civic technology and already receives quite a bit of attention.  The work of developing healthy data infrastructure, however, can be overlooked, and it requires just as much thought and attention.  As a community, we should concentrate on new ways to develop access to the data sets users need, and acknowledge that the costs to government of publishing data may require trade-offs.
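
As a purely illustrative sketch of that two-part split, consider something like the following.  The records, field names, and query are invented; in practice the data layer would be a published government dataset, or one assembled through scraping and crowdsourcing, with a documented and stable schema.

    # Data layer: the dataset the application queries (invented example records).
    POLLING_PLACES = [
        {"zip": "20002", "name": "Eastern High School", "hours": "7am-8pm"},
        {"zip": "20002", "name": "Miner Elementary", "hours": "7am-8pm"},
        {"zip": "20009", "name": "Marie Reed Rec Center", "hours": "7am-8pm"},
    ]

    # Application layer: surfaces the data and adds context for the user.
    def polling_places_for(zip_code):
        matches = [p for p in POLLING_PLACES if p["zip"] == zip_code]
        return [f"{p['name']} (open {p['hours']})" for p in matches]

    for line in polling_places_for("20002"):
        print(line)

The application is only as good as the schema, accuracy and freshness of the underlying dataset, which is the infrastructure work this post argues deserves more attention.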

In order to publish high-quality data, government officials must change the way data is stored, collected and audited, often a costly endeavor.  Public servants have a duty to spend resources only on projects that advance the public interest.  While there is real value in making almost any data set publicly available, and the presumption should always be to publish public data sets, we often depend on the argument “transparency is better” without spending the time and energy to flesh out the case that a particular dataset is worth the cost to the public of publishing it.  

As the cost of publishing data trends towards zero, it will become harder and harder to plausibly argue that the associated costs are higher than the value to the public of an accessible dataset, but for now, we must recognize the trade-offs involved.  Developing sharper arguments will require us to better recognize what data sets will be valuable to users, and build stronger relationships with civil servants, helping them to reduce the costs associated with publishing data.

There’s a lot of excitement in the civic technology community.  2013 promises to bring a flood of investment and attention to this space.  As we work towards building tools that help citizens access government information and the pathways for influencing their communities, let’s keep in mind the importance of the data infrastructure, and ensure we’re being as thoughtful about building interoperable, user-centric datasets as we are about the applications resting on top.

Comment