How can governments assess compliance with an open data policy? #14

Open
waldoj opened this issue Mar 8, 2016 · 9 comments

waldoj commented Mar 8, 2016

More and more governments are requiring their agencies to publish open data, often with policies that follow established best practices. With the passage of time, it's become clear that victory is measured not in the creation of such policies, but in their outcomes. That will raise this question in an increasing number of places: how can governments assess compliance with an open data policy?

Having a plan to publish data? Executing that plan? Number of datasets? Size of datasets? Quality of datasets? ROI? Number of views? Number of API calls? Number of public interactions?

The real challenge here is creating metrics that are a) enforceable, b) meaningful, and c) hard to game.

Are there existing efforts to determine this? The Sunlight Foundation and Center for Government Excellence both seem like organizations likely to have put some thought into this.
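A minimal sketch, in Python, of a per-agency scorecard combining a few of the candidate metrics above; every field name here is hypothetical, and the point is only that each metric would need a concrete, auditable definition before compliance could be assessed:

```python
from dataclasses import dataclass

@dataclass
class AgencyScorecard:
    """Hypothetical per-agency record of the candidate metrics above."""
    has_publication_plan: bool     # a plan to publish data exists
    milestones_met: int            # plan milestones executed so far
    milestones_total: int          # plan milestones promised
    datasets_published: int        # raw count of published datasets
    datasets_passing_quality: int  # datasets passing a basic quality check
    api_calls_last_quarter: int    # usage signal (easy to game on its own)

    def plan_execution_rate(self) -> float:
        """Share of promised milestones actually delivered."""
        if self.milestones_total == 0:
            return 0.0
        return self.milestones_met / self.milestones_total

    def quality_rate(self) -> float:
        """Share of published datasets that pass the quality check."""
        if self.datasets_published == 0:
            return 0.0
        return self.datasets_passing_quality / self.datasets_published

# Example: 40 datasets published, 32 passing the quality check,
# 3 of 5 publication-plan milestones delivered.
agency = AgencyScorecard(True, 3, 5, 40, 32, 125_000)
print(f"plan execution: {agency.plan_execution_rate():.0%}")  # 60%
print(f"dataset quality: {agency.quality_rate():.0%}")        # 80%
```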

waldoj commented Mar 8, 2016

@technickle, @SLarrick, @rebeccawilliams, could I impose on y'all to mention if you know of or are working on any existing best practices in this space?

JoshData commented Mar 8, 2016

+1 to thinking about ROI, especially framed within the existing programmatic goals of the agencies that are opening the data (e.g. Vision Zero) and the metrics that are applied to those goals (e.g. reduced deaths).

waldoj commented Mar 8, 2016

I'd like to see agencies demonstrate that other agencies are using the datasets that they publish (either at the same level or at other levels of government). Turning that on its head, I'd like to see agencies demonstrate that they are using datasets published by other agencies.

technickle commented

We have a resource that starts to get at this, but I think the movement has been concerned with ROI for years now and has made very little progress towards reproducible, scalable measurements of success. The way SF identifies goals and tracks progress towards them is one of the best approaches we've seen.

At least some part of the measurement problem is due to the tech stack (CSVs/JSON/etc.). The movement also prefers unregistered, untrackable access as a matter of ideology, which hampers measurement.

At an OKFN workshop in Mexico City, I proposed a mechanism of measuring any issue and then trying to identify how much change in that measurement might be attributed to open data. But even that, I think, is difficult to do with reliable precision.

Internal use of open data would be fairly easy to measure if governments had access to a portal's raw analytics, so that they could see requester IP addresses. Portals sometimes use Google Analytics, which can probably surface that.
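A minimal sketch of that log-based approach, assuming access to a standard web-server access log and a known set of government network ranges (the log path, log format, and IP ranges below are illustrative assumptions):

```python
# Count portal requests coming from known government network blocks as a rough
# proxy for internal / inter-agency use of open data.
import ipaddress
import re

AGENCY_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),    # assumption: internal agency range
    ipaddress.ip_network("192.0.2.0/24"),  # assumption: another agency's range
]

# Common-log-format lines start with the client IP address.
LOG_LINE = re.compile(r"^(\S+) ")

def count_internal_requests(log_path: str) -> tuple[int, int]:
    """Return (internal_requests, total_requests) for one access log."""
    internal = total = 0
    with open(log_path) as fh:
        for line in fh:
            match = LOG_LINE.match(line)
            if not match:
                continue
            total += 1
            try:
                addr = ipaddress.ip_address(match.group(1))
            except ValueError:
                continue
            if any(addr in net for net in AGENCY_NETWORKS):
                internal += 1
    return internal, total

# internal, total = count_internal_requests("portal_access.log")
# print(f"{internal}/{total} requests came from known government networks")
```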

SF also employs a continuous improvement philosophy, which is, in my opinion, way better than a continuous growth approach.

waldoj commented Mar 9, 2016

That's really helpful—thank you, @technickle.

CEhmann commented Mar 9, 2016

It's a good question; it's easier to show views or downloads than actual use and impact. We are trying to get more adoption of DAP so that we can get better Google Analytics data across the government: https://analytics.usa.gov. You can check out our rubric for measuring implementation of the U.S. Open Data Policy here: http://labs.data.gov/dashboard/docs/rubric; also check out Project Open Data, our repository for tools, here: https://project-open-data.cio.gov/
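For illustration, a rough sketch of the kind of automated check such a rubric implies: fetch an agency's data.json catalog and report how many datasets carry a core set of metadata fields. The field list below is a simplified subset of the Project Open Data metadata schema, and the catalog URL is a placeholder:

```python
import json
from urllib.request import urlopen

# Simplified subset of required fields; the full schema requires more.
REQUIRED_FIELDS = ["title", "description", "keyword", "modified",
                   "publisher", "identifier", "accessLevel"]

def check_catalog(url: str) -> None:
    """Report how many datasets in a data.json catalog have all required fields."""
    with urlopen(url) as resp:
        catalog = json.load(resp)
    datasets = catalog.get("dataset", [])
    complete = sum(
        1 for d in datasets
        if all(field in d and d[field] for field in REQUIRED_FIELDS)
    )
    print(f"{complete}/{len(datasets)} datasets have all required fields")

# check_catalog("https://example.gov/data.json")  # placeholder agency catalog
```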

waldoj commented Mar 9, 2016

Thank you, @CEhmann!

scuerda commented Mar 10, 2016

In the context of open data ordinances/policies, it seems like dataset request/challenge mechanisms can provide hooks for evaluating efficacy. The relationship between requests, subsequent challenges, and resolutions could be one indicator of how successful a policy is at delivering data that is meaningful and useful. Tying in a measure of the staff time required to address requests and challenges might provide some insight into the internal change being brought about by such policies. Of course, the request cycle is likely to look very different at different stages of policy implementation, but I suspect there is a lot of collective knowledge about how that curve should look over time.
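A minimal sketch of those request/challenge metrics, assuming a hypothetical CSV export from a portal's dataset-request feature with `requested`, `resolved` (blank if still open), and `staff_hours` columns; all column names and the file path are illustrative:

```python
import csv
import statistics
from datetime import date

def request_metrics(path: str) -> None:
    """Compute resolution rate, time-to-resolution, and staff effort from a request log."""
    days_to_resolve, staff_hours, open_requests = [], [], 0
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["resolved"]:
                opened = date.fromisoformat(row["requested"])
                closed = date.fromisoformat(row["resolved"])
                days_to_resolve.append((closed - opened).days)
                staff_hours.append(float(row["staff_hours"]))
            else:
                open_requests += 1
    resolved = len(days_to_resolve)
    total = resolved + open_requests
    print(f"resolution rate: {resolved}/{total}")
    if days_to_resolve:
        print(f"median days to resolution: {statistics.median(days_to_resolve)}")
        print(f"median staff hours per resolved request: {statistics.median(staff_hours)}")

# request_metrics("dataset_requests.csv")  # hypothetical portal export
```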

The challenge here is that introducing this review function adds a lot of overhead to a process that is often already hard to justify from a budget perspective. If the request/review function were the mechanism by which a number of additional processes are initiated (inventories, publishing plans/workflows, metadata), you would then have an implementation process that can be justified in terms of citizen engagement. Alas, this approach still leaves us in the cave, so to speak, but I do think it provides insight into how an open data policy is working within the context into which it has been introduced.

Along these lines, if portals are the primary point of contact for citizen engagement (debatable), it would behoove us to invest in / request / require / build enhanced support for the dataset request function so that we can move towards some standard processes for capturing activity and reporting out.

waldoj commented Mar 11, 2016

Thanks for weighing in, @scuerda!
