We are pleased to announce that PlaceMatters is partnering with WalkDenver to develop an open-source, online data collection tool that will create an inventory of pedestrian facilities and conditions in Denver neighborhoods, as well as collect pedestrian count data. The tool will allow community members to record neighborhood conditions via smartphones or tablets, upload and store this information in a shared online database, and create compelling maps and other visualizations that illustrate the need for improvements. WalkDenver will use the tool to engage residents in assessing the walkability of their own neighborhoods and understanding how walkability relates to quality of life and health. The data collected will help identify and prioritize strategies for improving walkability and track changes over time. The project is funded by a grant from Mile High Connects.
OpenPlans describes their newest tool for using Google Street View for planning, Project Fitzgerald. Project Fitzgerald, a follow-up to Beautiful Streets (another very cool project), is designed to gather public input on a block-by-block basis.
Engaging Cities has a pile of great stories: web-based games promoting civic literacy created by iCivics, some unusual participatory city planning activities, a research paper on the role of digital media in deliberative decision-making, and the use of augmented reality in neighborhood engagement on development projects.
The always-insightful Ethan Zuckerman explores some of the complicated equity implications of crowdfunding public infrastructure.
Gov 2.0 Watch cites the American Association of State Highway and Transportation Officials on the use of crowdsourcing to improve transportation planning. I’m not entirely persuaded by the comparison of transportation public engagement to product development, but integrating crowdsourcing (which doesn’t have much to do with product development, per se) can actually be pretty useful.
Nina Simon published a fascinating piece, drawing on a new paper by Colby College professor Lynne Conner, exploring the idea that the experience of art in Western culture was historically deeply participatory. The understanding of the audience as passive and non-participatory, her argument goes, is a relatively recent development. Becoming more participatory, for art and cultural organizations, might actually be a return to their roots rather than the creation of a new paradigm.
What did we miss?
PlaceMatters is on a panel submitted by OpenPlans’ Aaron Ogle and Rob Goodspeed of MIT to talk about data and cities at SXSW Eco. We would love to go, and we hope you want us to go, too. We need your support and votes to get us there. The panel is called “Measure it, improve it: data for better cities.” Here’s a description:
Lord Kelvin famously said, “If you can not measure it, you can not improve it.” He was talking about physics, but the same applies to cities. Recent years have seen a proliferation of available data about cities – from real-time transit locations to trees, impervious surface to bikeshare locations. And where data doesn’t yet exist, crowdsourcing and mobile phone sensors provide new opportunities for data collection. This panel will discuss different examples of how data is being collected, analyzed, and visualized for planning and designing more sustainable cities. You’ll hear from a software developer, a planner, and a researcher, all working on data collection and analysis tools to create better places to live and work.
Voting closes June 29th. Get your votes and comments in soon and we hope to see you there!
At PlaceMatters, we’ve been looking for ways to test new platforms for civic engagement that use all the benefits of online technology to explore physical places and what we love about them (read some more about this concept as it relates to Planning 3.0). At APA 2012, I was on a panel on data literacy with Frank Hebbert of OpenPlans when he showed off a project called Beautiful Streets. I was instantly enamored with the simplicity and beauty of being able to do quick pairwise comparisons using Google Street View. We saw an opportunity to take an experiment done in Philadelphia and apply it in Denver ahead of our summer hackathon, creating a test case for simple engagement methods while generating a large amount of data.
In partnership with OpenPlans, we are proud to announce Denver’s Beautiful Streets. Over the next couple of months we will be asking the city to answer a basic question: which street is more beautiful? We hope to generate a large database of crowdsourced preferences for streets throughout the city; the choices have been randomly generated across Denver. This dataset will then be available for coders and designers at our summer hackathon to visualize and interpret alongside other available datasets in the region. We are very excited about this because it will help us test an interface that could be used in the future on specific planning and civic engagement processes here in the region and across the country. All of the data will be open, and the source code is available as an open-source project on GitHub; if you want to get an idea of some next steps, check out the public issue tracker.
Please join us in this experiment by participating and getting the word out to your friends and colleagues. Share your experience on your own blogs, Facebook, Twitter, and Google Plus. We know there are and will be flaws, but with your help we can kick the tires and squash some bugs to make this an even more useful platform for civic engagement. Let us know what you think in the comments below and on Twitter with the hashtag #beautifulst. Also, read more about the original genesis of this project as a Valentine’s Day gift to the city of Philadelphia. Looking forward to your participation and feedback!
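Under the hood, a pairwise-comparison experiment like this reduces to a simple ranking problem: each vote names a winner and a loser, and streets can be ordered by how often they win when shown. As a minimal sketch (the street IDs and vote format here are invented for illustration, not the actual Beautiful Streets schema), win rates can be computed like so:

```python
from collections import defaultdict

def rank_streets(votes):
    """Rank streets by win rate from pairwise 'which is more beautiful?' votes.

    votes: iterable of (winner_id, loser_id) pairs.
    Returns a list of (street_id, win_rate) tuples, best-first.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return sorted(
        ((s, wins[s] / appearances[s]) for s in appearances),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical votes: elm_st beats oak_st and main_st; main_st beats oak_st.
votes = [("elm_st", "oak_st"), ("elm_st", "main_st"), ("main_st", "oak_st")]
print(rank_streets(votes))  # elm_st first with a 1.0 win rate
```

A raw win rate is the crudest possible model (it ignores the strength of the opponents a street was paired against); with enough votes, a Bradley-Terry or Elo-style model would give a fairer ordering, but the idea is the same.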
This post, by guest blogger Jeff Warren, is the third in a month-long series on the impressive diversity of participatory decision-making tools that communities can use for land use plans, transportation plans, sustainability plans, or any other type of community plan. Our guest bloggers are covering the gamut, from low-tech to high-tech, web-based to tactile, art-based to those based on scenario planning tools, and more. We welcome your feedback and would love to hear about the participatory design strategies that you’ve found to be the most useful. This blog post was originally published on the Public Laboratory blog.
Public Laboratory is made up of a diverse group of contributors, some working from their homes or garages, some from their workplaces or even university labs. What brings us together is the idea that open-source, collaborative development can result in inexpensive and accessible environmental sensing.
But to many, the way our community operates can be disorienting. We’ve approached these unique challenges in several ways.
Most people are familiar with collaborative development of textual works, such as co-authorship, or even mass co-authorship in projects such as Wikipedia. Software development is textual as well, and such communities are made possible by carefully tailored open-source licenses, which effectively stop any individual or organization from controlling the whole project.
By contributing to these works — say, an open-source web browser or an article on gumdrops — authors are assured attribution but cannot stop others from building upon their work, improving or adapting it for new uses. This works in part because each time programmers or Wikipedians contribute, their name is explicitly entered in a registry of sorts. By publishing their contributions, they give up a certain amount of control — of course, they’d almost certainly built upon the prior contributions of others who made the same choice.
Now imagine applying that system to non-textual works, such as a new kind of camera or a tool for detecting air pollution. The way Public Laboratory works, these designs are developed, tested and improved slowly through dozens of meet-ups, workshops, field events, and brainstorming sessions. At each meeting, participants agree to share their contributions in an open-source manner — but there is typically no explicit record of every contribution.
To compound this, journalists (not to mention partners and even funders) prefer hierarchical organizations so they can say things like “developed at MIT,” and they really love citing individuals, not nebulous groups of “contributors.” We’ve often had to insist on group attribution in the media, and developing a so-called “attribution infrastructure” is a major focus of our website.
Design for attribution
We recently launched a small set of new features on our website, PublicLaboratory.org, to address these challenges. While many people make use of our tools, as a community we’d like to highlight those who contribute improvements and share their knowledge with others. With that in mind, we’ve come up with some ways to track when Public Laboratory contributors actually post about their work on the PLOTS website.
Taking a cue from the socially oriented open-source website Github.com, we’ve posted small graphs of the amount of activity on a given project over the past year. A quick look at these graphs shows how much activity a project has seen in recent weeks, and gives visitors a sense of how dynamic the research community around it is.
Above that graph, we’ve listed contributors and the number of posts they’ve made (posts are tagged with the tool, e.g. “thermal-photography”). The intent here is not to make things competitive (though that wouldn’t necessarily be a bad thing) but to give people a sense of satisfaction that they’ve been part of a communal effort, and to give outsiders a glimpse of the number of people who have made the project happen.
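Both features boil down to simple tallies over tagged posts. As a rough sketch of the idea (the author names, tool tags, and record layout below are illustrative, not the real PLOTS data model), the contributor list and the data behind an activity graph could be derived like this:

```python
from collections import Counter
from datetime import date

# Hypothetical post records: (author, tool_tag, post_date).
posts = [
    ("jeff", "thermal-photography", date(2012, 5, 1)),
    ("liz", "thermal-photography", date(2012, 5, 3)),
    ("jeff", "balloon-mapping", date(2012, 5, 8)),
    ("jeff", "thermal-photography", date(2012, 5, 15)),
]

def contributors_for(tool, posts):
    """Posts per contributor for one tool tag (the list shown above the graph)."""
    return Counter(author for author, tag, _ in posts if tag == tool)

def weekly_activity(tool, posts):
    """Posts per ISO week for one tool tag (the data behind the activity graph)."""
    return Counter(d.isocalendar()[1] for _, tag, d in posts if tag == tool)

print(contributors_for("thermal-photography", posts))
```

The point of surfacing these counts is social rather than analytical: the act of posting is what earns attribution.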
By placing emphasis on the posting of content, we hope to highlight attribution for those who do good documentation and share it in a public venue — though anyone is welcome to use, adapt, repurpose, and improve upon Public Laboratory projects.
In order to be an active participant in our grassroots research efforts, you’ve got to reach out to others and share your work. This may not be natural for many people; contributors from many backgrounds are often accustomed to sole authorship credit, while others wonder who will care whether they publish or not. In a collaborative effort such as ours, however, success is gauged by how many others are able to leverage your work and reproduce or improve upon a set of tools you have contributed to. In an open-source context, seeing someone else replicate or adapt your work is a gratifying affirmation that your documentation and development work have resulted in legibility and accessibility for a potential collaborator, not an instance of plagiarism or infringement.
ShareAlike and Free Hardware
“Open source” means different things to different people, and with the above challenges in mind, it’s important to make some distinctions. Strictly speaking, open source just means that you publish the source files of your work — and in the case of hardware, the associated design files. A good open-source project will provide legible documentation and support for others who wish to read and understand those files. If you’ve heard of “free software” (we’ll invoke the refrain “free as in freedom, not as in beer” here), you might be familiar with its more stringent requirement that users have the right to “run, copy, distribute, study, change and improve” the software. This is the basis of our approach to open source, public, civic science — and it underlies our community’s aversion to proprietary non-free (in both senses of the word) software such as Photoshop or Google Earth.
The noted lack of such freedoms in the area of scientific equipment and instrumentation — and the barriers that creates for a more legible and participatory approach to science — is a major motivation for our work.
Finally (for now) there is the idea of requiring anyone who takes advantage of these freedoms (by downloading, adapting, modifying and improving) to share their work in turn, under the same license. This requirement, known variously as a “sharealike” or “copyleft” clause, can be controversial, as it explicitly requires people (and companies) to become producers, and not just users, of open-source works. With some exceptions for datasets and privacy considerations, we have adopted sharealike licenses across all Public Laboratory content, and are in the process of releasing even our hardware designs under a sharealike license, the CERN Open Hardware License.
While these ideas may be unfamiliar for many, they make it possible for diverse communities such as ours to develop complex technical systems in a way which attributes and protects contributors’ work, and ensures that these shared efforts remain public, accountable, and open to newcomers. They allow anyone to use PLOTS tools and techniques without needing to seek permission, while encouraging newcomers to contribute just as they benefit. They offer a public and grassroots alternative to closed, expensive, and proprietary systems of technology production which have resulted in a science that serves powerful and wealthy corporations above local communities and the underprivileged.
Such considerations are an important part of the PLOTS approach to building participatory environmental science collaborations. Ideally, our community’s works will inspire readers or viewers to apply civic science ideas to their own lives — adapting tools to local issues — and with luck, they will become active participants in our research community by sharing their work publicly. In time, some may go on to organize local civic science groups, further the development of PLOTS’ open-source tools, innovate new technologies or approaches to environmental monitoring, and challenge and refigure the very structure of participation.
I’ve been really excited about the power of data to help us understand our cities and make better decisions. There have been some neat visualizations and infographics recently that demonstrate the power of visualizing big data. One uses data collected from Twitter and another through GPS tracks of cabs in Manhattan. Both are visually striking and both lead to deep conversations and insights.
Eric Fischer has done a number of visualizations around cities using geotagged content. In this recent project, he traces paths through cities using geotagged tweets. Pictured above is New York, but you can see many more on his Flickr feed. You can also read more about this project on the Fast Company Design blog and dig through the comments for some great discussion. While there is much to say (pro and con) about the utility of this data for making actual decisions (like where a new transit line should go), it still points toward the possibilities of sensor networks. In many ways, Twitter operates as a high-level, rudimentary, opt-in sensor network. As more people volunteer location data on social networks like Instagram, Facebook, Twitter, and Google+, cities will have a growing compendium of data that can be sliced and diced in many ways. Of course, there are equity and access issues that need to be solved before we start backing our decision making with Twitter feeds and the like, but in the interim, I can see this layer of data providing another view of our world that can deepen and enliven conversations, not to mention make for some really cool art.

Also in the world of information visualization are the taxi cab visualizations of New York. Working in the Spatial Information Design Lab at Columbia University’s Graduate School of Architecture, Planning and Preservation, Juan Francisco Saldarriaga programmed an origin-destination visualization (video below) for a randomly sampled group of NYC cabs from 2010, part of larger research by Professor David King. Eric Jaffe, on the Atlantic Cities blog, notes that “the origins and destinations have a geographical asymmetry that suggests people are only using cabs for one leg of their daily round trip.” Why is this important? To King, it means people are using cabs to supplement their journeys: one leg in the morning may be by cab, with a ride home on the subway.
This points to the role that cabs may have in a multi-modal transit system. In King’s own words:
This matters because it means that individuals’ travel journeys are multi-modal. If we want to have transit oriented cities we have to plan for high quality, door-to-door services that allow spontaneous one-way travel. Yet for all of the billions of dollars we have spent on fixed-route transit and the built environment we haven’t spent any time thinking about how taxis (and related services) can help us reach our goals.
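The asymmetry Jaffe and King describe can be read straight off an origin-destination table: if a zone has far more pickups than dropoffs (or vice versa), riders are arriving and leaving by different modes. A toy sketch of that measurement (the zone names and trip records are invented, and real analyses would control for time of day and sampling):

```python
from collections import Counter

def od_asymmetry(trips):
    """Per-zone difference between taxi pickups and dropoffs.

    trips: iterable of (origin_zone, destination_zone) pairs.
    A large imbalance in a zone suggests the cab covered only one
    leg of riders' round trips.
    """
    pickups = Counter(o for o, _ in trips)
    dropoffs = Counter(d for _, d in trips)
    zones = set(pickups) | set(dropoffs)
    return {z: pickups[z] - dropoffs[z] for z in zones}

# Hypothetical sample: two morning cab rides out of midtown, one back.
trips = [("midtown", "uws"), ("midtown", "uws"), ("uws", "midtown")]
print(od_asymmetry(trips))  # midtown nets +1 pickup, uws nets +1 dropoff
```

A perfectly symmetric city (everyone cabs both ways) would return zeros everywhere; persistent nonzero values are the signature of one-leg, multi-modal travel.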
Three months prior to this post, the same lab produced another visualization using its TAXI! analytical model. In this one, we see the interaction of cabs over a 24-hour period.
Hopefully we’ll have more for you about big data and cities and what it means for community decision making. We’ll be at APA on two panels about the topic (Community Engagement in Intelligent Cities | Smarter Cities through Data Literacy), so look out for us there. Also, tell us about other exciting uses of big data in cities in the comments below or on Twitter.
I recently returned from a gathering in Salt Lake convened by the Lincoln Institute of Land Policy and the Sonoran Institute in concert with partners including us (PlaceMatters), OpenPlans, Fregonese Associates, the University of Utah College of Architecture and Planning (our gracious host), and Decision Commons. The agenda was ambitious, but the conversations were deep and meaningful.
This convening (the Open Source Planning Tools Symposium) was two days of rolling up our sleeves and figuring out what it will take to move mature and emerging tools toward greater use and refinement to tackle the greatest challenges of our day. There were 36 people in attendance, representing non-profits, regional and local government, scenario tool developers, private firms, and universities.
Part of the agenda included working on edits and recommendations for a Policy Focus Report on this topic, which Lincoln will publish right around the national APA conference with contributions from OpenPlans, Sonoran, PlaceMatters, Decision Commons, and Fregonese, among many others helping with edits and filling in gaps. The group also discussed a range of topics to advance this effort into the next year: ways university curricula could prepare planners with scenario planning skills, data standards and interoperability among tools, sample work programs for regional support, indicators for social equity, and clearer approaches to linking planning needs to available tools.
The group was action-oriented and very excited to keep the work going before another convening sometime next year. We will continue to support that conversation using the Open Source Planning Tools Ecosystem (OSPT-Ecosystem) Google Group. If you are interested in getting involved, feel free to join the group and peruse previous notes from our calls. Materials that came out of this meeting will also be available online, and we will want to engage a broad and deep network of people as we move this effort forward.
On a personal note, I am very excited about all of this, and it has become my “extracurricular” work for now as we figure out how to build out the Decision Lab’s capacity to support open source planning tools and scenario planning practice across the country. We will be building a basic page on the PlaceMatters website as a hopeful precursor to something bigger; check back for that soon. It will be a place where you can learn about ongoing activities and events related to open source planning tools, and it will eventually include a compendium of open source tools.
If you have a perspective on how open source can improve planning tools, let us know on Twitter or below in the comments. More results and documents will follow, so check back on our blog or sign up on the Google group to stay up to date.
The Department of Housing and Urban Development (HUD) announced yesterday that they’ve awarded a two-year contract to Manhattan Strategy Group (MSG) and our friends over at the Center for Neighborhood Technology (CNT) to develop a national Housing and Transportation (H+T) Affordability Index. CNT developed an H+T Affordability Index for 337 metro regions; this contract will allow them to expand that research to cover the nation. From the site:
Americans traditionally consider housing affordable if it costs 30 percent or less of their income. The Housing + Transportation Affordability Index, in contrast, offers the true cost of housing based on its location by measuring the transportation costs associated with place.
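The arithmetic behind the index is simple but revealing. As an illustrative sketch (the dollar figures are invented, and the 45% combined benchmark is the threshold CNT's index popularized; treat both caps as parameters rather than official values), a location that looks affordable under the housing-only rule can fail once transportation is added:

```python
def affordability(income, housing, transportation,
                  housing_cap=0.30, combined_cap=0.45):
    """Compare the traditional housing-only test with the combined H+T view.

    income, housing, transportation: annual dollars.
    housing_cap: the conventional 30%-of-income housing standard.
    combined_cap: an illustrative combined H+T benchmark.
    """
    h_share = housing / income
    ht_share = (housing + transportation) / income
    return {
        "housing_share": round(h_share, 3),
        "h_plus_t_share": round(ht_share, 3),
        "affordable_housing_only": h_share <= housing_cap,
        "affordable_h_plus_t": ht_share <= combined_cap,
    }

# A cheap-housing but car-dependent location: passes the old test, fails the new one.
print(affordability(income=50_000, housing=12_000, transportation=12_000))
```

This is exactly the insight the index makes visible: a $12,000-a-year house on the exurban fringe is only "affordable" until you price in the commute.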
This is an exciting announcement for many tackling this issue on a planning level, not to mention for personal decision-making, business decision-making, and policy making. Our hope at PlaceMatters is that the data that comes out of this two-year study is made available through an API. It looks like that’s in the works for Abogo, another tool by CNT built on top of the H+T data and centered more on individual decision-making. By making this data easily consumable on the web through a service architecture, people could develop all sorts of new tools on top of it. Imagine a scenario planning tool (on the web or desktop) that lets you pull the data into your analysis of a neighborhood or region. And I’m not talking about shapefiles or zipped downloads (although those would be great too). I’m talking about a truly accessible API that allows mashups in the same way that Google’s APIs have inspired hundreds of innovations.
I’d even love to host an H+T hackathon someday where we get a bunch of programmers, developers, and UI designers in a room and dream up innovative uses for the data. What about a real estate search with H+T data embedded in the results? Or a site that invites you to track your actual transportation and housing costs against the average with “rewards” or bragging rights for beating the region? If the data is open, these are all real possibilities that could be designed not just by contractors but people with passion and interest. We’ll be tracking this and hope to report that someday this data will be accessible so the benefits can multiply and really help individuals, communities and regions understand the true costs of their decisions.
How would you use this data?
Christmas came a little late at the PlaceMatters office, but we have ourselves an Xbox Kinect sans Xbox. Why? Well, as Jacob indicated in his last roundup, this handy piece of hardware is quite a powerful device. Now that we have one here ourselves, we’re going to see what small projects we can do. We’ve already used the Wiimote hack to create low-cost pen screens; now we’re curious what depth perception could allow us to do.
Could we track objects on a map? Maybe translate physical game chips into digital form in a scenario planning tool? Capture sketch planning? Maybe create some 3D holograms? While motion tracking is interesting to us, I’m most excited about seeing what we can do around object detection, as this gives us a low-cost way to merge real participant interactions with digital capture. Tell us in the comments how you would like to see the Kinect used in a planning/mapping/civic engagement context.
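The game-chip idea is more tractable than it might sound, because a depth camera makes segmentation almost trivial: anything measurably nearer the sensor than the tabletop is an object. Here is a toy sketch of that logic on a synthetic depth frame (a real Kinect pipeline would read frames via a driver library like libfreenect and need calibration and noise filtering; the frame, distances, and threshold below are made up for illustration):

```python
def find_chips(depth, table_mm, min_height_mm=10):
    """Count distinct objects sitting on a table in one depth frame.

    depth: 2D list of distances in mm, as a depth sensor might report.
    Pixels more than min_height_mm nearer than the table are 'object';
    a flood fill then groups adjacent object pixels into blobs (chips).
    """
    rows, cols = len(depth), len(depth[0])
    mask = [[depth[r][c] < table_mm - min_height_mm for c in range(cols)]
            for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]
                while stack:  # flood-fill one connected blob
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return blobs

# Toy 4x6 frame: table surface at ~800 mm with two chips (~770 mm) resting on it.
frame = [
    [800, 800, 770, 800, 800, 800],
    [800, 800, 770, 800, 800, 800],
    [800, 800, 800, 800, 770, 800],
    [800, 800, 800, 800, 770, 770],
]
print(find_chips(frame, table_mm=800))  # → 2
```

Run per frame, the blob centroids could then be projected onto the map underneath the chips, which is the "merge real participant interactions with digital capture" step.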
Recently, Google and Verizon announced their “policy framework” for net neutrality. Google calls this a “principled compromise our companies have developed over the last year concerning the thorny issue of ‘network neutrality.’” While the compromise may be principled, maybe even well intentioned, it is a big gamble on an important social and economic resource, and it leaves wide-open loopholes for the corporate takeover of the Internet.
It allows for the creation of tiered services that would be differentiated from traditional broadband Internet. And while the arrangement would be overseen by the FCC, the FCC’s role beyond watchdog and complaint center is not well defined. This provision could easily let Verizon and other ISPs build a completely separate parallel network, undermining the traditional broadband Internet where the likes of Google, Flickr, and Facebook were born in garages, apartments, and universities.
Also, none of the neutrality rules apply to wireless networks. The argument is that wireless is a nascent industry and providers need to be able to throttle services as they grow their capacity. Again, this seems like it could be a reasonable argument, but my free-market gut tells me that protecting network neutrality will spur innovation faster as providers compete to capture fickle and impatient consumers. There are also other ways to protect bandwidth, including charging for different levels of service or per gigabyte. The difference between this and tiered services is that service-level plans apply equally across all content and applications, and the end consumer pays the bill. A tiered-service model would allow ISPs to charge content providers different rates to send content out onto the Internet, effectively creating an Internet toll road. So even though I’m still paying my $56 a month to Comcast, I may not be able to access the latest independent innovation at the same speed as its corporate-backed brethren.
So why does this matter for planning? Because planning has benefited from having the infrastructure to share public information (including transit efforts originally spearheaded by Google). Open-source mapping and other tools are bringing down the cost of important information, which will increasingly benefit public planning processes as well as agencies. And there are yet-to-be-developed tools that could continue to transform public access to government in general.
While the Internet runs on the backbones built by corporations, we need policy that looks at the Internet as important infrastructure that can benefit not only planning, but education, science, access to jobs, and so on. Government should put the right resources behind something so important to remaining competitive as a country. Instead of tightening access, we should be opening up this vital innovation and information infrastructure to as many people as possible.