
Participation by Design: Providing Context for Data

This post, by guest blogger Daniel Saniski, is the eighteenth in a slightly-more-than-a-month-long series on the impressive diversity of participatory decision-making tools that communities can use for land use plans, transportation plans, sustainability plans, or any other type of community plan. Our guest bloggers are covering the gamut, from low-tech to high-tech, web-based to tactile, art-based to those based on scenario planning tools, and more. Daniel’s post explores the challenges and importance of unpacking complex quantitative data using unemployment statistics as an illustration. We welcome your feedback and would love to hear about the participatory design strategies that you’ve found to be the most useful.

“Unemployment is down!”
“Imports are up!”
“The price of coffee skyrocketed last month!”

News headlines scream data points at us each day, assuming we understand their meaning, source, and context. Although we see the same greatest hits of data each month (unemployment rate, inflation, job openings, GDP, imports/exports, etc.), many people do not realize that much of this data is available not just at the federal and state level, but for their own town. The sheer quantity is sure to induce information overload, and it takes great care to find exactly the right points. Local data, and comparison data from other cities, states, or a federal average, can and should be used in community decision-making, but it is a challenge to wrangle data without misleading people. Graphs carry enormous rhetorical power and should be kept close to the question at hand. Given the terabytes of possible data series we could explore, today we will look at some ways to frame and contextualize one metric: unemployment.

The Bureau of Labor Statistics tracks data on employment, prices, and consumer spending habits, and it maintains well over 10,000 data series, ranging from standard headline numbers to narrow measures like intercity bus and train fares. Most of its major data sources contain federal, state, regional, and city/metro sub-series, which provide endless curation opportunities. Using the unemployment data alone, we can frame a number of discussions.

Consider the dramatic contrast between the unemployment rates of California and North Dakota: North Dakota’s stands at 3.1% while California clocks in at 10.9%. Why? Theories range from North Dakota’s use of a state bank to its extensive oil reserves. To keep correlation separate from causation, the answer matters less than the framing of the visual question. Such a great disparity prompts a lot of compelling civic questions and can easily start a discussion, although this is still a narrow context. To better inform the unemployment discussion, we need more numbers, some of which are equally dramatic.

The newspaper-headline unemployment rate and the “real” one are often far apart. The headline rate counts people actively participating in the labor market and looking for work; it misses people who have dropped out of the formal economy or who work less than they’d like. The broadest measure, the U-6 or “Total unemployed, plus all marginally attached workers, plus total employed part time for economic reasons,” captures everyone on the employment fringes and sits above 14%, compared to the headline measure at 8.2%. Now we have some sense of the measurement disparities, but these numbers still do not tell the whole story.
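As a rough sketch of why the two measures diverge, here is the arithmetic behind U-3 and U-6, using illustrative figures (not official BLS numbers, and a simplified version of the official definitions):

```python
def u3_rate(unemployed, labor_force):
    """Headline (U-3) rate: the unemployed as a share of the labor force."""
    return 100 * unemployed / labor_force

def u6_rate(unemployed, marginally_attached, part_time_econ, labor_force):
    """Broadest (U-6) rate: adds marginally attached workers and involuntary
    part-timers; the marginally attached also join the denominator."""
    expanded_lf = labor_force + marginally_attached
    return 100 * (unemployed + marginally_attached + part_time_econ) / expanded_lf

# Illustrative figures (thousands), chosen to echo the rates in the text:
labor_force = 154_000
unemployed = 12_700
marginally_attached = 2_600
part_time_econ = 8_100

print(round(u3_rate(unemployed, labor_force), 1))  # 8.2
print(round(u6_rate(unemployed, marginally_attached,
                    part_time_econ, labor_force), 1))  # 14.9
```

The same headcount of jobless people produces an 8.2% or a 14.9% rate depending on who gets counted, which is exactly the disparity in the headlines.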

One must also look at the Labor Force Participation Rate and the Civilian Employment-Population Ratio. These two figures tell you the rate at which people are participating in the economy. If the unemployment rate drops and these measures drop too, then one of three things probably contributed: a whole lot of people retired, went to prison, or dropped out of the labor force. Dropping out means their unemployment insurance ran out and they are no longer part of the 8.2%; if they are still looking for jobs, they remain part of the 14%. When trying to make sense of unemployment’s ups and downs and how they might affect your town, keep these measures in mind to make better decisions.
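Both participation measures are simple ratios against the civilian noninstitutional population. A quick sketch, again with illustrative rather than official figures:

```python
def participation_rate(labor_force, civilian_population):
    """Share of the civilian noninstitutional population in the labor force
    (working or actively looking for work)."""
    return 100 * labor_force / civilian_population

def employment_population_ratio(employed, civilian_population):
    """Share of the civilian noninstitutional population actually employed."""
    return 100 * employed / civilian_population

# Illustrative figures (thousands):
civilian_population = 242_000
labor_force = 154_000
employed = 141_300

print(round(participation_rate(labor_force, civilian_population), 1))        # 63.6
print(round(employment_population_ratio(employed, civilian_population), 1))  # 58.4
```

If the unemployment rate falls while both of these ratios also fall, people are leaving the labor force rather than finding jobs.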

Bringing this closer to PlaceMatters, here is some data about Denver, which we will unpack in a minute. (Note that local unemployment figures lag national numbers.) First, Denver’s unemployment rate was 9.2% as of January, 0.9 percentage points higher than January’s national average. Second, payroll in Denver grew from 1990 to 2000 but has been essentially flat since then, even though the unemployment rate changed drastically. Third, the percentage of government employees in Denver has been stable at about 14% since at least 1990.

From these figures we gain some interesting insight. First and foremost, unemployment in Denver should be addressed, as it is well above average. But we would be remiss to blame it on the crash of 2008. Why? Payroll in Denver has not substantially changed in more than ten years. How very odd: who are these unemployed people? The city’s population has expanded dramatically since 1990. There is a serious discussion to be had in Denver, since people keep coming, aren’t getting jobs, and haven’t been for a decade.

As a final example, the last graph on this page shows the percentage of people working for the government in Denver. In an age of claims about “bloated” government, we can show with some quick calculations that those arguments do not hold up (in Denver, at least). Taking some time to dig into employment statistics can help you track where your city has been, where it is, and where it is going.

By adding context at the federal, state, and/or local level, we can gain greater understanding, start asking better questions, and frame the questions we do ask with data. As seen when comparing the two big unemployment measures and their supporting participation rates, it takes more than one carefully curated number to employ data graphs successfully and fairly. Using these and other contextualized figures, we can help take government data out of the headlines and into our civic discussions, where it belongs.


This post was contributed by Daniel Saniski, the managing editor at Data360.org and an associate consultant at Webster Pacific LLC. He catalogs data, writes news about government data, and guides site development for Data360, and he provides business intelligence and information-systems design services at Webster Pacific LLC.

Participation by Design: Community Planning … A New App for Collaborative Geodesign

This post, by guest blogger Matt Baker, is the thirteenth in a month-long series on the impressive diversity of participatory decision-making tools that communities can use for land use plans, transportation plans, sustainability plans, or any other type of community plan. Our guest bloggers are covering the gamut, from low-tech to high-tech, web-based to tactile, art-based to those based on scenario planning tools, and more. We welcome your feedback and would love to hear about the participatory design strategies that you’ve found to be the most useful.

Combining GIS and design presents an opportunity to merge art and precision, geography and graphics, the human mind and creativity. The software that has resulted continues to redefine how we work with a GIS, not just cartographically, but in how we capture the many processes and workflows any designer might undertake.

When building a new plan for a community, there will likely be multiple stakeholders, each with their own vision and ideas. These plans all have their own importance, and each needs to be captured, analyzed, compared, and evaluated.

The new Community Planning web application from Esri (the documentation is also online) demonstrates how GIS and the web can provide a collaborative design tool that can be used to capture the visual qualities of a design, capture multiple scenarios, and save them to a central location. From there, the design lives as data, and with that comes the full ability to perform the spatial analysis and evaluation available in a GIS.

This application uses a combination of ArcGIS Server and Adobe Flex. With the release of ArcGIS 10 came the feature service: essentially a map, tied to a database, published to the web. Once a service has been published, the ArcGIS API for Flex allows for the creation of an interactive rich internet application that consumes an ArcGIS Server feature service. Web collaboration is born!
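Under the hood, a published feature service is just a REST endpoint; any client, Flex or otherwise, retrieves features by issuing query requests against it. A minimal sketch in Python, where the service URL and the plan-name field are hypothetical placeholders, not the application’s real ones:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; a real plan service's URL depends on where it is published.
SERVICE_URL = "https://example.com/arcgis/rest/services/CommunityPlan/FeatureServer/0"

def build_query_url(where="1=1", out_fields="*"):
    """Build a standard ArcGIS REST 'query' request against a feature layer."""
    params = {"where": where, "outFields": out_fields, "f": "json"}
    return f"{SERVICE_URL}/query?{urlencode(params)}"

# Fetch every feature belonging to one submitted plan (hypothetical field name):
url = build_query_url(where="plan_name = 'Riverside Draft'")
print(url)
```

The same endpoint accepts edits as well as queries, which is what lets a sketch drawn in the browser become a row in the server-side database.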

Creating a Plan

Begin by clicking the “Create My Plan” button (as seen in the photo above). This reveals a panel of features that can be drawn on the map, as well as a field to enter a Plan Name and your email address—which will serve as your identifier in the design collaboration process.

When you click “Submit My Plan”, you are creating a space on a GIS server that stores the features and attributes you draw on the map.

Sketching and Modifying Features

From the Create My Plan palette, click a type of land use to enable it for sketching, and click the map to add the shape of the polygon. As you click, you can see the feature being added to the map. Double-click to finish the sketch. If you want to change its shape, click the feature to select it, which reveals handles at each node you added. Drag a node to move it, and hover over an edge to reveal a ‘new’ node you can add to the shape.

Drawing Data and Measuring Impact

What makes this application powerful is the ability to draw data. As features are added to the plan, the area, length, and location are already known by the GIS.

When you click a feature you just drew, a pop-up displays the total values of several indicators.

Planners know from researching existing plans that certain indicators – environmental, economic, and social – can be measured based on the area of a particular feature. With the use of Flex and a simple expression, a value for an indicator can immediately be calculated as a function of the area of the feature it represents on the map. For example, if I assume that an acre of Commercial can generate 150 jobs, 4.5 acres will generate around 675 jobs.
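That calculation is nothing more than a per-acre multiplier applied to the sketched area. A sketch in Python, where only the 150-jobs-per-acre Commercial figure comes from the text and the other multipliers are hypothetical placeholders:

```python
# Jobs-per-acre multipliers; only the Commercial figure comes from the text,
# the others are hypothetical placeholders.
JOBS_PER_ACRE = {"Commercial": 150, "Industrial": 40, "Mixed Use": 75}

def jobs_indicator(land_use, acres):
    """Estimate the jobs a sketched feature generates from its area."""
    return JOBS_PER_ACRE[land_use] * acres

print(jobs_indicator("Commercial", 4.5))  # 675.0
```

Because the GIS already knows each feature’s area, indicators like this can be recomputed the instant a polygon is drawn or reshaped.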

Clicking the “Community Impact” button along the top menu reveals a charting widget with the option to compare the areas of all the land use types, and each indicator, providing a visual measure of proportion for the land use plan.

The indicators chosen for this application were pulled from various planning manuals and guidelines, such as the APA’s “Planning and Urban Design Standards,” and “The Smart Growth Manual.”

Submitting your plan

Clicking the “Review My Plan” button reveals a widget listing all the plans you have already submitted that are tied to your email address. Clicking each plan retrieves it from the server, allowing you to re-evaluate it, even edit features, and then re-save the plan to the server.

Sharing your plan

This application also gives you the ability to share your unique plan with the rest of the world. Clicking the “Share My Plan” button reveals a widget with options to share a link to your plan via Twitter, Facebook, or e-mail.

When the user on the other end clicks the link, they’ll be taken to the application and a map showing your plan.

What’s it all mean?

As citizens expect up-to-the-minute news about their community, so will they expect updates on plans for future development. Today’s web technology gives us instant communication through many channels and data types. ArcGIS Server gives our maps and GIS data the chance to participate in this exchange, giving planners and designers the ability to instantly post a design, share an idea, and receive feedback from other stakeholders and community members as quickly as we receive tweets from friends.

This post was contributed by Matt Baker, a product engineer with Esri.

Participation by Design: Mapping Media Ecosystems at Center for Civic Media

This post, by guest blogger Ethan Zuckerman, is the tenth in a month-long series on the impressive diversity of participatory decision-making tools that communities can use for land use plans, transportation plans, sustainability plans, or any other type of community plan. Our guest bloggers are covering the gamut, from low-tech to high-tech, web-based to tactile, art-based to those based on scenario planning tools, and more. One critical element of participatory design for community decision-making is ensuring that the relevant information, especially complex information, is presented in understandable and meaningful ways, and this post gives some terrific examples of how to display complex data in ways that vividly highlight key relationships and insights. The subject of the post – civic media – isn’t terrain we normally focus on, but it’s an awfully interesting subject in addition to the data visualization ground that it covers. Ethan originally published this post on his own my heart’s in accra blog on November 7, 2011. We welcome your feedback and would love to hear about the participatory design strategies that you’ve found to be the most useful.

This summer, Sasha, Lorrie and I started brainstorming the sorts of events we wanted to host at the Center for Civic Media this fall. The first I put on the calendar was a session on “mapping civic media”, a chance to catch up with some of my favorite people who are working to study, understand and visualize how ideas move through the complicated ecosystem of professional and participatory media.

To represent the research being done in the space, we invited Hal Roberts, my collaborator on Media Cloud (and on a wide range of other research), Erhardt Graeff from the Web Ecology project, and Gilad Lotan, VP of R&D for internet analytics firm BetaWorks. On Wednesday night, I asked them to share some of the recent work they’ve been doing, understanding the structure of the US and Russian blogosphere, analyzing the influence networks in Twitter during the early Arab Spring events and understanding the social and political dynamics of hashtags. They didn’t disappoint, and I suspect our video of the session (which we’ll post soon) will be one of the more popular pieces of media we put together this fall. In the meantime, here are my notes, constrained by the fact that I was moderating the panel and so couldn’t lean back and enjoy the presentations the way I otherwise might have.

Hal Roberts is a fellow at the Berkman Center for Internet and Society, where he’s produced great swaths of research on internet filtering, surveillance, threats to freedom of speech, and the basic architecture of the internet. (That he’s written some of these papers with me reflects more on his generosity than on my wisdom.) He’s the lead architect of Media Cloud, the system we’re building at the Berkman Center and at Center for Civic Media to “ask and answer quantitative questions about the mediasphere in more systematic ways.” As Hal explains, media researchers “have been writing one-off scripts and systems to mine data in haphazard ways.” Media Cloud is an attempt to streamline that process, creating a collection of 30,000 blogs and mainstream media sources in English and Russian. “Our goal is to get as much media as possible, so we can ask our own questions and also let others ask questions of our duct tape and bubblegum system.”

Hal’s map of clusters in popular US blogs.

(An interactive version of this map is available here.) Much of Hal’s work has focused on using the content of media – rather than the structure of its hyperlinks – to map and cluster the mediasphere. He shows us a map of US blogs that cluster into three main areas – news and political blogs, technology blogs and what he calls “the love cluster”. This last cluster is so named because it’s filled with people talking about what they love. Subclusters include knitters, quilters, fans of recipes and photography. The technology cluster breaks down into a Google camp, an iPhone camp and a camp discussing Android Apps. Hal’s visualization shows the words most used in the sources within a cluster, which helps us understand what these clusters are talking about. The Google cluster features words like “SEO, webmaster, facebook, chrome” and others, suggesting the cluster is substantively about Google and its technology projects.

While we might expect the politics and news cluster to divide evenly into left and rightwing camps, it doesn’t. Study the link structure of the left and the right, as Glance and Adamic and later Eszter Hargittai have, and it’s clear that like links to like. But Hal’s research shows that the left and right use very similar language and talk about many of the same topics. This is a novel finding: It’s not that the left and right are talking about entirely different topics – instead they’re arguing over a common agenda, an agenda that’s well represented in mainstream media as well, which suggests the existence of subjects neither the right nor the left is talking about online.

Building on this finding, Hal and colleagues at Berkman looked at the Russian media sphere, to see if there was a similar overlap in coverage focus between mainstream media and blogs. “Newspapers and the television are subject to strong state control in Russia – we wanted to see if our analysis confirmed that, and whether the blogosphere was providing an alternative public sphere.”

The technique he and Bruce Etling used is “the polar map”: put the source you believe is most important at the center, and map other sources at distances that reflect their degree of similarity to that source. The central dot is a summary of verbiage from Russian government ministry websites. Right next to it is the official government newspaper. TV stations cluster close to the center, while blogs cover a wide array of the space, including the edges of the map.
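One plausible way to build such a map (a sketch of the idea, not necessarily the Berkman team’s exact method) is to compare each source’s word frequencies to the center’s and convert similarity into radial distance:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency dicts."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def polar_distance(source_tf, center_tf):
    """Sources using the center's language plot near the center;
    unrelated sources plot at the rim."""
    return 1.0 - cosine_similarity(source_tf, center_tf)

# Toy term frequencies for illustration only:
kremlin = {"government": 5, "ministry": 3, "economy": 2}
state_paper = {"government": 4, "ministry": 2, "economy": 3}
knitting_blog = {"yarn": 6, "pattern": 4}

print(polar_distance(state_paper, kremlin) < polar_distance(knitting_blog, kremlin))  # True
```

The official paper, echoing government verbiage, lands near the center; the knitting blog, sharing no vocabulary with it, lands at the rim.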

It’s possible that blogs are showing dissimilarities to the Kremlin agenda because they’re talking about knitting, not about politics. So a further analysis (the one mapped above) explicitly identified democratic opposition and ethno-nationalist blogs and looked at their placement on the map. There’s strong evidence of political conversations far from the government talking points in both the democratic opposition and in the far right nationalist blogosphere.

What’s particularly interesting about this finding is that we don’t see the same pattern in the US blogosphere. Make a polar map with the White House, or a similar proxy for a US government news agenda, at the center, and you’ll see a very different pattern. Some right wing American blogs flock quite closely to the White House talking points – mostly to critique them – while the left blogs and mainstream media generally don’t. However, when Hal and crew did an analysis of stories about Egypt, they saw a very different pattern than in looking at all stories published in these sources. They saw a tight cluster of US mainstream media and blogs – left and right – around the White House. The government, the media and bloggers left and right talked about Egypt using very similar language. In the Russian mediasphere, the pattern was utterly different – the democratic opposition was far from the Kremlin agenda, using the Egyptian protests to talk about potential revolution in Russia.

The ultimate goal of Media Cloud, Hal explains, is to both produce analysis like this, and to make it possible for other researchers to conduct this sort of analysis, without a first step of collecting months or years of data.

Erhardt Graeff is a good example of the sort of researcher Media Cloud would like to serve. He’s cofounder of the Web Ecology Project, which he describes as “a ragtag group of casual researchers that has now turned in a peer-reviewed publication.” That publication is the result of mapping part of the Twitter ecosystem during the Tunisian and Egyptian revolutions, and attempting to tackle some of the hard problems of mapping media ecosystems in the process.

The Web Ecology Project began life researching the Iranian elections and resulting protests, focusing on the #iranelection hashtag. With a simple manifesto around “reimagining internet studies”, the project tries to understand the “nature and behavior of actors” in media systems. That means considering not just the top users, or even just the registered users of a system like Twitter, but the audience for the media they create. “Each individual user on Twitter has their personal media ecosystem” of people they follow, influence, are followed by and influenced by.

This sort of research rapidly bumps into three hard problems, Erhardt explains:

  • Did someone read a piece of information that was published? Or as he puts it, “Did the State Department actually read our report about #IranElection?” It’s very hard to tell. “We end up using proxies – you followed a link, but that doesn’t mean you read it.”
  • Which piece of media influenced someone to access other media? “Which tweet convinced me to follow the new Maru video, Erhardt’s or MC Hammer’s?”
  • How does the media ecosystem change day to day? Or, referencing a Web Ecology paper, “How many genitalia were on ChatRoulette today?” The answer can vary sharply day to day, raising tough problems around generating a usable sample.

The paper Erhardt published with Gilad and other Web Ecology Project members looks at the Twitter ecosystem around the protest movements in Tunisia and Egypt. By quantitatively searching for information flows, and qualitatively classifying different types of actors in that ecosystem, the research tries to untangle the puzzle of how (some) individuals used (one type of) social media in the context of a major protest.

To study the space, the team downloaded hundreds of thousands of tweets, representing roughly 40,000 users talking about Tunisia and 62,000 talking about Egypt. They used a “shingling” method of comparison to determine who was retweeting whom and sought out the longest retweet chains. They looked at the top 10% of these chains by length to find the “really massive, complex flows” and grabbed a random 1/6th of that sample. That yielded 774 users talking about Tunisia and 888 talking about Egypt… and only 963 unique users, suggesting a large overlap between the two sets.
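The “shingling” idea is worth unpacking: break each tweet into overlapping word n-grams (shingles) and treat high set overlap as evidence that one tweet is a copy or retweet of another. A minimal sketch, with made-up tweets and parameters (the paper’s exact settings aren’t given here):

```python
def shingles(text, w=3):
    """Set of overlapping w-word shingles from a tweet's text."""
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    """Overlap between two shingle sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

original = "protesters fill the square demanding change tonight"
retweet = "RT @someone protesters fill the square demanding change tonight"
unrelated = "my cat discovered the cardboard box today"

print(round(jaccard(shingles(original), shingles(retweet)), 2))    # 0.71
print(round(jaccard(shingles(original), shingles(unrelated)), 2))  # 0.0
```

Chaining these pairwise matches together is what lets the researchers trace who retweeted whom across hundreds of thousands of tweets.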

Then Erhardt, Gilad and others started manually coding the participants in the chains. Categories included Mainstream Media (@AJEnglish, @nytimes), web news organizations (@HuffingtonPost), non-media organizations (@Wikileaks, @Vodaphone), bloggers, activists, digerati, political actors, celebrities, researchers, bots… and a too-broad unclassified category of “others”. This wasn’t an easy process – Erhardt describes a system in which researchers compared their codings to ensure a level of intercoder reliability, then had broader discussions on harder and harder edge cases. They used a leaderboard to track how many cases they’d each coded, and goaded those slow to participate into action.

The actors they classified are a very influential set of Twitter users. The average organization in their set has 4004 followers, the average individual 2340 (which is WAY more than the average user of the system). To examine influence with more subtlety than simply counting followers, Erhardt and his colleagues use retweets per tweet as an influence metric. What they conclude, in part, is that “mainstream media is a hit machine, as are digerati – what they have to say tends to be highly amplified.”

The bulk of the paper traces information flows started by specific people. In the case of Egypt, lots of information flows start from journalists, bloggers and activists, with bots as a lesser, but important, influence. In Tunisia, there were fewer flows started by journalists, more by bots and bloggers, and way fewer from activists. This may reflect the fact that the Tunisian story caught many journalists and activists by surprise – they were late to the story, and less significant as information sources than the bloggers who cover that space over time. By the time Egypt becomes a story, journalists realized the significance and were on the ground, providing original content on Twitter, as well as to their papers.

One of the most interesting aspects of the paper is an analysis of who retweets whom. It’s not surprising to hear that like retweets like – journalists retweet journalists, while bloggers retweet bloggers. Bloggers were much more likely to retweet journalists on the topic of Egypt than on Tunisia, possibly because MSM coverage of Egypt was so much more thorough than the superficial coverage of Tunisia.

While Gilad Lotan worked with Erhardt on the Tunisia and Egypt paper, his comments at Civic Media focused on the larger space of data analysis. “I work primarily on data – heaps and mounds of data,” he explains, for two different masters. Roughly half his work is for clients, media outlets who want to understand how to interact and engage with their audiences. The other half focuses on developing the math and algorithms to understand the social media space.

This work is increasingly important because “attention is the bottleneck in a world where the threshold to publishing is near zero.” If you want to be a successful brand or a viable social movement, understanding how people manage their attention is key: “It’s impossible to simply demand attention – you have to understand the dynamics of attention in the face of this bottleneck.”

Gilad references Alex Dragulescu’s work on digital portraits, pictures of people composed of the words they most tweet or share on social media. He’s interested not just in the individuals, but in the networks of people, showing us a visualization of tweets around Occupy Wall Street. Different networks take form in the space of minutes or hours as new news breaks – the network around a threatened shutdown of Zuccotti Park for a cleanup is utterly different than the network in July, when Adbusters was the leading actor in the space.

Images like this, Lotan suggests, “are like images of earth from the moon. We knew what earth looked like, but we never saw it. We knew we lived in networks, but this is the first time we can envision it and see how it plays out.”

When we analyze huge data sets, we can start approaching answers to very difficult questions, like:

  • What’s the audience of the New York Times versus Fox News?
  • What type of content gains wider audiences through social media?
  • What topics do certain outlets cover? What are their strengths, weaknesses and biases?
  • How do audiences differ between different publications? How are they similar?
  • How fast does news spread, and how does it break?

Much of media and communications research addresses these questions, though rarely directly – as Erhardt noted, we generally address these questions via proxies. But Lotan tells us, we can now ask and answer questions like, “How many Twitter users follow Justin Bieber and The Economist?” The answer, to a high degree of precision, is 46,000. It’s just shy of the number who follow The Economist and the New York Times, 54,000.

Lotan is able to research answers like this because his lab has access to the Twitter “firehose” (the stream of all public data posted to Twitter, moment to moment) and to the bit.ly firehose. This second information source allows Lotan to study what people are clicking on, not just what media they’re exposed to. He offers a LOLcat, where the feline in question is dressed in a chicken costume. “We can see the kitty in you, and the chicken you’re hiding behind.” What people share and what they click is very different, and Lotan is able to analyze both.

This data allowed Lotan to compare what audiences for four major news outlets were interested in by measuring their clickstreams. Al Jazeera and The Economist, he tells us, are pretty much what you’d think. But Fox News watchers are fascinated by crime, murders, kidnappings, and other dark news. This sort of insight may help networks understand and optimize for their audiences. Al Jazeera’s audience, he tells us, is very engaged, tweeting and sharing stories, while Fox’s audience reads a lot and shares very little.

Some of Lotan’s recent research is about algorithmic curation, specifically Twitter’s trending topics. Many observers of the Occupy movement have posited that Twitter is censoring tweets featuring the #occupywallstreet hashtag. Lotan acknowledges that the tag has been active, but suggests reasons why it’s never trended globally. Interest in the tag has grown steadily, and has a regular heartbeat, connected to who’s active on the east coast of the US. The tag has spiked at times, but remains invisible in part due to bad timing – a spike on October 1st was tiny in comparison to “#WhatYouShouldKnowAboutMe”, trending at the same time.

At this point, Lotan believes he has partially reverse engineered the Trending Topics algorithm. The algorithm is very sensitive to the new, not to the slowly building. This raises the question: what does it mean to “get the math right”? Lotan observes, “Twitter doesn’t want to be a media outlet, but they made an algorithmic choice that makes them an editor.” He’s quick to point out that algorithmic curation is often very helpful: the Twitter algorithm is quite good at preventing spam attacks, which have a different signature than organic trends. So we see organic, fast-moving trends, even when they’re quite offensive. He points to #blamethemuslims, which started when a Muslim woman in the UK snarkily observed that Muslims would be blamed for the Norway terror attacks. That tweet died out quickly, but was revived by Americans who used the tag unironically, suggesting that we blame Muslims for lots of different things. That small bump followed by a massive spike is a fairly common organic pattern… and very different from the spam patterns he’s seen on Twitter.
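A toy model of that “sensitive to the new” behavior: score each tag by its recent activity relative to its own longer-run baseline, so a sudden spike outranks steady growth. This is only a sketch of the intuition, not Twitter’s actual algorithm, and the counts are invented:

```python
def trend_score(hourly_counts, window=3):
    """Recent activity relative to the tag's own earlier baseline: a sudden
    spike scores high, a steadily building tag scores low."""
    recent = sum(hourly_counts[-window:]) / window
    earlier = hourly_counts[:-window]
    baseline = sum(earlier) / len(earlier) if earlier else 0
    return recent / baseline if baseline else float("inf")

slow_build = [10, 20, 40, 80, 120, 160, 200]   # steady, #occupywallstreet-style growth
sudden_spike = [5, 5, 5, 5, 5, 300, 400]       # novelty-driven burst

print(trend_score(sudden_spike) > trend_score(slow_build))  # True
```

Under a scoring rule like this, a tag that has been climbing for weeks raises its own baseline and never looks novel, which is consistent with Lotan’s explanation of why #occupywallstreet never trended globally.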

When we analyze networks, Lotan suggests, we encounter a paradox that James Gleick addresses in his recent book on information: just because I’m one hop away from you in a social network doesn’t mean I can send you information and expect you to pay attention. In the real world, people who can bridge between conversations are rare, important and powerful. He closes his talk with the map of a Twitter conversation about an event in Israel where settlers were killed. There’s a large conversation in the Israeli twittersphere, a small conversation in the Palestinian community, and two or three bridge figures attempting to connect the conversations. (One is my wife, @velveteenrabbi.) Studying events like this one may help us, ultimately, determine who’s able to build bridges between these conversations.

Ethan is a blogger, media researcher, and the director of the MIT Center for Civic Media.

PlaceMatters Blog Roundup: February 2, 2012

Photo by Flickr user kmakice.

The value of crowd- and group-based thinking has drawn some attention lately. The New York Times ran a guest editorial (“The Rise of Groupthink”) arguing that people are more creative when they are able to work in solitude rather than in groups, a theme covered by the New Yorker as well (you can read the summary but the full article is behind a paywall). There’s quite a bit of thoughtful commentary on the subject, including posts on the National Charrette Institute blog and Fast Company’s Co.Design blog.

Involve explores a related theme, suggesting the value of crowdsourcing may be more about generating ideas and enthusiasm than generating consensus.

The National Coalition for Dialogue and Deliberation offers some great tips for designing a successful online collaboration or deliberation process.

Engaging Cities summarizes the highlights of an online discussion (on Cyburbia) about increasing public participation in rural communities.

Intellitics writes about a new IBM report called “A Manager’s Guide to Evaluating Citizen Participation.”

Digital Urban posts a characteristically cool Twitter data visualization.

Ascentum reports on German Chancellor Merkel’s web-based national engagement effort, “Dialogue about Germany’s Future.”

Museum 2.0 explores the challenges of designing interactive activities that work for both adults and kids.

What did we miss?

Most Exciting Trends in 2012: Mobile, Social, and Local

PlaceMatters’ Ken Snyder using his smartphone as part of a Walkshop demonstration. We expect to see increasingly cool and robust ways to use smartphones in community decision-making in 2012.

Five trends I’m excited about for 2012:

1) Mobile Everything
It’ll all be about mobile in 2012. Smartphone sales continue to grow, and consumers are increasingly shifting from PC-based web activity to using smartphones. Because of the pervasiveness of mobile devices and the growing sophistication of both native and HTML-based apps, many of the tools that groups like PlaceMatters use will rely increasingly on versions that run on mobile devices. This will present some terrific opportunities, but it will also mean we need to be even more mindful of digital divide problems, ensuring that individuals without mobile devices and communities with lower mobile penetration are still able to fully participate and contribute.

2) Social Media Goes Even Bigger
Although Facebook use has already reached mind-boggling proportions (more than 800 million active users, according to Facebook), we expect that Facebook and other social media products will become even more universal and essential as engagement platforms, web portals, and discovery engines. Civic participation will increasingly rely directly on Facebook and social media and on tools that themselves are built on social media platforms.

Most Exciting Trends in 2012: Big Data, Collaborative Problem Solving

Big Data, Big Business

Decision support systems that take massive data sets from multiple public and private entities and synthesize them into valuable cross-discipline information for city and regional decision making are clearly becoming big business. Television, online, and magazine advertising is full of spots from IBM, Cisco, and Siemens, to name a few, promising to improve our communities with sophisticated data management, synthesis, and analysis. This fall I was struck by a large nine-screen interactive wall created by Siemens, prominently displayed at National Airport in DC. The interactive touch screens invited travelers to experiment with different strategies to improve a city’s mobility and energy efficiency. The Decision Labs at the University of Washington has been experimenting with applications first developed in the gaming industry to combine dynamic data with scenario planning and visualization. They are creating a decision-making framework for the Seattle region that can be tailored to a wide range of public and private users at the different stages of planning and development.

A nine-screen touchscreen display at Washington’s National Airport.

On the low-cost end, Google has improved the API for graphs in spreadsheets posted on Google Docs. You can now easily embed them into websites, with nice hover features for viewing details within the graph. More importantly, anytime new numbers are added to the cloud-based spreadsheet, the graphs on your site update automatically. This opens the door for a wide range of interactive technologies where participants can push data to the site. PlaceMatters is using this functionality in the next iteration of the website for Omaha’s Comprehensive Energy Management Program to track progress on project indicators. Another company providing a more packaged deal for viewing data linked to maps is Geowise, with its cool InstantAtlas indicator interface. For example, the Council of Community Services incorporated InstantAtlas into its website to display county and census data for a multi-county area around Roanoke in western Virginia.
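The auto-updating behavior described above comes from the fact that a spreadsheet published to the web is fetched live each time a page renders its chart. As a minimal Python sketch of that pattern (the export URL format assumes a sheet published to the web; the sheet ID and column names are placeholders, not a real document):

```python
import csv
import io

def sheet_csv_url(sheet_id: str, gid: str = "0") -> str:
    """Build the CSV export URL for one tab of a published Google Sheet.
    Because the URL always returns the sheet's current contents, a chart
    rebuilt from it reflects the latest indicator values with no redeploy."""
    return (f"https://docs.google.com/spreadsheets/d/{sheet_id}"
            f"/export?format=csv&gid={gid}")

def parse_indicator_rows(csv_text: str) -> list:
    """Turn the exported CSV text into rows a charting library can consume."""
    return list(csv.DictReader(io.StringIO(csv_text)))
```

In use, a page would fetch `sheet_csv_url("YOUR_SHEET_ID")` on each load (or on a timer), parse the rows, and re-render the chart, so updating the spreadsheet is all it takes to update the published graphs.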

Collaborative Problem Solving

This year PlaceMatters is collaborating with the Environmental Protection Agency to host a second round of code-a-thons in pursuit of new and improved applications for data collection, analysis, and project implementation around sustainable development. Universities and software developers will join planners and practitioners to identify shortcomings in existing tools and highlight opportunities to create new tools that improve decision-making in communities. The first code-a-thon will take place in Washington, D.C. on January 22. PlaceMatters will take the lead in organizing the second code-a-thon, to take place in Denver during summer 2012. This approach to collaborative tool development is in part inspired by past successes in the field of citizen science. Foldit is one such project that emphasizes the wisdom of crowds for certain types of problem solving. Scientists recruited volunteers to help predict where folding would occur in protein and RNA strands. It turns out this is the type of problem where collective brainpower excels: untrained online gamers outperformed even the best computer programs.

Another great example of collaborative problem solving can be found at OpenIdeo, where an individual, group, or organization poses a challenge and various participants contribute to various stages of problem solving (including inspiration, concepting, and evaluation). Last month, one of the posted challenges was: “How might we restore vibrancy in cities and regions facing economic decline?” Nearly 900 ideas were submitted at the inspiration stage, with twenty final concepts rising to the top. This week the project will shift into evaluation of the winning concepts.

OpenIdeo’s status screen on the ‘How might we restore vibrancy in cities and regions’ challenge.

PlaceMatters Blog Roundup: November 22, 2011

We love online technology here at PlaceMatters, but it doesn't replace offline, in-person engagement.

EngagingCities does a nice job making the case for the importance of merging online and offline engagement strategies.

inCommon links to an op-ed arguing that transparency and information, while essential, do not alone constitute public engagement. We’d argue, for similar reasons, that there’s more to accountability than just transparency.

James Fee, on his Spatially Adjusted blog, gushes over SketchUp and its new “Making Things Real” project. We share his enthusiasm . . . we find SketchUp to be a powerful tool for visualizing land use and design options (that happens to be free, thanks to Google).

All Points Blog reports that Flickr added “geofencing,” which creates a privacy option based on the location of a photo. Photos geotagged as being taken within geographic areas designated by users are shared only with specific, pre-selected people. Although this particular tool may not be very useful from a civic participation perspective, it is suggestive of functionality that Flickr might eventually add that could be.

Digital Urban reviewed Instant City Generator for Cinema 4D.

Digital Urban also posted a visualization of “urban complexity” data in London. We always enjoy the videos Digital Urban digs up, including this one. What we found most interesting about this one: the way the visualization highlights the corridors and satellite urban hubs around the central city.

Our friends at the National Charrette Institute posted a list of charrette-oriented resources on their blog.

Some food for thought on Gigaom: the continued evolution of QR codes and the emergence of NFC (near field communication) technology. This post focuses on the offerings of one specific startup called Social Passport, but it offers a sense of the potential for these technologies (especially NFC) in community decision-making, especially projects that involve community members actually out in the community through asset mapping, walkshops, or other participatory activities.

We stumbled across this dated but enjoyable video of Bobby McFerrin leading an “audience participation jam.” It’s an impressive call-and-response participation model that results in some very cool music. We aren’t sure you’d want to structure an entire community participation process on this model, but we can imagine some ways that this could work for pieces of a process.

Snurblog provides a useful overview of crowdsourcing in public participation processes.

PlaceMatters’ Ken Snyder offers his take on the emerging field of geodesign on Planetizen (and reposted on the PlaceMatters blog).

What did we miss?

PlaceMatters Blog Roundup: September 14, 2011

A very cool engagement strategy: a Harry Potter-style map that reveals new areas as you travel through a museum (h/t to All Points Blog).

Digital Urban shows off a cool augmented reality implementation: incorporating 3-D content, overlaid on the iOS video feed, that can be manipulated through user interaction in real time.

EngagingCities thinks through hackathons and some of the opportunities and challenges of government app-creation efforts.

More from EngagingCities: three fun tools (games?) for community planning.

And another post from what is our favorite blog this week: EngagingCities describes an awesome art-heavy “collaborative mapping” process in Tokyo.

There’s a really nice Nick Grossman interview courtesy of the Open Plans blog.

A new study: federal agencies need to improve public participation standards.

The BMW Guggenheim Lab created an “Urbanology” web site. Answer a series of questions and the site will create your own ideal “future city” and compare it to other cities around the world. It’s an interesting idea but the execution isn’t very strong yet. For instance, the trade-offs – an essential element in any future scenarios type of tool – just don’t make a lot of sense.

As reported on a bunch of blogs over the past couple of weeks, the White House launched a new “We the People” initiative inviting citizens to submit e-petitions seeking federal action on presumably just about anything. The system allows anyone to create a petition; if at least 150 people sign the petition it becomes publicly searchable on the White House site. The White House committed to reviewing and responding to any petition receiving at least 5,000 signatures within 30 days. You’ll find some thoughtful comments on the National Coalition for Dialogue and Deliberation blog, and a couple of more skeptical reviews on Intellitics (“White House Petitions: The Need for Robust FAQs” and “White House Petitions: a Small Sample of Popular Feedback“).
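The petition rules described above boil down to two signature thresholds. A purely illustrative sketch (not White House code; the tier labels are made up, and the 30-day signature window is omitted for brevity):

```python
# Thresholds as described in the post: 150 signatures makes a petition
# publicly searchable; 5,000 triggers an official White House response.
SEARCHABLE_AT = 150
RESPONSE_AT = 5_000

def petition_status(signatures: int) -> str:
    """Map a signature count to its visibility/response tier."""
    if signatures >= RESPONSE_AT:
        return "official response due"
    if signatures >= SEARCHABLE_AT:
        return "publicly searchable"
    return "reachable only by direct link"
```

So a petition with 200 signatures would surface in site search but not yet obligate a response.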

We are technology enthusiasts at PlaceMatters, but we agree with A Planner’s Guide that technology needs to be used thoughtfully and in ways that are appropriate to the audience and the context.

A cool, sticker-based engagement project on Grist.

StreetsBlog reviews the book “Visualizing Density,” which includes photographs and descriptions of 250 neighborhoods across the country. The goal: “provide an impartial and comparative view of the many ways to design neighborhoods.” Actual photographs of actual neighborhoods aren’t what we usually think of when we talk about visualization tools, but it seems like one pretty obvious and useful approach.

What did we miss?

PlaceMatters Blog Roundup: August 11, 2011

Engaging Cities writes about the recommendation engine Scoville (and Scoville rejects me for their beta because I don't have enough Facebook check-ins!).

Between presentations at the White House and the Ford Foundation’s 75th anniversary gala (we’ll blog about both of these soon), tons of amazing projects (we’ve got a team in the New River Valley of rural southwestern Virginia at this very moment), and just the general zaniness of summer we’ve got quite a backlog of great blog posts to round up:

Our friend Chris Haller on the EngagingCities blog writes about a new recommendation engine app, Scoville, built on Foursquare’s API. His take: if it works, it might be pretty useful to planners as a community asset mapping tool. Chris also posted a nice checklist for planners hoping to use social media tools in their community engagement efforts.

Digital Urban commented on Urban Sensation’s interesting approach to urban visualization, layering data on top of CCTV footage as part of an immersive sensory emulation project. Hard to explain, and pretty unclear where they’ll end up, but a creative and ambitious idea about creating engaging experiences.

Three other Digital Urban posts to note: an interesting Big Data/urban operating system concept called Urbanflow Helsinki, a creative urban design model inverting the conventional transportation paradigm [http://www.digitalurban.org/2011/07/clockwork-city.html], and a Nike-supported data visualization by YesYesNo illustrating running patterns over the course of a year.

Open Source Planning offers another take on the Nike data visualization, noting that there’s a clear bias in the data collection (i.e., what sort of folks happen to run with the fancy iPod-Nike chip system, what parts of New York they run in, and which boroughs they avoid).

We love smart technology aimed at improving civic engagement and community decision making, and we think Next American City rocks, so we especially liked their roundup of the best city- and community-oriented technology tools.

countably infinite has a thoughtful post about the challenges of pseudonymity in community decision making.

The e-Participation and Online Deliberation blog reflects on the challenge of making technology-enabled engagement tools do more than simply gather more “trickle-up” opinions but, rather, to foster genuine engagement, conversation, and deliberation.

Intellitics comments on the role of public participation in a new Open Government Partnership.

Metropolis reports on New York City’s new app development competition.

Planetizen blogs about the Guggenheim City Laboratory and its six-year nine-city tour.

Design Mind describes the challenges that cities and their CTOs and Chief Digital Officers face in the transition to digital participation (h/t to Planetizen).

All Points Blog describes a new augmented reality implementation and a new conceptual implementation. We aren’t all that excited about driving while viewing the road through our mobile device, but these types of developments will no doubt move the ball forward on applications that are relevant for community planning and civic engagement.

inCommon writes about a participatory park planning project in Santa Monica and CoolTown Studios describes another, similar planning effort for a downtown area in the Village of Hempstead on Long Island.

PlaceMatters summer intern Matt Weinstein blogged about our walkshop in Somerville, Massachusetts, and Jason offers some context on Esri’s acquisition of Procedural (the makers of CityEngine) and spells out some of the implications.

What did we miss?

PlaceMatters Blog Roundup: April 26, 2011

Photo by Engaging Cities.

Engaging Cities had two great posts. One was by guest blogger Claudia Paraschiv on “El Carrito,” a mobile community participation cart used in Barcelona, Spain. The idea is clever enough on its own, but the cart itself also contains the tools for some great public engagement techniques, like the “Neighborhood Detective” game for kids.

And Jennifer Evans-Cowley has a great pecha kucha presentation on the diverse and impressive apps that communities are developing around the country.

Jennifer also was a guest blogger on the Cubit Planning blog Plannovation about the use of social media in planning, and she dives into the data on how APA 2011 participants actually used Twitter during the conference, including the impressive buzzwordification (my word) of phrases like skyboxification (Michael Sandel’s word), blight porn, and gray tsunami.

Another Cubit Planning guest blogger, Chris Haller, wrote about bridging the online/offline divide in public involvement during planning processes.

The Dirt explores visualizing brownfield and Superfund data in ways that help community members understand the challenges and explore options. Engagement is half the challenge, but once you’ve got folks plugged in you still have to provide tools for making sense of the issues, the alternatives, and the trade-offs.

Noah Raford describes an online scenario planning process using SenseMaker Suite to create the scenarios and auto-aggregation tools to analyze the narratives submitted by participants. It’s more of a proof of concept than a fully fleshed-out approach, but it seems to have some promise.

Intellitics describes the idea of “microparticipation” in online community engagement.

National Coalition for Dialogue & Deliberation describes a demo of a software tool called EngagEnterprise, designed to aid in stakeholder management. It’s not a project management tool, they explain, but one that helps keep track of who all the stakeholders are and their relationships to each other and to the project. It’s a “self-serve information dispensary and an online bulletin board.”

National Charrette Institute writes about the Better Block project, “a demonstration tool that acts as a living charrette,” enabling communities to work on, provide feedback on, and iterate complete streets projects in real time.

The PlaceMatters blog features an interview with Rob Matthews of the Decision Commons and a very cool video on the future of glass and displays.

What did we miss?