Patrick Meier: Could CrowdOptic Be Used For Disaster Response?

Tags: Patrick Meier

Crowds, rather than lone individuals, are increasingly bearing witness to disasters large and small. Instagram users, for example, snapped 800,000 #Sandy pictures during the hurricane last year. One way to make sense of this vast volume and velocity of multimedia content (Big Data) during disasters is with PhotoSynth, as blogged here. Another, perhaps more sophisticated, approach would be to use CrowdOptic, which automatically zeros in on the specific location that eyewitnesses are looking at when using their smartphones to take pictures or record videos.

“Once a crowd’s point of focus is determined, any content generated by that point of focus is automatically authenticated, and a relative significance is assigned based on CrowdOptic’s focal data attributes […].” These include: (1) Number of Viewers; (2) Location of Focus; (3) Distance to Epicenter; (4) Cluster Timestamp and Duration; and (5) Cluster Creation and Dissipation Speed. CrowdOptic can also be used on live streams and archival images & videos. Once a cluster is identified, the best images and videos pointing to this cluster are automatically selected.
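CrowdOptic’s actual focal-detection algorithm is proprietary, but the geometric intuition behind it is easy to sketch: each geo-tagged photo defines a sightline running from the camera’s GPS position along its compass bearing, and the crowd’s point of focus is roughly where many sightlines converge. Here is a minimal, hypothetical Python illustration of that idea; the least-squares line intersection and all names and numbers are mine, not CrowdOptic’s:

```python
import numpy as np

def focal_point(observers, bearings_deg):
    """Least-squares intersection of sightlines.

    observers: (n, 2) camera positions in a local x/y frame
               (metres, x = east, y = north).
    bearings_deg: n compass bearings (0 = north, 90 = east, clockwise).
    Returns the point minimising the summed squared perpendicular
    distance to all n sightlines.
    """
    p = np.asarray(observers, dtype=float)
    theta = np.radians(np.asarray(bearings_deg, dtype=float))
    d = np.column_stack([np.sin(theta), np.cos(theta)])  # unit directions
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for pi, di in zip(p, d):
        proj = np.eye(2) - np.outer(di, di)  # projector perpendicular to line i
        A += proj
        b += proj @ pi
    return np.linalg.solve(A, b)

# Three witnesses photographing the same incident from different spots;
# all three sightlines cross near (50, 50).
print(focal_point([(0, 0), (100, 0), (50, -80)], [45, 315, 0]))  # ~[50. 50.]
```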

Read full post with graphics and more links.

Jun 5

Patrick Meier: Data Mining Wikipedia in Real Time for Disaster Response [or Any Current Event]

Tags: Patrick Meier

My colleague Fernando Diaz has continued working on an interesting Wikipedia project since he first discussed the idea with me last year. Since Wikipedia is increasingly used to crowdsource live reports on breaking news such as sudden-onset humanitarian crises and disasters, why not mine these pages for structured information relevant to humanitarian response professionals?

In computing-speak, Sequential Update Summarization is a task that generates useful, new and timely sentence-length updates about a developing event such as a disaster. In contrast, Value Tracking tracks the value of important event-related attributes such as fatalities and financial impact. Fernando and his colleagues will be using both approaches to mine and analyze Wikipedia pages in real time. Other attributes worth tracking include injuries, number of displaced individuals, infrastructure damage and perhaps disease outbreaks. Pictures of the disaster uploaded to a given Wikipedia page may also be of interest to humanitarians, along with meta-data such as the number of edits made to a page per minute or hour and the number of unique editors.
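To make the two tasks concrete, here is a minimal, hypothetical Python sketch: sequential update summarization approximated as novelty filtering (a sentence from the revision stream is emitted only if it differs enough from updates already emitted), and value tracking approximated as a regular expression that pulls a fatality count out of each update. The threshold, pattern, and example sentences are all illustrative, not anything Fernando’s team has specified:

```python
import re
from collections import Counter
from math import sqrt

def bag(sentence):
    return Counter(re.findall(r"[a-z]+", sentence.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def sequential_updates(sentences, max_similarity=0.65):
    """Emit a sentence only if it is sufficiently novel vs. earlier updates."""
    emitted = []
    for sentence in sentences:
        vec = bag(sentence)
        if all(cosine(vec, prev) < max_similarity for prev in emitted):
            emitted.append(vec)
            yield sentence

# Value tracking: pull one numeric attribute (fatalities) out of each update.
FATALITIES = re.compile(r"(\d[\d,]*)\s+(?:people\s+)?(?:dead|killed|fatalities)", re.I)

stream = [
    "A magnitude 7.1 earthquake struck the region on Tuesday.",
    "A 7.1 magnitude earthquake hit the region Tuesday.",  # near-duplicate: suppressed
    "Officials report 12 people killed and hundreds displaced.",
    "Officials now report 45 people killed as rescue efforts continue.",
]
for update in sequential_updates(stream):
    hit = FATALITIES.search(update)
    print(update, "| fatalities:", hit.group(1) if hit else "-")
```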


Fernando and his colleagues have recently launched this tech challenge to apply these two advanced computing techniques to disaster response based on crowdsourced Wikipedia articles. The challenge is part of the Text Retrieval Conference (TREC), which is being held in Maryland this November. As part of this applied research and prototyping challenge, Fernando et al. plan to use the resulting summarization and value tracking from Wikipedia to verify related crisis information shared on social media. Needless to say, I’m really excited about the potential. So Fernando and I are exploring ways to ensure that the results of this challenge are appropriately transferred to the humanitarian community. Stay tuned for updates.

See also: Web App Tracks Breaking News Using Wikipedia Edits [Link]

Jun 4

Patrick Meier: Automatic Processing of Tweets & Crowd-Sourced Reports

Tags: Patrick Meier

Automatically Classifying Crowdsourced Election Reports

As part of QCRI’s Artificial Intelligence for Monitoring Elections (AIME) project, I liaised with Kaggle to work with a top-notch data scientist to carry out a proof-of-concept study. As I’ve blogged in the past, crowdsourced election monitoring projects are starting to generate “Big Data” which cannot be managed or analyzed manually in real time. Using the crowdsourced election reporting data recently collected by Uchaguzi during Kenya’s elections, we therefore set out to assess whether one could use machine learning to automatically tag user-generated reports according to topic, such as election violence. The purpose of this post is to share the preliminary results from this innovative study, which we believe is the first of its kind.
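The study’s actual features and model aren’t detailed here, so purely as a sketch of the general approach, assuming a corpus of reports with human-assigned topic tags, here is what a baseline topic tagger might look like in Python using scikit-learn. The example reports and labels are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; the real study would use thousands of
# labelled Uchaguzi reports.
reports = [
    "Youths throwing stones at polling station, police responding",
    "Clashes between supporters reported near the tallying centre",
    "Long queues but voting proceeding peacefully",
    "Ballot boxes arrived late, voters still waiting calmly",
]
labels = ["violence", "violence", "no-violence", "no-violence"]

tagger = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
tagger.fit(reports, labels)

print(tagger.predict(["Fighting broke out at the polling station"]))
# -> ['violence'] on this toy data; real accuracy needs a real corpus
```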

Read full post with graphics.

Over 1 Million Tweets from Oklahoma Tornado Automatically Processed

My colleague Hemant Purohit at QCRI has been working with us on automatically extracting needs and offers of help posted on Twitter during disasters. When the two-mile-wide EF4 tornado struck Moore, Oklahoma, he immediately began to collect relevant tweets about the tornado’s impact and applied the algorithms he developed at QCRI to extract needs and offers of help.
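Hemant’s pipeline relies on trained classifiers rather than hand-written rules, so the following Python sketch is only a toy stand-in that illustrates the task itself: tagging each tweet as a need, an offer, or neither. The patterns and example tweets are mine, not QCRI’s:

```python
import re

# Illustrative patterns only; the deployed system uses trained
# statistical classifiers, not hand-written rules.
NEED = re.compile(r"\b(need(?:s|ed)?|looking for|urgently require[sd]?)\b", re.I)
OFFER = re.compile(r"\b(offer(?:ing)?|donat(?:e|ing)|can provide|available)\b", re.I)

def label_tweet(text):
    """Crude need/offer tagger standing in for a trained classifier."""
    if NEED.search(text):
        return "need"
    if OFFER.search(text):
        return "offer"
    return "other"

tweets = [
    "We urgently need water and blankets at the shelter on 4th St",
    "Offering free rides from OKC to Moore for anyone stranded",
    "Thoughts with everyone in Oklahoma tonight",
]
for t in tweets:
    print(label_tweet(t), "|", t)
# need  | We urgently need water and blankets ...
# offer | Offering free rides from OKC ...
# other | Thoughts with everyone in Oklahoma tonight
```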

Read full post

May 22

Patrick Meier: Jointly: Peer-to-Peer Disaster Recovery App

Tags: Patrick Meier

My colleague Samia Kallidis is launching a brilliant self-help app to facilitate community-based disaster recovery efforts. Samia is an MFA candidate at the School of Visual Arts in New York. While her work on this peer-to-peer app began as part of her thesis, she has since been accepted to the NEA Studio Incubator Program to make her app a reality. NEA provides venture capital to help innovative entrepreneurs build transformational initiatives around the world. So huge congrats to Samia on this outstanding accomplishment. I was already hooked back in February when she presented her project at NYU and am even more excited now. Indeed, there are exciting synergies with the MatchApp project I’m working on with QCRI and MIT-CSAIL, which is why we’re happily exploring ways to collaborate & complement our respective initiatives.

Read full post with multiple graphics.

Apr 24