Fortune Telling with Data: Modeling Threat with Feeble Predictors

Anna Pavlova - foxy chiromancer

While in college, and unbeknownst to most people, I dabbled in some performance artistry and predictive analysis. Part of the performative nature of college is developing the capacity to claim competence in topics and credentials one has yet to earn, to be educated formally while informally fumbling through social mechanics for which no adequate prerequisite is ever published, and finally to make elaborate promises to potential employers that you can’t yet keep. Corroborating this farce with some documentation is usually expected: a cover letter here, a résumé there.

On professional applications, it seemed impressive to have a range of acronym memberships in organizations with undefined but assumed-legitimate titles. Interviewers, however, seemed inattentive to both merit and documentation fluff, so in small print among some legit scholarships and volunteer positions, I wedged in a nod to my extracurricular involvement in the “ESP Volunteer Aid Org.”

esp

I am not a performance artist or a seer, and this was an experiment I dropped shortly after, but I think there’s some irony in the fact that, while attempting to assemble my prospective professional qualifications, I spent some time considering a career in heightened sensory perception. Or rather, I tried to jest-test the merit system with an obviously bogus acronym. Funny in retrospect, but previous experience in ESP isn’t far from the tacit prerequisite many would assign to data mungers these days, and particularly anyone who does even mild statistical analysis on crisis datasets. With all these data, surely someone would be able to claim clairvoyance and solve international crises with an affinity for computational analysis of historical precedent; surely the answers are there?

psychic

fortuneteller

As someone who works with data, and more importantly as someone currently living in the future (compared to those currently living in my home country…whoohoo Nairobi time), I thought it might be appropriate to exercise my clairvoyance and provide clarity on life in Nairobi, assessing some of the current intensities and the probability that they might escalate. In any case, the explosion of State Department travel warnings in my inbox this week has made my reticence on the subject a bit obnoxious, so I’m going to diverge from my typical open source software soap-boxing to write a bit about the statistics of terrorism and the particulars of my current condition.

I write from the position of a math hobbyist and an amateur clairvoyant, so I’ve peppered this post with some specious but thoughtful observations about the news I’ve been following, what I’m currently experiencing, and the links, images, and resources my limited bandwidth allows me to explore.

In brief, Nairobi has been intense of late. The State Department issued four official warnings this week, encouraging US residents and visitors to avoid Eastleigh, travel to Mombasa, proximity to Burundi, and most recently, travel in Kenya, period. These follow the bombing earlier this week in Nairobi (6 fatalities and 20+ injuries), the recent church attack in Mombasa (4 fatalities), and general insecurity about al-Shabaab and armed operatives threatening attacks in any country conducting peacekeeping and/or military efforts in Somalia. :(

For it be morrow..

Speculation about the probability of a “large scale attack to come” made me start thinking about the meaning of “large scale” and projected “imminence” when it comes to statistically predicting events of high variability. How large is a “large scale” event? Anything where multiple deaths result seems “large” to me, though my definition has adjusted to accommodate recent conditions. If authorities are projecting a large-scale event to come, what about the unsettling events of now? How soon is imminence? Not to be too much of a Morrissey fan-girl, but how soon is now?

Which to choose?

So with all of these questions and my own preoccupation with the quantified self, I thought it might be time to read up on a few predictive models of the likelihood that something might happen.

Periodically people post comparatives online like “you’re __ times more likely to die of x than to be involved in a terrorist attack.” These seem to have been popularized on the net post-9/11, though the scale and impact of that attack in the domestic US was fairly singular, and data collected prior to it would do little to predict its occurrence and re-occurrence without admission of several limiting factors and uncontrolled variables. That said, in the Annals of Applied Statistics (vol. 7, no. 4, 2013) last year, Clauset and Woodard published a paper called “Estimating the Historical and Future Probabilities of Large Terrorist Events,” in which they hoped to define a generic statistical algorithm for estimating the likelihood of terror events in complex social systems. These kinds of predictive stats depend on many variables beyond the empirical precedent in the data, but the authors present multiple tail models and disclaimers about the limitations of their predictions to control for this (Matlab code and sample data available here if you want to play).
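Their approach is power-law tail modeling. This isn’t the authors’ Matlab, just a minimal Python sketch of the standard continuous power-law tail fit (the paper treats fatality counts as discrete and is far more careful, so consider this the back-of-napkin version):

```python
import numpy as np

def fit_power_law_tail(severities, xmin):
    """MLE of the tail exponent for a continuous power law above xmin."""
    tail = np.asarray([x for x in severities if x >= xmin], dtype=float)
    alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))
    return alpha, len(tail)

def tail_probability(x, alpha, xmin):
    """P(X >= x) for a single event already in the tail (x >= xmin)."""
    return (x / xmin) ** -(alpha - 1.0)
```

Plugging in the international-event exponent quoted further down (α ≈ 1.93, xmin = 1) and taking roughly 2,749 fatalities as the 9/11 benchmark, tail_probability(2749, 1.93, 1) comes out to about 6 × 10⁻⁴ for a single international event, the sort of per-event tail probability the decade forecasts are built from.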

Kreskin's ESP board game

Of particular interest in their summary forecast are estimates for three potential scenarios, each a possible trajectory extrapolated from past data:

“Rather than make potentially overly specific predictions, we instead consider three rough scenarios (the future’s trajectory will presumably lay somewhere between): (i) an optimistic scenario, in which the average number of terrorist attacks worldwide per year returns to its 1998–2002 level, at about ⟨nyear⟩ = 400 annual events; (ii) a status quo scenario, where it remains at the 2007 level, at about 2000 annual events; and finally (iii) a pessimistic scenario, in which it increases to about 10,000 annual events.”

That, then, looks something like this:

table
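At heart these forecasts are “probability of at least one” calculations. Here is a back-of-envelope sketch of that arithmetic; the per-event tail probability q is a placeholder chosen to land near the status-quo figure cited next, not a value from the paper, which bootstraps over many fitted tail models rather than fixing a single q:

```python
# Chance of at least one catastrophic event over a decade, per scenario.
q = 3e-5  # placeholder per-event probability of catastrophic severity
scenarios = {"optimistic": 400, "status quo": 2000, "pessimistic": 10000}

for name, events_per_year in scenarios.items():
    n = events_per_year * 10                    # total events over ten years
    p_at_least_one = 1 - (1 - q) ** n
    print(f"{name:>11}: {p_at_least_one:.2f}")  # ~0.11, ~0.45, ~0.95
```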
Crystalball looking cute

The Clauset and Woodard analysis further predicts a 19-46% chance that at least one catastrophic global event will take place in the next decade. But to localize this a bit, and for the sake of argument, let’s take their “status quo” (a modest median) probability calculation trained on the RAND-MIPT Terrorism Knowledge Base, and set up the conditional probability that there will be a terrorist happening of catastrophic proportion (p = 0.461) while I am in Nairobi (approx. 30/(365 × 10 yr) possible days; p = 0.008). The condition is fairly unlikely, but unfortunately increases when you factor in covariates like my general foreignness (a higher victim likelihood…bummer, p = 0.475) and the logic that violence occurring with agglutinative regularity will likely foster additional conflict and escalated tension:
“For instance, international terrorist events, in which the attacker and target are from different countries, comprise 12% of the RAND-MIPT database and exhibit a much heavier-tailed distribution, with α̂ = 1.93 ± 0.04 and x̂min = 1.”
EAC Map

Trying to control for multiple variables is complicated, so even a problem that can be structured as conditional (the probability of x given state n) struggles in this scenario. Does the probability of one state affect the other and yet still require factoring in both? And if so, perhaps it’s a joint probability issue between independent events. When the prediction derives from historical information, perhaps a Bayesian use of prior probabilities could be trained for future forecasting, but even then…complicated. And regardless, perhaps the historical data is limited in applicability due to scope; my definition of “catastrophic” scales down to the mere injury of a family member or friend, decidedly distant from the catastrophic proportions of 9/11 or any event with upwards of 1,000 fatalities used to make these kinds of probabilistic predictions.
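For what the naive independence version is worth, here’s a back-of-envelope combination of the numbers above (and nothing more rigorous): treat catastrophic events as a Poisson process over the decade and thin the rate down to the roughly 30 days I expect to be here.

```python
import math

# Back-of-envelope only: strong independence assumptions throughout.
p_decade = 0.461          # status-quo chance of >= 1 catastrophic event in 10 years
days_here = 30
decade_days = 365 * 10

# Recover the decade-long Poisson rate from p_decade, then thin it to my window.
rate_decade = -math.log(1 - p_decade)
rate_window = rate_decade * (days_here / decade_days)
p_window = 1 - math.exp(-rate_window)

print(f"{p_window:.4f}")  # about 0.005
```

Covariates like the foreignness figure would scale the chance of being a victim given an event, not the event rate itself, and none of this captures the dependence between attacks that makes the real numbers so much messier.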
Most of the math here is beyond my own research level, but one factor that strikes me as strange, given my work with crises in the context of maps, is the absence of particularly specific geo-data analysis. The East African Community (EAC) hasn’t been spared violence in the past few years, and sadly not in the last few months in Kenya, so I’m interested in reading about statistical modeling done on conflict probabilities with geo-specificity. Maybe this is a use case for the Wolfram Language when I’m brought out of my beta-in-waiting status; something to counter the pop-y around-the-world travel-time estimates and polar auto-opposite (antipode) calculations that have been so fun but maybe not particularly applicable to my current situation. What might be applicable is a computational knowledge engine that would assess my IP address, map it to a lat/long and then calculate how far I should move in the city to avoid conflict on a daily basis (*winks* at Wolfram friends).
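A toy sketch of the plumbing such a service would need, skipping the IP-geolocation step and using placeholder coordinates rather than real incident data: measure the great-circle distance from where I am to whatever has been reported nearby.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Placeholder coordinates: roughly central Nairobi vs. a hypothetical report.
me = (-1.2921, 36.8219)
reported_incident = (-1.2833, 36.8500)
print(f"{haversine_km(*me, *reported_incident):.1f} km away")
```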
To be fair, the Clauset and Woodard research honestly nods to the variables not completely considered in their analysis:
“Technology, population, culture and geopolitics are believed to exhibit nonstationary dynamics and these likely play some role in event severities…our approach is nonspatial and says little about where the event might occur…refinements will likely require strong assumptions about many context-specific factors” (Clauset + Woodard, 15).
Evangeline Adams (American Astrologer) explores a map

But I’m still wondering about alternatives to these estimations: what is the best research body to design these kinds of models, and who has the best open test data on the topic? I’m generally skeptical of predictions based on historic data without geo-reference these days, since so much of what happens depends on a cultural/historical/social context that is impossible to divorce from a particular place; the general forecast of a 19-46% chance of something happening in the next decade at a global scale is hard to conceptualize when you consider the umpteen geopolitical factors that might cluster likelihood around certain high-tension locales (Clauset + Woodard, 14). Perhaps there will one day be a service to prioritize these factors and covariates based on personalized social and surveilled data, as Seth’s Worry App concept suggests:
“Worry is the very first technological solution that maximizes the benefit of mankind’s oldest task: anxiety.
Using this flow of data, the Worry app computes the things you ought to be worried about. For example, instead of needlessly wasting time worrying about a random event like being bitten by a brown recluse spider, the Worry GPS system can point out that based on where you are, you’d be better off worrying about a different, unpreventable event like being killed by a fire hydrant flying through the air or perhaps by an angry rooster wielding a knife. The Worry app will alert you to that, which dramatically increases the effectiveness of your worrying.”
Destiny awaits

I’m into anxiety optimization and maximized thought efficiencies, perhaps a maturation of my adolescent ESP :)
In all seriousness, there is probably little statistical value in projecting these possibilities where I am currently. Fortune telling with so many variables can be complex, though the projections remain pretty unsettling.
But apart from all the speculative quant, there are some simple qualitative observations that I can make:
  • things are heating up every day here
  • any situation where “safety in numbers” is a paradox, because avoiding congregation points (malls, churches, etc.) has become a way of avoiding conflict, is probs bad
  • at the end of the day, statistical randomness is a really unfortunate jerk who even despite your best precautions can allow for some pretty horrific happenings (a carjacking happened in my inner circle this week, for example)
  • there’s something broken about the fact that the entire reported anti-terror budget of Nairobi is less than my current apartment’s rent outside the city (both cases supposedly sustaining a month’s worth of expenses). If we drill down on that statement semantically, and not quite statistically, we can conclude that the collective safety of a city in a time of “imminent” crisis is roughly worth a one bedroom apartment.

Mathematically calculated? Feebly. Statistically significant? Probably. Totally unfortunate? Predictably.


Crowd-ed + Coordinated: FOSS in Africa

“There’s no more powerful force in modern society than the news. It shapes how we see the world, what we judge to be good or bad, important or silly, right or wrong.”
~ Alain de Botton, “Have you Heard the News?” Psychologies, 4/2014

In the April 2014 issue of Psychologies Magazine, Alain de Botton discusses his new book The News: A User’s Manual and his philosophical reading of the news as trending toward the more personal and the more philosophically predictable. It’s perhaps significant that I’m reading this interview at an airport news stand, out of a pop magazine, rather than reading his book. More on this trend in abbreviated news ingest later…but for now, his points about our pot-boiler appetite for the news do well to introduce some of my recent professional happenings, perspectives on crowd-driven data journalism, and a particular perspective on crowd-data programs in Africa.

Nairobi - Crowdmap of Tweets

In Nairobi, while the news has been focused of late on other topics, the last two weeks of my workflow concentrated on two conferences, an IDLELO FOSS conference and a Global Innovation Competition for citizen-driven government initiatives; they share crowdsourcing and open journalism as themes. I had the pleasure of speaking at the IDLELO-06 conference, supporting Ms. Angela Odour’s talk on Ushahidi prior to preparing my own with James Raterno and Daniel Cheseret of Internews-KE. Of the few journalism organizations presenting, we applied the free-and-open-source-software (FOSS) theme to investigative news reporting and interactive political commentary. Our talk was a case study in health projects, demoing three interactive news stories from this past year at Internews-Kenya. Each interactive delved into some aspect of health monitoring in Kenya, spanning a spectrum of topics from medical services availability to mapping the outposts and effects of extractive industry across the country. While the details and data behind these stories are important and interesting, the presentation in each case was paramount; TL;DR the realities of healthcare and the economic/industrial health of the nation were best communicated via interactive charts and Internews’ series of Data Dredger infographics. The refrain of this and de Botton’s Psychologies perspective persists: attractive and interactive stories, stories that engage with personal, psychological topics, stories that illustrate rather than allude to data are driving our journalism programs and our teams.

Crowdsourcing Comic - XKCD

And part of that means democratizing the newsroom to a broader population of citizen journalists and crowdsourced contributors; part of this also means broadening our view of where data journalism trendsetting is happening in our world. But to persist on these points, let’s move off the African continent briefly. Among the most popular articles in the NY Times last year were approachable, interactive pieces; it’s not unreasonable to conclude that the appetite for news often bends to people’s visceral interests, regional perspectives, and even “popular biases,” as de Botton suggests in his Psychologies interview. Likewise, the Guardian’s most popular articles of 2013 (among the Snowden and Boston bombing coverage) include the following:

  • Why have young people in Japan stopped having sex?
    3.2m page views, 1,263 comments
  • Michael Douglas: Oral sex caused my cancer
    2.0m page views
  • Royal baby: Duchess of Cambridge gives birth to a boy – live
    1.5m page views

Global Innovation Challenge Crowd

This is not to suggest that the most popular news publications follow predominantly potboiler subject lines, but rather to note that there is a persistent appetite for pop culture throughout all news sources and dissemination platforms, irrespective of reputation. Mixed in with the seriousness and severity of crises worldwide, the presence of pop culture news commands significant attention, perhaps reflecting an appetite for popular and approachable media. When de Botton claims that “the ideal news would take into account people’s natural inclinations…it wouldn’t start with the wise, good, or serious outlooks,” I thought the judgement was a bit unfair and dismissive of journalism’s future, but maybe, on reflection, not so removed from reality in journalism’s present (Psychologies Magazine, 54).

This media appetite is agnostic to journalism hierarchies, persistently attracted to personalized stories that show how one girl lives in the NYC projects, or how a population’s accent differs according to regional divisions. We crave a personalized experience with the news even in the most distinguished publications; we crave a flat structure of open contribution, where the stories are interactive, where we can comment publicly in the thread following each post, where the content is sometimes crowdsourced, and the platforms are participatory. Our appetite for pop culture parallels publication output. In a digital media landscape where everyone from Buzzfeed to Fbook to O.K. Cupid has a data science team, our population of increasingly connected readers is interested in the personalized analytics of their networks, in the data science that drives our personal lives and pop culture as much as our professional publication platforms, and sometimes, in how all of these data fuse.

Lagos - Crowdmap of Tweets

One way to adapt to this is to invite more contributors into the news reporting community from the reported community; to flatten the reporting structure, to amplify the data-driven projects that drive the page view counts often used to index our community impact. Promoting “popular” media isn’t just about echoing celebrity gossip and simplified story-lines but rather about developing a sensitive authoring practice, crafting stories that readers can identify and interact with, and this trend is carrying into bootstrapped newsrooms across the African continent and throughout the world. In supplement to interviews, we crowdsource data collection in the way of Ushahidi; instead of the lone-wolf work of a relocated investigative journalist, we train teams of indigenous journalists to report on their own local communities in the way of Internews. I’m privileged to work with organizations actively contributing to this type of globalized citizen journalism and crowd-reporting, likewise privileged to work with journalists when I am at best an “outsider-[FOSS]-artist.”

This is not new science, of course; most established papers have data teams these days, and it’s not uncommon for teams of developer-journalists to collaborate on investigative pieces. But recognizing these trends as reflective of an interest in crowd-driven projects and citizen-journalism engagement globally is perhaps important and worth considering as we re-evaluate where journalism is, and where it is going.

Accra - Crowdmap of Tweets

Crowd-sourcing information, crowd-funding, and crowd-feedback loops in the journalism community are more popular than ever, and not just in the USA. Analytics permit us to track what our crowd of readers actually reads (or at least what they click on), and to adapt our stories and investigative practice to suit those interests. Though we still have a rockstar reporter hall-of-fame that celebrates individuals and their contributions to the industry, with data-driven projects we can now appreciate, more than ever, that often, and maybe always, the byline includes a team: a small crowd of developers-journalists-researchers working on a comprehensive and data-informed investigation.

“I doubt if it makes much difference, frankly, but at the margin I think that we’re moving to a kind of journalism that is more casual, more informal, more personal, and a very formal byline seems as out of place as a three-piece suit in the newsroom.”
~ Nicholas Kristof, “What’s Missing in my Byline,” New York Times: Opinion Pages, 1/2014

Tunis - Crowdmap of Tweets

And this isn’t only happening at the New York Times or The Economist; it’s happening in Africa too. This brings me to the second conference happening of the past two weeks of work. At this week’s Global Innovation Challenge in Nairobi, we’ve been working with teams of selected delegates from 10 countries around the world, teams who are working to connect their citizens more directly with their governments and foster policy change through open data. This type of effort can read as a quixotic ambition, but with developer- and data-driven programs, it is possible.

Johannesburg - Crowdmap of Tweets

Further, it’s noteworthy that all of the delegates are paired teams, not lone crusaders; these efforts are built on partnerships between multiple contributors (developers, political activists) and multiple institutions, on crowd-driven programs meant to collect a maximum of opinion and surface a population of opinions from a representative sample of constituents. Supported by Ushahidi and hosted by iHub, this week of conference talks, pitches, and programs is designed to foster more crowd- and community-driven data reporting across the globe, and to model the crowd-centric trends so observable in our increasingly personalized and popular media.

Crowd-driven journalism and FOSS initiatives have in one respect opened the community to a broader population of self-taught developers and scrappy reporters, and have also broadened the potential for citizen-sourced, -funded, and -voted journalism projects. The crowd will doubtless drive even more data projects in the future, and craft a more personalized and popular media with a global scope. Crowd + Africa doesn’t have to mean crisis mapping or violence; it can mean participatory reporting and progressive reform, it can mean a program of re:activism, or react-ivism, piloted by a crowd of programmers and a ragtag group of pirates and outsider journo-artists. We’re working to amplify the crowd, and data-driven newsrooms internationally, in keeping up with the [journalism] Joneses.

Ushahidi Ecosphere Diagram

To that end, and in conclusion, I leave you with a link to our Ushahidi community survey, an effort on our part to make crowdsourcing a part of our own analytics and feature development workflow. Please fill it out so that we might improve our software and help other investigative journalists spin up custom instances of geo-local data collection all over the world:

HELP US OUT, FILL THIS OUT:

CROWDMAP COMMUNITY SURVEY


Images in this post courtesy of XKCD, IDLELO06, Global Innovation Competition, and FloatingSheep.org (African tweetmaps)

 


Privacy in Post-Prism Mode

In the wake of Prism Break Up, a weekend of privacy and cyber-security events that I co-organized with Heather Dewey-Hagborg, Allison Burtch, and Ramsey Nasser almost a month ago, it seems appropriate to reflect on some of the weekend’s more salient takeaways [read a summary here], and otherwise provide a general recap of my privacy and security thoughts. My subsequent experience attending privacy and security (as well as taxonomy) sessions at MozFest informs this discussion about the problems with privacy, and how we can hope to define and defend it in the future. Likewise I’ve sprinkled in some good quotes and epic links collected from my meanderings through the unexpected brilliance of the wwworld.

TL;DR, this is a blogpost about how no one knows what privacy is, and the ambiguities of internet ethics make the maintenance of any value system almost impossible. It’s also a hopeful post (don’t despair!) about what we can do to create a better cryptocabulary and more secu-fluent world.

identicons

..::THE PRESS
darknets
Rare is the day that I don’t read an article on internet privacy, information security, or intellectual property…all things that weren’t necessarily anomalous in my news feed previously, but for their adjectives. Privacy, Security, and Property have long been active concerns in most news, law, public policy, and government information outlets, but our approach to defending and managing them has grown more complicated since the internet, since one (or two) laptops per [everybody], since we started as a domestic populace to live half our lives online. Clive Thompson’s recent Wired article on “Using Darknets to Foil the NSA” contributes to a body of literature now discussing the “dark[net] side” as a viable meeting space for the quotidian networking of “average” internet users. And these days, most of the privacy press I read isn’t in Wired. Current media fixation outside the standard news outlets also supports this swell in privacy/security concern. Two weeks ago, the Stop Watching Us Rally in DC, YACHT’s EFF support campaign, and umpteen blog posts on the topic pushed metadata discussions and black hat hackery into the realm of common parlance.

Perhaps our existing vocabulary is ill-suited to accommodate the ethical implications of privacy in post-digital environments, as this Necessary and Proportionate article does well to assert in its preamble and subsequent breakdown of privacy “principles”:

“Traditionally, the invasiveness of communications surveillance has been evaluated on the basis of artificial and formalistic categories. Existing legal frameworks distinguish between “content” or “non-content,” “subscriber information” or “metadata,” stored data or in transit data, data held in the home or in the possession of a third party service provider.[7] However, these distinctions are no longer appropriate for measuring the degree of the intrusion that communications surveillance makes into individuals’ private lives and associations.”

Perhaps our whole schema for conceptualizing values and rights needs some re-tooling to suit the digital folksonomy of our contemporary computer world. Perhaps we should all start participating in the assembly of that dictionary of terms. The refrain being: we need a defined terminology to reference in our defence of privacy, and everyone should be an editor.

identicons

..::PROMOTING A PRIVACY VOCABULARY
cryptocollage
As the catalogue of compromised security protocols grows and ignorance about security assurances with “safer” homespun systems persists, a more consistent program of secu education should develop, and a more consistent understanding of what we are seeking when we say “security” and “privacy” should preface that development. When we demand rights and respect for our ideas expressed publicly, we negotiate a spectrum of interests between two poles of extremity: censorship (restricting our rights to view and express content publicly) and surveillance (restricting our rights to view and express content privately). Our understanding of the relative rights provisions and restrictions in other nations informs our interest in what we feel entitled to as citizens, and part of the opening Prism Break Up panel almost a month ago was meant to address the legal defence of loosely defined values like privacy in the United States. In other nations, privacy’s importance and value occupy a more codified realm of articulated and provisioned rights, but in the States its value and definition remain nebulous. Throughout the initiatives we support as citizens, we seem broadly confused about whether privacy and security enforcement should champion the right to keep secrets or the right to expose them.

identicons

privacyart
..::THE PRECEDENT
In previous media generations, I believe this distinction was clearer, at least in the States. As champions of free speech and minority representation, US citizens found it easy to defend the rights of individuals and interrogate the motives of institutions. As individuals, we were entitled to privacy and autonomy, while institutions (backed by the power of a crowd and the tendency toward crowd-control/monopoly) were subject to scrutiny. On the internet, however, the distinction between individual and institution blurs. We are indeed entitled to some degree of privacy, anonymity, and autonomy on the internet, but how that articulates vis-à-vis previous policy and legacy scenarios in post-PRISM practice remains fuzzy. Should we not advocate for an alternative internet? Should we not participate where possible in a decentralized movement populated by subnodes and occupy.here projects and punctuated by the likes of redecentralization cypherpunks? Yes, certainly, but we should understand more about what values warrant re-definition in the new contexts they now occupy.

identicons

..::THE PROBLEMS
Privacy International Publication
Some of the challenges we must negotiate in defending privacy in newer contexts stem from the open culture of the internet, where individual privacy and innovation get muddled in a very public realm. A recent panel discussion on Open Source Art at the LISA conference echoed this idea, stressing the importance of developing open tools that embrace a collaborative ethos of iterative development and codifying how important such transparency and crowdsourcing efforts are to technology and software as we develop them today. Proprietary platforms and projects are still prevalent and productive, but few other industries outside of software have such a strong community founded on principles of openness and sharing. Concerns for privacy and security struggle with transparency for dominance in this domain, and the values that are appropriately and easily defended in traditional environments crumble a bit in the layered architecture of the internet. And it’s not just our personal privacy that often feels violated, but our public industry that is compromised by ambiguity. Confusion in patent law for software development and in privacy provisioning for information management engenders a host of anxieties unprecedented by previous eras of industry. The Wealth of Networks does well to treat this complicated context and the associated redefinition of values like “freedom” and “privacy”:

“An understanding of how we can think of this moment in terms of human freedom and development must transcend the particular traditions, both liberal and illiberal, of any single nation. The actual practice of freedom that we see emerging from the networked environment allows people to reach across national or social boundaries, across space and political division. It allows people to solve problems together in new associations that are outside the boundaries of formal, legal-political association. In this fluid social economic environment, the individual’s claims provide a moral anchor for considering the structures of power and opportunity, of freedom and well-being. Furthermore, while it is often convenient and widely accepted to treat organizations or communities as legal entities, as “persons,” they are not moral agents. Their role in an analysis of freedom and justice is derivative from their role—both enabling and constraining—as structuring context in which human beings, the actual moral agents of political economy, find themselves.”

bookstack
..::A MODEST PROPOSAL
With the crowdsourced efforts of many contributing individuals, we can perhaps work to clarify these ambiguities, at least the lexical ones. I’ll return to my previous suggestion that we all participate in the collaborative dictionary of privacy, if only to point out some holes in that request. Unfortunately, a lot of communities online are built like high school: there are cliques, the cool place to be is usually the most obscure, and while it might be the “nerds” ruling the roost, we still manage to build environments that aren’t so friendly for the average person. Transforming open environments into proprietary and restricted micro-communities is a pretty persistent part of the human experience; the internet, and privacy technologies, are not exempt. And a lot of crypto and security falls in that abyss of ambiguity that remains obscure for most. This paradox infiltrates all networked environments, and we’re always struggling with a complicated set of values. Anyone who saw The Social Network can infer that one of the biggest and most pervasive social networking tools on the planet was built with a nugget of inspiration based on the exclusivity, not the universality, of access. People wanted to join Facebook because it was exclusive to their campus, a private club of awesome. So, despite the open and participatory platform we tout in our promotion of crowdsourced efforts, it’s the invite-only, darknet, closed, under-surveilled environments that we crave. People fed up with the “everyman” environment of Craigslist might now turn to the invite-only security of Quentin’s Friends, where only members can invite you to join, and everyone is approximately 6 degrees from Quentin. Likewise, encrypted browsing is embraced in private darknets like Hyperboria, a peer-to-peer alternative internet that caps at a cool 500+ browsers, all invited by a member. In both cases, as in others, the values of “security” and “privacy” complicate because they now refute some of the open and transparent tenets of the environment they inhabit; now privacy is more exclusivity than obscurity, or at least, the means to both ends entwine.

When it comes to promoting privacy moving forward, we need to balance the attraction of exclusivity with the ethics of universality and transparency, and recognize that these are often at odds, or at the very least, swimming in the same ambiguous soup.

Still, and perhaps as a result of this value ambiguity, it has become more important than ever to involve larger populations in the conversation, if only to ensure that the privacy and security we defend are representative of the values of our populace and not just our buddylist. Some of the best crypto depends on the participation of more members, whether promoting a healthy block chain spread across a network of contributors or running the relay and exit nodes that route traffic for others, all contributing to an increasingly cosmopolitan constituency. Even decentralized currencies like Bitcoin, despite their emphasis on (understandable) privacy and anonymity, support an equal measure of universality and depend on a network of participants rather than a central authority to control the economy.

Smokescreen Privacy Software

In this space of privacy moderated by participatory networking, everyone can contribute to the cryptocabulary. Some aspects of digital privacy and security should remain accessible. Crypto technology is complicated and, at times, frustrating to implement, but its understanding and use should not be restricted to technologists if its program is to continue along a progressive path. Some of the most urgent problems with privacy and security involve defining a vocabulary for defending those values, and that definition doesn’t need to be written in code. Human-readable policy programming is just as important, and all have the potential to contribute. You can, and should, participate in meetups and workshops related to cybersecu, if only to promote the citizen-crypto community and advance your own understanding of why these measures are necessary, valued, and supported by our current systems, and where the gaps are in all three. You can set up PGP, attend a cryptoparty, participate in the crypto efforts that will protect others by normalizing privacy and security provisioning. We can better understand what’s happening in our browser, as Mozilla’s Lightbeam extension will help us do. We can better obfuscate our ad clicks via Dr. Helen Nissenbaum’s Adnostic app, which autoclicks all ads and then provides you a history of your automagic patronage. We can contribute to a better web world where privacy policy is in both common and informed parlance, uninhibited by values we demand but cannot defend. It’s time to refine our valucabulary, crypto in tow.
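As one small example of how approachable the building blocks can be, here’s a minimal sketch of encrypting a note with the third-party Python cryptography library. It isn’t PGP and it won’t hide your metadata, but the core gestures, key, encrypt, decrypt, fit in a few lines:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key once and store it somewhere safer than the note itself.
key = Fernet.generate_key()
box = Fernet(key)

token = box.encrypt(b"meet at the cryptoparty, 7pm")
print(token)               # ciphertext, safe to sync or email
print(box.decrypt(token))  # b'meet at the cryptoparty, 7pm'
```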

I’ve opened a document on Editorially to version the current definition of privacy, please contribute, ping me if you’d like to be added as a collaborator or just add a comment to the page.

identicons

..::READ ON:
obscurity


New Economies of Innovation: Value the Tacit, Trash the Tangible

This is a blog post about economies of technology. It’s long, so it starts out with three concept anec-quotes and works its way to a series of bracketed themes: innovation + enterprise.

# Innovation

In a February 2013 interview with Wired, Larry Page (Google co-founder) commented on Google X and paths to innovation:

When I was growing up, I wanted to be an inventor. Then I realized that there’s a lot of sad stories about inventors like Nikola Tesla, amazing people who didn’t have much impact because they never turned their inventions into businesses.

Feb. 2013, Steven Levy, “7 Massive Ideas that Could Change the World”

Let’s ignore that Tesla was in any way slighted as yet another “inventor” who lacked “impact” (WTF) and proceed. This comment led me to question whether we need to monetize to achieve, and how we create healthy economies for qualities as ill-defined as “innovation” or “integrity.” Maybe innovation alone is an opal, not a diamond: beautiful and valuable to be sure, but unless someone contrives rarity or an economy (ahem, De Beers) around it, it won’t be nearly as rad. So, can we build a business on intangibles and “values” that as yet have no monetary equivalent?

# Enterprise

Suketu Gandhi comments on this in The Wall Street Journal’s Deloitte Insight, defining the “postdigital enterprise” as one where innovators can either “take your existing processes and apply these new technologies to them,” or rethink the process that technology enables you to enact. In contemporary (apparently “postdigital”) enterprise, maybe the application of technologies to process gives innovation economic weight. Do we need business process to innovate, and what do we value in a digital world where lots of interactions and transactions lack the physicality of “real” life? Gandhi also cited “the big five disruptive technologies,” three of which struck me as strangely nebulous, not so much ‘technologies’ as vague ‘values’ of interaction: “social,” “mobility,” “cyber security.” The ability to be social, mobile, and secure seemed to bleed outside the bounds of “technology” as I would typically define it, and venture into the fuzzy region of human interactions and freedoms in the physical world. How do we monetize these, and should we?

# Monetization

To that end, Ecologies of Knowing blogger Pavel asserted that “much of the ubiquity of computing today is of course driven by opportunities to monetize social interactions and shifts in cultural perception.” As a software architect, I get paid to build things that have no physical product; my work is as intangible as the concepts whose value I’m now interrogating. While part of me is proud that so much of my life is “priceless,” part of me is a bit distressed that I haven’t founded a business on the obscure intangibles and important aspects of my life. How can we re:define an economy to appropriately capture what we value? Can we bank on innovation, social mobility, and security without building an enterprise? Or do ideas lack value when they lack an emphasis on economy?

Taken together, all of these anec-quotes coalesce in the topics at hand for this blogpost: bitcoins, cultural [in]security currency, innovation ecologies/economies, and basically banking on intangibles over bills. Let’s treat each in turn.

## Bitcoin to Begin

A few weeks ago, I hosted a Stereo Semantics radio show about new forms of banking. I’m interested in the development of independent economies, new currencies of exchange appropriate for our internet and IRL environments. Part and parcel of this obsession is my newfound interest in Bitcoin. As per the consistent popularity of Bitcoin in contemporary media, I’ve built a short URList (my new favorite OSStartup) on the topic.

18 Links from: Bitcoins

moonjelly, via Urlist

To take it further, and more topically, a recent NY Times article treated Bitcoin forays into governmental policy and Bitcoin progress toward legitimacy in exchange-traded funding.

The Times tempered this topic judiciously with an explanation of Bitcoin, and my URList includes a series of past and real-time updated publications/interactives focused on the topic. IRL, I’ve attended a few meetups on Bitcoin startup philosophy and can submit from my cursory exploration that the Bitcoin ecosystem is pretty nascent, warbly in the real world, even now, long after its debut. It’s hard to codify what conditions and cooperation merit my financial “trust,” but I find that most startups built on Bitcoin fall into a specious category, less traveled by other landscapes of internet innovation.
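For what it’s worth, the mechanism Bitcoin uses in place of institutional trust is easy to caricature in code. A toy proof-of-work sketch (nothing like production mining, and with a trivially low difficulty) shows how a network can agree on history by making it expensive to rewrite:

```python
import hashlib
import json

def mine_block(prev_hash, transactions, difficulty=4):
    """Toy proof-of-work: find a nonce whose block hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        block = json.dumps(
            {"prev": prev_hash, "tx": transactions, "nonce": nonce},
            sort_keys=True,
        )
        digest = hashlib.sha256(block.encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return digest, nonce
        nonce += 1

genesis, _ = mine_block("0" * 64, ["coinbase -> alice"])
block_two, _ = mine_block(genesis, ["alice -> bob: 0.5"])
print(block_two)
```

Each block commits to the hash of the one before it, so tampering with an old transaction means redoing all the work that came after; that, plus a network of peers keeping the longest valid chain, is the whole trust proposition.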

## [In]security Currency

So, in prefacing with this artificial currency of contemporary fascination, I started thinking about other domains where potential economies could be crafted, and I found that defining values like “trustworthiness,” “integrity,” and “security” also meandered in a nebulous and ill-articulated part of my consciousness. A recent MoMA PS1 panel discussion on Privacy and [National] Security further forked this thought to consider a slurry of “rights” billed to US citizens but now in question in a post-PRISM world. What do we value? What are our intangible freedoms that form the substrate of our cultural currency? Services like Highlig.ht and Sitegeist would suggest that we value proximous information over privacy. In promotional material, the former markets itself as a “sixth sense for the world around you, showing your hidden connections, and making your day more fun.” The latter bills itself (ha) as “the app present[ing] solid data in a simple at-a-glance format to help you tap into the pulse of your location.” Sounds exciting, discovering a secret garden of semiotics and site-specific information? How exhilarating! Until a third party starts tracking it, and determines your habits, patterns, behaviors, your prospective memories, your potential to commit thoughtcrime… so how do we balance an interest in information with a right to resist being polled? Right now, we don’t.

A recent app built by Open Data City in Germany for a local conference tracks population movements in a timeseries visualization hosted here and blogged about here. ODC’s sensors detected passive interactions with mobile devices on the conference floor via each device’s unique MAC address. The visualized animation of conference traffic from sensor perception point to point is stellar and stunning, but also scary. What’s disturbing about this isn’t just the tracking of these data points; more incriminating and valuable metadata is captured daily by our social applications and email clients, later mined by third-party services that sell us products and promotions. What’s disturbing is that unlike those social apps that we opt into voluntarily, if idiotically, on the daily, these sensors were tracking participants without explicit consent; if you had a device (phone, laptop, tablet), you were traceable, part of someone else’s time series art project. Potentially innocuous, since MAC addresses were probably anonymized by some hash and probably difficult to relate to your identity, but what about the other traffic patterns evident on your device? Could tweets, correspondence, conversations be layered over MAC address traffic to trace aspects of your “private” interactions? :/ The project authors allude to this in their blog post:

One thing is clear: the application displays the duality of such records. On the one hand, it is clear what data traces you leave, often unconsciously. We therefore hope that the application will help raise awareness for the protection of your own privacy, and perhaps make you think, for once, about why someone offers “free wifi” before you log on.

To the re:log website. Realized by OpenDataCity. Supported by picocell and newthinking. Application licensed under CC-BY 3.0.
But is awareness of this enough? And are we more jazzed by the “Open Data [City]” potential of these apps than by the once-valued privacy we enjoyed in comparative anonymity? Further, how does “freedom” articulate in our ecology of networked intelligence? Is the newfound “freedom” afforded by the “open” arrangement of the internet equivalent to the right to hide or the right to expose what’s been hidden? Is it the right to keep secrets or the right to reveal them? Are these even of value? And further, how do we re:define value to suit a digital landscape?
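Incidentally, on the “probably anonymized by some hash” aside above: a bare hash is thin protection for something as enumerable as a MAC address. A minimal sketch of the problem (the addresses here are made up); a per-deployment salt, truncation, or rotating identifiers would make this much harder:

```python
import hashlib
from itertools import product

def hash_mac(mac: str) -> str:
    return hashlib.sha256(mac.encode()).hexdigest()

target = hash_mac("a4:5e:60:c2:7b:19")   # what a sensor might store as "anonymous"

# Brute-force the last three octets of a known vendor prefix (~16.7M guesses,
# a couple of minutes on a laptop) to recover the original address.
prefix = "a4:5e:60"
for a, b, c in product(range(256), repeat=3):
    candidate = f"{prefix}:{a:02x}:{b:02x}:{c:02x}"
    if hash_mac(candidate) == target:
        print("recovered:", candidate)
        break
```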

## Innovation Economies

In defense of “open data,” my fascination with Bitcoin follows from a persistent interest in open source and in internet innovations toward replication of analog concepts. Not going to lie, I’m totally an open data/knowledge/info fangirl. I’ve enjoyed the transition of Encyclopedias to Wikipedias, of gift economies founded in the likes of Burning Man to online exchange platforms like TimeBanks; I can dig it. There’s an intangible quality to the trade and barter of “time” or “security” over monetary payment, and perhaps those tacit economies best express themselves in the bit- and byte-built world of the internet. Maybe we need to start thinking about cultural economies, the tacit luxuries that we value for their rarity and not necessarily their potential to facilitate purchase. Intangibles like “freedom,” “privacy,” and “security” are governed by their own economies based on contemporary scarcity. If scarcity and control are the determinants of value and weight, then privacy is the gem in the rough of our current monetary systems.

bitcointransaction

So what’s new about this? Are bitcoins really that different from current economies? Maybe not, but they’re a provocative start to thinking about tacit economies and the value-making of intangibles. To return to the article that inaugurated this blogpost, I’ll revisit the Larry Page interview, if only to root this endless econ-odyssey in a more agreeable symmetry. In response to what he envisions as successful ideas and company concepts, Page asserted that “[y]ou just need to have the conviction to make a long-term investment and to believe that things could be a lot better.” Will the world be better with investment in a more artificial econ? Will I be more content when currency codifies not as a physical bill but as an ephemeral bit? Will that make me appreciate that money really bears little of the emotional weight that I’ve applied to it,  and that intangible and ill-defined values and virtues warrant a more miserly defense than I’ve ever invested in them? Maybe, a bit[coin]…

## Banking on Intangibles

To conclude, I’m not alone in recognizing the impact of bitcoin currency on our potential economic future, nor am I particularly brilliant at applying economic social science to even more subjective qualities of “innovation,” “privacy,” “safety” and “security,” but it’s comforting to read how new systems of value are developing in tandem with technological innovation. Their access points are becoming increasingly available to a pedestrian public, but new post-digital economies demand an understanding of what we value and how we define the ephemeral.  Do we view privacy and innovation as valuable independent of a price point applied post-facto? And as we’re building these economies, I’m not sure how we’ll incorporate those ethics and morals into the “monetizable” and “business-driven” soup of innovation.

Throughout Who Owns the Future?, Jaron Lanier comments on this relationship between economy and digital society, and the cost of “free” information to social and cultural constructs. As citizens of a digitally-driven society, how do we resist violations of our intangible values via capitalization on our social, mobile, and [in]secure interactions? Should we embrace a new economy that appreciates exchanges of ideas and information, that values innovation without insisting on its monetization? Come check out Lanier’s talk at NYPL in October to find out, and in the meantime, let me close with the indubitable paraphrased prescience of one of my favorite poets:

I like to think

(it has to be!)

of a cybernetic ec[onom]y

where we are free of our labors

and joined back to nature,

returned to our mammal brothers and sisters,

and all watched over

by machines of loving grace.


Archival Impulses

Lida Moser_Judy and the Boys

As a librarian, rare is the occasion when I don’t have archives on the brain. Personal or public, self-maintained or crowdsourced, collections have become an almost unconscious substrate of our technological interactions. Any collections management software or CMS, from commercial entities like Etsy, to institutional ones like Collection Space or Collective Access, to social ones like Pinterest or the Retronaut, is cut from a similar archival cloth.
JamelShabazz_22

Inspired by an impulse to preserve, capture, and coordinate our collections in an online environment, each of these examples performs an archival function even when privileging contemporary content (re: commercial shopping sites, pinterests, instagram, umpteen social networks). And I’m not complaining, just collating. We’ve developed software to help us manage the overwhelming information on the internet without necessarily acknowledging the debt that practice bears to archival impulse. We’ve adapted social media outlets like Facebook and twitter to record our thoughts and internet actions on a trackable timeline, to trace our trajectory from digital birth to present day. So here are some examples of how we archive in a local context, kind of a hodge-podgey list with a personal bias. To couple with this theme and locality, I’ve added some photos from Lida Moser’s (namesake whoot) and Jamel Shabazz’s work (which I had the privilege of cataloging at the Brooklyn Public Library), archives FTW.
Jamel Shabazz

Thesis: So I finished my thesis, yay, and promised to push it to public criticism, creative commons, accolades (probably!). The title and topic are related to archives, predictably, so feel free to browse it on GitHub and pull-request some suggestions. It’s a bit of a tome, appropriate for somnambulant wanderings into Archival Ether.
 
Radio Show Wrapup: Last week I also wrapped up the second season of my radio show, Stereo Semantics. Check out the archived episodes, tracklists, and semantic node-edge maps for season 2 here and for season 1 here. Stay tuned for Algorhythmic (a math rock and generative music show) and AMSRad.io, my upcoming shows. Props to @jakeporwary for the Math Rock push.
 
emptiness-undated-001

Rhizome 7 on 7 Conference: Each year Rhizome teams up artists and technologists for a day of conversation and innovation, and this year produced some slick archival projects. Read the editorial here. Anyway, friendfracker was a provocative project about automatically deleting a bit of your social footprint, and Dabit was an admirable donation project soliciting voluntary charitable donations in a kind of lottery system that caches the donations for the day and awards one random volunteer half of the proceeds (the other half going to charity). For even more peripheral archival talk, one project addressed information “obesity” and “overload,” and another called out the “loop” as an attractive and cathartic paradigm in contemporary culture, perhaps one worth investigating as it pertains to how we plan for posterity, how we catalog and store our digital selves.
 
LISA: The recent Leaders in Software and Art meetups introduced me to some stellar social archives. Exemplary of this, Nick Dangerfield of Part/Particle demoed a Chrome-based creative collage and stencil app called tobe.us. It’s beta, but if you’re interested you can create an account here: http://tobe.us/join/lisa. Images and gifs can be dropped from the library or desktop, altered or instagrammed into stencils, music added from the desktop and video from vimeo/youtube. You can create and share boards, and they’re adding features. I made one to show my apartment to potential viewers, adding in some cat gifs; it’s like dragndrop myspace retro fetishism.
tobe.us
Likewise, Paolo Cirio had a few interesting “disruptive” projects manipulating public data so as to point to the status of privacy in our wwworld.
 jamel_shabazz_boys of brooklyn
Past Perfect: This year’s Tribeca Hacks Festival revealed an ongoing archival project about capturing memories and visually rendering them in an online video archive. Entitled Past Perfect, the project solicits “memories” from craigslist volunteers and then visually renders them in video form. Check out the project to schedule a memory consultation here.
I love your work: Last week I had a blissful 24 hours of access to an archive of human emotions, courtesy of 6+ hours of footage about nine women who make lesbian porn. A catalogue of interviews with these women coupled with an exquisite UI, ‘I love your work‘ made for a really polished web archive. I wish I had a few more hours to explore, and a faceted browse function, but otherwise I recommend the project; it pairs well with Cowbird.
Screen Shot 2013-05-04 at 6.54.54 PM
Science Studio: A recent project I’m proud to have kickstarted, Science Studio provides an archive of science-related multimedia content on the web. I’ve been enjoying the upvoted and crowdsourced podcasts and music selections over my coffitivity since launch. This one was particularly touching about parasites and “holes in the net,” however you might interpret it.
 LidaMoser_alt Judy and the Boys
ITP Spring Show: Lots of rad projects were on display at the typically eclectic, variably impressive NYU ITP Show this year. One of my favorites (#typical) was Matt Epler’s Kinograph film preservation project. Impressive as much for its utility as for its stellar execution, Kinograph is a system Epler designed and built to affordably digitize film frame by frame.
23andMe: My long-awaited 23andMe results arrived, clocking me at an X2b genomic profile on my maternal side and an “unknown” on my paternal. :/ Perhaps one of the more disappointing personal archives I’ve explored this week, though the labs projects included a downloadable sonification of my genome, which is now my ringtone.
image
Patents and IP Protection: Maybe one of the more yawn-worthy topics for most, the evolution of software patents and cyber security kind of settles in the same part of my brain where archival impulses incubate. I’m pretty preoccupied by cyber secu and citizen (computer) science. While I won’t bore you, as I’ve blogged about this before for Girl Develop It, I think it’s worth mentioning here that we’re at a beautiful precipice in the reconciling of intellectual ingenuity and open source ethos in developing software. I’m looking forward to participating in conversations about this topic and further witnessing developments as regulations and practices codify. For a glimpse of internet security history look here. For further research on algorhythmic patenting and happenings, retro-follow the Governing Algorithms conference (and a few of my below-captioned tweets from last week). And for a way to involve yourself in the immediate, peruse this past week’s happenings here: http://devsbuild.it/devpatentsummit/nyc.
Lida Moser_Street Scene

Upcoming: World Science Festival inaugurates a summer of promise, bringing more exciting things in the coming week. I’ll also be attending Siggraph-LA and OFFF in the coming months. Looking forward to those post-scripts as they come. :) Thanks for reading. PPS: see public service message below.

The Lab for Robotics Education is hosting a free summer robotics program for high school students in NYC! Applications are now open, for more details visit http://www.thelare.org


DIY Data Science + De:bugging Biometrics: balancing bioart, sensor intel, and responsive cityscapes

IMG_0297 (1)IMG_0387

A few recent articles about neuro- and cognitive science and last month’s GenSpace talk have sparked my curiosity about the dual capacity of sensor networks to empower a sentient cityscape and to enable biometric surveillance. The former is a rather rad consequence of a more digitally developed infrastructure; the latter is the horror story that hangs on our most dystopic sci-fi futures. So what is the balance when dealing with art and code? How do we manage the development of new technologies which allow us hyper-personal transactions at the expense of anonymity?


According to an article in Science Daily, researchers at Cornell have started to use fMRI scans to predict not just how a person is processing information and in what neurological buckets the activity is dominant, but even who a person is thinking about. Not to be outdone, MIT recently went public with some MATLAB code that uses an Eulerian algorithm to amplify pixels and detect pulse and subcutaneous activity from video files. Meanwhile, what about the prophesied Google Glass and its potential to kickstart ‘surveillance’ as a cinema sub-genre? In all cases, we have new windows onto our own biology via second-hand technological captures. While primarily scientific, these developments have implications for imaging outside of the scientific realm; what new visual art projects might also be augmented by these processing scripts? How will bioart pick up the scientific slack and use open-sourced code to develop critical artscience?
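I haven’t dug through MIT’s MATLAB release, but the core trick, as I understand it, goes roughly like this: treat each pixel as a time series, bandpass-filter it around plausible heart-rate frequencies, amplify that band, and add it back. Here’s a rough numpy/scipy sketch of that idea run on a synthetic grayscale clip (my paraphrase, not the released code):

```python
# Rough sketch of Eulerian-style magnification: bandpass each pixel's time
# series around heart-rate frequencies and amplify the result. This is a
# paraphrase of the idea, not MIT's released MATLAB implementation.
import numpy as np
from scipy.signal import butter, filtfilt

FPS = 30.0             # frames per second of the clip
LOW, HIGH = 0.8, 2.0   # roughly 48-120 beats per minute, in Hz
ALPHA = 50.0           # amplification factor

def magnify(frames, fps=FPS, low=LOW, high=HIGH, alpha=ALPHA):
    """frames: float array of shape (n_frames, height, width)."""
    nyquist = fps / 2.0
    b, a = butter(2, [low / nyquist, high / nyquist], btype="band")
    pulse_band = filtfilt(b, a, frames, axis=0)  # filter along time, per pixel
    return frames + alpha * pulse_band

# Synthetic demo: a flat gray video with a faint 1.2 Hz "pulse" baked in.
t = np.arange(300) / FPS
pulse = 0.001 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
video = 0.5 * np.ones((300, 8, 8)) + pulse
amplified = magnify(video)
print(amplified.mean(axis=(1, 2))[:5])  # the pulse now shows up in frame means
```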

When challenged to hack away and build something in the theme of GodMode for 319 Scholes’ Art Hack Day in Brooklyn this weekend, a few of us decided to tackle biometrics and surveillance with a spoof film (thanks, MM Moser, for the logo aid), garnering a bit of nerdfamery and some cool coverage along the way (Creator’s Project | Fast.co). Our project, DIY Spoofing for DNA Counter-surveillance, was shot, edited, and exhibited in a slurried 36-hour sprint, adapting some Gattaca-like insecurities about the trajectory of genetic surveillance. Check out the project here, and browse the Vimeo links to research participant hackers and our other press pages. The whole experience of hacker/artist immersion was infectiously inspiring and full of smart kids in fancy kicks #godmode. In the open source spirit, we submitted the video as a set of DIY protips on how to blend your DNA with that of a friend, then shed both samples simultaneously, to scramble surveillance readings. However fun and simple our execution, the themes of human tracking via biometric analysis and the role of the post-modern bioartist in critically questioning this tracking were clear. We were all amateurs in many ways, but the ubiquity of sensing technologies and send-away DNA analysis services in our modern cities points to the validity of our concept. How might a project like this scale beyond a weekend hackathon and a posting on Instructables? How might these themes persist as they propagate in our cities?

Case in point, this week’s submissions to the NYC Reinvent Payphones project solicited several proposals for more “aware” telephone technologies. My company was asked to develop ways to augment underutilized street furniture, and part of this process involved an impressive network of sensing technologies to permit data collection and a more personalized and locally sensitive experience. The implication was that soon these ‘augmented’ booths might permit not only private phone calls but intimate and hyper-personalized transactions, automating and diffusing the pressure of city services such as polling and election activities, postal services, and the DMV. Oh my.

Check out the press: Engadget | the Verge | NY Daily News | the Gothamist | FastCo!

Please vote for our video here so that we can transform the NYC payphones!


But what if authentication becomes biometric? Is that fair? Do we want all of our identification to be linked to our biology? If someone spoofs our biological identity rather than spoofing surveillance, are we comfortable with allowing them access to our civic, political, and personal lives? Probably not, but we probably will be soon enough. Doubtless many people will opt to log in with their default bio-credentials when possible, forgetting that these features, once hacked, cannot be scrambled or reissued, MD5-hashed and emailed again to our ‘private’ accounts in the physical world as they can in the digital. Moral of the story? Keep tabs on your preference settings, keep your swap/spoof friends close, and your privacy radar closer.
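To make the “cannot be reissued” point concrete, here’s a tiny sketch of my own (hypothetical values throughout, no real biometric system implied): a breached password gets rotated by hashing a brand-new secret, while a fingerprint template is the same input forever, so re-salting the stored hash does nothing to lock out whoever stole the raw biometric.

```python
# Illustration: passwords can be rotated after a breach; biometrics cannot.
# All values are hypothetical; no real credential or template format implied.
import hashlib
import os

def store_credential(secret):
    """Return (salt, digest) for a stored credential."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + secret).hexdigest()

def verify(secret, salt, digest):
    return hashlib.sha256(salt + secret).hexdigest() == digest

# Password: after a leak, the user simply picks a new secret.
salt, digest = store_credential(b"a brand-new passphrase")
print(verify(b"the old leaked passphrase", salt, digest))  # False: attacker locked out

# Biometric: the "secret" never changes, so reissuing the stored hash
# doesn't help, because the attacker still holds the original input.
fingerprint = b"fixed-biometric-template-bytes"
salt, digest = store_credential(fingerprint)  # "reissued" with a fresh salt
print(verify(fingerprint, salt, digest))      # True: a stolen template still works
```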

Follow the Art Hack Day Press: Animal | Le Nouvel Observateur



Mastering the Wizardry of CS and Edu ++

There has been a dramatic lag in my contributions to this site of late; loads of new projects, full-time work, and sporadic thesis guilt conspired to prevent a posting until now.

A few things I’m up to:

Girl Develop It Code and Coffee (February 5th)

Radio Show (Sundays, 9-10PM)

ArtSec Demo Projection

Meanwhile, my internet activities are pretty consistent off this site. Look for my posts here (Control Group, Girl Develop It), some comments in the Google group (Art Sec), and check out my archived radio shows here. I’ve made a few videos with cool prototypes in physical computing as well, including LED blink programs. Most of the photos in this post are culled from recent events and otherwise awesome goings-on. With that explained, I’ll proceed to some more substantive commentary.

For this post I want to focus on a consistent preoccupation of mine, one that I think bridges all of the activities enumerated above: education. As a librarian, I find it increasingly hard to abandon the idea of research for a purpose, an idea pretty ubiquitous in education: consistent and independent edification. I’ve been collecting articles and thinking about this for a while.

Human Face of Big Data

At Bloomberg’s World IA Day (February 9th), Rick Smolan talked about the positive impact of data analysis en masse: for building the potential of networked intelligence, for translating ugly data into meaningful information, and for contributing to the global nervous system that the internet provides. With large amounts of collective data about our population and behaviors, we are actively architecting an engine for understanding our world. Education in how to process information of this complexity and quantity is key. So a consistent topic of discussion was: what is the right balance of education in Information Architecture?

But perhaps more pertinent and consistent was the question of how soon to educate, and in what sequence of curricula we might begin teaching about big data and programming. Should we begin by reinforcing mathematics and logic because they are the foundation of careful thought in computation? Should we jump to scripting, robots, and physical computing because they are the jazzy IRL incarnations of programming? Should we leave it to students and promote programming and data fluency in general?

Education and Outreach: More technicolor data plz!

Where should we start? Of late I’ve been engaged in some peripheral education exercises; wrapping up as a metadata T.A. at Pratt left me with a nostalgia for teaching, and the above event list is just a catalog of my tangential pursuits in information nerdery. It made me think about what I might be competent to teach and what I would want to teach, and my work with Girl Develop It has only continually affirmed that I want to work with data and I want to teach people how to use it for the types of progressive applications that Smolan talks about in his The Human Face of Big Data. Programming is inching toward ubiquity even in obligatory curricula, and even a basic understanding of how to structure and format data for consumption will soon be a prerequisite for a high school curriculum in CS. This was a topic I revisited a few weeks ago when I taught a class at the Academy for Software Engineering, a new Manhattan high school focused on teaching programming in tandem with typical coursework. Part of their Functions and Data Analysis curriculum, the class was about teaching 9th graders how to approach the ubiquity and enormity of the data output they unconsciously contribute to on the daily. Most of the class was just straight-up Big Data, but understanding how to structure data, how to architect and organize information for usability, is an interdisciplinary skill worth cultivating at all educational levels, whether professional (as at World IA Day), collegiate, or earlier.

Check out the presentation here: CGBigData-AFSE-1.3.13

Likewise, at this month’s Open Data Day, I focused on building out a series of collaborative iPython Notebooks in PiCloud to create the skeleton of a collaborative programming curriculum in Python for Girl Develop It. Ideally, the notebooks would allow me to segment blocks of code and wrap them in a user-friendly set of README-like comments in markdown. I could then share the notebooks with students and collaborators, who could run the code blocks individually and work through the interactive lesson plan before them as a UI-friendly literate programming environment. Privileging literacy over obscurity here is key to encouraging new programmers.
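As a rough sketch of what I mean (built with the nbformat library rather than PiCloud’s interface, and with a made-up lesson for content), assembling one of these literate lesson notebooks programmatically looks something like this:

```python
# Sketch of a literate lesson notebook: markdown explanation cells wrapped
# around small runnable code cells. Uses nbformat; the lesson content and
# file name are placeholders.
import nbformat
from nbformat.v4 import new_notebook, new_markdown_cell, new_code_cell

cells = [
    new_markdown_cell(
        "# Lesson 1: Lists\n"
        "Run the cell below, then change the list and run it again."
    ),
    new_code_cell(
        "numbers = [3, 1, 4, 1, 5]\n"
        "print(sorted(numbers))"
    ),
    new_markdown_cell("**Exercise:** make `numbers` sort in reverse order."),
]

nb = new_notebook(cells=cells)
nbformat.write(nb, "gdi_python_lesson_01.ipynb")
print("wrote", len(nb.cells), "cells")
```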

Working hard at being a nerd

So, in considering all of the above, I naturally thought about my own habits of continuous education since college, about how I’ve supplemented my traditional curriculum to afford forays into CS and programming when that was never my primary program of study, and about who encouraged this study and what kept me going.

Were I teaching a college course in information architecture, I would teach my students to…

  • pursue independent study (Rare Book School/Hacker School, code.org, Codecademy)
  • mentor and expect reciprocal mentorship from your superiors
  • participate in regular portfolio critique as an exercise
  • learn something outside of the nebulous field you participate in professionally, because even abbreviated variety in your intelligence yields soundbites that are surprisingly persuasive
  • learn a really lean/agile process (aside from learning more about accessibility)
  • design for extremity to outperform your use cases; you will never be disappointed, and you can scale this practice with experience

Pitching ideas at Open Data Day NYC

The reality is that most of the brilliant things that develop from your education after age 21 are probably things you designed and built yourself. Honing your skill set through regular exercises outside of your traditional workflows (extra classes, hackathons, meetups) is an essential part of the continuous learning process. One of the unspoken (or maybe spoken) refrains of graduate school is that you don’t really need to go to grad school (something you realize inevitably, and only while you’re there). Most education is just a framework for realizing your own potential; the older you are, the more apparent this becomes, and the more you must make an independent effort to educate yourself outside of an obligatory education track. Encouragement can help (see the code.org video or the Take the Pledge series from CS Ed Week – look out for my cameo!):

As a concluding point, I used to think that people who defended “liberal arts education” were trying to justify their own youthful unprofessional orientation, but I have come to recognize that the peculiar demands of most professions are irrelevant if you fail to communicate and complicate your own ideas. This is something the liberal arts teach you: how to build on your own concepts and inform or affirm them with research and critical theory. Intelligent people are remarkable problem solvers. If you train an intelligent person to approach your problem set, they will make progress toward a solution; diversity in education enriches this capacity. The answer to questions of creativity and a more informed approach to architecture is anchored in an independent and continuous education.


Absentee Archiving: why autumn is the most Occupied of seasons

Apologies, I’ve been an absentee archivist for the past month, overwhelmed as I am with all of the new excitement that the Fall semester brings. I’m writing now to announce a brief blog hiatus thanks to my thesis (yikes!) and guest blogging activity, which will take writing precedence until I submit in (gasp) December. A big part of my recent activity has been some fumbling attempts at front-end programming and some event planning for Girl Develop It, the non-profit I volunteer for that teaches women how to code in low-cost classes; accordingly, I’ve peppered this post with graphs I’ve charted (thank you, Michael, for showing me the magic of HighCharts) and female dev-ful events I’ve hosted. As this is an all-over-the-place post, I’ve tagged it up with some tagging refs. If you’ve been following this blog *applause*, you will know my affinity for tags (these, and these, and these) in all of their semantic iterations (as per a previous blog post). What follows are some bulleted updates on upcoming excitement.

  • Archives Documentary: Thanks to some friends and recent side projects, I’m increasingly fascinated by 3rd-party archive projects. A friend at Eyebeam is in residency to create a documentary around themes of preservation of internet memory (and meme-ory). I recently gave a tour of the internet, and have been following the blog (http://archivefilm.tumblr.com/). If you dig data, you should too.

  • Radio Show: My show is now underway *whoot*. Look under the projects tab (Projects > Radio) to find my archived episodes throughout the next few months. Inspired by the Semantic Web and that ’70s show Connections, I DJ Stereo Semantics, an experiment in sonic degrees of separation. Sundays, 9-10pm EST.
  • Art/Education Projects: Game of Phones; I’ve been the lucky lady added to the Game of Phones queue, and now that I’m in, oh man, it’s addictive. Rather refreshing to watch actual phone use supersede all of the killer apps that now bog down my “smart[er?]” phone (thanks, David Lublin). Inspired by some cool open data postings on the ArtSec (Art + Security, you’ll know it from #artstech fame) Google Group, I worked with a Miso/HighCharts stack to visualize some graffiti tagging data from the NYC open data portal (thanks, Michael Keller, for the R aid). I’ve captioned a few vis examples, sketched the data-prep step just below this list, and am looking forward to plotting this on a map soon.
  • 3rd Party Blogs: Control Group, Girl Develop It. Check out my recent posts @ControlGroup and @GDI: Technology for all: It’s a Gal++ World, relative to my Women in Tech volunteer projects. A few weeks ago, I had the privilege of meeting with Todd Park, CTO of the US, to discuss policy related to Women in Technology and the Presidential Fellows program (a fellowship which attracts a paucity of female applicants), alongside NY Tech Meetup and representatives of women-and-tech initiatives around NYC. Working with GDI and Hack n’Jill to promote a more egalitarian techscape is ever-fulfilling and certainly an important building block of brilliant and beautiful products in STEM fields. I’m happy to be a part of it.

  • Metadata Course: However underqualified I may be, I’m also assisting with a Metadata course at Pratt on Saturday mornings, designing exercises and curricula to complement a syllabus of mainly XML implementations of metadata schemas. Of late, I’m a bit frustrated with the kludginess of the Moodle microblogging system that’s baked into Pratt’s course enrollment and learning management software, so I’ll be migrating class content and posts to a WordPress blog (to flesh out slowly, stay tuned!).

  • Hackathons: DataKind/Occupy Hackathon/Hack n’ Jill. While I rarely have adequate bandwidth or energy on my weekends, I recently had the pleasure of contributing remotely to an Occupy Hackathon aimed at making use of the rich data collected throughout Occupy and its affiliated movements. Likewise, I was fortunate enough to learn from the DataKind DataDive, visualizing NYC Parks data a few weeks ago; this introduced me to a pretty brilliant assortment of geo-vis tech stacks, including CartoDB, which I have since become obsessed with and will happily share with whomever I can: http://cartodb.com/. My company and Girl Develop It are also partnering with Hack n’ Jill to host a 2-day Hacksgiving at Etsy.

Sign up here: http://hacksgiving.eventbrite.com/ and come out the weekend of November 9-10th to see some rad hacks!

  • Conferences: LISA, Strata, Visualized, SIGGRAPH-Asia. If you’re in NYC and want to catch me at some conferences, I’ll be volunteering at LISA and Visualized. I’ll be attending the big data nerd conf in NYC in two weeks: http://strataconf.com/. And I have been graciously awarded funding to participate in Siggraph Asia 2012, so I’ll be off to Singapore in a wee few weeks (h1ph1ph00ray): http://www.siggraph.org/asia2012/en.
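Since I mentioned the Miso/HighCharts graffiti vis above, here’s roughly what the data-prep half looks like on the Python side. It’s only a sketch: the CSV file name and the column header are placeholders, not the actual NYC open data export. It counts reports per borough and emits a series blob that a HighCharts bar chart (via Miso) could consume.

```python
# Data prep for a graffiti bar chart: count reports per borough and emit a
# JSON blob for HighCharts/Miso. The file name and "borough" column are
# placeholders, not the actual NYC open data export schema.
import csv
import json
from collections import Counter

def borough_counts(path="graffiti_export.csv", borough_col="borough"):
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row[borough_col].strip().title()] += 1
    return counts

counts = borough_counts()
chart_series = {
    "categories": sorted(counts),
    "series": [{
        "name": "Graffiti reports",
        "data": [counts[b] for b in sorted(counts)],
    }],
}
print(json.dumps(chart_series, indent=2))
```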

So those are the haps! Oh, and I was also featured in these random but delightful things: an MSN Glo article and a librarian conference article. Thanks for reading, friends; join me at any of the upcoming events above, and send your radio show recs to auremoser@gmail.com!


Consider in Chrome: Orchestral Manoeuvres in the Dark Dark Dark

Sometimes I have the impulse to write something profound on this blog, but now, now is not one of those times. Now is the time I want to talk about music and webmagic, so here goes with a self-indulgent slurry of the sweet things the web (GL or otherwise) has served me of late. I want to reference a recent Google adventure as well as an upcoming radio program I’ve been daydreaming up of late. For visual interest, I’m going to spill some surrealism all over this with a bit of themed imagery, some daydreamy AV to dew-drop drizzle on your day.

The title of this post appropriately sweeps all of those topics under some semblance of unity. I work a lot in chrome, that is, chrome before it was Chrome™. I work a lot in browsers (chromes) now, and as a chem T.A. at an art school, I worked loads with chromium-based pigments. It’s a pretty colorful element, chromium, with a Greek root that couples with a variety of suffixes to produce “colorful” adjectives and band names, among other references. Hello, Chromeo, the Chromatics, Sonichrome (“Honey, Please” is a swoonworthy track in my brain), and what about the Chromes on It, Telepathe remix? But that’s not the “Chrome” I’m talking about. The chrome I’m talking about is a browser window, and it has been the property of Google projects for some time. Though perhaps not obvious now, the reason I digress with all of these references and etymological diversions is, in part, to preface a discussion of some totally rad Google projects, and in part to introduce a semantic web approach to music that I’m packaging as a radio show come Fall 2013.


<  suspense >

Title Part I: Google Chrome

First, let’s shed some light on the Google stuff, and then on to the open source.

A few weeks ago, I had the pleasure of beta-testing a pretty rad application out of Google’s Data Arts dept. (thanks, Aaron * waves *). Entitled “This Exquisite Forest,” the experiment is a collab web project where you plant an animated image and watch it grow through a system of crowdsourced contributions that branch from your budding idea (apologies for the extended puns, it is my way). It’s been clocked as a kind of version control for images that transforms the surrealists’ exquisite corpse drawings into a digital project. Under the username “puddingmaster,” I started an 8-frame animation of a staircase that was transformed by 5 other users into games of Tetris, portraits, and geometric puzzles. It’s pretty cool, and now that it’s public, I feel comfortable gushing about how awesome it is.

Finally there is a web outlet for my surrealist obsessions, and it doesn’t involve YouTube or remixes of Luis Buñuel films (nb: the surrealist echo in the featured “cachée dans la forêt” piece above). Most of the animations in the forest are pretty impressive; from DMirada’s enigmatic amoebas, “Evolving,” to RaquibShaw’s rather screensaver-stunning “Forgotten gardens of Xanadu,” the trees range in level of contribution and complexity. Pretty much everyone outdoes my stick-figure staircase, but I still scored 4 branches and a rebase (WHAAT? Yes).

In addition to animations, you can author your own musical track to accompany the 8-frame image playback, and this feature seemed to correlate brilliantly with some other online obsessions of mine that have been incubating for a while.

Of late, I’ve been messing around with Chrome’s WebLab Orchestra, which I highly recommend. And the old-school audio cassette on Tympanus allows for some distracting play in HTML5. I remember when the Sembeo Sound Matrix was my go-to distraction in Flash, reminding me a lot of some incredibox.fr experiments I was running a few years ago. When I started guesting on a radio show in college, I remember thinking how cool it would be to automate call-in requests, allowing people to compose music with keypad menu selection, or at least to select genres and then create collab broadcasts, exquisite-corpsing their way through a show, if you will. With all the music genome projects live of late, it seems we have an internet radio infrastructure that people can sample, curate, and collaborate on without any particular knowledge of how all of these connections and HTML5 elements integrate. The internet and its architects give us the instruments, and all we have to do is google moog our way to play.

Title Part II: Radio DayDreams

And all of this music segues somewhat into a project I’m working on for the Fall. Inspired by my continued fascination with the semantic web, I’m starting a semweb radio series at Pratt Institute when the new semester kickstarts. It’s called Stereo Semantics, and the premise is pretty simple: I start with a song, I close with its cover, and I stitch together the six degrees that separate the two. I’ll be uploading archived episodes to the site along with a RelFinder map connecting the first song to the last, possibly with comments; ultimately I’d like to run my own musical Milgram experiments and see how it spreads. I’ve developed umpteen example playlists for this project, but I thought a shoutout to my daydreaming theme would suit this post. So, here goes…
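For the nerd-curious, the stitching can be bootstrapped with a semantic web query. Here’s a minimal sketch using SPARQLWrapper against DBpedia; the predicate choice is my own guess at what’s useful (RelFinder does something much richer), and it just pulls the acts associated with a band as candidate hops in a six-degree chain.

```python
# Minimal semweb sketch: ask DBpedia which artists are associated with a band,
# as candidate "hops" for a Stereo Semantics chain. The predicate choice is a
# guess at what's useful; RelFinder explores relationships far more broadly.
from SPARQLWrapper import SPARQLWrapper, JSON

def associated_acts(band_resource):
    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery(f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT DISTINCT ?act WHERE {{
            <{band_resource}> dbo:associatedMusicalArtist ?act .
        }}
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["act"]["value"] for b in results["results"]["bindings"]]

omd = "http://dbpedia.org/resource/Orchestral_Manoeuvres_in_the_Dark"
for act in associated_acts(omd):
    print(act)
```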

A. Orchestral Manoeuvres in the ….

Orchestral Manoeuvres in the Dark have been a pretty solid substrate of my Pandora playlist library for a few years. I love the theatricality, the dancehall catchiness, the genre ambiguity, and the science references. [Electricity: http://www.youtube.com/watch?v=Sq2vl99iIEc] I played OMD in an embarrassing number of radio broadcasts as a wee DJ on student radio…there’s something so relaxing for me about shoegaze and Brit pop. They also have a song about dreaming, which has been covered by everything from glitch to ukulele online. I’d like to find ways to connect those tracks, to show how this bass player transitioned to that band and made music with this beat or this chord progression that you can also hear in this song. Soon the semweb will build these maps for me, beautifully. But there’s something to analog over algorithm, to assembling things manually in stitches of musical nostalgia; Stereo Semantics will tease out that idea.

B. …Dark Dark Dark

After SxSW, I had the pleasure of being introduced to Dark Dark Dark. Among many, many noteworthy others, they have a song called “DayDreaming” which I’ve captioned here:

And what about * scans iTunes library *

  • M83–Hurry Up, We’re Dreaming
  • The Magnetic Fields–Asleep and Dreaming
  • Chet Baker–Daydream
  • Sonic Youth–Daydream Nation

AND

  • Themselves–Dark Sky Demo
  • Kanye–Dark Fantasy
  • Hot Chip–Made in the Dark
  • Death Cab–I will follow you into the Dark

Even on a completely superficial kw:_  level, this theming is going to be fun.

While I wouldn’t say I’ve progressed beyond the OMD/New Wave music of my more youthful days, I might admit that Dark Dark Dark is more of the hipster tunage I’ve sampled since starting grad school. Sometimes the ridiculous melodrama of “chamber baroque folk music” makes me shudder, but often it just helps me wind down after a long day. I want to build a bridge between those two, and track a timeline of my musical trajectory over the past few years. Stereo Semantics will be a kind of musical scrapbook, and I’m happy to take topic suggestions, or even, yes, even call-ins (see my contact page if you think of something particularly rad). Stay tuned!

< / suspense >
