One of my bottomless projects is the 2400+ negatives I made on the job site of the first high-rise building in Boston’s financial district. There’s a book in there somewhere, but in order to think about the images I need to be able to see them, sort them, decide on narrative directions and contents and so on. So I’m gradually building a Web locus for the project, intending to treat it as a workspace for trying out presentation ideas, generating supportive text, and basically sand-boxing. You’re welcome to watch:
I follow a lot of blogs, via my Feedly RSS feed. Mostly I skip through their subject lines quickly, reading only those that seem directly relevant to my interests (which do tend to sprawl) and sending on to various others the URLs that seem to me likely to be of interest to specific interwebs buddies.
For years now I’ve used Zotero to keep track of the blog postings that are especially fraught with meaning for me, and there’s a link at the top of my blog that connects to
which sort of mirrors the day to day flux of engagements. Missing from this list (because I can’t discern any way to include them) are the aboutness categories into which I place the links. I can capture the links to specific aboutnesses (e.g., Trumpery captures the links to postings on that subject; anthro tracks what strikes me as preservation-worthy in that realm; and lexicon for wordstuff… and so on). So I can keep track of my own interests, as reflected in the reverse order of stuff I send to Zotero, and I can figure out when I first encountered something via RSS, though I rarely do that sort of retrospective inquiry. But the whole thing is rather unwieldy.
Perhaps I’m missing or misunderstanding something of Zotero’s powers as an information management tool, but it seems to me there should be some way to hashtag Zotero captures, and thus potentially to incorporate them into discourse. Which is to say that I’m wrestling with how to capture the flow of important stuff and then expose it to wider audiences. An activity I’ve been engaged in forever, it seems.
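The kind of tag-based capture-and-retrieve I’m after can be sketched in a few lines. This is a hypothetical toy, not Zotero’s actual data model: the field names and sample entries below are stand-ins for whatever a real Zotero export or API response would provide.

```python
# A minimal sketch of filing captured links under "aboutness" tags
# and pulling them back out. The data shape is hypothetical -- a
# stand-in for a real Zotero export.

captures = [
    {"title": "Post on tariffs", "url": "http://example.com/a", "tags": ["Trumpery"]},
    {"title": "Fieldwork notes", "url": "http://example.com/b", "tags": ["anthro"]},
    {"title": "Etymology note", "url": "http://example.com/c", "tags": ["lexicon", "anthro"]},
]

def by_tag(items, tag):
    """Return the captures filed under a given aboutness tag."""
    return [item for item in items if tag in item["tags"]]

for item in by_tag(captures, "anthro"):
    print(item["title"], "->", item["url"])
```

Zotero does in fact support per-item tags, so something like this filtering is what a tag-aware export could enable; the open question in the post (exposing those slices to wider audiences) would then be a matter of publishing each tag’s list.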
I’m forever finding things that seem to apply to people and situations that aren’t precisely my own but do need rediffusing in some medium. Here’s one that just snuck up on me:
Imagine a world where speaking or writing words can literally and directly make things happen, where getting one of those words wrong can wreak unbelievable havoc, but where with the right spell you can summon immensely powerful agencies to work your will. Imagine further that this world is administered: there is an extensive division of labour, among the magicians themselves and between the magicians and those who coordinate their activity. It’s bureaucratic, and also (therefore) chaotic, and it’s full of people at desks muttering curses and writing invocations, all beavering away at a small part of the big picture. The coordinators, because they don’t understand what’s going on, are easy prey for smooth-talking preachers of bizarre cults that demand arbitrary sacrifices and vanish with large amounts of money…
The analyst or programmer has to examine documents with an eye at once skeptical and alert, snatching and collating tiny fragments of truth along the way. His or her sources of information all have their own agendas, overtly or covertly pursued. He or she has handlers and superiors, many of whom don’t know what really goes on at the sharp end…
(from Ken MacLeod’s preface to Charles Stross’ The Atrocity Archives)
I’ve had 3 days with the iPad now, and it’s been as exhilarating as other brushes with new technologies. I think of the first few days with the TI-Pro in 1984, HyperCard on the Mac in 1990, my first work with Web browsers and HTML in 1993, and the beginnings of podcasting in 2004: in each case, pennies dropped one after the other as I tried this and tried that and articulated and then found the answer to the next question… Doc Searls really nailed it with his summary of the iPad (for which read ANY new technology) as “an accessory to your own intentions”. It’s not the DEVICE we should be judging, but rather our engagement with it, and its effects upon our imagination. Often enough, what you EXPECTED as the outcome pales next to what actually happens, and it’s the unanticipated that’s the important consequence. Case in point: for several years Kate has been working on a map summarizing the Appalachian Trail adventure that occupied Betsy and me between 1992 and 2003. Here’s the topmost bit:
The whole map is more than 15 feet long when it’s printed out, and we’ve been wrestling with how to display it. It’s too big to hang on the wall, and clumsy to roll out onto the floor and crawl around on to read the details summarizing each segment hiked, but as a pdf on the iPad it’s absolutely perfect: you can pan and zoom and really explore, just by waving your fingers over the screen. And that experience leads one to thinking about map displays of many kinds, and other features that might go into map apps for iPad and successor devices. Not something we imagined when the iPad first arrived, and it was realized only after I’d figured out a clear path to move pdfs (via Google Documents) from computer to iPad.
I really recommend a listen to Daniel Suarez’ talk at the Long Now Foundation, which I missed when it was first posted. Here’s an eye-opening bit from the transcript:
Many of you have Bluetooth devices in your car but you may not know about the TPMS system; this is the tire pressure monitoring system. It was federally mandated by the 2001 TREAD Act. That’s right. You all remember voting for this, right? It says that any car manufactured up to 2007 has to have wireless pressure measurement devices that communicate with the computer onboard the car to see that your tires are safely inflated. Now, they have to have a unique I.D. so that the computer knows your tires from the car next to you and of course, it is an open standard and makes it very simple to track the unique identity of an automobile; but of course, to do that you would have to have devices scanning. Fortunately, such scanners have started to spring up at choke points throughout modern cities. These are privately owned scanners with the data being gathered and stored again because it’s cheap to store data, vast amounts of data. This data can be piled up along with your financial transactions and anything else and bots can go through it to find persons of interest or just to find patterns or even just to sell you stuff. I’ll give you an example of just a few such devices, such as a BlueSweep scanner: a device that is able to identify all Bluetooth devices within its radius, identify what their capabilities are, and what exploits they might be vulnerable to. A BlueSniper can do this up to a kilometer away. Let’s go a little further down the wall. There’s the Bluesnarfer you were all expecting. Now Bluesnarfer can use an exploit given to it by a BlueSweeper to steal your address book, your text messages, your calendar, your pictures of your kitties, and the Bluetooth Car Whisperer can push advertising into your car speakers through your car’s Bluetooth system. Now more worrisome, it could also be used to hook into your car’s Bluetooth phone system to eavesdrop on conversations in the car.
Now, if you combine that with something like the TPMS system or any future open standard device, you could pretty much track a car and listen to its occupants as they move throughout the city at any point in the future or at the moment it’s happening. Now, so you’re walking through this gauntlet of scanning activity with all the wireless devices and again, I’m sure we were all aware of this, and then there’s of course financial transactions every time we buy stuff with a debit card or a credit card. Who, what, where, and when? Combine that with visual data and all of the other points that tell us who was there with you, where you were going can be used to tell some very interesting stories. So it’s a great constellation of information being gathered on us at all times and then of course privately owned devices hoovering up all this information. So this is the world you live in right now. Who knows what it will be like 10 years from now?
I’ve been reading the Marcus and Sollors A New Literary History of America article by article, and this morning came athwart Daniel Albright’s on Gertrude Stein (“1903: Gertrude Stein moves to Paris, and neither is ever the same again”), in which is quoted this bit from Stein’s Three Lives:
…there was a constant recurring and beginning there was a marked direction in the direction of being in the present although naturally I had been accustomed to past present and future, and why, because the composition forming around me was a prolonged present… I created then a prolonged present naturally I knew nothing of a continuous present but it came naturally to me to make one.
Hmmm, I thought, how very like the Web in which we live more than a century later.
Albright ends his article with this food for thought, quoting an unknown-to-me source:
Neverout. Miss, what spells b double uzzard?
Miss. Buzzard in your teeth, Mr. Neverout.
Lady Smart. Now you are up, Mr. Neverout, will you do the favour to do me the kindness to take off the tea-kettle?
Lord Sparkish. I wonder what makes these bells ring.
If Gertrude Stein had never been born, this would seem a freakish and incomprehensible text. It still seems freakish and incomprehensible, but as an anticipation of Stein it is made familiar, assimilated into a canon that she caused to exist.
Hmmm, I thought again, how very like the Web in which we live more than a century later…
It’ll take 3:50 of your time to watch this from Kevin Kelly:
This interests me on several dimensions: as a mode of presentation (obviously, if EVERYthing was in this format it would get old fast… but it does focus the mind if it’s novel, and the visual effects do nudge one to hear the message more clearly than if it’s just spoken, or just straight text), for its content (some of his assertions are arguable – but which ones for you? And it’s short enough to be an effective stimulus material for teaching, a good model), for its potential remixability (wouldn’t it be FUN to have a class pick out particular segments to expand upon, annotate, produce new remixes of? What’s the technology package necessary to do that?)
Cory Doctorow’s half-formed thoughts on one future for bookselling in this morning’s BoingBoing are worth a closer look if you’ve just clicked past the posting without reading it. He mentions the Harvard Bookstore’s Espresso book printer, which I visited and patronized myself a few weeks ago: …but it’s what he says about its implementation that caught my eye:
At the Harvard Bookstore, they have someone who spends the day mousing around on Google Book Search, looking for weird and cool titles in the public domain to print and shelve around the store, as suggestions for the sort of thing you might have printed for yourself. This is a purely curatorial role, the classic thing that a great retailer does, and it’s one of the most exciting bookstore sections I’ve browsed in years. And even so, there’s lots of room for improvement: Google Books produces the blandest, most boring covers for its PD books, and there’s plenty of room for stores to add value with their own covers, with customer-supplied covers (the gift possibilities are bottomless), and so on. I can even imagine the profs across the street producing annotated versions — say, a treatise on Alice in Wonderland with reproductions of ten different editions’ illustrations and selling them through the store’s printer and shelf-space, restoring the ancient bookseller/book-publisher role.