The Virtual Making of Things

I love it when Cory Doctorow tweets out a request for ideas for his next MAKE Magazine column. Almost instantly, my Twitter feed is flooded with wonderful and intriguing ideas, each one as fantastic as the next. This time, the ideas set my brain wheels spinning feverishly, and then a light bulb went off.

I’m a web developer – that is, I spend a good portion of my time writing code. I tap into APIs (Application Programming Interfaces) and manipulate the steady stream of data. Just last month, I wrote some code to interact with the Google Geocoding API and the FCC’s Census API. Enter a US address, get back a census tract (the neighborhood-sized units the Census Bureau divides cities and counties into). Developers love taking data and making it do useful things.
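
For the curious, here’s roughly what that glue code looks like. This is a minimal sketch in TypeScript; the exact endpoints, parameters, and response shapes are from memory and may have drifted, so treat it as illustrative rather than copy-paste ready.

```typescript
// Sketch: resolve a US street address to a census tract.
// Assumes Node 18+ (global fetch); endpoint details are approximate.

async function addressToCensusTract(address: string, apiKey: string): Promise<string> {
  // Step 1: the Google Geocoding API turns the address into latitude/longitude.
  const geoUrl =
    `https://maps.googleapis.com/maps/api/geocode/json` +
    `?address=${encodeURIComponent(address)}&key=${apiKey}`;
  const geo = await (await fetch(geoUrl)).json();
  const { lat, lng } = geo.results[0].geometry.location;

  // Step 2: the FCC census block lookup turns lat/lng into a 15-digit block
  // FIPS code; the first 11 digits identify the census tract.
  const fccUrl =
    `https://geo.fcc.gov/api/census/block/find` +
    `?latitude=${lat}&longitude=${lng}&format=json`;
  const fcc = await (await fetch(fccUrl)).json();
  return fcc.Block.FIPS.slice(0, 11);
}
```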

Writing code is the virtual making of things.

I’m not talking about making things like in Second Life, where you can write a script to create a virtual object in a virtual world that does virtual things. No, I’m talking about writing code (and building upon others’ open code) that makes something that is useful in the real world. The 50,000+ developers on Apple’s iTunes App Store and Google’s Android Marketplace are makers, and they’ve built hundreds of thousands of apps that transform our smartphones into task managers, alarm clocks and weight-loss coaches.

Now, imagine a world where everything is programmable. Every physical object has an address and an API. In Doctorow’s Makers, he describes an inventory system where every object in your house is addressable. Take that a step further, where every object in your house is programmable. Not just the obvious, like the refrigerator that orders missing ingredients from your weekly recipes, or the central heating/cooling that adjusts temperature based on who is home and how they’re feeling. I’m talking about MAKE-style innovation – appliances that talk to each other, bookshelves that recommend your next novel, tables and chairs that adjust themselves to the user, gardens that monitor and adjust soil nutrients. If you’re thinking that sounds like artificial intelligence, you’re right – it nearly is. But until AI can achieve sentience (self-awareness and self-control), all of these pseudo-intelligent objects must be pre-programmed by developers.
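
To make that concrete, here’s one way a developer might imagine talking to such an object. Everything in this sketch is hypothetical – the device, its address, and its API are invented purely for illustration; no such interface exists today.

```typescript
// Hypothetical sketch: a garden bed exposed as an addressable, programmable object.

interface SoilReading {
  moisture: number; // percent
  nitrogen: number; // ppm
  ph: number;
}

// Imagined device address on a home network – not a real API.
const GARDEN_BED = "http://garden-bed-3.home.local/api";

async function keepSoilHealthy(): Promise<void> {
  // Ask the bed for its current soil readings.
  const reading: SoilReading = await (await fetch(`${GARDEN_BED}/soil`)).json();

  // If the bed is drying out, tell it to water itself.
  if (reading.moisture < 30) {
    await fetch(`${GARDEN_BED}/water`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ liters: 2 }),
    });
  }
}
```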

In such a world, software developers become the stewards of our daily lives. And the makers at MAKE and all around the world are building the hardware to take us there.

Original photo by heipei, via a CC License

By Philip Cain

Ninja Master of the Series of Tubes, musician, audio engineer and geek. More about Philip...

1 comment

  1. I like it. Two comments:

    1. If you think about the programmable controller on each item as its own entity, and then remove it one step from the actual device, you have a separate control entity. These entities are the programmable interface to the device: they can understand natural language, understand the device’s information, and make recommendations and take other actions based on that information and your communication with it.

    AKA it’s a user. Or a shopkeeper. Or a technician. I’m kind of a fan of finding a way to keep people in this loop. I think it’s a little dangerous to remove too much of the human element from our human experience.

    2. The IBM Jeopardy computer (Watson) is one of the next big steps in corporate funding for these kinds of natural language processors. I hope (a) that thing works well and (b) they make that technology available to more applications.
