Why the visual background of this blog? I admit I am not 100 percent sold on it, but I do like the black-and-white pictures because they are so different from today's photos, which are instantly shared with the world on Instagram, Pinterest, and Snapchat. Most of the images shared via social media are of humans, and most are initially captured through the eyes of humans.
And yet, our concept of what it means to be human is changing, at least according to Jaron Lanier in You Are Not a Gadget. We are subjecting ourselves to "lock-in," or at least being subjected to it, as the technology shapes us as much as -- or more than -- we shape it. Just as computer-produced music is limited by the MIDI format, which was originally designed for keyboards and then became ubiquitous, technology can limit the way we see ourselves and, hence, what we do as individuals and as groups. By worshiping technology, Lanier argues, we make ourselves into gadgets.
I find myself agreeing with much of what Lanier says, and as a witness to and participant in the early development of the modern computer, he has a lot of street cred.
However, one thought nags at me. Even if what Lanier says is true, what is the alternative? Sure, I can try to resist some of the trends Lanier points out, but how much power do I really have? Choosing whether to adopt technology is often like sitting at a football game when the row in front stands up to get a better view, forcing the rows behind to stand up too or miss the game. It's a choice, but it's not much of one. Individuals, businesses, and countries that choose not to adopt technology will eventually be out-competed and overtaken.
Lanier himself acknowledges this conundrum. The question he poses, I think, is more subtle: are there ways to negotiate with technology so that it still serves our ends without our being left behind?
Eric Schmidt and Jared Cohen of Google take a much sunnier view of technology, as expressed in The New Digital Age. I agree with them that information systems and robotics will change life enormously. They would know -- their company is no doubt already working on many of the technologies they describe. And the benefits are and will be real. Anyone who, like me, has a family member whose life or quality of life has been saved by medical technology is a believer.
And yet, it feels to me like Schmidt and Cohen are a little too sunny about the future. It is great that terrorists can be caught with digital technology, but the same technology can be used for nefarious purposes, as internet-utopia skeptic Evgeny Morozov constantly points out. It would be nice to have my hair cut by a robotic barber, but what happens to my local barber and his family? As economic activity becomes concentrated in the servers of a few large companies, what happens to people?
It is as though Lanier on one side and Schmidt and Cohen on the other are two sides of the same coin. They largely agree on the advances that are coming, but they take very different views of the effects and implications of these changes.
I found The New Digital Age, which makes a good attempt at being broad-minded and balanced, more convincing than the interview the authors gave to promote it. It could have been the influence of the interviewer, Charlie Rose, but the authors talked far too much about terrorism. It's not that terrorism isn't a real threat, but I am much more worried about my private information being accessed without my consent or knowledge than about a terrorist attacking me or my family. A terrorist attack could have a big effect, but it is a very small risk; the loss of digital privacy and autonomy is a virtual certainty. The only question is how much it will affect or limit me.
While I respect the effort Schmidt and Cohen make to look at the big picture, my main objection is this: social media does have the power to break down walls, but I'm not sure it has as much power to build the new structures -- bridges, roads, and buildings of all kinds, not just physical ones -- that are needed. As we have seen in Egypt, governments may be toppled, but the underlying power structures, rivalries, and tensions remain. Those problems can only be solved by humans; they cannot be solved by algorithms. I don't think Schmidt and Cohen are really claiming otherwise, but it is easy to get caught up in technological euphoria and gloss over some very basic, human problems.
That's not to say technology is powerless to solve problems -- not in the least. It is so much easier for me to live many parts of my life than it was even ten years ago. But that is not the question. The question is how technology changes us, for good or bad, and what other problems it creates. The design of technology must be a human endeavor, with humans in mind. It should be created to fit people and not the other way around.
In Greek myth, Procrustes was a bandit who made travelers fit his bed, cutting off their legs if they were too tall and stretching them if they were too short. The story is often applied to technology that comes to dominate humans, who become its slaves. The movie WALL-E depicts the dark future of such a world.
Do I want Google to direct my search terms, to "help" me as I search? Couldn't this process, while saving time, be limiting, as Lanier argues? One of my heroes, the interculturalist Edward T. Hall, said that learning begins with the experience of being lost. If Google never lets me get lost, how will that happen? I suppose Google could build a "getting lost" element into its algorithms, and maybe it already has, but somehow that defeats the purpose. I don't want to add "get lost" to my task list in Google Calendar.
In the end, it is ironic that I am making these comments on a free, Google-supported blog. Maybe I will share this post on my free Facebook or Twitter account. My choices, and those of many others like me, are what make Lanier's book an argument and not a rant. He points out that much of what is happening stems not from the inherent evil of technology companies but from our own tendency to avoid paying for services, to get something for "free." But in the end we pay, if not with money, then with information, autonomy, and the loss of local economic activity.
In closing, if anyone at Google, Facebook, or the NSA is reading this, I just want to say that "I, for one, welcome our new digital overlords."