
Sunday, February 24, 2013

#EDCMOOC Week 4: Faster, Higher, Stronger...But To What Purpose?

Redefining the Human


My #EDCMOOC (eLearning and Digital Cultures Massive Open Online Course) posts spring from my participation in the course offered for free by the University of Edinburgh via Coursera.org.

By Justine C. Tajonera




Image from kidrobot.blogspot.com

As Week 4 of EDCMOOC drew to a close and the topic moved to transhumanism, I started asking the question: but to what purpose? It was a question that kept recurring as I watched the films and read the papers.

Week 4 Films

Robbie

This surprisingly moving short film takes on a core theme of popular cyberculture - the possibility of machinic sentience and the questions advanced artificial intelligence raises about what it means to be human. Here, the boundary between human and machine is questioned - if Robbie is capable of experiencing loneliness, happiness, faith and friendship, in what senses is he not human? If the humanistic principles of autonomy, rationality, self-awareness, responsibility, resilience and so on can be held by an artificial intelligence within a mechanical form, what does that say about the extent to which they rely on human cognition and the flesh of a human body to give ‘human’ meaning to the experience of the world?

Indeed, if Robbie can experience loneliness, happiness, faith and friendship...there is little, if anything, that distinguishes him from a human being. If humanistic principles can be held by an artificial intelligence...it does stretch the meaning of "human." It raises the question of these "children of technology." I imagine the pathos that Roy Batty evoked in his "tears in the rain" soliloquy in Blade Runner. How could this replicant not have humanity if all his words reflect the principles of human transcendence? This is a question that I'm still grappling with. I have no definitive answers when it comes to the artificial human vs. the human. All I know is: it's worth pondering now.

Gumdrop

A vacuum-cleaning robot actress who doesn’t do hallucinogenics or nudity? Gumdrop will cheer you up after Robbie. She raises many of the same questions, but this time there are differences - literally - of voice and of embeddedness in the human world. For once, the vision of a posthuman future is not dystopic...

Yes, I did find Gumdrop cheerful. But it made me think about her autonomy too. She was made to be a "vacuum machine" and yet in this world she has the autonomy to be an actress...to perform, to evaluate art and to produce art. She embodies the humanist principle of having "no limits" to what can be achieved despite being "made" for a certain purpose.

What will happen in a world where machines produced to do certain types of work rise up and demand the same rights and autonomy as human beings? This is no different from slave uprisings in the past. Who gives "the human" the right to be the "master" of other sentient beings? What if those sentient beings surpass human capabilities? Then what? Again, as I mentioned earlier, these are questions worth thinking about now.

True Skin

‘No-one wants to be entirely organic. No-one wants to get sick, or old, or die. My only choice was to enhance.’ In the future-world of True Skin, synthetic enhancement is normal, and the boundary between human and machinic body has been erased. Where Robbie and Gumdrop look at the human in the robot, True Skin considers the robotic in the human. In particular, you might want to think about the final scene of the movie in which another core sci-fi fantasy - memory backup - is drawn on. What does this notion say about the nature of mind, memory and learning, and the ways in which technological mediation is positioned in relation to it?

True Skin makes the assumption that consciousness is information/data that can be "backed up." It makes me think of Greg Egan's "Permutation City," discussed in N. Katherine Hayles' paper, Wrestling with Transhumanism, wherein uploaders all commit suicide within fifteen minutes of coming to awareness.

The film makes the case that if "consciousness" is data/information...it can definitely be enhanced by machines and memory chips. Technological mediation can "enhance" one's mind...it can, in fact, take your mind and replicate it (or replace it) with a machine structure.

For me, this is contentious. I think consciousness is not merely data. There is something other than data that makes consciousness possible. I'm not saying that machines will *never* be capable of consciousness. I'm simply saying that consciousness does not equal data.

Avatar Days

Interviews with players of the online game World of Warcraft are placed over a seamless merging of virtual and real life. This is another play on the messiness of our division of the human and non-human, this time within the context of avatar creation and role play. What does the final section of this film in particular reveal about the relationship between player and avatar, between the human and the simulation? What versions of the human are opened up here, and which are closed down?

The final section juxtaposes the characters with their human "creators" or counterparts, as mirrors. An avatar is a reflection of the human controlling it. In another context, in another world, this is what these humans would be like: killers, saviors, bandits, warriors. They are all "enhancements" of the human being. The versions of the human opened up here are different kinds of existences based on different environments. If human beings could occupy different environments...they would develop aspects that don't emerge in the "normal" or "real" world. In a sense, these "fantasy selves" might reflect what these humans would opt for in a transhuman world where you could pick your "enhancement" or "superpower."

The film concludes that players of World of Warcraft are no different from other humans who have "passions." Here, the enhancements are in the context of human "passions," whatever those might be. In the transhumanist argument, these passions are the justification for whatever technological mediations human beings use.

Core

Bostrom, N. (2005) 'Transhumanist Values', reproduced from Review of Contemporary Philosophy, Vol. 4, May 2005

Transhumanism is very different from the more critical modes of posthumanism that were touched on last week, in the Badmington article in particular. Where critical posthumanists see posthumanism primarily as a philosophical stance which, among other things, draws attention to the inequalities and injustices often wrought in the name of ‘the human’, transhumanists in general see ‘human values’ as a good, though incomplete, project. For transhumanists, ‘humanity’ is a temporary, flawed condition: the future of human evolution is in the direction of a post-human future state in which technological progress has freed us from the inconveniences of limited lifespan, sickness, misery and intellectual limitation. Transhumanism, in summary, is to a large extent based on the extension of the humanistic principles of rationality, scientific progress and individual freedom that critical posthumanists would question.

This article by Nick Bostrom (Oxford University) - whose work is at the more academically respectable end of what can be a fairly uneven field - does a good job of summarising the transhumanist position, though it’s important when reading this to understand that he does not use the term ‘posthuman’ in the sense that, for example, Badmington does. What is your own response to the ‘values’ he proposes? Do you find them attractive or repellent? On what basis? Bostrom mentions education a few times here: what might his vision of transhumanism mean for the future of education? What would a transhumanist theory of education look like?


I found myself questioning a lot of the "values" that Bostrom proposes. These were my primary questions:
- What is the point of a "freed up" life?
- No death, no sickness, no misery. Doesn't that also add up to "no life"?
- Transhumanism is desirable from whose point of view?
- When we talk about responsible use of technology and the modification of human beings...once again, from whose point of view?
- When we talk of a "fairer and better" society...from whose point of view? In a world where some humans don't want to be transhuman...will that disqualify them from being part of this "fairer and better" society?
- When we talk of living lives "worth living," who is to judge whether these are indeed lives worth living? We talk of Beethoven and Goethe...but we don't talk about the Adolf Hitlers. What would a transhuman Hitler be capable of?

I am not saying that these ideas were entirely repellent. I just found myself questioning the point of view being taken. For example, I totally agree with respect for all sentience. I totally agree with alleviating human suffering. But when I start thinking of a radically enhanced life with a lengthened life span...I think: "To what purpose?" While we haven't answered the question of "to what purpose," what exactly are we extending? Acquisition of knowledge and wisdom is entirely possible within a lifetime. But we would still be asking the same questions, wouldn't we? We would still be asking ourselves why we are alive, why we exist.

I imagine the life of a vampire, for example. They can live centuries. But at some point...they too question why they are alive for so long. What is the point of everlasting life? Does it matter if it is lived here on earth or if it is as some other type of existence or sentience? Death doesn't necessarily mean the end of life for me. Perhaps it is just what I stand for and what I believe. Death is a door to another life. And that is not necessarily such a bad thing.

Bostrom talks about education. He makes this statement: "A person's life can be transformed radically by getting an education." His idea of education is in the context of creating "great works" and "great projects" that might take more than one lifetime. For him, education is the promotion of the transhumanist values of saving lives, expanding thinking and "open-mindedness."

Transhumanist education theory, I suppose, would always question assumptions and move towards "progress."

I don't disagree with this. But I believe the transhumanist theory is insufficient for the kind of progress we are making with technology. I go back to my own utopia of "biological gestalt." I don't distrust machines. But I do value sentience and harmony with all sentience, the world, the universe. I value larger thinking, contextual thinking. That is why I question the point of progress. What are we progressing towards? What is all the modification for? *That* I think is the question that needs to be answered before we go and modify ourselves and others.

Looking through this lens, I go back to my own homeschooling efforts and the education I want for my son. What is the point of his education? I want him to learn how to think...how to learn, re-learn, un-learn and learn again. I want him to be ready for a world that cannot be predicted, whether it is a posthuman or a transhuman world. I think it is essential that he knows that humans are valuable...but not *more* valuable than others. I want him to see a larger context. And I also see how his spiritual development plays a part in his education. Bostrom talks about a moral urgency. That urgency comes from having a point of view about what life is for and what there is after life.

Advanced

Hayles, N. K. (2011) Wrestling with Transhumanism.

N. Katherine Hayles (Duke University), author of the influential 1999 text How We Became Posthuman, here constructs a challenge to transhumanism which presents a useful contrast to the Bostrom text; even if you don’t want to delve too much into the advanced reading this week, we’d recommend you read the opening part, simply for the critique Hayles provides of the transhumanist position.

Hayles argues that the framework within which transhumanism considers the future of human evolution and technological advance is ‘too narrow and ideologically fraught with individualism and neoliberal philosophy to be fully up to the task’. Via a critical reading of some classic works of science fiction, she argues that we need a more nuanced, and more politically defensible, series of perspectives than those offered by the theorists and proponents of transhumanist philosophy. The first part of the essay explores some of the problems with the individualistic focus of transhumanism, while the second half uses the literary perspective to more fully explore the issues at stake.

In a sense her argument here is also an argument for the way this course has been set up: by placing the artefacts of popular culture alongside contemporary philosophy and theory, we can gain a richer view of the implications of technological shift for our world and our work. Do you agree? Do you find Hayles’ refutation of transhumanism convincing? And why, given that she clearly has very sound critical reasons for her dislike of transhumanism, does she continue to liken her relationship with it to a ‘relationship with an obsessive and very neurotic lover’?


Yes, I agree. I especially love the perspective that she puts forth: "Imagining the future is never a politically innocent or ethically neutral act. To arrive at the future we want, we must first be able to imagine it as fully as we can, including all the contexts in which its consequences will play out." With this I wholeheartedly agree. No manifesto or set of principles will be able to answer the complicated questions that transhumanism brings up.

I don't think she totally refutes transhumanism; she questions its framework. She states that transhumanism is not fully "up to the task." It is inadequate. In that I agree. I don't think transhumanism takes into consideration many of the questions I ask.

I guess she likens her relationship with transhumanism to a "relationship with an obsessive and very neurotic lover" because transhumanism has simply not gone away. It is still a topic of debate and fascination. I, too, am horrified at the possibilities...but I am drawn to them as well. "What if?" The very idea of "uploaders" and an enhanced generation is enough to spur the imagination.

But as Hayles cautions, it is very tempting to go for an "enhanced" existence, yet first we must imagine it fully...including all the consequences of such an existence. For the imagined scenario...I ask: What is it for? What will a highly malleable human be like? Will that "new" human being replace us? Will that even be bad? I have not even fully answered the question: where does my sentience come from? These are eternal questions. And even when we do evolve...they will be the same questions we ask. What is all this for?

Perspectives on education

System Upgrade: realising the vision for UK education (2012) EPSRC Technology Enhanced Learning Research Programme.

This report may seem a long way from the sci-fi fantasies and far-fetched human futures envisioned in our other readings and films this week. However, it is presented here as a chance to look at how the themes of enhancement, transformation and technological advance - so important to the literatures and imaginaries of transhumanism - occur and recur, generally unquestioned, across the more everyday literatures of online education. ‘Technology-enhanced learning’ appears to have become the new acceptable term globally for what used to be called ‘e-learning’, but if we look at this term through the lens of transhumanist thought, what new perspectives on online education does it reveal?

This reading is a summary of the report from a large, recent UK research programme (2007-12) which was explicitly concerned with the technological enhancement of learning. There is much in the report which is unexceptionable. However, in reading it, you should consider what vision for education and technology is being forged here. In particular, what has the language used in the report inherited - consciously or otherwise - from the popular and philosophical discourses of transhumanism? How are terms like ‘transformation’, ‘enhancement’ and ‘empowerment’ used? What relationship between the human and the technological is being described? Is the vision here a useful one for us to work with as educators and as learners? What alternative visions might we pursue? Which ‘system’ is being ‘upgraded’?


Definitely, the language in the report borrows from transhumanist philosophy. "There is no sign that technology is being adequately exploited for teaching and learning." "Exploited" is the key term: technology can be mined for the enhancement of the human being.

Terms "transformation," "enhancement," and "empowerment" are all used in the service of the human being, the learner. The relationship between the human and the technological is one of the "use" determination. Because technology is there...it must be exploited. There is much "power" out there. It should be put into the service of education.

This kind of vision is useful to a certain extent. Indeed, it is unrealistic to expect schools, educators and learners to carry on without using the devices that now pervade society.

But the more useful perspective to take, in my view, is the contextual, polyvalent one. Everything must be taken with a grain of salt and not accepted lock, stock and barrel. An alternative vision I want to pursue is that of "projects" as a way of applying learning. I do agree that this age approaches the end of scarcity...especially in terms of access to information. But acquiring information is not enough...in fact, it was never enough. What do we do with the information? What do we do with what was learned? For me, not even tests or assessment scores will measure whether the learner has indeed learned. It is in the actual application of what was learned that the true test of learning becomes apparent.

The "system" being upgraded in the TEL report is the education system. But I don't agree that a mere upgrade will solve the "problem." A re-thinking of the system is called for. We cannot rely on the existing system and simply boost it. The system, the structure, needs to be re-worked.

Carr, N. (2008) Is Google Making Us Stupid?

This final reading pulls together many of the themes we’ve touched on over the period of the course. It’s an interesting one to read alongside the ‘TEL’ report above, thinking about how relations between the human and the technological are differently worked in each. If the TEL report creates a vision in which technology is under individual or societal control (technology in the TEL report is ‘harnessed’, ‘utilised’, ‘developed’, ‘employed’ in the interests of enhancement), in the Carr article it is human faculties which are under the control of the technology. For Carr, our media environments develop their own logic, to which we adapt socially and physiologically. In a return to a now familiar theme, ‘human nature’ and ‘human being’ are ‘made’ in response to technological shift. Yet this ‘making’ of the human is far from the vision of empowerment and enhancement that we have seen in the readings on transhumanism, and in the vision of education and technology that we saw presented in the TEL report.

‘As we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence’, concludes Carr. Is it possible to counter the technological determinism of this view, without resorting to over-simplistic assertions of human dominance over technology? How should we respond, as teachers and learners, to the idea that the internet damages our capacity to think?


I think it is possible to counter the technological determinism that Carr brings up in his article with holistic and contextual thinking. As I saw earlier in Week 3, the human being is not the be-all and end-all of existence. But that doesn't devalue the human being at all. It puts the human being in the context of the world, of the universe, of all sentience. I agree that technology is powerful...I also agree that the human being is not in total control of technology and that the structure of technology can alter the way human beings think and interact. But I do not discount the human being's agency and choice. I think we are capable of coming up with collective principles (other than those of the transhumanists). It will be a long process. As Hayles points out in her paper, Wrestling with Transhumanism, we need to prepare ourselves to think through the momentous changes in human life that advanced technologies make possible. We must imagine them in the most holistic terms.

As a parent, and as both a teacher and a learner, I find that the idea that the internet damages our capacity to think is simply not entirely true. I am still capable of reading long tracts of text. I do know that I have a tendency to get distracted by the internet and its structure (Facebook, for example!), but I can choose to clear my space of distraction. It is a matter of choice. Being aware of what I am immersed in is a start. This is something I can teach myself, and something I can teach my children.

In the end, all this talk of enhancement and even education points to the most basic, most fundamental of human questions: Why am I alive? What is my purpose? Not even transhumanism can answer this. And the Internet, for all its computing power, can only point to, but not provide, the ultimate answer.



