The discipline of anthropology has typically been associated with working among remote tribes, or with researching the history of a particular routine in so-called ‘traditional’ societies. This is inaccurate: many anthropologists nowadays in fact study ‘modern’ cultures, such as organizational cultures or youth cultures in their own country of origin. Anthropologists, however, should shift their gaze even further: away from what has already been. Indeed, I argue that the future is currently a more pressing field of investigation than ‘traditional’ times or even the present, and one in which anthropologists can and should play an important role. My reasons for this argument are twofold.
First, the global challenges of the Anthropocene make it painfully clear that we need to work constructively towards a more sustainable future: a future in which our biosphere and human civilization can coexist.
Second, while growing anxiety over our troubled present is indeed motivating a swiftly increasing number of academics to sketch possible future scenarios, the vast majority of their ideas are technical ‘solutions’ offered to pressing problems. This is unsurprising, given that most academics involved in future-thinking work in fields such as Artificial Intelligence, futurism, trend-watching and design. Clearly, their innovations are crucial for the development of human society and economy. What these studies usually lack, however, is a clear focus on the ethics, norms and values of the different future scenarios that they help develop: on culture.
This is what has typically been happening lately: engineers create ever-more-human-looking robots, after which tech experts, government officials and lay people engage in heated debates about how dangerous or helpful these new inventions really are. Likewise, designers developed the internet of things, after which its users discovered its advantages (hi, smart thermostat!) and its potential dangers (bye, privacy!). Or, as anthropologist Sarah Pink observed: first the electric car was produced, then followed warning newspaper headlines about the risk that such a car would cause traffic accidents.
In each of these three examples, I want to suggest, the order was wrong. At high speed and at a cost of millions of dollars, we are currently developing technology with wide-ranging ethical consequences – positive and also potentially negative. In recent years, robots have already proven able to offer emotional and practical support to the elderly, as well as to lonely people or people dealing with specific mental illnesses. They are also increasingly taking over jobs that used to be a source of income for people: nowadays, robots work in hotels, in brothels and in healthcare institutions. This is not just a big – and potentially problematic – change for their human predecessors, who are now forced to seek income elsewhere; it will also have a tremendous impact on society.
Will we learn to love robots as a kind of pet, housemate or even friend, or will we regard them as non-humans that we can treat as we wish? What type of behaviour do we train in our children if they are increasingly faced with a computer at the reception desk of a hospital, or in front of a class? Will they learn to show a robot receptionist the same respect and patience that they (hopefully) would a human one? Why would they, anyway, if the robot does not seem to have feelings and continues to reply in the detached way Siri does now? And what happens if we become so used to interacting with robots that we grow less practised in humane, kind communication and behaviour?
These questions might seem far-fetched. But consider that experts predict that within twenty years or so, a large number of young adults will have regular sexual experiences with female robot sex workers (known for their insatiable desire for penis-in-vagina intercourse and their seeming lack of interest in their own pleasure and orgasm). It then does not seem unwise to consider the potential implications for the human, female sex partners that the next generation might also encounter in their lives. [Read also: should we protect robots against rape?]
Moreover, designers of algorithms, apps and electric cars generally assume that their ‘solutions’ will indeed solve the problems that humanity finds itself in, if only people would adopt and use their technological gadgets in the right way. Anthropologists, however, have long theorized that people typically adapt technologies and tools: they will use them, but only in a way that suits them, in a particular time and place. This means that whatever the intention of a designer with her futuristic artefact, it is not she but the users of her design who will eventually decide what direction our future takes. It also means that we have to consider the ethical and moral implications of possible future scenarios before we even design something that is meant to be a ‘solution’ for our future. If we don’t, we may unintentionally create new and unforeseen problems.
Arjun Appadurai (2013) reasoned that whatever we can imagine is always mediated by culture. He has argued that people can only imagine what they have learned to see, think or believe (see van Voorst 2014 for a more extensive analysis of his work). Following this line of thought, it can be argued that anthropology is especially suited to creating a fuller understanding of the affective orientations that shape shared and differing notions of the future. Anthropologists, through their typical methods of deep fieldwork and participant observation, are able to get under the skin of the people they study. They regard social problems from those people’s particular time and place: they see the world – or an app or robot, for that matter – through their study participants’ eyes. As such, anthropologists can help us move from concerns and anxieties to aspiration, anticipation, expectation, speculation, desire, and destinies (Bryant and Knight 2018).