AAH14 Keynote - Waleed Aly: Ethics & Technology
Waleed Aly considered himself a strange choice of speaker for a tech conference, given his background in journalism, political science and law. I disagree, though - especially for a conference focused on humanity.
He challenged us to think about the ethics of the space we’re working in at both micro and macro levels, and used philosophy as a backdrop for the conversation.
The world of philosophy is often ignored, but it asks the invaluable questions of “Why?” and “Should we?” that engineering, as a “How?” discipline, simply cannot.
micro
Q: “Are there inherent ethical challenges to the technology I’m working on?” is a question we don’t ask ourselves enough. The tech sector has a tendency to focus first on what is possible, and only afterwards to work out whether we should (usually once it’s too late and we’ve already done it).
Take (as Waleed did) driverless cars. He transposed them into the famous “Trolley Problem”, where split-second decisions carry moral weight. We afford ourselves a certain slack there, because it’s hard to know what we would choose in the moment. But what about a machine, which by definition has to be told how to think? How should it answer the question? Ethics problems, it turns out, are technologists’ problems too.
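To make that concrete, here is a minimal sketch - entirely hypothetical, with invented names and numbers rather than anything from Waleed’s talk or any real vehicle - of why the question can’t be dodged: whatever a driverless car does in a no-win situation is just a rule somebody wrote down in advance.

```python
# Hypothetical illustration only: not any real vehicle's decision logic.
from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    occupants_at_risk: int
    pedestrians_at_risk: int


def choose_action(options: list[Outcome]) -> Outcome:
    """Pick the outcome that puts the fewest lives at risk overall.

    This one line *is* a moral position (pure harm-minimisation,
    valuing every life equally). A different stance - say, always
    protecting the vehicle's occupants - would be a different line
    of code, but some line has to be written either way.
    """
    return min(
        options,
        key=lambda o: o.occupants_at_risk + o.pedestrians_at_risk,
    )


if __name__ == "__main__":
    swerve = Outcome("swerve into barrier", occupants_at_risk=1, pedestrians_at_risk=0)
    brake = Outcome("brake in lane", occupants_at_risk=0, pedestrians_at_risk=2)
    print(choose_action([swerve, brake]).description)  # -> "swerve into barrier"
```

Swap the key function and you have swapped ethical frameworks; the point is simply that an engineer must commit to *some* framework before the car ever leaves the garage.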
macro, or the myth of “neutral technology”
Q: Why do we tell ourselves technology is “neutral”, when it so often ends up driving change for both good and bad?
The introduction of the car changed forever the notion of the city. Television changed forever the notion of mass communication.
We tell ourselves that our technological changes (new platforms, new media) are neutral - or at least that the net good will outweigh the bad. Why do we do this?
We talk about “the death of print” in the news industry, and assert that journalistic standards shouldn’t change. But how could we assume standards won’t change when new technology lets us optimise for speed over accuracy like never before?
We build drones, including drones that can wage war without people present. But what would war look like when attackers needn’t fear for their own lives? Would they be more aggressive and kill more people? Or would they save lives by being more precise?
Waleed suggested that the tech industry, perhaps unsurprisingly, has more progressively-minded people in it than conservatively-minded ones. If we recognised that, perhaps we would make better-informed decisions about the social impact of our technological choices.
aside: Technology & Worldview
During Waleed’s Q&A, someone asked a question as “a self-confessed ethics n00b”. He worried that the ethical conversations had been a waste of time, because he left with more questions than answers!
That fascinated me, because we like to pretend we make decisions in a vacuum, guided only by “reason, science, facts, etc.” And, to an extent, we all do.
But we also see every fact, every piece of information through lenses, or worldviews. To deny that is the very essence of cognitive bias!
How could we, as people working in a quickly-changing industry, incorporate more mindfulness of our worldviews into what we do?