image: “_D3S9849” by US Embassy Tel Aviv https://flic.kr/p/bDjjUL CC-BY-SA
Something’s been bothering me recently. As learning technologists, the people who advocate the use of technology within education, are we helping learners to be sufficiently critical of the tools they are using?
I hope I’m not oversimplifying, but much of the critique around using web-based tools, particularly social media, focuses on issues of function and safeguarding. We talk less about examining the background of apps and services, particularly the business model, to determine whether using them is in the best interests of the individual.
I recognise I’ve been as guilty of this as anyone. We’re surrounded by free tools and apps and I’ve been eager to share these and to see how they shape the way people learn.
The implications of this were brought home to me a few months ago when I replaced my phone. Rather than restoring my previous phone’s settings from a backup, I decided to do a digital life laundry and reinstall only the apps I actually needed.
If you try this, you’ll know you are regularly asked to accept that an app would like access to certain things on your phone: the camera or microphone, but also your contacts list, location and so on. What apps don’t always specify is why they need this access.
A human rights issue?
Of course, it’s not news to say that what these apps are after in many cases is your data because that is a commodity the app producer can sell. I was encouraged to think about this differently by Aral Balkan at last year’s Thinking Digital conference.
For him, the issue isn’t just about privacy. It’s about human rights. To hear him explain why, watch the video. It’s worth 20 minutes of your time.
It was with this in mind that I recently delivered a lecture on digital citizenship to foundation-year engineering students. I used the example of Uber to highlight the divergence between the needs of a business and the rights of the individual.
I might also have talked about the ethical minefield of Facebook and their “emotional contagion” experiment.
I’m not advocating that we retreat from using free tools; that would be ridiculous. On balance, having access to a powerful set of tools like Google Docs or Dropbox in exchange for access to our data may work strongly in our favour. But I do think that “following the money” should be an integral part of what we think about when choosing these tools and deciding how to use them. If it’s a bargain we’re willing to strike when we start using them then that’s fine. What we should avoid is blindly accepting the EULA as this Guardian article on controlling personal data discusses.
Why worry about it?
This is important for individuals. Technology is becoming more and more intimate. With the emergence of wearable technologies and the “internet of things”, the data about our daily existence is becoming much more nuanced and granular, and that data belongs to organisations, not individuals. Meanwhile, the information that is filtered through algorithms and presented back to us shapes our world view and our behaviour. This was highlighted by people comparing the topics trending simultaneously on Facebook and Twitter during the Ferguson protests.
It’s also something we need to think about as institutions. In order to participate fully in education, do we require learners to subscribe to services run by organisations that operate in ways that are not necessarily in the best interests of the people we have a duty of care towards?
We should encourage learners to be as constructively critical of the nature of the technologies they use as they are of the academic literature they read.
We may be enabling students to use technology, but are we empowering them? Current patterns of demand will shape the development of future technologies, and if we fail to encourage learners to be critical then we do them a disservice and risk creating a technology landscape that we may one day come to regret.