Here are my running notes of the Lift conference in Geneva. This is Doomed to be forever young? A social archaeology of the ‘digital natives’ (Antonio Casilli), part of the Generations and Technologies session. May contain errors, omissions, things that aren’t quite right, etc. I do my best but I’m just a human live-blogging machine.
Found other good posts about this session? Link to them in the comments.
The myth of the digital native. *steph-note: YAY!!!!*
Antonio’s cousin’s MySpace page. His grandma is on YouTube. Different generations, different ways of being online. How “young” are those digital natives?
There is no empirical evidence, no facts to support this myth. Not all children are computer-savvy. As with maths or linguistic skills, their computer skills vary. And the situation is changing quickly: over the last 5 years, broadband access has in fact increased the most among people over 50.
Before “digital natives” (2006) we had “internet children” (1999) and “computer kids” (1982)
To debunk the myth, we need to do some social archaeology.
Two social dynamics:
- Computers have changed the space. Reterritorialization.
- Miniaturization.
Computers have gone from military bases to factories to offices to houses. (This is where the kids come in the picture.)
In the eighties, the child/youth becomes the main protagonist for the computer. Dismissal of adulthood visible in computer names (childhood names, pet names, fruit names… the computer is shrinking). The worst performers have adult/glamour names (vixen, orchidée, dragon…)
Why did the child become the main user of the computer? 3 reasons, but the strongest is the economic one. There are differences in the uses of ICT, and younger generations buy high added-value services, so it makes sense to target them with more aggressive marketing campaigns.
Second reason: cultural. “Natives” vs. “immigrants” echoes the way we started thinking about technology. Before the eighties, technology is threatening (Big Brother); after, futuristic optimism. A positive attitude also towards the passing of time: insistence on real time, quick delivery.
Third reason: political. It mirrors the social exclusion in access to technology that has long existed offline between younger and older generations. Young generations are overrepresented online; around ages 55-63 the trend inverts, and older people are underrepresented. This is also an offline issue: senior citizens are generally excluded from the course of mainstream society and its innovations.
Actually, digital natives never existed. Economic, cultural and political factors account for the creation of this myth. With 2 billion humans online, the perception evolves. The increasing participation of older generations is putting dents in the myth.
Older generations are catching up way more than the younger ones!
Q: how can you work as a sociologist if you can’t categorize people? *steph-note: didn’t get the answer*
Big misunderstanding: the separation of “online” and “real life”. That’s not how we experience it. People are also aging in cyberspace.
Other stereotype: boys are good with computers, not girls. Military caste stereotype (computers were initially military). But in the 50s and 60s, a lot of female computer experts.
Thanks, it’s great for people who don’t understand English that well – your summary’s really useful!
I don’t buy the digital natives stuff either. I see plenty of people younger than myself and they are good with the social media stuff – they tweet, Facebook, YouTube and the rest of it. But the programming and technical side of things isn’t nearly as clear.
I first got online in 1995. I didn’t get access at home until about 1997/1998. But I had been computing as a kid since about 1989/1990. Like the rest of the ZX81/BBC Micro/Amstrad/Commodore 64 crowd, having no Internet meant either playing video games alone or dialling into BBSes. I didn’t have a modem, so no BBSes for me. Just video games. And video games were expensive: I remember that games on diskette were 35 GBP (and that’s 1990s GBP, so probably about 50 or 60 GBP in contemporary money). Tape games were cheaper, but still they were a bit of an expensive gamble – they were often poor quality or just flat-out didn’t work. Given this, you would eventually exhaust your choice of games and end up having to do some programming. In fact, you’d need to know a little BASIC to survive in microcomputer-land. You needed to read the manual, and preferably watch the TV shows they put on about programming (I found most of them on BitTorrent a few years ago).
Compared to those of us who grew up with microcomputers, kids growing up with computers today don’t need to learn to code. If the video game gets boring, they can click onto YouTube or Facebook, play Farmville or whatever. Nothing wrong with them having all those opportunities – and, well, if they do choose to learn to hack, they’ve got much nicer programming languages to choose from. Compilers, IDEs and all the things that the microcomputer hobbyists never had a chance to use are all free. The young hacker can now get a 150 GBP Linux netbook loaded up with Python and Ruby and a decent editor, and as much free documentation as he or she needs.
But are we getting better programmers? Joel Spolsky’s JavaSchools rant and Jeff Atwood’s FizzBuzz post are pretty good – if rather subjective – reference points here: the FizzBuzz example is shocking. Given a simple task which requires no more than looping, two conditionals, modulo, simple string manipulation and printing, most graduates are unable to complete it. There’s lots of speculation as to why: some suggest it’s because CS courses teach higher-level languages aimed at industry. There is truth to this: if you want someone to churn out good C code, you probably want a grizzled Unix long-beard type who has been hacking for decades rather than a fresh-faced Web 2.0-er. The computer science curriculum in the UK optimizes for a career path where people come in with no prior programming experience and are expected to end up working in banking or at big companies.
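For reference, here’s a minimal sketch of the kind of solution the FizzBuzz task expects – this one happens to be in Python, though any language would do, and it’s just one of many acceptable answers:

```python
# FizzBuzz: print the numbers 1 to 100, but print "Fizz" for multiples of 3,
# "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```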
You can come out of a CS degree with little more than a mediocre understanding of Java (without some of the tougher stuff – people tell me that Java’s generics are too hard, for instance), and with no experience of polyglot programming – no functional language experience (Haskell, OCaml etc.), no knowledge of high-level scripting languages (Python, Ruby etc.), no knowledge of a natively compiled language (C, C++), no LISP.
Are younger programmers better? They may be easier to hire, willing to work longer hours for worse pay, more swayed by fashion and less likely to point out uncomfortable truths.
With programming as with social media: the 17-year-olds who are whizzes on Facebook have no historical context. They probably have never used an old-fashioned BBS. Or Usenet. They probably haven’t wasted their youth in crappy IRC channels. They don’t remember when social media was called social software. Or blogging pre-WordPress or pre-Tumblr. Or the sheer fun of crashing computers by loading up one’s GeoCities page with seventeen Java applets. In other words, the digital natives have quite short memories.
This debate gets boiled down to digital natives vs. digital immigrants. What about the digital colonists? The digital pioneers? The digital equivalent of the Mayflower passengers? The digital equivalent of the European Union where people can move freely between states without being ‘migrants’?
Tom, impressive comment. You ask the right questions, definitely – especially the final one on colonists. Are you familiar with Richard Barbrook’s seminal contribution on the Californian Ideology (the kind of mindless futurism that seems to annoy some of us)?