PLENK2010

I’ve signed up for Personal Learning Environments Networks & Knowledge, a Massively Open Online Course (MOOC) from Stephen Downes, George Siemens, Rita Kop and Dave Cormier. I am not sure how much I will be able to participate, considering I am already in the throes of a thesis, but the topic is so perfectly aligned with my thesis research on PLN’s, informal learning and the role of microblogging that I couldn’t pass up the opportunity to participate at some level.

Conceptually, there is a pretty clear distinction in my head between PLE’s and PLN’s. In very broad terms, I think of PLE’s as the technology, with the PLN being the people. The PLE enables me to build a PLN. Not that everyone who is part of my PLN requires technology to connect with, but technology has made my PLN much richer, more diverse, and instantly available.

Personally, I am more interested in the PLN than the PLE. Considering I am primarily a technologist in my day job, this is probably a bit off-kilter, but while I use a PLE (built primarily in Netvibes and good ol’ fashioned, still-alive-and-kicking-butt-in-my-little-world RSS) and find it invaluable to my learning, I realize I am not a typical user. I do wonder how viable the idea of learners constructing their own environments really is within the context of higher education, which is one of the things I hope this course will help me come to terms with.

But the PLN – I am much more interested in the PLN as a learning construct, both formally and informally, and how it is similar to, or different from, other learning constructs, such as networks of practice and communities of practice.

About a year ago, I wrote about my casual search on trying to historically define the term Personal Learning Network, and came across a 1999 article by Dori Digenti called Collaborative Learning: A Core Capability for Organizations in the New Economy (pdf) in which she noted that reciprocity and trust are two crucial elements in constructing a PLN. I have thought about, and referred to, this article a lot in the past year, specifically when speaking about the idea of reciprocity and how it manifests itself in a network enabled PLN. The more I have thought about it, and the more I examine my own use of a PLN, the more I realize that the reciprocity in a PLN is not so much between myself and individuals within the PLN, but between myself and the PLN itself. I find myself both asking questions of and answering questions from a relatively anonymous group of people with whom I have weak ties, but with whom I have developed a certain level of trust, based primarily on the ambient exposure I have to them and their ideas as a result of them being open and transparent on the web. How did I get to trust these people? Why do I think they know something that will help me? And what do the people who choose to include me in their PLN expect of me? What are my responsibilities? Or are there even any responsibilities? Oh, the questions.

The other point on PLN’s that I am interested in is a bit more grounded, and that is whether people who use PLN’s use them as a general tool, or segment them to professional development. In my view, a PLN is a general learning tool regardless of what I want to learn, yet I often see PLN’s used primarily as tools for professional development. But I realize that I only get a small glimpse into other people’s PLN’s based on who I am and the role they believe I play in their PLN, so this is probably not the case.

Okay, I need to wrap this up. Hopefully I’ll be able to articulate some of this more clearly in the coming weeks, and be able to contribute to your PLN’s in a meaningful way. At the very least, I am happy to be along for this PLENK2010 ride.

 

Over 70% of faculty feel they are not proficient using online web space to teach

Going through some lit for my thesis this evening I came across this study on higher education faculty self-perceptions of technology literacy and how it relates to their pedagogical practice. Not surprisingly, the research shows that faculty who perceive themselves as technology literate are more likely to integrate technology into their teaching and learning practice.

What I did find interesting about this study was that 71% of faculty do not feel proficient enough to publish content to the web.

Perhaps the most overlooked area of software use has been in website/web page construction and/or personal web spaces. According to the survey results, only a mean response of 2.18 (16.5% of faculty felt that they were proficient in creating learning-based websites/pages, and 19.9% of the faculty felt that they were proficient with the integration of word processing software and websites/pages). Using online web space to teach or add breadth to a course ranked even lower, registering a mean response for faculty self-perception of 1.61 (71.2% not proficient).

It’s not that this is a particularly surprising result, but given how important the web is these days it does feel like a bit of a clarion call. After all, if faculty don’t feel like they have the necessary technology literacy to do something as trivial as post content onto the web, then having them move beyond this relatively basic function and onto more engaging models of pedagogy is going to be a big ask and, as the researchers note, a missed opportunity.

This may be a missed opportunity for faculty; students are working with learning-based web spaces from the time they enter elementary school until the time they graduate from high school. It may be time that faculty became more familiar with technology tools in order to better facilitate student learning.

David A. Georgina and Myrna R. Olson, “Integration of technology in higher education: A review of faculty self-perceptions,” The Internet and Higher Education 11, no. 1 (2008): 1-8.

 

Will Facebook Questions mainstream crowdsourcing?

Facebook announced a new feature called Questions this week that might be the tipping point that makes technology mediated crowdsourcing a commonly accepted everyday occurrence as a way for individuals to find answers and solve problems.

Now, crowdsourcing is not all that new, but for most people I suspect crowdsourcing as a personal activity with a large network isn’t really on their radar. Sure, when you look for information, you might ask your friends or family for advice or post a question in a forum on the topic somewhere, but I suspect that for most people, harnessing the network effects of a large distributed mass of people isn’t really something they take part in.

Questions just might change that. Post a question using Questions (you can add a photo or a poll to the question – nice touch), and not only will your friends be able to answer it, but you can also send the question out to the FB network. Further target your question by tagging it with a subject keyword, and only people who are interested in that subject (I assume because they have declared it somewhere in their profile) will get the question, giving you access to a bunch of people who have some (granted, self-declared) skill and expertise in this area.

I haven’t seen the feature yet (it is being rolled out by Facebook as a beta to some users), so I am not going to speculate much more on it. And I am not sure how the questions will be posed to the network in an unobtrusive manner. If unsolicited questions just start popping up in people’s news streams, I suspect there will be a few upset users complaining about the added noise. But at first blush, it seems like the kind of feature that a social learning enthusiast can get behind.

EduDemic has an early look at how Questions could be used in the classroom.

Image: Share your ideas by Britta Bohlinger used under Creative Commons license.

 

Integrating Tech Tools: A Practical and Peer to Peer View

I had the great privilege of being invited to talk to the faculty of the Justice Institute in Victoria last week and speak with them about a few of the projects I have been working on with our faculty at Camosun this year. The talk focused on some practical ways faculty at Camosun have integrated technology in their class to solve specific problems or achieve specific pedagogically based outcomes, hence the “peer to peer” part of the title with me acting as the proxy for our faculty (although they did have a direct voice as I interviewed a couple of them about their projects).

The faculty and projects I picked used Skype, Twitter, YouTube and Posterous as the tools. Scope of the projects ranged from fairly small and discrete (using Skype to bring in a virtual guest speaker) to fairly ambitious (using YouTube as a platform for student created video projects, which involved 5 sections of Nursing students).

This was the first time I used Prezi as a presentation tool and enjoyed having a reason to use it. Before doing the presentation, I tweeted out asking for potential gotchas on using Prezi and got some good tips back, including to go easy on the zoom and pan, as it can be nausea inducing on the big screen to have things continually spinning and flying from corner to corner, and to download a hard copy of the Prezi to my local machine along with any external resources I might have embedded in the Prezi, like YouTube videos. The one tip I can add to that from my own experience is to test the presentation on a projector beforehand, as the projector will tend to lower the screen resolution and could change your layout when displayed on the big screen as a result. I noticed that the spacing of my text was altered from the widescreen view I had on my laptop to the narrower view when plugged into the overhead projector.

 

re/evolution

At the recent ETUG conference at UVic, I suddenly found myself pulled in as an unprepared participant for the final event of the conference. It was a friendly debate between the green team, arguing that “technology is an evolutionary change to traditional campus based classroom teaching and learning” and the orange team arguing that “technology is a radical change to how teaching and learning are delivered”. Midway through the debate, Grant Potter had to rush off from the orange team, and called me up to take his place. I suddenly found myself in the midst of arguing the radical side with comrades Scott Leslie and Amanda Coolidge, to whom I extend my sincerest apologies. Of all the really smart people in the room who could have helped argue this position, you got stuck with me. Blame Grant.

My contributions to the cause consisted of a single glib quip in which, in true revolutionary fashion, I denounced the entire monetary system. I also might have said something about someday students choosing to revolt in their own way by not showing up at our institutions because they found them irrelevant, but other than that I was pretty well seat warming. Like most of my life, I am often a day late with the point. So let me try to extend the conversation and make a few ill-informed points I wish I had made while I had a mic in front of me.

Scott carried the bulk of the argument for the orange revolution. When asked about the higher ed alternative learners might begin to seek out, Scott said he believed we might see a growing importance in the role of guilds and professional organizations within a particular field, and I agree. While this has been traditionally centered around crafts and trades, there is no reason to believe this guild model couldn’t work in almost any field, where a learner who exhibits knowledge in a specific area is acknowledged by peers within that field, completely bypassing a third-party institution like a university or college. Expertise of the learner becomes recognized by the very people who are involved in that field. Why have an intermediary institution involved at all?

To bring technology into this, one of the ways in which this expertise can be determined is through the use of a digital tool, like an e-portfolio/blog, published openly on the web. Learners document their own journey of discovery and provide open evidence of that journey in the form of personal publishing, creation, and active participation within the community. You want to prove you know something? Then connect with the experts in that field and engage with them. The Internet is facilitating connections between those that know with those that want to know. People become practitioners in a field not because they earned a paper at a university, but because they are actively engaging with others who are involved in that field, and get recognized as a valued member of that community by the community.

Scott also talked of the role of the itinerant scholar as another way in which students may begin to forge ahead on their own. Smart teachers are already beginning to do this; to figure out how to pave their own way and realize they don’t need institutions to teach, they can do it themselves. The ones who have figured out how important it is to be open, public, authentic and engaged, pushing content out into open spaces and developing their own digital identity will become sought out by learners. When the learners start looking, the itinerant scholar is already there, waiting for them. Easy to find because they have been openly and actively participating and are recognized by others in the field as an expert.

This is happening already, where reputation and expertise are being converted into learning opportunities. Look at people like Salman Khan and the Khan Academy. Or the work of George Siemens and Stephen Downes with their MOOC (Massively Open Online Course), where expertise and reputation built by being open and online are translating into thousands of learners wanting to take their course, sans any type of institutional credit.

Another example I came across recently was from Sitepoint, an Australian company that has developed a strong reputation as a leader in web development. Recently they offered a commercial version of a MOOC on web development technologies. For $10 a pop, students could take a course on JavaScript development from experts in that field. 3000 did. 3000. It was so successful, they are developing more courses.

These are the models that will win in the future. If I am a music student and want to become the best funk bass player in the world, do I take the music program at my local community college, or do I enroll in Funk U and get taught online by the greatest funk bassist in the history of music, Bootsy Collins? If I am a student and want to learn how to manage a professional sports team, do I enroll in a 4 year sports management program at my local uni that will cost me tens of thousands of dollars, or do I plop down a few bucks and buy shares in the English soccer club Ebbsfleet United, a team that is run by a self-organizing group who came together on the Internet, kicked in £50 each, and now own and run their own professional soccer club? Decisions about the club are thrashed about in their forums, and owners of the team are distributed around the world. What could give me more real life experience than running the entire club?

Finally, let’s not steampunk* ourselves and believe the technology in use today is the technology that we will use tomorrow. For example, there was a great deal of talk about the importance of physical presence and of reading the physical cues human beings give off during interactions. Well, true, but the advancements in visual communication have grown by leaps and bounds, even in the past 5 years. You can’t buy a device these days that does not have a camera in it, and free tools like Skype make those sci-fi video phone calls of 50 years ago reality today. Already I can augment my reality with my cell phone. How much longer will it be before I have the ability to interact with people in a virtual reality space and have it feel like I am physically present with them? If the most valuable selling point of a higher ed experience is the ability to physically bring people together, then higher ed is truly in trouble. That crazy holodeck thing is pretty damn close.

I don’t know if any of this supports the point that “technology is a radical change to how teaching and learning are delivered”, but I needed to do a post-session brain dump of things that were rattling in my head after the debate. If you made it to the end, thank you. And I should end by saying there were lots of laughs in the debate and it was a fantastic way to cap off what was a really wonderful conference – one of the best ETUG’s I have attended. The videos from the sessions will be posted any day now.

* I am not sure if this is the correct way to use the term “steampunk”, but the Robida steampunk I learned about at Northern Voice a few weeks ago seems to fit this use – the shortsighted belief that the dominant technology of the day (in Robida’s case, steam) would be the dominant technology of the future since no other alternative (electricity, gas, or nuclear power) could even be imagined.

Photo credit: I am here for the learning revolution from Wesley Fryer used under Creative Commons license.

 

What I learned at Northern Voice

Northern Voice. Finally. After trying to go for the past 2 years, 2010 was the year my stars aligned and I was able to make it across the strait for the big show. If you are not familiar, Northern Voice is an annual conference that began in 2005 as a conference for bloggers by bloggers, and has since expanded to include other types of social media.

Now, while the sessions and topics are interesting and all (some were recorded, but better yet, check out this set of graphics created by Rachel Smith on her iPad), the big draw for me was the opportunity to reconnect and put faces to avatars with so many people who I admire in the academic and technology space I haunt. My PLN was IRL. OMG.

This isn’t going to come close to capturing everything, but here are some personal things I’ve taken away from the weekend.

But the biggest epiphany that I had over the weekend was with regard to the great debt of gratitude I owe my colleague and friend, Scott Leslie. If I had to point a finger at the connector (in the Gladwell sense) within my personal learning network, Scott is the one. He has spread his coattails wide and has graciously allowed me to ride on them, which has given me the opportunity to meet people whose work at the junction of education and technology I admire. He’s like the friend who manages to get backstage passes to the best concert in town and invites me along for the ride. I feel I owe a lot to Scott, and am grateful to have him within my circle as both a colleague and a friend. Oh, and he unorganized a kick-ass altmoosecamp.

Here’s what others thought of Northern Voice this year (although after reading a few of the more mainstream articles, I was left wondering if we were at the same conference).

 

Image editing and embedding content in WPMU 2.9

I finally got around to upgrading our WPMU instance to 2.9 (2.9.2 to be exact) and playing with some of the new features. So far the image editing has been a bit of a disappointment, but the oEmbed feature is, quite simply, awesome. Somehow, embedding content is now even easier than before.

The new image editor has some basic image editing functionality. You can crop, resize or rotate a photo. I couldn’t get the crop to work, despite fiddling with it for the better part of an afternoon. At first, how to crop wasn’t fully intuitive to me, and it wasn’t until I read this blog post that the (admittedly dim) light bulb went off. Oh, I have to hit the crop button again. D’oh. Then when I went to insert the cropped image into the post, the aspect ratio of the image got skewed as the cropped image took up the entire dimensions of the original image. I also couldn’t save the cropped image back to my media library, but as others have pointed out, these issues may have more to do with folder permissions and settings in my PHP libraries than with the WP image editor, so I’ll be taking a closer look at those as I play more with image editing.

One other little thing about the image editor – it seems to be available only when you first insert an image into a post. If you try to go back and edit the image after it has been inserted, the editor doesn’t appear as an option in the pop-up. You have to delete the image from the post and reinsert the image to enable the editor again.

Okay, that aside, the oEmbed support is a killer feature, especially for someone who finds themselves supporting novice users. Embedding content from another site has never been so easy. If you want to embed content from another oEmbed enabled site (and a number of the big ones like YouTube, Flickr, Scribd and blip.tv are oEmbed capable), all you pretty well have to do is copy and paste the url of the content you want into the body of your post (make sure it is on its own line and not hyperlinked) and you are good to go. Good stuff.
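Under the hood, oEmbed is a pleasingly simple protocol: the blog engine takes the pasted URL, asks the content provider’s oEmbed endpoint about it, and swaps the URL for the embed markup returned in the JSON response. A minimal sketch of that exchange (the video URL and response body here are made up for illustration, and no real network request is made):

```python
import json
from urllib.parse import urlencode

def build_oembed_request(endpoint, content_url, fmt="json"):
    """Build the request a consumer sends to a provider's oEmbed endpoint."""
    return endpoint + "?" + urlencode({"url": content_url, "format": fmt})

# Ask the provider about a (hypothetical) video URL
request_url = build_oembed_request(
    "https://www.youtube.com/oembed",
    "https://www.youtube.com/watch?v=abc123",
)

# The provider replies with JSON; the "html" field is the embed markup
# the blog engine drops into the post in place of the pasted URL.
sample_response = json.loads(
    '{"type": "video", "version": "1.0", "html": "<iframe></iframe>"}'
)
print(sample_response["html"])  # → <iframe></iframe>
```

This is why pasting a bare URL on its own line is all a novice user has to do: the discovery, request, and markup substitution all happen behind the scenes.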

 

Viewing my messy mind with Google Wave

While I have been dipping my toes into the waters of Google Wave for a while, this month I am taking the plunge (to push the water metaphor) and testing it out with 2 different groups.

The first is at SCoPE where Emma Duke-Williams from the University of Portsmouth is facilitating a discussion around tools for online collaboration. In addition to the usual SCoPE forums, we have been playing with Google Wave as one of those tools (join us as we muck around group:scopecommunity@googlegroups.com).

The second project is much smaller where I am working with two members of my Masters cohort as part of our developing online communities course. We have an experiential learning task to facilitate a week long discussion around (oh, what a coinky-dink) collaborative tools. Talk about synchronicity. So we are using Wave to plan the session.

Google Wave is an interesting mix of both synchronous and asynchronous, something that is becoming more common with web apps. It is synchronous when it needs to be, and it is quite easy to chat and collaborate in real time in Wave. It is also easy to work asynchronously and come back to a Wave after the fact and add on or view an archive of a shared document or artifact. In the past year or two, with tools like Wave, Etherpad and even Twitter, I have been getting the feeling that the distinction we have used in e-learning between asynchronous and synchronous is beginning to blur and most of the tools we will use on a regular basis in the future will be able to be both.

Yesterday I had a synchronous chat in the SCoPE Wave with Sylvia Currie where we just happened to be in the same Wave at the same time. I am not sure why, but I find it oddly novel to go into Wave expecting to see asynchronously created content, and then suddenly seeing this little coloured cursor actively typing away and adding content. It’s kind of like walking into what you think will be an empty room and startling yourself when you notice the person working feverishly away at something at the table in the corner.

It’s this synchronous stuff about Wave that I seem to find myself adjusting to. When Sylvia and I started chatting, I noticed that, because you can see stuff as it is being typed, I became very conscious of what I was typing. For someone who is used to writing, rewriting and massaging all my asynchronous contributions to death, exposing the messiness of how my mind works felt disconcerting. When I write, I often start sentences, hit backspace 35 times, start over, move these words from over there to here and hack hack hack (don’t even get me started on my spleling). And knowing in the back of my mind that each keystroke is recorded and archived also makes me very aware of what I am typing, knowing that once I hit a key, it is recorded forever in that Wave.

The flip side of that dilemma is that you can see the process – it is transparent, and if I wanted to see an example of collaborative work when assessing a group project (for example), this kind of transparency into the process is gold.

Also, the archival ability of Wave is something I see as a real strength, but it is going to require a mindshift in how I collaboratively work with others. Knowing that every keystroke is archived and can be reviewed at any time makes it slightly different than a wiki, where only actual changes are recorded. I think this gives collaborators even more freedom to hack away at my work knowing the original is still there. Now, I am not sure about other people, but I know that editing someone’s words makes me feel uncomfortable, so instead of changing their Wave content, I find that I end up adding comments as a reply or within their post as a comment. But I am rethinking that after seeing how much crud it adds. I am beginning to realize that adding comments might actually be hurting Wave use by adding clutter. I think that, in the Wave world, we are supposed to liberally edit and change each other’s content. This is going to require a bit of negotiation between collaborators, knowing that all content is fluid, even more so, I think, than with a wiki.

On a practical note, I notice that Google has added some notifications to Wave, which weren’t there in the beginning. You can now get email notifications when Waves are updated. But I dislike email notifications, so instead I have been using the Google Chrome Wave notifier extension, which is turning into one of my most used extensions during my Wave experiments this month. It sits unobtrusively in the top corner of Chrome and shows how many Google Wave updates are waiting for me in Waves I am taking part in. Very useful.

Photo by VespaGT used under Creative Commons license

 

To Kill a Mockingbird – Ning Style

I love it when I see teachers like Jenny Johns at work. Jenny has created a great English lesson using Ning where her students virtually become one of the characters in “To Kill a Mockingbird”.

I love this video for a couple of reasons. For one, digital literacy skills are seamlessly embedded into the assignment. This is not a lesson on how to use Ning, it is a lesson about the characters in “To Kill a Mockingbird”, yet it touches upon many issues young people face in a tech mediated landscape. The second reason I love this assignment is that it resonates with the students because it occurs in a space they are familiar with – a social network (note how the instructor has the students “friend” the other characters from the stories).

The video is from the PBS Frontline documentary Digital Nation.

 

Adaptive learning, disputes, and breaking out of echo chambers

I have just installed a Firefox addon called Dispute Finder. Dispute Finder is an addon developed by Intel Research and UC Berkeley that highlights disputed information on a web page and displays alternatives to that disputed claim. It uses both crowdsourcing and curated resources to try to expose you to alternative views about what you are reading.

As my Masters has progressed, I find myself becoming increasingly interested in adaptive learning systems and the role that technologies could play in shaping a user’s personal learning environment. Now, I am no computer scientist, and when I hear words like ontologies being thrown around I have to admit my head begins to ache slightly. The depth of my knowledge of semantic web technologies doesn’t go far beyond a high level flyby of FOAF and RDF. Nonetheless, I remain interested in advancements in recommendation systems, both technical (semantic) and human (folksonomies), and the implications they could have for learning and constructing knowledge.

More and more on the web we are seeing personalized recommendations pop up for us to explore, often based on our past behaviours or, increasingly, recommendations provided to us by our social networks. Amazon recommends books to me not only based on what I have bought or browsed before, but also what other people who have bought or browsed similar titles to me have found interesting. Facebook will recommend friends to me based on who is already in my network, and adjust the information I see about that network based on my viewing habits (and some other variables, I am sure). When Facebook introduced a real time stream a few versions ago, it did so with a News view and a Live view. At the time I wasn’t sure what the differences were, but after using it for a while the advantage of the News feed becomes clear. The News feed is content that the system deems to be more relevant to me – it is a filter to help control the tidal wave of network information (I have Clay Shirky in my head saying “it’s not information overload – it’s filter failure“). And most of the time, it is right.
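The “people who bought similar titles” style of recommendation can be illustrated with a few lines of item-overlap scoring. This is a deliberately naive sketch with toy data, not Amazon’s or Facebook’s actual algorithms:

```python
from collections import Counter

# Toy purchase history: user -> set of items bought
purchases = {
    "ann": {"book_a", "book_b"},
    "bob": {"book_a", "book_b", "book_c"},
    "cam": {"book_b", "book_c"},
}

def recommend(user, purchases):
    """Score items held by similar users but not yet by `user`."""
    mine = purchases[user]
    scores = Counter()
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)   # how similar our histories are
        for item in theirs - mine:     # candidate items I don't have yet
            scores[item] += overlap
    return [item for item, _ in scores.most_common()]

print(recommend("ann", purchases))  # → ['book_c']
```

Ann gets book_c recommended because Bob and Cam, who share purchases with her, both own it. The same weighting idea, scaled up and refined, is what turns a mass of behavioural data into the personalized suggestions described above.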

I am intrigued by what it means for learning if some of the construction of these connections is being done by technology, and how educators can assist learners in setting up environments that are conducive to this kind of semi-organic discovery. On one hand, these types of recommendations help to bring order to the chaos and may open up paths for exploration that may not always be obvious. On the other hand, they also set up the possibility of developing echo chambers. If the only information I am being exposed to is information congruent with my own views, then how can I be expected to become a critical thinker? After all, being critical often means being able to discern between two opposing points of view. How can you do this if you are only being presented one point of view?

Which brings me back to Dispute Finder and why I find this project interesting. Dispute Finder seems to depart from the general trend of recommendation engines on the web. Instead of recommending things it thinks I will like, it shows me information that may not be aligned with my own views, which opens up a possibility for me to learn.

via interview with Rob Ennals on Spark

 

2010 Horizon Report

I love it when The Horizon Report comes out. It takes me back to being a kid in Northern Alberta, anxiously awaiting the November arrival of the Sears Christmas Wish Book at our house. It offered me a glimpse of what could be in the near future. And it excited me.

If you are not familiar, each year the New Media Consortium and the Educause Learning Initiative publish The Horizon Report, a look into the future at some of the technologies that may have an impact on higher education in the next 5 years. This year the report has picked the following technologies and estimated a time for adoption for each.

  1. Mobile Computing (1 year or less)
  2. Open Content (1 year or less)
  3. Electronic Books (2-3 years)
  4. Simple Augmented Reality (2-3 years)
  5. Gesture Based Computing (4-5 years)
  6. Visual Data Analysis (4-5 years)

Scott Leslie from BCcampus is one of the advisors for the report. This year he travelled to Austin, Texas for the release of the report and created this video, which features interviews with members of ELI and NMC about the technologies in the report. It’s a nice piece of work from Scott that adds useful context around the reasons why these technologies were chosen.

Some things strike me about this list.

First, mobile computing has arrived at Camosun, at least if the connectivity stats coming from our IT Services department are any indication. Last week I was speaking with some members of the department who said that they have had to increase the number of available IP addresses for our wireless network twice this fall to meet the demand for wireless devices on campus. If you are not familiar with how networking works, each device that connects to the wireless network requires a unique address. These are pulled from a limited pool of addresses. Once that pool runs out, no more devices can connect to the network until a device returns an address to the pool. I don’t think it’s a stretch to imagine they will be significantly upping the pool again this fall. So, we know the students are connecting. How much of that connectivity is being used for learning & teaching is the unknown.
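The address-pool mechanics described above can be sketched as a toy model (real-world address assignment adds leases with expiry times, subnets, and so on, but the exhaustion behaviour is the same):

```python
class AddressPool:
    """Toy model of the IP address pool behind a wireless network."""

    def __init__(self, addresses):
        self.free = list(addresses)   # unassigned addresses
        self.leases = {}              # device id -> assigned address

    def connect(self, device):
        """Lease an address to a device, or refuse if the pool is empty."""
        if device in self.leases:
            return self.leases[device]
        if not self.free:
            return None               # pool exhausted: device can't connect
        addr = self.free.pop()
        self.leases[device] = addr
        return addr

    def disconnect(self, device):
        """Return the device's address to the pool."""
        addr = self.leases.pop(device, None)
        if addr is not None:
            self.free.append(addr)

pool = AddressPool(["10.0.0.1", "10.0.0.2"])
pool.connect("laptop")         # gets an address
pool.connect("phone")          # gets the other address
print(pool.connect("tablet"))  # pool is empty → None
pool.disconnect("laptop")      # address returns to the pool
print(pool.connect("tablet"))  # now succeeds
```

Upping the pool is simply a matter of handing `AddressPool` a longer list, which is essentially what IT Services did twice this fall.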

Second, of all the technologies on this list, simple augmented reality is the one that has me the most excited. I have been playing with augmented reality apps on my Android phone for the past 6 months and can see huge potential for education should they take off. Here is an example of augmented reality in which data pulled from the web is overlaid on top of what you see through your camera phone, kind of like a heads-up display you might see in a car.

Imagine scanning the horizon with your smartphone and having geographical information pop up on the screen – the names of those mountains in the distance, the number of salmon that spawned in that creek last year, what developers hold development permits for that parcel of land over there. Very possible, and useful, information.

The barrier I see with this right now is that there is no standard for delivering the information. While many augmented reality browsers are being created, the layers are not compatible with each other. Kind of like the early days of web browsers, when websites would only work in either Internet Explorer or Netscape. Here’s hoping we learned from that mess and some open standards begin to emerge as the augmented reality market matures.

As for the other technologies, ebooks have to catch on at some point and you have to think sooner rather than later. 2010 has been dubbed by some as the year of the e-reader, with numerous options now on the market. The advantages of ebooks are numerous – cheaper, easier to update, they don’t use trees, you can increase the font size (a big one for me after spending a term frustrated trying to read 9 point type in a textbook), annotate, snip, republish yada yada yada. They have to catch on, don’t they?

After having lived with a Wii for the past year, I can also see the appeal of gesture based computing, especially in the areas of simulations. I can imagine a carpentry simulation someday swinging something akin to a Wii remote to simulate hammering a nail into wood, complete with tactile feedback where the remote vibrates as you strike the nail.

Of course, there are many qualifiers, maybes and outright unknowns whenever you try to predict technology and trends. But one thing seems certain – the innovation train is not stopping, and that makes for very interesting times to be working in educational technology.

 

On historically defining Personal Learning Network

Earlier this week, as a response to a post by David Warlick, Stephen Downes posted about his attempt to find the origins of the term “personal learning network”. This, strangely enough, got me thinking about the origins of the term.

I was surprised that, for as common as the term has become in my own PLN, the source of it was so hard to identify; that it was a generic enough grouping of words that a meaning seemed to evolve almost organically over time, thanks to contributions by a number of different people (which, I acknowledge, was somewhat the point of Stephen’s article).

Still, I have used this term in academic papers and have often searched for a definition of the term that would be useful as a citation. Recently, I used the 1998 Daniel R. Tobin article Building Your Own Personal Learning Network as a source. In the article, Tobin defines a personal learning network like this:

An important part of learning is to build your own personal learning network — a group of people who can guide your learning, point you to learning opportunities, answer your questions, and give you the benefit of their own knowledge and experience.

I’ve found his definition of a personal learning network useful, and his personal example of developing training sessions in Brazil a helpful anecdote to understand the concept of personal learning networks. But Stephen’s post did make me curious as to where this term came from, so I emailed Tobin with a link to Downes post asking if he was the originator of the phrase or whether he had another source for it. His response (10 minutes later) was:

Hi, Clint –

I don’t know if I coined the term “personal learning network” or not. I don’t know of any earlier references to the term, but that doesn’t mean that someone else didn’t use the phrase before I did.

The article was written in 1998, but I didn’t post it to my website until 2001, so that may help with the confusion on dates.

What I was referring to was my informal network of colleagues and professional acquaintances to whom I could turn if I needed information, i.e., people who could help me learn whatever it was that I was seeking. I still have a large personal learning network and am part of many other people’s PLNs as well, although none of us use that term. When I started using the phrase, I wasn’t particularly thinking about this in the sense of a virtual, PC-based network — in fact, in 1998, there weren’t many websites or discussion baords (sic), wikis, etc., that could be used for this purpose. Back then, one of the few that I knew of and used regularly was a list service started at Penn State for training and development professionals. It was later stopped and transferred to Yahoo Groups.

I hope this is helpful.

Best regards,
Dan Tobin

From there, I did a bit more digging and discovered a 1999 article written by Dori Digenti (Collaborative Learning: A Core Capability for Organizations in the New Economy. Reflections, 1(2), 45-57. doi: 10.1162/152417399570160) which uses the term “personal learning network” along with the acronym “PLN”. The use of the acronym is important to me because it denotes a very precise and specific conceptual meaning attached to the phrase “personal learning network”. And it is an acronym that I often see used to replace the phrase “personal learning network” in my network.

In the article, Digenti sets up a six-phase model to build and develop collaborative learning competency in organizations. In phase six of the model (Enhancing Interdependence, p. 53), Digenti speaks specifically to the idea of the personal learning network, and uses the phrase as an acronym.

As technology and change gain momentum, no professionals can claim enough mental bandwidth to maintain learning in all the necessary endeavors they are engaged in. An organization can sustain its collaborative learning only by building interdependence among members. This is where the personal learning network (PLN), born of series of learning collaborations, can be a valuable tool for enhancing and building interdependence (Digenti, 1998a).

The PLN consists of relationships between individuals where the goal is enhancement of mutual learning. The currency of the PLN is learning in the form of feedback, insights, documentation, new contacts, or new business opportunities. It is based on reciprocity and a level of trust that each party is actively seeking value-added information for the other.

The first paragraph, where the term personal learning network is introduced, contains a reference to a 1998 unpublished manuscript by Digenti called “The Learning Consortium Sourcebook”. I could not find that work, but I wonder if it might be the source of the term personal learning network as I understand and use it today?

The paper then goes on to describe how to develop a personal learning network, and there are two points that Digenti makes that resonate strongly with me. First, you have to give to get (p. 53).

How do you build a PLN? First, it is important to overcome the hesitation around “using” people. If you are building a PLN, you will always be in a reciprocating relationship with the others in the network. Ideally, you should feel that your main job in the network is to provide value-added information to those who can, in turn, increase your learning.

Second, it takes time and work (p. 53).

To have a truly valuable PLN, investments in time and resources are essential. This requires an extension of the typical transactional business mind-set. If, as a business manager or change agent, we “do the deal” and fail to consider building our PLN, we have lost much of the value of our interactions. This is particularly true in the activities of collaborative learning, where each project we engage in should enhance and broaden the PLN of each member.

Now, this was hardly an exhaustive academic search for the term, so I suspect that there are more uses of it from around that time stuffed away somewhere. But it appears to me that the phrase “personal learning network” as I use and understand the term today may have originated in the work of these two authors around 1998-99.

 

View documents in the browser with Google Docs Viewer

Google Docs Viewer is a handy little service that lets you view documents and presentations within the browser without having to open a third-party application. It eliminates the need for students to have additional applications (such as PowerPoint or a PDF reader) installed on their computer to view PowerPoint or PDF files.

Here is an example. I am using an old PowerPoint presentation on podcasting done by a colleague of mine a few years ago that lives on our web server. The link to the original PowerPoint file (2.2 MB) will either download to your computer or force you to open PowerPoint to view the presentation (depending on how your browser is configured, and assuming you even have PowerPoint). Now, here is a link to the same PowerPoint presentation (which opens in a new window/tab), but this time viewed through the Google Docs Viewer.

It’s important to note that I did not upload the presentation to the Google Docs Viewer site – the original PowerPoint file still lives on our web server. The Google Docs Viewer is not a repository to store documents.  If I delete the original file on our web server, the link to the Google Docs Viewer breaks since the original file is no longer available. I retain complete control over the source file, but the user gets the benefit of not having to download and open a PowerPoint file.

How to use Google Docs Viewer

There are a couple of ways to use Google Docs Viewer: directly from the site, or by constructing a special URL that links your document to the viewer.

To use the site, enter the URL of the PDF or PowerPoint document and click Generate Link. You then get a few different options: a link that you can tweet, IM or email; HTML link code that you can paste into a website, blog or LMS; or embed code that will bring the document into your blog, site or LMS (I’ve embedded a PowerPoint presentation at the end of this post so you can see how this works).

The second way to access the service is by crafting your own URL, so you don’t even need to visit the website. Start with the base path http://docs.google.com/viewer, followed by a question mark (?) and the path to the original document (url=path). The path needs to be URL-encoded, so no spaces or special characters. Knowing this, I can build a URL to any PDF or PowerPoint; a link to our example above would look like this: http://docs.google.com/viewer?url=http%3A%2F%2Fdisted.camosun.bc.ca%2FDE%2Fpodcast.ppt.
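If you generate a lot of these links, the encoding step is easy to script. Here is a minimal sketch (the helper name is my own invention) that percent-encodes a document URL and prepends the viewer’s base path:

```python
from urllib.parse import quote

def viewer_link(doc_url):
    # Percent-encode the entire document URL (safe="" forces "/" and ":"
    # to be encoded too) and append it to the viewer's base path.
    return "http://docs.google.com/viewer?url=" + quote(doc_url, safe="")

print(viewer_link("http://disted.camosun.bc.ca/DE/podcast.ppt"))
# → http://docs.google.com/viewer?url=http%3A%2F%2Fdisted.camosun.bc.ca%2FDE%2Fpodcast.ppt
```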

So, Why Use Google Docs Viewer?

Why would you even do this and not just link directly to, say, the original PowerPoint file? Well, from a technical perspective, there are some barriers for students when they try to deal with PowerPoint files (and, to a lesser extent these hold true for PDF files as well, although PDF is by far a more web friendly format than PowerPoint).

  • The files can be large, especially if you use animations and transitions.
  • They require students to have additional software installed on their computer, in this case PowerPoint or the PowerPoint Viewer.
  • Depending on the browser, how it is configured and the security settings, PowerPoint files can cause strange and unexpected behaviours. One user may have their system set up to have PowerPoint open in a browser window, while another may be prompted to download the file. A third may get a security warning that a potentially malicious file is about to be opened.
  • The files take a long time to load. In most cases, when someone clicks on a PowerPoint link, the first thing that has to happen is that PowerPoint has to open up, which eats up time. No one likes to wait for content and those few seconds add up to frustration for users.

By using a service like Google Docs Viewer (or Slideshare, another free alternative), you can mitigate some of these barriers and provide a better experience for students.

Here is the same presentation embedded using Google Docs Viewer.

 

4 Alternative Blogging Interfaces for WordPress

I’ve been a WordPress user since the b2 days, but only lately have I begun to explore different methods of posting content to a WordPress blog. In the past, I have used the standard web interface for creating posts, with the occasional foray into using the Firefox ScribeFire plugin (more on that in just a moment).

Why alternatives? Well, it’s not that I think the standard WordPress interface is bad or poorly designed – far from it. But I am looking at alternative, streamlined ways of getting content into a site that may be more familiar to non-WordPress users.

Over the past few days I’ve been playing with alternative ways to publish content to a WordPress site, and here are 4 that I have come up with.

Using Word 2007
I really like this method, not because it is the best tool in this list, but because it is the most familiar interface for the faculty I support. Everyone is comfortable using Word and, while it won’t give you all the functionality of the web interface, it gets the job done with some nice functions in an interface that users are familiar with.

Setup is easy and straightforward, and you can insert text, links, tables and images, including WordArt, Symbols, Shapes and SmartArt. Blog management and organizational options are pretty minimal, but include the ability to post as a draft and choose an existing blog category for the post. You can also open previous posts from your blog to edit.

A lack of headings in the toolbar is a frustration I have with the interface, and the reason why the subheadings for this post are appearing as 14 POINT (???) headings and not h3 tags as I would prefer. Microsoft has instead decided to put bigger and smaller buttons on the interface. This is something Microsoft has done with other html editors I’ve come across (yeah SharePoint, I’m looking at you) and it is an annoyance I find maddening. Not only is this semantically incorrect (let me make a heading a heading and a paragraph a paragraph please), but it also overrides the set CSS in the WordPress themes. It would be far better if they just left the text options as standard html tags, which would be semantically correct and would also ensure consistency in design.

That said, in terms of something my faculty will find easy to use, the Word interface seems like an early winner. And anything that helps people move away from posting links to their Word documents and posting in html is a winner with me.

By Email

Another familiar interface for my users: you can post to a WordPress blog from any email client. While this does require a bit more technical work to initially set up, you again get a composing environment that is really user-friendly and familiar, especially for the slightly technophobic faculty.

This is bare bones in terms of functionality. The subject line becomes the title of the post, and the body of the email becomes the content. All HTML in the email is stripped out, and it does not support uploading attachments or images. You also cannot choose which category your post appears in; it lands in whatever the blog’s default category is. This does not have the functionality of Posterous, but in terms of getting content onto the web quickly and painlessly, it’s a fine alternative.
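The mechanics here are plain email, which is why it feels so familiar. A sketch of what a post-by-email looks like programmatically — the secret posting address, sender and mail server below are all invented, and the actual send is left commented out:

```python
from email.mime.text import MIMEText

# The secret posting address is whatever your WordPress setup gives you —
# the one below is a made-up placeholder.
SECRET_ADDRESS = "secret123@example.com"

def make_post_email(title, body):
    msg = MIMEText(body)           # email body becomes the post content
    msg["Subject"] = title         # subject line becomes the post title
    msg["To"] = SECRET_ADDRESS
    msg["From"] = "instructor@example.edu"
    return msg

msg = make_post_email("Week 3 readings", "This week we look at PLNs...")

# To actually send it through your own mail server:
# import smtplib
# smtplib.SMTP("mail.example.edu").send_message(msg)
```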

ScribeFire

ScribeFire is a Firefox plugin that lets you post to your blog from within the browser. It is a full-featured alternative to the native web interface. I’ve used it in the past and, while I like it, I have found that the formatting sometimes goes a bit wonky when the post is published; the post doesn’t always look like I would expect, with the underlying HTML code getting rewritten. Still, you can do pretty well anything with this tool that you can with the WordPress interface. It’s handy when you come across something on the web that you want to blog about quickly, or if you have no web access but still want to compose a post to publish when you reconnect.

Google Docs

Cole Camplese sent me scurrying down this path a few days ago when he tweeted a test post (which looks like it has since been deleted). So I gave it a shot and found out that you can post directly to WordPress from Google Docs. In the example from a few days ago, I included an image pulled from my Flickr account and a drawing done in Google Docs. Connecting was pretty straightforward; however, there was no specific WordPress API hook. Instead, I used the Movable Type API, which connected, but may explain why, when I posted, the post showed up on the blog sans title.
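For the curious, the Movable Type and MetaWeblog XML-RPC endpoints that WordPress exposes take a struct of post fields, and my guess is the missing title came down to a field the client didn’t send. A hedged sketch of what a post through that API looks like — the blog URL and credentials are placeholders, and the call itself is left unexecuted:

```python
import xmlrpc.client

# Placeholder endpoint — WordPress answers XML-RPC at /xmlrpc.php.
BLOG_URL = "http://example.com/blog/xmlrpc.php"

def build_post(title, body):
    # The MetaWeblog API expects a struct with "title" and "description";
    # a client that omits "title" is one way a post arrives untitled.
    return {"title": title, "description": body}

content = build_post("Test post", "Posted via the MetaWeblog XML-RPC API.")

def publish(content):
    server = xmlrpc.client.ServerProxy(BLOG_URL)
    # metaWeblog.newPost(blogid, username, password, struct, publish)
    return server.metaWeblog.newPost("1", "user", "password", content, True)
```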

Have you used any of these tools? Are there any other ways to create content outside of the WordPress user interface? If so, I’d love it if you let me know.

 

Etherpad adds timeline slider

Etherpad is a collaborative document tool that allows multiple users to work on the same document in real time on the web. Think of it as a hybrid of Google Docs (which is not quite as synchronous) and a live chat tool.

I’ve used this tool for many collaborative projects, and for quickly drafting a collaborative document it is fantastic: easy to use, free, and with a document revision history so you can see previous versions of the document. Today, that particular feature got a nifty little boost – an interactive document timeline. Now you can watch a video of your document, from birth to finished project.

For educators, this is a really handy evaluation tool. If you are trying to monitor group contributions to a collaborative project, this feature will be incredibly useful. All participants are colour coded so their contributions to a document are highlighted by colour, which lets you quickly see who made major contributions to the document as it was being constructed.

In addition, the video timeline allows you to see the group’s progress on the task at hand. If they got off topic, you’ll be able to see where they went wrong, in what context (what changes were happening that might have led them to the diversion), and who might have brought the team back on track (if they did). It’s a transparent way to quickly view the process unfold.
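To make the evaluation idea concrete: if you could export a per-author revision log (the format below is invented for illustration, not Etherpad’s actual export), tallying each person’s contribution share takes only a few lines:

```python
from collections import Counter

# Hypothetical per-revision log: (author, characters added). The
# colour-coded view in Etherpad reduces to roughly this information.
revisions = [
    ("alice", 120), ("bob", 45), ("alice", 80), ("carol", 10), ("bob", 200),
]

def contribution_share(log):
    totals = Counter()
    for author, chars in log:
        totals[author] += chars
    grand = sum(totals.values())
    # Percentage of total characters contributed, rounded to whole numbers.
    return {author: round(100 * n / grand) for author, n in totals.items()}

print(contribution_share(revisions))
# → {'alice': 44, 'bob': 54, 'carol': 2}
```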

Unfortunately, the timeline view is not available on the free public version (which allows many concurrent users), but you can get a free professional version for up to 3 concurrent users that does include the timeline.

 

Building an EdTech library – what would you recommend?

Library

I just received the textbooks for my next class and among them is Effective Teaching with Technology in Higher Education by Tony Bates and Gary Poole. I was expecting to run into this one at some point during my Masters and I am happy that it is sooner than later. It’s a book I have heard many references to in the past few years and one I am anxious to dig into.

I’ve been going over a recent post by Alec Couros where he asked his network for 5 article/book recommendations for an Associate Dean in his office to help “inform his understanding of current changes regarding social networks, knowledge, and technology in education”. So, I am going to toss something similar out here. My network is considerably smaller than Alec’s, but hopefully I’ll get a few responses to bolster my fledgling EdTech bookshelf (like my Masters program won’t pile enough on over the next 2 years).

Here is the question to you, my considerably more experienced EdTech brethren: what would you consider some of the seminal or defining works in our field that examine the intersection of technology and education? If you had to recommend one or two books that seem to inform our industry/sector as a whole, what would those be?

Photo: Iqra: Read by swamimbu. Used under Creative Commons license.

 

Screenr: free web based screencasting tool

Screenr is a web-based tool that allows you to quickly create screencasts. Free and entirely in the browser, there is no software to download, unlike Jing, which Screenr otherwise closely resembles. Videos are limited to 5 minutes and Screenr will host your videos, providing you embed code to put the videos where you want. You can also tweet the screencast out on Twitter, download an MP4 version, or publish the final result to YouTube.

Here’s a demo.

Besides Camtasia and Captivate, the two mainstream commercial products that allow you to do very sophisticated screencasts with interactivity, post-production editing, and branching, there are a number of free screencasting tools similar to Screenr out there, including Screenjelly and Screentoaster. For Firefox users there is also a handy plugin called Capture Fox.

In my mind, the difference between Screenr and these other tools is that Screenr is coming from the e-learning world and is supported by Articulate, a company that makes a very successful line of e-learning products. And, as Articulate CEO Adam Schwartz says, the cost for Articulate to run Screenr is:

…really cheap for us. We’re hosted on the Rackspace cloud, and the cost for doing this is like two orders of magnitude less than it was when we looked at this two years ago. It would cost more as a marketing fiasco to shut this down than it would to keep it running.

From the same article, Schwartz also said that Screenr

is a first step in the company’s creation of a new group of e-learning products, which he compares to the popular software-based screencast products from Camtasia. But with Articulate’s focus on education, the tools will be “more about interactivity, branching, learning, and simulation.” His fully developed screencast tools will have the capabilities for grading and quizzing, and will be integrated into more fully formed educational suites.

So it sounds like Articulate has some pretty big plans with Screenr and this is just the beginning.

You do, however, need a Twitter account to use Screenr as the service is completely integrated with Twitter. This might deter some who have been reluctant to take the Twitter plunge, or might be the deciding reason for some to start using it. A big part of the idea of Screenr is to allow people to quickly make a screencast and then publish it to their network via Twitter, reinforcing the idea (for me at least) that one of the core values of Twitter is as a network notification (distribution) system.

 

Google Docs does a lot of things well, but…

Google Error

…writing an academic paper with APA formatting isn’t one of those things. Which I learned writing my first paper for my Masters last week.

The first issue is such a basic feature that I (wrongly) assumed it was part of Google Docs: page numbering. Um, turns out it isn’t, but I only discovered this at the last minute as I was cleaning up the formatting to make the paper submission-ready.

Now, I could have gone in and manually added page numbers as this was a relatively small paper of 1500 words, but if I happened to be working on a 50 page paper (or, eeks, longer) that would have been a pain.

I did discover a page numbering hack, but it involved going into the HTML code – something I have no problem doing, but that others who are just looking for some basic word processing capabilities may not.

But the clincher for me was the failure to get APA references formatted correctly in the bibliography. The problem was the hanging indent on the second line of each reference. To get a hanging indent, I had to both modify the HTML and create a custom CSS class.

.hang {
        text-indent: -0.5in;
        margin-left: 0.5in;
}

I used inches since this was something that will be printed.

Again, not a huge problem for me, but for someone who doesn’t know either HTML or CSS, a real barrier.

But the disappointing part was that when I applied the CSS to create the indent, it appeared to stick, but then it suddenly reverted back to no hanging indent. In front of my eyes. One minute it was there, the next it wasn’t.

I did some digging and found that if I applied the style and then quickly hit save, that seemed to work (of all the kludges in the world, this has to be the kludgiest, and it makes NO sense to me). However, it was really random, and occasionally I would be working on the document and it would suddenly revert from the hanging indent to regular formatting.

Needless to say, this was both frustrating and disappointing. The one time it did stick long enough for me to print/download, I noticed that the APA formatting worked when I printed a PDF copy of the paper, but when I downloaded a Word version (as requested by my instructor), the APA formatting was gone.

This was a deal breaker for me with regards to relying on Google Docs for anything more than casual use. Which is fine. It is still a hugely useful product. The night before, for example, one of my team members and I were working collaboratively on a Google Doc over IM. She was in Ontario and I was at home in BC and it worked flawlessly for collaboration.

But to rely on Google Docs for something as structured as an APA formatted paper? I downloaded Open Office last night.

Yes, there is a reason why it is still in Beta.

 

New Netvibes feature: drag and follow widgets

A few days ago, just as the D2L user conference Fusion was starting in Minneapolis, I created a Twitter alert for the conference tag, #D2L09. Since I couldn’t attend this year I wanted to virtually keep track of what was happening at the conference.

To do this, I went to the Twitter search page and typed in the conference tag #D2L09, which brought up a list of tweets from the conference. From there I grabbed the RSS feed and manually created a widget in Netvibes (glowing fanboy praise of Netvibes in just a minute). With the widget created, I did not have to continually go back to Twitter and search for that tag every time I wanted a conference update – the tweets automatically appeared in the Netvibes widget as they rolled in.
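Under the hood, that widget is really just polling and parsing the feed. Here is a rough sketch of the parsing half, run against a trimmed, made-up sample of what the Atom search results looked like (the real feed carried many more fields per entry):

```python
import xml.etree.ElementTree as ET

# Invented, trimmed sample of an Atom feed for a hashtag search.
SAMPLE_FEED = """<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Great keynote this morning #D2L09</title></entry>
  <entry><title>Slides from my session are up #D2L09</title></entry>
</feed>"""

ATOM = "{http://www.w3.org/2005/Atom}"

def tweet_titles(feed_xml):
    # Walk the Atom entries and pull out each tweet's title text.
    root = ET.fromstring(feed_xml)
    return [e.findtext(ATOM + "title") for e in root.iter(ATOM + "entry")]

print(tweet_titles(SAMPLE_FEED))
```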

Today, Netvibes released an update which will greatly simplify this process in the future – drag and follow widgets.

If you have a Twitter widget installed on your Netvibes page and you see a hashtag come through in a tweet from someone you follow, all you have to do is click and drag the hashtag onto your Netvibes page. Netvibes automatically creates a new Twitter widget for you, populated with search results for that hashtag. Very handy!

You can also do this with people you follow in either Twitter or Facebook. Drag their username and a breakout widget with just their stream is created. Also very handy for following a few key people in my network.

Okay, here is the Netvibes fanboy gushing (which could really be gushing about any of the current breed of customizable web startpages, from iGoogle to Pageflakes).  Of all the web tools I use, none (save Firefox) is more used than Netvibes, my personal startpage that is my aggregator for all things web.

When people ask me how I manage to keep track of all this web stuff, I say Netvibes. It is the dashboard from which I can monitor numerous email accounts, my Delicious, Twitter, Friendfeed, Flickr, YouTube and Facebook networks, see who is commenting on and linking to my blogs, listen to podcasts, catch the current web zeitgeist,  and set up alerts for everything from Twitter tags to academic publications through our library. All the information I need is on one handy dandy page.

What began as a tool I used to keep track of blog subscriptions (functionality that has now been replaced for me by Google Reader and Feedly) is fast becoming a real-time web monitoring service that allows me to quickly gauge what is going on, and with whom, in my world.

If you haven’t explored the wonderful world of personal startpages, I highly recommend it. It is a powerful and (for me) indispensable tool to quickly and efficiently take the pulse of my network and track my interests across the web.
