What I learned at Northern Voice

Northern Voice. Finally. After trying to go for the past two years, 2010 was the year my stars aligned and I was able to make it across the strait for the big show. If you are not familiar, Northern Voice is an annual conference that began in 2005 as a conference for bloggers, by bloggers, and has since expanded to include other types of social media.

Now, while the sessions and topics are interesting and all (some were recorded, but better yet, check out this set of graphics created by Rachel Smith on her iPad), the big draw for me was the opportunity to reconnect, and to put faces to avatars, with so many of the people I admire in the academic and technology space I haunt. My PLN was IRL. OMG.

This isn’t going to come close to capturing everything, but here are some personal things I’ve taken away from the weekend.

But the biggest epiphany that I had over the weekend was with regard to the great debt of gratitude I owe my colleague and friend, Scott Leslie. If I had to point a finger at the connector (in the Gladwell sense) within my personal learning network, Scott is the one. He has spread his coattails wide and has graciously allowed me to ride on them, which has given me the opportunity to meet people whose work at the junction of education and technology I admire. He’s like the friend who manages to get backstage passes to the best concert in town and invites me along for the ride. I feel I owe a lot to Scott, and am grateful to have him within my circle as both a colleague and a friend. Oh, and he unorganized a kick-ass altmoosecamp.

Here’s what others thought of Northern Voice this year (although after reading a few of the more mainstream articles, I was left wondering if we were at the same conference).

 

Facilitating a distributed discussion – an experiment

Get Connected!

The latest course in my Master's is Facilitation and Community Building, and I have an interesting experiential assignment this week. I am working with two other members of my cohort to facilitate a discussion with the rest of our cohort.

Our topic is facilitating collaboration in virtual teams, and we’re trying something a little bit different, which has me feeling a tad nervous (I keep telling myself nervous is good when learning). In the spirit of networked learning, instead of facilitating the discussion in our closed Moodle forum, we are going to take the discussion outside of the LMS and onto a couple of blog posts we found that are related to our topic.

Part of the reason we decided to do it this way is that all three of us facilitating this week are strong believers in networked learning as a way to engage with a broad array of voices and opinions in our field. While the assignment we have come up with may be a bit more prescriptive than constructivist, it will hopefully give the rest of our cohort a brief opportunity to try their hand at networked learning.

For the past couple of days, our cohort has been reading 2 articles on facilitating virtual teams in a collaborative environment. Tonight we posted the second part of the assignment and have asked them to visit (at least) one of three blog posts related to the topic and leave a comment on the blog. The posts we have chosen are:

  • Lurking and Loafing from Steve Wheeler talks about social loafing, lurking and how to encourage participation.
  • Collaboration from Ben Grey questions the differences between collaboration and cooperation.
  • Dysfunctional Teams from Tony Karrer is a nice summary of Patrick Lencioni’s Five Dysfunctions of a Team.

Hopefully, these authors won’t mind us practicing a bit of networked learning to try to spur some conversation on the topic of collaboration and virtual teamwork. So Steve, Ben and Tony, if you happen to notice a few new comments on these posts this week, take it as a good sign that you’ve engaged some of our cohort. There are nine of us, so distributed over three blogs, you hopefully won’t feel overwhelmed by a sudden influx of comments.

And if anyone in my network reading this would like to join in our conversation, that would be wonderful as well. If you get a chance, pop by these posts, respond to a few comments and help us illustrate the power of networked learning.

Photo: Get Connected by Divergent Learner used under Creative Commons license.

 

Image editing and embedding content in WPMU 2.9

I finally got around to upgrading our WPMU instance to 2.9 (2.9.2 to be exact) and playing with some of the new features. So far the image editing has been a bit of a disappointment, but the oEmbed feature is, quite simply, awesome. Somehow, embedding content is now even easier than before.

The new image editor has some basic image editing functionality: you can crop, resize or rotate a photo. But I couldn’t get cropping to work, despite fiddling with it for the better part of an afternoon. At first, how to crop wasn’t fully intuitive to me, and it wasn’t until I read this blog post that the (admittedly dim) light bulb went off. Oh, I have to hit the crop button again. D’oh. Then when I went to insert the cropped image into the post, the aspect ratio got skewed as the cropped image took up the entire dimensions of the original. I also couldn’t save the cropped image back to my media library, but as others have pointed out, these issues may have more to do with folder permissions and settings in my PHP libraries than with the WP image editor, so I’ll be taking a closer look at those as I play more with image editing.

One other little thing about the image editor: it seems to be available only when you first insert an image into a post. If you try to go back and edit the image after it has been inserted, the editor doesn’t appear as an option in the pop-up. You have to delete the image from the post and reinsert it to enable the editor again.

Okay, that aside, the oEmbed support is a killer feature, especially for someone who finds themselves supporting novice users. Embedding content from another site has never been so easy. If you want to embed content from an oEmbed-enabled site (and a number of the big ones like YouTube, Flickr, Scribd and blip.tv are oEmbed capable), all you pretty well have to do is copy and paste the URL of the content you want into the body of your post (make sure it is on its own line and not hyperlinked) and you are good to go. Good stuff.
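As a rough illustration of the rule at work (a toy sketch, not WordPress's actual implementation), the auto-embed behaviour boils down to: a bare URL sitting alone on its own line gets swapped for embed markup, while URLs inside sentences or links are left untouched:

```python
import re

# Toy sketch of the WP 2.9 auto-embed rule. The EMBED_TEMPLATE below is a
# stand-in for the real HTML that an oEmbed provider would return.
EMBED_TEMPLATE = '<div class="embed">{url}</div>'

def auto_embed(post_body: str) -> str:
    out_lines = []
    for line in post_body.splitlines():
        stripped = line.strip()
        # Only a bare URL on its own line qualifies for embedding.
        if re.fullmatch(r'https?://\S+', stripped):
            out_lines.append(EMBED_TEMPLATE.format(url=stripped))
        else:
            out_lines.append(line)
    return "\n".join(out_lines)

post = """Check out this talk:
http://www.youtube.com/watch?v=abc123
It was recorded at Northern Voice."""

print(auto_embed(post))
```

The middle line of the sample post gets replaced with embed markup; the URL-free lines pass through unchanged. The real feature additionally calls out to the provider's oEmbed endpoint to fetch the actual player HTML.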

 

Viewing my messy mind with Google Wave

While I have been dipping my toes into the waters of Google Wave for a while, this month I am taking the plunge (to push the water metaphor) and testing it out with two different groups.

The first is at SCoPE where Emma Duke-Williams from the University of Portsmouth is facilitating a discussion around tools for online collaboration. In addition to the usual SCoPE forums, we have been playing with Google Wave as one of those tools (join us as we muck around group:scopecommunity@googlegroups.com).

The second project is much smaller: I am working with two members of my Master's cohort as part of our developing online communities course. We have an experiential learning task to facilitate a week-long discussion around (oh, what a coinky-dink) collaborative tools. Talk about synchronicity. So we are using Wave to plan the session.

Google Wave is an interesting mix of both synchronous and asynchronous, something that is becoming more common with web apps. It is synchronous when it needs to be, and it is quite easy to chat and collaborate in real time in Wave. It is also easy to work asynchronously, coming back to a Wave after the fact to add on or view an archive of a shared document or artifact. In the past year or two, with tools like Wave, Etherpad and even Twitter, I have been getting the feeling that the distinction we have used in e-learning between asynchronous and synchronous is beginning to blur, and most of the tools we will use on a regular basis in the future will be able to be both.

Yesterday I had a synchronous chat in the SCoPE Wave with Sylvia Currie where we just happened to be in the same Wave at the same time. I am not sure why, but I find it oddly novel to go into Wave expecting to see asynchronously created content, and then suddenly seeing this little coloured cursor actively typing away and adding content. It’s kind of like walking into what you think will be an empty room and startling yourself when you notice the person working feverishly away at something at the table in the corner.

It’s this synchronous side of Wave that I find myself adjusting to. When Sylvia and I started chatting, I noticed that, because you can see everything as it is being typed, I became very conscious of what I was typing. For someone who is used to writing, rewriting and massaging all my asynchronous contributions to death, exposing the messiness of how my mind works felt disconcerting. When I write, I often start sentences, hit backspace 35 times, start over, move these words from over there to here and hack hack hack (don’t even get me started on my spleling). And knowing that each keystroke is recorded and archived forever in that Wave makes me even more self-conscious about what I type.

The flip side of that dilemma is that you can see the process – it is transparent, and if I was wanting to see an example of collaborative work when assessing a group project (for example) this kind of transparency into the process is gold.

Also, the archival ability of Wave is something I see as a real strength, but it is going to require a mindshift in how I work collaboratively with others. Knowing that every keystroke is archived and can be reviewed at any time makes it slightly different from a wiki, where only actual changes are recorded. I think this gives collaborators even more freedom to hack away at my work, knowing the original is still there. Now, I am not sure about other people, but editing someone’s words makes me feel uncomfortable, so instead of changing their Wave content, I find that I end up adding comments as a reply or within their post. But after seeing how much clutter that adds, I am rethinking it; I am beginning to realize that liberal commenting might actually be hurting Wave use. I think that, in the Wave world, we are supposed to liberally edit and change each other’s content. This is going to require a bit of negotiation between collaborators, knowing that all content is fluid, even more so, I think, than with a wiki.

On a practical note, I notice that Google has added some notifications to Wave, which weren’t there in the beginning. You can now get email notifications when Waves are updated. But I dislike email notifications, so instead I have been using the Google Chrome Wave notifier extension, which is turning into one of my most-used extensions during my Wave experiments this month. It sits unobtrusively in the top corner of Chrome and shows how many updates are waiting for me in Waves I am taking part in. Very useful.

Photo by VespaGT used under Creative Commons license

 

BC Study on RateMyProfessors

Last week I attended a presentation on some research done by one of our instructors, Dr. Janet Reagan, on informal student course evaluations, specifically focusing on the website RateMyProfessors. Those working within the BC college system may find the research particularly interesting, as the RMP data she used was pulled from three anonymous BC colleges, so it is very relevant for those of us working in this sector.

One of our college’s research analysts, the person charged with running our in-house course surveys, was in attendance, and remarked that there was a great deal of similarity and consistency between the informal information students posted on RMP and the results of Dr. Reagan’s study. I am not sure how our faculty perceive sites like RMP, but I think it is easy to disregard the validity of comments made on public spaces like this as places where students vent. Dr. Reagan’s research shows that these comments are valid and, surprising to some, equally weighted between positive and negative. Very useful and relevant phenomenological information can be found on sites like RMP, and there is a great deal of congruency between what students perceive as effective teaching practice and what the research literature in this area suggests.

As part of the research, Dr. Reagan has developed the ACCEPT Model of Student Discernment of Effective Teaching Characteristics, which can be used as criteria to evaluate student perceptions of good teaching practice.

  • Articulate: Teachers provide consistent, clear and distinctly accurate instruction to facilitate and direct the teaching and learning process.
  • Competent: Teachers are qualified to instruct in adult education settings and exhibit skills expected of the teaching profession. They are organized and prepared for content delivery in an interactive style, and understand strategies to fairly and effectively assess learning.
  • Content-Experts: Teachers are current, informative, reality-based content experts with substantive experience in their topic areas that may include their academic research background, or their career background, or their trades or industry background.
  • Empowering: Teachers empower students in their learning to build self-confidence and assertiveness. Teachers challenge, motivate and encourage adult learners to think independently and critically.
  • Perceptive: Teachers display a high level of authenticity and credibility including insight, intuition, and humour. Perceptive teachers care about the success of their students and are approachable.
  • Trustworthy: Teachers are aware of their professional, ethical and moral obligations in relation to the trust relationship of teaching. Teachers are respectful in thought and reliable in action and have earned the students’ confidence.

Dr. Reagan goes on to make 6 recommendations based on the results of the study.

  1. Explore the use of informal online student evaluation of effective teaching characteristics, to promote credible and authentic teaching practice, aligned with self-regulated learning strategies that are both beneficial and desirable to adult learners.
  2. Promote voluntary faculty development opportunities that demonstrate how humour and novelty may be used to enhance learning, as many anecdotal student comments relate to the positive effect of humour and novelty in the learning environment or, conversely, the negative effect when they are absent.
  3. Address power relations in the classroom that interfere with learning, as voiced through informal student evaluation of teaching effectiveness, and intervene when the quality of teaching is unacceptable to students and the teaching professions.
  4. Build on the framework of the ACCEPT Model of Student Discernment of Effective Teaching Characteristics to develop an informal adjunct to the institutional rating system. The interpretive analysis of this study revealed that students’ informal anecdotal comments align with empirical research on effective teaching characteristics and principles of adult learning.
  5. Build on the framework of the ACCEPT Model of Student Discernment of Effective Teaching Characteristics to promote and integrate effective teaching characteristics. Also, with faculty agreement, conduct regular classroom research and improve teaching practice with ongoing in-service training and student and peer feedback.
  6. Improve the method of retrieving student evaluation of effective teaching characteristics by accessing informal and less traditional student communication, including data accessed from anonymous online faculty rating systems, while also acknowledging that students’ informal comments reflect credible commentary, even though possible abuses could limit validity in specific instances.

Dr. Reagan’s research was on RMP, but I suspect that similar results could be found monitoring any open social network and I believe this is a great opportunity for educators. Over the past year or so I have been monitoring keywords related to our institution on Twitter and it is always interesting when I see a student comment that I know is directly related to a class they are taking, or some kind of experience they are having with our institution. To me, the realtime web offers great potential for educators to provide immediate and timely feedback and intervention based on what our students are saying about the experiences they are having with our institution as they are having them. Many large companies are doing this kind of social media space monitoring with very positive results. Maybe it is time educators took a serious look at monitoring social networking sites as a regular part of their formative assessment strategy.

The full dissertation is available at DSpace at the University of Victoria.

 

To Kill a Mockingbird – Ning Style

I love it when I see teachers like Jenny Johns at work. Jenny has created a great English lesson using Ning in which her students virtually become one of the characters in “To Kill a Mockingbird”.

I love this video for a couple of reasons. For one, digital literacy skills are seamlessly embedded into the assignment. This is not a lesson on how to use Ning, it is a lesson about the characters in “To Kill a Mockingbird”, yet it touches upon many issues young people face in a tech mediated landscape. The second reason I love this assignment is that it resonates with the students because it occurs in a space they are familiar with – a social network (note how the instructor has the students “friend” the other characters from the stories).

The video is from the PBS Frontline documentary Digital Nation.

 

Adaptive learning, disputes, and breaking out of echo chambers

I have just installed a Firefox addon called Dispute Finder. Dispute Finder is an addon developed by Intel Research and UC Berkeley that highlights disputed information on a web page and displays alternatives to the disputed claim. It uses both crowdsourcing and curated resources to try to expose you to alternative views about what you are reading.

As my Master's has progressed, I find myself becoming increasingly interested in adaptive learning systems and the role that technologies could play in shaping a user's personal learning environment. Now, I am no computer scientist, and when I hear words like ontologies being thrown around I have to admit my head begins to ache slightly. The depth of my knowledge of semantic web technologies doesn’t go far beyond a high-level flyby of FOAF and RDF. Nonetheless, I remain interested in advancements in recommendation systems, both technical (semantic) and human (folksonomies), and the implications they could have for learning and constructing knowledge.

More and more on the web we are seeing personalized recommendations pop up for us to explore, often based on our past behaviours or, increasingly, recommendations provided to us by our social networks. Amazon recommends books to me not only based on what I have bought or browsed before, but also what other people who have bought or browsed similar titles to me have found interesting. Facebook will recommend friends to me based on who is already in my network, and adjust the information I see about that network based on my viewing habits (and some other variables, I am sure).  When Facebook introduced a real time stream a few versions ago, it did so with a News view and a Live view. At the time I wasn’t sure what the differences were, but after using it for awhile the advantage of the News feed becomes clear. The News feed is content that the system deems to be more relevant to me – it is a filter to help control the tidal wave of network information (I have Clay Shirky in my head saying “it’s not information overload – it’s filter failure“). And most of the time, it is right.
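The "people who bought similar titles" style of recommendation can be sketched with simple co-occurrence counting. This toy example (the purchase data and titles are invented, and this is far simpler than Amazon's actual system) recommends the items most often bought alongside a given title:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: each set is one customer's basket.
purchases = [
    {"Here Comes Everybody", "The Tipping Point", "Wikinomics"},
    {"Here Comes Everybody", "The Tipping Point"},
    {"The Tipping Point", "Blink"},
    {"Here Comes Everybody", "Wikinomics"},
]

# Count how often each pair of items appears in the same basket.
co_counts = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, n=2):
    """Return the n titles most often bought alongside `item`."""
    scores = Counter({b: c for (a, b), c in co_counts.items() if a == item})
    return [title for title, _ in scores.most_common(n)]

print(recommend("Here Comes Everybody"))
```

Real recommendation engines weight these counts, normalize for popular items, and fold in browsing behaviour and social signals, but the underlying "co-occurrence implies relevance" idea is the same.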

I am intrigued by what it means for learning if some of the construction of these connections is being done by technology, and how educators can assist learners in setting up environments that are conducive to this kind of semi-organic discovery. On one hand, these types of recommendations help to bring order to the chaos and may open up paths for exploration that may not always be obvious. On the other hand, they also set up the possibility of developing echo chambers. If the only information I am being exposed to is information congruent with my own views, then how can I be expected to become a critical thinker? After all, being critical often means being able to discern between two opposing points of view. How can you do this if you are only being presented one point of view?

Which brings me back to Dispute Finder and why I find this project interesting. Dispute Finder seems to depart from the general trend of recommendation engines on the web. Instead of recommending things it thinks I will like, it shows me information that may not be aligned with my own views, which opens up a possibility for me to learn.

via interview with Rob Ennals on Spark

 

2010 Horizon Report

I love it when The Horizon Report comes out. It takes me back to being a kid in Northern Alberta, anxiously awaiting the November arrival of the Sears Christmas Wish Book at our house. It offered me a glimpse of what could be in the near future. And it excited me.

If you are not familiar, each year the New Media Consortium and the Educause Learning Initiative publish The Horizon Report, a look into the future at some of the technologies that may have an impact on higher education in the next 5 years. This year the report has picked the following technologies and estimated a time for adoption for each.

  1. Mobile Computing (1 year or less)
  2. Open Content (1 year or less)
  3. Electronic Books (2-3 years)
  4. Simple Augmented Reality (2-3 years)
  5. Gesture Based Computing (4-5 years)
  6. Visual Data Analysis (4-5 years)

Scott Leslie from BCcampus is one of the advisors for the report. This year he travelled to Austin, Texas for the release of the report and created this video, which features interviews with members of ELI and NMC about the technologies in the report. It’s a nice piece of work from Scott that adds useful context around the reasons why these technologies were chosen.

Some things strike me about this list.

First, mobile computing has arrived at Camosun, at least if the connectivity stats coming from our IT Services department are any indication. Last week I was speaking with some members of the department, who said they have had to increase the number of available IP addresses for our wireless network twice this fall to meet the demand from wireless devices on campus. If you are not familiar with how networking works, each device that connects to the wireless network requires a unique address. These are pulled from a limited pool of addresses; once that pool runs out, no more devices can connect to the network until a device returns an address to the pool. I don’t think it’s a far stretch to imagine they will significantly up the pool again this fall. So, we know the students are connecting. How much of that connectivity is being used for learning & teaching is the unknown.
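To put some numbers on why the pool keeps running dry: the pool size is fixed by the subnet mask, so "upping the pool" typically means moving to a larger subnet. The subnets below are made up for illustration, not our actual network:

```python
import ipaddress

# Each step from /24 to /23 to /22 roughly doubles the number of devices
# that can hold an address at once.
for cidr in ("10.10.0.0/24", "10.10.0.0/23", "10.10.0.0/22"):
    net = ipaddress.ip_network(cidr)
    # Subtract the network and broadcast addresses to get usable host addresses.
    usable = net.num_addresses - 2
    print(f"{cidr}: {usable} usable addresses")
```

A /24 gives 254 usable addresses, a /23 gives 510, and a /22 gives 1022, which is why growing wireless demand forces repeated widening of the subnet (or shorter DHCP lease times so idle devices return addresses sooner).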

Second, of all the technologies on this list, simple augmented reality is the one that has me the most excited. I have been playing with augmented reality apps on my Android phone for the past 6 months and can see huge potential for education should they take off. Here is an example of augmented reality in which data pulled from the web is overlayed on top of what you see through your camera phone, kind of like a heads up display you might see in a car.

Imagine scanning the horizon with your smartphone and having geographical information pop up on the screen – the names of those mountains in the distance, the number of salmon that spawned in that creek last year, what developers hold development permits for that parcel of land over there. Very possible, and useful, information.

The barrier I see with this right now is that there is no standard for delivering the information. While many augmented reality browsers are being created, the layers are not compatible with each other, kind of like the early days of web browsers, when websites would only work in either Internet Explorer or Netscape. Here’s hoping we learned from that mess & some open standards begin to emerge as the augmented reality market matures.

As for the other technologies, ebooks have to catch on at some point and you have to think sooner rather than later. 2010 has been dubbed by some as the year of the e-reader, with numerous options now on the market. The advantages of ebooks are numerous – cheaper, easier to update, they don’t use trees, you can increase the font size (a big one for me after spending a term frustrated trying to read 9 point type in a textbook), annotate, snip, republish yada yada yada. They have to catch on, don’t they?

After having lived with a Wii for the past year, I can also see the appeal of gesture-based computing, especially in the area of simulations. I can imagine a carpentry simulation in which you someday swing something akin to a Wii remote to simulate hammering a nail into wood, complete with tactile feedback as the remote vibrates when you strike the nail.

Of course, there are many qualifiers, maybes and outright unknowns whenever you try to predict technology and trends. But one thing seems certain – the innovation train is not stopping, and that makes for very interesting times to be working in educational technology.

 

365Retro: My 2010 Flickr project (and maybe yours)

I have a project for 2010, and I’d love it if you came along. I’ve started a Flickr Group called 365Retro. The idea is to post one photo a day for the entire year. Now, 365 groups on Flickr are not new, but this one is a bit different. Instead of taking a photo with your camera, you have to scan a photo from your pre-digital photo collection.

The idea came to me while I was going through my old photo albums, which I have done periodically over the years. Every time I do, a little voice inside me says, “I should really scan these.” But then real life takes over and I never find the time.

This year, I am finding the time, mostly because my kids are starting to ask me more about my life, pre-kids. So, once a day I’ll be scanning and adding some old photos of my life pre-digital camera. I am really using this as an excuse to do what I have wanted to do for years – scan my old photos. And maybe share a few memories along the way.

One of the other reasons I am doing this is because in the past few months I have seen how a digital artifact, like a photo, can become a touchstone that connects people.

A group of radio announcers from CFGP radio enjoying a night out in Grande Prairie Alberta. From l to r: Peter Hall, Jeff Bolt, Paul Oulette, Clint Lalonde (me), Daryl Olsen.

Last fall, a friend of mine named Peter Hall passed away. I had not seen Peter for 15 years, but had worked quite closely with him for many years early in my radio career.

I heard about his death via a post on Facebook from a mutual friend. I remembered I had some photos of Peter tucked away in my photo collection. So that night I went through the photos, scanned a few, and posted them on Facebook. Before I knew it, people I had not heard from for years who both Peter and I had worked with began to comment on the photos. I reconnected with numerous old friends I had lost track of (including one who now lives in the same city as I do and we have met f2f for lunch since), and many fun memories were shared, all spurred by these photos.

Over the past few years, thanks to social networks, I have met a whole new circle of people. Thanks to a continual stream of tweets, status updates, blog posts and Flickr photos, I have a pretty good idea of who these people are today and what they are up to right now. But ask me about these people and their lives prior to around 2005, when I started actively connecting with people virtually, and I know squat. And I want to know. I like history, and knowing what happened in people’s lives to bring them to the point they are at now.

So, if you have a scanner, some old photos, and a Flickr account, come and connect with us in the 365Retro group. Fill in the pre-digital gaps to give your friends and family a more complete picture of your life and history. The photos can be whatever you want to scan and share; if you can add some context or a story that fills in the details about the subject of the photo, all the better.

If you don’t have a Flickr account, you can set one up for free. Once you have your account, join the 365Retro Flickr group. Scan and post a photo a day to your Flickr account, and send the photo to the 365Retro group.

That’s it! You’ve participated. And don’t worry if 365 sounds daunting. Contribute what you can. Or, if you don’t want to contribute, you can pop by and laugh at the various mullet and facial hair combos I have sported over the years.

 

Adventures in backing up WPMu

I’ve been working on setting up some backup systems in our instance of WPMu and have been struggling a bit. While I certainly appreciate that backups for WPMu can be fairly straightforward to set up using tools like phpMyAdmin and gzip (as outlined nicely in a recent post at WPMU Tutorials), there really isn’t a simple way for individual site owners to do site backups from the WordPress interface.

What I would like to be able to do is allow the user to simply create a site specific backup file of all the necessary files for their site. Everything wrapped in one nice little package, with the bow on top being the ability for the user to schedule and forget their backups. Once a day/week/month it would just run, grab everything they would need to restore their site (at least their posts/pages AND uploaded files) and all is good. But I am realizing this may be a tall order without setting it up behind the scenes.

Now, each WP site does have an Export option, which is simple and straightforward, but it was never intended as a backup utility; rather, it is a utility for moving posts from one WordPress install to another. As such, it is not a comprehensive backup and doesn’t include files, images or multimedia you might have uploaded to your site.

This is a problem I have found with most community-developed backup plugins as well: they all concentrate on backing up the database tables and not those extra files that will no doubt be uploaded by users looking to use the platform as a CMS. To back up both the database (where the posts and pages are stored) and the associated files, you need at least two separate plugins.

The two I have been working with are WordPress Backup and WordPress Database Backup. So far I haven’t been able to get these two to do exactly what I want, and using them both makes things a tad confusing for end users.

For one, there are now two backup options in the site navigation, located in different sub-menus. A user’s natural instinct is to ask why there are two backups, and any time a question gets asked there is confusion, so a bit of support is needed to explain the differences between the two. Not a huge deal, but a barrier.

What is very handy is that both backup plugins let you automatically schedule backups to happen at regular intervals. The backup files are zipped up and can automatically be moved to archive folders on the server or, if you want, emailed directly to the users, which some users might find comforting. The downside is that there are four separate zipped files for each site: a database file (generated by WP Database Backup) and three files generated by the second plugin, one each for uploaded files, themes, and plugins. One packaged folder would be nicer.
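The "one packaged folder" I wish the plugins produced could be approximated with a small server-side script that gathers the separate per-site archives into a single zip. This is just a sketch; the file naming scheme, paths, and site IDs below are all hypothetical, not what the plugins actually produce:

```python
import tempfile
import zipfile
from pathlib import Path

def package_site_backup(archive_dir: Path, site_id: int) -> Path:
    """Bundle every per-site backup archive into one zip named for the site.

    Assumes (hypothetically) that the individual plugin archives follow a
    'site-<id>-<part>.zip' naming convention in a shared archive folder.
    """
    bundle = archive_dir / f"site-{site_id}-backup.zip"
    with zipfile.ZipFile(bundle, "w", zipfile.ZIP_DEFLATED) as zf:
        for part in sorted(archive_dir.glob(f"site-{site_id}-*.zip")):
            if part != bundle:  # don't try to zip the bundle into itself
                zf.write(part, arcname=part.name)
    return bundle

# Demonstration with throwaway placeholder files in a temp directory.
workdir = Path(tempfile.mkdtemp())
for name in ("site-3-db.zip", "site-3-uploads.zip", "site-3-themes.zip"):
    (workdir / name).write_bytes(b"placeholder")

result = package_site_backup(workdir, 3)
print(result.name, zipfile.ZipFile(result).namelist())
```

Run on a schedule (cron, say), something like this would hand each site owner a single downloadable package instead of four loose files, though it does nothing to solve the permissions problem of keeping one site's archive away from another site's owner.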

But the major problem I have with using the WordPress Database Backup plugin with WPMu is that the interface does not limit the tables available for backup to just the site requesting it. It exposes ALL the tables in the entire WP instance, meaning that any site owner could back up and download any other site owner’s content. Not cool.
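The fix seems conceptually simple: filter the table list down to the requesting blog before showing it. A hypothetical sketch, assuming the standard WPMu naming convention where a secondary blog’s tables share a numeric prefix (e.g. wp_7_posts and wp_7_options for blog ID 7):

```python
def tables_for_blog(all_tables, blog_id, base_prefix="wp_"):
    """Keep only the tables belonging to one WPMu blog.

    Assumes the usual WPMu convention: each secondary blog's tables
    share a numeric prefix, e.g. wp_7_posts for blog ID 7.
    """
    site_prefix = f"{base_prefix}{blog_id}_"
    return [t for t in all_tables if t.startswith(site_prefix)]

tables = ["wp_users", "wp_7_posts", "wp_7_options", "wp_8_posts"]
print(tables_for_blog(tables, 7))  # ['wp_7_posts', 'wp_7_options']
```

Until the plugin does something like this itself, exposing the full table list on a multi-user install is a real privacy problem.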

I do like and appreciate the work that has gone into these plugins. I use them on this blog and they work great. But in a multi-user environment, I can’t really say they are the silver backup bullet I was hoping for. So, I am still searching for a backup system that is simple and straightforward for end users and that allows them to initiate and control their own backups.


Piloting WordPress Multi-user at Camosun

A few weeks ago, we launched a WordPress Multi-User pilot project at Camosun.  Here are a few thoughts early on in the process.

Why are we doing this?

For the past 7 (or so) years, FrontPage has been the web authoring tool we have supported for faculty at Camosun. At the end of 2006, Microsoft discontinued FrontPage. Since then we have been experimenting with other platforms to replace FrontPage for faculty who wish to have stand-alone (i.e., outside our LMS, Desire2Learn) websites, and haven’t really been happy with the tools we have found, finding them either costly, overly complicated, or limiting. Ever since our Office 2007 rollout last year, faculty who are still using FrontPage have been reporting problems, so IT Services was also anxious to have us find another solution for faculty websites. So the main purpose of piloting WordPress for us is to see if we can use it primarily as a CMS to replace FrontPage.

Armed with some good feedback from Brian Lamb at UBC, Grant Potter at UNBC, and Audrey Williams at Pellissippi State (who have all been involved with the UBC, UNBC, and Pellissippi State WPMu installs), I put together a pilot document for our IT Services, who agreed to support the project. At the beginning of November, the pilot began.

The journey so far…

We’ve done a lot in a few weeks. Installation was quick and smooth. The network admin I have been working with (who has also installed Drupal, Joomla, LifeRay and a few other CMS type systems) remarked that the LDAP integration with Active Directory was the easiest he has ever done. He literally had us integrated with our authentication system in 20 minutes.

For my part, I recruited a half dozen faculty for a pilot group and did some initial training. They are now set up with their own websites – and I use the term website intentionally. I’ve avoided using the word blog when I refer to these sites. I’ve found that the term blog carries with it preconceived notions, both good and bad. So, in order to avoid the whole “I don’t want a blog, I want a website” circular logic wheel that I have witnessed when people talk about WP as a CMS, I have been using the term website when talking about our pilot sites. I really want our users to focus on WP as a tool to manage a website, not a blog, and to proactively nip that semantic bud. These are just websites.

The faculty will be playing with their sites between now and January. In January when the new term starts, they will be using them as their primary website and posting whatever content it is they want their students to have access to.

Some early technical stuff

In keeping with that “website, not blog” philosophy, we launched with a minimum number of themes, trying to pick pretty simple ones that handle pages and nested pages well.

As for plugins, again, I’ve started with a small set of plugins and will be adding and testing functionality during the pilot (which runs until the end of June, 2010). Specifically, the plugins we have installed to begin with are:

  • Akismet spam filter and Akismet credit inserter to automatically insert a “Spam prevention powered by Akismet” credit.
  • pageMash page management plugin which allows you to drag-and-drop the pages into the order you like.
  • COinS Metadata Exposer makes your blog readable by Zotero and other COinS interpreters. As a student who is actively using citation management tools like Zotero on a daily basis, I truly appreciate when this metadata is exposed to accurately capture citations from a website.
  • Unfiltered MU to allow users to embed content from other sites.
  • Smart YouTube plugin to make embedding YouTube videos even easier. Yes, even easier.
  • Active Directory Integration for, uh, Active Directory integration
  • New Blog Defaults lets you customize certain default settings for new blogs.
  • WordPress Backup and WordPress Database Backup. I’ll have more to say about backing up WPMu sites in a separate post. Suffice it to say, it is not an easy thing to do using the standard WordPress interface.
  • PDF and PPT Viewer looks like an interesting plugin that I have only started to test out. It could be very useful, considering that most faculty still post a lot of PDF and PPT files on their sites. In a nutshell, this plugin leverages Google Docs Viewer to create an embeddable view of a PPT or PDF document – no additional software or plugin required.

I’ll be elaborating about these plugins, and on administering WPMu, but I’ll save that for future posts. In the meantime, we now have a WPMu install up and running at Camosun and ticking along just fine.


On historically defining Personal Learning Network

Earlier this week, as a response to a post by David Warlick, Stephen Downes posted on his attempt to find the origins of the term “personal learning network”. This, strangely enough, got me thinking about the origins of the term.

I was surprised that, for as common as the term has become in my own PLN, the source of it was so hard to identify; that it was a generic enough grouping of words that a meaning seemed to evolve almost organically over time, thanks to contributions by a number of different people (which, I acknowledge, was somewhat the point of Stephen’s article).

Still, I have used this term in academic papers and have often searched for a definition of the term that would be useful as a citation. Recently, I used the 1998 Daniel R. Tobin article Building Your Own Personal Learning Network as a source. In the article, Tobin defines a personal learning network like this:

An important part of learning is to build your own personal learning network — a group of people who can guide your learning, point you to learning opportunities, answer your questions, and give you the benefit of their own knowledge and experience.

I’ve found his definition of a personal learning network useful, and his personal example of developing training sessions in Brazil a helpful anecdote for understanding the concept of personal learning networks. But Stephen’s post did make me curious as to where this term came from, so I emailed Tobin with a link to Downes’ post, asking if he was the originator of the phrase or whether he had another source for it. His response (10 minutes later) was:

Hi, Clint –

I don’t know if I coined the term “personal learning network” or not. I don’t know of any earlier references to the term, but that doesn’t mean that someone else didn’t use the phrase before I did.

The article was written in 1998, but I didn’t post it to my website until 2001, so that may help with the confusion on dates.

What I was referring to was my informal network of colleagues and professional acquaintances to whom I could turn if I needed information, i.e., people who could help me learn whatever it was that I was seeking. I still have a large personal learning network and am part of many other people’s PLNs as well, although none of us use that term. When I started using the phrase, I wasn’t particularly thinking about this in the sense of a virtual, PC-based network — in fact, in 1998, there weren’t many websites or discussion baords (sic), wikis, etc., that could be used for this purpose. Back then, one of the few that I knew of and used regularly was a list service started at Penn State for training and development professionals. It was later stopped and transferred to Yahoo Groups.

I hope this is helpful.

Best regards,
Dan Tobin

From there, I did a bit more digging and discovered a 1999 article written by Dori Digenti (Collaborative Learning: A Core Capability for Organizations in the New Economy. Reflections, 1(2), 45-57. doi: 10.1162/152417399570160) which uses the term “personal learning network” along with the acronym “PLN”. The use of the acronym is important to me because it denotes a very precise and specific conceptual meaning attached to the phrase “personal learning network”. And it is an acronym that I often see used to replace the phrase “personal learning network” in my network.

In the article, Digenti sets up a six-phase model to build and develop collaborative learning competency in organizations. In phase six of the model (Enhancing Interdependence, p. 53), Digenti speaks specifically to the idea of a personal learning network, and uses the phrase as an acronym.

As technology and change gain momentum, no professionals can claim enough mental bandwidth to maintain learning in all the necessary endeavors they are engaged in. An organization can sustain its collaborative learning only by building interdependence among members. This is where the personal learning network (PLN), born of series of learning collaborations, can be a valuable tool for enhancing and building interdependence (Digenti, 1998a).

The PLN consists of relationships between individuals where the goal is enhancement of mutual learning. The currency of the PLN is learning in the form of feedback, insights, documentation, new contacts, or new business opportunities. It is based on reciprocity and a level of trust that each party is actively seeking value-added information for the other.

The first paragraph, where the term personal learning network is introduced, contains a reference to a 1998 unpublished manuscript by Digenti called “The Learning Consortium Sourcebook”. I could not find that work, but I wonder if this might be the source of the term personal learning network as I understand and use it today?

The paper then goes on to describe how to develop a personal learning network, and there are two points that Digenti makes that resonate strongly with me. First, you have to give to get (p. 53).

How do you build a PLN? First, it is important to overcome the hesitation around “using” people. If you are building a PLN, you will always be in a reciprocating relationship with the others in the network. Ideally, you should feel that your main job in the network is to provide value-added information to those who can, in turn, increase your learning.

Second, it takes time and work (p. 53).

To have a truly valuable PLN, investments in time and resources are essential. This requires an extension of the typical transactional business mind-set. If, as a business manager or change agent, we “do the deal” and fail to consider building our PLN, we have lost much of the value of our interactions. This is particularly true in the activities of collaborative learning, where each project we engage in should enhance and broaden the PLN of each member.

Now, this was hardly an exhaustive academic search for the term, so I suspect that there are more uses of it from around that time stuffed away somewhere. But it appears to me that the phrase “personal learning network” as I use and understand the term today may have originated in the work of these two authors around 1998-99.


View documents in the browser with Google Docs Viewer

Google Docs Viewer is a handy little service that lets you view documents and presentations within the browser without having to open a third-party application. It eliminates the need for students to have additional applications (such as PowerPoint or a PDF reader) installed on their computer to view PowerPoint or PDF files.

Here is an example. I am using an old PowerPoint presentation on podcasting done by a colleague of mine a few years ago that lives on our web server. The link to the original PowerPoint file (2.2 MB) will either download to your computer or force you to open PowerPoint to view the presentation (depending on how your browser is configured, and assuming you even have PowerPoint). Now, here is a link to the same PowerPoint presentation (which opens in a new window/tab), but this time viewed through the Google Docs Viewer.

It’s important to note that I did not upload the presentation to the Google Docs Viewer site – the original PowerPoint file still lives on our web server. The Google Docs Viewer is not a repository to store documents. If I delete the original file on our web server, the link to the Google Docs Viewer breaks, since the original file is no longer available. I retain complete control over the source file, but the user gets the benefit of not having to download and open a PowerPoint file.

How to use Google Docs Viewer

There are a couple of ways to use Google Docs Viewer; either directly from the site, or you can construct a special url that will link your document with the document viewer.

To use the site, enter the URL of the PDF or PowerPoint document and click Generate Link. You then get a few different options, including a link that you can tweet, IM, or email; HTML link code that you can paste into a website, blog, or LMS; or embed code that will bring the document into your blog, site, or LMS (I’ve embedded a PowerPoint presentation at the end of this post so you can see how this works).

The second way to access the service is by crafting your own URL that pulls your document through the service – you don’t even need to visit the website. To create your own URL, start with the base path http://docs.google.com/viewer, followed by a question mark (?) and the address of the original document as a query parameter (url=path). The path needs to be URL-encoded, so no spaces or special characters. Knowing this, I can build a URL to any PDF or PowerPoint, so a link to our example above would look like this: http://docs.google.com/viewer?url=http%3A%2F%2Fdisted.camosun.bc.ca%2FDE%2Fpodcast.ppt.
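If it helps, that URL construction can be scripted. A small Python sketch (my own helper, not an official Google tool) that percent-encodes the document address and prepends the viewer’s base path:

```python
from urllib.parse import quote

BASE = "http://docs.google.com/viewer?url="

def viewer_link(doc_url):
    # Encode everything, including ':' and '/', so the document
    # address survives as a single query-string value.
    return BASE + quote(doc_url, safe="")

print(viewer_link("http://disted.camosun.bc.ca/DE/podcast.ppt"))
# http://docs.google.com/viewer?url=http%3A%2F%2Fdisted.camosun.bc.ca%2FDE%2Fpodcast.ppt
```

The same pattern works for any publicly reachable PDF or PowerPoint file.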

So, Why Use Google Docs Viewer?

Why would you even do this and not just link directly to, say, the original PowerPoint file? Well, from a technical perspective, there are some barriers for students when they try to deal with PowerPoint files (and, to a lesser extent, these hold true for PDF files as well, although PDF is by far a more web-friendly format than PowerPoint).

  • The files can be large, especially if you use animations and transitions.
  • They require students to have additional software installed on their computer, in this case PowerPoint or the PowerPoint Viewer.
  • Depending on the browser, how it is configured, and the security settings, PowerPoint files can cause strange and unexpected behaviours. One user may have their system set up to have PowerPoint open in a browser window, while another may be prompted to download the file. A third may get a security warning that a potentially malicious file is about to be opened.
  • The files take a long time to load. In most cases, when someone clicks on a PowerPoint link, the first thing that has to happen is that PowerPoint has to open up, which eats up time. No one likes to wait for content and those few seconds add up to frustration for users.

By using a service like Google Docs Viewer (or Slideshare, another free alternative), you can mitigate some of these barriers and provide a better experience for students.

Here is the same presentation embedded using Google Docs Viewer.


4 Alternative Blogging Interfaces for WordPress

I’ve been a WordPress user since the b2 days, but only lately have I begun to explore different methods of posting content to a WordPress blog. In the past, I have used the standard web interface for creating posts, with the occasional foray into using the FireFox ScribeFire plugin (more on that in just a moment).

Why alternatives? Well, it’s not that I think the standard WordPress interface is bad or poorly designed – far from it. But I am looking at alternative, streamlined ways of getting content into a site that may be more familiar to non-WordPress users.

Over the past few days I’ve been playing with alternative ways to publish content to a WordPress site, and here are 4 that I have come up with.

Using Word 2007
I really like this method, not because it is the best tool in this list, but because it is the most familiar interface for the faculty I support. Everyone is comfortable using Word and, while it won’t give you all the functionality of the web interface, it gets the job done with some nice functions in an interface that users are familiar with.

Setup is easy and straightforward, and you can insert text, links, tables, and images, including WordArt, Symbols, Shapes, and SmartArt. Blog management and organizational options are pretty minimal, but include the ability to post as a draft and choose an existing blog category for the post. You can also open previous posts from your blog to edit.

A lack of headings in the toolbar is a frustration I have with the interface, and the reason why the subheadings for this post are appearing as 14 POINT (???) text and not h3 tags as I would prefer. Microsoft has instead decided to put bigger and smaller buttons on the interface. This is something Microsoft has done with other html editors I’ve come across (yeah, SharePoint, I’m looking at you) and it is an annoyance I find maddening. Not only is it semantically incorrect (let me make a heading a heading and a paragraph a paragraph, please), but it also overrides the CSS set in the WordPress themes. It would be far better if they just left the text options as standard html tags, which would be semantically correct and would also ensure consistency in design.

That said, in terms of something my faculty will find easy to use, the Word interface seems like an early winner. And anything that helps people move away from posting links to their Word documents and posting in html is a winner with me.

By Email

Email is another familiar interface for my users: you can post to a WordPress blog from any email client. While this does require a bit more technical work to initially set up, you again get a composing environment that is really user-friendly and familiar, especially for the slightly technophobic faculty.

This is bare bones in terms of functionality. The subject line is used as the title of the post, with the body of the email as the content. All html in the email is stripped out, and it does not support uploading attachments or images. You also cannot choose which category your post appears in; it lands in whatever the blog’s default category is. This does not have the functionality of Posterous, but in terms of getting content onto the web quickly and painlessly, it’s a fine alternative.

ScribeFire

ScribeFire is a FireFox plugin that lets you post to your blog from within FireFox. It is a full-featured alternative to the native web interface with tons of features. I’ve used it in the past and, while I like it, I have found that the formatting sometimes goes a bit wonky when the post is published; the post doesn’t always look like I would expect, with the underlying html code getting rewritten. Still, you can do pretty well anything with this tool that you can with the WordPress interface. It’s handy when you come across something on the web that you want to blog about quickly, or if you have no web access but still want to compose a post to publish when you reconnect.

Google Docs

Cole Camplese sent me scurrying down this path a few days ago when he tweeted a test post (which looks like it has since been deleted). So I gave it a shot and found out that you can post directly to WordPress from Google Docs. In the example from a few days ago, I included an image pulled from my Flickr account and a drawing done in Google Docs. Connecting was pretty straightforward; however, there was no specific WordPress API hook. Instead, I used the Movable Type API, which connected, but that may explain why the post showed up on the blog sans title.
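Since Google Docs is really just speaking a blogging API here, the same trick works from a few lines of script. Here is a sketch using Python’s built-in XML-RPC client against WordPress’s standard xmlrpc.php endpoint – the URL and credentials are placeholders, and the publishing call is commented out so nothing is actually posted:

```python
import xmlrpc.client

# Placeholder endpoint -- substitute your own site's xmlrpc.php address.
ENDPOINT = "https://example.com/xmlrpc.php"

def build_post(title, body, categories=None):
    """Assemble the content struct the metaWeblog API expects."""
    return {
        "title": title,
        "description": body,          # the post body, as HTML
        "categories": categories or [],
    }

post = build_post("Hello from XML-RPC", "<p>Posted remotely.</p>", ["Testing"])

# Uncomment to publish for real (the final True means "publish immediately"):
# server = xmlrpc.client.ServerProxy(ENDPOINT)
# post_id = server.metaWeblog.newPost("", "username", "password", post, True)
```

That the struct has a title field makes the missing title in my Google Docs experiment all the stranger.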

Have you used any of these tools? Are there any other ways to create content outside of the WordPress user interface? If so, I’d love it if you let me know.


Test post from Google Docs

I am composing this post in Google Docs to test whether or not I can publish to my blog from Google Docs. So here is the post. If you are reading this on my blog or in my RSS feed, then the test worked. You can now resume your normal activities after this stellar post.

I am trying to pull an image from Flickr. If you see a picture above, then it worked and the image from Flickr was included (my 5 year old daughter drew this as a tribute to Snowflake, our white goldfish who made a break for it and jumped out of the bowl awhile back). This text should link back to the original photo on Flickr.

Okay all done.


Interactive storytelling with YouTube

As part of my Masters, I am currently reading Effective Teaching with Technology in Higher Education by Tony Bates and Gary Poole. My cohort is currently working with their SECTIONS model for choosing and evaluating new educational technologies. One of the criteria in the model is Interactivity (I) – what kind of interaction does the technology you are examining enable? As I was reading the chapter, a memory from my adolescence popped into my head – Dragon’s Lair.

Like most kids growing up in the early 80’s, I spent a lot of time on video games, including a game called Dragon’s Lair. Dragon’s Lair was different from most video games in that the action was high-quality animation, not pixelated characters. The gameplay was incredibly clunky and I think it cost a dollar to play (compared to 25 cents for my game du jour, Galaga), and since most of the time I ended up falling into a fiery pit of doom within 30 seconds, I didn’t invest a lot of time and money in it. But it made a lasting impression in that it was one of my first encounters with branching video. I loved that I had the direct ability to control the storyline and influence the narrative. It was like I was the director of some fantastic animated movie.

Just over a year ago, YouTube unveiled the ability to annotate videos and add links to them. While there certainly have been a few problems associated with the annotations (most notably the lack of transparency about where the destination leads and the possibility of linking to a malware site, as PandaLabs warned earlier this year), it is really interesting to see how this feature is being used to create interactive stories and games on YouTube, much like the ones I experienced in the arcade hunched over Dragon’s Lair.

A good example of annotations being used to create an interactive story is this recent series of videos done by the Metropolitan Police in London as part of their Drop the Weapons campaign. At the end of each video you are asked to make a decision, which takes you down a different path.

For educators, this ability to link videos creates all kinds of interesting possibilities for creating interactive learning activities. For example, here is an interactive spelling bee.

I can’t imagine how much it cost to develop the Dragon’s Lair video game that sucked up my teenage cash, but I would hazard a guess that it was substantially more than it costs to create branching scenarios on YouTube. The point being that it doesn’t take a big budget to create compelling interactive activities using the technology available to us today. Sure, as budgets go up, so do the production techniques and special effects. But really, all it takes is a simple video recorder, some imagination, and YouTube to create a bit of interesting interactive content.


Google site aggregates Internet statistics

Did you know that over 30% of our leisure time is now spent online? Or that 20 hours of video are uploaded to YouTube every minute? Or that worldwide, over 6 billion songs have been sold on iTunes? Where did I find these fascinating Internet stats I hear you asking? Why, from the Google Internet Stats site of course.

I just came across this site, but can already see how useful it will be both to monitor the Internet zeitgeist and as a starting point for current research about the Internet and technology. Google has set up an aggregate site that monitors stats from a number of third-party sources. The complete list of data sources is available on the site, but includes The Economist, Wall Street Journal, Financial Times, TechCrunch, and Nielsen (among many others). Somehow the list of data sources also includes Coke, which makes me go what the heck? Is this the cola company? I didn’t realize they pounded out a lot of Internet stats and figures, but I digress…

The topics are grouped into five categories: Macro Economic Trends, Technology, Consumer Trends, Media Consumption, and Media Landscape, so if you are in any discipline that intersects with these areas, you should find something useful here. Also, the site is UK-based, so expect the results to be skewed slightly towards the UK and Europe, but this should still be a useful resource if you are looking for stats as a starting point, or to quickly support a point.


Etherpad adds timeline slider

Etherpad is a collaborative document tool that allows multiple users to work on the same document in real time on the web. Think of it as a hybrid of Google Docs (which is not quite as synchronous) and a live chat tool.

I’ve used this tool for many collaborative projects, and for quickly drafting a collaborative document it is fantastic: easy to use, free, and with a document revision history so that you can see previous versions of the document. Today, that particular feature got a nifty little boost – an interactive document timeline. Now you can watch a video of your document, from birth to finished product.

For educators, this is a really handy evaluation tool. If you are trying to monitor group contributions to a collaborative project, this feature will be incredibly useful. All participants are colour-coded so their contributions to a document are highlighted by colour, which lets you quickly see who made major contributions to the document as it was being constructed.

In addition, the video timeline allows you to see the group’s progress on the task at hand. If they got off topic, you’ll be able to see where they went wrong, in what context (what changes were happening that might have led them to the diversion), and who might have brought the team back on track (if anyone did). It’s a transparent way to quickly view the process unfold.

Unfortunately, the timeline view is not available on the free public version (which allows many concurrent users), but you can get a free professional version for up to 3 concurrent users that does include the timeline.


The myth of multitasking

An interesting study came out of Stanford this week that challenges the idea that we are adept multitaskers who can effectively deal with multiple concurrent tasks.

According to the research, we’re really lousy at multitasking, and the idea that multitasking somehow makes us more efficient or effective at dealing with tasks is wrong.

I find the study results very much in line with what I have been feeling lately with regard to my own productivity. I thought my unproductive feelings were a result of age and slowing down, or of being a 42-year-old parent of a 5- and a 2-year-old who finds it virtually impossible to concentrate on a single task for long periods of time at home. Both of these are probably true to some extent. But lately I have been asking myself questions like: how efficient am I, what do I produce, and is it really the best work that I can do? I find that by trying to do too many things at once, I actually accomplish very little and, in some cases, completely miss out on important tasks. I am beginning to question many of the habits and methods I have picked up over the years to deal with multiple streams of information and juggle multiple tasks, and asking myself: is this really the best way I work?

I know it is a common problem that many of us who work in information-based careers struggle with, but for me the change is that I am now starting to recognize that it might actually be a problem. Whereas before I thought multitasking was an essential skill I needed to thrive in a digital world, I am now beginning to rethink that and wonder if the opposite might actually be true.