Create embeddable HTML5 content with H5P

Been playing around this morning with a set of tools called H5P. H5P is a plugin for Drupal, Moodle and WordPress that allows you to create a number of different interactive HTML5 media types: things like interactive videos, quizzes, timelines and presentations.

I’ve only had a chance to play with the plugin for a few minutes this morning, but got it working and was able to create some basic interactive content, adding a branching overlay to a YouTube video that runs from the 2 to 12 second mark. Choose an option from the screen and jump to a different point in the YouTube video. I also created a simple interactive question.

While I created these using the H5P plugin installed on another WordPress site, H5P also generates embed code so that others can post the content I created on their own sites. So, here is that same interactive quiz question, created on my testing site, now embedded here using the H5P embed code.

With the interactive video example, I am actually embedding an embedded YouTube video with the overlays that I created using H5P. Meta-embed.

There is also an option to assign an open license to the interactions I create at the time I create them, and make it possible for people to download the source file.

One thing I can see off the bat is that there are a lot of content type options with this tool. There are about 30 different content types, each with numerous options, so this 10 minute quick look hardly does justice to the possibilities. But I like where this is going, and it certainly merits a deeper dive into the tool.

H5P is an open source project and community led by the National Digital Learning Arena (NDLA) in Norway. NDLA is a publicly funded project that aims to offer a complete, free and open learning portal for all subjects at the Norwegian high school level.

More to come as I dig deeper into this tool and plugin.

 

Are you analog or digital?

I left a fairly lengthy response on Tony Bates’ blog post about an issue he has been experiencing.

Tony used our instance of Pressbooks as the platform for his latest book, Teaching in a Digital Age. Tony noticed that the PDF version of the book had a problem with how the images were rendered. They were not in the correct flow of the text when the conversion from web to PDF happened in Pressbooks.

Pressbooks does the conversion from web to PDF better than most, but this is an issue we have been dealing with as part of our project. Images that are placed in the correct flow of a book in Pressbooks often get moved and pushed around in the PDF version of the book.

I understand the annoyance, but it illustrates beautifully the dichotomy of the borderlands we currently live in, straddling the digital and the analog worlds of publishing.

Here is my response.

Nate hits it on the head – these are the complexities involved in digital publishing as we straddle the world of print with the world of the web (and other digital formats). Digital publishing formats are fluid, and print formats are rigid. By choosing to use a publishing platform that values digital over print (and Pressbooks is designed to favour web over print), you are making a choice to value flexible over rigid.

However, as you have discovered, the two don’t play well together. While Pressbooks and the PDF engine do an admirable job of creating an acceptable print-ready document, you are still going to end up having to compromise the layout of the rigid print for the flexible digital.

This is actually the biggest conceptual hurdle that most people moving from print based publishing to digital publishing have to contend with. It is often very disconcerting for those who have designed for the rigid formats of print to make the transition to the fluid world of digital. And they are often disappointed because they have to give up their pixel (or point in the print world) control and surrender to the fluid layouts of digital that put the user, not the publisher, in control of the appearance of the content.

The dilemma I have, as someone who is developing tools that attempt to straddle both worlds, is how I can satisfy the expectations of those who are accustomed to and expecting rigid print, while still satisfying those who understand and expect the fluid digital. It is a heck of a challenge, and someone is going to end up unhappy in the end, as you are seeing. Your book website looks great and works well. Your PDF (which I consider print, not digital, as it enforces a rigid layout vs the digital flexible) is expecting rigid and cannot accommodate the digital flexible flow.

This is at the heart of why I find PDF so frustrating to work with. It appears to be digital, but is really analog hiding in digital sheep’s clothing.

In the end, the decision is the author’s as to which compromise they are willing to make. Are they a digital publisher first, making an analog version available as a convenience to those who still live in the analog world? In that case, the PDF output would be acceptable. Or are they an analog publisher who wants to create rigid layouts (i.e. PDF and print) first, with web/ePub and digital publishing as the afterthought?

 

Photos for Class provides safe search and auto-attribute for Flickr images

Came across a site that may be a good one for K-12 teachers looking for a way to safely search Flickr for Creative Commons material, and for anyone looking for an easy way to attribute Flickr photos.

Photos for Class is a site that uses a combination of Flickr’s Safe Search filter and a few in-house filters to allow you to search Flickr for G-rated, CC licensed photos. Which is useful in itself, especially if you are in a K-12 environment. But the bit that everyone will find useful about Photos for Class is that when you download a photo, the CC attribution is automatically added to the image using the CC recommended TASL (Title, Artist, Source, License) format for correct attributions. Which works great if you simply want to find and use an image without modifying it.

I did a quick search for the phrase totem pole and came up with a number of images. With each image there is an option to download, view on Flickr or report (if an inappropriate image has slipped through the filtering process, there is community moderation). I downloaded the first result and got this photo with the attribution automatically added at the bottom of the photo.
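The attribution text Photos for Class stamps onto the image follows a simple, reproducible pattern. As a sketch of the TASL format itself (the function name and the example values are mine, not anything from the Photos for Class site):

```python
def tasl_attribution(title, artist, source, license_name):
    """Build a Creative Commons attribution string following the
    TASL pattern: Title, Artist, Source, License."""
    return f'"{title}" by {artist}, {source}, {license_name}'

# Hypothetical example values:
print(tasl_attribution(
    "Totem Pole", "Jane Doe",
    "flickr.com/photos/janedoe/123", "CC BY 2.0"))
```

The point of TASL is that those four fields are enough for a reader to find the original and understand its terms of use.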


One of the things I hear often from people new to Creative Commons licenses is how to attribute resources. Here is a nice tool that makes it very easy to find and correctly attribute a CC licensed photo on Flickr. There are other tools, like the OpenAttribute browser plugin, the Washington State Open Attribution Builder and Alan Levine’s Flickr specific attribution bookmarklet also available to help make it easier to attribute CC resources correctly.

h/t to Dr. Jo Badge’s blog post on teaching children about Creative Commons licenses.

 

Learning about digital learning through photography

I wrote a post a few weeks ago about purchasing my first DSLR camera. In February, I took an insane number of photos with it, 1176 to be precise, as I learned how to use and understand a piece of new (to me) technology.

The thing I love best about the new camera? It allows me to shoot 1176 photos in a month.

I used to shoot with film. I was by no means a good photographer, but I had fun fiddling with film, although I often found it a stressful experience trying to get the shot just right.

And this is the thing that has struck me most as the biggest difference between film and digital photography: the scale. It has nothing to do with the actual quality or types of photos I can take; instead, it is how cheap it is to experiment with digital. In my film days, I would never have shot 1100+ photos in a month. Heck, I probably never shot 1100 photos in the entire time I shot with film. The cost of film and the cost of developing it were a real barrier to experimenting freely with my film camera.

But with digital, that cost to experiment has been greatly reduced to the point where it costs me no more to take 1100 pictures than it does to take 1. Digital has allowed me to scale up the number of photos I take with little regard for monetary cost (the mental cost of sifting thru 1100 photos is another story). Digital has given me the ability to more freely experiment and, more importantly, the freedom to fail since the dollar cost of failure is very low.

I never felt that type of freedom to experiment when I was shooting film. When shooting film, there was always that nagging bit of pressure to get the shot right because every shot cost money, not to mention the disappointment of getting a developed roll of film back and discovering too late that you don’t have a single decent picture because you decided to use ISO 100 film instead of 800. Money wasted. A barrier to experimenting with film.


Whoops. Didn’t get that lighting right

But that freedom to experiment afforded by digital photography alone doesn’t make the learning happen. Taking tons of pictures and having the freedom to fail is just the start. In order to learn, you also have to take the time to examine why you failed; why did that photo turn out so dark when the one taken 3 dial tweaks later turned out fine?


Let there be light!

In order to learn, I need to be able to examine why one setting worked and another didn’t. And, in the world of digital photography, that means looking at the metadata. Digital photos give me so much more information (feedback) than film did about what was happening when the photo was taken. What was my aperture setting when I took that photo? Shutter speed? ISO setting? What lens was I using? All this metadata is automatically captured when I snap a picture and can be called up later by my software when I review my photos, allowing me to see exactly what settings worked and didn’t work in certain situations. From this information, I can make better decisions in the future.
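One concrete sketch of how that metadata feeds decisions: the aperture and shutter speed recorded in the EXIF data can be rolled into a single exposure value (EV), a standard photography formula, which makes two shots directly comparable. The little helper below is my own illustration, not part of any photo software:

```python
import math

def exposure_value(f_number, shutter_seconds):
    """Exposure value at ISO 100: EV = log2(N^2 / t).
    Higher EV means less light reached the sensor."""
    return math.log2(f_number ** 2 / shutter_seconds)

# Comparing two shots straight from their EXIF settings:
dark_shot = exposure_value(8, 1 / 125)    # about 13 EV
bright_shot = exposure_value(4, 1 / 60)   # about 9.9 EV: roughly 3 stops more light
```

Seeing that one frame was about three stops darker than the other tells you exactly how far to turn those dials next time.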

Now, so far my digital photo learning has been pretty technical and fairly autodidactic. Other than a few tweets and reading some websites, I haven’t really begun to explore the social side of learning photography, where I actively solicit feedback from others on the photos I take, and vice versa. At some point, I’ll need the input of some MKOs (more knowledgeable others) about the things the data can’t tell me; things like composition that you can’t learn from just looking at data and taking lots of pictures. And I’d like to share what I have learned with others. Thinking my long underutilized Flickr account is about to become my learning network of choice for the next little while.

All in all, so far my new camera has been a wonderful edtech meta learning opportunity for me. It’s an example to me about how digital affordances give us the ability to freely experiment, fail, and try again at a scale that wasn’t possible in the analog days, all while providing both a rich set of data and access to a network of peers to help us improve. But above all, it’s a heck of a lot of fun, which makes for the best kind of learning.

 

Enhancing & Remixing Video with YouTube

It has been a while since I’ve done any video editing or enhancing with YouTube, so when I popped the hood to tweak a couple of personal videos earlier this week, I noticed that the production tools within YouTube have grown and matured since I last edited a video.

Slow Motion

One (new to me) enhancement that educators might be interested in is slow motion. A few weeks ago I wrote a post about what I thought was a good piece of instructional video that relied on slow motion to really enable learners to see the phenomenon the instructor was talking about, in this case an octopus camouflaging itself. The change from recognizable octopus to unrecognizable piece of sea rock & coral happens really quickly, too quickly for the human eye to really understand what is going on. So, the instructor slowed the video down, giving students time to see all the processes unfold & also giving himself time to explain what was happening. In YouTube, adding slow motion to your video is a snap.


Add an Audio Soundtrack

They have also beefed up the audio soundtrack tool since I last played around with it. You can add background music to your video, with a number of YouTube-suggested (and legal to use) background soundtracks from the (150,000 piece strong) YouTube music library. The difference between the last time I used the editor and now is that you can now mix the music with the original audio of the video, and you can set a start point for when you want the music to begin. The last time I used it, you could only replace the audio track with the music track. The mix feature is pretty rudimentary compared to a more advanced video editing system, only letting you choose whether the mix favours the original audio or the music soundtrack. But as an easy to use tool, you can’t beat it for spicing up your video with a bit of background music.
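Conceptually, that favour-the-original vs favour-the-music choice is just a weighted blend of two audio tracks. A minimal sketch of the idea (my own toy illustration, not YouTube’s actual mixing code):

```python
def mix_tracks(original, music, favour_original=0.7):
    """Weighted mix of two equal-length lists of audio samples.
    favour_original=1.0 keeps only the original audio;
    0.0 keeps only the music soundtrack."""
    w = favour_original
    return [w * o + (1 - w) * m for o, m in zip(original, music)]

# A 70/30 blend of two (toy) sample streams:
blended = mix_tracks([0.5, -0.5], [0.1, 0.1], favour_original=0.7)
```

A real editor does this per sample across millions of samples, but the slider YouTube exposes is essentially that single weight.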


Annotations

Finally, for an educator, annotations can really enhance the video by adding additional information as a text overlay to the video. Going back to the octopus video from a few weeks ago, the original video had a number of bullet points appear on the screen as the instructor spoke about the points. In YouTube, you can do this in the Annotations tab which allows you to place text blocks at certain points of a video. You could use these text blocks to point out specific areas of the screen you want students to pay attention to in the video or, like the octopus video example, add bullet points to help explain what the student is seeing.

These are just a few of the tools within YouTube that let you enhance, edit or remix videos. If you want to experiment with video as a pedagogical tool, you don’t need a lot of fancy equipment or expensive software to enhance your videos. For most educators, your smartphone & a little time invested in learning to use the YouTube platform as a production tool will do the trick.

 

A good use of video

I stumbled across a number of Biology video resources via the iBiology site earlier today, took a minute to watch a few of the shorter clips in the collection, and came across this one. Not only does it prove that octopuses are pretty amazing creatures, but I also thought it was a decent example of how video can be used effectively in a lesson. Here’s why I like it. But first, the video:

So, why is this a good use of video? Well, before I get into that, I should say that the part of the video I want to focus on is the video within the video. So, even though this is essentially a video lecture, it is the way the instructor uses video within that lecture to illustrate something that might otherwise be difficult to explain that I find particularly well done.

First, it shows a field scene – something that would be very difficult to otherwise explain in words and static images. Using video makes the learner feel like they are right there in a place that they might otherwise not ever see. In this case, the bottom of the ocean. Effective video takes learners to places (or times) they might not otherwise be able to go to.

The dramatic effect of the octopus changing from camouflaged to visible happens virtually instantaneously, and that instantaneous moment simply would not carry the same weight if the instructor tried to talk about it or show a series of photos. It is unexpected. It piques the learner’s interest. Notice how the instructor builds to that moment in his lead-up as well, setting the scene of the shot as a rather boring underwater scene. His language signals that something is going to happen that will soon transform that boring underwater scene. He is building curiosity through his language, and when the moment of unexpected transformation happens, you are engaged.

Notice, too, how the instructor is not simply playing the video and having students watch it, but is actively interacting with the video and explaining what is happening while it is happening. At a number of places, he points and draws the learner’s attention to details in the video as they happen. For example, at :27 seconds, he points to the screen and says “now watch here”, making sure that the attention of the learner is in the right place to catch the key concept he is trying to explain.

Then, after the video has been played forward at full speed, he plays it backward at half speed, giving you a completely new perspective on the phenomenon the student just witnessed. Again, at :40 seconds in, he makes sure to point out what he wants the learner to see: “watch the ring form around this eye”.

He then pauses the video and brings up a series of stills to further explain the concepts, adding a text overlay to the video with a bullet list of keywords explaining what a hi-fidelity match would be. This further underscores what he is saying. And then in his summary he augments the video on the screen with a few more points underscoring the key concepts of the short video. Key here is that he includes question prompts to spur deeper thinking for the students and spark some curiosity about the concepts introduced in the short lesson. There is a slight problem in that the final shot overlays the teacher on top of the text, obscuring some of it, but it’s a small quibble.

So, even though this is a video lecture, I think it is a well done bit of lecturing based around a compelling video. The instructor is naturally engaged and dynamic and the presentation is snappy. Having the instructor on screen humanizes the lesson and allows him to carry out the kinds of interaction with the video that make the video clip pedagogically strong, like directing attention to key moments in the clip. There is a lot packed into this 2 minute video and if I was working with faculty in a traditional f2f classroom, this clip would probably make its way into my training arsenal as an example of how to effectively use video in a lesson.

 

Social Annotation with Hypothes.is

Following David’s lead (and thanks to some great WordPress plugin work by Tim Owens), I’ve installed a social annotation tool called Hypothes.is on this site. Actually, it looks like much more than a social annotation tool, but I’ll get to that in a minute.

Hypothes.is is a non-profit funded by (among others) the Shuttleworth Foundation (who are funding some very innovative work right now in the education/web space, including the OERPub project and Siyavula). It is a social web annotation platform being developed around web standards proposed by the W3C Open Annotation community group.

A WordPress plugin is just one of the Hypothes.is tools. There is also a Chrome browser plugin and (soon) a plugin for Firefox. These plugins allow you to annotate and highlight across the web. So, annotation works in 2 ways: either on the user side via the browser plugins, or on the site builder side via the WordPress plugin.

If you highlight and right-click any text on the page, you should see a little balloon/pen icon pop up. Click on the icon and a panel will slide out from the right of the page. You need a Hypothes.is account to highlight and annotate. If you don’t have one, you can create one quickly from the fly out.

If you want to see the comments that are on the page, there are 2 prompts on the page that show you there are comments. First, you can click on the icon in the top right hand corner of the page that looks like this:

[Hypothes.is toolbar icon]

Hover over the icon and you’ll see some other icons appear that allow you see the annotations & highlights on the page, or to highlight and annotate yourself.

The second prompt that shows you there are comments are the icons on the right of the page that look like directional arrows:

[down arrow icon]

This one appears in the bottom right corner of the page on posts that have comments on them (like the one you will see on this post if you are viewing the post itself; for some reason, Hypothes.is doesn’t seem to be working on the home page of the blog). Click on the icon and you are taken to the exact spot in the post that has been highlighted or annotated.

This is still very much an alpha project, but looks promising as a collaborative annotation tool. One of the concepts that I really like about it is that you have the ability to aggregate all of your annotations and comments under one account, something I tried to do many years ago but gave up on in frustration because the tools available at the time were so clunky. I want to be able to have a central place that shows me all of my conversations on the web, and this might be a good option.

There are a few things I like about Hypothes.is the project as well. Reading their principles, it looks like they are committed to creating a tool that remains non-profit, free and that works anywhere – important qualities if they hope to garner enough critical mass to make the project a success. The rest of the principles are equally important and you should take a read through.

As more and more websites turn off comments, I can see services like Hypothes.is (and existing tools like Diigo, which is often forgotten as an annotation tool and used by many only for social bookmarking) becoming important tools to keep the conversation flowing.

As for the “more than a social annotation tool” bit I hinted at in the lead: Hypothes.is appears to be framing itself as a tool for discussion and collaboration rather than simple highlighting and annotating.

Hypothes.is will be an open platform for the collaborative evaluation of knowledge. It will combine sentence-level critique with community peer-review to provide commentary, references, and insight on top of news, blogs, scientific articles, books, terms of service, ballot initiatives, legislation and regulations, software code and more.

I am not exactly sure how this bit works yet. But as I play with Hypothes.is I am eager to find out.

Something I learned about the history of the web from the Hypothes.is promotional video: annotations were an original feature of Mosaic, but were disabled at the last minute when the browser first shipped. It makes you wonder what the web would be like today if annotation had been built into the browser right from the get-go.

 

Create a book from Wikipedia articles

While doing some random surfing last night, I stumbled upon a new tool in Wikipedia that I didn’t know existed (but has been around for a couple of years).

You can create books (both print and e) of selected Wikipedia content.

The Wikipedia book tool is located in the left hand navigation of Wikipedia under Print/Export. Click the create a book link, activate the book creator tool and you can start compiling pages in Wikipedia.

As you go from page to page, you will see a new toolbar at the top of each page prompting you to add this page to your book.

Once you are finished, click Show Book, where you can add a title and rearrange the articles.

Once you have the book tweaked as you like, you can then output & download to EPUB, PDF, OpenDocument, or OpenZIM (a format I am not familiar with), or send a copy to a print-on-demand service called PediaPress which, for a small fee, will print and ship you a physical copy of the book.

I gave it a try and in about 5 minutes had created a very simple ebook containing the biographies of the current Canadian men’s national soccer team (sigh, we came so close this time) and the current state of our national soccer program. Here is a Canadian soccer primer from Wikipedia in PDF (yikes – 13 meg) or ePub (1.6 meg) format.

Video on how to create a Wikipedia ebook.

After I tweeted this, Alan Levine & Scott MacMillan replied to me and pointed out that UBC has this feature set up on their wikis as well.

[blackbirdpie url=”https://twitter.com/cogdog/status/270759907056304128″]

 

[blackbirdpie url=”https://twitter.com/clintlalonde/status/270761315293855744″]

 

[blackbirdpie url=”https://twitter.com/scottmcmillan/status/270762545047031809″]

 

Turns out, there are extensions for any MediaWiki site that can enable instant, on-the-fly publishing to ebook formats.
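For the curious, the same raw material is also reachable by script: MediaWiki exposes a web API, and on Wikipedia the TextExtracts extension can return plain-text extracts of articles, which you could stitch together into a homemade ebook. A sketch of building such a request (the article title is just an example, and the helper function is my own):

```python
from urllib.parse import urlencode

def wikipedia_extract_url(title):
    """Build a MediaWiki API request for a plain-text extract
    of one article: the raw material for a homemade 'book'."""
    params = {
        "action": "query",
        "prop": "extracts",     # provided by the TextExtracts extension
        "explaintext": 1,       # plain text instead of HTML
        "titles": title,
        "format": "json",
    }
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)

url = wikipedia_extract_url("Canada men's national soccer team")
```

Fetching that URL and repeating per article would give you the text of a whole compilation without clicking through the book creator at all.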

 

Creating Interactive Lessons using Ted-Ed

You are probably familiar with TED Talks, a series of 18 minute video lectures recorded at the annual TED Conference. But did you know that TED also provides a handy tool for educators to turn those talks (and other videos) into interactive lessons?

TED-Ed is a platform that allows educators to take any video and make a lesson out of it.

Robert Hanlon, Faculty in Peace & Conflict Studies at RRU, recently used TED-Ed to create 2 interactive lessons for his students: Salma in the Square – Egypt and Witness – The Mayor of Mogadishu. I had the chance to speak to Robert about the lessons, why he decided to use TED-Ed, and what it was like working with the TED-Ed tool (note the audio is slightly clipped for the first 10 seconds).

 

So, here's the thing about the video in my Coursera course

I’m taking a Coursera course, and the primary content delivery tool being used is video. Talking head video of the instructor switching to voice over PowerPoint lectures with bullet point slides and diagrams.

Now, I wish I could leave my first impressions aside, but I can’t (because I’m a bit shallow and judgmental this way, and first impressions count): I am staring at PowerPoint slides primarily composed of bullet points of text (bad) in FREAKIN’ COMIC SANS. I mean, bullet points of text are bad enough in terms of adding nothing to my understanding of what is being said, but it’s FREAKIN’ COMIC SANS. I am in a kindergarten class.

Anyway, where was I. Oh yeah. Video.

So, a little technical & pedagogical note about using video as a content delivery method. Web video can be great in that it allows students to interact with the video: learners can pause, rewind, fast forward and otherwise move through video at their own pace, going back to review content they may be fuzzy on. As Zhang, Zhou, Briggs and Nunamaker noted in their 2005 research study Assessing the impact of interactive video on learning effectiveness (pdf), the interactive nature of web video – this ability to stop, rewind and replay – is one of the prime pedagogical affordances of web video.

Results of the experiment showed that the value of video for learning effectiveness was contingent upon the provision of interactivity. Students in the e-learning environment that provided interactive video achieved significantly better learning performance and a higher level of learner satisfaction than those in other settings

Now, for me, if you are going to make video your primary content delivery platform and take advantage of that pedagogical affordance of video – this ability for learners to manipulate the timeline – then the video should be a true streaming experience. Coursera videos are not.

What does that mean? Well, there are 2 ways you can deliver video on the internet: progressive download and streaming. I won’t get into the technical details of each (you can read a bit more for yourself if you like), but one of the major differences between the two delivery methods is how quickly you can move thru the timeline. Progressive download buffers the video, meaning when you move the timeline, you get the hourglass for a few seconds while the video buffers and then restarts. With streaming video, there is no buffering: you move your cursor on the timeline and the video starts at that point instantaneously.
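To see why true streaming can seek instantly, consider what the player has to do: map a timestamp to a position in the file and request just that piece (for example, with an HTTP Range request). The constant-bitrate estimate below is a deliberate oversimplification, since real players use container indexes rather than this arithmetic, but it shows the idea:

```python
def seek_byte_offset(timestamp_s, bitrate_kbps):
    """Estimate the byte offset of a timestamp in a
    constant-bitrate video: bytes = seconds * (bits/s) / 8."""
    return int(timestamp_s * bitrate_kbps * 1000 / 8)

# Jumping to the 90-second mark of an 800 kbps video:
offset = seek_byte_offset(90, 800)   # 9,000,000 bytes in
# A streaming player can then request just that slice:
#   Range: bytes=9000000-
```

A progressive download, by contrast, has to buffer its way forward from wherever it left off, which is where all that waiting comes from.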

Imagine this (and I am sure you have experienced it yourself). You are a student and you are trying to find a specific spot on a video, how frustrating is the progressive method? You move the cursor back. Wait (buffer). Wait (buffer). Wait (buffer). The video plays. Whoops, wrong spot. You move the video back a few more seconds. Wait (buffer). Wait (buffer). Wait (buffer). Hmmm. Too far. Move the cursor forward. Wait (buffer). wait (buffer)….you get the picture.

Knowledge is created in instants. When you are on the verge of connecting concepts, these little delays matter. You want to find the spot you need, not give your mind even that extra couple of seconds to wander or worse, get frustrated interacting with technology.

On the plus side for Coursera, the videos appear to be short (less than 6 minutes), so the shuffling back and forth and buffering to find an exact spot is reduced, as there isn’t much of a timeline to slide through. And you do have the option to play at slower or faster speeds – great if you want to review a 5 minute video in 3, or slow down the pace to catch concepts. But if you are going to make video your pedagogical tool of choice for content delivery, and the primary pedagogical advantage of video is the ability to move thru the timeline and review what you saw, then it is worth investing the extra dollars to make it true streaming video: a seamless user experience where the technology gets out of the way, not in the way.

 

PageFlakes – a cautionary reminder that free comes with a price

This morning Alec Couros tweeted a crowdsourced call for tools for a workshop he is presenting. I responded and suggested a couple of aggregators: Netvibes and PageFlakes.

I am a big Netvibes user & fanboy – it is one of the web tools I could not live without, as it is the central dashboard for my online life. PageFlakes is a tool I have used in the past but hadn’t touched for a while, and when Alec went to check the PageFlakes site, it was down. I started poking around, asked a few questions, and discovered that it does look like PageFlakes is gasping its final breath. It’s probably not a good sign that the official company blog hasn’t been updated since July 2008, and most of the comments posted on it these days are for male enhancements.

It served as a good reminder for me – a message I forget until something like this pops up. Not that I am going to stop using these tools, but every once in a while something like PageFlakes dying is a useful cautionary tale: many of the tools I use and, in some cases, have come to rely on are just a single bad quarter away from disappearing.

Which is why data portability is such a crucial issue, and one that I pay much more attention to when I sign up for a new tool these days.

The other thing I have been paying more attention to when signing up for free services is the business plan. Is there a way this service is making (or can make) money? And is there a way I can pay a few dollars for the services I have come to rely on? I do this with the wiki service I use. I also pay for my own web hosting for this blog. If there is a way to pay, I don’t mind kicking in a few dollars for a service I truly find valuable. After all, everyone has to make a buck, and I am not averse to paying for something if it means it has a better chance of surviving in the long run.

 

View documents in the browser with Google Docs Viewer

Google Docs Viewer is a handy little service that lets you view documents and presentations within the browser without having to open a third-party application. It eliminates the need for students to have additional applications (such as PowerPoint or a PDF reader) installed on their computer to view PowerPoint or PDF files.

Here is an example. I am using an old PowerPoint presentation on podcasting done by a colleague of mine a few years ago that lives on our web server. The link to the original PowerPoint file (2.2 mb) will either download to your computer, or force you to open PowerPoint to view the presentation (depending on how your browser is configured, assuming you even have PowerPoint). Now, here is a link to the same PowerPoint presentation (which opens in a new window/tab), but this time viewed through the Google Docs Viewer.

It’s important to note that I did not upload the presentation to the Google Docs Viewer site – the original PowerPoint file still lives on our web server. The Google Docs Viewer is not a repository to store documents.  If I delete the original file on our web server, the link to the Google Docs Viewer breaks since the original file is no longer available. I retain complete control over the source file, but the user gets the benefit of not having to download and open a PowerPoint file.

How to use Google Docs Viewer

There are a couple of ways to use Google Docs Viewer: either directly from the site, or by constructing a special URL that links your document to the viewer.

To use the site, enter the URL of the PDF or PowerPoint document and click Generate Link. You then get a few different options: a link that you can tweet, IM or email; HTML link code that you can paste into a website, blog or LMS; or embed code that will bring the document right into your blog, site or LMS (I’ve embedded a PowerPoint presentation at the end of this post so you can see how this works).

The second way to access the service is by crafting your own URL, which pulls the document through the viewer without ever visiting the website. Start with the base path http://docs.google.com/viewer, followed by a question mark (?) and the path to the original document (url=path). The path must be URL-encoded, so no spaces or special characters. Knowing this, I can build a URL to any PDF or PowerPoint file, so a link to our example above would look like this: http://docs.google.com/viewer?url=http%3A%2F%2Fdisted.camosun.bc.ca%2FDE%2Fpodcast.ppt.
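To make the encoding step concrete, here is a small sketch (the function name is my own) that builds a viewer link using JavaScript’s built-in encodeURIComponent:

```javascript
// Build a Google Docs Viewer link for a PDF or PowerPoint file.
// encodeURIComponent handles the required percent-encoding of the
// document's address (colons and slashes become %3A and %2F).
function viewerUrl(docUrl) {
  return "http://docs.google.com/viewer?url=" + encodeURIComponent(docUrl);
}

// The example from the post:
viewerUrl("http://disted.camosun.bc.ca/DE/podcast.ppt");
// → "http://docs.google.com/viewer?url=http%3A%2F%2Fdisted.camosun.bc.ca%2FDE%2Fpodcast.ppt"
```

A URL built this way can then be used anywhere a normal link can.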

So, Why Use Google Docs Viewer?

Why would you even do this and not just link directly to, say, the original PowerPoint file? Well, from a technical perspective, there are some barriers for students when they try to deal with PowerPoint files (and, to a lesser extent these hold true for PDF files as well, although PDF is by far a more web friendly format than PowerPoint).

  • The files can be large, especially if you use animations and transitions.
  • They require students to have additional software installed on their computer, in this case PowerPoint or the PowerPoint Viewer.
  • Depending on the browser, how it is configured and the security settings, PowerPoint files can cause strange and unexpected behaviours. One user may have their system set up to have PowerPoint open in a browser window, while another may be prompted to download the file. A third may get a security warning that a potentially malicious file is about to be opened.
  • The files take a long time to load. In most cases, when someone clicks on a PowerPoint link, the first thing that has to happen is that PowerPoint has to open up, which eats up time. No one likes to wait for content and those few seconds add up to frustration for users.

By using a service like Google Docs Viewer (or Slideshare, another free alternative), you can mitigate some of these barriers and provide a better experience for students.

Here is the same presentation embedded using Google Docs Viewer.

 

Interactive storytelling with YouTube

As part of my Master’s, I am currently reading Effective Teaching with Technology in Higher Education by Tony Bates and Gary Poole. My cohort is currently working with their SECTIONS model for choosing and evaluating new educational technologies. One of the criteria in the model is Interactivity (I): what kind of interaction does the technology you are examining enable? As I was reading the chapter, a memory from my adolescence popped into my head – Dragon’s Lair.

Like most kids growing up in the early ’80s, video games were a big part of my life, including a game called Dragon’s Lair. Dragon’s Lair was different from most video games in that the action was high-quality animation, not pixelated characters. The gameplay was incredibly clunky, and I think it cost a dollar to play (compared to 25 cents for my game du jour, Galaga), and since most of the time I ended up falling into a fiery pit of doom within 30 seconds, I didn’t invest a lot of time and money in it. But it made a lasting impression in that it was one of my first encounters with branching video. I loved having the ability to control the storyline and influence the narrative. It was like I was the director of some fantastic animated movie.

Just over a year ago, YouTube unveiled the ability to annotate videos and add links to them. While there certainly have been a few problems associated with the annotations (most notably the lack of transparency about where a link leads and the possibility of linking to a malware site, as Pandalabs warned earlier this year), it is really interesting to see how this feature is being used to create interactive stories and games on YouTube, much like the ones I experienced in the arcade hunched over Dragon’s Lair.

A good example of annotations being used to create an interactive story is this recent series of videos done by the Metropolitan Police in London as part of their Drop the Weapons campaign. At the end of each video you are asked to make a decision, which takes you down a different path.

For educators, this ability to link videos creates all kinds of interesting possibilities for creating interactive learning activities. For example, here is an interactive spelling bee.

I can’t imagine how much it cost to develop the Dragon’s Lair video game that sucked up my teenage cash, but I would hazard a guess that it was substantially more than it cost to create branching scenarios on YouTube. The point being that it doesn’t take a big budget to create compelling interactive activities using the technology available to us today. Sure, as the budgets go up so do the production techniques and special effects, etc. But really all it takes is a simple video recorder, some imagination and YouTube to create a bit of interesting interactive content.

 

Etherpad adds timeline slider

Etherpad is a collaborative document tool that allows multiple users to work on the same document in real time on the web. Think of it as a hybrid of Google Docs (which is not quite as synchronous) and a live chat tool.

I’ve used this tool for many collaborative projects, and for quickly drafting a collaborative document it is fantastic: easy to use, free, and with a document revision history so that you can see previous versions of the document. Today, that particular feature got a nifty little boost – an interactive document timeline. Now you can watch a video of your document, from birth to finished project.

For educators, this is a really handy evaluation tool. If you are trying to monitor group contributions to a collaborative project, this feature will be incredibly useful. All participants are colour-coded so their contributions to a document are highlighted by colour, which lets you quickly see who made major contributions to the document as it was being constructed.

In addition, the video timeline allows you to see the group’s progress on the task at hand. If they got off topic, you’ll be able to see where they went wrong, in what context (what changes were happening that might have led to the diversion), and who might have brought the team back on track (if anyone did). It’s a transparent way to quickly watch the process unfold.

Unfortunately, the timeline view is not available on the free public version (which allows many concurrent users), but you can get a free professional version for up to 3 concurrent users that does include the timeline.

 

Screenr: free web based screencasting tool

Screenr is a web-based screencasting tool that allows you to quickly create screencasts. Free and web-based, there is no software to download, unlike Jing, which Screenr otherwise closely resembles. Videos are limited to 5 minutes, and Screenr will host them, providing you with embed code to put the videos where you want. You can also tweet the screencast out on Twitter, download an MP4 version, or publish the final result to YouTube.

Here’s a demo.

Besides Camtasia and Captivate, the two mainstream commercial products that allow you to do very sophisticated screencasts that include interactivity, post-production editing, and branching, there are a number of free screencasting tools similar to Screenr out there, including Screenjelly and Screentoaster. For Firefox users there is also a handy plugin called Capture Fox.

In my mind, the difference between Screenr and these other tools is that Screenr comes from the e-learning world and is supported by Articulate, a company that makes a very successful line of e-learning applications. And, as Articulate CEO Adam Schwartz says, the cost for Articulate to run Screenr is:

…really cheap for us. We’re hosted on the Rackspace cloud, and the cost for doing this is like two orders of magnitude less than it was when we looked at this two years ago. It would cost more as a marketing fiasco to shut this down than it would to keep it running.

From the same article, Schwartz also said that Screenr

is a first step in the company’s creation of a new group of e-learning products, which he compares to the popular software-based screencast products from Camtasia. But with Articulate’s focus on education, the tools will be “more about interactivity, branching, learning, and simulation.” His fully developed screencast tools will have the capabilities for grading and quizzing, and will be integrated into more fully formed educational suites.

So it sounds like Articulate has some pretty big plans with Screenr and this is just the beginning.

You do, however, need a Twitter account to use Screenr as the service is completely integrated with Twitter. This might deter some who have been reluctant to take the Twitter plunge, or might be the deciding reason for some to start using it. A big part of the idea of Screenr is to allow people to quickly make a screencast and then publish it to their network via Twitter, reinforcing the idea (for me at least) that one of the core values of Twitter is as a network notification (distribution) system.

 

Wordle as a blog self-assessment tool

I just finished an assignment that was a first for me – assigning my own grade. What a strange thing to do. Now, I am sure I have had dedicated teachers in the past, but I feel pretty confident that none of them ever spent as much time poring over one of my assignments as I have in the past week.

The assignment was in 2 parts. Part 1 was to keep a reflective blog during my 2-week residency. Okay, I could handle that part. I actually went a bit overboard in the end, and the blog morphed into a way to share resources with my cohort in addition to the self-reflection piece, but hey, what the heck.

The second part was a bit trickier – the self assessment. When I started going through the criteria and comparing it to the blog, I began to fear that, despite my prolific output on both my own and my cohort’s blogs, I might have actually missed a significant piece of the assignment. Not only was the blog to be a reflective tool, but it was supposed to be specifically reflective about research and questions arising during my Introduction to Research class.

Now, just so you don’t think I am totally dense and didn’t know what class I was in at any given time, I have to say that the residency was a pretty homogeneous event, with sessions and classes blurring together into one mass. Our instructors team-taught and would appear in each other’s classes regularly, often both facilitating at the same time. Research blended with Learning Theory, which blended with lunch, which morphed into team building that somehow ended up back at Research. The lines were fuzzy, a point underscored during our final group presentations when 6 out of 6 presentations did a bang-up job of presenting wonderful research for an assignment for our Learning Theory class – a point not missed by our Learning Theory instructor. As a class, I think we all slightly missed the mark as to which class we were actually presenting for. So I don’t think I was alone in my class confusion.

Back to the blog. I agonized for a few days whether I had enough information about research in my blog. I did touch upon it here and there and actually did have a couple of posts that spoke to research directly. But on the whole it felt pretty light in the research department. So I ran my blog through Wordle, a tool that takes a block of text and turns it into a graphic based on the frequency of keywords in the text. The more often a word appears in the text, the larger it is in the graphic. The results on whether or not I addressed research in my blog? Well, I’ll let you decide if I missed the research point or not.

I find this image interesting for a few reasons. First, it convinced me that I didn’t miss the research angle and I used it in my assessment to talk myself up a grade point from where I originally had myself pegged.

The second thing is the prominence of the word think. I went back and read some posts and realized I used the phrase “I think” quite often, and I found this very validating. I went to an intensive 2-week Master’s residency and guess what I did? I thought! And apparently I thought a lot about technology. Pretty appropriate for a Master’s in Learning and Technology.

Finally, I have been agonizing over whether I should pursue a thesis or go the coursework/major project route with my Master’s. I am leaning towards the major project. Now, if you have ever used Wordle you’ll know that the placement of the words is random. Note the placement of the words “think” and “thesis”. Is this a sign that I should think thesis?

 

Google Image Search adds license filter

Google announced a new feature for Image Search today that should make it easier for you to find, modify and reuse images from across the web.

Google Image Search now has a license filter which will allow you to filter out images based on the license type. This makes it much easier to find public domain or Creative Commons licensed images to reuse or modify.

To access the license filter, go to the Advanced Image Search options. At the bottom of the page you will see an option called usage rights with a dropdown list of choices: return images labelled for reuse, labelled for commercial reuse, labelled for reuse and modification, or labelled for commercial reuse and modification.

 

4 Free Audio Players to Add Audio to Your Site

Adding audio to your website, blog or online course is pretty easy to do these days. Long gone are the days when we would force students to download and install proprietary players like RealPlayer or QuickTime. With the ubiquity of Flash, JavaScript and MP3, we now have more options for delivering audio on the web than ever before.

Here are 4 audio players that I have been working with recently while redeveloping a French language course. All of these players support MP3 and are built using JavaScript and Flash. 2 of the players (Playtagger and Yahoo Media Player) only require a single line of code to get working on a page. The other 2 (the WordPress Audio Player and the JW FLV Player) are more complicated, but much more feature-rich. All will play audio without requiring students to download or install software, and all worked when I tested them in D2L.

The links to the demo of each player will open in a new window since I didn’t want to have multiple players competing with each other on the same mp3 files.

1) Playtagger

The most basic of all the players on this list, the Delicious Playtagger, is minimalism in action. You can start, stop or add the file to Delicious. That’s about it. No pause or volume control. In fact, no audio controls whatsoever.

But what Playtagger lacks in features it makes up for in simplicity of use. Include a single line of JavaScript in your HTML, and any link to an mp3 file in your document automatically becomes playable on the page. A play icon will appear just to the left of the mp3 link.
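The idea behind that auto-detection can be sketched in a few lines (my own simplified illustration, not Playtagger’s actual code):

```javascript
// Simplified sketch of the Playtagger idea: pick out the links that
// point at mp3 files so a play control can be inserted next to each.
function isMp3Link(href) {
  return /\.mp3(\?.*)?$/i.test(href);
}

// In the browser, the script would then walk the document's links,
// roughly like this:
//   document.querySelectorAll('a').forEach(function (a) {
//     if (isMp3Link(a.href)) { /* insert a play icon before the link */ }
//   });
```

The real script does this scan for you automatically when the page loads.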

The one little problem I have with Playtagger is that if you click on the text link itself, the mp3 file may either load in your default media player or download to your computer, depending on your browser. It would be better if the mp3 file played in Playtagger regardless of whether you click the play icon or the actual text link.

That one minor problem aside, if you are looking for a simple option to play an mp3 file, you can’t get much simpler than Playtagger.

Playtagger in action.

2) Yahoo Media Player

Like Playtagger, the Yahoo Media Player is added to a page with a single line of JavaScript, which adds the audio player to any mp3 link on your page. Click on the play icon beside the file and the player opens up at the bottom of the screen.

The Yahoo Media Player has more features than Playtagger. There is a pause button, skip forward/back to the next/previous track, volume control, and track and time information. If you have multiple audio files on a page, the Yahoo Media Player will play the files back to back like a playlist. In fact, there is a playlist option within the media player itself.

The Yahoo Media Player does give you more options to customize the interface and the default behaviour of the player. There are some documented hacks at the media player wiki which come in handy if you want to extend or change the player.

Another resource you will want to check out if you use the Yahoo Media Player is the blog of Eric Fehrenbacher. Eric has written a number of scripts that extend the player and add extra features, like TrackSeek, which adds a slider giving users the ability to move forward and back within a track, and TrackLoop, which loops through a playlist after it finishes.

Yahoo Media Player in action.

3) WordPress Audio Player

First off, the WordPress Audio Player is not just for the WordPress blogging platform. There is a stand-alone version that can be used on any web page.

This audio player is a tad more complicated than Playtagger or the Yahoo Media Player. There is more mucking around with the code to set parameters, but the process is well documented and should be fairly straightforward to get you up and running.

You also have to download and install the scripts for the WordPress Audio Player on your own server, unlike Playtagger and the Yahoo Media Player whose scripts are hosted on external servers. This could be a deal breaker if you don’t have access to a web server. However, if you are using D2L, you can use the file manager in D2L as a place to serve up the files from.

Those negatives aside, I think the WordPress Audio Player has the nicest interface of the lot and packs all the features you need into a compact player. The player itself slides open and closed so it takes up very little screen space, and you can change the look and behaviour of the player by changing a few values in the settings. And unlike the Yahoo Media Player, the WordPress Audio Player comes with a slider enabled out of the box, with no need for a third-party script.

WordPress Audio Player in action.

4) JW FLV Player

The JW FLV Player is by far the most full-featured (and hence the most complicated) of the 4 players here. It works not only for audio files but for video as well.

Of all the players, the JW FLV Player is the only one capable of true media streaming using RTMP, as opposed to progressive downloading. True media streaming requires a media server; if you have access to one, then the JW FLV Player is your player.

Like the WordPress Audio Player, the JW FLV Player requires you to upload the JavaScript and Flash files to your own server.

Configuring the player can be a frustrating affair if you are not technically inclined. Much of the documentation and tutorials feel like they were written by developers, which is fine if you are a developer, but not so helpful if you just want to get the thing working. You should feel comfortable working with JavaScript before diving into the JW FLV Player, especially if you want to customize the features or the look and feel beyond the default player.

Speaking of which, the JW FLV Player has a vibrant developer community, and many developers are creating and releasing skins and addons that change the look and functionality of the default player, so you have a lot of pre-built interfaces to choose from if the default interface doesn’t toggle your play button.

JW FLV Player in action as an audio only player. This is a streamed mp3 file from our Flash media server.

5) Bonus for the more geek-oriented: SoundManager

Okay, if the thought of digging into the JW FLV Player code excites rather than terrifies you, then be sure to check out SoundManager. SoundManager is not a player per se, but rather bills itself as a JavaScript Sound API that lets you create some pretty impressive audio players. Check out the page-as-playlist demo and the still-under-development-so-may-not-be-working-perfectly examples of the 360° Player Demo. However, SoundManager is very JavaScript-intensive, and I was never able to get it working reliably enough in D2L to use it.

And then there is HTML 5

The chances are pretty good that most of these players will become obsolete once the W3C releases HTML 5 to the world. HTML 5 promises easier ways to embed audio and video content on web pages with standard HTML tags. The goal is to make adding multimedia content to a web page as easy as adding an image or a table is today.
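As a rough sketch of what that single standard tag looks like, here is a small helper (the function and file name are my own, based on the draft HTML 5 audio element):

```javascript
// Build the HTML 5 markup for an audio file. With the <audio> tag,
// embedding sound is a single element with a controls attribute --
// no Flash player or JavaScript library required.
function audioTag(src) {
  return '<audio src="' + src + '" controls>' +
    'Your browser does not support the audio element.' +
    '</audio>';
}

audioTag("podcast.mp3");
// → '<audio src="podcast.mp3" controls>Your browser does not support the audio element.</audio>'
```

The text inside the element is a fallback, shown only by browsers that don’t understand the tag.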

But even though HTML 5 got a huge Google boost with the demo of Google Wave, which is a fully functioning HTML 5 web application, we’re still a few years away from it being available widely enough to rely on as the sole method of delivering audio and video content. So in the interim, we still need players to play multimedia content.

 

Academic Earth: free and open video lectures

Open Educational Resources

I am not a big fan of iTunes U. I know there is a lot of great content there, but unless you use iTunes it is inaccessible (and if you do know a way to access iTunes U content without iTunes I would love to hear about it). So, I am always on the lookout for resources like Academic Earth.

Academic Earth is a website featuring video lectures from Berkeley, Harvard, Stanford, MIT, Princeton and Yale.

While collections like Academic Earth are not new (you could find many of these lectures on each institution’s YouTube channel), what is nice about Academic Earth is that it filters and packages the collections in a very friendly and easy-to-use way. For example, on the Playlist page you can view thematic collections put together by the site editors that group lectures from different instructors and institutions around certain themes, like Love is in the Air, a group of videos on emotion, love, dating, marriage, and sex that crosses disciplines and combines lectures from Psychology, English, and Economics.

The site also features all the Web 2.0 goodness you would expect from a video site these days: embedding, the ability to subscribe to specific courses, and user feedback, where logged-in users can grade the lectures. One added academic feature you don’t normally find on other video-sharing sites is the citation feature, which gives you a nicely formatted snippet of citation text that you can cut and paste when referencing the video. There are also links to transcripts and other related resources, like PowerPoint slides and (in some cases) captures of blackboard/whiteboard notes, adding further value to the video lecture.

Right now there are over 1500 lectures on the site, and the collection seems heavy on business lectures. But as the site grows I would expect that to balance out in terms of subject matter. Still, sites like Academic Earth are a nice alternative to the locked-down world of iTunes U.

Image Credits: Open Ed Poster by riacale. Used under Creative Commons Attribution-Noncommercial-No Derivative Works 2.0 Generic license

 

Create an interactive wall of images with TiltViewer

Demo of TiltViewer

TiltViewer is a free, customizable 3D Flash image viewing application you can add to your site to create a lovely, interactive wall of photos. In just a few minutes I put together a demonstration page to show off the effect.

The images are being pulled from my Flickr account, and if you click on the rotate icon in the bottom right-hand corner, you can see the Flickr description of the image along with some other data, which could make TiltViewer a nice little flash-card exercise with the image on one side and the answers on the reverse.

TiltViewer also integrates with Picasa, or with a folder of stand-alone images on your web server. And, best of all for us D2L users, I was able to get the application working in D2L without a lot of mucking about, which is a bit of a surprise since anything that uses JavaScript often makes D2L very unhappy. Here is what TiltViewer looks like in D2L.

TiltViewer inside D2L

If you plan to use TiltViewer with stand-alone photos, it does require some mucking about with an XML file, but the instructions are straightforward.
