Of Bikes and Books

At the start of 2016, for the first time in a long time, I made resolutions. Two specific ones.

  1. Cycle 3000 km.
  2. Read 10 fiction books.

These resolutions were inspired by 2 things.

In 2015, my love of beer caught up with me and I tipped the scales at 225 lbs on a 5’10” frame. Turning 50 this year, I needed to make some changes. I knew that the standard resolution of “lose weight” was too general. I needed something specific. So, in addition to renting a rowing machine (which actually happened towards the end of 2015), in 2016 I got back into cycling on a regular basis.

I’ve always biked, but in recent years, with working at home more and mountain biking taking a back seat to small kids, my mileage had taken a real hit. So I started 2016 with a goal of riding 3000 km. I thought it was an achievable goal: a tad under 60 km a week. My plan was to ride into the office 2 days a week on 10 km routes, and then do a longer ride on the weekend.

I used Strava to track most of my rides and, while I didn’t hit 3000, I am pretty happy with my Strava total.

There were times I didn’t use Strava, like weekends when I took on some singletrack trails with my son (who got into mountain biking in a big way this year, which was fantastic as it re-ignited my passion for MTB, something I had not done since before I had kids).

All in, I figure the best I did was around 2300 km. Short, but I am still happy with that. And, despite a recent uptick in the weight, cycling this year has helped me stay under 200 lbs for most of the year.

Some of my Kindle library. Many samples waiting to dig into in 2017.

The second goal was inspired by David Wiley and Martin Weller. At the end of 2015, both wrote blog posts about the books they read in the previous year (and I see that, today, Martin has written one for 2016). As I read their posts, I realized that, while I read – non-fiction, reports, papers, research, blog posts, etc. – I had really fallen off the fiction wagon. So I made a modest goal of 10 fiction books this year, far short of the impressive 48 that Martin managed – enough to make some lovely charts and do some analysis on.

This goal I met, again with help from my son who, at 10 and like his sister, seems to be developing a voracious appetite for fiction.

Here (in rough chronological order) is the fiction I read in 2016.

  1. Neuromancer – William Gibson
  2. No Relation – Terry Fallis
  3. Fool – Christopher Moore
  4. Good Omens – Terry Pratchett & Neil Gaiman
  5. High Fidelity – Nick Hornby
  6. Little Brother – Cory Doctorow
  7. Armada – Ernest Cline
  8. Holes – Louis Sachar (my son’s favorite book of the year)
  9. The Dragonet Prophecy (Wings of Fire, book 1) – Tui T. Sutherland
  10. The Lightning Thief – Rick Riordan
  11. The Hunger Games (book 1) – Suzanne Collins

So, a couple of notes about this list. There are some decidedly YA titles here. The last 4 are books that my son and I read together. I still read to him every night before bed, and the final four books on the list (Holes, Dragonet, Lightning Thief and Hunger Games) were his choices. I almost thought I should not include them, except for the fact that I really enjoyed all 4. Holes was much richer and more complex than I expected, and The Lightning Thief felt like a great way to introduce classic Greek mythology to a contemporary audience.

Despite having watched the movies and read books 2 and 3 in the series, I had not read the first Hunger Games book. Reading it with my son at the same time I was reading the non-fiction Hillbilly Elegy, and while the US election debacle was unfolding, added an extra resonance to the plight of Katniss Everdeen. District 12 as Appalachia in a pre-Panem America. So much about the novel has been topical this fall – the role of the media and manipulation, who controls the media, reality television, the plight of the working class, the excessive indulgence of an elite oblivious to that plight until it is too late. The novel has been a source of rich political discussion with both my kids, who are both fans.

While Little Brother was also a topical read this year, by far my favorite book on the list was Good Omens. Smart and funny, it reminded me of the best absurdist satire of Vonnegut. I had never read either author and Good Omens proved to be an excellent way to whet my appetite for more from both Gaiman and Pratchett.

Disappointments were Armada and No Relation. While both Fallis and Cline had debut novels that I really enjoyed (in The Best Laid Plans and Ready Player One respectively), both of these newer efforts were weak and I struggled to finish them.

In addition to Hillbilly Elegy, the other non-fiction title that made an impact on me this year was Quiet by Susan Cain. While it had been on my list for a while, attending EDUCAUSE in Anaheim this year, where she keynoted, was the impetus that spurred me to dig in, and I was happy I did.

A penny dropped for me while reading Quiet in that, as I was reading it, I realized that I have not been a good advocate for my introverted daughter with her teachers. Consistently, year after year in the 7 years she has been in the public school system, her teacher assessments have always included the wish that she speak up and participate more in class. I have taken those assessments to heart and (high irony alert here for someone who considers himself quite introverted) tried to push her to be more participatory.

After reading Quiet, I realized that I have been so wrong, and it isn’t my daughter who I should be coaxing to change. It’s not often that I have read a book where I can see such a clear connection between the book and my life, but Quiet was one of those books that has made me change my attitude.

Oddly, no books about cycling. Come to think of it, I don’t think I have ever read a book where cycling is the main theme or subject. Hmmmm. #2017Resolution. That and this.

Photo: Cyclists Manifesto by Richard Masoner CC-BY-SA

 

Setting up Public Channels in Kaltura Mediaspace

Part of my work at BCcampus is to co-administer a provincial shared service of Kaltura with my colleagues at BCNET and UBC. Kaltura is a suite of tools for hosting and streaming media, and Mediaspace is the YouTube-like front end for Kaltura. Because of that work, BCcampus has access to some of the Kaltura tools for our own business uses and, for the past couple of weeks, I have been mucking around with our new Kaltura Mediaspace instance.

In general, I find Kaltura a beast. It is complex, and there are multiple layers of administration to go through depending on the tools you want to use. It is powerful, no doubt, and has some wonderful features. But it is likely not a system you can tackle off the side of your desk if you want to fully grok how it works and, just as important, fully maximize a fairly sizable institutional investment.

At any rate, I have been making some progress on setting up our Mediaspace site. You can take a look. Keep in mind it is being built in real time and is not fully configured and set up. But for now, I have some basic branding in place and a few channels with some content set up.

One task that I was failing at was creating public channels. I was able to make channels and add videos to them, but could not seem to make those channels publicly visible unless you had an account. Every time someone clicked on the Channels link, they would be taken to a login screen. For an organization that does open work, having closed channels was a no go.

So, after poking around, I went back and did what I should have done in the beginning, which is RTFM. Or, in Kaltura’s case, RTFMs. Started here, which led me here and here and here and here. Here and here. A bit over here. Some stuff from this 23 page PDF here.

Ok, well, you get the idea and why I say it’s complicated. Oy! And nowhere could I find the damn setting to make a channel public.

Finally, in a brief 30 second conversation with my colleague Jordi at UBC, I found the setting. In the Mediaspace admin area, there is a setting called supportPublicChannel that needs to be enabled. That was it. One little setting. Click that and, boom, public channels.

 

Man, there it was. Right there in the Channels section of the administration console. Hours of poring through technical support documents & Google searches on how to enable public channels, and the problem was solved in 30 seconds by talking to Jordi.

There were a few steps I did before this that I’ll add in here just in case others are struggling. This info is specific to those in the BC Kaltura Shared Service, which is an on-prem instance of Kaltura. If you are not in the shared service, this may not work for you. The other caveat is that I have been working on this off the side of my desk for a couple of weeks now, along with other config issues with Kaltura and Mediaspace, and, because I don’t have the awesome discipline of CogDog to document and share on an incremental basis, this may be incomplete and missing stuff (and let’s just pause here and acknowledge just how fantastic CogDog is at documenting the technical work he is doing, knowing that there are very likely others out there struggling with the same thing).

But, to the best of my memory, here is what I did prior to making that final switch above.

When you log into your KMC and go to Categories, you should see a MediaSpace category already set up as part of the initial system-wide configuration. Under Channels, I have created 2 channels: EdTech Demos and SCOPE.

On the MediaSpace category, the entitlements (what Kaltura calls permissions) are open. These can be overridden at lower categories.

 

Drilling down to the next layer, Channels, I have set default channels to be Private.


When someone creates a new channel, they may want to work on it before making it open. When you create an actual channel, you will need to override this setting, as this screenshot of the actual SCOPE channel shows. In this case, I have overridden the Channel defaults and made the content privacy no restriction, while adding a restriction on who can actually add content to the category (only the channel administrator).
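Conceptually, the entitlement model works like setting inheritance: a channel with no explicit value falls back to the Channels defaults, which in turn fall back to the MediaSpace category. Here is a little sketch (my own Python, not Kaltura code, with made-up dict shapes) modeling how that resolution appears to work:

```python
# A rough model (not Kaltura code) of how entitlement overrides appear to
# resolve: walk up from the channel until an explicit setting is found.

def effective_setting(category, setting):
    """Resolve a setting by walking up the category hierarchy."""
    while category is not None:
        if setting in category.get("overrides", {}):
            return category["overrides"][setting]
        category = category.get("parent")
    return None

# Mirrors the setup described above.
mediaspace = {"overrides": {"content_privacy": "open"}, "parent": None}
channels = {"overrides": {"content_privacy": "private"}, "parent": mediaspace}
scope = {"overrides": {"content_privacy": "no restriction"}, "parent": channels}
new_channel = {"overrides": {}, "parent": channels}  # no explicit override yet
```

So a brand new channel inherits the Private default until someone explicitly overrides it, which is exactly the behaviour described above for the SCOPE channel.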


In the Mediaspace admin area, when I create a new channel, I now have an option to make it a Public channel.

So, this was a spot where I was getting confused because, until I flicked the supportPublicChannel option on, I was not seeing the Public radio button. But I was seeing the Open radio button. So, when I clicked Open, I thought that would make the channel, uh, Open. But no. In Kaltura, Public and Open are different concepts, and it wasn’t until I enabled the supportPublicChannel switch that the Public radio button option became available during my channel setup. Clicking that button made the channel publicly viewable without people having to log in.
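To make the Open vs. Public distinction concrete, here is a small sketch (again my own Python, not anything from Kaltura, and the option names are simply as I remember them from the channel setup screen) of the behaviour I observed:

```python
# Open = visible to any *logged-in* user; Public = visible without login.
# The Public option only appears once supportPublicChannel is enabled.

def channel_privacy_choices(support_public_channel):
    """Privacy options offered when creating a Mediaspace channel."""
    choices = ["Private", "Restricted", "Open"]
    if support_public_channel:
        choices.append("Public")
    return choices

def visitor_can_view(channel_privacy, logged_in):
    """Whether a visitor can see a channel's content (simplified)."""
    if channel_privacy == "Public":
        return True            # no login required
    if channel_privacy == "Open":
        return logged_in       # any authenticated user can view
    return False               # Private/Restricted: members only
```

Which is why clicking Open still bounced anonymous visitors to the login screen: Open only opens the channel to people who already have accounts.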

Now, when I upload a video, I can publish it to multiple channels, including the open and public ones.

 

Like I said, this is likely missing out a bunch of other steps I have done along the way to enable public channels on our Mediaspace instance. But for those of you in the KSS struggling with setting up public channels – supportPublicChannel was what finally did it for me. Thanks Jordi!

 

Supporting what I use 2016 edition

Ok, time for my annual supporting what I use post. For those of you who have followed my blog for the past few years, you’ll know this is an annual event around the holiday season where I encourage you to financially support the free and open tools & services you use to help keep them free & open.

This whole annual supporting what I use series of posts goes back to a blog post that George Siemens wrote in 2012 where he singled out the important work that Audrey Watters brings to the EdTech community; work that, unlike many of us, is not underwritten or supported by an institution or company. Audrey is an independent agent, making a living off her writing, speaking and related events. This year, I’ve gone back to supporting Audrey with an ongoing monthly contribution that can hopefully help her concentrate on publishing important pieces, like her annual top EdTech Trends of the Year posts (essential EdTech reading). I encourage you to do the same.

In addition to supporting Audrey’s independent work, I am renewing my commitment to Open Media for their work in advocating for internet rights and freedoms in Canada. And, as this past year has shown us so clearly, more work needs to be done in the area of critical digital and media literacy, which is why MediaSmarts is also getting a donation from me.

Which brings me to my last choice, which is a bit different this year in that it is a business.

I’ve subscribed to a daily newspaper.

I have done this for a couple of reasons. First, in reaction to the recent election in the US (built on the back of Brexit in the UK) and the war on truth we are facing. Propaganda and misinformation have always been staples in politics, but these recent results have shown that now, more than ever, I need to step up and support organizations committed to fair and accurate journalism, and (for me) that means a daily newspaper.

And I am getting a physical copy delivered to my home. This is part of the second (and perhaps less obvious) reason I am subscribing to a daily paper. For my kids. I want to have newspapers in the house that they can pick up and read.

As my kids get older, I am finding I have less and less control over their digital environments, and have to rely on the critical media and digital literacy skills they are developing to make good decisions about the media they consume. In a digital household where our media choices are often highly individual, based on the devices we each have in front of us, there is little chance for my kids to serendipitously discover information outside of their mediated filter bubble. It is something I worry about with digital books, too. As much as I love reading books electronically, not having my collection visible on the bookshelf within my own home reduces the random discoverability of topics and subjects for the other people in my house. Sure, there are plenty of ways for me to make my digital collections known, but my kids aren’t really cruising past my Goodreads account on a daily basis on the way to the breakfast table like they do our family bookshelf.

When I was a kid living in northern Alberta, the daily Edmonton Journal subscription was a critical part of my media diet for the simple fact that it was just left lying around the house. The same goes for the books on my family bookshelf. I often read things that were outside of my normal areas of interest simply because I had proximity to books that I would not have picked myself. So, I want a general daily newspaper lying around the house that my kids can just pick up and read, both to widen their horizons and to help them understand what good journalism looks like.

If you are interested in seeing what I have supported in the past (to perhaps give you some ideas of your own), you can read my previous posts here, here and here.

Image: Newspapers by Alan Foster CC-BY-NC-ND

 

 

12 apps of Christmas

Yeah yeah, I know. It’s the middle of November, what the heck are you talking about Christmas for?

Well, a couple of my ETUG colleagues, Leva Lee and Sylvia Riessner, pitched an idea a few weeks back for a special Christmas-themed ETUG event called the 12 Apps of Christmas that I have been working on.

Drawing inspiration from similar 12 Apps of Christmas events from across the pond (and how fantastic that Chris Rowell thought to CC license everything and create a build-your-own 12 Apps of Christmas tutorial website!), the basic idea is to put together some bite-sized microlearning activities that get our local edtech community suggesting, testing, collaborating and reflecting on the usefulness of different apps.

No surprise, but there are thousands of apps targeted at EdTech of varying utility and quality, and the edtech professional’s task of quickly separating the wheat from the chaff is becoming increasingly important. Institutions like UC Irvine have developed processes around testing and assessing the usefulness of cloud-based educational technologies, and rapid EdTech evaluation models are being considered and developed. We’re also seeing collaborative efforts to assess educational technologies, like the Common Sense Media educators portal, which collects & aggregates information from teachers about the usefulness and pedagogical value of different learning apps.

The idea of 12 Apps of Christmas is that each day starting December 1st, we’ll release a new app via the (currently under development) 12AppsofChristmas.ca website. The app will include a description, some possible ways it could be used in a teaching & learning context, and a very short (15 minute) activity that gets people trying out the app.

The apps are being picked by various members of the BC ETUG community. The criteria for which apps to include are pretty basic: free, available on multiple platforms, easy to use, and lightweight in the sense that it shouldn’t take people a lot of time to figure out how to use them.

Once the activity is completed, we hope that you’ll spend a bit of time evaluating the app & leaving some review comments on the app post (I’m building the site in WordPress & will use the commenting feature). We’ll include a few question prompts to help frame the evaluation, but the idea is that the whole process should not be too onerous and should be flexible enough to allow people to hop in and out and take part with whatever time they have.

While the 12 Apps of Christmas is by no means an extensive review process, it will hopefully be a fun activity with a minimal time commitment that will get those interested in educational technology collaboratively playing, testing and evaluating different apps and technologies.

Photo: Blue Christmas by Jamie McCaffrey  CC-BY-NC

 

Learning analytics & transparency

Just got back from EDUCAUSE. I’ll have more on the conference in future posts, but wanted to quickly post a couple of thoughts I have had around learning analytics and transparency based on what I learned at EDUCAUSE and as a result of an EdTech demo session I did this morning with an LMS vendor on learning analytics.

I went to EDUCAUSE with a few goals, one of which was to try to learn more about learning analytics. Specifically, what (if any) are the compelling use cases and examples of faculty and institutions effectively utilizing analytics to solve problems, what are the ethical issues around data collection, how are institutions informing their students & faculty of these concerns, and what technologies are being used to facilitate the collection and analysis of analytics data. And while I didn’t find complete answers to these questions, I did come away with a better 10,000 foot view of learning analytics.

The primary use cases still seem to be predictive analytics to identify academically at-risk students, and to help institutions improve student retention. I get the sense that, while student retention in Canada is important, it is not as critical for Canadian institutions as it appears to be for U.S. institutions. There are likely more use cases out there, but these 2 seem to be the big drivers of learning analytics at the moment.

Earlier today, I attended an LMS demo session on learning analytics where I had a chance to see some of the analytics engine built into the LMS. The demo included a predictive analytics engine that could be used to identify an at-risk student in a course. Data is collected, crunched by an algorithm, and out comes a ranking of whether that student is at risk of not completing the course, or of failing it. When I asked what was going on within the algorithm that was making the prediction about future student behavior, I got a bit of an answer on what data was being collected, but not much on how that data was being crunched by the system – that is, what was happening inside the algorithm that was making the call about the student’s future behavior.

This is not to single out a specific company, as this kind of algorithmic opacity is extremely common in not only learning technologies, but almost all technologies we use today. Not only are we unaware of what data is being collected about us, but we don’t know how it is being used, what kind of black box it is being fed into, and how it is being mathemagically wrangled.

Now, it’s one thing to have something fairly innocuous like Netflix recommend movies to you based on – well, we don’t really know what that recommendation is based on, do we? It is likely that what we have viewed before is factored in there, but it is also likely that the recommendations in Netflix are pulling data about us from services we have connected to Netflix. Mention on Facebook that you want to see the new Wes Anderson movie and suddenly that becomes a data point for Netflix to fine tune your film recommendations, and the next time you log into Netflix you get a recommendation for The Royal Tenenbaums. I don’t know for sure that it works that way, but I am pretty certain that this information from around the web is being pulled into my recommendations. Search for a movie on IMDB. Does that information get shared back to Netflix the next time you log in? Probably.

As I said, the decisions coming out of that Netflix black box are fairly innocuous ones for an algorithm to make – what movie to recommend to you. But when it comes to predicting something like your risk or success as a student, well, that is another scale entirely. The stakes are quite a bit higher (even higher still when the data and algorithms keep you from landing a job, or get you fired, like teachers in New York State). Which is why, as educators, we need to be asking the right questions about learning analytics and what is happening within that black box because, like most technologies, there are both positives and negatives, and we need to understand how to tell the difference if we want to take advantage of the positives and adequately address the negatives. We can’t leave how the black box works up to others.

We need transparency

Which brings me to the point that, in order for us to fully understand the benefits and the risks associated with learning analytics, we need to have some transparent measures in place.

First, when it comes to predictive analytics, we need to know what is happening inside the black box. Companies need to be very explicit about what information is being gathered, and how that data is being processed and interpreted by the algorithms to come up with scores that say a student is “at-risk”. What are the models being used? What is the logic of the algorithm? Why were those metrics and ratios within the algorithm decided upon? Are the metrics and ratios used in the algorithms based in empirical research? What is the research? Or is it someone’s best guess? If you are an edtech company that is using algorithms and predictive analytics, these are the questions I would want you to have answers to. You need to let educators see and fully understand how the black box works, and why it was designed the way it was.
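To be concrete about what I mean by transparency, here is a deliberately over-simplified sketch (entirely hypothetical – not any vendor’s actual model, and the features, weights and threshold are all made up) of an at-risk score whose inputs, weights and reasoning are inspectable:

```python
# Hypothetical feature weights – a transparent vendor would publish theirs,
# along with the research each one is based on.
WEIGHTS = {
    "days_since_login": 0.5,
    "missed_assignments": 1.0,
    "forum_posts": -0.3,   # engagement lowers the risk score
}
AT_RISK_THRESHOLD = 2.0    # also hypothetical

def at_risk(student):
    """Return (flagged, score, per-feature contributions)."""
    contributions = {f: w * student.get(f, 0) for f, w in WEIGHTS.items()}
    score = sum(contributions.values())
    return score >= AT_RISK_THRESHOLD, score, contributions

# Every prediction can explain exactly why the flag was raised:
flagged, score, why = at_risk(
    {"days_since_login": 7, "missed_assignments": 2, "forum_posts": 5}
)
# score = 0.5*7 + 1.0*2 - 0.3*5 = 4.0
```

The point isn’t that a real model should be this simple; it’s that whatever the model is, educators (and students) should be able to see this level of detail about how a score was produced.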

Second, students should have exactly the same view of their data within our systems that their faculty and institution has. Students have the right to know what data is being collected about them, why it is being collected about them, how that data will be used, what decisions are being made using that data, and how that black box that is analyzing them works. The algorithms need to be transparent to them as well. In short, we need to be developing ways to empower and educate our students into taking control of their own data and understanding how their data is being used for (and against) them. And if you can’t articulate the “for” part, then perhaps you shouldn’t be collecting the data.

Finally, we need to ensure that we have real live human beings in the mix – that the data being analyzed is further inspected and interpreted by human beings who have the contextual knowledge to make sense of the information being presented on a data dashboard. Not only does that person need to know how that data ended up on that dashboard and why, but also how to use that data to make decisions. In short, faculty need to know how to make sense of the data they are being given (and I’ll touch on this more in a future blog post when I write about Charles Darwin University Teaching & Learning Director Deborah West’s analytics presentation, which centered around the question “what do teachers want?”).

One approach from UC Berkeley

At EDUCAUSE, I saw a really good example of how one institution is making their data processes more transparent. In a presentation from Jenn Stringer, Associate CIO of UC Berkeley, there was a slide that highlighted the data policies they have put in place around the ethical collection and use of learning analytics data.

(Photo of slide: UC Berkeley’s learning analytics data principles)

These principles are reminiscent of the 10 learning data principles set out by the Data Quality Campaign and the Consortium for School Networking.

Additionally, UC Berkeley also makes a student analytics dashboard available to the student so that they get the same view of the analytical data that their faculty get. I think both of these are excellent starts to working ethically and transparently with learning analytics data.

But for me the big question remains – what are the compelling use cases for learning analytics, and are those use cases leading to improvements in teaching & learning? So far, I am not sure I came away from EDUCAUSE with a better understanding of how analytics are being used effectively, especially by faculty in the classroom. If you have some interesting use cases about how analytics are being used, I’d love to hear them.

Photo: Learning Analytics #oucel15 keynote by Giulia Forsythe CC-BY-NC-SA

 

Looking for Canadian Creative Commons projects

If you have been involved with the Creative Commons community, you will have no doubt run into Kelsey Wiens.

Kelsey was a Canadian ex-pat working in South Africa, and was deeply involved in Creative Commons South Africa. Kelsey was also the driving force behind Open Textbooks for Africa.

Earlier this year, Kelsey relocated from South Africa back to Canada and I have been working with her (and others, like CIPPIC) to reinvigorate interest in the Canadian Creative Commons affiliate. With Toronto hosting the 2017 Creative Commons Global Summit (mark the dates April 25-28, 2017), it would be great to have an energized local affiliate representing the host country.

There are some really interesting projects happening at Creative Commons these days, not the least of which is the CC Certification program that Alan Levine is working on with Paul Stacey. Paul is also co-authoring a book on open business models with Sarah Pearson.

One of the projects that Kelsey and I are working on is developing a map of open projects in Canada to try to get a better understanding of where the pockets of openness are across the country. The CC Canada community is well represented by educators (especially post-secondary educators), and we have a pretty good idea of what the major open education projects happening across the country are. But Creative Commons is much more than Open Educational Resources, and it is in those other areas where we are trying to find pockets of openness.

So, if you are involved with a Canadian based Open Access, Open Data, Open Government, or Open Source Software project, please take a few seconds and connect with us by filling out this short form. I am especially interested in finding out about Canadian GLAM (Galleries, Libraries, Archives, and Museums) projects that might be using Creative Commons licenses.

Please feel free to share with your networks, and help us map Canadian open projects.

Photo: Creative Commons 10th anniversary by Timothy Vollmer CC-BY

 

Open puts the public in public education

I really encourage you to take a few minutes and watch Robin DeRosa’s great Ignite Talk from DML2016 on open education.

In 5 short minutes (NO, DON’T CLAP I DON’T HAVE TIME!) she connects the various strands of open education (open access, open educational resources, and open pedagogy) to the broader societal mandate of our public institutions, which is to serve the public good. And while Robin is based in the US, the main thesis of her talk is applicable to anyone working in public education anywhere.

I don’t see how higher education can be relevant in the future without being even more open than we are today. We need to be more deeply engaged with the public: as educators, as researchers, as institutions designed to serve the public good. Open has to be both the default value and the default process by which we operate, or else we risk becoming alienated from the public whom we are here to serve, and risk failing to adequately prepare our students to become fully engaged citizens.

 

Fall projects

I’ve got a busy fall on the go with some new initiatives and projects keeping me busy.

EdTech Demos

This is a new educational technology initiative here at BCcampus, designed to help expose the system to some new ideas and educational technologies. These are free 30-60 minute virtual demonstrations done about once a month. So far I’ve done 3 of these demo sessions (Canvas, FieldPress, H5P) and I’ve been very happy with the response and attendance from the post-sec system.

One of the goals I have is to try to make some space for open source educational technologies, as these are often interesting projects that don’t have the marketing or promotional budgets of a commercial edtech company. But there will be a mix of commercial and open source, big and small, to try to get a nice flavour of what is happening in the edtech space. I have 2 more scheduled for this fall: one with D2L Brightspace on learning analytics at the end of October, and another with Hypothes.is in late November.

I’ve put together an email notification system that people can sign up for to get notified when these demos happen. I am shooting for about one per month. I’m also looking for suggestions of edtech that you would like to see a demo of.

Guide on the Side Sandbox

I’m also coordinating a sandbox project with a group of academic librarians from around the BC post-sec system for an open source application called Guide on the Side, developed by the University of Arizona to create guided tours of websites and web applications. We are just in the process of installing the software and forming our community. This sandbox project will run for the next 6 months as we test out the software. I am trying to put together some edtech evaluation frameworks (SAMR, RAIT, etc.) to use as a guide for evaluating the software. I imagine I’ll end up cobbling a few of these together to come up with a framework that works for what we want our sandbox projects to do. We’ll be releasing our findings in the spring.

EDUCAUSE

I’ll be heading to EDUCAUSE in Anaheim at the end of the month. The last time I was at EDUCAUSE was in 2007, where I first met Bryan Alexander and learned about this new thing called Twitter. I don’t know if this one will be as memorable (Twitter became kind of a big deal in my life), but I am looking forward to attending.

I am in a bit of session overload right now as I plan what to attend and put together my schedule. I forgot just how massive this thing is. Holy session overload! One time slot I am looking at has 53 concurrent sessions. Even when I filter from 7 down to 3 streams, I still have 25 options. This one looks most relevant for how I feel at this moment.

(Screenshot: EDUCAUSE session listing)

As I have written about before, I am intrigued by a few new technologies and ways of thinking about edtech that have been coming out of EDUCAUSE, specifically the idea of Next Generation Digital Learning Environment (NGDLE) and applications like CASA. These are the sessions I’ll be attending, along with some more on personalized and adaptive learning which I feel I have a good conceptual understanding of, but have yet to get a good grasp on some of the more practical applications of these technologies.

Privacy Impact Assessment & WordPress Projects

One of the other projects I have on my plate for this fall is some Privacy Impact Assessment work for the BC OpenEdTech Collaborative. We had a very productive meeting of our WordPress group where one of the barriers identified by the group was the lack of clarity about data sovereignty and privacy with the technical solutions we are looking at (EduCloud, Docker, and WordPress itself).

While we do have a FIPPA compliant hosting service in EduCloud, that is just one (albeit significant) piece of the FIPPA puzzle. But there may be other privacy considerations when it comes to using WordPress. For example, a plugin may potentially disclose personal information to a server outside of Canada.
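One quick (and admittedly crude) first pass at screening a plugin for this kind of disclosure is to grep its source for the external hosts it talks to. This is only a sketch (the plugin path is illustrative), and a real Privacy Impact Assessment needs far more than this:

```shell
# Crude first-pass audit: list the external hostnames a WordPress plugin
# references, with counts. (Illustrative only; the plugin path is made up,
# and plugins can build URLs dynamically, so absence of hits proves nothing.)
grep -rhoE "https?://[a-zA-Z0-9./_-]+" wp-content/plugins/some-plugin/ 2>/dev/null \
  | cut -d/ -f3 \
  | sort | uniq -c | sort -rn
```

Any hostname that resolves outside Canada would then be a candidate for closer inspection in the assessment.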

Since privacy and FIPPA (within the context of educational technology) are part of my wheelhouse, I’ve taken on coordinating a Privacy Impact Assessment for an EduCloud based WordPress project. Since a privacy impact assessment is done on an initiative and not just the technology used as part of the initiative, I’ll be taking a fairly in-depth look at one of our applications of WordPress and using it to construct a Privacy Impact Assessment report that can then (hopefully) be used as a template for other initiatives using similar, but slightly different, technologies. I have an idea in my head of how to do this, but haven’t fully formed how to execute it yet.

Other stuff

There are a number of other projects I have on the go right now (including a big one with BCNET and UBC developing an onboarding process for institutions who wish to join the provincial Kaltura shared service), and participating on the SCETUG steering committee. But these are likely the ones I’ll be blogging about over the coming months.

Oh, and something unrelated to my work with BCcampus – I’ll be spending some time prepping to teach in the new year at Royal Roads University in the Learning & Technology program. The course (normally taught by George Veletsianos) is LRNT505: Community Building Processes for Online Learning Environments, and I am thrilled to be able to get into a (virtual) classroom and work with students. Having been out of an institution for the past 4 years, I am immensely grateful for the opportunity to jump back in as a faculty member & work directly with students.

 

My #ETUG Tale of Fail

ETUG is coming up in a few weeks: the twice-yearly gathering of BC post-secondary educators, learning technologists & instructional designers.

I’m more than a bit bummed that the timing of ETUG this year coincides with the annual EDUCAUSE conference which I am going to be at. So, despite being on the steering committee for ETUG, I’ll actually be missing this one.

And it will be a good one. The steering committee (co-led this year by the excellent Janine Hirtz and wonderfully creative Jason Toal aka Dr. Jones) has taken full advantage of the fact that ETUG happens just a few days before Halloween and are calling the workshop the Little (work)Shop of Horrors.

Completely en pointe will be Audrey Watters’ keynote, riffing off her recent book The Monsters of Educational Technology.

Also keeping with the frightening theme is the call for proposals, looking for stories about things that went wrong. Tales of the fail. I am a big fan of failure, having failed at many things in my life. And I do believe that, while success gives us confidence, it is in failure that we do some of our deepest learning. And as educators, we need to be mindful that there are valuable lessons within failure.

Leading up to ETUG, a few of us are creating some videos with our own tales of failure. Here is my contribution – my tale of fail is my academic career, partially inspired by Johannes Haushofer’s CV of Failures. The running narrative throughout my academic life, going back to high school, is that I never finish. Even high school – although I did manage to get that last course and technically finish a few years after I was supposed to graduate. My first distance learning experience was taking a Biology 12 correspondence course that finally gave me enough credits to finish high school.

Watch the #etug hashtag on Twitter over the coming days for more Tales of Fail.

And if you are reading this and I have applied to be in your PhD program…I will complete it.

Maybe.

 

Privacy in BC is more than just data sovereignty and the cloud

Some random thoughts and notes from the recent InfoSummit I attended. The Summit was put on by the non-profit BC Freedom of Information and Privacy Association.

Among the presenters were KPU faculty Mike Larson and UBC legal counsel Paul Hancock. Both spoke on a number of issues, but the one that was most relevant to my work was the discussion on the BC-specific data sovereignty requirement, which says that, unless you get consent, personal information collected by BC public bodies must reside on servers located in Canada. In the BC higher education edtech space, this has made using cloud based services problematic for faculty, who often have to get signed consent forms from students to use cloud services outside of Canada.

On this point, Mike and Paul were on opposite sides of the debate. Mike supported the requirement, and made the point that the requirement to get informed consent has a strong pedagogical function. When he asks his students to sign a consent form to use cloud-based services, it often kickstarts a conversation with them about privacy, data, security and user rights. Now, Mike does teach Criminology and law, so it feels like a natural fit to have this convo with his students, and I wonder if a, say, English prof or someone teaching trades would be prepared to have this conversation with their students. Or whether their students would even care to have this conversation, as important as it is to have. But still, it was refreshing to hear a faculty member speak about using BC’s informed consent requirement as a pedagogical device to start conversations about privacy in a digital age.

On the other hand (and probably closer to the reality of most faculty), the data sovereignty requirement and informed consent forms are real barriers for faculty who wish to incorporate other technologies into their teaching and learning. UBC’s Paul Hancock believes that the data sovereignty requirement is much too broad, and he spoke about feeling handcuffed by the legislation when he speaks to instructors who want to use a teaching tool that fits their exact pedagogical goals, but is hosted in another country. For these instructors, Hancock has to advise them that they cannot use the technology unless they get the consent of their students. Getting consent may sound easy, but if you do have students who do not consent, then the instructor has to have an alternative activity or exercise ready for them that is FIPPA compliant. Now you have to start designing additional activities for these special exceptions, and I don’t know many faculty who have extra time on their hands to develop extra activities as a work-around to the legislation.

But it is not just the data sovereignty requirements that are presenting challenges to higher education institutions. Students are now asking institutions for access to the data collected about them within the technologies hosted on campus. Using a Freedom of Information request, UBC student Bryan Short has been trying to get a copy of the data collected about him by the UBC LMS (Blackboard, branded as UBC Connect at UBC). In a five-part blog post series, Bryan neatly and perceptively outlines his experiences trying to get access to learning analytics collected about him using a Freedom of Information request (and does a nice takedown of the LMS in general). Normally, when a public institution has been served with an FOI request, they have 30 days to comply. UBC was unable to comply and has now asked for an additional 30 days to gather the information. I wonder what kind of technical hoops UBC might be jumping through to even extract this information from their LMS in a meaningful way.

Sadly, this whole process has made Bryan feel like he is a “meddling, bothersome nuisance” for doing something he has the legal right to do.

I don’t think this will be an isolated incident. As digital privacy becomes a bigger issue within our society, I suspect institutions will begin to see more and more of these kinds of requests from students like Bryan asking for access to their learning data, especially if that learning data is being used in any way to conduct an assessment of the student. Which makes me wonder how many institutions in BC would be able to respond within 30 days to a student FOI request for the data collected about them in the LMS?

Photo: Eye I. By Thomas Tolkien CC-BY

 

Disjointed, fractured and somewhat pointless

3 months.

I’ve gone through long dry spells on my blog in the past, but this one feels especially long.

I’m struggling a bit to figure out why this writing stuff has suddenly become so difficult. It wasn’t that long ago when I was in the habit of writing a weekly post about what I was working on. But lately I can’t seem to pull all the disjointed and fractured thoughts together to even accomplish that each week.

It’s not like there are not interesting conversations happening that I want to participate in or respond to. I feel like I owe both Audrey and Martin some considered and thoughtful words about EdTech as academic discipline as I think I was the one who likely spurred the subject after posting a piece on Facebook that they both responded to…and then took one step further and wrote great posts on (here’s Audrey’s and Martin’s). My take doesn’t go much beyond the annoyance I felt when I read the original article and thought: here are elite institutions “discovering” something that many of us thought already existed. Like MOOCs, suddenly discovered when the, ahem, right institutions started doing them. It’s not echolalia, it’s moocolalia.

There is also Martin’s excellent take (and ensuing comments) on a societal shift to a dark place where expertise is dismissed and the facts don’t matter. How does education fight a culture that is increasingly anti-learning?

I feel like I should comment with something, especially since I was quick to jump into the great Twitter maple syrup debate, which prompted an observation from Martin.

But maple syrup is easy. The role of education in helping to solve the decline of western civilization? Not so much. I feel inadequate to respond, especially in light of the incredibly thoughtful responses by a lot of very smart people each weighing in with their views.

And here is where I get to every time. What is it I am trying to say? It has to be more than just….this.

I feel like I’m wasting time. Mine, because here I have sat for 45 minutes trying to figure out what it is I want to say, or whether I even have anything else to say.  And yours because, well, I haven’t really said all that much.

Maybe I am just going through a drought, but it feels like something has changed. Writing used to be the way that I would connect the thoughts and make meaning out of this stuff. But lately…not so much. It feels like an awful lot of work with little reward. And work that is dissatisfying because I keep going, and feel I get nowhere. Get stuck. Have a thousand strands in my head that I can’t quite pull together, elusively out of reach to make a coherent narrative.

Hell, never mind coherent narrative. At this stage, I’d be happy with a point.

So I am just going to keep on writing. I am hoping I can write my way out of this and get back to where I feel like I actually have something meaningful to contribute to the conversation. It’ll likely mean a few go nowhere posts, but I’m going to publish them anyway so please bear with me. I feel like I need to get this blogging thing under control again.

Photo: Temporary Pointless Sign by Cory Doctorow, CC-BY-SA

 

My first pull request

Crazy to think that, even though I have had a GitHub account for 5 years and have poked, played and forked things, I have never made a pull request and contributed something to another project until today.

I attribute that mostly to the fact that I stopped actually developing and writing code right around the same time as I signed up for a GitHub account, and the fact that it took me a long time to grok how GitHub works. Honestly, I am still not totally sure I understand how GitHub works, but after a great session with Paul Hibbitts at the Festival of Learning last week where I had a chance to dig into both Grav and GitHub, I finally feel like I can find my way around GitHub with some level of confidence. Enough that when I saw an opportunity to contribute to a project earlier today I thought, “I can help!”

The trigger was a tweet from the web annotation project Hypothes.is. I’ve been playing with Hypothes.is since hearing about the project from David Wiley a few years ago. It is maturing into a really great annotation system that has found some use among educators, including Robin DeRosa, who is using Hypothes.is as an annotation tool on work she has published in PressBooks.

The tweet from Hypothes.is pointed me to a small project that Kris Shaffer is working on – a WordPress plugin that will allow you to aggregate your Hypothes.is annotations on a page or post on your WordPress site.

As Kris points out on his blog post about the plugin, there are some compelling use cases

I envision a number of possible uses for Hypothes.is Aggregator. As I write in my post on hypothes.is as a public research notebook, you can use this plugin to make a public research notebook on your WordPress site. Read something interesting, annotate it, and aggregate those annotations (perhaps organized by topic) on your domain. They will automatically update. Just set it and leave it alone.

I also see this as a tool for a class. Many instructors already use hypothes.is by assigning a reading that students will annotate together. Hypothes.is Aggregator makes it easy to assign a topic, rather than a reading, and ask students to find their own readings on the web, annotate them, and tag them with the course tag. Then Hypothes.is Aggregator can collect all the annotations with the class tag in one place, so students and instructors can see and follow up on each other’s annotations. Similar activities can be done by a collaborative research group or in an unconference session.
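Under the hood, a plugin like this presumably just queries the public Hypothes.is search API for a tag and renders the results. Here’s a minimal Python sketch of that kind of query; the endpoint and parameter names reflect my understanding of the public API, and may differ from what the plugin actually does:

```python
import json
import urllib.parse
import urllib.request

# Public Hypothes.is search endpoint (my assumption; check the API docs)
API_BASE = "https://api.hypothes.is/api/search"

def build_search_url(tag, limit=20):
    """Build a search URL for annotations carrying a given tag."""
    params = urllib.parse.urlencode({"tag": tag, "limit": limit})
    return f"{API_BASE}?{params}"

def fetch_annotations(tag, limit=20):
    """Fetch annotations tagged `tag`; return (annotation text, source URI) pairs."""
    with urllib.request.urlopen(build_search_url(tag, limit)) as resp:
        rows = json.load(resp).get("rows", [])
    return [(r.get("text", ""), r.get("uri", "")) for r in rows]

# e.g. fetch_annotations("myclass-tag") to gather a course's annotations
```

A WordPress plugin would do essentially this on the server side and format the rows into a post or page.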

I went to his GitHub site, downloaded the plugin and fired it up. It worked (although the Cover theme I am using has done some funky formatting to it, which I need to adjust). But when I took a look at the repo, I noticed that Kris had no README file, and the actual instructions on how to install and use the plugin were only on his blog post. Aha! A chance for me to actually contribute something to a project! So, I fired up my Atom editor, forked his repo and added a README.md file with instructions that I copied and pasted from his blog post on how to install and use the plugin.

So far so good. Now to figure out how to actually do a pull request. I thought that, before I did this (not knowing exactly what might happen when I hit the Pull Request button), I should check with Kris. So I fired him off a tweet.

Ok, all good. I used these instructions from GitHub on how to launch a pull request and a few minutes later, my README file was sitting in Kris’s GitHub repo.
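For anyone curious, the fork-and-branch flow I stumbled through can be sketched on the command line like this (the repo name is illustrative, and the clone/push steps are shown as comments since they need a real fork on GitHub):

```shell
# 1. Fork the repo on GitHub via the web UI, then clone YOUR fork:
#    git clone https://github.com/YOUR-USER/hypothesis-aggregator.git && cd hypothesis-aggregator
# (simulated here with a fresh local repo so the steps below actually run)
mkdir -p hypothesis-aggregator && cd hypothesis-aggregator && git init -q

# 2. Do the work on its own branch
git checkout -q -b add-readme
echo "# Hypothes.is Aggregator" > README.md   # content copied from the blog post
git add README.md
git -c user.name=demo -c user.email=demo@example.com commit -qm "Add README with install/usage instructions"

# 3. Push the branch to your fork, then open the pull request on GitHub:
#    git push origin add-readme
git log --oneline   # shows the commit ready to become the PR
```

The pull request itself is then opened from the GitHub web interface, comparing your branch against the original repo.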

I am still not totally sure what I am doing, but having that first pull request under my belt has given me a boost of GitHub confidence.

Image: GitHub (cropped from original) by Ben Nuttall, CC-BY-SA. The crop used here is released under the same CC-BY-SA license.

 

What would you do with a Creative Commons certificate?

I’ve been following the development of a Creative Commons certificate since last fall. Paul Stacey from Creative Commons paid a visit to the BCcampus office looking for some feedback on a DACUM-inspired curriculum process he was leading, and on the potential value of a CC certificate.

Developing a certificate program that is flexible enough to consider all the potential use cases for Creative Commons is (I think) one of the biggest challenges. While we in higher ed look at CC licenses as a way to enable the development and sharing of curricular resources and open access research, the use cases outside of academia are wide and varied. CC is used by authors, musicians, filmmakers, photographers, and other types of artists. Governments are using Creative Commons licenses, as well as galleries, libraries, archives and museums (GLAM), furniture design, 3D printing & manufacturing, and even game design.

Earlier this year, Alan Levine was brought on board to assist with the process, and it’s great to see some progress being made on the development of a Creative Commons certificate. Alan has asked for some help from the community to seed a website with some videos on how a CC certificate could be applied and used.

One of the ways that I could see my organization, BCcampus, using a CC certificate program is to help us vet grant applications. Over the years, BCcampus has supported the development of open educational resources (open courseware with the old OPDF program and the current open textbook project) by coordinating grant programs. A number of institutions get together and collaborate to create open courses or open textbooks that can be freely shared with others. As a condition of the grant, those creating the resources have to agree to release their material with a Creative Commons license. Often when people apply for a development grant, they are either not familiar with Creative Commons, or have only a cursory knowledge of how the licenses work, so BCcampus often takes on the role of providing support and training to the grantees, depending on their level of knowledge of Creative Commons.

Having a certificate program from CC would help with the application vetting process. Additionally, with some CC certified standards to align with, I think the community could develop some fantastic openly licensed learning resources to support the CC approved learning objectives. It could become a model of OER production and sustainability if we all begin to build on each other’s work.

If you have a use case for a CC certificate, take a minute, record a video and let Alan know. Here is my response.

 

Follow that open educational resource!

This morning I was taking a look at some of the Piwik website analytics for the Geography open textbook we created 2 years ago as a textbook sprint project. For me, the really interesting data is always the referring website data, as that can give you a glimpse of how the content is being used and by whom.

The BC Open Textbook Project tracks adoptions of open textbooks with an eye to reporting back student savings. But there is much more value in open resources than just a displacing adoption where a commercial textbook is replaced by an open one. When you create open resources, you may have one specific group in mind, but you often find there are unexpected audiences using your resources.

This is a big value proposition of open resources. Once you make an open resource, it is available for others to use and refer back to. Each open textbook in the BC open textbook collection contributes to improving the knowledge available on the open web.

This is one of the major reasons I love open resources created by smart people in higher education. Every time an edu contributes resources to the open web, we make the web a better, more informed space for all.

I like to think of it as making the web more educational, less Perez Hilton.

Onto the Geography open textbook.

First, there is a lot of evidence that the book is being used by the intended audience of BC post-secondary institutions as there are referring links back to sections of the book from post-secondary domains at KPU, UVic, Langara, VCC, UBC and VIU. Most of the referring links I follow back take me to a landing page for an LMS at that institution, telling me that the content is being referred to from inside a course. This may not be a full adoption of the resource by the faculty, but it does indicate that the resources are being used by the intended audience.

But use of the resources extends beyond BC higher ed. A large source of referral traffic is from the Oslo International School, which appears to be an International Baccalaureate primary school. Again, when I follow the referring link I am met with an LMS login screen. It’s crazy to think that a regionally specific resource aimed at a first year Geography student in British Columbia is finding use at a primary IB school in Oslo, Norway. You never know where you’ll end up when you go open.

It is not the only K-12 school referring to these resources. Teachers from School District 43 in BC have recommended (Word) the section on Residential Schools to their students as a resource for a research project. Teachers in School District 63 are using the section of the textbook dedicated to the BC Gold Rush as a learning resource in their classes.

Those two resources have also found use outside of education. The Royal BC Museum Learning Portal has included a link back to the textbook section on the Gold Rush as a resource on their learning website dedicated to the BC Gold Rush, and Vice included a referential link back to the section on Residential Schools in an article it published on the Canadian Truth and Reconciliation Commission on residential schools.

The textbook is also showing up on some kid friendly search engines. A referral from a KidRex search on the Hope Slide led me back to the search results page for the search and shows that the Hope Slide case study in the textbook comes up as the second result (behind the Wikipedia entry).

None of these uses save students a penny, but they show the value of an open resource beyond the financial. No doubt the student savings are important, as the financial barriers are real. But to me, seeing this kind of usage of OER shows the benefits extend beyond students. These resources make the web better for all. It is higher ed freely contributing knowledge to the world. It is higher ed making the world much less Perez Hilton.

Addendum: Gill Green, one of the original book authors and current Open Textbook Faculty Fellow, sent me this tweet about a resource we created for the book.

 

Wikipedia and open learning at the Festival of Learning

The BC Festival of Learning is happening next week in Burnaby. This is an amalgam of a number of different workshops and conferences that have been supported by BCcampus: the Educational Technology User Group (ETUG), the Open Textbook Summit, and the Symposium on Scholarly Teaching & Learning.

I’ve got a busy week ahead of me, facilitating or participating in a number of different sessions, including a three hour Wikipedia workshop on day 1 with Judy Chan and Rosie Redfield (UBC) and Jami Mathewson from the Wiki Education Foundation.

I am quite excited about this session as this is something I have wanted to see happen at an ETUG for the past few years. I have written about Wikipedia in the past and have been a semi-regular contributor for many years. I also maintain a curated Scoop.it collection where I stuff articles on how educators are using Wikipedia.

Getting ready for the workshop, I’ve been impressed with how much work has been done by the Wiki Education Foundation to help support educators who want to use Wikipedia in their class. The resources available to instructors – from handouts, how-to’s and lesson plans to real live people who can help support them – have really lowered the barriers for educators to begin using Wikipedia. This is not the same unsupported landscape for educators as it was 10 years ago when early adopters like UBC’s Jon Beasley-Murray were trailblazing. Full credit to the foundation for making it easier for educators to engage with Wikipedia.

It’s been interesting to watch perceptions of Wikipedia change in higher ed over the years from the days when nobody knew exactly what Wikipedia was, to the backlash forbidding its use by students, to tacit acceptance that it could have a role to play in higher ed, to today where we are seeing active engagement on Wikipedia by many in the academic community interested in exploring open pedagogy.

I have also been heartened to see academics who treat the platform seriously and realize that the world’s largest repository of open knowledge is being heavily used by people in their daily lives. They understand that, as academics, they have an important role to play in helping to maintain the accuracy, breadth and diversity of Wikipedia. Faculty like Dr. James Heilman and Dr. Amin Azzam, who regularly correct misinformation on Wikipedia articles about health.

Heading into the world of Wikipedia is not without its risks, as UofT professor Steve Joordens discovered when he had his (1,900!) students start editing Wikipedia articles, flooding the existing Wikipedia volunteer editors with tons of extra work as they had to filter the contributions. Wikipedia is, first and foremost, a community made up of volunteers, and learning to negotiate and engage with that community is just as important as contributing & fixing content. It’s one of the topics we’ll be discussing at the workshop.

Image: Wikipedia by Giulia Forsythe CC-BY

 

Facebook has an identity crisis – and it's messing with democracy

I’ve followed the long-standing Facebook identity battles that both Alec Couros and Alan Levine have had to endure, and the abject failure on the part of Facebook to deal with fake account after fake account expropriating their identities to do all manner of nasty things. Today comes news that the mayor of Victoria, Lisa Helps, was locked out of her own Facebook account because….well, because Facebook doesn’t believe that a person in politics could actually have the last name of Helps.

While what has happened to Alec and Alan is serious and has caused a great deal of pain to people who have been duped and manipulated by one form of catfish con after another (to say nothing of the huge amount of effort both Alec and Alan have expended fighting Facebook), it is another level of icky when Facebook starts messing with the identity of publicly elected officials.

Regardless of your political opinions of the mayor (and just for the record, I live in Saanich, a different municipality with a different mayor) it is clear that Helps considers Facebook an important tool to engage with her constituents on all manner of public policy issues. Which is how it should be. The internet should enable more direct interaction with our public officials.

But by locking her out of her own account, Facebook has essentially gagged a public official. In short, Facebook – a corporation that is no stranger to accusations that it manipulates political opinion and conducts ethically questionable research by manipulating what we see in Facebook – is messing with democracy.

Now, I don’t think that there is anything overtly political behind having the mayor’s account shut down by Facebook. I think this is a case of Facebook’s own algorithmic bumbling. But, intentional or not, there are socio-political implications to having a publicly elected official lose access to their own Facebook account. Imagine if this happened with just a few days left in a tight election campaign? Or during a crisis in the city where the mayor was trying to use Facebook as a way to communicate important information to the citizens of her community?

It would be dismissive to think that social media is trivial. That this is just Facebook and there are plenty of other avenues available to the mayor to communicate with constituents. Which is true. But the fact is that social media is driving much of the political discourse happening in North America. The recent Pew Research shows that most people get their news from social media, with 63% of respondents saying that they get their news directly from what they see in Facebook. Over 60% of us use Facebook and other social media to engage in political discourse. And Facebook, the company, has no qualms about adjusting our newsfeed to promote certain behaviours during an election.

Social media has become a vitally important mechanism in our political process and, by extension, our society.

I am becoming convinced that it is dangerous for us to leave something as crucial as our identity up to an unaccountable, corporate social media company. Facebook is messing up too badly and the stakes are just too high in a democratic society.

I think our civic institutions need to be playing a bigger role in digital identity. Our governments need to be doing more to help their citizens verify that who we are online is legit. It’s a role our government has always had a hand in, through the issuing of government identification documents like passports, health cards, and driver’s licenses. It’s time for them to step up and provide some kind of mechanism that can help their citizens verify that they are who they are online.

I also think that we need some regulations on social media with regards to digital identity issues. When the mayor of a city – and that city’s police force – are unable to convince Facebook that the mayor is who she says she is….that is a serious problem. Our digital identities are too important to be left to customer support who refuse to return messages and fix problems quickly. It begins to look like censorship – tacit or otherwise – when the mayor is cut off for 9 days, and gets NO response from the company that cut her off. With identity, Facebook is failing and it is time for our public officials to step in and ensure that there are effective and efficient identity dispute mechanisms in place that keep people from being locked out of their own accounts for days and weeks on end. And with Facebook single sign on accounting for over 60% of login credentials at third-party sites, getting the boot on Facebook likely means getting the boot from a whole host of other sites and services across the web that you use.

I am also becoming convinced that our governments need to be more proactive in providing alternative public virtual spaces for citizens to engage. While it is great that civic engagement happens on Twitter and Facebook and other virtual spaces, it is still at the whim and control of that social media company. Just like our communities have real public spaces like libraries, schools, recreation centres and other physical municipal institutions, we should also be pushing for more of these virtual public spaces provided by our civic institutions. Places where a mayor can virtually interact with a wide network of constituents that isn’t controlled by a corporation driven by its own best interests, with seemingly little regard for the damage it is doing to our lives and communities.

 

Sandstorm Apps and Grains

Understanding the difference between Apps and Grains is important to understanding how Sandstorm works.

Grains are discrete instances of apps. Each grain is a copy of an app. This allows each grain to run isolated from other copies of the app in Sandstorm. Here’s a little walk-through to help you understand how they relate to each other.

Adding the app to Sandstorm

Before you can create grains, you have to add the app from the Sandstorm app market to your local instance of Sandstorm. When you log into Sandstorm the first time, you’ll see a blank slate that looks like this:

[Screenshot: the blank Apps view]

Not much there.

In the left hand navigation, there are two sections: Apps and Grains. The screenshot above is the default Apps section. If I switch to the Grains view, I get a message that I don’t have any Grains installed yet.

[Screenshot: the empty Grains view]


In a nutshell, you install the application once on your local instance of Sandstorm. Once it is installed, you can create multiple copies of that app to use. Each of these copies is called a Grain in the Sandstorm world.

So, let’s use Etherpad as an example. I install the Etherpad app from the Sandstorm app market onto my Sandstorm server. Once that is done, every time I want a new, unique Etherpad document, I create a new grain (copy) of Etherpad, with each grain operating independently of the others.
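If you come from a programming background, one way to think about the app/grain relationship is that an installed app behaves like a class and each grain like an isolated instance of it. This is only an analogy, not Sandstorm’s actual API – all the names below are made up for illustration:

```python
# Conceptual sketch only – not Sandstorm's real API.
# Installing an app is like defining a class once; each grain is a fresh,
# isolated instance with its own state.

class EtherpadApp:
    """Stand-in for an installed Sandstorm app."""
    def __init__(self, title="Untitled Etherpad"):
        self.title = title      # each grain keeps its own title...
        self.contents = []      # ...and its own document state

    def write(self, text):
        self.contents.append(text)

# One installed app, many independent grains:
meeting_notes = EtherpadApp("Meeting notes")
draft_post = EtherpadApp("Draft blog post")

meeting_notes.write("Agenda item 1")
# draft_post is untouched – grains are isolated from each other
```

The point of the analogy is simply that the “install” step happens once, while grain creation is cheap and repeatable, and no grain can see another grain’s data.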

Installing an App

The first step in making Etherpad available on Sandstorm is to install the app from the Sandstorm App Market. Think of the Sandstorm App Market as something like Google Play or the Apple App Store. I only have to do this step once: once Etherpad is installed on my Sandstorm server, I can create multiple grains of Etherpad with the click of a button.

In the Apps section, click Install from App market. This will open the Sandstorm App Market in a new tab or browser window. Find Etherpad and click Install.

Etherpad

You’ll be taken back to Sandstorm and get a message asking you to verify that you want to install the app.

install

This is one of the most visible places where you will see Sandstorm’s commitment to security: each app includes some additional information to help you verify that it is legitimate. You will see a PGP key signed by the application publisher, along with their verified contact information. Sandstorm provides a cryptographic chain of trust that connects the app package you’re installing to the app publisher’s online accounts. This assures you that you are installing a legitimate Sandstorm application and provides a verified trail back to the person who published it.

Click Install Etherpad and the application is installed and ready to use.

Create a Grain

Once the app is installed, you can now create your first Etherpad Grain by clicking on Create new pad.

EtherpadReadyToUse

You’ll see the Etherpad Grain appear in your left-hand navigation under Grains as Untitled Etherpad. To change the title to something more meaningful, click on the title Untitled Etherpad at the top of the screen.

Untitled

A popup will appear where you can change the name.

MyNew

Click Ok and your Etherpad name is changed at the top of the screen and in the Grains navigation on the left.

MyNewEP

And I am ready to start working on this Etherpad. Clicking the Share access link at the top of the page generates a link that I can send to co-collaborators, giving them anonymous access to collaborate on this document, just like you can with Google Docs.

share

Go back to the Apps page and you’ll see that Etherpad has now been installed on your local instance of Sandstorm.

MostUsed

If I want to create another Etherpad Grain, I don’t have to go back to the app market and reinstall the application from the start. I simply click on the Etherpad app icon and create a new grain. Clicking on the Etherpad icon also shows me all the grains of Etherpad I have created so far.

CreateASecond

With the app installed, I can now create dozens of discrete Etherpad grains and share them with different groups of people, each running as its own application within Sandstorm.

ManyEtherpad

Header image: Grains of Sand by Fran Tapia CC-BY-ND


Working with Sandstorm

I’ve been making an attempt to kick the tires more with Sandstorm in preparation for our upcoming workshop at the Festival of Learning.

MyGrains

Snapshot of my Sandstorm grain dashboard

Small pieces, loosely joined is what Sandstorm is all about. Sandstorm is the stitching that joins the small pieces, providing a common authentication and security framework to a patchwork quilt of open source applications.

So far I’ve tested about half a dozen of the 50+ applications within the Sandstorm ecosystem, trying to use them in my day-to-day work. Etherpad (a collaborative document editor that is a scaled-down version of Google Docs) and Framadate (a handy meeting scheduler alternative to Doodle) have been the most useful. I’ve also played around with EtherCalc (spreadsheet), Quick Survey (survey tool), Hacker Slides (presentation tool that uses Markdown), NodeBB (forums), GitLab (Git repo), Rocket.Chat (Slack alternative), and mucked around a bit with the WordPress port in Sandstorm.

My general observation is that the applications that work well within the Sandstorm environment are small, discrete and focused: apps where a single instance (called a grain in the Sandstorm world) maps to a single document or meeting invitation. Tools like Etherpad, EtherCalc, Quick Survey, Hacker Slides and Framadate are the kind of applications Sandstorm does well: you create a document, share it with others to collaborate on and contribute to, and then move on.

I tend to think of these tools as being somewhat disposable. Once a discrete task is done, it’s done. The survey is finished, the meeting dates are picked, the document has been edited and completed. Get in, do your work, get out.

As you can see from my screenshot, I’ve got a lot of Etherpad instances on the go, working on collaborative documents with different users. There is no folder scheme in Sandstorm, or any other way to organize these multiple instances, so I can imagine that as you create more and more documents over time, the user interface could become quite cluttered. I’m just starting to get to the tipping point where I’d like to be able to put some structure around the different applications I have going: maybe organizing by project, grouping all the related apps I am using for a single project in a folder or some other visual organizational metaphor. But I haven’t seen a way to do that yet.

More complicated applications seem to have more limitations. WordPress, for example, is not the full-featured version of WordPress that you would get at WordPress.com or if you installed it yourself. Installing plugins and themes means uploading a zip file instead of connecting to the remote WordPress plugin repo. Publishing is static, meaning that whenever you add new content you have to rebuild the site.

Rocket.Chat (a nice open source Slack-like application) also has a limitation with its mobile app. Rocket.Chat works quite well if you are logged into Sandstorm, but the mobile application cannot connect through Sandstorm, which limits its usefulness.

These are not dealbreakers, but really just the things you learn while sandboxing and experimenting with new technology – seeing what the tool does well and where the limitations are.

Image: Blue Sky by leg0fenris CC-BY-NC-ND


Create embeddable HTML5 content with H5P

Been playing around this morning with a tool called H5P. H5P is a plugin for Drupal, Moodle and WordPress that allows you to create a number of different interactive HTML5 media types: things like interactive videos, quizzes, timelines and presentations.

I’ve only had a chance to play with the plugin for a few minutes this morning, but I got it working and was able to create some basic interactive content, adding a branching overlay to a YouTube video that runs from the 2- to 12-second mark. Choose an option from the screen and you jump to a different point in the YouTube video. I also created a simple interactive question.

While I created these using the H5P plugin installed on another WordPress site, H5P also generates embed code that lets other people post the content I created on their own sites. So, here is that same interactive quiz question from my testing site, now embedded here using the H5P embed code.

With the interactive video example, I am actually embedding an embedded YouTube video with the overlays that I created using H5P. Meta-embed.
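For a sense of what that embed code looks like, here is the general shape the WordPress plugin produces. Note that the domain, content ID and dimensions below are placeholders, and the exact paths depend on your own install:

```html
<!-- Hypothetical example: replace example.com and id=6 with your own
     site URL and H5P content ID -->
<iframe src="https://example.com/wp-admin/admin-ajax.php?action=h5p_embed&id=6"
        width="958" height="720" frameborder="0"
        allowfullscreen="allowfullscreen"></iframe>
<!-- Optional resizer script so the iframe adjusts to the content's height -->
<script src="https://example.com/wp-content/plugins/h5p/h5p-php-library/js/h5p-resizer.js"
        charset="UTF-8"></script>
```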

There is also an option to assign an open license to the interactions when I create them, and to make the source file available for people to download.

One thing I can see right off the bat is that there are a lot of content type options with this tool. There are about 30 different content types, each with numerous options, so this 10-minute quick look hardly does justice to the possibilities. But I like where this is going, and it certainly merits a deeper dive into the tool.

H5P is an open source project and community led by the National Digital Learning Arena (NDLA) in Norway. NDLA is a publicly funded project which aims to offer a complete, free and open learning portal for all subjects at the Norwegian high-school level.

More to come as I dig deeper into this tool and plugin.


A BC HigherEd WordPress Community

South of the border, I am watching the WP in Higher Ed community grow, and it strikes me that there may be an appetite for something similar to happen in BC.

WordPress has deep roots in the BC post-sec system, and there is a lot of WordPress use currently happening. There are UBC blogs and UNBC blogs, WordPress course development happening at JIBC, and eportfolio work at Capilano (who invoked both The Bava and Novak Rogic in their site credits and at their recent presentation at the BCNET Conference). When I was at Camosun College, I set up a WordPress instance that is still being used by faculty. There is the fantastic PressBooks goodness Brad is whipping up here at BCcampus to support the open textbook project, and the work being done at TRU by Brian Lamb and Alan Levine.

wordpress

I suspect this is the tip of the WordPress iceberg, and that there are many more pockets of use in higher ed in BC.

I’m hoping to start finding those pockets of WordPress use in the system in the hope of bringing together those who are using (and want to use) WordPress into some kind of community/network of practice.

I’ve set up a form to gather information from folks in the BC post-sec system who are using WordPress, or are interested in it, and who want to connect with others across the province.

I have to stress that this is very preliminary groundwork on my part to gauge whether there is enough interest in the province to bring together some kind of more formalized community and/or network. What this community/network will look like, what we work on, how we connect, and where we find value is something that should be driven by the community, so if the shape, structure and feel of this community are a bit vague right now, that’s intentional. But from my view, I can see areas where it makes sense to come together, collaborate, find shared commonalities and identify opportunities that could benefit all.

If you know someone in the BC post-sec world who is using WordPress, please let them know about this opportunity. I hope that we can get a good mix of people from both the technology and the pedagogy sides of the house to come together and participate.

Image: edupunkin by Tom Woodward CC-BY-NC