Sharing some stuff from my open EdTech road show

I’ve been on the road doing some talks and workshops this spring, and this blog post is more of a way for me to aggregate the various bits of media that have resulted over the past few months. Dumping stuff to my outboard brain.

Piloting Open Learning – Sandbox Collaborative podcast

My colleague Amanda Coolidge and I were guests on the Sandbox ColLABorative podcast with Brian Fleming, Deputy Director of the ColLABorative at Southern New Hampshire University. I met Brian at EDUCAUSE last fall and he invited us to take part as guests on the podcast, where we talked about the BC Open Textbook Project and the BC Open EdTech Collaborative and the work of BCcampus more generally. Podcast and transcript (nice!)

Open Technology: The Third Pillar of Open Education – Kwantlen Polytechnic University

Rajiv invited me to speak at KPU on open technologies. This was a completely new talk for me, picking up on some of the streams of my work over the past year on open technologies, privacy, and student data. The talk is still rough and needs to be refined, and I am grateful that I had a patient audience. But this is a theme that I hope to be able to speak more about in the future. Here are the slides.

And the video (not often I have a talk captured, so grateful to Meg Goodine at KPU for putting their new Kaltura server to work).

BCNET Conference

BCNET is an annual higher ed IT conference here in BC. Think of it as a regional EDUCAUSE. I did three talks at BCNET. One was an operational talk with BCNET on the Kaltura Shared Service. The other two were in partnership with BC institutions.

NGDLE: From Monolithic to Disaggregation was a talk/facilitated discussion I did with Marianne Schroeder of UBC and Maureen Wideman of UFV. This is another theme of my work for the past year – exploring the changing role of the LMS and what kinds of potential opportunities and challenges institutions are facing as the LMS changes and evolves from being the single learning technology to being a central technology that others integrate with. I had some fun with the slides for this, as you’ll see, drawing comparisons of the LMS to a Swiss Army Knife.

The second presentation was with Scott Robarts and Auralea Mahood of Capilano University where they spent some time talking about their eportfolio project, built on WordPress. My piece was to come in at the end and talk about some of the other projects happening around BC built on WordPress at TRU and RRU, and again promote the work of Brian, Tannis and Grant and the BC EdTech Collaborative.

Creative Commons Global Summit 2017

So grateful to have been able to attend this event (thank you Creative Commons). I didn’t present, but was part of a Virtually Connecting session with Doug Belshaw, Laura Hilliger, Terry Greene, Alan Levine & Helen De Waard talking about co-ops. I’ll have some separate posts about the summit and co-ops in the near future. For now, here’s the Virtually Connecting session.

Digital Pedagogy Network Symposium

The talk I never gave at the SFU/UVic Digital Pedagogy Network Symposium on open tools, open pedagogy (I had to miss my time slot waiting for a plumber at home). I’ll share the slides here anyway.

Building an Open Textbook

I did make it for the second day of the symposium, where Amanda Coolidge and I facilitated a two-hour workshop on building an open textbook. I did a deep dive into some of the early research about open textbooks, drawing on two blog posts I wrote about pedagogical features of textbooks (here and here).

 

Adding Creative Commons licenses to Kaltura MediaSpace videos

I’ve been working on an internal BCcampus project to set up and configure Kaltura MediaSpace for our internal use. We have a number of use cases, not the least of which is providing a central hosting space for videos created as part of a grant associated with the BC Open Textbook Project. Since these videos will be openly licensed (as is everything we create at BCcampus), I want there to be a visible Creative Commons license with each video to let users know the terms of usage for each video.

Out of the box, MediaSpace has a lot of functionality, but the ability to apply a Creative Commons license to a video is not part of it. So, with a bit of consultation with my colleague (and knower of all Kaltura secrets) Jordi Hernandez at UBC, I was able to add a basic CC license field to the videos we host in MediaSpace.

It is actually a pretty straightforward two-step process. First, you need to create custom metadata fields in the Kaltura Management Console (KMC), then you have to enable those fields in the Kaltura MediaSpace administration console.

I am using an on-premises deployment of Kaltura. The MediaSpace instance I am working on is version 5.38.07.

Create Custom Fields in the KMC

After logging into the KMC, I went to Settings > Custom Data. This is where I will set up the custom data schema and define the CC licenses. Click Add New Schema to create a new Creative Commons metadata schema. Give your schema a name, a description and a system name. The system name should be one word and short. We want each video to be able to have its own CC license, so we want this metadata schema to apply to Entries and not Categories.

Once you have the schema set up, you will want to add the actual licenses as field values. Choose Add Field and enter the different CC licenses that you want to make available to your users. These are the options they will see when they upload a new video, and what people who view the video will see on the screen associated with the video. I chose to make my list a Text Select List so that it would appear as a drop-down menu for the person uploading the video.
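
If it helps to see it concretely, here is one reasonable set of values to enter for that field (the six standard Creative Commons licenses plus CC0). The exact wording of each option is your call; this is an illustration, not necessarily the labels I ended up using.

```python
# One possible set of values for the Text Select List field -- the choices an uploader
# would see in the drop-down. Wording and order are an assumption for illustration.
CC_LICENSE_OPTIONS = [
    "CC BY",
    "CC BY-SA",
    "CC BY-ND",
    "CC BY-NC",
    "CC BY-NC-SA",
    "CC BY-NC-ND",
    "CC0 (public domain dedication)",
]
```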

One nice feature of the custom metadata schemas in Kaltura is that you can enable these items to be searched in the built-in search engine. So, with CC licensed material, someone could come to our video portal site and search for nothing but the CC0 videos in our collection. I haven’t explored this fully yet, but it does seem to work at a granular level, which is both good and bad. Good if you want to search for a specific type of CC licensed content in our collection, like a CC0 or CC-BY video, but not so great if you want to search for all CC licensed videos regardless of flavour.

Once that is done, the schema is set up and we can now slip over to MediaSpace to apply it.

Add the custom fields to the upload form in MediaSpace

I logged into the MediaSpace admin console. The area we want to play in is called Customdata. It may appear with a line through it in your admin console. That just means that the module has not been activated.

Go into the Customdata module and make sure it is enabled. In the profileid field, you should be able to find the custom metadata schema that you just created in the KMC. Choose that. You can also make the field a required field and, if you wish, enable the showInSearchResults field so the values are included in the search index.

 

That’s it. Save the changes and you have now added a custom CC license field to your videos. When someone uploads a video to MediaSpace, they will see an additional drop-down field where they can choose a CC license to apply to the video.

And, when people come to view the video in the MediaSpace site, they will see that the video is licensed with a Creative Commons license.

Now when we upload a video to our MediaSpace site, we can assign it a Creative Commons license that people can see.

Good first step

For me, this is a good first step that gives us the option to apply a visual marker to the video in MediaSpace. However, what would be great (and I am not sure that this can be done) would be to have that CC license metadata embedded in the page in the correct machine-readable metadata format for CC licenses. This would help ensure the videos are found in search engines when people search for CC licensed content.
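
For reference, Creative Commons does publish a machine-readable format for exactly this (CC REL, built on rel="license" markup plus RDFa). Here is a minimal sketch of the kind of markup that would need to end up in the video page, generated in Python just for illustration; the work title is a placeholder, and whether MediaSpace can be made to emit this is an open question.

```python
# A minimal sketch of CC REL-style license markup (rel="license" plus RDFa terms) that
# a page could embed so search engines can detect the license. License names and URLs
# are the standard Creative Commons ones; the work title is a placeholder.
CC_LICENSE_URLS = {
    "CC BY 4.0": "https://creativecommons.org/licenses/by/4.0/",
    "CC BY-SA 4.0": "https://creativecommons.org/licenses/by-sa/4.0/",
    "CC0 1.0": "https://creativecommons.org/publicdomain/zero/1.0/",
}

def license_markup(license_name: str, work_title: str) -> str:
    """Return an HTML fragment that marks the work up with its CC license."""
    url = CC_LICENSE_URLS[license_name]
    return (
        '<span xmlns:dct="http://purl.org/dc/terms/">'
        f'<span property="dct:title">{work_title}</span> is licensed under '
        f'<a rel="license" href="{url}">{license_name}</a></span>'
    )

print(license_markup("CC BY 4.0", "Example open textbook video"))
```

If MediaSpace ever allows injecting custom markup into the entry page template, something along these lines would make the license visible to machines as well as humans.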

The second improvement would be to somehow embed that CC license metadata right in the video file so that if someone were to take a copy of this video, the original license information would go along with the actual video when they downloaded it. Doubt that is possible, but that would be a great feature for organizations like ours that produce a lot of openly licensed content.

Finally, I think that it might be a good idea to add a visual bumper as part of the video that would spell out the CC license. It is what we currently do with our videos, and is good practice to help make it clear that the content is openly licensed.

Photo: CC Stickers by Kristina Alexanderson CC-BY

 

Setting up Public Channels in Kaltura MediaSpace

Part of my work at BCcampus is to co-administer a provincial shared service of Kaltura with my colleagues at BCNET and UBC. Kaltura is a suite of tools for hosting and streaming media, and MediaSpace is its YouTube-like front end. Because of that work, BCcampus has access to some of the Kaltura tools for our own business uses and, for the past couple of weeks, I have been mucking around with our new Kaltura MediaSpace instance.

In general, I find Kaltura a beast. It is complex, and there are multiple layers of administration to go through depending on the tools you want to use. It is powerful, no doubt, and has some wonderful features. But it is likely not a system you can tackle off the side of your desk if you want to fully grok how it works and, just as important, fully maximize a fairly sizable institutional investment.

At any rate, I have been making some progress on setting up our MediaSpace site. You can take a look. Keep in mind it is being built in real time and is not fully configured and set up. But for now, I have some basic branding in place and a few channels with some content set up.

One task that I was failing at was creating public channels. I was able to make channels and add videos to them, but could not seem to make those channels publicly visible to anyone without an account. Every time someone would click on the Channels link, they would be taken to a log in screen. For an organization that does open work, having closed channels was a no-go.

So, after poking around, I went back and did what I should have done in the beginning, which is RTFM. Or, in Kaltura’s case, RTFMs. Started here, which led me here and here and here and here. Here and here. A bit over here. Some stuff from this 23-page PDF here.

Ok, well, you get the idea and why I say complicated. Oy! And nowhere could I find the damn setting to make a channel public.

Finally, in a brief 30-second conversation with my colleague Jordi at UBC, I found the setting. In the MediaSpace admin area, there is a setting called supportPublicChannel that needs to be enabled. That was it. One little setting. Click that and, boom, public channels.

 

Man, there it was. Right there in the Channels section of the administration console. Hours of poring through technical support documents & Google searches on how to enable public channels, and the problem was solved in 30 seconds by talking to Jordi.

There were a few steps I did before this that I’ll add in here just in case others are struggling. This info is specific to those in the BC Kaltura Shared Service, which is an on-prem instance of Kaltura. If you are not in the shared service, this may not work for you. The other caveat is that I have been working on this off the side of my desk for a couple of weeks now, along with other config issues with Kaltura and MediaSpace, and, because I don’t have the awesome discipline of CogDog to document and share on an incremental basis, this may be incomplete and missing stuff (and let’s just pause here and acknowledge just how fantastic CogDog is at documenting the technical work he is doing, knowing that there are very likely others out there struggling with the same thing).

But, to the best of my memory, here is what I did prior to making that final switch above.

When you log into your KMC and go to Categories, you should see a MediaSpace category already set up as part of the initial system-wide configuration. Under Channels, I have created two channels: EdTech Demos and SCOPE.

On the MediaSpace category, the entitlements (what Kaltura calls permissions) are open. These can be overridden at lower categories.

 

Drilling down to the next layer, Channels, I have set default channels to be Private.


When someone creates a new channel, they may want to work on it before making it open. When you create an actual channel, you will need to override this setting, as this screenshot of the actual SCOPE channel shows. In this case, I have overridden the channel defaults and set the content privacy to No Restriction, while adding a restriction on who can actually add content to the category (only the channel administrator).


In the MediaSpace admin area, when I create a new channel, I now have an option to make it a Public channel.

So, this was a spot where I was getting confused because, until I flicked the supportPublicChannel option on, I was not seeing the Public radio button. But I was seeing the Open radio button. So, when I clicked Open, I thought that would make the channel, uh, Open. But no. In Kaltura, Public and Open are different concepts, and it wasn’t until I enabled the supportPublicChannel switch that the Public radio button became available during my channel setup. Clicking that button made the channel publicly viewable without people having to log in.

Now, when I upload a video, I can publish it to multiple channels, including the open and public ones.

 

Like I said, this is likely missing a bunch of other steps I have done along the way to enable public channels on our MediaSpace instance. But for those of you in the KSS struggling with setting up public channels – supportPublicChannel was what finally did it for me. Thanks Jordi!

 

12 apps of Christmas

Yeah yeah, I know. It’s the middle of November, what the heck are you talking about Christmas for?

Well, a couple of my ETUG colleagues, Leva Lee and Sylvia Riessner, pitched an idea a few weeks back for a special Christmas-themed ETUG event called the 12 Apps of Christmas, which I have been working on.

Drawing inspiration from similar 12 Apps of Christmas events from across the pond (and how fantastic that Chris Rowell thought to CC license everything and create a build-your-own 12 Apps of Christmas tutorial website!), the basic idea is to put together some bite-sized microlearning activities that get our local edtech community suggesting, testing, collaborating and reflecting on the usefulness of different apps.

No surprise, but there are thousands of apps targeted at EdTech that are of varying utility and quality, and the edtech professional’s task of quickly separating the wheat from the chaff is becoming increasingly important. Institutions, like UC Irvine, have developed processes around testing and assessing the usefulness of cloud-based educational technologies, and rapid EdTech evaluation models are being considered and developed. We’re also seeing collaborative efforts to assess educational technologies, like the Common Sense Media educators portal, which collects & aggregates information from teachers about the usefulness and pedagogical value of different learning apps.

The idea of 12 Apps of Christmas is that each day starting December 1st, we’ll release a new app via the (currently under development) 12AppsofChristmas.ca website. The app will include a description, some possible ways it could be used in a teaching & learning context, and a very short (15 minute) activity that gets people trying out the app.

The apps are being picked by various members of the BC ETUG community. Criteria for what apps to include are pretty basic: free, available on multiple platforms, easy to use, and lightweight in the sense that it shouldn’t take people a lot of time to figure out how to use them.

Once the activity is completed, we hope that you’ll spend a bit of time evaluating the app & leaving some review comments on the app post (I’m building the site in WordPress & will use the commenting feature). We’ll include a few question prompts to help frame the evaluation, but the idea is that the whole process should not be too onerous and should be flexible enough to allow people to hop in and out and take part with whatever time they have.

While the 12 Apps of Christmas is by no means an extensive review process, it will hopefully be a fun activity with a minimal time commitment that will get those interested in educational technology collaboratively playing, testing and evaluating different apps and technologies.

Photo: Blue Christmas by Jamie McCaffrey  CC-BY-NC

 

Learning analytics & transparency

Just got back from EDUCAUSE. I’ll have more on the conference in future posts, but wanted to quickly post a couple of thoughts I have had around learning analytics and transparency based on what I learned at EDUCAUSE and as a result of an EdTech demo session I did this morning with an LMS vendor on learning analytics.

I went to EDUCAUSE with a few goals, one of which was to try to learn more about learning analytics. Specifically, what (if any) are the compelling use cases and examples of faculty and institutions effectively utilizing analytics to solve problems, what are the ethical issues around data collection, how are institutions informing their students & faculty of these concerns, and what technologies are being used to facilitate the collection and analysis of analytics data. And while I didn’t find complete answers to these questions, I did come away with a better 10,000 foot view of learning analytics.

The primary use cases still seem to be predictive analytics to identify academically at-risk students, and to help institutions improve student retention. I get the sense that, while student retention in Canada is important, it is not as critical for Canadian institutions as it appears to be for U.S. institutions. There are likely more use cases out there, but these two seem to be the big drivers of learning analytics at the moment.

Earlier today, I attended an LMS demo session on learning analytics where I had a chance to see some of the analytics engine built into the LMS. The demo included a predictive analytics engine that could be used to identify an at-risk student in a course. Data is collected, crunched by an algorithm, and out comes a ranking of whether that student is at risk of not completing the course, or of failing it. When I asked what was going on within the algorithm that was making the prediction about future student behavior, I got a bit of an answer on what data was being collected, but not much on how that data was being crunched by the system – that is, what was happening inside the algorithm that was making the call about the student’s future behavior.

This is not to single out a specific company, as this kind of algorithmic opacity is extremely common not only with learning technologies, but with almost all technologies we use today. Not only are we unaware of what data is being collected about us, but we don’t know how it is being used, what kind of black box it is being fed into, and how it is being mathemagically wrangled.

Now, it’s one thing to have something as fairly innocuous as Netflix recommend movies to you based on – well, we don’t really know what that recommendation is based on, do we? It is likely that what we have viewed before is factored in there, but it is also likely that the recommendations in Netflix are pulling data about us from services we have connected to Netflix. Mention on Facebook that you want to see the new Wes Anderson movie and suddenly that becomes a data point for Netflix to fine-tune your film recommendations, and the next time you log into Netflix you get a recommendation for The Royal Tenenbaums. I don’t know for sure that it works that way, but I am pretty certain that this information from around the web is being pulled into my recommendations. Search for a movie on IMDB. Does that information get shared back to Netflix the next time you log in? Probably.

As I said, the decisions coming out of that Netflix black box are fairly innocuous decisions for an algorithm to make – what movie to recommend to you. But when it comes to predicting something like your risk or success as a student, well, that is another scale entirely. The stakes are quite a bit higher (even higher still when the data and algorithms keep you from landing a job, or get you fired, like teachers in New York State). Which is why, as educators, we need to be asking the right questions about learning analytics and what is happening within that black box because, like most technologies, there are both positives and negatives, and we need to understand how to determine the difference if we want to take advantage of any positives and adequately address the negatives. We can’t leave how the black box works up to others.

We need transparency

Which brings me to the point that, in order for us to fully understand the benefits and the risks associated with learning analytics, we need to have some transparent measures in place.

First, when it comes to predictive analytics, we need to know what is happening inside the black box. Companies need to be very explicit about what information is being gathered, and how that data is being processed and interpreted by the algorithms to come up with scores that say a student is “at-risk”. What are the models being used? What is the logic of the algorithm? Why were those metrics and ratios within the algorithm decided upon? Are those metrics and ratios based in empirical research? What is the research? Or is it someone’s best guess? If you are an edtech company that is using algorithms and predictive analytics, these are the questions I would want you to have answers to. You need to let educators see and fully understand how the black box works, and why it was designed the way it was.
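
To make the contrast concrete, here is a purely illustrative sketch (my own, not any vendor’s actual model) of a simple, inspectable at-risk predictor. The features, data and labels are made up; the point is only that with a transparent model you can answer exactly the questions above, because the inputs and their weights can be examined.

```python
# Purely illustrative sketch (not any vendor's actual model): a simple, inspectable
# "at-risk" predictor. With a transparent model, the logic -- which inputs matter and
# by how much -- can be examined, unlike a black box.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical engagement features per student: [logins/week, avg quiz score, forum posts]
X = np.array([
    [5, 0.82, 4],
    [1, 0.40, 0],
    [3, 0.65, 2],
    [0, 0.30, 0],
    [6, 0.90, 5],
    [2, 0.55, 1],
])
# 1 = completed the course, 0 = did not (made-up labels for the sketch)
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# A transparent model lets you ask which metrics are in the algorithm and how heavily
# each one is weighted.
for name, weight in zip(["logins/week", "avg quiz score", "forum posts"], model.coef_[0]):
    print(f"{name}: {weight:+.2f}")

print("predicted probability of completion:", model.predict_proba([[2, 0.5, 1]])[0][1])
```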

Second, students should have exactly the same view of their data within our systems that their faculty and institution has. Students have the right to know what data is being collected about them, why it is being collected about them, how that data will be used, what decisions are being made using that data, and how that black box that is analyzing them works. The algorithms need to be transparent to them as well. In short, we need to be developing ways to empower and educate our students into taking control of their own data and understanding how their data is being used for (and against) them. And if you can’t articulate the “for” part, then perhaps you shouldn’t be collecting the data.

Finally, we need to ensure that we have real live human beings in the mix; that the data being analyzed is further inspected and interpreted by human beings who have the contextual knowledge to make sense of the information being presented on a data dashboard. Not only does that person need to know how that data ended up on that dashboard and why, but also how to use that data to make decisions. In short, faculty need to know how to make sense of the data that they are being given (and I’ll touch on this more in a future blog post when I write about Charles Darwin University Teaching & Learning Director Deborah West’s analytics presentation, which centered around the question “what do teachers want?”).

One approach from UC Berkeley

At EDUCAUSE, I saw a really good example of how one institution is making their data processes more transparent. In a presentation I saw from Jenn Stringer, Associate CIO of UC Berkeley, there was a slide that highlighted the data policies that they have put in place around the ethical collection and use of learning analytics data.


These principles are reminiscent of the 10 learning data principles set out by the Data Quality Campaign and the Consortium for School Networking.

Additionally, UC Berkeley also makes a student analytics dashboard available to the student so that they get the same view of the analytical data that their faculty get. I think both of these are excellent starts to working ethically and transparently with learning analytics data.

But for me the big question remains – what are the compelling use cases for learning analytics, and are those use cases leading to improvements in teaching & learning? So far, I am not sure I came away from EDUCAUSE with a better understanding of how analytics are being used effectively, especially by faculty in the classroom. If you have some interesting use cases about how analytics are being used, I’d love to hear them.

Photo: Learning Analytics #oucel15 keynote by Giulia Forsythe CC-BY-NC-SA

 

Fall projects

I’ve got a busy fall on the go with some new initiatives and projects keeping me busy.

EdTech Demos

This is a new educational technology initiative here at BCcampus, designed to help expose the system to some new ideas and educational technologies. These are free 30-60 minute virtual demonstrations done about once a month. So far I’ve done three of these demo sessions (Canvas, FieldPress, H5P) and I’ve been very happy with the response and attendance from the post-sec system.

One of the goals I have is to try to make some space for open source educational technologies, as these are often interesting projects that don’t have the marketing or promotional budgets of a commercial edtech company. But there will be a mix of commercial and open source, big and small, to try to get a nice flavour of what is happening in the edtech space. I have two more scheduled for this fall, one with D2L Brightspace on learning analytics at the end of October, and another with Hypothes.is in late November.

I’ve put together an email notification system that people can sign up for to get notified when these demos happen. I am shooting for about one per month. I’m also looking for suggestions of edtech that you would like to see a demo of.

Guide on the Side Sandbox

I’m also coordinating a sandbox project with a group of academic librarians from around the BC post-sec system for an open source application called Guide on the Side. Guide on the Side is an open source app developed by the University of Arizona to create guided tours of websites and web applications. We are just in the process of installing the software and forming our community. This sandbox project will run for the next six months as we test out the software. I am trying to put together some edtech evaluation frameworks (SAMR, RAIT, etc.) to use as a guide for evaluating the software. I imagine I’ll end up cobbling a few of these together to come up with a framework that works for what we want our sandbox projects to do. We’ll be releasing our findings in the spring.

EDUCAUSE

I’ll be heading to EDUCAUSE in Anaheim at the end of the month. The last time I was at EDUCAUSE was in 2007, when I first met Bryan Alexander and learned about this new thing called Twitter. I don’t know if this one will be as memorable (Twitter became kind of a big deal in my life), but I am looking forward to attending.

I am in a bit of session overload right now as I plan what to attend and put together my schedule. I forgot just how massive this thing is. Holy session overload! One time slot I am looking at has 53 concurrent sessions. Even when I filter from seven streams down to three, I still have 25 options. This one looks most relevant for how I feel at this moment.


As I have written about before, I am intrigued by a few new technologies and ways of thinking about edtech that have been coming out of EDUCAUSE, specifically the idea of the Next Generation Digital Learning Environment (NGDLE) and applications like CASA. These are the sessions I’ll be attending, along with some more on personalized and adaptive learning, which I feel I have a good conceptual understanding of but have yet to get a good grasp on some of the more practical applications of.

Privacy Impact Assessment & WordPress Projects

One of the other projects I have on my plate for this fall is some Privacy Impact Assessment work for the BC OpenEdTech Collaborative. We had a very productive meeting of our WordPress group where one of the barriers identified by the group was the lack of clarity about data sovereignty and privacy with the technical solutions we are looking at (EduCloud, Docker, and WordPress itself).

While we do have a FIPPA-compliant hosting service in EduCloud, that is just one (albeit significant) piece of the FIPPA puzzle. There may be other privacy considerations when it comes to using WordPress. For example, a plugin may potentially disclose personal information to a server outside of Canada.
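
As a rough illustration of the kind of check that could feed into a privacy review, here is a hedged sketch (my own, not part of any formal PIA process) that scans a plugin’s source for outbound HTTP calls and the hosts involved. It is only a heuristic; dynamically built URLs and indirect calls will slip past it, and the plugin path is a placeholder.

```python
# A rough heuristic scan of a WordPress plugin's PHP source for outbound HTTP calls,
# as a starting point for asking "where might this plugin send data?".
import re
from pathlib import Path

URL_RE = re.compile(r"https?://[\w.\-]+")
HTTP_FUNCS = ("wp_remote_get", "wp_remote_post", "wp_remote_request",
              "curl_exec", "file_get_contents")

def scan_plugin(plugin_dir: str) -> None:
    for php_file in Path(plugin_dir).rglob("*.php"):
        text = php_file.read_text(errors="ignore")
        funcs = [fn for fn in HTTP_FUNCS if fn in text]
        hosts = sorted(set(URL_RE.findall(text)))
        if funcs or hosts:
            print(php_file, funcs, hosts[:5])

scan_plugin("wp-content/plugins/example-plugin")  # hypothetical path
```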

Since privacy and FIPPA (within the context of educational technology) are part of my wheelhouse, I’ve taken on coordinating a Privacy Impact Assessment for an EduCloud-based WordPress project. Since a privacy impact assessment is something that is done on an initiative and not just the technology used as part of the initiative, I’ll be taking a fairly in-depth look at one of our applications of WordPress and using it to construct a Privacy Impact Assessment report that can then (hopefully) be used as a template for other initiatives using similar, but slightly different, technologies. I have an idea in my head of how to do this, but haven’t fully formed how to execute it yet.

Other stuff

There are a number of other projects I have on the go right now, including a big one with BCNET and UBC developing an onboarding process for institutions that wish to join the provincial Kaltura shared service, and my participation on the SCETUG steering committee. But these are likely the ones I’ll be blogging about over the coming months.

Oh, and something unrelated to my work with BCcampus – I’ll be spending some time prepping to teach in the new year at Royal Roads University in the Learning & Technology program. The course (normally taught by George Veletsianos) is LRNT505: Community Building Processes for Online Learning Environments, and I am thrilled to be able to get into a (virtual) classroom and work with students. Given that I have been out of an institution for the past four years, I am immensely grateful to have the opportunity to jump back in as a faculty member & work directly with students.

 

Sandstorm Apps and Grains

Understanding the difference between Apps and Grains is important to understanding how Sandstorm works.

Grains are discrete instances of apps. Each grain is a copy of an app, which allows each grain to run isolated from other copies of the app in Sandstorm. Here’s a little walkthrough to help you understand how apps and grains relate to each other.

Adding the app to Sandstorm

Before you can create grains, you have to add the app from the Sandstorm app market to your local instance of Sandstorm. When you log into Sandstorm for the first time, you’ll see a blank slate that looks like this:


Not much there.

In the left-hand navigation, there are two sections: Apps and Grains. The screenshot above is the default Apps section. If I switch to the Grains view, I get a message that I don’t have any grains yet.

Grains

Understanding the difference between Apps and Grains is key to understanding how Sandstorm works.

In a nutshell, you install the application once on your local instance of Sandstorm. Once it is installed, you can create multiple copies of that app to use. Each of these copies is called a Grain in the Sandstorm world.

So, let’s use Etherpad as an example. I install the Etherpad app from the Sandstorm App Market onto my Sandstorm server. Once that is done, every time I want to create a new, unique Etherpad document, I create a new grain (copy) of Etherpad, with each grain operating independently of the others.
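
If you think in programming terms, a loose analogy (mine, not Sandstorm’s documentation) is a class and its instances: the installed app is the class, and each grain is an instance created from it with its own isolated data.

```python
# A loose class/instance analogy for apps vs. grains. EtherpadApp stands in for the
# Etherpad package installed once on the Sandstorm server; each object created from it
# stands in for a grain with its own isolated document data.
class EtherpadApp:
    def __init__(self, title: str):
        self.title = title        # each grain has its own title...
        self.document_text = ""   # ...and its own isolated content

meeting_notes = EtherpadApp("Meeting notes")   # one grain
workshop_plan = EtherpadApp("Workshop plan")   # another, fully independent grain

meeting_notes.document_text = "Agenda item 1..."
print(workshop_plan.document_text)  # still empty -- grains don't share state
```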

Installing an App

The first step in making Etherpad available on Sandstorm is to install the app from the Sandstorm App Market. Think of the Sandstorm App Market like Google Play or the Apple App Store. I only have to do this step once. Once Etherpad is installed on my Sandstorm server, I can then create multiple grains of Etherpad with the click of a button.

In the Apps section, click Install from App Market. This will open a new tab/browser window in the Sandstorm App Market. Find Etherpad and click Install.


You’ll be taken back to Sandstorm and get a message asking you to verify that you want to install the app.


This is one of the most visible places where you will see Sandstorm’s commitment to security, as each app includes some additional information to help you verify that it is legitimate. You will see a PGP key signed by the application publisher along with their verified contact information. Sandstorm provides a cryptographic chain of trust that connects the app package you’re installing to the app publisher’s online accounts. This is an assurance that you are installing a legitimate Sandstorm application and provides a verified trail back to the person who published the app.

Click Install Etherpad and the application is installed and ready to use.

Create a Grain

Once the app is installed, you can now create your first Etherpad Grain by clicking on Create new pad.


You’ll see the Etherpad grain now appear in your left-hand navigation under Grains as Untitled Etherpad. To change the title to something more meaningful, click on the title Untitled Etherpad pad at the top of the screen.


A popup will appear where you can change the name.


Click Ok and your Etherpad name is changed at the top of the screen and in the Grains navigation on the left.


And I am ready to start working on this Etherpad. Clicking the Share access link at the top of the page, I can generate a link that I can send to collaborators to give them anonymous access to collaborate on this document, just like you can with Google Docs.


Go back to the apps page and you’ll see that Etherpad has now been installed on our local instance of Sandstorm.


If I want to create another Etherpad grain, I don’t have to go back to the app market and reinstall the application from the start. I simply click on the Etherpad app icon and create a new grain. Clicking on the Etherpad icon also shows me all the grains of Etherpad I currently have.


With the app installed, I can now create dozens of discrete Etherpad apps and share them with different groups of people, each running as their own application within Sandstorm.


Header image: Grains of Sand by Fran Tapia CC-BY-ND

 

Working with Sandstorm

I’ve been making an attempt to kick the tires more with Sandstorm in preparation for our upcoming workshop at the Festival of Learning.


Snapshot of my Sandstorm grain dashboard

Small pieces, loosely joined is what Sandstorm is all about. Sandstorm is the stitching that joins the small pieces, providing a common authentication and security framework to a patchwork quilt of open source applications.

So far I’ve tested out about half a dozen of the 50+ applications within the Sandstorm ecosystem, trying to use them in my day-to-day work. Etherpad (the collaborative document editor that is a scaled-down version of Google Docs) and Framadate (a handy meeting scheduler alternative to Doodle) have been the most useful. I’ve also played around with EtherCalc (spreadsheet), Quick Survey (survey tool), Hacker Slides (presentation tool that uses Markdown), NodeBB (forums), GitLab (Git repo), Rocket.Chat (Slack alternative), and mucked around a bit with the WordPress port in Sandstorm.

My general observation is that the applications that work well within the Sandstorm environment are small, discrete and focused, where you create a single instance of the application (called a grain in the Sandstorm world) for things like a single document or meeting invitation. Tools like Etherpad, EtherCalc, Quick Polls, Hacker Slides and Framadate are the types of applications Sandstorm does well, in that you create a document, share it with others to collaborate and contribute to, and then move on.

I tend to think of these tools as being somewhat disposable. Once a discrete task is done, it’s done. The survey is finished, the meeting dates are picked, the document has been edited and completed. Get in, do your work, get out.

As you can see from my screenshot, I’ve got a lot of Etherpad instances on the go, working on collaborative documents with different users. There is no folder scheme in Sandstorm, or any way to organize these multiple instances, so I can imagine that over time, as you create more and more documents, the user interface could become quite cluttered. I’m just starting to get to the tipping point where I’d like to be able to put some structure around the different applications I have going – maybe organizing by the project I am working on and grouping all the related apps for a single project in a single folder or some other visual organizational metaphor. But I haven’t seen a way to do that yet.

More complicated applications seem to have more limitations. WordPress, for example, is not the full-featured version of WordPress that you would get at WordPress.com or if you installed it yourself. Installing plugins and themes means uploading a zip file instead of connecting to the remote WordPress plugin repo. Publishing is static, meaning whenever you add new content you have to rebuild the site.

Rocket.Chat (a nice open source Slack-like application) also has a limitation with the mobile app. Rocket.Chat works quite well if you are logged into Sandstorm, but the mobile application cannot connect through Sandstorm, which limits its usefulness.

These are not dealbreakers, but really just the things you learn while sandboxing and experimenting with new technology – seeing what the tool does well and where the limitations are.

Image: Blue Sky by leg0fenris CC-BY-NC-ND

 

NGDLE and Open EdTech

I’ve been doing some research on Next Generation Digital Learning Environments (NGDLE) and think it might be another useful way to frame some of the work we are doing with open edtech. EDUCAUSE has a 7 Things paper and a deeper white paper on NGDLE, and Phil Hill has written about NGDLE as well if you want to dig in further.

In a nutshell, NGDLE is the idea that the next generation of learning tools isn’t the single monolithic LMS, but rather a series of applications connected together using different sets of emerging and established learning tool standards.

The LMS may be part of an NGDLE environment, but it is more likely that the LMS would take on a connective and administrative function in an NGDLE environment. The idea is to separate the course administrative tools & functions (like class lists and gradebooks) from the teaching and learning tools, and allow faculty to mix and match tools to fit their pedagogical needs. This gives faculty greater autonomy over what tools they use, while still being connected (with technologies like LTI & Caliper) to centralized institutional systems.
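
For a sense of what that connective tissue looks like in practice, here is a rough sketch of an LTI 1.1 style launch, the mechanism most of these mix-and-match tools use to plug into the LMS. All the values are placeholders; the real launch is an OAuth 1.0a-signed form POST from the LMS to the tool.

```python
# Rough sketch of the standard parameters an LMS sends in an LTI 1.1 basic launch.
# The LMS signs these with OAuth 1.0a (using the key/secret exchanged when the tool
# was registered) and form-POSTs them to the tool's launch URL. Values are placeholders.
launch_params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course-123-activity-7",   # placeholder: the activity link
    "context_id": "course-123",                    # placeholder: the course
    "user_id": "opaque-user-id",                   # placeholder: the learner
    "roles": "Learner",
    "launch_presentation_return_url": "https://lms.example.ca/return",  # placeholder
}

for key, value in launch_params.items():
    print(f"{key}={value}")
```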

While it is being tagged with “Next Generation”, it is an idea that has been around for a while now (see D’Arcy’s eduglu post from a decade ago). It also strikes me that there is more than a nod to the concept of the PLE in this approach as well, although the PLE construct is about more than just technology and tools and is focused on learner autonomy, while NGDLE is more institutional and faculty focused.

We’re beginning to see institutions move towards this approach, where the LMS is more the middleware that handles the administrative functions of course management, and faculty mix and match the learning tools to meet their goals. Phil Hill wrote a post about the University of North Carolina Learning Technology Commons, where faculty can log in to choose learning tools from an approved list of tools that will integrate with the existing LMS – the idea of a learning tools app store.

These tools are approved in two senses. First, there is a peer review process where faculty can review the tool and leave feedback for their peers, similar to the CASA model that I wrote about a few weeks ago, and which I love.

The second part of becoming an approved app involves vendors who submit their app to be reviewed and listed in the app store. In fact, a big part of the UNC app store approach is to, “iron out inefficiencies in edtech procurement.”

Smoothing procurement.

Now, I don’t necessarily have a problem with putting systems in place to smooth procurement, especially when part of the purpose is to make room for smaller players and not default to the 800-pound gorillas. But it does make me wonder: how do faculty find tools that do not have a vendor pushing and backing them? The process (as it appears to me from the outside) seems to heavily favor commercialized, vendor-backed learning tools as opposed to open source, community-developed applications.

Certainly, there is a lot to like about the NGDLE approach. It acknowledges that there is seldom one tool that fits all pedagogical needs, and gives faculty the freedom and flexibility to try out different tools to fit their pedagogical goals. Indeed, I can see the NGDLE concept as one way to frame the open edtech experimentation we are doing with Sandstorm.  And UNC may have mechanisms to get tools in the app store that are not vendor driven, so I have to applaud the fact that they are doing this and making more teaching and learning tools available to faculty.

My caution is that if the only options we put in front of faculty to carry out one of the core functions of our institutions are commercially driven options, then we’re not only missing out, but are locking ourselves into a vision of edtech that is completely vendor driven. We are not putting all the edtech options on the table; options that often have much more involvement and development input from actual educators than many vendor solutions.

As Candace Thille noted in her recent Chronicle interview on learning analytics, As Big-Data Companies Come to Teaching, a Pioneer Issues a Warning (may be paywalled):

…a core tenet of any business is that you don’t outsource your core business process.

Teaching and learning are the core business of most higher education institutions. How much of that core business are we willing to outsource?

Also, see Jim Groom.

Photo: Open source free culture creative commons culture pioneers by Sweet Chilli Arts CC-BY-SA

 

Framing our Open EdTech project

There was a great series of blog posts between Dave Winer and Joi Ito this past weekend about the Open Web that touched on the role of universities in the Open Web. You can read Ito’s first post, Winer’s response, and Ito’s followup.

A few things struck me reading this exchange between two web luminaries. First, both are having a good old fashioned blog dialogue on the open web in spaces they each own and control, and because of that I get to reap the benefit of overhearing their conversation. This gives me a better understanding of how two people who are deeply connected to the web are feeling about the web today. Their transparency working in the open brings to me a bit of their knowledge about the state of the web. And clearly, they are both feeling the web that they know – the web that allows exactly this sort of free flow of dialogue – is being threatened by more and more closed spaces.

The second thing that struck me was the list of call to action points that Dave Winer posted as one way to combat the closing of the Open Web.

  1. Every university should host at least one open source project.
  2. Every news org should build a community of bloggers, starting with a river of sources.
  3. Every student journalist should learn how to set up and run a server.

I’d actually expand the third point to include many more people, including first and foremost, any academic or researcher.

But it’s the first point that caught my eye and made me wonder how many open source projects are being hosted by BC higher ed institutions? And how many of those are specific to teaching and learning?

I know that UBC and RRU have public open portals that showcase some of the open work being done at those institutions. I imagine there are many, many more, not only at those institutions but at others around the province, being spearheaded and/or contributed to by staff, students and faculty. I’d be interested to hear about them, and if you know of any, please leave a comment below.

It was a timely series of posts to read for me as I have been working with Grant, Brian, Tannis and Val on crafting a vision & plan for our open collaborative educational technologies in BC higher ed group (OCETBCHE? We really need to come up with a name), and one of the goals I have of our work is to see an increased level of interest across our system in the use of OSS for teaching and learning.

To frame our work, I’ve been looking for some high-level documents that articulate the importance of OSS in an educational context. One of the strongest statements I have found about the importance of OSS in education (that also connects quite nicely with the BCcampus mandate of open education in general) is a paragraph in the Cape Town Declaration.

For many working in open education, the Cape Town Open Education Declaration is a defining document in the field. While it is often connected most explicitly to open educational resources, there is also a section in the document that speaks directly to software and technology that doesn’t seem to get the same level of attention as the sections about OERs.

However, open education is not limited to just open educational resources. It also draws upon open technologies that facilitate collaborative, flexible learning and the open sharing of teaching practices that empower educators to benefit from the best ideas of their colleagues.

Building on this principle, I’ve been thinking about the purposes of our working group, and I’ve come up with a few ideas of why we want to do this.

  1. To promote the use of open source applications focused on teaching & learning. While there are numerous commercial vendors promoting the use of commercial software, many open source applications get overlooked because there are no vendors selling & marketing OSS.
  2. To provide practical solutions to educators wishing to employ open education pedagogies that build on network learning principles.
  3. To promote inter-institutional collaboration. OSS relies on the development of communities of developers and users in order to be successful. The success comes from sharing knowledge about how the software is constructed and can be pedagogically utilized. The software becomes the focal point around which a community can develop.
  4. To provide a pathway for institutions and educators to actively participate in OSS projects that are focused on EDU OSS. Pathways to participate in OSS projects can sometimes be obscure and difficult to navigate, meaning educators may not want to, or feel welcome to, participate in EDU OSS projects. This group can provide support for those who wish to dive deeper and participate in specific community projects, and in ways that are not just software development. This provides benefit to the OSS project as it can bring new members into the community, and active involvement in OSS communities strengthens the software, the community developing & maintaining the software, and the long-term sustainability of the software.
  5. To encourage technological autonomy and provide ways for students, faculty and institutions to own and control their own data.
  6. To lower the barrier to participation on the open web for faculty and students.
  7. To provide value to other higher ed support systems within BC (think specifically of utilizing services like BCNet’s EduCloud).

It’s my start at trying to define some of what I am hoping we can do here in BC over the next little while.

Photo: I support the Open Web by Bob Chao CC-BY-NC-SA

 

Bring on the festival

This year the BC post-secondary system is trying something new with conferences. Instead of multiple small conferences, there is going to be an uber-conference called the Festival of Learning, June 6-9 in Burnaby.

The Festival brings together a number of smaller events that BCcampus has supported over the years, including the Open Textbook Summit, the ETUG Symposium on Scholarly Teaching & Learning, and the BC-TLN Spring Gathering. The Festival is being organized by the BC Teaching & Learning Council.

The idea behind the Festival was to bring all these different groups together in one place at the same time to provide some space for collaboration and co-mingling.

The challenge in doing this is to do it in a way that the uniqueness of each event – what made it important and special to its particular community – isn’t lost in a larger event. So far, from the draft program schedule I have seen (being part of SCETUG this year and helping to coordinate some of the ETUG part of the conference), the Festival organizers have done a good job of pulling it together & maintaining space in the Festival for each of the different groups to flourish. You can see this reflected in both small ways (the way all groups are represented on the general call for proposals page, for example) and larger ones, with each group having its own program committee.

I’m quite looking forward to the week in Burnaby, and think this is going to be a massive teaching and learning event for our system.

If you have attended any of these events in the past, then you’ll want to mark June 6-9 on the calendar. If you haven’t, then this year will be a great time to join BC post-secondary faculty, educational technologists, instructional designers, and others involved in EdTech & SOTL in BC at the Festival. Calls for proposals are open now until March 16th. Keep an eye on the website for more information.

The Festival runs June 6-9, 2016 at both the Delta Villa Hotel and BCIT in Burnaby, BC.

 

 

Putting tools into the hands of faculty with CASA

I’ve been feeling really good about the direction my new role at BCcampus is going. I am in a stage of work where I am feeling creative and energized, scanning the horizon and researching new stuff.

One of the projects I’ve been thinking about (and writing about) is the work with Sandstorm and the BC OpenEd Tech group, and trying to align the work of that group (and specifically the Sandstorm work) with a broader vision for my role at both BCcampus and within the system.

What is emerging is a vision that sees me facilitating getting new educational technology into the hands of many people to try, and helping with the evaluation of that technology to see where/if it aligns with teaching and learning. Which is why I am liking Sandstorm, because it looks like one way to get new tools into the hands of educators to try.

Another tool that I’ve been looking into is an IMS Global tool called the Community App Sharing Architecture (CASA). CASA is conceptually similar to Sandstorm in that they both share the same end goal of making it easy to deploy applications. But it does differ from Sandstorm in a few ways.

First, it is designed to work primarily with an LMS and is focused on deploying LTI-enabled apps within an LMS, as opposed to Sandstorm, which focuses on stand-alone applications outside of the LMS. The idea is that you can have an app-like “store” within the LMS, with apps that can be deployed by users and that integrate with the LMS.

But it isn’t limited to the LMS. A CASA app store can be mobile focused as well, as this UCLA example is with a mix of apps and dashboards optimized for mobile devices. And there was also talk in a webinar I watched about sharing analytics (perhaps connected using Caliper), but that seems to be at a pretty conceptual level right now.

The CASA architecture is also interesting in that it enables the connecting of different institutional app stores to each other in a network of trust. Metadata about the apps can be shared between institutions. And this is interesting because what CASA can do is enable the sharing of reviews about the apps between trusted nodes of the network.


Screenshot from CASA webinar (link to archive of webinar is below)

This is an example of what a future CASA app review might look like. Faculty reviews of an app from one CASA-enabled institution can flow through the network and be available to other members of the trusted network. This helps aid in the discoverability of new applications and can help instructors separate the wheat from the chaff. As the number of edu applications continues to explode (the EduAppCenter currently has over 220 LTI-enabled apps in its store), both discoverability and peer reviews from trusted networks are important filters, as anyone who has developed a PLN can attest. CASA has the potential to enable another technology filter by leveraging the reputations in a network of trust.
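
To make the idea of shared app metadata a little more concrete, here is a purely illustrative sketch of the kind of record that could flow between trusted institutional app stores. This is not the actual CASA payload format, and every value below is a placeholder.

```python
# Purely illustrative (not the real CASA payload format): the kind of app metadata and
# peer-review information that trusted institutional app stores could share with each
# other. All values are placeholders.
shared_app_record = {
    "title": "Example annotation tool",            # placeholder app name
    "launch_url": "https://tool.example.ca/lti",   # placeholder LTI launch URL
    "lti_version": "LTI-1p0",
    "categories": ["annotation", "collaboration"],
    "reviews": [
        {
            "institution": "Example University",   # placeholder reviewer
            "rating": 4,
            "comment": "Worked well for reading-annotation assignments.",
        },
    ],
}

print(shared_app_record["title"], "-", len(shared_app_record["reviews"]), "peer review(s)")
```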

Right now, CASA is still a beta tool. But it does look like an interesting technology that could make the deployment of edu focused applications easier for end users, while giving them some guideposts as to how useful these tools might be.

 

Amazon Web Services coming to Canada

In a blog post on the AWS site, Amazon Web Services Chief Evangelist Jeff Barr announced that Amazon Web Services will be bringing their cloud computing service to Canada sometime this year.

This is potentially big news for edtech in Canada where our privacy laws have hindered the use of cloud based services where personal data may be stored outside of the country.

These days, it’s hard to find scalable edtech infrastructure and services that are not built on AWS (or other) cloud services, and having data stored outside of Canada has traditionally been a barrier to adoption for Canadian institutions. It is not a deal breaker, as there are ways to mitigate the risk and still be compliant with privacy laws through informed consent, etc. But for many, the P.I.A. (Privacy Impact Assessment) is a P.I.A. and enough of a barrier that it has hindered the use of cloud-based services.

For an edtech example, Canvas has had very little uptake in Canada because it is built on AWS.

Of the 25 public post-secondary institutions in BC, there is only a single institution using Canvas, and they are self-hosting to work around the data storage issue. With a regional offering of AWS in Canada, I would expect to see a company like Instructure bring Canvas north of the border soon, and it becoming a serious contender for institutions undertaking LMS reviews.
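
In practical terms, keeping data in Canada mostly comes down to pinning the new region when resources are created. Here is a minimal sketch with the AWS Python SDK, with the caveat that the region code shown ("ca-central-1") is my assumption since AWS has not announced the actual name, and the bucket name is a placeholder.

```python
# Minimal sketch of pinning a Canadian AWS region so data is homed in Canada.
# "ca-central-1" is an assumed region code (not announced at the time of writing);
# the bucket name is a placeholder.
import boto3

s3 = boto3.client("s3", region_name="ca-central-1")

# Buckets created this way would live in the Canadian region, which is the piece that
# matters for data-residency requirements.
s3.create_bucket(
    Bucket="example-institution-course-media",
    CreateBucketConfiguration={"LocationConstraint": "ca-central-1"},
)
```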

While the announcement does not explicitly state that the service will be compatible with all the different provincial and federal privacy laws, it’s hard to imagine Amazon rolling out services in Canada that are not as compliant as possible. Indeed, privacy compliance with federal and provincial laws would be one of the biggest selling points for the service in Canada, as PCWorld notes:

Having a dedicated Canadian region will be important for organizations that need to comply with the patchwork of regional data protection laws Canada has, which requires the storage of some types of data inside Canada, depending on where the storer is located.

Although the question of “does legislation actually make a difference to where data is stored in an interconnected world?” hangs in the air, with many seeing these regulations as doing nothing but providing the illusion of data protection for citizens.

And who knows, the TPP may get ratified in Canada and then it is a different data protection game altogether, as the TPP clause on free-flowing data between member countries would put it at direct odds with provincial & federal privacy laws. And while edtech might win with the TPP in that we would get better access to more cloud services, I have real concerns about what the cost to the rest of our society might be.

Addendum

Shortly after I posted this, Scott Leslie tweeted in response that, even if the servers are located in Canada, there is still a question of where the parent company is located.

Photo:Sensitive Data sign, Freegeek, Portland, Oregon, USA by Cory Doctorow CC-BY-SA

 

BC Open Education Infrastructure

As I wrote about a few weeks ago, my role at BCcampus has undergone a bit of a focus shift back to supporting & researching educational technologies in BC with an emphasis on open source technologies. And there are some exciting things happening in BC that I am going to be a part of.

One of the projects that I have begun sinking my teeth into post-OpenEd conference has been the work done by Grant Potter, Brian Lamb, Tannis Morgan and Valerie Irvine, the former BCNET open education working group. Once BCNET announced the end of the group, Mary Burgess and I talked about how BCcampus could provide support for the open education work this group is doing, and I’m very happy that I’ve been given some time & resources to support this group.

The main project on the go right now is (what I’ve called) the BC open education infrastructure project. This is basically the FIPPA-compliant Sandstorm instance (hosted on EduCloud at UBC) that I wrote about a few weeks back. I’ve been able to get in and kick the tires a bit more, and can see a few clear potential use cases for the technology.

In a nutshell, Sandstorm aims to make the deployment of web applications as easy as installing an app on your smartphone. One-click installs of popular open source packages like Etherpad and WordPress, direct from an app repository/store.

Screenshot of the Sandstorm App Store

At a high level, here are some of the ways I think this could be useful to my work, and to the system as a whole. These are things that are driving me to work on this project.

  1. A simple way for instructors to deploy open source applications. Instead of having to use the LMS, which may not have the tools you need (or like working with), or which may impose a pedagogical way of working that you don’t want, Sandstorm provides an app marketplace where instructors can pick and choose the tools they want to use with their students. Need a collaborative document editor? Hit a button and you’ve got an Etherpad instance set up. Need an instance of Git? Discussion forums? Pick from a few different alternatives, install and share with students (see the sketch after this list). And all the data stays on a locally hosted server under local control. No corporate data mining of students’ information. Unbundling the LMS.
  2. A system-wide sandbox platform. This is my own use case, as one of the projects in my portfolio will be to revive a system-wide sandbox process to allow people to experiment with open source, edu-focused applications. A BCcampus instance of Sandstorm might make it easier to manage that process.
  3. A way to distribute education-related open source applications. I’ve been thinking of ways to get Pressbooks Textbooks into the hands of more people, and making a one-button install of Pressbooks in something like Sandstorm seems like a doable project. Getting an instance of Pressbooks into the Sandstorm app store has the potential to put it in front of more eyes and get it deployed. There are other edu-focused open source tools that I think could be included, like Candela, TAO, Open Embeddable Assessments, Omeka, and Scalar (to name just a few). I envision an edu section of the Sandstorm app store. It’s premature to be thinking this way, considering the relative newness of Sandstorm, but this is why we experiment and play.
  4. A powerful way for students to work with the tools they want to work with. Give a class a Sandstorm instance and let them decide how they want to collaborate, communicate and work together using the apps in the toolbox.
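To make the first use case above a little more concrete, here is a minimal sketch of what a scripted version of “pick a tool, click, share with students” could look like. The base URL, endpoints, app IDs and response fields below are hypothetical placeholders, not Sandstorm’s actual API; the point is only to show how thin the layer between “I need an Etherpad” and “my students have an Etherpad” could be on a platform like this.

```python
# Hypothetical sketch only: the base URL, endpoints, app IDs and response
# fields below are placeholders, not Sandstorm's real API. The point is the
# shape of the workflow: pick an app, create an instance, share a link.
import requests

SANDSTORM_URL = "https://sandstorm.example.ca"  # assumed locally hosted, FIPPA-compliant server
API_TOKEN = "replace-with-a-real-token"


def install_app_for_class(app_id: str, title: str) -> str:
    """Create a new instance of an app and return a link to share with students."""
    headers = {"Authorization": f"Bearer {API_TOKEN}"}

    # 1. Create a new instance (a "grain", in Sandstorm's terminology) of the chosen app.
    grain = requests.post(
        f"{SANDSTORM_URL}/api/grains",  # hypothetical endpoint
        json={"appId": app_id, "title": title},
        headers=headers,
        timeout=30,
    ).json()

    # 2. Ask for a sharing link that students can open in their browsers.
    share = requests.post(
        f"{SANDSTORM_URL}/api/grains/{grain['id']}/share",  # hypothetical endpoint
        json={"permissions": ["read", "write"]},
        headers=headers,
        timeout=30,
    ).json()

    return share["url"]


if __name__ == "__main__":
    # "Need a collaborative document editor? Hit a button..."
    link = install_app_for_class(app_id="etherpad", title="ENGL 101 brainstorm")
    print("Share this with your students:", link)
```

In practice an instructor would do all of this through Sandstorm’s web interface rather than a script, but the shape of the workflow is the same: choose an app, create an instance, share a link.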

This work is obviously heavily influenced by Jim Groom & Tim Owens’ Domain of One’s Own which is, at its heart, about autonomy and control; about giving people the ability to control their own data and their own digital identity. It is also about recognizing that technology is not neutral, and that the systems we set up within our institutions (looking at you, LMS) impose a way of doing things that may not be the way that our faculty want to teach. We should, at the very least, try to provide systems that support technology-enhanced pedagogical models outside the narrow confines of the LMS.

But what really excites me about this project is the chance to work with some of the most forward-thinking edtech people in the province. And that is putting a big spring in my step.

 

Killing technological generativity

If there is one way to kill technological generativity, it is to lock the technology up to such an extent that you can’t even repair it, let alone hack at it to do something new and innovative.

I’ve written about generativity before (in the context of open textbooks). Briefly, generativity is the capacity a system has to be changed and modified by someone other than the original developer to do something new and interesting that the original developer may never have imagined.

As I read this Motherboard article How to Fix Everything, it hit me again just how difficult technology companies make it for their systems to be repairable, much less generative.

“Normally if I purchase a hammer, if the head of the hammer falls off, I’m allowed to repair it and fix it. I can use the hammer again,” Charles Duan, director of Public Knowledge’s Patent Reform Project, told me. “For a lot of these newer devices, manufacturers want to say ‘We want to be the only ones to repair it’ because they make more profits off the repairs. They’ve found lots and lots of way to do this. Intellectual property law, contracts, end user license agreements, lots and lots of ways to try to make sure you can’t do what you want with your stuff.”

A few weeks ago I came across the story of farmer Matt Reimer and his brilliant robotic hack that turned an old tractor into a remote controlled tractor, saving him time and money, and saving his old tractor from the landfill. If his tractor had been a John Deere, he would not have been able to make these modifications, as John Deere makes it impossible to tinker with their tractors.

John Deere told the copyright office that allowing farmers and mechanics to repair their own tractors would “make it possible for pirates, third-party developers, and less innovative competitors to free-ride off the creativity, unique expression and ingenuity of vehicle software.”

Think about that for a moment – a farmer not allowed to fix his own equipment. If you are from a farming community, you know how ludicrous that sounds.

But beyond the silliness of not being able to repair your own stuff (let alone the terrible environmental consequences of forcing people who use their products to live in an even more disposable society), corporations that lock up their technology send a clear message that the only way innovation can happen is within their narrow confines and vision. It limits the scope of innovation to only what a corporation wants, and only in the ways that serve the corporation.

Because we should all have the ability to turn pop bottles into lights.

 

An open edtech playground infrastructure (or the magic of Grant Potter)


Grant Potter and Brian Lamb have been cooking up some open edtech goodness.

Earlier this week, Grant sent me a tweet with a link to a project that he and Brian have been working on, and it is exactly in line with my musings lately around an open web edtech infrastructure.

What Grant and Brian have done is take a whack of current open web infrastructure platforms and launch an open edtech web playground for BC edtechies to try out.

On the backend is EduCloud, the UBC-hosted, fully FIPPA-compliant virtualized cloud hosting service for higher ed. On top of that sit Docker containers running Sandstorm, a web application platform whose primary goal is to make deploying web applications as easy as installing an app on a smartphone. One click and you have a fully functioning web application, like Etherpad or WordPress.

While this development stack is mighty impressive in that it represents a very modern web workflow, it is Sandstorm that holds real interest for me because it allows you to build customized web apps that can be deployed with the click of a button. That is incredibly powerful: it lets you define the defaults of the programs you deploy, and controlling the defaults often means controlling how a user interacts with an application.

Say, for example, that you wanted to make a number of different WordPress installations available to your faculty, each with a different set of plugins or themes enabled by default. Theoretically, you could create a Sandstorm SPK file (via Vagrant) for each of the versions of WordPress you wanted to make available to your users and let them decide which version they wanted to install. Want the standard blog platform? Here is the WordPress button. Want Pressbooks? Here is the Pressbooks button. All deployable with the click of a button.
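To make that packaging idea a little more tangible, here is a minimal sketch of how the build step might be automated. It assumes the vagrant-spk workflow described above, with one working directory per WordPress flavour that already contains the app code and the vagrant-spk scaffolding; the exact subcommands and directory layout are my assumptions, not a verified recipe.

```python
# A rough sketch of the packaging idea above: build one Sandstorm package
# (.spk) per WordPress "flavour" so each shows up as its own one-click button.
# The vagrant-spk subcommands ("vm up", "pack", "vm halt") and the directory
# layout are my assumptions about the workflow, not a verified recipe.
import subprocess
from pathlib import Path

# Each directory is assumed to already contain the app code plus the
# vagrant-spk scaffolding (Vagrantfile, sandstorm-pkgdef.capnp, etc.).
FLAVOURS = {
    "wordpress-blog": Path("packages/wordpress-blog"),  # stock blogging defaults
    "pressbooks": Path("packages/pressbooks"),          # WordPress with Pressbooks preconfigured
}


def build_spk(name: str, workdir: Path) -> Path:
    """Package one preconfigured WordPress variant into an .spk file."""
    spk_file = workdir / f"{name}.spk"
    subprocess.run(["vagrant-spk", "vm", "up"], cwd=workdir, check=True)
    subprocess.run(["vagrant-spk", "pack", spk_file.name], cwd=workdir, check=True)
    subprocess.run(["vagrant-spk", "vm", "halt"], cwd=workdir, check=True)
    return spk_file


if __name__ == "__main__":
    for name, workdir in FLAVOURS.items():
        print("Built", build_spk(name, workdir))
```

The result would be a handful of .spk files, each of which could be offered as its own one-click button in the app store.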

Well, that is my working theory of how this works right now. How it works when I actually dig deeper into the system may vary from this high-level conceptualization. But if this stack works like I think it works, this will make an excellent platform for the simple deployment of customized web applications where the default is set to “education”.

We really need to come up with a proper way to recognize the technical wizardry of Grant Potter. Maybe a medal?

The Award Winning Grant Potter

 

A slight shift in focus

Just over a year ago, BCcampus went through a significant change in leadership. Mary Burgess, who was the Director in charge of the BC Open Textbook Project, was named acting Executive Director for BCcampus. This change left a bit of a leadership gap for the OTB project. Mary asked that I take on a leadership role for the project. I agreed and became Acting Senior Manager for the open textbook project. The initial term was to be for 6 months, but it was extended to a year as we went through a ministerial mandate review before Mary was named permanent ED.

During this past year, I’ve done interesting and challenging work as the team leader. Coordinating a project like the open textbook project is a massive undertaking, and I have been stretched in ways I couldn’t have imagined. But I do feel stretched. And in the back of my mind I knew that I was getting farther and farther away from a significant piece of what I love doing, and that is working with educational technology.

While there is certainly a tech piece to the OTB project, it has been far from front and centre in my day-to-day work. This past year, you would be more likely to find me at Ministry meetings, preparing budget reports, and working with other provinces on tri-provincial MOUs. All important and meaningful work. And while I think I am a competent and decent administrator and did achieve much in the role, it’s not where my heart is. I am an educational technologist, and the work I have been doing has been taking me farther away from that.

So, this summer, I spoke with Mary about moving out of the open textbook leadership role, and back into a role with a deeper focus on educational technology. She agreed and posted the leadership job.

Helping to make the transition easier was the fact that there were extremely capable people working on the project. Earlier this week, one of those capable people, my colleague Amanda Coolidge, accepted the role as the new team leader for the open textbook project.

The timing is very good for me to step aside. We have exceeded the deliverables of the original project, and in the next few weeks, will release the final open textbooks in trades and skills training. Our original AVED project draws to a close, and it feels like we are shifting to a new phase of the project.

Amanda will take over the project for an exciting new phase where the emphasis will be not on the creation of new material, but on the deeper integration of the OTB material within new pedagogical models, like open pedagogy. While we can’t publicly talk about much yet, suffice to say that the next 3 years will see exciting new work in open textbooks in BC. And Amanda is much more capable of leading this next phase than I am. Her background in Instructional Design and deep history with open education, going back to her work with TESSA, make her a natural for the leadership role.

For me, I’ll still be involved in open textbooks. I’ll finish out a few projects I am committed to, like coordinating the OpenEd conference in November. I’ve got an Open Access week event to do, and am heading to Alberta in a few weeks to do a workshop with eCampus Alberta on OER. But my future role with OTB will see me return to my original focus for the project, which is on technology.

I am eager to get back to work on Pressbooks and move towards making a self-serve instance of PB available to BC faculty. I am also interested in seeing how we can extend the platform and begin to integrate other tools within an open textbook, and explore how we can deeply integrate open textbook content in other edtech systems.

I also have a couple of other projects that I want to work on. As Brian noted, the open education working group was recently cut by BCNet, and I think there is important and exciting work to be done here exploring the role that open source software can have in higher education. It feels like the state of edtech in higher ed these days starts and ends with negotiating the best procurement deal for vendor software. With the exception of Moodle (and, I expect someday in Canada, Canvas), open source software rarely plays a significant role in teaching and learning. I hope that we can set up a group to explore this, building on the work that Brian, Tannis Morgan, Valerie Irvine and Grant Potter had been doing with BCNet.

There are exciting technology developments, like Sandstorm and Docker, that could provide interesting frameworks for delivering a more customizable and configurable suite of open source software tools to faculty and students. I hope we can explore this.

I feel very fortunate to work with people and an organization who allow me the freedom and ability to shift focus. And I do think that, for the open textbook project and where the project is going in the next 3 years, Amanda is the right choice to take this project even farther. I’m looking forward to working with her in her new role and doing more amazing open work.

 

What Pressbooks EDU means for BCcampus and Pressbooks Textbooks

Hugh getting ready to talk Pressbooks and LibriVox at the Open Textbook Summit 2015

A few weeks back at the Open Textbook Summit, Hugh McGuire from Pressbooks announced a new hosted Pressbooks offering aimed at institutions called Pressbooks EDU. Since that announcement, I’ve had a few emails from people asking what this might mean for BCcampus and our work with Pressbooks Textbooks.

In a nutshell, Pressbooks EDU does not change the work we at BCcampus are doing with Pressbooks. BCcampus is still actively involved with development of the Pressbooks Textbooks platform, and will continue to contribute our code back to the core Pressbooks code base. This means that much of the work we (and by we I mean Brad) do in BC on Pressbooks Textbooks could eventually trickle down to this new hosted Pressbooks EDU instance; however, decisions about which of the features and code we develop get merged back into the PB core are made by Hugh and his development team.

While we do share our work openly for the wider community to use, our primary mandated area is to serve the post-secondary institutions within British Columbia. BCcampus will continue to support faculty authoring and adapting open textbooks in Pressbooks as part of the BC Open Textbook project and, later this summer, we will be piloting a self-serve instance of Pressbooks for faculty and staff at BC post-secondary institutions to use.

Hugh’s new service now gives an option to institutions and organizations in other states and provinces who may not have the internal support or resources to set up and host their own instance of Pressbooks (and if you are from a post-sec in BC and are interested in Hugh’s hosted option, by all means contact him). Institutions now have a service provider to support them rather than having to take on the technical work of setting up an instance themselves (which, of course, they could still do, since all the code is open source and institutions with the resources could set up their own instance of PB).

I see this as a great move by Pressbooks as it will likely bring some more interest to the platform in EDU now that there is a hosted option available. It will also provide Pressbooks with a source of funding to help grow Pressbooks, the business. In my opinion, a healthy business model for Pressbooks, the company, means a much healthier open source product in Pressbooks, the software.

We have worked closely with Hugh and Pressbooks in the past, and will continue to do so in the future. Hugh and Pressbooks have been wonderful partners, and I am continually impressed with Hugh’s vision around what a book can be in a networked digital world, which he spoke about again recently at the Open Textbook Summit.

All in all, I see this as a wonderful development for Pressbooks in education, and for the open textbook publishing ecosystem as a whole.

 

Embedding Interactive Excel Spreadsheets in WordPress using OneDrive

One of the projects we are funding is the development of a number of interactive Excel documents to support an open finance textbook in our collection. These types of projects are fun to do, and they enhance an existing resource by adding interactivity to the book. This makes the book more attractive for adoption by faculty.

The author has been developing a number of interactive charts using Excel. The idea being that you change a value and the chart changes. Excel is the software of choice in business, so it makes sense to develop these activities in Excel. Now, there is nothing wrong with having students download the spreadsheets and work on them on their own computer. But the author is looking for a way to enable the interactivity within the browser.

After a bit of digging around I discovered that Microsoft OneDrive has the ability to embed Office documents within a webpage. The instructions on how to embed content also say that “readers can sort, filter, and calculate data, right there in your post”. Sounds like the ticket to me.

So, I uploaded one of the interactive Excel spreadsheets the author sent me to OneDrive, followed the embed instructions and voila…

…an interactive Excel spreadsheet embedded into a post.
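For reference, what OneDrive’s embed option hands you is essentially a small iframe snippet. Here is a hedged sketch of a helper that assembles one; the resid/authkey/em parameters reflect the general shape of OneDrive embed URLs but should be treated as assumptions, and in practice you would simply copy the embed code straight from OneDrive.

```python
# A small helper that builds the kind of iframe OneDrive's "Embed" option
# produces, ready to paste into the HTML view of a WordPress post. The
# resid/authkey/em query parameters reflect the general shape of OneDrive
# embed URLs, but treat the exact format as an assumption; for real use,
# copy the embed code directly from the OneDrive embed dialog.
from urllib.parse import urlencode


def onedrive_excel_iframe(resid: str, authkey: str,
                          width: int = 700, height: int = 400) -> str:
    """Return iframe markup for an interactive Excel workbook hosted on OneDrive."""
    params = urlencode({
        "resid": resid,      # the file's resource ID from the share/embed link
        "authkey": authkey,  # the access key OneDrive includes in the embed link
        "em": 2,             # embed mode OneDrive uses for Office documents
    })
    src = f"https://onedrive.live.com/embed?{params}"
    return (f'<iframe src="{src}" width="{width}" height="{height}" '
            f'frameborder="0" scrolling="no"></iframe>')


if __name__ == "__main__":
    # Placeholder values; the real resid and authkey come from OneDrive itself.
    print(onedrive_excel_iframe(resid="ABC123!456", authkey="placeholder-key"))
```

Markup like this would typically go into the text (HTML) view of the WordPress post editor rather than the visual editor, so the iframe isn’t stripped out.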

I tested this in Chrome, Firefox and IE and it seems to work. Change a value in the yellow column and the chart below it updates. The other columns stay locked, which is how the faculty coded them. So, the behaviour of the sheet seems to be intact.

The embedded interface also gives students the option to download a copy of the original file (so they can retain and work on it in Excel on their desktop, if they choose), or open up the document within Excel on the web using the icons in the bottom right of the embed window.


However, while it works well in the browser, the embedded spreadsheet doesn’t give me much love on my Nexus tablet or Android phone, and the Pressbooks output formats (ePub, PDF and mobi) don’t like the embed code much, leaving big blank spaces in those outputs. So, there is still a hurdle to cross to find an elegant way to make those work. But so far, I’m pretty happy to have found this, as it gives students the option to interact with the data in real time on the website version of the book, or to download and keep the interactive Excel spreadsheet. All while allowing faculty to work in a tool that they feel comfortable with.

 

Correcting Wikipedia history on educational radio in Canada

Valerie Irvine and Irwin DeVries are working on a project documenting the history of open education in Canada. If you have run into Irwin in the last few months, chances are you’ve seen him lugging around some video equipment and maybe even tapping you on the shoulder to get a clip on the role you have played in the history of open education and edtech in Canada.

One of the areas where I am hoping to contribute to the project is around the role of educational radio in Canada. While radio courses have a very long and deep history, I find they often get forgotten when the history of edtech and open education comes up.

My personal perspective isn’t historically deep, considering the roots of radio education stretch back to the 1920s in Canada. It only goes back 20 years, to the work I did in the mid-90s at CKMO radio, a campus/community radio station located at Camosun College in Victoria BC. By the time I began working on radio courses there, educational radio was at the end of its run as a robust delivery platform for open courses. Funding for one of the pillars of educational radio in Canada, CJRT in Toronto, had just been pulled by the then Conservative government in Ontario, and CKUA in Edmonton was also under severe financial strain.*

However, as shallow as my perspective may be, I know when something doesn’t look quite right, like the Wikipedia article on the Open College in Toronto (link leads to old version of the page). When I looked at the article this morning, the first sentence popped out at me:

Screenshot of the original first sentence of the article

That is a pretty bold statement considering that, even with my relatively short 20-year horizon to draw on, I can name at least two other radio-based, university-credit distance education providers: CKUA in Edmonton and CKMO in Victoria. Both offered open courses on the air and both were accredited, CKUA through Athabasca University and CKMO through Camosun College. CKUA in Edmonton is often credited with being the first radio station to program educational content, starting in 1927.

So, I hit edit and made a change to Wikipedia to fix what, I think, is an inaccurate statement. The first line of the article now reads:

Screenshot of the revised first sentence of the article

Now when people read about the history of Open College, they will see that they were not the only ones doing this. As important as Open College was, there were others doing formal radio-based educational programming in Canada.

Update: Grant Potter, also a lover of radio and quick on the draw with finding cool stuff on the web, shared this video about the early history of CKUA.

*An aside: CKUA and Athabasca offered up one of the finest explorations of music I’ve ever heard with the fantastic radio course Ragtime to Rolling Stones which, if you listened to it on CKUA in the early 90s, was free to hear. But if you try to access it via the web today… well.