Privacy in BC is more than just data sovereignty and the cloud

Some random thoughts and notes from the recent InfoSummit I attended. The Summit was put on by the non-profit BC Freedom of Information and Privacy Association.

Among the presenters were KPU faculty Mike Larson and UBC legal counsel Paul Hancock. Both spoke on a number of issues, but the one most relevant to my work was the discussion of the BC-specific data sovereignty requirement, which says that, unless you get consent, personal information collected by BC public bodies must reside on servers located in Canada. In the BC higher education edtech space, this has made using cloud-based services problematic for faculty, who often have to get signed consent forms from students to use cloud services hosted outside of Canada.

On this point, Mike and Paul were on opposite sides of the debate. Mike supported the requirement, and made the point that the requirement to get informed consent has a strong pedagogical function. When he asks his students to sign a consent form to use cloud-based services, it often kickstarts a conversation with them about privacy, data, security and user rights. Now, Mike does teach Criminology and law, so it feels like a natural fit to have this convo with his students, and I wonder whether, say, an English prof or someone teaching trades would be prepared to have this conversation with their students. Or whether their students would even care to have this conversation, as important as it is. But still, it was refreshing to hear a faculty member speak about using BC’s informed consent requirement as a pedagogical device to start conversations about privacy in a digital age.

On the other hand (and probably closer to the reality of most faculty), the data sovereignty requirement and informed consent forms are real barriers for faculty who wish to incorporate other technologies into their teaching and learning. UBC’s Paul Hancock believes that the data sovereignty requirement is much too broad, and he spoke about feeling handcuffed by the legislation when he speaks to instructors who want to use a teaching tool that fits their exact pedagogical goals but is hosted in another country. In those cases, Hancock has to advise the instructor that they cannot use the technology unless they get the consent of their students. Getting consent may sound easy, but if you do have students who do not consent, then the instructor has to have an alternative activity or exercise ready for them that is FIPPA compliant. Now you have to start designing additional activities for these special exceptions, and I don’t know many faculty who have extra time on their hands to develop extra activities as a work-around to the legislation.

But it is not just the data sovereignty requirements that are presenting challenges to higher education institutions. Students are now asking institutions for access to the data collected about them within the technologies hosted on campus. Using a Freedom of Information request, UBC student Bryan Short has been trying to get a copy of the data collected about him by the UBC LMS (Blackboard, branded as Connect at UBC). In a five-part blog post series, Bryan neatly and perceptively outlines his experiences trying to get access to the learning analytics collected about him using a Freedom of Information request (and does a nice takedown of the LMS in general). Normally, when a public institution has been served with an FOI request, it has 30 days to comply. UBC was unable to comply and has now asked for an additional 30 days to gather the information. I wonder what kind of technical hoops UBC might be jumping through to even extract this information from their LMS in a meaningful way.

Sadly, this whole process has made Bryan feel like he is a “meddling, bothersome nuisance” for doing something he has the legal right to do.

I don’t think this will be an isolated incident. As digital privacy becomes a bigger issue within our society, I suspect institutions will begin to see more and more of these kinds of requests from students like Bryan asking for access to their learning data, especially if that learning data is being used in any way to conduct an assessment of the student. Which makes me wonder: how many institutions in BC would be able to respond within 30 days to a student FOI request for the data collected about them in the LMS?

Photo: Eye I. By Thomas Tolkien CC-BY

 

Disjointed, fractured and somewhat pointless

3 months.

I’ve gone through long dry spells on my blog in the past, but this one feels especially long.

I’m struggling a bit to figure out why this writing stuff has suddenly become so difficult. It wasn’t that long ago that I was in the habit of writing weekly posts about what I was working on. But lately I can’t seem to pull all the disjointed and fractured thoughts together to even accomplish that each week.

It’s not like there are not interesting conversations happening that I want to participate in or respond to. I feel like I owe both Audrey and Martin some considered and thoughtful words about EdTech as an academic discipline, as I think I was the one who likely spurred the subject after posting a piece on Facebook that they both responded to…and then took one step further and wrote great posts on (here’s Audrey’s and Martin’s). My take doesn’t go much beyond the annoyance I felt when I read the original article and thought here are elite institutions “discovering” something that many of us thought already existed. Like MOOCs, suddenly discovered when the, ahem, right institutions started doing them. It’s not echolalia, it’s moocolalia.

There is also Martin’s excellent take (and ensuing comments) on a societal shift to a dark place where expertise is dismissed and the facts don’t matter. How does education fight a culture that is increasingly anti-learning?

I feel like I should comment with something, especially since I was quick to jump into the great Twitter maple syrup debate, which prompted an observation from Martin.

But maple syrup is easy. The role of education in helping to solve the decline of western civilization? Not so much. I feel inadequate to respond, especially in light of the incredibly thoughtful responses by a lot of very smart people, each weighing in with their views.

And here is where I get to every time. What is it I am trying to say? It has to be more than just…this.

I feel like I’m wasting time. Mine, because here I have sat for 45 minutes trying to figure out what it is I want to say, or whether I even have anything else to say.  And yours because, well, I haven’t really said all that much.

Maybe I am just going through a drought, but it feels like something has changed. Writing used to be the way that I would connect the thoughts and make meaning out of this stuff. But lately…not so much. It feels like an awful lot of work with little reward. And work that is dissatisfying because I keep going and feel I get nowhere. Get stuck. Have a thousand strands in my head that I can’t quite pull together, all elusively out of reach of a coherent narrative.

Hell, never mind coherent narrative. At this stage, I’d be happy with a point.

So I am just going to keep on writing. I am hoping I can write my way out of this and get back to where I feel like I actually have something meaningful to contribute to the conversation. It’ll likely mean a few go-nowhere posts, but I’m going to publish them anyway, so please bear with me. I feel like I need to get this blogging thing under control again.

Photo: Temporary Pointless Sign by Cory Doctorow, CC-BY-SA

 

My first pull request

Crazy to think that, even though I have had a GitHub account for 5 years and have poked, played and forked things, I have never made a pull request and contributed something to another project until today.

I attribute that mostly to the fact that I stopped actually developing and writing code right around the same time as I signed up for a GitHub account, and the fact that it took me a long time to grok how GitHub works. Honestly, I am still not totally sure I understand how GitHub works, but after a great session with Paul Hibbitts at the Festival of Learning last week where I had a chance to dig into both Grav and GitHub, I finally feel like I can find my way around GitHub with some level of confidence. Enough that when I saw an opportunity to contribute to a project earlier today I thought, “I can help!”

The trigger was a tweet from the web annotation project Hypothes.is. I’ve been playing with Hypothes.is since hearing about the project from David Wiley a few years ago. It is maturing into a really great annotation system that has found some use among educators, including Robin DeRosa, who is using Hypothes.is as an annotation tool for material she has published in PressBooks.

The tweet from Hypothes.is pointed me to a small project that Kris Shaffer is working on – a WordPress plugin that will allow you to aggregate your Hypothes.is annotations on a page or post on your WordPress site.

As Kris points out in his blog post about the plugin, there are some compelling use cases:

I envision a number of possible uses for Hypothes.is Aggregator. As I write in my post on hypothes.is as a public research notebook, you can use this plugin to make a public research notebook on your WordPress site. Read something interesting, annotate it, and aggregate those annotations – perhaps organized by topic – on your domain. They will automatically update. Just set it and leave it alone.

I also see this as a tool for a class. Many instructors already use hypothes.is by assigning a reading that students will annotate together. Hypothes.is Aggregator makes it easy to assign a topic, rather than a reading, and ask students to find their own readings on the web, annotate them, and tag them with the course tag. Then Hypothes.is Aggregator can collect all the annotations with the class tag in one place, so students and instructors can see and follow up on each other’s annotations. Similar activities can be done by a collaborative research group or in an unconference session.
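To get a feel for what an aggregator like this is doing under the hood, here is a minimal Python sketch that pulls public annotations for a tag from the Hypothes.is search API (the same data the plugin works with). The course tag is a made-up example, and this is my rough illustration, not code from Kris’s plugin.

```python
import requests

# Public Hypothes.is search API; no authentication needed for public annotations
API_URL = "https://api.hypothes.is/api/search"

def fetch_annotations(tag, limit=20):
    """Return public annotations that carry the given tag."""
    resp = requests.get(API_URL, params={"tag": tag, "limit": limit})
    resp.raise_for_status()
    return resp.json().get("rows", [])

if __name__ == "__main__":
    # "geog101" is a hypothetical course tag, used only for illustration
    for ann in fetch_annotations("geog101"):
        # Pull out the highlighted passage, if the annotation has one
        quote = ""
        for target in ann.get("target", []):
            for selector in target.get("selector", []):
                if selector.get("type") == "TextQuoteSelector":
                    quote = selector.get("exact", "")
        print(ann["uri"])
        print("  highlighted:", quote[:80])
        print("  note:", (ann.get("text") or "")[:80])
```

A plugin like Kris’s presumably does something similar, then renders the returned rows as HTML inside the page or post.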

I went to his GitHub site, downloaded the plugin and fired it up. It worked (although the Cover theme I am using has done some funky formatting to it, which I need to adjust). But when I took a look at the GitHub site, I noticed that Kris had no README file in the repo and the actual instructions on how to install and use the plugin were only on his blog post. Aha! A chance for me to actually contribute something to a project! So, I fired up my Atom editor, forked his repo and added a README.md file with instructions that I copied and pasted from his blog post on how to install and use the plugin.

So far so good. Now to figure out how to actually do a pull request. I thought that, before I did this (and not knowing exactly what might happen when I hit the Pull Request button), I should check with Kris. So I fired him off a tweet.

Ok, all good. I used these instructions from GitHub on how to launch a pull request and, a few minutes later, my README file was sitting in Kris’s GitHub repo.
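For the record, the Pull Request button isn’t the only way to do that last step. Once the fork and branch exist, the same pull request can be opened through the GitHub REST API. A minimal sketch, with the owner, repo, fork and branch names all made up for illustration (they are not the actual details of Kris’s repo):

```python
import os
import requests

# Placeholder names for illustration only
UPSTREAM_OWNER = "kris"                 # owner of the original repo
UPSTREAM_REPO = "hypothesis-aggregator"
MY_USER = "my-github-username"          # the account holding the fork
TOKEN = os.environ["GITHUB_TOKEN"]      # a personal access token with repo scope

resp = requests.post(
    f"https://api.github.com/repos/{UPSTREAM_OWNER}/{UPSTREAM_REPO}/pulls",
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github.v3+json",
    },
    json={
        "title": "Add a README with install instructions",
        "head": f"{MY_USER}:master",    # branch on the fork that has the new README.md
        "base": "master",               # branch on the upstream repo to merge into
        "body": "Adds a README.md with the install and usage instructions from the blog post.",
    },
)
resp.raise_for_status()
print("Pull request opened:", resp.json()["html_url"])
```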

I am still not totally sure what I am doing, but having that first pull request under my belt has given me a boost of GitHub confidence.

Image: GitHub (cropped from original) by Ben Nuttall, CC-BY-SA. The cropped version used here is released under the same CC-BY-SA license.

 

What would you do with a Creative Commons certificate?

I’ve been following the development of a Creative Commons certificate since last fall. Paul Stacey from Creative Commons paid a visit to the BCcampus office looking for some feedback on a DACUM-inspired curriculum process he was leading, and on the potential value of a CC certificate.

Developing a certificate program that is flexible enough to consider all the potential use cases for Creative Commons is (I think) one of the biggest challenges. While we in higher ed look at CC licenses as a way to enable the development and sharing of curricular resources and open access research, the use cases outside of academia are wide and varied. CC is used by authors, musicians, filmmakers, photographers, and other types of artists. Creative Commons licenses are also being used by governments, by galleries, libraries, archives and museums (GLAM), in furniture design, in 3D printing & manufacturing, and even in game design.

Earlier this year, Alan Levine was brought on board to assist with the process, and it’s great to see some progress being made on the development of a Creative Commons certificate. Alan has asked for some help from the community to seed a website with some videos on how a CC certificate could be applied and used.

One of the ways that I could see my organization, BCcampus, using a CC certificate program is to help us vet grant applications. Over the years, BCcampus has supported the development of open educational resources (open courseware with the old OPDF program and the current open textbook project) by coordinating grant programs. A number of institutions get together and collaborate to create open courses or open textbooks that can be freely shared with others. As a condition of the grant, those creating the resources have to agree to release their material with a Creative Commons license. When people apply for a development grant, they are often either not familiar with Creative Commons at all or have only a cursory knowledge of how the licenses work, so BCcampus often takes on the role of providing support and training to the grantees, depending on their level of knowledge of Creative Commons.

Having a certificate program from CC would help with the application vetting process. Additionally, with some CC certified standards to align with, I think the community could develop some fantastic openly licensed learning resources to support the CC approved learning objectives. It could become a model of OER production and sustainability if we all begin to build on each other’s work.

If you have a use case for a CC certificate, take a minute, record a video and let Alan know. Here is my response.

 

Follow that open educational resource!

This morning I was taking a look at some of the Piwik website analytics for the Geography open textbook we created two years ago as a textbook sprint project. For me, the really interesting data is always the referring website data, as it can give you a glimpse of how the content is being used and by whom.
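For anyone who wants to pull the same report out of Piwik without clicking through the dashboard, the referrer data is also available over the Reporting API. A minimal sketch, assuming the standard Referrers.getWebsites method; the URL, site ID and token are placeholders for your own install:

```python
import requests

# Placeholders: point these at your own Piwik/Matomo install
PIWIK_URL = "https://analytics.example.org/index.php"
SITE_ID = 1
TOKEN_AUTH = "your_token_auth_here"

params = {
    "module": "API",
    "method": "Referrers.getWebsites",  # the referring-websites report
    "idSite": SITE_ID,
    "period": "month",
    "date": "today",
    "format": "JSON",
    "token_auth": TOKEN_AUTH,
}

rows = requests.get(PIWIK_URL, params=params).json()

# List referring domains by visit count, busiest first
for row in sorted(rows, key=lambda r: r.get("nb_visits", 0), reverse=True):
    print(f"{row['label']:40} {row.get('nb_visits', 0)} visits")
```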

The BC Open Textbook Project tracks adoptions of open textbooks with an eye to reporting back student savings. But there is much more value in open resources than just a displacing adoption where a commercial textbook is replaced by an open one. When you create open resources, you may have one specific group in mind, but you often find there are unexpected audiences using your resources.

This is a big value proposition of open resources. Once you make an open resource, it is available for others to use and refer back to. Each open textbook in the BC open textbook collection contributes to improving the knowledge available on the open web.

This is one of the major reasons I love open resources created by smart people in higher education. Every time an edu contributes resources to the open web, we make the web a better, more informed space for all.

I like to think of it as making the web more educational, less Perez Hilton.

Onto the Geography open textbook.

First, there is a lot of evidence that the book is being used by the intended audience of BC post-secondary institutions as there are referring links back to sections of the book from post-secondary domains at KPU, UVic, Langara, VCC, UBC and VIU. Most of the referring links I follow back take me to a landing page for an LMS at that institution, telling me that the content is being referred to from inside a course. This may not be a full adoption of the resource by the faculty, but it does indicate that the resources are being used by the intended audience.

But use of the resources extends beyond BC higher ed. A large source of referral traffic is from the Oslo International School, which appears to be an International Baccalaureate primary school. Again, when I follow the referring link I am met with an LMS login screen. It’s crazy to think that a regionally specific resource aimed at a first year Geography student in British Columbia is finding use at a primary IB school in Oslo, Norway. You never know where you’ll end up when you go open.

It is not the only K-12 school referring to these resources. Teachers from School District 43 in BC have recommended (Word) the section on Residential Schools to their students as a resource for a research project. Teachers in School District 63 are using the section of the textbook dedicated to the BC Gold Rush as a learning resource in their classes.

Those two resources have also found use outside of education. The Royal BC Museum Learning Portal has included a link back to the textbook section on the Gold Rush as a resource on their learning website dedicated to the BC Gold Rush, and Vice included a referential link back to the section on Residential Schools in an article it published on Canada’s Truth and Reconciliation Commission and residential schools.

The textbook is also showing up on some kid-friendly search engines. A referral from a KidRex search on the Hope Slide led me back to the results page for that search, which shows the Hope Slide case study in the textbook coming up as the second result (behind the Wikipedia entry).

None of these uses save students a penny, but they show the value of an open resource beyond the financial. No doubt the student savings are important, as the financial barriers are real. But to me, seeing this kind of usage of OER shows the benefits extend beyond students. These resources make the web better for all. It is higher ed freely contributing knowledge to the world. It is higher ed making the world much less Perez Hilton.

Addendum: Gill Green, one of the original book authors and current Open Textbook Faculty Fellow, sent me this tweet about a resource we created for the book.

 

Wikipedia and open learning at the Festival of Learning

The BC Festival of Learning is happening next week in Burnaby. This is an amalgam of a number of different workshops and conferences that have been supported by BCcampus: the Educational Technology Users Group (ETUG) workshops, the Open Textbook Summit, and the Symposium on Scholarly Teaching & Learning.

I’ve got a busy week ahead of me, facilitating or participating in a number of different sessions, including a three hour Wikipedia workshop on day 1 with Judy Chan and Rosie Redfield (UBC) and Jami Mathewson from the Wiki Education Foundation.

I am quite excited about this session as this is something I have wanted to see happen at an ETUG for the past few years. I have written about Wikipedia in the past and have been a semi-regular contributor for many years. I also maintain a curated Scoop.it collection where I stuff articles on how educators are using Wikipedia.

Getting ready for the workshop, I’ve been impressed with how much work has been done by the Wiki Education Foundation to help support educators who want to use Wikipedia in their classes. The resources available to instructors – from handouts, how-to’s and lesson plans to real live people who can help support them – have really lowered the barriers for educators beginning to use Wikipedia. This is not the same unsupported landscape for educators as it was 10 years ago, when early adopters like UBC’s Jon Beasley-Murray were trailblazing. Full credit to the foundation for making it easier for educators to engage with Wikipedia.

It’s been interesting to watch perceptions of Wikipedia change in higher ed over the years from the days when nobody knew exactly what Wikipedia was, to the backlash forbidding its use by students, to tacit acceptance that it could have a role to play in higher ed, to today where we are seeing active engagement on Wikipedia by many in the academic community interested in exploring open pedagogy.

I have also been heartened to see academics who treat the platform seriously and realize that the world’s largest repository of open knowledge is being heavily used by people in their daily lives. They understand that, as academics, they have an important role to play in helping to maintain the accuracy, breadth and diversity of Wikipedia. Faculty like Dr. James Heilman and Dr. Amin Azzam, who regularly correct misinformation in Wikipedia articles about health.

Heading into the world of Wikipedia is not without its risks, as UofT professor Steve Joordens discovered when he had his (1,900!) students start editing Wikipedia articles, flooding the existing Wikipedia volunteer editors with tons of extra work as they had to filter the contributions. Wikipedia is, first and foremost, a community made up of volunteers, and learning to negotiate and engage with that community is just as important as contributing & fixing content. It’s one of the topics we’ll be discussing at the workshop.

Image: Wikipedia by Giulia Forsythe CC-BY

 

Facebook has an identity crisis – and it's messing with democracy

I’ve followed the long-standing Facebook identity battles that both Alec Couros and Alan Levine have had to endure, and the abject failure on the part of Facebook to deal with fake account after fake account expropriating their identities to do all manner of nasty things. Today comes news that the mayor of Victoria, Lisa Helps, was locked out of her own Facebook account because…well, because Facebook doesn’t believe that a person in politics could actually have the last name of Helps.

While what has happened to Alec and Alan is serious and has caused a great deal of pain to people who have been duped and manipulated by one form of catfish con after another (to say nothing of the huge amount of effort both Alec and Alan have expended fighting Facebook), it is another level of icky when Facebook starts messing with the identity of publicly elected officials.

Regardless of your political opinions of the mayor (and just for the record, I live in Saanich, a different municipality with a different mayor), it is clear that Helps considers Facebook an important tool to engage with her constituents on all manner of public policy. Which is how it should be. The internet should enable more direct interaction with our public officials.

But by locking her out of her own account, Facebook has essentially gagged a public official. In short, Facebook – a corporation that is no stranger to accusations that it manipulates political opinion and conducts ethically questionable research by manipulating what we see in Facebook – is messing with democracy.

Now, I don’t think that there is anything overtly political behind having the mayor’s account shut down by Facebook. I think this is a case of Facebook’s own algorithmic bumbling. But, intentional or not, there are socio-political implications to having a publicly elected official lose access to their own Facebook account. Imagine if this happened with just a few days left in a tight election campaign? Or during a crisis in the city where the mayor was trying to use Facebook as a way to communicate important information to the citizens of her community?

It would be dismissive to think that social media is trivial. That this is just Facebook and there are plenty of other avenues available to the mayor to communicate with constituents. Which is true. But the fact is that social media is driving much of the political discourse happening in North America. Recent Pew Research shows that most people get their news from social media, with 63% of respondents saying that they get their news directly from what they see on Facebook. Over 60% of us use Facebook and other social media to engage in political discourse. And Facebook, the company, has no qualms about adjusting our newsfeeds to promote certain behaviours during an election.

Social media has become a vitally important mechanism in our political process and, by extension, our society.

I am becoming convinced that it is dangerous for us to leave something as crucial as our identity up to an unaccountable, corporate social media company. Facebook is messing up too badly, and the stakes are just too high in a democratic society.

I think our civic institutions need to be playing a bigger role in digital identity. Our governments need to be doing more to help their citizens verify that who we are online is legit. It’s a role our governments have always had a hand in, through the issuing of government identification documents like passports, health cards, and driver’s licenses. It’s time for them to step up and provide some kind of mechanism that can help their citizens verify that they are who they say they are online.

I also think that we need some regulations on social media with regards to digital identity issues. When the mayor of a city – and that city’s police force – are unable to convince Facebook that the mayor is who she says she is…that is a serious problem. Our digital identities are too important to be left to customer support staff who refuse to return messages and fix problems quickly. It begins to look like censorship – tacit or otherwise – when the mayor is cut off for 9 days and gets NO response from the company that cut her off. With identity, Facebook is failing, and it is time for our public officials to step in and ensure that there are effective and efficient identity dispute mechanisms in place that keep people from being locked out of their own accounts for days and weeks on end. And with Facebook single sign-on accounting for over 60% of login credentials at third-party sites, getting the boot on Facebook likely means getting the boot from a whole host of other sites and services across the web that you use.

I am also becoming convinced that our governments need to be more proactive in providing citizens alternative public virtual spaces in which to engage. While it is great that civic engagement happens on Twitter and Facebook and other virtual spaces, it is still at the whim and under the control of those social media companies. Just like our communities have real public spaces like libraries, schools, recreation centres and other physical municipal institutions, we should also be pushing for more of these virtual public spaces provided by our civic institutions. Places where a mayor can virtually interact with a wide network of constituents that aren’t controlled by a corporation driven by its own best interests, with seemingly little regard for the damage it is doing to our lives and communities.

 

Sandstorm Apps and Grains

Understanding the difference between Apps and Grains is important to understanding how Sandstorm works.

Grains are discrete instances of apps. Each grain is a copy of an app. This allows each grain to run isolated from other copies of the app in Sandstorm. Here’s a little walkthrough to help you understand how they relate to each other.

Adding the app to Sandstorm

Before you can create grains, you have to add the app from the Sandstorm app market to your local instance of Sandstorm. When you log into Sandstorm for the first time, you’ll see a blank slate that looks like this:

[Screenshot: a blank Sandstorm dashboard]

Not much there.

In the left-hand navigation, there are two sections: Apps and Grains. The screenshot above is the default Apps section. If I switch to the Grains view, I get a message that I don’t have any Grains yet.

[Screenshot: the empty Grains view]


In a nutshell, you install the application once on your local instance of Sandstorm. Once it is installed, you can create multiple copies of that app to use. Each of these copies is called a Grain in the Sandstorm world.

So, let’s use Etherpad as an example. I install the Etherpad app from the Sandstorm App Market onto my Sandstorm server. Once that is done, every time I want to create a new, unique Etherpad document, I create a new grain (copy) of Etherpad, with each grain operating independently of the others.

Installing an App

The first step in making Etherpad available on Sandstorm is to install the app from the Sandstorm App Market. Think of the Sandstorm App Market like Google Play or the Apple App Store. I only have to do this step once. Once Etherpad is installed on my Sandstorm server, I can then create multiple grains of Etherpad with the click of a button.

In the Apps section, click Install from App market. This will open the Sandstorm App Market in a new browser tab or window. Find Etherpad and click Install.

[Screenshot: Etherpad in the Sandstorm App Market]

You’ll be taken back to Sandstorm and get a message asking you to verify that you want to install the app.

[Screenshot: the install confirmation message]

This is one of the most visible places where you will see Sandstorm’s commitment to security: each app includes some additional information to help you verify that the application is legitimate. You will see a PGP key signed by the application publisher along with their verified contact information. Sandstorm provides a cryptographic chain of trust that connects the app package you’re installing to the app publisher’s online accounts. This assures you that you are installing a legitimate Sandstorm application and provides a verified trail back to the person who published the app.

Click Install Etherpad and the application is installed and ready to use.

Create a Grain

Once the app is installed, you can now create your first Etherpad Grain by clicking on Create new pad.

[Screenshot: Etherpad installed and ready to use]

You’ll see the Etherpad Grain now appear in your left-hand navigation under Grains as Untitled Etherpad. To change the title to something more meaningful, click on the Untitled Etherpad title at the top of the screen.

[Screenshot: the Untitled Etherpad grain]

A popup will appear where you can change the name.

[Screenshot: the rename popup]

Click Ok and your Etherpad name is changed at the top of the screen and in the Grains navigation on the left.

[Screenshot: the renamed Etherpad grain]

Now I am ready to start working on this Etherpad. Clicking the Share access link at the top of the page lets me generate a link that I can send to collaborators, giving them anonymous access to work on this document, just like you can with Google Docs.

[Screenshot: the Share access dialog]

Go back to the Apps page and you’ll see that Etherpad has now been installed on the local instance of Sandstorm.

[Screenshot: the Apps view showing Etherpad installed]

If I want to create another Etherpad Grain, I don’t have to go back to the app market and reinstall the application from the start. I simply click on the Etherpad app icon and create a new grain. Clicking on the Etherpad icon also shows me all the grains of Etherpad I currently have.

[Screenshot: creating a second Etherpad grain]

With the app installed, I can now create dozens of discrete Etherpad grains and share them with different groups of people, each running as its own application within Sandstorm.

[Screenshot: multiple Etherpad grains]

Header image: Grains of Sand by Fran Tapia CC-BY-ND

 

Working with Sandstorm

I’ve been making an attempt to kick the tires more with Sandstorm in preparation for our upcoming workshop at the Festival of Learning.

[Screenshot: my Sandstorm grain dashboard]

Small pieces, loosely joined is what Sandstorm is all about. Sandstorm is the stitching that joins the small pieces, providing a common authentication and security framework to a patchwork quilt of open source applications.

So far I’ve tested out about half a dozen of the 50+ applications within the Sandstorm ecosystem, trying to use them in my day-to-day work. Etherpad (the collaborative document editor that is a scaled-down version of Google Docs) and Framadate (a handy meeting scheduler alternative to Doodle) have been the most useful. I’ve also played around with Ethercalc (spreadsheet), Quick Survey (survey tool), Hacker Slides (presentation tool that uses Markdown), NodeBB (forums), GitLab (Git repo), Rocket Chat (Slack alternative), and mucked around a bit with the WordPress port in Sandstorm.

My general observation is that the applications that work well within the Sandstorm environment are small, discrete and focused, where you create a single instance of the application (called a grain in the Sandstorm world) for a single document or meeting invitation. Tools like Etherpad, Ethercalc, Quick Survey, Hacker Slides and Framadate are the type of applications Sandstorm does well: you create a document, share it with others to collaborate on and contribute to, and then move on.

I tend to think of these tools as being somewhat disposable. Once a discrete task is done, it’s done. The survey is finished, the meeting dates are picked, the document has been edited and completed. Get in, do your work, get out.

As you can see from my screenshot, I’ve got a lot of Etherpad instances on the go, working on collaborative documents with different users. There is no folder scheme in Sandstorm, or any other way to organize these multiple instances, so I can imagine that, as you create more and more documents over time, the user interface could become quite cluttered. I’m just starting to get to the tipping point where I’d like to be able to put some structure around the different applications I have going. Maybe organizing by project, grouping all the related apps I am using for a single project in a single folder or some other visual organizational metaphor. But I haven’t seen a way to do that yet.

More complicated applications seem to have more limitations. WordPress, for example, is not the full featured version of WordPress that you would get at WordPress.com or if you installed it yourself. Installing plugins and themes means uploading a zip file instead of connecting to the remote WordPress plugin repo. Publishing is static, meaning whenever you add new content you have to rebuild the site.

Rocket Chat (a nice open source Slack-like application) also has a limitation with the mobile app. Rocket Chat works quite well if you are logged into Sandstorm, but  the mobile application cannot connect through Sandstorm, which limits its usefulness.

These are not dealbreakers, but really just the things you learn while sandboxing and experimenting with new technology – seeing what the tool does well and where the limitations are.

Image: Blue Sky by leg0fenris CC-BY-NC-ND

 

Create embeddable HTML5 content with H5P

Been playing around this morning with a series of tools called H5P.  H5P is a plugin for Drupal, Moodle and WordPress that allows you to create a number of different interactive HTML5 media types. Things like interactive videos, quizzes, timelines and presentations.

I’ve only had a chance to play with the plugin for a few minutes this morning, but got it working and was able to create some basic interactive content, adding a branching overlay to a YouTube video that runs from the 2 to 12 second mark. Choose an option from the screen and jump to a different point in the YouTube video. I also created a simple interactive question.

While I created these using the H5P plugin I installed on another WordPress site, the plugin also generates embed code that lets others post the content I created on their own sites, giving other people the chance to use the same content. So, here is that same interactive quiz question that I created on my testing site, now embedded here using the H5P embed code.

With the interactive video example, I am actually embedding an embedded YouTube video with the overlays that I created using H5P. Meta-embed.

There is also an option to assign an open license to the interactions I create at the time I create them, and make it possible for people to download the source file.

One thing I can see off the bat is that there are a lot of content type options with this tool. There are about 30 different content types, each with numerous options, so this 10-minute quick look hardly does justice to the possibilities. But I like where this is going, and it certainly merits a deeper dive into the tool.

H5P is an open source project and community led by the National Digital Learning Arena (NDLA) in Norway. NDLA is a publicly funded project which aims to offer a complete, free and open learning portal for all subjects at the Norwegian high school level.

More to come as I dig deeper into this tool and plugin.

 

A BC HigherEd WordPress Community

South of the border, I am watching the WP in Higher Ed community growing, and it strikes me that there may be an appetite for  something similar to happen in BC.

WordPress has deep roots in the BC post-sec system, and there is a lot of WordPress use currently happening.  There are UBC blogs and UNBC blogs, WordPress course development happening at JIBC, eportfolio work at Capilano (who invoked both The Bava and Novak Rogic in their site credits and at their recent presentation at the BCNET Conference). When I was at Camosun College, I set up a WordPress instance that is still being used by faculty. There is the fantastic PressBooks goodness Brad is whipping up here at BCcampus to support the open textbook project, and the work at TRU being done by Brian Lamb and Alan Levine.


I suspect this is the tip of the WordPress iceberg & there are many more pockets of use in higher ed in BC.

I’m hoping to start finding those pockets of WordPress use in the system in the hope of bringing together those who are using (and want to use) WordPress into some kind of community/network of practice.

I’ve set up a form to gather information from folks in the BC post-sec system who are using WordPress, or are interested in using it, and who want to connect with others across the province.

I have to stress that this is very preliminary groundwork on my part to gauge whether there is enough interest in the province to bring together some kind of more formalized community and/or network. What this community/network will look like, what we work on, how we connect, and where we find value is something that should be driven by the community, so if the shape, structure and feel of this community is a bit vague right now, that’s intentional. But from my view, I can see areas where it makes sense to come together, collaborate, and find shared commonalities and potential opportunities that could benefit all.

If you know someone in the BC post-sec world who is using WordPress, please let them know about this opportunity. I hope that we can get a good mix of people from both the technology and the pedagogy sides of the house to come together and participate.

Image: edupunkin by Tom Woodward CC-BY-NC

 

NGDLE and Open EdTech

I’ve been doing some research on Next Generation Digital Learning Environments (NGDLE) and think it might be another useful way to frame some of the work we are doing with open edtech. Educause has a 7 Things paper and a deeper white paper on NGDLE, and Phil Hill has written about NGDLE as well if you want to dig in further.

In a nutshell, NGDLE is the idea that the next generation of learning tools isn’t the single monolithic LMS, but rather a series of applications connected together using different sets of emerging and established learning tool standards.

The LMS may be part of an NGDLE environment, but it is probably more likely that the LMS would take on a more connective and administrative function in an NGDLE environment. The idea is to separate the course administrative tools & functions (like classlists and gradebooks) from the teaching and learning tools, and allow faculty to mix and match tools to fit their pedagogical needs. This gives faculty greater autonomy over which tools they want to use, while those tools remain connected (with technologies like LTI & Caliper) to centralized institutional systems.
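To make the LTI piece a little more concrete, here is a rough sketch of what a basic LTI 1.x launch looks like from the consumer side: a form POST of course and user parameters, signed with an OAuth 1.0a key/secret that the institution and the tool provider share. The URL, key, secret and parameter values are all made-up placeholders, and this is a simplification for illustration, not a complete implementation of the spec.

```python
import requests
from requests_oauthlib import OAuth1
from oauthlib.oauth1 import SIGNATURE_TYPE_BODY

# Placeholder values; a real launch uses the tool's real URL and a shared key/secret
LAUNCH_URL = "https://tool.example.org/lti/launch"
CONSUMER_KEY = "my-institution-key"
CONSUMER_SECRET = "my-institution-secret"

# A minimal subset of the LTI 1.x basic launch parameters
launch_params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course42-week3-activity",  # unique id for this placement
    "user_id": "student-123",
    "roles": "Learner",
    "context_id": "course42",
    "context_title": "Introduction to Human Geography",
}

# LTI launches are signed with OAuth 1.0a, with the signature carried in the form body
oauth = OAuth1(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
               signature_type=SIGNATURE_TYPE_BODY)
resp = requests.post(LAUNCH_URL, data=launch_params, auth=oauth)
print(resp.status_code)
```

The signed parameters are what let the tool trust that the launch really came from the institution’s system, without the student ever creating an account on the tool.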

While it is being tagged with “Next Generation”, it is an idea that has been around for a while now (see D’Arcy’s eduglu post from a decade ago). It also strikes me that there is more than a nod to the concept of the PLE in this approach as well, although the PLE construct is about more than just technology and tools and is focused on learner autonomy, while NGDLE is more institutional and faculty focused.

We’re beginning to see institutions move towards this approach, where the LMS is more the middleware that handles the administrative functions of course management, and faculty mix and match the learning tools to meet their goals. Phil Hill wrote a post about the University of North Carolina Learning Technology Commons, where faculty can log in and choose learning tools from an approved list of tools that will integrate with the existing LMS – the idea of a learning tools app store.

These tools are approved in two senses. First, there is a peer review process where faculty can review a tool and leave feedback for their peers, similar to the CASA model that I wrote about a few weeks ago, and which I love.

The second part of becoming an approved app involves vendors who submit their app to be reviewed and listed in the app store. In fact, a big part of the UNC app store approach is to, “iron out inefficiencies in edtech procurement.”

Smoothing procurement.

Now, I don’t necessarily have a problem with putting systems in place to smooth procurement, especially when part of the purpose is to make room for smaller players and not default to the 800-pound gorillas. But it does make me wonder: how do faculty find tools that do not have a vendor pushing and backing them? The process (as it appears to me from the outside) seems to heavily favor commercialized, vendor-backed learning tools over open source, community-developed applications.

Certainly, there is a lot to like about the NGDLE approach. It acknowledges that there is seldom one tool that fits all pedagogical needs, and gives faculty the freedom and flexibility to try out different tools to fit their pedagogical goals. Indeed, I can see the NGDLE concept as one way to frame the open edtech experimentation we are doing with Sandstorm.  And UNC may have mechanisms to get tools in the app store that are not vendor driven, so I have to applaud the fact that they are doing this and making more teaching and learning tools available to faculty.

My caution is if the only options we put in front of faculty to carry out one of the core functions of our institutions are commercially driven options, then we’re not only missing out, but are locking ourselves in to a vision of edtech that is completely vendor driven. We are not putting all the edtech options on the table; options that often have much more involvement and development input from actual educators than many vendor solutions.

As Candace Thille noted in her recent Chronicle interview on learning analytics, As Big-Data Companies Come to Teaching, a Pioneer Issues a Warning (may be paywalled):

…a core tenet of any business is that you don’t outsource your core business process.

Teaching and learning are the core business of most higher education institutions. How much of that core business are we willing to outsource?

Also, see Jim Groom.

Photo: Open source free culture creative commons culture pioneers by Sweet Chilli Arts CC-BY-SA

 

Framing our Open EdTech project

There was a great series of blog posts between Dave Winer and Joi Ito this past weekend about the Open Web that touched on the role of universities in the Open Web. You can read Ito’s first post, Winer’s response, and Ito’s followup.

A few things struck me reading this exchange between two web luminaries. First, both are having a good old-fashioned blog dialogue on the open web, in spaces they each own and control, and because of that I get to reap the benefit of overhearing their conversation. This gives me a better understanding of how two people who are deeply connected to the web are feeling about the web today. Their transparency, working in the open, brings me a bit of their knowledge about the state of the web. And clearly, they are both feeling that the web they know – the web that allows exactly this sort of free flow of dialogue – is being threatened by more and more closed spaces.

The second thing that struck me was the list of call to action points that Dave Winer posted as one way to combat the closing of the Open Web.

  1. Every university should host at least one open source project.
  2. Every news org should build a community of bloggers, starting with a river of sources.
  3. Every student journalist should learn how to set up and run a server.

I’d actually expand the third point to include many more people, including first and foremost, any academic or researcher.

But it’s the first point that caught my eye and made me wonder how many open source projects are being hosted by BC higher ed institutions? And how many of those are specific to teaching and learning?

I know that UBC and RRU have public open portals that showcase some of the open work being done at those institutions. I imagine there are many, many more, not only at those institutions but at others around the province, being spearheaded and/or contributed to by staff, students and faculty. I’d be interested to hear about them, and if you know of any, please leave a comment below.

It was a timely series of posts to read for me as I have been working with Grant, Brian, Tannis and Val on crafting a vision & plan for our open collaborative educational technologies in BC higher ed group (OCETBCHE? We really need to come up with a name), and one of the goals I have of our work is to see an increased level of interest across our system in the use of OSS for teaching and learning.

To frame our work, I’ve been looking for some high level documents that articulate the importance of OSS in an educational context. One of the strongest statements I have found about the importance of OSS in education (that also connects quite nicely with the BCcampus mandate of open education in general) is a paragraph in the Cape Town Declaration.

For many working in open education, the Cape Town Open Education Declaration is a defining document in the field. While it is often connected most explicitly to open educational resources, there is also a section in the document that speaks directly to software and technology that doesn’t seem to get the same level of attention as the sections about OERs.

However, open education is not limited to just open educational resources. It also draws upon open technologies that facilitate collaborative, flexible learning and the open sharing of teaching practices that empower educators to benefit from the best ideas of their colleagues.

Building on this principle, I’ve been thinking about the purposes of our working group, and I’ve come up with a few ideas of why we want to do this.

  1. To promote the use of open source applications focused on teaching & learning. While there are numerous commercial vendors promoting the use of commercial software, numerous open source applications get overlooked because there are no vendors selling & marketing OSS.
  2. To provide practical solutions to educators wishing to employ open education pedagogies that build on network learning principles.
  3. To promote inter-institutional collaboration. OSS relies on the development of communities of developers and users in order to be successful. The success comes from sharing knowledge about how the software is constructed and can be pedagogically utilized. The software becomes the focal point around which a community can develop.
  4. To provide a pathway for institutions and educators to actively participate in EDU-focused OSS projects. Pathways to participate in OSS projects can sometimes be obscure and difficult to navigate, meaning educators may not want to, or may not feel welcome to, participate in EDU OSS projects. This group can provide support for those who wish to dive deeper and participate in specific community projects, and in ways that are not just software development. This provides benefit to the OSS project as it can bring new members into the community, and active involvement in OSS communities strengthens the software, the community developing & maintaining the software, and the long-term sustainability of the software.
  5. To encourage technological autonomy and provide ways for students, faculty and institutions to own and control their own data.
  6. To lower the barrier to participation on the open web for faculty and students.
  7. To provide value to other higher ed support systems within BC (think specifically of utilizing services like BCNET’s EduCloud).

It’s my start at trying to define some of what I am hoping we can do here in BC over the next little while.

Photo: I support the Open Web by Bob Chao CC-BY-NC-SA

 

Attribution and content theft in a new media world

A few weeks back I was contacted by Buzzfeed reporter Katie Notopoulos, who was interested in doing a story about my ongoing PayPal woes. Buzzfeed published Katie’s story yesterday. In the story, Katie included a link back to my original PayPal blog post.

Immediately after the story was published, I began receiving pingbacks on my blog and my comment section began to fill with stuff like this…

[Screenshot: a flood of pingback comments]

I decided to follow a few back, thinking that they might be commenting on the story. Instead, what I found was content scraped verbatim from Katie’s Buzzfeed story, including the link back to my original blog post.

[Screenshot: copied site #1]

[Screenshot: copied site #2]

Which explains why I was getting pingback after pingback from these content mills as they copied and pasted the story exactly as it appeared on the Buzzfeed site, right down to using the same Getty photo (which I suspect Buzzfeed had to pay for the rights to use) from the original story.

Each of the links I followed (close to 20 now, and they keep coming in) was the same. No additional context. No editorializing. No opinion on the story. Just a straight copy and paste of Katie’s story onto their site.

What was worse is that Katie – who did the original work – isn’t even attributed as the original author of the story. On most sites the content is posted by “admin” or “editor” or some other anonymous title. But in some cases, there are other people taking credit for Katie’s work, like Michael Blythe, if that is indeed your real name.

While I have had content from my own blog scraped and farmed in the past, I haven’t seen it happen quite this quickly and at this scale.

For journalists, these must be both exciting and terrifying days. You now have a potential audience reach unheard of in human history. Exciting. But publish online and your work will be stolen and quickly capitalized on by others. Frustrating.

I don’t greet the disruption of journalism with glee. And I’m not justifying the theft of content, but in a digital world content will be copied; it is inevitable. If your business model is dependent on advertising revenue derived from driving traffic to your site with original content, you are in trouble.

I don’t know what the answer is, but I do know that trying to stop this from happening is like playing whack-a-mole.

 

Dear EdTech Conferences. Try harder.

Got a notice today of an upcoming conference being put on by IMS Global called the Learning Impact Leadership Institute. I went to check out the website to find out more about the conference and saw the lineup of the eight confirmed plenary speakers.

Notice something?

Yep. All white. All male.

Now, just to be clear, this is not about the individuals on the panel. I don’t know any of these men personally, although there are names I certainly recognize and people whose work I follow. So, I don’t want this to come across as a criticism of the people in the photos or their work. It is definitely not that.

But it is a criticism of the event and the organizers who, in this day and age, cannot seem to find a single woman or person of colour to include.

In this case, I find the offense even more egregious because this event is being presented as a leadership event, and the message this sends is that the only people who are eligible to become members of the edtech leadership elite are white men.

That message gets reinforced even stronger when you scroll to the bottom of the page and see who the organizers have decided to include as representatives to provide testimonials about the conference.


Yep, three more white guys (again…this is not a criticism of these individuals).

This conference is a definite candidate for the All Male Panels Tumblr (thanks Tara for reminding me).

We need to do better.


Yeah, I know it’s 2016….

 

 

Bring on the festival

This year the BC post-secondary system is trying something new with conferences. Instead of multiple small conferences, there is going to be an uber-conference called the Festival of Learning, June 6-9 in Burnaby.

The Festival brings together a number of smaller events that BCcampus has supported over the years, including the Open Textbook Summit, ETUG, the Symposium on Scholarly Teaching & Learning, and the BC-TLN Spring Gathering. The Festival is being organized by the BC Teaching & Learning Council.

The idea behind the Festival was to bring all these different groups together in one place at the same time to provide some space for collaboration and co-mingling.

The challenge in doing this is to do it in a way that the uniqueness of each singular event – what made it important and special to its particular community – isn’t lost in a larger event. So far, from the draft program schedule I have seen (being part of SCETUG this year and helping to coordinate some of the ETUG part of the conference), the Festival organizers have done a good job of pulling it together and maintaining space in the Festival for each of the different groups to flourish. You can see this reflected in small ways (the way all groups are represented on the general call for proposals page, for example) and in larger ways, with each group having its own program committee.

I’m quite looking forward to the week in Burnaby, and think this is going to be a massive teaching and learning event for our system.

If you have attended any of these events in the past, then you’ll want to mark June 6-9 on the calendar. If you haven’t, then this year will be a great time to join BC post-secondary faculty, educational technologists, instructional designers, and others involved in EdTech & SOTL in BC at the Festival. The call for proposals is open now until March 16th. Keep an eye on the website for more information.

The Festival runs June 6-9, 2016 at both the Delta Villa Hotel and BCIT in Burnaby, BC.

 

 

Privacy & Security Conference

Spent last week at the 17th annual Privacy and Security Conference in Victoria. The event is put on by the BC provincial Office of the CIO & Ministry of Finance. What follows are some notes from the sessions I took in.

Overall, the conference was better than I expected, although I found the huge number of vendors and vendor presentations disconcerting. The vast majority of attendees at this conference are from government ministries and departments. As a bit of an outsider, I was troubled by the amount of prime time given to the likes of Oracle, IBM and Microsoft to pitch directly to those in government who make the decisions around IT, privacy and security. There were many problems raised that – surprise – there were solutions to. I’m not naive enough to believe that there isn’t a cozy relationship between government and big tech business, but seeing so much of the conference as a sales pitch to government raised the ick factor for me more so than the usual conference vendor presence does. I hope that, at the very least, BC taxpayers made a chunk of sponsorship cash from the conference.

That said, there were some good sessions. My interest was more on the privacy side than the security side, so I passed on a lot of the security bits and stuck with mostly privacy sessions.

The first day was dedicated to pre-conference half-day workshops, and the two I attended (Privacy & Ethics, and Privacy Governance) were perfect primers for me coming into a new role that will have privacy and FIPPA as an integral component of the work I’ll be doing.

Privacy is a fairly new societal concept. It wasn’t until the 1890s that the idea of personal privacy as a right began to appear in legal journals, driven by the new information technologies of the day (the party line telephone and postcards). Interesting to see how technology remains the primary driver behind privacy discussions today.

“Privacy is contextual” was a recurring message throughout many of the governance and legal sessions I attended, meaning that, while there is both constitutional and common law around privacy, there is still room for interpretation.

The legislation in BC is driven by some key principles of privacy governance – that the right information is gathered and used by the right person at the right time for the right purpose and in the right way. Practically speaking this means taking measures to ensure that you (as someone collecting personal information) only collect what you need for the purpose you need to collect it for, and only use that data for the purpose you collected it for.

Keynote: Richard Thieme

Richard Thieme did a good keynote on day one, although the title of his talk The Porous Borders of the Modern Imagination: Privacy, Trauma and Mass Media led me to believe there would be some critical analysis of the role of the mass media in shaping the narrative of security, privacy and state surveillance. It never materialized. But the keynote was enjoyable as Thieme provided some historical context around privacy that helped frame the themes of the rest of the conference for me. He also reminded me of how powerfully right McLuhan was when he said (to paraphrase), “we look to the future through a rearview lens”, and how that lens is both comforting and problematic.

ISO 27018

Chantal Bernier (former Privacy Commissioner of Canada) introduced me to the international code of practice for personally identifiable information in public clouds, also known as the ISO 27018 standard. It’s a fairly new standard from ISO, but I can imagine we’ll begin to see this certification stamped on all manner of services from IT companies offering cloud services. I wonder if this standard may be under consideration by the BC government as they review the current FIPPA legislation?

The TPP and BC’s FIPPA

BC Privacy Commissioner Elizabeth Denham did touch on the current FIPPA review (which a number of educators and educational technology groups have contributed briefs to). The big point in Denham’s talk that jumped out at me was her belief that BC’s privacy laws around local storage of data will withstand a trade challenge should the TPP, with its clause allowing the free flow of data across borders, be ratified in Canada.

Sketchnoting my way thru the conference

I tried something different this conference. Rather than firing up my laptop and taking part in the backchannel (which, whenever I checked, was crickets considering there were something like 700 people at the conference), I decided to work on sketching some notes during the talks I attended. I have to say, I loved doing this. I found I paid closer attention to the speakers, and my brain had to work hard to organize concepts and thoughts on the fly. I can see the appeal and will definitely be doing this again in the future.

(Photos of my sketchnotes from the conference.)


Putting tools into the hands of faculty with CASA

I’ve been feeling really good about the direction my new role at BCcampus is going. I am in a stage of work where I am feeling creative and energized, scanning the horizon and researching new stuff.

One of the projects I’ve been thinking about (and writing about) is the work with Sandstorm and the BC OpenEd Tech group, and trying to align the work of that group (and specifically with Sandstorm) with a broader vision for my role at both BCcampus and within the system.

What is emerging is a vision that sees me facilitating getting new educational technology into the hands of many people to try, and helping with the evaluation of that technology to see where (or if) it aligns with teaching and learning. Which is why I am liking Sandstorm: it looks like one way to get new tools into the hands of educators to try.

Another tool that I’ve been looking into is an IMS Global tool called the Community App Sharing Architecture (CASA). CASA is conceptually similar to Sandstorm in that they both share the same end goal of making it easy to deploy applications. But it does differ from Sandstorm in a few ways.

First, it is designed to work primarily with an LMS and is focused on deploying LTI-enabled apps within an LMS, as opposed to Sandstorm, which focuses on stand-alone applications outside of the LMS. The idea is that you can have an app-like “store” within the LMS from which users can deploy apps that integrate with the LMS.
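If you haven’t poked at LTI before, here is a rough, simplified sketch of the kind of launch an LMS sends to an LTI tool. This isn’t CASA code and the values are made up; it’s just to show the sort of plumbing CASA is trying to make easier to discover and wire up from inside the LMS.

```python
# Simplified sketch of an LTI 1.x "basic launch": the LMS POSTs an
# OAuth 1.0-signed form to the tool's launch URL with parameters that
# identify the course, the user and their role. All values here are
# illustrative placeholders, not anything CASA-specific.
launch_params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course42-assignment7",   # where in the course the tool was placed
    "user_id": "opaque-user-123",                  # an opaque ID, not the student's name
    "roles": "Learner",
    "context_id": "course42",
    "context_title": "Example Course",
    "launch_presentation_return_url": "https://lms.example.ca/return",
}
# In a real launch the LMS signs these parameters with its consumer
# key/secret before POSTing them to the tool.
```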

But it isn’t limited to the LMS. A CASA app store can be mobile focused as well, as this UCLA example is, with a mix of apps and dashboards optimized for mobile devices. There was also talk in a webinar I watched about sharing analytics (perhaps connected using Caliper), but that seems to be at a pretty conceptual level right now.

The CASA architecture is also interesting in that it enables different institutional app stores to be connected to each other in a network of trust, with metadata about the apps shared between institutions. What makes that especially interesting is that CASA can use it to share reviews of the apps between trusted nodes of the network.


Screenshot from CASA webinar (link to archive of webinar is below)

This is an example of what a future CASA app review could look like. Faculty reviews of an app from one CASA-enabled institution can flow through the network and be available to other members of the trusted network. This aids the discoverability of new applications and can help instructors separate the wheat from the chaff. As the number of edu applications continues to explode (the EduAppCenter currently has over 220 LTI-enabled apps in its store), both discoverability and peer reviews from trusted networks are important filters, as anyone who has developed a PLN can attest. CASA has the potential to enable another technology filter by leveraging the reputations in a network of trust.
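To make the trust-network idea a bit more concrete, here is a hypothetical sketch of how one institution’s app store might pull app metadata and reviews from its trusted peers. To be clear, this is not the actual CASA protocol or API; the endpoint path, field names and peer URLs are all invented for illustration.

```python
# Hypothetical sketch only - not the real CASA API. The idea: each
# institution publishes metadata (including reviews) about its apps,
# and peers it trusts pull that metadata into their own store.
import requests

TRUSTED_PEERS = [
    "https://apps.example-university.ca",   # made-up peer app stores
    "https://apps.example-college.ca",
]

def fetch_peer_catalogue(peer_url):
    """Pull the (hypothetical) public app catalogue from a trusted peer."""
    resp = requests.get(f"{peer_url}/casa/apps.json", timeout=10)
    resp.raise_for_status()
    # e.g. [{"title": "...", "launch_url": "...", "reviews": [...]}, ...]
    return resp.json()

def merge_peer_reviews(local_store, peers=TRUSTED_PEERS):
    """Attach reviews from trusted peers to matching apps in the local store.

    local_store is assumed to be a dict keyed by each app's launch URL.
    """
    for peer in peers:
        for app in fetch_peer_catalogue(peer):
            local_app = local_store.get(app["launch_url"])
            if local_app is not None:
                # Reviews travel with the metadata, so faculty at one
                # institution can see what faculty elsewhere thought.
                local_app.setdefault("reviews", []).extend(app.get("reviews", []))
    return local_store
```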

Right now, CASA is still a beta tool. But it does look like an interesting technology that could make the deployment of edu-focused applications easier for end users, while giving them some guideposts as to how useful those tools might be.


PayPal no pal of mine

PayPal has locked up money in my PayPal account for over a month, and they are not giving it back. All because I made the mistake of using the word “Syrian” in a PayPal transaction.

On December 15th my daughter came home and said that her class was raising money to support a Syrian refugee family resettling in Victoria. We sent the notice out to the people you usually hit up for these kinds of kid classroom fundraising activities – our family, a few of whom live out of town.

The last day of classes before Christmas break was December 18th, and my daughter needed to have all the money in to the school by then. To get their money to us quickly, I decided to set up a donation form on a private page on my blog and have family members send me the money; I would then write a cheque to the school to make sure we met the deadline. On the form, I needed to have a description line for the PayPal transaction. I used the phrase “Maggie’s Syrian Fundraiser” (Maggie is my daughter’s name). Her aunt, 2 uncles, & grandfather made donations.

On Dec 17th I received the following notice from PayPal:

Dear Clint Lalonde,

As part of our security measures, we regularly screen activity in the PayPal system. During a recent screening, we noticed an issue regarding your account.

PayPal is committed to complying with and meeting its global regulatory obligations. One obligation is to ensure that our customers, merchants, and partners are also in compliance with applicable laws and regulations in their use of PayPal.

To ensure that activity and transactions comply with current regulations, PayPal is requesting that you provide the following information via email to ComplianceTransactions@paypal.com.

1. Purpose of payment ********* made to you on December 16, 2015 in the amount of $50.00 CAD, including a complete and detailed explanation of the goods or services you are providing. Please also explain the transaction message: “Maggie Syrian Fundraiser.”

2. Please specify the Syrian Fundraiser will provide aid to the country of Syria, or if it will benefit those living outside of the country of Syria.

Please go to our Resolution Center to provide this information. To find the Resolution Center, log in to your account and click the Resolution Center subtab. Click Resolve under the Action column and follow the instructions.

If we don’t hear from you by January 01, 2016, we will limit what you can do with your account until the issue is resolved.

We thank you for your prompt attention to this matter. We apologize for any inconvenience.

Yours Sincerely,
Samantha
PayPal

Ok. So, obviously using the word “Syrian” raised a red flag. On December 18th, I emailed them my explanation.

Hi there,

My 11 year old daughter is doing a fundraiser at her school to help with the local resettlement of Syrian refugees in our city, Victoria, British Columbia, Canada. Recently, our federal government committed to accepting and resettling 25,000 Syrian refugees, and there are local fundraising efforts to help support refugee families resettling here in Victoria.

When we began fundraising, a few of our family members asked if there was a way to donate online. I have been a long time PayPal user so I told people to send me a PayPal payment and I would send the money on to the school. I created a PayPal button and stuck it on my personal blog. As of this morning, you should see 4 transactions in my PayPal account from our family members related to my daughters school fundraiser. These are from *****, *****, ***** and *****.

Specifically, to answer you questions.

1) “Maggie Syrian Fundraiser.” Maggie is my daughters name. The school is collecting money to donate to the Victoria Immigrant and Refugee society to assist with the local resettlement of Syrian refugees here in Victoria.

2) The money does not go to Syria. It stays in Victoria BC and will be used by our local Victoria immigrant and Refugee centre to support the local resettlement of refugees from Syria in Victoria BC.

I hope this response helps to explain the transactions. There may be one or 2 more coming thru this weekend from another aunt and grandfather, but I don’t anticipate many more transactions.

Regards,
Clint Lalonde

For good measure, I uploaded a copy of the letter to their dispute resolution center on the PayPal site, just to make sure that they had a copy on file and that my response didn’t get buried in some spam folder at PayPal, like the notices from PayPal usually do :).

I figured the explanation would clear things up.

Ha!

PayPal denied 2 of the transactions and tagged 2 others with “pending review”.  My account was restricted, and when I went in to try to figure out what to do to unrestrict the account, I was given no options.

On December 26th, I called PayPal and asked them why there were still 2 pending transactions in my account, why was there a restriction on my account, and what did I need to do beyond what they asked me to do to get these issues both cleared (credit PayPal – you CAN actually speak to a live person). I was put on hold. When the rep came back he said, “well, you have done what has needed to be done. I can’t see why this restriction is still in place and these transactions are still pending.” The call ended with him saying the restrictions and payments would be lifted in 72 hours.

January 4th. Still no resolution. I get a call from Maclean’s magazine after a reporter there spied a tweet of mine expressing my frustration with PayPal. He tells me I am not alone, and that other fundraising projects related to Syria have been blocked or rejected by PayPal. He writes an article in Maclean’s about the problems many of us are having with PayPal.

January 10th I send an email to Compliance.

There are still 2 payments in my PayPal account that have been marked as “Pending” since December 17, 2015.

Could you please advise me of whether those payments will be cancelled or approved?

Either way, I would like to get this money out of the Pending limbo that it is in with you guys, and have no idea how to do that, as I have received no further instructions as to what to do to clear up my account.

I believe I have sent you all the information you have asked for and, in a phone call I made to PayPal support on December 26, 2015, I was led to believe that this issue was cleared up and the holds would be removed from my account. That was over 2 weeks ago, and the 2 payments are still being held as “Pending” with you.

Can you please advise me if you need more information from me, or else release or deny these payments asap?

Thank you for your attention to this matter.

Clint Lalonde

No response.

January 18th – second call to PayPal. Again told that everything looked fine on their end and that the payments and restrictions would be lifted within 72 hours.

January 21 – It has been 72 hours. Payments still pending. Account still restricted. I call back. I am told that my issue is sitting in a backlog with compliance because “it is tax season” and that they will get to it in 72 hours.

Excuse me if I sound skeptical.

This is where we are today.

What a gong show.

I’d like to tie this back into something wider – some social commentary about how a big corporation reliant on data-driven decision making has lost the ability to distinguish well-intentioned actions from legitimate threats. I mean, hell, if I was going to launder money for some sort of subversive Syrian terrorist organization, the first thing I would do to hide my tracks is put the word “Syrian” in the description of a financial transaction. Being a money laundering international terrorist does not mean that I can forgo keeping well detailed and accurate books.

And part of me also wants to ruminate on what this might mean for me in the future. Not just being flagged in PayPal for suspicious activity, but even writing this blog post and using the word “Syrian” in it as many times as I have has likely got me onto who knows what list.

What if I try to cross the border? Will this silly screw-up somehow get me moved to the special room? I *think* I am being facetious with this line of thinking, but in my head I am both laughing at the ridiculousness of it and feeling the chill of unease as a little part of me wonders: have I triggered something bigger? Have I now been added by some smart/dumb algorithm to a no-fly list based on some stupid PayPal flag? I mean, someone getting accidentally added to “the special list” through no real fault of their own…that doesn’t happen in real life, does it?

Update: January 25, 2016. It is Monday, the day PayPal told me that my account would be fixed. Well, my account is still restricted and PayPal has not released the payments pending in my account.


Amazon Web Services coming to Canada

In a blog post on the AWS site, Amazon Web Services Chief Evangelist Jeff Barr announced that Amazon Web Services will be bringing their cloud computing service to Canada sometime this year.

This is potentially big news for edtech in Canada where our privacy laws have hindered the use of cloud based services where personal data may be stored outside of the country.

These days, it’s hard to find scalable edtech infrastructure and services that are not built on AWS (or other) cloud services, and having data stored outside of Canada has traditionally been a barrier to adoption for Canadian institutions. Not a deal breaker, as there are ways to mitigate and still be compliant with privacy laws through informed consent, etc. But for many, the P.I.A. (Privacy Impact Assessment) is a P.I.A., and enough of a barrier that it has hindered the use of cloud based services.

For an edtech example, Canvas has had very little uptake in Canada because it is built on AWS.

Of the 25 public post-secondary institutions in BC, there is only a single institution using Canvas, and they are self hosting to work around the data storage issue. With a regional offering of AWS in Canada, I would expect to see a company like Instructure bring Canvas north of the border soon, and it being a serious contender for institutions undertaking LMS reviews.
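To give a sense of why a Canadian region matters practically, here is a minimal sketch of pinning data to a single AWS region using boto3. The region name below is a placeholder, since Amazon hasn’t announced what the Canadian region will actually be called; the point is simply that keeping stored objects in-country becomes a configuration choice rather than a consent exercise.

```python
# Minimal sketch, not a recommendation: once a Canadian AWS region exists,
# keeping data in-country is largely a matter of pinning resources to it.
# "ca-central-1" is a guess at the region name; Amazon hasn't announced
# the actual identifier yet. The bucket name is also made up.
import boto3

s3 = boto3.client("s3", region_name="ca-central-1")

# A bucket created with a LocationConstraint keeps its objects in that
# region's data centres unless you explicitly replicate them elsewhere.
s3.create_bucket(
    Bucket="my-institutions-course-data",
    CreateBucketConfiguration={"LocationConstraint": "ca-central-1"},
)
```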

While the release does not explicitly state that the service will be compatible with all the different provincial and federal privacy laws, it’s hard to imagine Amazon rolling out services in Canada that are not as compliant as possible. Indeed, privacy compliance with federal and provincial laws would be one of the biggest selling points for the service in Canada, as PCWorld notes:

Having a dedicated Canadian region will be important for organizations that need to comply with the patchwork of regional data protection laws Canada has, which requires the storage of some types of data inside Canada, depending on where the storer is located.

Although the question of “does legislation actually make a difference to where data is stored in an interconnected world?” hangs in the air, with many seeing these regulations as doing nothing but providing the illusion of data protection for citizens.

And who knows, the TPP may get ratified in Canada and then it is a different data protection game altogether, as the TPP clause on the free flow of data between member countries would put it at direct odds with provincial & federal privacy laws. And while edtech might win with the TPP in that we get better access to more cloud services, I have real concerns about what the cost to the rest of our society might be.

Addendum

Shortly after I posted this, Scott Leslie tweeted in response that even if the servers are located in Canada, there is still the question of where the parent company is located.

Photo: Sensitive Data sign, Freegeek, Portland, Oregon, USA by Cory Doctorow CC-BY-SA