Post-Content LMS (Final)

April 2012


Executive Summary

Historically, Learning Management Systems (LMS) were developed to help professors and institutions “publish courses on the web”. As advances in Web 2.0 have made some of these features less critical, LMS’s are being re-envisioned as centers for the analysis, management, and use of learning process data and artifacts. This paper explores five concerns such a data-centric LMS will need to address, and relates them to current concerns of Keene State. These concerns are:

Assessing Outside of the Walled Garden
Using Learning Analytics to Improve Student Success
Supporting Just-in-Time Teaching & Mobile Use
Improving Outcomes Alignment and Tracking
Maintaining a Record of Student Work (ePortfolios)


Assessing Outside of the Walled Garden

When Blackboard debuted its flagship product in the mid-90s, numerous gradebooks and classroom management tools were already available to teachers. What Blackboard offered was a way to publish portions of a class on the web (including readings and quizzes) and organize online class announcements. Early advertisements for Bb, WebCT, Prometheus, and other systems highlighted that professors could publish course materials “without knowing HTML”. Over the years, LMS providers added more features, but the core product remained a publishing tool that allowed faculty to put syllabi, announcements, quizzes, and other course material up on the web with little effort. As a further bonus, these materials were password protected inside the “walled garden” of the LMS, preventing people outside the class from accessing them, and allowing student activity to take place in a safe place segregated from the larger internet.

With advances in personal publishing and social networking technology in the mid-aughts, the challenge of course communication and content posting was largely eliminated. A variety of tools (blogs, wikis, YouTube, Slideshare) solved the self-publishing problem more elegantly than any LMS could, and did so often for free. In this new environment, the LMS began to be perceived as a barrier – whereas embedding a YouTube video in a blog or wiki page was a copy and paste procedure, embedding a video in Blackboard several years ago was a painful, error-prone process. While other Web 2.0 products allowed people outside the class (including authors and professionals) to participate in class discussion, Blackboard remained painfully closed. While other non-LMS platforms made sharing and persistence of materials easier, Blackboard made this more difficult.

Even the safety of the “walled garden” came under critique: experiments showed that students did better when publishing on the open web. They spent more time on projects, became more authentically engaged, and began to anticipate the reactions of a real audience to their work. The cheap Web 2.0 tools had become, in many ways, far better learning tools than the LMS-provided alternatives. As an additional benefit, as Web 2.0 began to penetrate the enterprise, these also became the tools that students were most likely to use to manage learning and communication after college. Many researchers and instructional designers pointed out that these suites of cheap or free tools combined with “real-world” resources represented an organic Personal Learning Environment or “PLE” that could be used to solve any problem, in contradistinction to Learning Management Systems, which represented inauthentic, class-bound constructs.

Diagram of a PLE, Jared Stein (2008)

What has become clear over the past eight years is that, in the area of publishing and collaborative tool creation, the LMS market cannot compete with the innovation and utility of the more general social networking market. Nor should it. Today, many of Keene State’s best classes are experimenting with wikis, blogs, and other Web 2.0 products that fit well into a constructivist paradigm and have demonstrated pedagogical value. As students learn to use these tools they are learning more than skills for class – they are learning skills for life.

Because of this, the LMS of the future must integrate with current low- and no-cost networking tools such as Slideshare, Google Docs, and Live.com rather than build competing (but largely inferior) products.

The publishing function of the LMS will continue to be useful, particularly for organizing class announcements, maintaining records of student-teacher communication, and providing private spaces for core assignments. But in a world where student publication happens in many different places, the Post-Content LMS will have to act not only as a collection of tools, but as a harness for external tools, one that helps bring coherence to their use and supplies (where needed) an assessment framework that can capture and store graded artifacts. Such a scheme will open up the power of Web 2.0 to the classroom while retaining the accountability, coherence, and occasional privacy of LMS-run activities.

(For a concise treatment of the history of the PLE in education and the difficulties the traditional LMS has with assessing PLE activities, see Jon Mott’s excellent Envisioning the Post-LMS Era in Educause Quarterly, Volume 33, Fall 2010).

Using Learning Analytics to Improve Student Success

If you go to Amazon.com, the website will suggest products that you might want to buy based on your previous purchases. Netflix tries to discover what film genres you like by looking at your rental history and evaluating your completion rate of web-delivered video. The supermarket you go to places certain items in the same aisle because analysis of receipt data tells them people tend to buy those items together. Target even predicts whether you are pregnant from sudden increases in lotion purchases, and sends you relevant coupons.

Analytics is the use of naturally occurring data to optimize decision-making and environmental design. While some applications are decidedly creepy, analytics can provide powerful answers to persistent problems encountered by higher education. What if we could identify which students were likely to drop out of school in the first year – by the first week? What if we could tell which of our students were likely to fail the mid-term before the first quiz? In many of these cases, given warning, we could stage interventions at a point when interventions are still likely to be effective.

Learning analytics is potentially even more powerful when matched with outcomes-based activities. If Netflix can suggest which movies you might like based on your preference for Zoolander, an advanced Learning Management System should certainly be able to inform you that 86% of students who have struggled with Outcome X “found the following video useful”.
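The co-occurrence logic behind such a suggestion is not exotic. The sketch below shows one minimal way it might work; the event log, resource names, and the “found helpful” signal are all invented for illustration, and a real system would mine actual LMS event data:

```python
from collections import Counter

def recommend_resources(help_events, outcome, min_share=0.5):
    """Given (student, outcome, resource) records of resources students
    marked helpful while struggling with an outcome, return resources
    ranked by the share of struggling students who found them useful."""
    students = {s for s, o, _ in help_events if o == outcome}
    if not students:
        return []
    counts = Counter(r for s, o, r in help_events if o == outcome)
    ranked = [(r, n / len(students)) for r, n in counts.most_common()]
    return [(r, share) for r, share in ranked if share >= min_share]

# Hypothetical event log: 7 students struggled with "Outcome X";
# 6 of them flagged the same video as helpful (~86%).
events = [(f"s{i}", "Outcome X", "video-042") for i in range(6)]
events += [("s6", "Outcome X", "practice-set-3")]
print(recommend_resources(events, "Outcome X"))
```

Real deployments would need to weigh recency, sample size, and selection bias, but the underlying computation is a frequency count like this one.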

Organizations such as Khan Academy are working on complex formulas to determine how to automate tutor-like guidance, but initial forays into learning analytics suggest even simple measures can have a profound impact. As an example, consider the following diagram, which shows participation of students in an online forum (from University of Wollongong’s SNAPP project):


For a large forum like this, it might be very difficult to spot student nonparticipation or to reward students who have made themselves central to the community. This diagram shows at a glance which students are deeply engaged in the forum, which students are peripheral to the network, and which students have not engaged at all. It distinguishes students who may be engaging in “drive-by replies” from students legitimately engaged in conversation. In the future, tools like this will aid selective intervention. An instructor, acting as a facilitator, can talk to students who are peripheral or disconnected and see what it might take to pull them deeper into the community, without having to read and grade every forum response.
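The computation behind such a network view is itself quite simple. The sketch below, with invented names and an arbitrary threshold, buckets students the way a SNAPP-style diagram does – core, peripheral, or disconnected – from nothing more than a log of who replied to whom:

```python
from collections import defaultdict

def classify_forum_roles(replies, roster, core_threshold=3):
    """Bucket students by connectedness in the forum reply network.
    `replies` is a list of (author, replied_to) pairs; a student's degree
    is the number of distinct classmates they have exchanged replies with."""
    partners = defaultdict(set)
    for a, b in replies:
        partners[a].add(b)
        partners[b].add(a)
    roles = {}
    for s in roster:
        degree = len(partners[s])
        if degree == 0:
            roles[s] = "disconnected"
        elif degree >= core_threshold:
            roles[s] = "core"
        else:
            roles[s] = "peripheral"
    return roles

# Hypothetical reply log for a six-student forum.
replies = [("ana", "ben"), ("ana", "cho"), ("ana", "dee"),
           ("ben", "cho"), ("eli", "ana")]
roster = ["ana", "ben", "cho", "dee", "eli", "fay"]
print(classify_forum_roles(replies, roster))
```

A production tool would use a proper graph library and visual layout, but even this degree count surfaces the disconnected student ("fay" above) that a busy forum view hides.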

In this next example, Purdue’s Signals program uses custom software to mine a Blackboard backend and identify courses in which a student is likely to encounter trouble based on a model that relies both on grades in the LMS and analysis of logins as an indication of student effort in the course:


And while such systems may seem reductive, they are highly useful. Student logins to an LMS (both their amount and pattern) can predict student success in a course or set of courses. A study by David Wiley of student logins and use of the LMS revealed what has been referred to as the “waterfall” pattern:


The waterfall visualization is not meant to be a real-time dashboard item, but is instead a demonstration of the relationship between time spent in an LMS and student success. The vertical axis represents the percentile ranking of these students’ final grades for a semester. Each dot along a row represents one day’s login, with the lack of a dot representing no login. The darkness of the dot represents time spent on graded activities in the system that day. And you can almost see the student profiles: the diligent advanced students at the top, followed by the smart late-semester slackers. Below them, perhaps, the diligent B+ students. Unexpected patterns become obvious: even the best students tend not to work much on weekends, and working over Thanksgiving break is actually negatively associated with success, perhaps because so many struggling students use the holiday as a time for a last-ditch effort, or perhaps due to make-up opportunities assigned to underperforming students. But with all the students together, the broad pattern is unmistakable, as Wiley notes in his explanation of the graph:

We call this visualization the Waterfall because the drops have all but evaporated away by the time you reach the bottom of the image (meaning that students with lower final grades spend much less time on their work), reinforcing what we know about the relationship between time-on-task and academic performance.
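The data structure behind such a graph is easy to reproduce: one row per student, ordered by final-grade percentile, with a mark for each distinct day the student logged in. The sketch below uses an invented login log and invented percentiles purely to illustrate that layout:

```python
def login_days(log, student):
    """Count the distinct days on which a student logged in."""
    return len({day for s, day in log if s == student})

def waterfall_rows(log, percentiles):
    """Order students by final-grade percentile (top first) and pair each
    with a distinct-login-day count, mimicking one row per student."""
    order = sorted(percentiles, key=percentiles.get, reverse=True)
    return [(s, percentiles[s], login_days(log, s)) for s in order]

# Invented login log: (student, day-of-semester).
log = [("amy", d) for d in range(1, 60, 2)]     # logs in every other day
log += [("bob", d) for d in (1, 2, 3, 50, 51)]  # early burst, late cram
log += [("cal", d) for d in (1, 30)]            # rarely logs in
percentiles = {"amy": 95, "bob": 60, "cal": 20}
for row in waterfall_rows(log, percentiles):
    print(row)
```

Sorted this way, the "drops evaporating" toward the bottom of the table is just the login-day count shrinking as the grade percentile falls.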

From a research perspective this is fascinating. More importantly, however, such insights can be used to provide what ASU (inspired by Thaler’s research in behavioral economics) calls Nudge Analytics – the use of student-facing analytics to indicate the likely consequences of current behavior and suggest more profitable courses of action.

The logical hub of such data gathering and presentation is the LMS, and providers have already begun to put together student-facing analytics. The graphic below is a mockup of student-facing functionality anticipated in the Canvas core product in the next few months:


The key to understanding the impact of the above interface is its student-facing nature. One insight from the waterfall visualization is that even the worst students log into the LMS occasionally. What if instead of being greeted with class announcements they were shown this? Might this give the LMS user the sense of self-efficacy that is so often missing in low-performing students?

Part of it certainly depends on the implementation, as promoting self-efficacy requires that students not only see their grade, but understand the way their actions have impacted their grade. In the mock-up above, inputs into the grade are emphasized over the student’s grade relationship to peers. Progress on outcomes is also highlighted.

This is a notable improvement over the current LMS approach to grading. Students currently use LMS’s to check their grades frequently, but a focus on raw grades can often undermine the self-efficacy of the most at-risk students, leading to the paradox that many students who obsess about grades do little to change them. The key to an impactful presentation of performance is to remove the student’s temptation to see grades as either an indication of their intelligence or as a product of the difficulty of the class or the professor. A more holistic view, one that values inputs as much as outputs, can help students understand not only where they are in the course, but how they might get to where they want to be.

Analytics can also be used at the instructor level to identify at-risk students:


And can provide administrators a quick overview of different programs:

As we aim to assist at-risk students and increase our graduation rates to meet national targets, such support within an LMS (and particularly the student-facing piece) will become essential. As important (though perhaps outside the scope of this paper) will be the existence of an open API that will allow us to pair data like this to demographic risk data and student history in more general student information systems.

(For more on the “Coming Third Wave” of Learning Analytics, see Learning Analytics: The Coming Third Wave, an ELI Brief from April 2011)


Supporting Just-in-Time Teaching & Mobile Use

For most of its history, the LMS has been considered a tool to provide access to course materials and out-of-class activities. With the shift of the LMS from an online content repository to a data and activity hub this is likely to change. The Post-Content LMS is also an In-class LMS.

There are numerous reasons to suspect that in-class use of Learning Management Systems can have a positive impact on student learning. I have detailed just a few of them below.

Mobile Devices, Tablet Computing, and BYOD. More and more students are turning up to classrooms with laptops and smartphones. Additionally, tablet ownership is on the rise. In 2011, estimates of student tablet ownership ranged from 8%-12% (ECAR 2011, other sources), and recent information suggests the number of student tablet users is growing rapidly, potentially having tripled in the past 12 months.

As Bring-Your-Own-Device (BYOD) and mobile use become realities, it is likely that more and more classes will mediate some in-class instruction through the LMS. For instance, currently many classes engage in group and project work in class using worksheets or similar prompts. With in-class LMS use these worksheets can become virtual, executed on tablet or laptop devices, and tied to the student record, or even specific departmental outcomes. In situations where attendance takes too much class time, the LMS may be used to register student presence. Participation grades can be justified through work logged in the LMS, and basic analytics can be used to see if classroom activities are resulting in increased conceptual understanding.

Most importantly, in-class use of the LMS may draw the lower-performing students into low-stakes participation in a way that both classwork and lecture often fail to do.

This is not to say that every class will become a highly LMS-mediated event. But in cases where a professor wants to capture classwork, analyze progress, or draw low participation students into the discussion, the LMS will be available to provide that support.

From ECAR 2011


Just-in-Time Teaching (JITT). Some of the most impressive impacts on student learning we have seen in the SoTL literature recently have been related to a methodology called Just-In-Time Teaching (JITT). In the broad application, professors test students either directly before class or in class in real time to see where their recall and conceptual understanding are weakest, and then use the class time to focus on those specific deficits. For instance, students can be asked to apply pre-class reading in clinical psychology to a specific triage problem presented to them at the beginning of class:

John, a 24-year-old college-going male, shows up at a clinic with concerns that his roommate is plotting to kill his cat, having been directed to you after airing these concerns to campus safety. He seems intensely nervous, and suspicious of your involvement as a psychologist. What do you do first?

Ask John whether he feels this is a safe environment
Send him for a “meds consult”
Discuss with John his history with his roommate
Ask John if he knows who he is, where he is, and what date it is

Where such questions are asked in class (as opposed to directly before class), such activities are usually executed with “clicker” devices that allow students to vote for a response. After the students have voted, the professor looks at a proprietary piece of clicker software that shows the distribution of student responses via a bar graph or pie chart. If students can apply the reading to the novel problem, the professor is free to move on to the next subject. If not, the professor can use the student answers to launch into a class-wide discussion of the issue.

In a popular (and highly effective) variation of Just-in-Time Teaching called Peer Instruction (PI), students are presented with a question that the instructor estimates 50%-60% of students will get wrong. After voting the first time, they are asked to turn to a classmate and defend the answer they just provided. The classmate does likewise, and then they each re-vote.
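The tally-and-branch logic that a clicker system (or an LMS plug-in) performs after the first vote is trivial to express. The 30%-70% band below is the commonly cited PI heuristic, used here as an assumption rather than a figure from this paper:

```python
from collections import Counter

def tally(votes):
    """Return the share of each answer choice from raw student votes."""
    counts = Counter(votes)
    total = len(votes)
    return {choice: n / total for choice, n in counts.items()}

def next_step(votes, correct, lo=0.3, hi=0.7):
    """A common PI heuristic: if 30-70% answer correctly, pair students
    for peer discussion and re-vote; above that band, move on; below it,
    revisit the concept with the whole class."""
    share = tally(votes).get(correct, 0.0)
    if share >= hi:
        return "move on"
    if share >= lo:
        return "peer discussion, then re-vote"
    return "re-teach the concept"

# Hypothetical first vote on a four-choice question; "d" is correct.
first_vote = ["a", "d", "d", "b", "d", "c", "d", "a", "d", "b"]
print(next_step(first_vote, correct="d"))
```

Whether the votes arrive from dedicated clickers or from student phones via an LMS API, this is essentially the whole decision the instructor's dashboard has to support.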

Experiments with this method have shown exceptional learning gains for students, especially in areas where conceptual gains are often hardest. Effect sizes of PI interventions compared to normal lecture and class-wide discussion have ranged from 0.8 to 2.5, remarkable in a world where the average educational intervention impact is 0.4 (see the GTC whitepaper on John Hattie’s work on effect sizes).
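Effect sizes like those cited above are standardized mean differences (Cohen's d): the gap between group means divided by the pooled standard deviation. The scores below are invented solely to show the arithmetic, not drawn from any study:

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD."""
    nt, nc = len(treatment), len(control)
    mt = sum(treatment) / nt
    mc = sum(control) / nc
    vt = sum((x - mt) ** 2 for x in treatment) / (nt - 1)
    vc = sum((x - mc) ** 2 for x in control) / (nc - 1)
    pooled = math.sqrt(((nt - 1) * vt + (nc - 1) * vc) / (nt + nc - 2))
    return (mt - mc) / pooled

# Invented concept-inventory scores (out of 30) for illustration only.
pi_section = [22, 24, 18, 21, 27, 26, 19, 23]
lecture_section = [17, 22, 18, 13, 20, 24, 15, 19]
print(round(cohens_d(pi_section, lecture_section), 2))
```

On this invented data d comes out near 1.2 – a four-point mean advantage measured against a pooled spread of about 3.4 points – which is the sense in which a d of 0.8 to 2.5 is a large effect.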

Just-in-Time Teaching and Peer Instruction have been strongly endorsed as methods by the NSF, the NSTC, and other governmental and professional bodies, but the biggest impediment to in-class adoption of JITT methods remains the specialized hardware (clickers and base units) teachers need to tie student responses to the student record. The likelihood is that as BYOD increases, much of JITT will be mediated through student smartphones, tablets, laptops, and other non-clicker devices, and the student responses will be tallied either by the LMS or by a plug-in communicating with the LMS via an open API. It is worth anticipating such JITT use as one of the primary vectors through which BYOD will be explored, and any LMS will have to accommodate this use either directly or indirectly.


Peer-grading

It has become widely accepted that maintaining college affordability requires better operations at scale, and that one of the primary classroom bottlenecks to scalability of instruction is providing student feedback. If there is one great economic problem at the heart of higher education, it is that effective learning environments are rich in feedback, and feedback is expensive.

As such, there has been renewed interest in peer-grading and self-grading approaches to assessment. Such grading practices, when coupled with instructor guidance and occasional instructor-executed assessment, can radically increase the amount of formative feedback students receive while decreasing the time commitment faculty must make to grading. Peer-grading has also demonstrated substantial impact on student learning, in many cases out-performing instructor feedback. These issues require that institutions adopt an LMS that allows for peer-grading as well as more traditional instructor-provided grading.

Ideally, the LMS could improve such methods by making the process truly anonymous, allowing students to grade work across different sections of the same class, and allowing for multiple reviewer schemes. As always, when such grading can be tied into analytics, it will allow the instructor to quickly spot any anomalies of grading or commenting and address them directly. Such functionality could also help students address concerns about the fairness of the process, and lead to a wider adoption of the practice.
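A rotation scheme makes anonymous multi-reviewer assignment both easy to implement and easy to audit: every submission gets exactly k reviewers, every student reviews exactly k submissions, and no one reviews their own work. This is a generic sketch, not a description of any particular LMS's implementation:

```python
import random

def assign_reviewers(students, k=2, seed=0):
    """Assign each submission k anonymous peer reviewers by rotation:
    shuffle the roster once, then student i+j (mod n) reviews student i's
    work for j = 1..k. Since k < n, no one is assigned their own work,
    and every student both receives and performs exactly k reviews."""
    order = students[:]
    random.Random(seed).shuffle(order)  # fixed seed keeps the demo repeatable
    n = len(order)
    assignments = {s: [] for s in order}
    for j in range(1, k + 1):
        for i, author in enumerate(order):
            assignments[author].append(order[(i + j) % n])
    return assignments

students = ["ana", "ben", "cho", "dee", "eli"]
for author, reviewers in assign_reviewers(students).items():
    print(author, "->", reviewers)
```

Because the rotation is deterministic given the shuffle, an instructor (or an analytics layer) can reconstruct who graded what when anomalous scores need investigating, while the students themselves see only anonymized work.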

Integration of the in-class record with the LMS

A final note worth making – a point that ties the examples above together. If the research in such works as Academically Adrift is to be believed, the majority of time our students spend on task occurs in classrooms, not outside of them. As such, any formative or summative assessment of SLOs or Program Level Outcomes must engage with in-class work to be truly useful.

Whether all of the above applications will see general use in the near future is up for debate, but the general trend is not: as we move forward, more and more data will be collected (and is currently being collected) from in-class activity, and the LMS is the likely vehicle for that collection and analysis. In the end, the strength of the analytics and reporting will probably not be as important as the ubiquity of the data collection – most current attempts to make education more data-informed are hindered not by underpowered analytics, but by a general thinness of data. Collecting and acting on data gathered in the classroom represents one way to remedy this.


Improving Outcomes Alignment and Tracking

While use of institutional and programmatic outcomes has increased in higher education in the past decade, there is still a large disconnect between the outcomes as expressed on an institutional and programmatic level and how they are expressed at the micro-level in assignments, projects, and course artifacts.

While there are multiple problems with programmatic and institutional alignment, one of the biggest hurdles is lack of meaningful participation in assessment activities. Most courses – or at least most good courses – are always a bit in a state of flux. Most professors could use more class time, more time to tinker, to re-write, to clarify wording of test questions, or finally tweak the assignment that has been misfiring the past couple of semesters. When concerns that will impact current students in a course are balanced against support for institutional initiatives, institutional initiatives tend to lose.

One obvious solution to this dilemma is to “align the alignment process”. Instead of seeing institutional demands as competing with course commitments for scarce resources, the alignment process can be reconfigured to realize mutual gains for both the institution and faculty.

This is a complex concept in the abstract, but a simple idea in the concrete. Consider, for example, the definition of a set of outcomes by a program in which a professor teaches. By themselves, those outcomes do not help any student in the professor’s class succeed. In fact, while they may guide the design of the course, they are invisible to both the professor and the student through much of the year.

Modern LMS’s aim to change this by making the outcomes visible pieces of the course design and using them to reduce faculty workload. An example of such a system is shown below.


Although it may be hard to see in the above screen capture, the instructor is in a question bank for quiz questions that test a certain outcome. By aligning it formally with the outcomes the system inherits from the program and institution, benefits accrue to the student and the instructor. For instance, students can see how they have struggled with or mastered particular outcomes over the duration of the course (expected functionality 2012, see bottom of mockup below):


This helps the instructor by giving the student automated feedback and alerting the student to what they may want to work on, but since it also assigns the results of that question to that outcome at the institutional level, it creates the possibility for institutional assessment that works seamlessly with formative assessment.

“Aligning alignment” can also mean reduced work for professors. In the example below, an instructor uses a set of inherited criteria to quickly create a rubric for an assignment:

By making sure outcome alignment results in tangible, near-term course benefits for both student and instructor, institutions may finally be able to close the gap between the stated intentions of their programs and the real world implementation of them.

Maintaining a Record of Student Work (ePortfolios)

Early LMS’s were conceptualized as a way to get classes online, and very little thought was given to preserving student work past the 15 weeks or so of the class. The early Blackboard dropbox operated, as the name implies, as a method of delivering documents, not of storing them. And various licensing schemes discouraged maintaining student work any longer than needed by the professor.

This compartmentalization has been a loss for both students and the college. Students submit increasing amounts of work electronically, but are denied access to that work in later classes. Colleges also have trouble evaluating the progress of a student across multiple classes. Many capstone classes encourage students to create a summary of their work in the program along with reflection – but often the work the student has done is lost to the student by that time.

To solve this problem, a number of ePortfolio solutions were developed in the early aughts by a variety of vendors. Some solutions, like Blackboard’s, plugged into the LMS, allowing students to directly assemble electronic portfolios from work that had been submitted into the LMS. Others took a separate approach – producing enterprise software that allowed students to build well-designed externally facing portfolios suitable for viewing by a future employer. Still others focused on the institutional assessment issues addressed by portfolios.

To consider the LMS’s role in the ePortfolio world, it is useful to break the problems ePortfolios are trying to address into component parts: student needs, teaching needs, and institutional needs.

Student Needs

1. The review problem: Students need access to previous work for the purposes of review.
2. The bigger picture: Students need to see how early coursework relates to later coursework.
3. The portfolio problem: Students need access to previous course materials to construct meaningful cross-course reflections.
4. The presentation problem: Students need to put material together in a presentable form for sharing with external audiences.

Let’s start with point #4, since it is where many external vendors bring expertise. The presentation problem is ill-suited for an LMS – but it is ill-suited for any piece of enterprise software.

The truth is that what a portfolio is, and what it looks like, is going to vary widely from discipline to discipline. In the Honors Program, students put together “Case Studies” on how they have evolved as thinkers, workers, and citizens; these are produced in Microsoft Word. Graphic Design students currently use a WordPress installation to showcase their work. In Sociology, many students have built wikis as part of an introductory project on local homelessness, and any portfolio presentation would have to be able to accommodate the presentation of that shared site. Math students may want to show problems not well served by standard character sets, and GIS students may have annotated files several gigabytes in size. Some presentations to employers might involve video and Flash. Some might involve runnable code. For some, the need to apply an original style to the presentation is critical; others might need a two-page paper with downloadable files. For certain professions, such as teaching, there are other expectations.

In short, the dream of a single product for external presentation is likely an impossible one – and were we to adopt any product for such presentation it would most likely look like a student-owned WordPress, Google Sites, or Drupal instance – a “blank slate” site that allowed wide customization, generous storage, and more opportunities for students to differentiate themselves to employers than any database driven portfolio wizard could provide.

Once we remove external presentation from the mix, we see the key for the students is access and portability. They must be able to access, organize, and, if need be, export their work in the LMS easily. This will allow them to build the reflective portfolios they need to build in their programs later, whether those portfolios take the form of a website, screencast, Word document, or annotated GIS file.

By using the LMS to store the materials they have submitted for class, students dramatically reduce the chance that they lose material during their tenure here. Because the material stays in the LMS, their ambient awareness of its availability is increased, and they may end up referring to past work not only for the purposes of summative assessment, but to increase their own understanding.

Ideally the LMS will allow for:

Storage of all student work submitted through the LMS through the duration of the student’s career at Keene State. This requires not only the technology, but a vendor licensing scheme that doesn’t penalize the storage of student data at an unworkable level, and a culture among professors that promotes leaving courses open to students after the semester.

An ability to organize artifacts according to various schemes and share that work externally. While students may not want to display their portfolios through the LMS, they will likely want to be able to select good pieces and categorize them according to what outcome or skill they demonstrate. The system should support a tagging or a folder scheme. They may also wish to make the files available to people external to the class, for instance, in the case of a student reviewing work with a tutor.

An ability to export artifacts for external use and display. Given the wide array of formats different disciplines may require, the ability to easily get materials out of the LMS in common formats is critical.

Institutional Needs

Likewise, institutional needs are about permissions and artifact retention. Currently it is difficult for people outside a class to view selected work in that class, even where this would be desirable. For example, assessment efforts currently require students to upload work into another course, since per-artifact permissions cannot be set on a per-assignment basis.

Granular permissions in systems like LMS’s can be tricky to implement well, and it is not clear that any current LMS has come up with a good solution to this yet. But most of our assessment process problems boil down to allowing documents to be shared easily outside of the class they were originally created for – as such, solving the student issues above may advance our institutional assessment capability as well.

What We Talk About When We Talk About Blackboard

How does Blackboard fit into this new model of what a Learning Management System should be? The answer is not simple, partially because of how they have structured their offering.

Since at least the mid-aughts, Blackboard’s explicit business strategy has been to move up-market. The introduction of multiple new products sold by Blackboard in addition to its main product has been coupled with a loss of basic licenses. In a saturated market, Blackboard has made it known to investors that its strategy is not to increase the number of license holders, but to increase the average cost of licenses.


On this point, they have been fairly successful. The Community System (now Blackboard Collaborate), Outcomes System (now Blackboard Analytics), and Transaction System (now Blackboard Transact) have been profitable add-ons to the core products. Added to their recent push to sell to educational systems instead of individual institutions, they have been able to increase profits while decreasing market share. As Jim Farmer has noted, the results of this strategy were evident as early as 2006.

(James Farmer, Blackboard, Inc. Analysis, Part 1: Software Licenses)

In recent years this trend of deriving profit through cross-selling, as opposed to acquisition or retention of users of the core product, has increased.

Coupled with this, Blackboard has pushed institutions into its Enterprise product (an additional up-sell opportunity) through a variety of means, including migrating core function development into the Enterprise product.


(Delta Initiative. Blackboard Acquisition - Analysis of License Numbers)

For reasons that are unclear, Blackboard stopped releasing the number of new Enterprise licenses after the 2009 bump from the purchase of ANGEL Learning. However, the Delta Initiative estimates from publicly available data that Blackboard is losing about 150-300 customers a year.

Again, this is by design, as the stated goal of Blackboard has been to increase the average cost per license, at the expense of losing smaller, less profitable customers.

In short, Blackboard is responding to the post-content redefinition of the LMS by developing add-on products (such as Blackboard Analytics), but this is being done as part of a larger strategy to significantly increase contract charges. Because of Blackboard's reluctance to re-define its core LMS product around these new market demands, it is difficult to evaluate that product's future. The weak integration between the newer members of the Blackboard family and the core is also an issue. It is hard to see Blackboard embracing something like student-facing analytics to the extent Canvas plans to: because that functionality must remain separable for pricing reasons, there are limits to how tightly it can be integrated into the core experience.

A final hurdle for Blackboard is usability. While Blackboard is making efforts to change its focus, it carries over much of the cruft of its history as an announcement and course publishing machine. The interface effectively hides many of the newer features in order to preserve a historically consistent UI. Elements such as rubric-driven grading are not easily discoverable. And very little in the current interface suggests to faculty new features that might increase efficiency or impact. This burden of history, common among mature products, represents an additional obstacle to Blackboard reorienting its UI to the new reality.

Where to go from here


This paper is meant only to provide a short summary of what LMS’s must be able to do in a post-content future. Some of this can be done with Blackboard, some of it can be done with Blackboard at an additional yearly cost, and some of it cannot be done currently with Blackboard at all.

All of it, however, needs to be considered. Canvas, an alternative LMS to Blackboard, offers many of these features today as part of its core product. Blackboard offers a subset of these features at a much higher price. There may be other options we are unaware of, although if so, there are not many – we have checked with other universities, and since the ANGEL acquisition in 2009 the four main choices have been Bb, Desire2Learn, Moodle, and Canvas.

What we must not do is continue to use the LMS only as a weak 1990s-era publishing tool. Of all the technology on campus, the LMS is the technology that comes into contact with courses the most. Its presence has slowly become ubiquitous, and students are demanding that we use our Learning Management Systems more effectively. It remains one of the few technologies on campus that comes close to representing and organizing what we do in our teaching, and it is not going away any time soon. Whatever our long-term goals for the classroom or for our programs, the LMS we use can either powerfully support those initiatives or undermine them. My hope is that this report has demonstrated that if we want to succeed in many of our current initiatives, demanding more from our LMS is crucial.
