This section offers guidance on the methodologies used to evaluate the performance of an Institution as Publisher project. It suggests:
- Why evaluation is important
- What should be evaluated
- The methodologies and tools for evaluation
- The challenges, concerns and ethical considerations you may encounter when evaluating a project of this type.
Evaluation is often the final step in the process of creating an Institution as Publisher operation. Since many people across the institution are involved in the process, it is likely that everyone will have an interest in evaluation outcomes. However, those in a project management role, in university leadership, researchers of pedagogical innovation, learning technology or publishing, and, more generally, those interested in their university becoming an Institution as Publisher, are likely to benefit most directly from reading this section.
This section of the toolkit will consider the following:
- The development process, through observation and reflection, including e-textbook authoring, technical activities, the publishing process and ongoing promotion, reflecting on the impact of the process for those involved in it
- Feedback from stakeholders through survey, dialogue, and data
- The impact of the publications on learning and teaching, through observation and anecdotal discussion with students and teachers
- Challenges of evaluation, with an overarching reflection on how evaluation may contribute to improvement and streamlining of future e-textbook developments
- The tools for evaluation, in order to better understand the range of data, feedback, dialogue and outcomes from the project.
1. Why and what to evaluate?
In the context of a project, the process of evaluation offers perspective on achievement, and commentary on further development. It allows individuals associated with the process and/or outcomes of a project the opportunity to discuss and interpret their experience and/or outputs.
In the context of an institutional publishing project, three key areas should be investigated. These are:
1a. Perspectives on the development process
This includes e-textbook authoring, technical activities, the publishing process and ongoing promotion. We reflected on the impact of the process for those involved in it. Finally, we discussed the value and consumption of e-textbooks for students, academics, module leaders, libraries, the University, and a wider audience. Some questions to consider might be:
- What are the resources needed by the publishing team? What are the resources available to them? Were the tools sufficient for the task?
- Is sufficient time allocated to each part of the development process?
- How might you demonstrate an improvement in development knowledge?
- How cohesive is your development team?
What arose? Time. Commitment. Constraints. Costs. Institutional. Structure. Collaboration. Priorities. Outcome. Change. Professional. Adaptive. Pioneering.
Additional Material: Questions about Development
1b. Interpreting feedback from stakeholders
This includes students and educators, those in a University management role, and a broader range of individuals with access to the publications. We gathered feedback through surveys and dialogue with groups, and we examined download and access data, anonymous reviews and other feedback. In broad terms, your evaluation is likely to examine the following questions:
- Who are your stakeholders?
- What do they think of your publications and process?
- What can be learned that may help further developments?
What arose? New. Unexpected. Different. Dull. Searchable. Mine. Download. Reference. Cheap. Relevant. Accessible. Simple. Raw. Text. Authors. Learning.
Additional Material: How Stakeholders can focus your evaluation
1c. Observing impact on learning and teaching
This may be difficult within a short, top-down project, but over time, consider who might be best placed to pioneer such an initiative, and who might actually do so. We found few examples of use of our e-textbooks across institutions and, therefore, little evidence that they were being used in the classroom for structured learning and teaching. Our evaluation suggested that it was the choice of individual students to use our publications, rather than of those in a teaching role. Some questions that your evaluation might consider are:
- What does ‘impact’ look like in my circumstance? Might it be represented by a rise in grades or by something less tangible?
- How successful is distribution to academics and students?
- What is revolutionary about my development?
The projects have written a number of case studies on observing impact. The University of Liverpool have looked at how they embedded Using Primary Sources in the Liverpool Curriculum.
Further case studies to follow
What arose? Opportunity. Together. Relevance. Distribution. Empowerment.
Additional Material: Looking at Teaching and Learning
2. Considering the challenges of evaluation
2a. Ethical Considerations
What is your role in the production team, as evaluator? What lines do you draw in order to ensure the best interests of the Institution as e-textbook publisher team, the product, and above all the students? The original report can be found here.
2b. What do we evaluate?
An evaluator is often tasked to measure specific things and answer the question “is it worth it?” However, evaluators are also positioned to see and report on the unexpected, and what justifies the existence of an Institution as e-textbook publisher isn’t always easy to measure. Read these documents and ask yourself whether, why, and how you might capture this point of view for evaluation.
An article presenting the author’s view was also published as part of the project:
Hogg, J. (2017). Creating a new type of e-textbook: Using Primary Sources. Insights, 30(1), 53–58. http://doi.org/10.1629/uksg.344
2c. Designing surveys
When you’re producing and selling your own textbooks, the lines between things like commercial product sales, academic practice, and student experience can become blurred. The world of Institution as e-textbook publisher evaluation can challenge even an experienced researcher. No matter what kind of data you’re after, chances are your survey has to serve a lot of masters. So before you ask someone a single question, there are a few you have to ask yourself.
Additional Material: What to ask yourself before asking students
3. Using tools for evaluation
The tools contained here were designed for evaluation in a proof-of-concept capacity. This means they’re not solely focused on the performance of an Institution as Publisher. They’re meant to observe an Institution as e-textbook publisher’s potential to grow, be sustainable, and meet the needs of a particular university. Feel free to adapt these tools, with the proper attributions. Each link contains a description of the tool and how it’s used, visual examples (with annotations), and an inventory of relevant documents.
- a) Benchmarking – looking at your book in comparison to others
- b) Reader Engagement Survey – how do your students engage with materials before they see an IAP textbook? (survey only; for methodology, see “The Final Survey” link)
- c) The Final Survey – how do you find out what your students thought of an Institution as e-textbook publisher textbook?
- d) The Project Reflection Matrix – how does your team develop, manage, and grow its publishing?
- e) The Resource Profiling Survey – what’s the “real cost” of doing business?
We recommend that an Institution as Publisher project find opportunities to investigate three themes:
- The development journey of its publications
- The feedback from stakeholders about what it has achieved
- The impact of its publications on learning and teaching.
Evaluators should seek mixed-method approaches to data collection. Surveys and dialogue offer direct input from students and authors. Distribution data (for example, from a University’s library, or from Amazon and Smashwords) may uncover how much a publication is being used, and by whom.
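As a minimal sketch of how distribution data might be summarised once exported from a library system or retailer dashboard, the snippet below counts downloads by channel and by reader group. The record format, field names and sample values are illustrative assumptions, not the schema of any real library or retailer export:

```python
from collections import Counter

# Hypothetical download records; in practice these might be exported from a
# library system or a retailer dashboard (e.g. Amazon, Smashwords).
# All field names and values here are illustrative assumptions.
downloads = [
    {"title": "Using Primary Sources", "channel": "library", "reader": "student"},
    {"title": "Using Primary Sources", "channel": "amazon", "reader": "external"},
    {"title": "Using Primary Sources", "channel": "library", "reader": "staff"},
    {"title": "Using Primary Sources", "channel": "smashwords", "reader": "external"},
    {"title": "Using Primary Sources", "channel": "library", "reader": "student"},
]

def summarise(records, field):
    """Count downloads grouped by one field (e.g. 'channel' or 'reader')."""
    return Counter(r[field] for r in records)

by_channel = summarise(downloads, "channel")
by_reader = summarise(downloads, "reader")

print(by_channel.most_common())  # most-used channels first
print(by_reader.most_common())   # which reader groups use the publication
```

Even a simple tally like this can show whether use is dominated by the institution’s own library channel or by external readers, which feeds directly into the “how much is a publication being used, and by whom” question above.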