Koha Community Newsletter: April 2014
Volume 5, Issue 4
Edited by Chad Roseburg and Joanne Dillon, Koha Community Newsletter Editors.
Please submit news items to
Table of Contents
- Upcoming Events
- Past Events
Koha 3.14.6 Released
by Fridolin Somers
The Koha community is proud to announce the release of Koha 3.14.6.
This is a maintenance release and contains some enhancements and several bugfixes.
As always you can download the release from Koha Downloads.
See the full release notes and changelog here.
This section highlights upcoming features as well as bugs needing attention.
Filter duplicates when adding a full batch from a staged file
Submitted by Sonia Bouis
From the bug description:
Until now, when you added an order for a single record from a staged batch, duplicates were checked, and if one was found you could choose between three treatments: adding a new order from the existing record, adding a new record anyway, or doing nothing. But when importing the whole batch at once (Import all), there was no duplicate checking.
This patch aims to solve that. When adding a batch of records to a basket, duplicates are skipped and an alert is displayed with a link to them, so that they can be dealt with individually.
In the case where every record in the batch matches an existing record, you stay on the staged batch list and a warning tells you that nothing has been imported. The “Import all” block is no longer displayed.
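To make the new behaviour concrete, here is a minimal sketch of the skip-and-collect logic in Python. This is an illustration of the idea only, not Koha's actual (Perl) implementation; the record structure and the matching on a control number are assumptions for the example.

```python
def import_batch(staged_records, catalog_control_numbers):
    """Illustrative sketch (not Koha's actual code): skip staged records
    whose control number already exists in the catalog, and collect the
    skipped duplicates so they can be reviewed and handled one by one."""
    imported, duplicates = [], []
    for record in staged_records:
        if record["control_number"] in catalog_control_numbers:
            duplicates.append(record)   # skipped: matches an existing record
        else:
            imported.append(record)     # safe to add as a new order/record
    return imported, duplicates

# Hypothetical batch: "001" already exists in the catalog, "002" is new.
batch = [{"control_number": "001", "title": "A"},
         {"control_number": "002", "title": "B"}]
existing = {"001"}
imported, dupes = import_batch(batch, existing)
```

After the run, `imported` holds only the new record and `dupes` holds the skipped duplicate, which is what the alert with a link to the skipped records would surface to the user.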
Research Links Added to OPAC
from Franziska Wallner and Stefano Bargioni
The American University of Rome (Koha 3.4) and the Pontificia Università della Santa Croce (Koha 3.12) have added two hyperlinks to each author in the record normal view.
See example here:
The first WorldCat icon links to the author’s “WorldCat Identities” page, and the second allows users to explore the author’s associated relationships through the WorldCat Identities Network.
We consider these links extremely useful for researchers. We welcome any
comments and suggestions.
from Gunilla Jensen at the National Institute of Water and Atmospheric Research (NIWA)
The New Zealand National Institute of Water and Atmospheric Research (NIWA) conducts environmental science to enable the sustainable management of natural resources. The NIWA Library switched to Koha as its library management system in 2012. On 8 April 2014, NIWA’s Library implemented a brand-new Koha plugin that integrates EBSCO Discovery Service (EDS) into Koha while retaining the features of the Koha public interface. NIWA’s Library staff have been working with EBSCO and Catalyst IT and are excited to be the first in the world to do this integration.
The result can be seen in action at https://library.niwa.co.nz. A dropdown menu enables the user to search either the native catalogue or EDS by selecting the appropriate option; at NIWA the default search has been set to EDS. The plugin also supports adding EDS items to the cart. EBSCO licensing and publishing terms require users to authenticate themselves before EDS results are displayed.
If your library is interested in this plugin, it must be subscribed to EDS. There is a relatively small amount of configuration work to do in Koha. You can find the install instructions here.
Elastic Search Integration Update
from Brendan Gallagher at ByWater Solutions
ByWater Solutions and Catalyst IT are working together to implement Elastic Search into Koha. So far, funding for this work has been contributed by:
- Arcadia Public Library
- ByWater Solutions
- Catalyst IT
Why would I care about Elastic Search?
We need Koha to perform for our future (and present!) library users, who expect access to Linked Data sources and a more Google-style searching experience. Koha needs an equally modern search to match the library’s real-life experts and quality content.
Yeah, I know those things are important; tell me more then…
Koha is already on the way to supporting Linked Data. We need Elastic Search to support taking MARC records on a round trip to RDF (Linked Data is RDF) and back. This will enable libraries to deliver more quality content in their search results.
There is also scope for major improvements behind the scenes of Koha, including the ability to rebuild the indexes with no downtime, optimisation of the way we use server resources, real-time usage and performance statistics, and most of all, chipping away at search speed, at every opportunity.
Aaand… what exactly is Elastic Search?
We’ll let www.elasticsearch.org answer this one: “Elasticsearch is a flexible and powerful open source, distributed, real-time search and analytics engine. Architected from the ground up for use in distributed environments where reliability and scalability are must haves, Elasticsearch gives you the ability to move easily beyond simple full-text search. Through its robust set of APIs and query DSLs, plus clients for the most popular programming languages, Elasticsearch delivers on the near limitless promises of search technology”.
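To give a flavour of the “query DSLs” the quote mentions, here is a minimal Elasticsearch full-text match query built as a Python dictionary. The index name (`biblios`) and field name (`title`) are hypothetical, chosen for illustration, and do not reflect any actual Koha mapping.

```python
import json

# A minimal Elasticsearch "match" query: full-text search on a "title"
# field, returning at most ten hits. Index and field names here are
# illustrative only, not Koha's actual Elasticsearch schema.
query = {
    "query": {
        "match": {"title": "water resources"}
    },
    "size": 10,
}

# This JSON body would be POSTed to a search endpoint such as
# http://localhost:9200/biblios/_search (URL is an example).
print(json.dumps(query, indent=2))
```

Compared with hand-written Zebra/PQF queries, the DSL is plain JSON, which is part of what makes Elasticsearch pleasant to integrate and extend.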
So where is implementation up to?
A significant start has been made on integrating Elastic Search and browse functionality into Koha using the funds raised so far. They’re working, but not yet fully integrated throughout Koha. It’s a big undertaking, and once the base is there, there are endless opportunities to extend it to further enhance Koha. The more funds we collect, the more we can build.
If your organisation would like to know more about contributing towards Elastic
Search development, please contact firstname.lastname@example.org or email@example.com
Robin Sheat and Chris Cormack from Catalyst are working with Brendan at ByWater on the development – try eythian, rangi and bag in the community IRC, or the koha-devel mail list for open technical discussions.
Plack, the staff client, and the need for speed
from Kathryn Tyree at Catalyst IT
Some time ago, a lot of work was done by some very talented people to take the Koha OPAC from running under old, slow CGI to new, fast Plack. Now we’re making good progress on doing the same to the staff client.
What is Plack?
Plack is an alternative way of running Koha on the server. Run the normal way, every time you ask Koha to do something, whether that’s telling you about a record or returning an item, the server has to load all the required Koha libraries and everything they depend on, read the configuration files, and only then start answering your question. Afterwards, it shuts itself down and forgets everything in order to prepare for the next request. This adds a lot of wasted time to every request, reduces the amount of traffic a Koha server can handle, and makes Koha feel slow to the user.
With Plack, all this slow stuff only happens once: when the server is turned on. Everything is loaded and just waiting on someone to do something, e.g. search.
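The difference between the two models can be sketched in a few lines. This is a language-agnostic analogy in Python, not Koha's Perl code; the `load_libraries` helper and its sleep are stand-ins for the real startup cost.

```python
import time

def load_libraries():
    """Stand-in for loading Koha's libraries and reading its configuration.
    The sleep simulates the expensive startup work; the real cost is far
    larger. This function and its cost are illustrative assumptions."""
    time.sleep(0.01)
    return {"config": "loaded"}

def handle_request_cgi(question):
    # CGI model: pay the full startup cost on *every* request,
    # then throw all that loaded state away afterwards.
    env = load_libraries()
    return f"answer to {question}"

class PlackStyleServer:
    def __init__(self):
        # Plack model: pay the startup cost once, when the server starts.
        self.env = load_libraries()

    def handle(self, question):
        # Each request reuses the already-loaded state.
        return f"answer to {question}"

server = PlackStyleServer()          # slow part happens here, once
answer = server.handle("where is book X?")
```

Every call to `handle_request_cgi` repeats the slow load, while `server.handle` answers immediately from state loaded at startup, which is exactly why Plack feels snappier.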
Why does this not work already?
When you know that a Perl process is going to shut down after serving your request (like it does with CGI), you can take a lot of liberties. You can create connections and not close them, you can store information locally rather than looking it up every time, because you know that your local copy will be gone in a few seconds. You can allocate a lot of memory knowing that it’ll be cleaned up without you having to do anything.
With Plack, which may serve hundreds of requests without shutting down and cleaning up, those assumptions go out the window. If we create a connection to Zebra, we might hold onto it for an hour or more, and just hope that Zebra doesn’t give up waiting for us (it will; we’re working on it). We’ll never realise that someone updated the book framework half an hour ago, because we are storing a local copy for quick access. Even worse, due to the multithreaded nature of it all, sometimes you’ll see something be up to date, then you’ll hit refresh and it’ll be old again.
What’ll it be like when it’s working?
If you’ve seen the OPAC running under Plack, you’ll know it feels a whole lot snappier. We’re currently working to bring this efficiency to the staff client. It’ll mean that everything is faster, and that repetitive tasks (like returns) get done that much quicker.
So what’s going on to fix this?
Unfortunately there’s not much science to this. It’s mostly a matter of trying things, noticing that they’re broken, and fixing them. This includes improving the Koha::Cache system so that it can be a lot smarter about where things get stored, and how long for, helping Koha’s scalability overall.
ByWater Solutions are funding this work, which is being led by Robin Sheat at Catalyst IT. The two companies are working closely to solve as many of these issues as possible, along with plenty of help from the Koha Community.
email: firstname.lastname@example.org or email@example.com IRC: eythian or bag
New Koha Libraries
Barton Chittenden explains how to write up effective support tickets in his blog post entitled, Entering Good Support Tickets, Revisited.
Joy Nelson discusses a workshop on successfully implementing workflow changes.
Nicole Engard explains how to print lists the “old way” in Koha 3.14 .
Jeremy Wellner explains how to get MARC records from Amazon in this guest blogpost.
In his Did you know blog series this month, Pierre Vandekerckhove covers the following topics:
KohaCon 2014 will be held in Córdoba, Argentina, October 2014.
For the schedule and registration details, see the KohaCon 2014 page.
North American Koha Users Group
The North American Koha Users Group will be held in Wenatchee, WA, August 2014.
For the schedule and registration details see the
North American Koha Users Group page.
May General IRC Meeting
The May general IRC meeting will be held on May 7th 2014 at 15:00 and 22:00 UTC.
The agenda and other information are here
Koha Training At Daffodil International University (DIU) Library
The Daffodil International University (DIU) Library organized the 4th phase of the workshop “Automation of Information Institution using Koha-ILS and MARC21”.
Participants got hands-on practice with the different modules of Koha and MARC21 in the DIU computer lab, and learned how to implement Koha themselves. The workshop took place 3–5 May 2014 at Daffodil International University, Shukrabad, Dhaka, Bangladesh.
Beginning with the Debian installation, the workshop covered:
- Step by step Debian installation
- Step by step Koha installation
- Step by step Koha customization
- Step by step Koha configuration
- Practical session on learning MARC 21
- Step by step automation process using Koha
- Step by step patron creation in Koha
- Hands-on practice using Koha as a circulation officer and administrator
- Live demonstration of Koha
- Problems and solutions when implementing Koha
April General IRC Meeting
The April general IRC meeting was held on April 9th 2014.
The agenda, links to the minutes, and other information are here.