Tag: jisc

Jisc bid writing

Today I submitted a JISC bid on behalf of a team, as part of the recent JISC call Infrastructure for Education and Research (’15/10′ to its friends). The call was actually a set of broadly related strands; we submitted (with a whole 30 minutes to spare) under a strand called Infrastructure for Resource Discovery, and there’s a nice web-based version of the call on Jiscpress.

Jiscpress was created in part by Joss Winn, and a post of his I saw today inspired me to knock out this rambling. Go read it before you read this, thanks.

I admire his openness and I should strive to do the same. Funny that I try – and to an extent automatically do – to make much of what I do open, but with this sort of thing there is a tendency to keep it close to your chest. There were very few tweets in the run-up to the bid. Why are we not more open? He also talks about his JISC bid writing and gives tips; here are mine.

My first experience was attending a ‘town hall meeting’ in Birmingham about the JISC Capital programme, around 2006. For a start, I didn’t even know what a town hall meeting was (I think it means a briefing day; everyone presumed you should know this). I do remember it felt daunting: there were lots of people in suits, lots and lots of sentences I didn’t understand (‘We’re going to innovate to take vertical solutions and silos and break them into a horizontal landscape to help foster a new wave of disruption to help inject into the information environment testbed’) and no one I knew. I looked at the massive documents that accompanied the programme, many of them, many times. And looked at what I needed to do to write a bid. Budgets, signatures, partners, matched funding. I didn’t submit one.

Since then the community has developed, in no small part thanks to Twitter, but also to things like mashlib and many one day events (which either never used to exist in the field that I work, or I was just more ignorant then than I am now). Beer has been a big part of forging links in the HE Library / tech community. Seriously. It really needs its own cost code.

I looked at a number of potential calls over the last few years – often they required a report or development that I had no real knowledge of. I came close to putting something in for the JISC Rapid Innovation call (and helped mark it). When the JISC LMS call came out about a year ago, the time and subject were right to submit a bid. I knew the subject matter, I had a natural interest and passion, and I knew the people who would be involved in these sorts of things.

These are tips for people who are thinking of putting in a bid, especially those who are stupidly disorganised like me:

  • The time between a call being released and the submission deadline is short, normally about a month, which in HE terms is not long. Use the JISC funding timeline to get a heads-up on future funding opportunities so that you can prepare for working on a bid (including blocking off time during the month, and arranging initial meetings with others) before it comes out and not be taken by surprise. The JISC 15/10 call had a blog post a few weeks before the call came out giving a feel for the call and confirming the date it should be released. It helped me to start thinking about ideas and block out time to read it (even if some of that time was in the evening) on the day it was released.
  • Every organisation is different (that applies to everything I say), but for us, setting up a meeting a couple of days after the call was out was very useful. It included those who it could affect and relevant senior staff. The call had lots of areas which matched our goals (and some, not always the same, that matched my personal interests), and it was good to prepare a briefing and then bounce those ideas around to see what had potential and see what other ideas came up. It helps in many ways: to quickly focus and refine potential ideas (and drop those that people show no interest in), keep everyone in the loop and see who’s willing to work on it. It stops it being one person’s or department’s little side project.
  • The briefing day was very useful, especially for talking to people, finding potential partners and getting great advice.
  • Now I have an incredible number of bad points, but two of them are leaving everything to the last minute and working in a very linear fashion. Often things that feel like they are the last things you need to do are actually things you need to set in motion earlier on. This seems so simple typing it now, but I’ll probably (be warned, colleagues) do the same next time. These include the budget, project name and supporting letters.
  • The budget is hard. See if your org offers support in doing this. The problem is that certain magic numbers (the wonders of TRAC and fEC) can only be calculated once you know all your other costs. However, I tend to find that near the end of the bid-writing process you suddenly think of some work a particular group/person/org will need to do, so you need to factor in those hours and costs, or you invite someone from the other end of the country to be on a panel and need to cover their travel and hotel costs. In best ‘do as I say, not as I do’ tradition, I would try and bash this out well in advance so it can be sent to those who can then check it over and fill in the magic numbers.
  • Asking a friend at another uni if they don’t mind asking their (P)VC to drop everything so that he/she can write a nice supporting letter for your project is hard. So try and avoid it by getting it done sooner. Again, this is often easier said than done, as projects tend to evolve during the bid-writing process, which can make letters reflect out-of-date ideas or stress the wrong areas.
  • Letters and other things need a project name. I’m guilty of really not thinking a name is that important. The acronym will be meaningless to all. On my first bid I just used a working name (all of five seconds’ thought) and right at the end asked everyone if they were happy to go with it. Mistake. Changing the project name at the last minute is a pain.
  • A key point. You need a good idea. And a good idea is one that is a good fit to the call. You may have a perfect methodology but if the idea doesn’t fit with the call then you could be in trouble. I’m guessing ‘It’s not really what you’re after but it’s such a good idea you must want to fund it’ is not a good sign.
  • Speak to people. I mentioned the briefing day above, but also speak to the programme manager – they’re nice people! Talk about it on Twitter.
  • You don’t need to be an expert. I was put off for years from writing a bid about things I was interested in but didn’t think I knew enough about. You can ask people to work with you! People who know how to do stuff. I’ve just submitted a bid about Linked Data. Now, I’ve followed the rise of Linked Data for years and tried to learn about it, but taking an XML file and ‘converting’ (is that the right term?) it into Linked Data – I had no idea how to even start. But I spoke to some people, who recommended someone, and they do know what to do.
  • Approaching others out of the blue is difficult, especially if you don’t feel ‘part of it’. All I can say is ask. And if you don’t know who to approach, ask people (either at the JISC or via Twitter) for advice.
  • If you have a clear(ish) idea of what you are going to do, broken down into mini packages of work, and who is going to do each one of them, then writing the actual bid is easy. Treat it like a job application. We all know that when writing a job application you use the person spec as a structure, a paragraph for each entry of the person spec, perhaps even with headings to help those marking your application. A bid is just the same: the clearly laid out structure of a bid is worth sticking to, as it’s the same structure the markers will have to use to score each section. If the JISC call document’s ‘proposal outline’ refers to a section which talks about project management, leadership, copyright, evaluation and sustainability, then write about those things together as clearly as possible. Long-winded paragraphs which ramble on about everything and make subtle implied passing references are a bane to the marker and no help to you.
  • But, having been involved in marking and assessing bids, what impressed me was the impartial way bids were judged, and the real desire to fund Good Ideas, even if the actual bid document needs a little clarity in some parts. Especially from first bidders. To stress: there was a real desire to see first-time bidders (with a good idea) be successful.
  • So the actual bid write-up can in a way be left later than the other tasks mentioned above, as it mainly involves just you (and probably a couple of people who work closely with you to check it).
  • In an ideal world this would all be done weeks before it needed submission. In real life other factors (and work) can mean it can be a last-minute dash. That’s fine. But make sure that a few days before it has to be submitted you put into an email to yourself (and others involved in writing it up): the email address to send it to, the submission date and time, and cut-and-paste things such as the exact format it needs to be submitted in (how many files, how big) and the number of pages. Add a link below each of these facts to the actual source of the information (Jiscpress is excellent for this), so when you’re panicking and presume everything you know is wrong you can follow a link and see for yourself that it really is eight pages max for the proposal, direct from the horse’s mouth.
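On the ‘converting an XML file into Linked Data’ point above, here is a minimal sketch of the usual shape of such a conversion, as I understand it: map each record to a subject URI and each element to a predicate/literal triple (N-Triples output). Everything here – the XML shape, the base URI, the Dublin Core-style predicates – is invented for illustration, not taken from our actual bid.

```python
# Hypothetical sketch: turning a simple XML record into RDF triples
# (N-Triples). The XML shape and URIs are invented for illustration.
import xml.etree.ElementTree as ET

RECORD = """<book id="b1">
  <title>Linked Data for Libraries</title>
  <creator>A. N. Author</creator>
</book>"""

def xml_to_ntriples(xml_text, base="http://example.org/"):
    root = ET.fromstring(xml_text)
    subject = f"<{base}{root.get('id')}>"
    triples = []
    # Map each child element to a Dublin Core-style predicate.
    for child in root:
        predicate = f"<http://purl.org/dc/terms/{child.tag}>"
        literal = child.text.replace('"', '\\"')
        triples.append(f'{subject} {predicate} "{literal}" .')
    return "\n".join(triples)

print(xml_to_ntriples(RECORD))
```

A real conversion would of course need a proper mapping from your schema to an agreed vocabulary, which is exactly the sort of thing the people we brought on board know how to do.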

The whole process of submitting a bid, and running a project, is good experience. It often involves working with people you would not normally work with, and doing things differently from your normal job. Now, if I get a chance in the next few days I will follow Joss’ example and blog about our proposal.

JISC, Monitter and DIUS (Department of Innovation, Universities and Skills)

Earlier this week the Jisc 2009 Conference went ahead. A one day summary of where things are going in Jisc-land.

Like last year, I got a good feel of the day via Twitter. I used a web app called monitter.com for real-time updates from anyone on Twitter who used the tag #jisc09. monitter.com allows you to track a number of words/searches (three columns by default); this works well as these can be usernames, tags or just a phrase. I used ‘jisc09’, ‘brighton OR sussex’ and ‘library’.

The keynote talks were also streamed live on the web, and the quality was excellent. Check out the main Jisc blog for the event.

Linking to all the different sites, searches and resources on the web after the event wouldn’t do it justice. The usefulness was in the way these were all being published during the day itself, using things like twitter (and bespoke sites) as a discovery mechanism for all these different things being added around the web. I didn’t know who most of the people were, but I was finding their contributions. That’s good.

An email came out the next day about the conference and announcing a guest blog post by David Lammy, the Minister for Higher Education, on the Jisc Blog.

He finished by asking for the conversation to continue, specifically on http://www.yoosk.com/dius, which is described as ‘a place to open up lines of communication between Ministers and the HE Community’. Yoosk.com is set up to allow users to ask ‘famous people’ questions. Its homepage suggests that it is designed for any kind of ‘famous person’, though it seems to be dominated by UK politicians. It looks interesting, but I can’t help wondering if there are other sites which could facilitate a ‘discussion’ just as well or better.

The DIUS section of the site seems quite new. In fact my (rather quickly composed) question was the second to be added to the site. I think the idea of practitioners (yuck, did I just use that word?) raising issues directly with Ministers is an interesting one, and I hope it takes off and, at the very least, that he/they answer the questions!

DIUS do seem to be making an effort to use web 2.0 tools. I recently came across this sandbox idea of collecting sites from Delicious based on tags, in this example the library2.0 tag. Interesting stuff, but not specific to HE; it will work for any tag and really just creates a nice view of the latest items bookmarked with the tag in question. The code for it is here.

In any case, it is good to see a government department trying out such tools and also releasing the code under the GPL (even 10 Downing street’s flickr stream is under crown copyright, and don’t get me started on OS maps and Royal Mail postcodes). I’m reminded of the Direct.gov team who, when they found out there was a ‘hack the government‘ day to mashup and improve government web services, decided to join in.

DIUS homepage with web2.0 tools

On the DIUS homepage, just below the fold, they have a smart-looking selection of tools. It’s nice to see this stuff here, and so prominent, though the Netvibes link took me to just a holding page when I tried it.

Finally, they have set up a blog on the jiscinvolve (WordPress MU) site. At the time of writing it has a few blog posts which are one-line questions, and a couple of (good) responses. But I can’t help feeling that these sites need something more if they are to work. At the moment they are just there, floating in space. How can they integrate these more into the places that HE staff and students inhabit? Perhaps adding personal touches to the sites would encourage people to take part; for example the blog – a set of questions – is a little dry, and needs an introduction, a host and photos.

To sum up: some good stuff going on here, but we need to see if it takes off. It must be difficult for a government department to interact with HE and students – the two are very different – but they are trying. I hope it proves useful; if you’re involved in HE why not take a look and leave a comment?


March 25, 2009

“Sitting on a gold mine” – improving provision and services for learners by aggregating and using ‘learner behaviour data’

I’m at a workshop today called “Sitting on a gold mine” – improving provision and services for learners by aggregating and using ‘learner behaviour data’ (it rolls off the tongue!), which is part of a wider JISC TILE project looking at, in a nutshell, how we can use data collected from users and user activity to provide useful services, and the issues and challenges involved (and some Library 2.0 concepts as well). As ever, these are just my notes; at some points I took more notes than others, there will be mistakes and I will badly misquote the speakers, so please keep this in mind.

There’s quite a bit of ‘workshop’ discussion coming up, which I’m a little tentative about, as I can rant on about many things for hours but am not sure I have a lot of views on this other than ‘this is good stuff’!

Pain Points & Vision – David Kay (TILE)

David gave an overview of the TILE project. Really interesting stuff, lots covered and good use of slides, but quite difficult to get everything down here.

TILE has three objectives:

  • Capture scope/scale of Library 2.0
  • Identify significant challenges facing library system developments
  • Propose high level ‘library domain model’ positioning these challenges in the context of library ‘business processes’

You can get context from click streams, this is done by the likes of Amazon and e-music providers.

E.g. First year students searching for Napoleon also borrowed… they downloaded… they rated this resource… etc.

David referred to an idea of Lorcan Dempsey’s: we get too bogged down by the mechanics of journals and provision without looking at the wider business processes in the new ‘web’ environment.

Four ‘systems’ in the TILE architecture: library systems (LMS, cross-search, ERM), the VLE, repositories and associated content services. We looked at a model of how these systems interact with the user in the middle.

Mark Tool (University of Stirling)

Mark (who used to be based down the road at the University of Brighton) talked about the different systems Stirling (and the other universities he has worked at) use, and how we all don’t really know how users use them. Not just now, but historical trends, e.g. are users using e-books more now than in the past?

These questions are important to lecturers as they point students to resources and systems, but what do users actually use, and how do they use them? There’s also a quality issue: are we pointing them to the right resources? Are we getting good value for money? e.g. licence and staff costs for a VLE.

If we were to look at how different students look at different resources, would we see that ‘high achievers’ use different resources to weaker students? Could/should we point the weaker students to the resources that the former use? Obvious privacy implications.

This could also be of use when looking at new courses and programmes and how to resource them. Nationally, it might help guide us to which resources we should be negotiating for at a national level.


  • small crowd -> small dataset  -> can be misleading (one or two people can look like a trend)
  • HEIs are very different from each other.

He thinks we should run some smallish pilots and then validate the data collected by some other means.

Joy Palmer – MIMAS

Will mainly be talking about COPAC, which has done some really interesting stuff recently in opening up their data and APIs (see the COPAC blog).

What are COPAC working on:

  • Googlisation of records (will be available on Google soon)
  • Links to Digital content
  • Service coherency with zetoc and suncat
  • Personalisation tools / APIs
    • ‘My Bibliography’
    • Tagging facilities
    • Recommender functions
    • ummm other stuff I didn’t have time to note
  • Generally moving from a ‘Walled garden’ to something that can be mashed up [good!]

One example of a service from COPAC is ‘My Bibliography’ (or the ‘marked list’), which can be exported in the ATOM format (which allows it to be used anywhere that takes an ATOM feed). These lists will be private by default but could be made public.

Talked about the general direction and ethos of COPAC development with lots of good examples, and the issues involved. One of the slides was titled:  From ‘service’ to ‘gravitational hub’ which I liked. She then moved on to her (and MIMAS/COPAC’s) perspective on the issue of using user generated data.

Workshop 1.

[Random notes from the group I was in, mainly the stuff that I agreed with(!); there were three groups.] Talking about: should we do this? What are the threats (and what groups of people are affected by them)? Good discussion. We talked about how these things could be useful, and why some may be averse to/cautious of it (including privacy, and encroaching on others’ areas – IT/the library telling academics that what they recommend to students is not being used, i.e. telling them they are doing it wrong, creates friction). Should we do this? It’s a blunt tool and may show wrong trends, but we need to give it a go and see what happens. Is it ‘anti-HE’ to be offering such services (i.e. recommending books)? No, no, no! Should we leave it to the likes of Google/Amazon? No, this is where the web is going. But there was real-world experience of things to be aware of, e.g. a catalogue ranking one edition of a book highly due to high usage led to a newer edition being further down the list. [Lots more discussion, I forget.]

Dave Pattern – Huddersfield.

[Dave is the systems librarian at Huddersfield, who has better ideas than me, then implements them better than I ever could, in a fraction of the time. He’s also a great speaker. I hate him. Check out his annoyingly fantastic blog.]

Lots of data is generated just doing what we and users need to do, and we can dig into this. Dave starts off talking about supermarket loyalty cards. Supermarkets were doing ‘people who bought this also bought’ 10 or more years ago. We can learn from them; we could do this.

We’ve been collecting circulation data for years, so why haven’t we done anything (bar really basic stuff) with it?

Borrowing suggestions (‘people who borrowed this also borrowed’) are working at Huddersfield; librarians report them working well and suggesting the same books as they would.
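The core of this idea is simpler than it sounds. Below is a toy ‘people who borrowed this also borrowed’ sketch over invented anonymised loan records – not Dave’s actual implementation, just the co-occurrence counting that underlies this kind of suggestion:

```python
# Toy 'also borrowed' sketch over invented anonymised loan records.
from collections import Counter
from itertools import combinations

# (borrower_id, book_id) pairs, as might be extracted from circulation logs.
loans = [
    ("u1", "napoleon"), ("u1", "waterloo"),
    ("u2", "napoleon"), ("u2", "waterloo"), ("u2", "trafalgar"),
    ("u3", "napoleon"), ("u3", "trafalgar"),
]

def co_borrowed(loans):
    """Count how often each ordered pair of books shares a borrower."""
    by_user = {}
    for user, book in loans:
        by_user.setdefault(user, set()).add(book)
    pairs = Counter()
    for books in by_user.values():
        for a, b in combinations(sorted(books), 2):
            pairs[(a, b)] += 1
            pairs[(b, a)] += 1
    return pairs

def suggest(book, loans, top=3):
    """Books most often borrowed by people who also borrowed `book`."""
    pairs = co_borrowed(loans)
    scored = [(other, n) for (b, other), n in pairs.items() if b == book]
    return [other for other, n in sorted(scored, key=lambda x: -x[1])[:top]]

print(suggest("waterloo", loans))
```

A real system would obviously need thresholds and normalisation so that one or two borrowers don’t look like a trend – exactly Mark’s ‘small crowd, small dataset’ warning from earlier.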

Personalised suggestions: if you log in, it looks at what you have borrowed, and then at what other items were borrowed by those who borrowed the same things.

Lending paths: paths which join books together, potentially to predict what people will borrow and when particular books will be in high demand.

The library catalogue shows some book usage stats when used from a library staff PC (brilliant idea!); these can be broken down by different criteria (e.g. the courses borrowers are on).

Other functionality: keyword suggestions, and common zero-results keywords (e.g. newspapermen, asbo, disneyfication). Huddersfield have found digging into this useful.

He’s released anonymised circulation data as XML, with the approval of the library, for others to play with, and hopes other libraries will do the same. (This is a stupidly big announcement; it feels insulting to put it in just one sentence like this – perhaps I should enclose it in the <blink> tag!?) See his blog post.

(Note to self: don’t try to download a 50MB file via a 3G network USB stick – bad things happen to a MacBook.)

Mark van Harmelen

Due to said bad things I was slightly distracted during part of this talk. Being a man, I completely failed to multi-task.

This was an excellent talk (at a good level) about how the TILE project is building prototype/real system(s), with some really good models of how this will/could work. So far they have developed harvesting of data from institutions (and COPAC/similar services), adding ‘group use’ to their database; a searcher known to be a ‘chemistry student’ and ‘third year’ can then get relevant recommendations based on data from the groups they belong to. [I’m not doing this justice, but there were some really good models and examples of this working.]

David Jennings – Music Recommender systems

First off he refers to the Googlezon film (I’d never heard of this before) and the idea of Big Brother in the private sector, then moves on to talk about (the concept of) iPods which predict the music you want to hear next based on your mood, and even matchmaking based on how you react to music.

Discovery: we search, we browse, we wait for things to come along, we follow others, we avoid things everyone else listens to, etc.

Talking about flickr’s (unpublished) popularity ranking as a way to bring things to the front based on views, comments, tags, etc.
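Since flickr’s actual ranking is unpublished, here’s a toy illustration of the general idea only: a weighted score over the same kinds of signals, with the weights plucked out of the air.

```python
# Toy popularity score over view/comment/tag/fave counts.
# The weights are invented; flickr's real algorithm is not public.
WEIGHTS = {"views": 1, "comments": 10, "tags": 3, "faves": 5}

def popularity(photo):
    """Weighted sum of whatever signals the photo record has."""
    return sum(WEIGHTS[signal] * photo.get(signal, 0) for signal in WEIGHTS)

photos = [
    {"id": "a", "views": 500, "comments": 2, "tags": 4},
    {"id": "b", "views": 120, "comments": 30, "tags": 10, "faves": 8},
]
ranked = sorted(photos, key=popularity, reverse=True)
print([p["id"] for p in ranked])  # → ['a', 'b']
```

The interesting design question is the weighting: comments and faves are scarcer and more deliberate than views, so they are weighted up – which is presumably the sort of judgement any library version of this would have to make about loans versus holds versus ratings.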

Workshop 2:

Some random comments and notes from the second discussion session (from all groups)

One university’s experience was that just ‘putting it out there’ didn’t work – no one added tags to the catalogue; the conclusion was the need for community.

Cold-start problem: new content not surfacing with the sort of things being discussed here.

Is a subject librarian’s (or researcher’s) recommendation of the same value as an undergrad’s?

Will Library Directors agree to library data being released in the same way as Huddersfield’s, even though it is anonymised? They may fear the risks and issues that it could result in, even if we/they are not sure what those risks are (will an academic take issue with a certain aspect of the released data?).

At a national level, if academics used these services to create reading lists, it may result in homogenisation of teaching across the UK. There is also a risk of students’ reading focusing on a small group of items/books – we could end up with four books per subject!


This was an excellent event, and clearly some good and exciting work is taking place. What are my personal thoughts?…

This is one of those things that once you get discussing it you’re never quite sure why it already hasn’t been done before, especially with circulation data. There’s a wide scope, from local library services (book recommendation) to national systems which use data from VLEs, registry systems and library systems. A lot of potential functionality, both in terms of direct user services and informing HE (and others) to help them make decisions and tailor services for users.

Challenges include: privacy, copyright, resourcing (money) and the uncertainty of (and aversion to) change. The last one includes a multitude of issues: will making data available to others lead to a budget reduction for a particular department, will it create friction between different groups (e.g. between academics and central services such as Libraries and IT)?

Perhaps the biggest fear is not knowing what demons this will release. If you are a Library Director, and you authorise your organisation’s data to be made available – or the introduction of a service such as the ones discussed today – how will it come back to haunt you in the future? Will it lead to your institution making (negative) headlines? Will a system/service supplier sue you for giving away ‘their’ data?  Will academics turn on you in Senate for releasing data that puts them in a bad light? ‘Data’ always has more complex issues than ‘services’.

In HE (and I say this more after talking to various people at different institutions over the last few years) we are sometimes too fearful of the 20% instead of thinking about the 80% (or is that more like 5/95%?). We will always get complaints about new services and especially about changes. No one contacts you when you are doing well (how many people contact Tesco to tell them they have allocated the perfect amount of shelf space to bacon?!). We must not let complaints dictate how we do things or how we allocate time (though of course we should not ignore them; relevant points can often be found).

Large organisations – both public and private – can be well known for being inflexible. But for initiatives like this (and those in the future) to have a better chance of succeeding we need to look at how we can bring down the barriers to change. This is too big an issue to get into here, and the reasons are both big and many, from too many stakeholders requiring approval to a ‘wait until the summer vacation’ philosophy, from long-term budget planning to knock-on effects across the organisation (a change in department A means the training/documentation/website of department B needs to be changed first). Hmmmm, I seem to have moved away from TILE and on to a general rant offending the entire UK HE sector!

Thinking about Dave Pattern’s announcement, what will it take for other libraries to follow? First, the techy stuff: he has (I think) created his own XML schema (is that the right term?) and will be working on an API to access the data. The bad thing would be for a committee to take this and spend years to finally ‘approve’ it. The good thing would be for a few metadata/XML people to suggest minor changes (if any) and endorse it as quickly as possible (which is no disrespect to Dave). Example: will the use of UCAS codes be a barrier to international adoption? (Can’t see why, just thinking out loud.) There was concern at the event that some Library Directors would be cautious in approving such things. This is perhaps understandable. However, I have to say I don’t even know who the Director of Huddersfield Information Services is, but my respect for the institution and the person in that role goes about as high as it will go when they do things like this. They have taken a risk, taken the initiative and been the first (to the best of my knowledge) to do something like this worldwide. I will buy them a beer should I ever meet them!

I’ll be watching any developments (and chatter) that result from this announcement, and thinking about how we can support/implement such an initiative here. In theory, once (programming) scripts have been written for a library system, it should be fairly trivial to port them to other customers of the same software (the work will probably include mapping departments to UCAS codes, and the way user affiliation to departments is stored may vary between universities). Perhaps universities could club together to work on creating the code required? I’m writing this a few hours after Dave made his announcement and already his blog article has many trackbacks and comments.

So, in final, final conclusion: a good day, with good speakers and a good group of attendees from mixed backgrounds. I will watch developments with interest.

[First blog post using WordPress 2.7. Other blogs covering the event include Phil’s CETIS blog, and Dave Pattern has another blog entry on his talk. If you have written anything on this event then please let me know!]

JISC Library Management System Review

The JISC and SCONUL have just released a review (horizon scan) of Library Management Systems (LMS or ILS). Interesting stuff. A lot of it is to be expected, but there are some interesting findings:

  • Dire need to embrace web 2.0 and to get OPACs out of the 90s
  • The need to integrate with campus systems such as finance and registry systems.
  • Open Source is used within current systems, and as systems in their own right, the latter have no real penetration in the UK market and are unlikely to do so in the short term (lack of advantages), and are not currently more ‘open’ than other systems.
  • Most major systems are much of a muchness (they put it far more elegantly, though after glancing through 100-odd pages I’m not much inclined to find each quote), so there is little reason to change system at the moment. Especially as…
  • It is a good time for the role and definition of a LMS to be looked at.
  • We are already moving away from the ‘one big system’ approach, i.e. Aquabrowser, ERMs and metasearch are all separate products. We are almost certainly going to move further in this direction, and this is a good thing. It requires standards and cooperation for different systems to talk to each other.
  • The UK market is small compared to the global market, though not that different from the norm, except for the BL ILL server and our concept of short loans.
  • Libraries have not made good use of looking at how users work and interact with their systems.
  • There is a need and a movement to liberate data (silos and all that).
  • LMS may become a back of house system (though this should not be seen as a bad thing)
  • Recommends that libraries increase the value of their investment by implementing additional services around the LMS. Which is fine if these new services use standard protocols to interact with the LMS; if they use proprietary APIs or talk direct to the database then it is another thing to lock the library in to one provider (or at least another reason why changing LMS will have a large impact).
  • The procurement process is expensive overall, especially when LMSs are more or less the same-ish.
  • Encourages libraries to review their contracts (ie not changing system, just getting more – or better value – out of the current system with better contract).

One thing that interested me was the vendor comments (and something the report recommends to libraries) on the lack of consortia in the UK; it noted that other countries make good use of these (i.e. this is best practice). I can certainly see scope for this, especially as (like the report notes) the software has already been designed to handle consortia. But what would the consortia be? Geographic (London? South East? M25 group? Scotland?), or perhaps other groupings (Russell Group, 94 Group, CURL)? Or perhaps smaller groupings based on counties or similar institutions.

I’ll quote two bullet points from their recommendations:

  • The focus on breaking down barriers to resources is endorsed, involving single sign on, unifying workflows and liberating metadata for re-use.
  • SOA-based interoperability across institutional systems is emphasised as the foundation for future services and possibly the de-coupling of LMS components

The report also says “Libraries currently remain unconvinced about the return on their investment in electronic resource management systems.” Not really, we’re just waiting for a good one to come to market :)

I like a comment from one of the reference group “Since around 2000 there has been a growth in the perception of the library collection not as something physical that you hold, but as something you organise access to.” Nothing new, but nicely put.

The report can be found here [pdf]. It took ages to find where it is on the JISC website (JISC website? Confusing? Surely not!), but it is here – and the key item, the report itself, is at the very, very bottom of the page (why?). I originally found the report via Talis’ Panlibus blog.