random access memories

I’m often asked how I got into computing in a generation when neither IT nor computer science was on the school curriculum. So, I’ll try to fill in a few random gaps from some cobwebby parts of my memory …

It all started as a hobby. One of my earliest memories of using a computer was thanks to the North Kent Amateur Computer club and the host of hobbyists and enthusiasts that used to attend. It was the age of the kit computer, when the likes of the UK101 ruled supreme.

Compukit_UK101_Mainboard_s1

I think it was Thursday evenings after school when I’d turn up at some local venue — usually a spare school hall — to find a room full of enthusiastic, often bearded participants, all hunched over circuit boards and small TV sets acting as monitors.

From time to time one of them would pick up a soldering iron and make an adjustment to something, or unplug and restart the system or wallop a TV set to stop the picture jumping. These are the folks I’ve always thought of as “digital natives” — people who understand how to design and use digital technology — but that phrase seems to be (ab)used now merely to mean people who have grown up using digital devices as consumers.

I remember testing out some of the simple 3D green mazes and early scrolling games that tried to make it feel like I was speeding across a largely imagined landscape and into the distance. The members of the club were always unfailingly generous in letting me try out their latest hardware builds and hand-cranked software instead of merely dismissing me as another irritating spotty teenager from the local comprehensive avoiding homework with a much more interesting displacement activity.

Towards the end of my sixth form days, the first computer had turned up at the school — a Commodore PET.

pet3d

Ever since, I’ve regarded this as what a “real” computer should be like — with a built-in monitor and keyboard, and its own sophisticated storage system (aka a cassette tape deck). We were able to book the PET after school to use for something like an hour at a time — just about enough time to load something from the tape, suss out how it worked and then hand it over to the next person. We seemed to spend most of our time standing around it and working out how to win at the text-based version of the Star Trek game.

I remember at home we also ended up owning one of the earliest video game consoles — a Magnavox Odyssey. My mum bought it for us secondhand (I’m not quite sure how we managed to persuade her to do that, particularly given how tight money was). It seemed great fun for its time, complete with a light rifle and a set of screen overlays that would be held in place on the TV screen courtesy of the static charge.

magnavox-odyssey-ad

These imaginative static overlays turned our black and white TV into “colour” for the first time, with most of the games based on a variation of Pong. To change game, you removed one of the cards that came with the console, and inserted another one in its place. I remember using the rifle to take pot shots at the ball of light as it moved around the “haunted house” overlay, appearing momentarily at some of the “windows” cut into the static film. State of the art or what?

Like most of my early machines, sadly I’ve no idea what happened to the Odyssey over the years — lost in one of those many post-university moves between various rental properties in and around rundown parts of London.

My own first purchase was a Sinclair ZX-80, then the ZX-81, then the ZX-Spectrum.

Screen Shot 2014-10-29 at 07.32.04

Like many of my generation, I owe a lot to the pioneering spirit of Sir Clive Sinclair: thanks mate.

sinclair_zx80_hr_1s

At university, the Spectrum ended up in the communal room, used for gaming alongside our “artistic” pyramid built from dozens of empty beer cans assembled on the mantelpiece. The dusty two-bar electric fire that made this the only warm room in the house was at the end of a slightly melted extension cable which ran into the adjacent bedroom — the only room in the Stoke Newington house we all shared that had no electricity slot meter in it. (From what I remember, most of the house was wired via a spaghetti ball of cables back into that room and its free electricity supply …).

Initially, programming these early computers consisted either of copying in program listings from hobbyist magazines or of writing your own code. With the magazine listings there always seemed to be errors in the code, meaning it was necessary to buy the next month’s edition as well for the errata — unless I managed to work out for myself what lines had been missed or garbled in the meantime. Later came the use of cassette decks, a rather erratic and unpredictable way of loading programs — often failing only after 15 or 20 minutes or more, when the Play button on the cassette player would pop up and you’d realise the program had failed to load.

Later I moved on to a Commodore 64, a BBC Model B and then on to a “proper” computer — the Apricot F10. My programming efforts, which had started with BASIC, expanded to take in everything from assembly language (on both Z80 and 6502 processors) to Fortran, Prolog and Pascal.

act_apricot-f2_1

When I started work, the IBM PC had yet to dominate — it was not unusual to find an office where almost everyone was working at a different, incompatible computer. On one desk would be an Apple II, on another a Commodore and on another an Apricot. Apricot were doing well in the UK at the time, particularly in the public sector — their machines ran faster and more effectively at a lower price than their American cousins. Not that it helped them much in fighting off the competition …

… to be continued (possibly) at some future point …

[With thanks to http://www.computinghistory.org.uk and www.rewindmuseum.com for the assortment of images used in this blog]


more on the 1999 change of address demonstrator ….

I mentioned in a previous post the work done in the late 1990s to put online a change of address service.

This service enabled citizens to inform separate government departments, via the internet and in a single transaction, of a change of address. The two departments that took part in this work were the Inland Revenue (now part of HMRC) and the Department of Social Security (now DWP). The project never really moved beyond its live demonstration phase with a limited subset of citizens.

I’ve recently managed to source some additional screen grabs from that era, as below.

Missing from these is the stage where users were authenticated by a third-party digital certificate — in this case Royal Mail’s ViaCode or Barclays Bank’s Endorse (NatWest Bank had also been involved in some earlier work). It was this signing that helped confirm the user’s identity (a federated model similar to that currently being developed by the Cabinet Office’s Identity Assurance Programme).

The smartcard authentication method required a user to have a valid, pre-initialised smartcard with a recognised digital certificate present. The smartcard was inserted into the citizen’s smartcard reader before accessing the secure web site. This enabled the web browser on the PC (transparently to the user) to establish a secure session with the site using a trusted certificate. Once this secure session was established, the citizen was able to access the protected web site. Then, once the user had completed the web pages, the data were signed using the digital certificate.
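By way of illustration, here is a minimal modern-day sketch of that flow: a client-certificate (“mutual”) TLS session plus a digital signature over the submitted data. It is an approximation under stated assumptions (PEM files standing in for the smartcard, an invented URL and header, RSA/SHA-256 signing), not a description of the original 1999 implementation.

```python
# Illustrative sketch only, not the original service. Assumes the citizen's
# certificate and key are available as PEM files; the URL, header name and
# signing scheme are invented for the example.
import base64
import ssl
import urllib.request
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# 1. Establish a secure session using the certificate issued to the citizen
#    (in 1999 this lived on the smartcard; here it is a PEM file for brevity).
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.load_cert_chain(certfile="citizen_cert.pem", keyfile="citizen_key.pem")
opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=context))

# 2. The completed change-of-address details (placeholder document).
form_data = b"<ChangeOfAddress>...</ChangeOfAddress>"

# 3. Sign the data with the private key behind the certificate, so the
#    receiving service can verify who submitted it (assumes an RSA key).
with open("citizen_key.pem", "rb") as key_file:
    private_key = serialization.load_pem_private_key(key_file.read(), password=None)
signature = private_key.sign(form_data, padding.PKCS1v15(), hashes.SHA256())

# 4. Submit the signed data over the mutually authenticated session.
request = urllib.request.Request(
    "https://example.gov.uk/change-of-address",  # invented endpoint
    data=form_data,
    headers={"X-Signature": base64.b64encode(signature).decode("ascii")},
)
response = opener.open(request)
```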

The initial welcome screen.

coa1

Next was the screen for entering personal details.

coa2

Then the old and new address pages, with the addresses automatically validated against the Post Office’s Postcode Address File (PAF).

coa6 coa7

Followed by letting users decide which departments they wanted to notify of their change of address.

coa3

And finally, there was the summary and declaration screen.

coa4

After which the user would be presented with a confirmation page and a reference number to quote in the event of any follow-on enquiries.

coa5

The summary architecture for this service is shown below.

Change of Address 1999

XML was used as part of the government’s adoption of open standards for data and interfaces via the GovTalk initiative.
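For anyone unfamiliar with GovTalk, the sketch below gives a rough idea of the shape of a GovTalk-style envelope, with the service-specific XML carried in the Body. It is indicative only: the class name, body content and parsing code are invented for illustration rather than taken from the actual change of address schemas.

```python
# Indicative only: a rough sketch of a GovTalk-style envelope wrapping
# service-specific XML. The class name and body content are invented.
import xml.etree.ElementTree as ET

message = """<GovTalkMessage xmlns="http://www.govtalk.gov.uk/CM/envelope">
  <EnvelopeVersion>1.0</EnvelopeVersion>
  <Header>
    <MessageDetails>
      <Class>ChangeOfAddress</Class>
      <Qualifier>request</Qualifier>
    </MessageDetails>
  </Header>
  <Body>
    <ChangeOfAddress>
      <OldAddress>...</OldAddress>
      <NewAddress>...</NewAddress>
    </ChangeOfAddress>
  </Body>
</GovTalkMessage>"""

# Pull out the message class, as a receiving broker might, to decide routing.
ns = {"ev": "http://www.govtalk.gov.uk/CM/envelope"}
root = ET.fromstring(message)
print(root.find("ev:Header/ev:MessageDetails/ev:Class", ns).text)
```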

Postscript(s)

16.10.2014

Stefan Czerniawski points out that the above largely relates to the early-stage demonstrator — and that the later live pilot expanded to include more departments and was made available through third-party sites. See also his related blog here.


a tale of two countries: the digital disruption of government

Screen Shot 2014-10-14 at 11.39.40

My Australian colleague, Marie Johnson, and I have drafted a paper for this month’s Commonwealth Association for Public Administration and Management (CAPAM) conference being held in Putrajaya, Malaysia. It looks at government endeavours in the UK and Australia over the last 20 or so years to use technology to improve our public services.

You can download a copy of the paper (PDF) here — A Tale of Two Countries – Fishenden and Johnson.

Due to a diary conflict, I won’t be attending to co-present the paper, but Marie will be there to narrate and debate our ‘tale of two countries’.

Screen Shot 2014-10-14 at 11.35.55


updated Identity Assurance Principles for the UK Government

We’ve been making good progress at the Privacy and Consumer Advisory Group (PCAG) on reviewing the work of various government departments — everything from the Identity Assurance Programme (IDAP), to the “big data” work of the Office for National Statistics, to the “data sharing” proposals, to electoral transformation and other programmes.

I’d like to acknowledge the very open way that most of the government teams have engaged with PCAG — and that even where discussions may have become “full and frank” they have always remained constructive. The Minister for the Cabinet Office, Francis Maude, has also been very supportive of our work, part of the reason its scope has expanded considerably from our earlier focus on identity assurance.

Of course, as an independent advisory group we don’t have any “power” in the sense of a veto over the work of the various government departments — but in general most people we’ve engaged with have understood the sense in applying best privacy and security principles to their work, rather than leaving it full of holes or subject to large-scale public suspicion. It helps that the government’s Technology Code of Practice has as part of its Point 6 the requirement that “Users should have access to, and control over, their own personal data.” Indeed, some programmes — such as the NHS care.data programme — might have avoided some of their problems if they’d observed this policy in the first place…

We’ve just formally submitted our updated Identity Assurance Principles (.pdf) to the UK Government’s IDAP team. They will provide their public response in due course once they’ve had time to consider them and their impact on their work. These updated Principles follow on from PCAG’s earlier work, and our subsequent open consultation.


more work required: on ‘big govt IT’, ‘transactions’ and the future of public service design

I posted online recently some headline stats comparing the relative scale of UK banking transaction volumes with UK government transaction volumes. They sparked a healthy debate about the nature of ‘transactions’ and the complexity of processing required of a transaction embedding a complex welfare claim form relative to one for a simple financial exchange.

Neither the estimate of total UK banking transactions per annum nor the estimated number of UK government transactions in my original post seems reliable (one commentator suggested that for just one of the government services the true figure was some 7x greater than that shown on the Govt Transactions Explorer). There is also some inevitable double-counting: something may start as an HMRC payment and then become a banking transaction, so there’s a degree of mutual inter-dependence in such figures.

After much internal debate, I decided to pull that original infographic — to prevent the propagation of something potentially misleading without a proper context. Irritatingly, LinkedIn removed all the subsequent comments too — if I’d known that, I’d have left it there. Mea culpa and many apologies to those who contributed: luckily I was keeping a summary of the comments since they raised many useful points which I wanted to capture, so they are not entirely lost.

So, important lesson learned … and to return to my original point, which was about the relative scale of government IT compared to what is happening in other areas, comparative stats like these in my replacement graphic make a point about scale whilst still skirting the issue of comparative transactional complexity:

transaction stats

Or on the sheer scale of what the overall internet is now handling (or, at least, was handling in 2012…):

Screen Shot 2014-07-12 at 10.51.17

(Source: mashable.com/2012/11/27/email-stats-infographic/. Retrieved: 12.07.2014)

Usefully, within the 1.15bn HMRC transactions of the first graphic above, the Transactions Explorer lets us drill down to see what ‘transactions’ make up this total. And to drill down again into each specific service. It’s a very useful tool — we need to see much more of this transparency and insight. Letting the sunshine in should also help resolve issues of associated data quality — addressing the points raised that some of the data appears to be underestimating what actually currently happens.

My original point was not intended to be specifically about the nature of transactions (which after all range through the gamut of ISO 20022 domains of Payments, Securities, Trade services, Cards and FX to the GovTalk Envelope [PDF] ), but about the scale of government IT in the digital era. Implicit was also a much bigger question about whether, with a proper data architecture and redesigned public services, many of these transactions are even necessary — combined with the question of how best we architect it all (an issue usefully discussed in this blog post by Stefan Czerniawski).

Many current government ‘transactions’ are merely automated versions from the old paper world, moving electronic versions of forms from one place to another — either literally, or by mimicking the form online in a series of interminable web pages that ape the paper world. We can throw all the tin and software we like at these ‘digital forms’, but it’s not going to do much to improve the quality, efficiency, or relevance of the services involved.

The more challenging issue is how we ensure these processes and services (and indeed the organisations behind them) are re-thought and redesigned for the digital age. In the same way that distributing a text document to multiple contributors and then trying to reconcile all their comments and changes (now scattered across multiple forked copies of the original document) is being superseded by a model where documents are collaborated on online in a single place and not sent around at all, there is enormous scope for a smarter data architecture. One that moves away from mirroring the capture or flow of online equivalents of paper documents to one oriented around data and capturing the delta for specific services — such as welfare or tax — rather than the entire data set time after time for each service and functional business silo.
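To make that concrete, here is a toy sketch of the “capture the delta” idea: only the fields that have actually changed are passed on to each service, rather than the whole record every time. The record structure and field names are invented purely for illustration.

```python
# Toy illustration of delta-based updates: send on only what changed,
# not the citizen's entire record. Field names are invented.
held_record = {"name": "A Citizen", "address": "1 Old Street", "phone": "0123 456789"}
updated_record = {"name": "A Citizen", "address": "2 New Road", "phone": "0123 456789"}

delta = {k: v for k, v in updated_record.items() if held_record.get(k) != v}
print(delta)  # {'address': '2 New Road'}: the only data each service needs
```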

So yes — to start with ‘transactions’ or technology being used to automate what is there is to start in entirely the wrong place. Many ‘government transactions’ (and potentially some government organisations and agencies) could be dramatically improved, or perhaps even obsoleted entirely, with better designed public services. As I’ve commented before, the poor design of many public service processes and associated paper forms is as socially unacceptable as the same poorly designed services delivered onto a screen. So some of the more complex ‘transactions’ that people commented upon — such as case management work — raise the wider question about the overall design of public services, and why such things are being sent around in the first place, effectively using technology to fossilise the way things were done in the paper age.

I started from the perspective that whilst governments were once often amongst the foremost users of ‘big’ technology (in terms of scale and speed), others have now claimed that crown. More importantly, that government could learn from the best of what has happened elsewhere — and use technology as a lever to redesign our public services, not merely to automate them in their current state. That’s part of the narrative I was discussing in my recent CIO piece ‘The birth of the composable enterprise’.

Improving our public services requires the re-evaluation, redesign and re-engineering of the organisations behind them at every level – people, process, technology and governance. This was implicit in the comment on the LinkedIn (RIP — grrrr, lesson learned) thread: ‘Maybe the question is how many should the UK Govt be doing and not looking at what they are doing’. Those who commented that the debate about ‘transactions’ is ‘one dimensional’ and missing more important wider issues are entirely right: it’s time we had that wider and much more difficult debate — something Mark Thompson for example raises in his ComputerWeekly piece ‘Where is the long-term political vision for digital public services?’.

 


high level cross-government architecture — 2003 style

I tweeted recently a couple of old IT architectural schematics from 2003. They provide an interesting (historical) perspective on how to bring existing government systems into the world of online services.

Following a few requests, I thought I might as well follow up by blogging in a bit more detail about those schematics here — given the subject remains highly topical.

Back in 2003, the aim was to reach consensus on a cross-government technical architecture that would provide the right balance between centrally provided and department-provided components. At the time, the open standards of e-GIF (the e-Government Interoperability Framework) prevailed, and the default data format for inter-system interoperability was XML (the Extensible Markup Language).

The “Vision” from 2003 is shown below.

online service vision 2003

The high level architecture behind this vision is shown below.

conceptual x-government view

It’s fairly self-explanatory at this level — with several core elements shown down the right-hand side (management and operations; data interoperability [XML]; security framework; metadata framework), and four key technical layers to the left: data sources; data access; business logic and workflow; and UI components and processes.

This is expanded a little more in the view that follows.

x-govt conceptual more detailed

The top tier makes clear the multi-channel strategy and the way government services were envisaged as being delivered through a whole range of different organisations: from government to businesses to the voluntary sector, and through a variety of devices.

Web services were to provide the common, open standards for how the various online government services could be delivered via public interfaces. At the “common services” tier a range of modular components (from authentication to payments to notifications) were to exist, providing a flexible way of pulling common components together in the design of frontline services. Behind this, a similar open layer of web services existed which would bridge XML to the proprietary integration mechanisms needed to connect into existing systems — be they public or private sector — used in the design and delivery of online services.

For this approach to work, both departments and the centre needed to agree standards for the web services to be exposed by departments. The integration between XML and existing systems also needed to be resolved to ensure consistent delivery. There would need to be a way of reliably orchestrating processes both at the central government level (for cross-government orchestration of multi-departmental services) and at the department level (cross-system).

At the centre, the Government Gateway (which became an umbrella term for a range of component services — primarily identity and authentication, transaction orchestration, payments, and departmental integration services) acted as a broker, ensuring all calls were made through the same common architecture and endpoints. This included defining common standards in areas such as naming conventions and error-handling responses, together with the XML schema, metadata and so on of the actual SOAP (Simple Object Access Protocol) methods and calls used for the interaction of content and services.
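As a flavour of what such calls looked like in practice, here is a minimal SOAP 1.1-style request of the kind a portal or department might have made to a central broker. It is illustrative only: the endpoint URL, namespace and method name are invented for the sketch rather than taken from the actual Government Gateway interfaces.

```python
# Illustrative only: the general shape of an XML-over-HTTP (SOAP) web service
# call of that era. The URL, namespace and "SubmitTransaction" method are
# invented for this sketch, not real Government Gateway definitions.
import urllib.request

soap_envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <SubmitTransaction xmlns="http://example.gov.uk/gateway">
      <Department>IR</Department>
      <ServiceClass>ChangeOfAddress</ServiceClass>
      <Payload>...</Payload>
    </SubmitTransaction>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    "https://gateway.example.gov.uk/submission",  # invented endpoint
    data=soap_envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.gov.uk/gateway/SubmitTransaction",
    },
)

with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```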

At what has often proved the most intractable and complex layer — that of bridging the gap between existing systems and online services — three elements came into play:

  • custom adaptors (specific integration tools for e.g. a mainframe)
  • web services interfaces surfaced via the adaptors and exposing data and methods through native XML/SOAP
  • associated process logic to ensure data and application integrity

backend integration

This approach enabled a variety of existing systems to be bridged into the open, web services world. Of course, this was meant to be a transitional stage, not something to fossilise and preserve existing systems, enabling them to live on forever. It was intended to be a pragmatic way of taking early benefits in a massively diverse brownfield environment whilst a parallel programme could begin to re-engineer and re-architect backend processes, data, systems and their owning organisations into something better suited to twenty-first century government services.
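To make the adaptor idea a little more concrete, here is a toy sketch of the pattern: a thin wrapper that calls whatever proprietary interface a legacy system exposes and surfaces the result as native XML for the web services layer. The class, methods and fields are invented for illustration; the real adaptors were products or bespoke middleware rather than a few lines of Python.

```python
# Toy illustration of the adaptor pattern described above: wrap a legacy,
# proprietary call and expose its result as XML for the SOAP/XML layer.
# Names and fields are invented; this is not how any specific adaptor worked.
import xml.etree.ElementTree as ET


class MainframeAdaptor:
    """Bridges a (pretend) mainframe call into native XML."""

    def fetch_citizen_record(self, reference: str) -> str:
        legacy_record = self._call_legacy_system(reference)

        # Re-shape the proprietary record as XML, applying any process logic
        # needed to keep data and application integrity.
        record = ET.Element("CitizenRecord", attrib={"ref": reference})
        for field, value in legacy_record.items():
            ET.SubElement(record, field).text = value
        return ET.tostring(record, encoding="unicode")

    def _call_legacy_system(self, reference: str) -> dict:
        # Stand-in for a terminal-emulation or file-transfer call into the
        # real backend system.
        return {"Surname": "Example", "Postcode": "SW1A 1AA"}


print(MainframeAdaptor().fetch_citizen_record("AB123456C"))
```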

backend transition

This secondary stage foresaw the assessment, transformation, re-factoring and web-enablement of these older systems — with an end goal of removing older backend systems and disaggregating and componentising them to enable departments to redesign and improve their services free of the restrictions of their inherited IT estate.

The illustration below shows conceptually how this architectural approach would enable the central orchestration of a service across multiple departments, with each of those departments in turn tackling local integration across their multiple backend systems.

x-govt orchestration

An alternative view of the central components/local integration model is shown below, illustrating the role of common, re-usable components in the overall architecture and service design model.

common components

A more detailed breakdown of the layers of the model is shown below.

detailed layers

Returning to a higher level perspective, the schematic below shows how the various components comprising the Government Gateway provided the realisation of some of this vision.

GG enablers

Note the existence of a “virtual department” to the right of the picture: this was a major concept at the time. Rather than trying to fix all of the historic issues of existing systems, data, processes and organisational hierarchies, the proposition was to create ‘virtual departments’ that would provide a way of exposing new services built around citizens’ and businesses’ needs rather than merely projecting the existing departmental service structures onto the internet. These would enable the development and design of better services — but, over time, they would also build out the future of government, at which point the existing systems could be switched off. More ambitiously, they would also enable the potential reconfiguration of government itself as it would no longer be tied down by information systems that had fossilised the historic business units and hierarchical functional silos of the departments and agencies in which they had been designed and deployed.

Some 11 years on, there remains considerable healthy debate about the best architectural models for government — across business, information and technical levels, and indeed the wider organisational configuration of government itself. At the polar extremes are opposing technical perspectives on the merits of emergent solutions and approaches versus imposed blueprints, and of centralised versus federated models.

As with most of these things, neither extreme in itself will prevail: good systems, successful technology and well-designed user services tend to involve a shifting blend of models and approaches. But hopefully I’m not alone in finding it useful to document the journey we’re on — and the road already travelled.


20 years of “online government” 101. Part 4: approaches to social inclusion

This is part 4 in my occasional blog summarising the past 20 years or so of UK efforts to move government online. The previous parts provided summaries on progress towards a single online presence, a high-level summary of the overall architectural thinking and a look at approaches to identity.

In this one, I’ll take a similar (and equally arbitrary) whistle-stop tour of some of the main developments around the topic of social inclusion/exclusion related to the use of information technology. It sketches in a few more details behind my CIO article ‘Truly digital social inclusion‘ — and like my other blogs, makes no pretence at being comprehensive.

Much of this debate orbits around self-evident distinctions made between the public and private sectors: in particular, that the private sector can decide on its target audience and be selective (if it wishes) about with whom it chooses to interact. It may for example choose to target only a specific segment of a market (the rich, the young, the gullible, etc.). The public sector however provides universal services, potentially available to us all. With that exclusive, monopoly-provider status comes enormous responsibility — given that it’s not possible for citizens to obtain most public services elsewhere.

e-govt-POST

This was recognised back in 1998, in a review of government use of IT by the Parliamentary Office of Science and Technology:

“… government and business have different motives and constituents, so it would be naive to expect the applications of ICT in business to be mirrored exactly in government. There are also a number of areas of concern over the potential wider use of ICT in government, including issues such as privacy, vulnerability of a public electronic infrastructure to crime, acts of war and terrorism, potential abuses of civil rights, and social cohesion versus social exclusion. How to gain the benefits of ICT in the public sector while avoiding the pitfalls is an important policy question for Departments, Government as a whole and Parliament.”

Ever since the first efforts to use technology to put government services online in the 1990s, there’s been a political focus on the concept of social exclusion caused by what has been termed a ‘digital divide': public services need to be available to all, and yet with the increasing adoption of technology in all aspects of our daily lives, the concern is that some less tech-savvy citizens are becoming, or will become, disadvantaged.

Government Direct

The 1996 Government Direct green paper made clear that it was aware of and intended to tackle this issue:

“All of the services will be accessible and easy to use. They will be available via terminals, either in the home or in convenient public places such as libraries, post offices and shopping centres. And they will be available alongside a full range of other services, including Citizen’s Charter information, thus providing an electronic “one-stop-shop” for Government. They will provide interactive guidance as users work through questionnaires and forms, making them simpler and quicker to use than paper-based forms. The services could also be available over an extended working day and at weekends, and for 24 hours a day, seven days a week where appropriate. Responses will be as near to immediate as practicable, and where an immediate response is not available, it will be possible to obtain electronic reports of progress. The services will be linked so that it will not normally be necessary to tell government the same information (for example, about a change of address) more than once.”

Touchscreen kiosks in public places were seen as one of the main ways in which access to online services would be made universal, even to those without access to technology in their homes or workplaces. The plan was that they would be found in public places, from libraries to Post Offices to Job Centres to banks and supermarkets. In hindsight, many of these earlier government documents seriously underestimated the speed and spread of the internet, and in particular the growth of mobile devices as a means of access in place of earlier assumptions about PCs and fixed line connectivity.

The review of government use of IT by the Parliamentary Office of Science and Technology in 1998 recognised the

“important role for Government in stewarding the development of an inclusive information society. A central recommendation is that local community ‘resource centres’ should be established, providing a publicly accessible means of conducting business electronically. Clearly, Government would be an important provider of information and services through such an infrastructure — and might be by far the most significant one in the case of disadvantaged communities.

Government’s role [is] as both potential contributor to and mitigator of this problem. Here, the two main factors are access … and behaviour … Thus, while Government has several options for providing a range of methods of access to reach every sector of society, these would be wasted if people don’t actually use them and exclude themselves from society. An interesting dimension to the issue of information ‘haves’ and ‘have nots’ is the potential scenario that some have suggested of government itself being an information ‘have not’ and thus incapable of acting to safeguard the interests of the wider population against the minority of ‘haves’”

Methods of access were foreseen as spanning:

POST table

One of the most comprehensive reviews of online users’ needs and social inclusion was the ‘View from the Queue’ study and report of 1998, which appears to be one of the few government papers to have conducted extensive citizen and business research in order to inform its conclusions.

View from the Queue

Importantly, the ‘View from the Queue’ recognises a simple reality that often seems to have been overlooked:

“Services can, of course, deploy technology in other ways that do not impact on the customer at the point of interface.”

However, at times there seems to have been a less valid view of technology, one that miscast its role as solely the ‘screen-based’ delivery of services, rather than its more important role in re-engineering the processes, systems, structures, organisations and role of government. Such sidelining of technology into a superficial presentational role does little to help inform the underlying topic of social inclusion and how better to design public services to meet their universal requirements. The ‘View from the Queue’ however set out a more inclusive and comprehensive range of improvements that could impact on social inclusion, including:

    • simplifying procedures and documentation
    • reducing time taken queuing or waiting
    • minimising referrals between officials
    • eliminating interactions which fail to yield outcomes
    • extending contact opportunities beyond office hours
    • improving relationships with the public

It also foresaw the potential for electronic government services to improve four key areas:

    • speed of carrying out transactions
    • convenience/access
    • flexibility in options and hours of service
    • empowerment (bring services closer to the public and allowing them to choose how/when to carry out transactions).

It also sought to allay concerns about technology and how it will be used by government … by:

    • ensuring ‘confidentiality’ or privacy in interacting with government
    • providing safeguards against fraud or computer hacking
    • providing guarantees about government’s use of information
    • providing assistance and support to users

One of the many surveys it conducted examined how likely people were to use online government services:

View from the Queue survey

16 years later, the recent GDS-published survey provides an interesting comparison:

GDS survey

Different elements of the ‘View from the Queue’ research indicated that widespread public confidence in new services would only be achieved by:

  • improving existing services or offering benefits to users that they do not get at present. Both the qualitative research and interviews with large businesses point out that there is little point in merely replacing existing services/transactions with a new electronic version, described by one qualitative respondent as potentially ‘moving the queue from the counter to a kiosk’
  • allaying concerns about technology and how it will be used by government.

Its qualitative research identified a number of improvements desired in regard to existing services:

  • the simplification of procedures and documentation (e.g. forms), where this is possible
  • reducing the time taken queuing or waiting and the amount of referral between different officials or offices and trying to eliminate interactions which do not lead to an outcome
  • greater flexibility of means of making contact and greater opportunities for contacting government outside of ‘normal’ office hours
  • improving relationships with the public; in particular there is a feeling that services are currently set up to suit the government’s needs, rather than the public’s, and this can lead to a sense of powerlessness. The Desk Research confirms that this is a widespread perception of many public services.

There’s also a whole range of qualitative feedback and comments, which include the following nuggets:

“Filling out forms was felt to be particularly complicated and time consuming”

“Contacting government departments by telephone was described as being lengthy, frustrating and sometimes costly. The respondents described being held in telephone queues, passed to several different departments and not obtaining answers to queries as being particularly frustrating.”

“In dealing with a person face-to-face it was claimed that one would have to wait in a queue, often in a post office or government offices, e.g. DSS office. The DSS office in particular was described as being a particularly undesirable place to queue.”

“The lack of accountability, i.e. no one person taking responsibility for queries/applications etc.”

“Whilst the respondents claimed that it was always appealing to save money, it was not of importance towards use. The respondents felt that the most important factors towards use would be to offer something that was easier and quicker than the existing method.”

“The existing transactions with government were seen as being complicated and time consuming. In some cases, respondents described feelings of humiliation and irritation with regard to previous dealings with government.”

“Interestingly, respondents were surprised that with the amount of technology available, the application procedures were still very lengthy. These comments centred around the technology supporting government staff, ie their own computer systems, rather than the electronic government offering.”

Possible solutions to such (long familiar) issues included:

“The need to provide a service, with particular reference to accountability of staff, i.e. one person dealing with a query rather than being dealt with by several people and departments.”

And direct feedback from respondents includes the following statements:

“Simplify it. Ninety per cent of forms are not user friendly. Most forms are designed for lawyers and accountants.”

“It is so unprofessional, maybe they should link up to computers, it is behind the times.”

“… electronic government services could ‘free-up’ staff time to deal with queries of a more complicated and sensitive nature.”

“I think with all this technology and they still can’t manage to do this (obtaining a passport quickly).”

For those who had a problem with their last contact with government services, the two top reasons cited were:

    • Staff were not helpful/lacked knowledge
    • Staff were slow in dealing with the transaction

Over half of benefits claimants found it difficult to fill in the forms, with nearly half saying they needed help to fill them in. Half found communicating in writing difficult and half found filling in forms difficult.

“Face-to-face or telephone contact was perceived as being easier than written communication and form filling (there are issues of literacy here that are not explored in the research). Eighty-five per cent found it easy to communicate face-to-face and 70 per cent found it easy by telephone.”

Portal Feasibility

In 1999, the Portal Feasibility Study made some more specific recommendations:

“The Portals must support Government policies for social inclusion and therefore a wide range of channels will be needed which will collectively appeal to all sectors of the user community.

From the channel media perspective, potential ‘portal’ delivery channels were categorised as:

    • Direct electronic channels, for example internet access through a customer’s PC, interactive television or kiosk
    • Voice telephony channels where the customer contacts a call centre agent by telephone who is able to communicate with the Portal using a direct electronic channel
    • Face-to-face channels where the customer interacts directly with an agent who is able to communicate with the Portal using a direct electronic channel, for example with a Post Office counter clerk or Bank teller.”

The UKOnline initiative from around 2000 made a concerted effort to address issues of social inclusion, investing substantially in areas such as PCs in libraries and potential partnerships with Citizens Advice Bureaux to ensure there were local access points, and in UKOnline centres aimed at helping improve the general skills and capabilities of citizens. The legacy of these initiatives survives in the network of community internet access points called the ‘UK Online Centres network’, now run by the Tinder Foundation.

e-govt strategy 2000

In 2000, ‘e-Government: a strategic framework for public services in the information age’ commented:

“The transformation of the way government and citizens interact must be an occasion for increasing social inclusion. It will be an opportunity to address disadvantage which arises from geographical location, to improve communications and employment opportunities. The Government is committed to reducing the digital divide, through the policies developed by the Social Exclusion Unit; through IT learning centres; and in its commitment to improving IT skills and access through the National Grid for Learning, the National Learning Network, learndirect and the Library Network. There are many local programmes in support of these aims too. But spanning the digital divide means more than skills and access, and it has to be accepted that some citizens will not want or will not be able to be direct users of new technologies. That does not mean that this strategy has nothing to offer them. New technology can support better face to face and telephone transactions as well as direct interaction online. A challenge for the public sector will be how to free up staff from internal processes in order to offer more effective interactions, and how to provide front line staff with the skills, information and equipment they need to act as intermediaries in this new environment.”

The bold above (my emphasis) holds true now — and provides insight into the true potential offered by digital, not merely the simplistic notion of serving up existing services onto a screen. This is precisely why we need to ensure the move to digital gets it right where previous initiatives failed: it reflects the more fundamental issues that the Parliamentary Office of Science and Technology report highlighted in 1998, namely that where IT has been deployed extensively in government

“… this has tended to involve the automation of existing manual procedures based on the movement of paper, and has not reflected the major shift in management practices seen in the commercial world where IT has been used to move away from functional business units and to re-structure organisations around the processes that support the core business.”

It’s clear that social inclusion has been a concern at least since the 1990s and the first attempts to move government services online. But this narrow association with purely technological aspects has at times diluted the focus on the underlying causes of social exclusion — notably the way public services are designed, operated and delivered across multiple channels. As my recent CIO article argues, social inclusion needs to be addressed in the round — across all delivery channels — not become distorted by an isolated obsession with digital inclusion that treats on-screen delivery as merely another channel for public service delivery.

There are also wider aspects that can be neglected in the move to truly digitally designed and operated public services. For example, the social issues that arise as government begins to manage information better. Take an example such as the potential that exists to provide real-time data to enable detailed geographic mapping of where taxes are generated and welfare disbursed. Poorly managed, certain communities or areas could be stigmatised by such developments (part of this debate started to happen when crime maps first began to be published online) — another reason issues of social inclusion/exclusion need to be considered holistically, not in fragments.

I’ll conclude this post with a paraphrased quote from my CIO piece:

Tackling social inclusion requires the realignment of the entire life-cycle of our public services around citizens’ needs … this important topic must not become sidetracked into a narrow focus on ‘screen-based’ service delivery: the opportunities offered by digital reform can enable the delivery of meaningful, socially-inclusive improvements to the design and operation of our public services – across all of the delivery channels that citizens and businesses use.
