Notes from GDS Tech Talks – Interoperability and Open Standards – 2020-02-03

These notes are just an attempt to capture some of the key points from the talks. I haven’t supplemented with anything personal so everything here should be considered my (potentially incorrect) interpretation of the speakers’ content.


“What’s new in open standards / open web” by Daniel Appelquist (@torgo), Samsung Internet & W3C

Daniel has kindly shared the link to his slides here:

Talk from Daniel Appelquist (@torgo). Dan does a number of things in the open standards space; he works at Samsung Internet (an Android web browser with a strong focus on user privacy), and he is a member of the W3C Technical Architecture Group (TAG).

The TAG ponders deep questions about the design of the web, publishes findings, and helps coordinate cross-technology architecture developments.

Some of the new things coming along are:

  • Connections between the web and your device e.g. WebNFC, Web Bluetooth, WebUSB, Serial API, Context API(?)
  • SMS Receiver API, Contacts API, Clipboard Access API, File System APIs…

New features that make the web able to develop a richer experience:

  • Service Worker, Manifest File, Push Notifications
  • Badging API (to enable an icon on desktop / home screen to show some richer information e.g a number that is programmatically set)
  • WebAssembly
  • WebGL
  • WebXR (bringing augmented / virtual reality use cases to the web)
  • Web payment

New layout capabilities:

  • CSS Grid
  • “Houdini” APIs (the next step in extending style, enabling you to script style within the JavaScript layer. “Houdini” in quotes as it’s the name of the ‘project / team’, not necessarily the official name of the APIs)

New comms capabilities:

  • WebRTC, WebSockets, Streams, Web Transport

Enhancements to Web Security:

  • Feature policy, Signed Exchanges & Packaging

Progressive Web Apps
Intended to be inherently multi-device, multi-OS, multi-browser. Grew out of a movement to make sure web apps on your phone are available offline, and have more of the features that people expect from native applications.

What makes the web ‘open’?

  • Built on open standards
  • Based on user needs
  • Transparent and open process
  • Fair access
  • Royalty Free
  • Compatible with open source
  • Multiple implementations

What makes a standard open?

  • Collaboration between all interested parties, not just individual suppliers
  • A transparent and published decision-making process that is reviewed by subject matter experts

Wide Review

  • The open standards process itself – call for reviews, public comment period…
  • GitHub issues in the repo where the standard is being worked on
  • Discussions in browser-specific or engine-specific mailing lists (e.g. Chromium)
  • Review by vertical groups in W3C (Accessibility, Internationalisation, Privacy)
  • Review in standards groups outside W3C
  • TAG review
  • People talking on the internet

Ethical Web
How do we build things that are not only subject to strong technical and business review, but are also ethical?

Rules that we set for ourselves, encoding certain moral values into rules that guide conduct, e.g. the Hippocratic Oath (“Do no harm”).

Lots of examples of Tech Ethics out there:

ACM Ethics

Google AI Ethics

Mozilla Manifesto

  • should enable civil discourse, human dignity, individual expression

“The web already encodes ethics.”
e.g. Accessibility (A11Y)

  • WCAG
  • ARIA
  • Silver
  • Upcoming – AOM

Internationalization (I18N)

  • Make it possible to use Web Tech with different languages

Privacy & Security

  • A workshop called out that “Pervasive Monitoring is a Threat to the Web” – resulted in a call to move the web to HTTPS
  • W3C Privacy & Security Questionnaire – could we adopt this for health?

Last year, Dan and Hadley Beeman worked on W3C TAG Ethical Web Principles.

Some bits from that:

“The Web Should Not Cause Harm to Society.”

There are lots of examples of this already:
– Filter bubbles
– Click-bait
– Fake news
– Addictive behaviours

The web must support healthy community and debate.


  • Harassment
  • Algorithms that prioritise “engagement”
  • Content warnings

“The web should prioritise the needs of the marginalised over the comfort of the privileged”

The web is for all people.

The web must enable freedom of expression…
– …as long as it doesn’t contravene human rights
– Passive monitoring and surveillance can have a chilling effect
– Freedom of expression does not mean free speech everywhere
– Hate speech, harassment, or abuse may reasonably be denied a platform.

The web must make it possible for people to verify the information they see, e.g.

  • Credible web activity
  • Extensions that verify information or verify sources (Dan uses “NewsGuard”)
  • Data journalism

Web must enhance individuals’ control and power:
Consider harmful use cases such as:

  • Stalkerware
  • IoT used by abusive partners

The web must be an environmentally sustainable platform:

The web is transparent:

  • “View source”
  • Transparency enables:
    • learning
    • research & audit
    • extensions

The web is multi-browser, multi-OS, and multi-device:

Multiple implementations help to keep the web open
People should be able to render web content as they want.

How do we apply these principles in real life?

  1. Start with web – “A responsive website is usually the best way to build a service that will work on whatever device or browser they choose to use.”
  2. Respect people’s privacy
    • Do you really need that info?
    • Consider GDPR
    • Don’t collect gender information unless you really need it.
  3. Use permissions-requiring features, & especially notifications, sparingly.
  4. Use the URL
  5. Test in multiple browsers
  6. Design for marginalised groups and communities
    • Tools for reporting harassment.
    • Features that discourage negative behaviours
    • Features that encourage good behaviours:
      • Content warnings
      • Text descriptions
  7. Respect humans and their choices
    • Let people delete their accounts
    • Don’t fingerprint people (or their browsers)
  8. Get involved in web standards
  9. Join a discussion on a new technology / standard proposal

“Intro to OpenAPI” by Lorna Mitchell (@lornajane), Nexmo

APIs are the engine of modern software development. API descriptions power up our API workflows.

Spec-first API Design

Who writes API specs?

<some missed notes>

Having tech writers on the team makes a massive difference. Tech writers have to be incredibly sharp and concise.

Should I write API descriptions for my existing APIs?

Yes – retrofitting can be painful, but helps highlight places where you’re making mistakes you shouldn’t be in your existing APIs.

API Description Languages

There are some proprietary API description languages e.g. API Blueprint (from Apiary) and RAML from Mulesoft.

OpenAPI is an open standard (formerly known as “Swagger”).

<Lorna did a walkthrough of an OpenAPI specification example which I didn’t attempt to note>

API descriptions are not just important for understanding your APIs, they are going to enable your users to discover your APIs.

OpenAPI allows you to define components once, and refer back to them from multiple endpoints within the API specification.
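As a rough illustration of that reuse (my own sketch, not an example from the talk – the `Error` schema and paths are invented), a spec can define a schema once under `components` and point at it from several endpoints with `$ref`:

```python
# Minimal sketch of OpenAPI component reuse. The schema and paths here
# are invented purely for illustration.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Example API", "version": "1.0.0"},
    "components": {
        "schemas": {
            "Error": {  # defined once...
                "type": "object",
                "properties": {
                    "code": {"type": "integer"},
                    "message": {"type": "string"},
                },
            }
        }
    },
    "paths": {
        # ...and referenced from multiple endpoints via $ref
        "/users": {"get": {"responses": {"400": {"content": {
            "application/json": {"schema": {"$ref": "#/components/schemas/Error"}}}}}}},
        "/orders": {"get": {"responses": {"400": {"content": {
            "application/json": {"schema": {"$ref": "#/components/schemas/Error"}}}}}}},
    },
}

def resolve(ref: str, document: dict) -> dict:
    """Follow a local '#/...' JSON reference within the same document."""
    node = document
    for part in ref.lstrip("#/").split("/"):
        node = node[part]
    return node

schema = resolve("#/components/schemas/Error", spec)
print(sorted(schema["properties"]))  # -> ['code', 'message']
```

Real tooling resolves these references for you; the point is simply that the error shape lives in one place and every endpoint stays in sync with it.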

How do you edit OpenAPI specs?

They are just text-based specs, so you can use your text editor. Lorna uses Atom with a plugin (possibly still called “Swagger”).

Stoplight Studio is a free-to-use IDE for working with OpenAPI specs. There is a desktop and web-based version.

The Spectral linting tool lets you lint your OpenAPI specs using pre-defined rules.

What can you do now?

Generate Documentation – use your OpenAPI spec as the source for your API reference documentation

  • There is a choice of tools you can use to do this
  • The content and presentation of your documentation is completely separate
  • The OpenAPI spec has support for examples which is great for enhancing your documentation

Redoc is a documentation-generator that builds from OpenAPI specs

Postman lets you import the OpenAPI spec into a Collection, with requests already set up for you, making it easy to create requests to support debugging etc.

Prism lets you turn an OpenAPI spec into an API server

(The OpenAPI spec has recently had Webhooks support added – will be in the 3.1 release. Lorna expects Prism to support this pretty quickly.)

Code Generation – OpenAPI Generator – lets you take an OpenAPI specification and generate lightweight API wrapper client code in your chosen language. It’s a quick way to create sample client code for your APIs.

Finding OpenAPI Tools – a community listing of OpenAPI tools – you can submit contributions and additions.

What lessons has OpenAPI learned from WSDL and SOAP?

  • The ability to quickstart developers – SOAP provides the WSDL that developers could just feed into their dev stack and automate lots of the work. OpenAPI is building on this concept, but adding lots of new stuff.

Why haven’t you mentioned Swagger?

Swagger is more than one thing. It is a tradename for the toolset from which the OpenAPI specification emerged. Formerly, the specification standard itself was called ‘Swagger’ – this was then renamed to OpenAPI 2.0.

Swagger still provides tools for working with APIs, and their tools support the OpenAPI specification.

Does OpenAPI handle GraphQL?

Not really – OpenAPI is really REST-ish. If you’re working with GraphQL, then you’re best off using the tools that are provided for that.

OpenAPI isn’t necessarily aiming to converge with GraphQL world, as it’s not a great fit in terms of paradigm.

Does OpenAPI make recommendations on where the descriptions should be hosted?

There are no official recommendations on where you should host the descriptions (as in which path to host the documentation on). Lorna feels like OpenAPI probably wouldn’t recommend hosting your descriptions on the API paths themselves.

“API First for the humans and the machines” by Kin Lane (@kinlane), Postman

Going to talk a bit more about the governance around API development.

It’s important to set the stage when talking about APIs – Kin focuses on web APIs (as opposed to browser APIs, hardware APIs etc.)

The latest wave of web APIs was set into motion by eCommerce giants like Amazon, Ebay etc.

Following that, startups began seeing APIs being more than just about ecommerce, but about social interactions.

In 2006, a big step was when Amazon realised you could deliver infrastructure using APIs.

Why are we doing web APIs?

Originally, the commerce giants had data that they needed to make available to people on the web. APIs were being used to publish and syndicate content to websites. APIs allowed you to publish the data to websites and mobile applications directly from the data source.

We also saw the creation of widgets, allowing people to take that data and make it available embedded within other websites.

Data needed sharing to trusted partners, e.g. sharing water utility data across the US.

With mobile phones taking hold, people realised APIs could be used to drive sensors, and other data sources connecting everyday items to the internet.

API Development Lifecycles

An API lifecycle can take an average of 3-12 months to deliver each version of an API, using a code-first approach. It can take that long to put something out there, get feedback from the people who will use it, and then iterate on it.

The introduction of tools like Apiary (Kin largely credits them with being first) allowed people to design, refine, document, and iterate their API designs without writing any code. Processes that would take years could take months or even days.

Postman is an API platform, with the majority of usage being as an HTTP client. It has evolved along the way, with much more functionality being added to it. Today you can use it to actually design your API.

<Kin gave a walkthrough of designing an API using Postman>

Demonstrated creating a new API Path in Postman, and then added a JSON example to the request. You can then create a Mock API, which gives you a published Mock API URL, which returns the example JSON response you added. The response could be something other than JSON: it could be YAML, or CSV etc.

As Lorna demonstrated with Stoplight, Postman can create documentation for sharing with users. As Lorna also demonstrated, you can import OpenAPI specifications into Postman. A new feature also lets you add two-way sync with the git repository containing your OpenAPI specification.

This integration gives a lot of flexibility around how users can work with your APIs. You can either build entirely within Postman, or plug into an existing API dev lifecycle – potentially allowing different members of your team to use different tools e.g. you could be using Postman, whilst others are using Stoplight tools.

Letting developers build against your mock APIs

Having mock APIs lets people start to design the things that are going to interact with those APIs, e.g. mocking your mobile apps or your display widgets.

The advantage of getting people doing this very quickly is that you start getting feedback on the API design, and you can feed this back into your API design really quickly.

It allows your API users to help define the API tests that are used to validate each API response. Having users involved in the design allows you to create API contracts by applying API tests that everyone contributes to.
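To make "API contract" concrete, here is a rough sketch (my own illustration, not something from Kin's talk – the field names and types are hypothetical) of a contract-style check that a response body has the agreed shape:

```python
# Rough sketch of a contract-style check on an API response body.
# The contract fields and example payloads are hypothetical.
EXPECTED_CONTRACT = {"id": int, "status": str, "total": float}

def meets_contract(body: dict, contract: dict) -> bool:
    """True if every agreed field is present with the agreed type."""
    return all(
        field in body and isinstance(body[field], expected_type)
        for field, expected_type in contract.items()
    )

good = {"id": 42, "status": "dispatched", "total": 9.99}
bad = {"id": "42", "status": "dispatched"}  # wrong type, and missing 'total'

print(meets_contract(good, EXPECTED_CONTRACT))  # True
print(meets_contract(bad, EXPECTED_CONTRACT))   # False
```

In practice a tool like Postman would run richer assertions than this, but the idea is the same: the tests everyone contributes to become the shared definition of what a valid response looks like.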

Deploying your API

Once your API is ready it can be deployed using a gateway or other approach. There are some services which can take an OpenAPI specification and set up your management layer – although almost all require you to wire up the endpoints to the data layer behind.

API Management

  • All APIs should have a management layer.
  • All should have security, rate limits, analytics. API management is what will protect you if there are issues. There are many API management options out there now.
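One of those protections, rate limiting, can be illustrated with a token-bucket sketch (my illustration, not something Kin prescribed – real API management layers are far more involved):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter sketch, purely illustrative."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec       # tokens replenished per second
        self.capacity = capacity       # maximum burst size
        self.tokens = float(capacity)  # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; refuse the request otherwise."""
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=1, capacity=2)
print([bucket.allow() for _ in range(3)])  # [True, True, False]
```

A gateway applies something like this per API key, which is what gives you both protection and the usage analytics mentioned above.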


All APIs should be clearly documented.


  • Monitor from different regions around the world.
    • Your monitors can have issues, you need to build in monitoring resilience.


  • Make sure API tests are available to be run manually against the API.
  • API tests should also be available to be run as part of a CI/CD pipeline (automated tests)


Ensure security – you should use OWASP, and associated tools, to make sure you are covering common security guidelines and avoiding common security vulnerabilities.


You need to make sure that there is a strategy for APIs to have owners and support available.


Every API should be announced. There should be internal and external communications around every API release. All developers should have to step up and be part of that communications strategy. They need to be able to communicate the value and detail of the API releases.


“Leveraging all of the outputs from API operations to ensure all APIs are observable and seen by stakeholders.”


Establishing governance of design, mocks, docs, tests, and all other parts of the API life cycle.

How do you define what API design should look like?

Make all teams move to an API-first strategy. It forces teams to have conversations early on.

You should have a documented approach to how you govern your APIs.

“Governance wraps up the full life cycle, and allows teams to understand how they should be building and operating APIs.”


Mainstream companies are getting more organised about how they deliver APIs, establishing common approaches to delivering APIs across their teams.

Working with government

APIs can be an answer to departments hoarding data for ‘power’. They also affect the politics though. For example, gov departments might be paid to hold the ‘authoritative list’ of something, so unsurprisingly there are multiple ‘authoritative lists’.

APIs can tackle the data silos, and enable better cooperation between gov departments.

API Advocacy

The most important tools we have are evangelism and advocacy for API-first approaches. It’s important to tell stories of what’s going on, and to help developers (and others) understand the human effects and benefits of APIs. It’s important to be able to articulate the human aspect of APIs – not just the technical. We need the conversations to focus on the real-life stories.


Continued storytelling is important to keep API approaches alive, to educate new champions and advocates, to keep up the momentum. Stories are all that matter – communicating the impact we make on our environment, our children, our veterans etc.

NHS Digital consults the public on data and tech

Recently, NHS Digital has published two public consultations that are of broad public interest and deserving of comprehensive feedback. They are looking to define principles and standards for the use of NHS technology and data.

I’ve summarised and linked to them below in the hope that you will take a bit of time to take part.


My 2018 event schedule

I only realised in hindsight that I attended a number of tech events, meetups, training, hackathons etc. during 2017.

To attempt to be a bit more organised, and in case it’s of interest to others, I’m going to keep a live list of my planned events for 2018 here. Give me a shout if you’re interested in any of them and would like to know more!

  • TBC – MindTheProduct Conference, London


The Zen of Interoperability

I was having a discussion with a colleague this week about architectural options for some specific interoperability use cases we are tackling.

The conversation touched on implementation choice – how much flexibility should there be in how to approach specific interoperability use cases within the NHS?

I’ve thought about this quite a bit, and struggled with the complexity that “many ways to do the same thing” can introduce. I naturally found myself quoting PEP 20 – The Zen of Python:


There should be one — and preferably only one — obvious way to do it.

Could this be a reasonable principle for us to take with NHS interoperability?

I wonder if there is a place for “The Zen of NHS Interoperability” – to define some guiding principles for all of us working hard to make interoperability useful for the NHS.

Here is a slightly tongue-in-cheek attempt at a “Zen of NHS Interoperability” 🙂

The Zen of Interoperability

Elegant is better than messy.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Open is better than controlled.
But controlled is better than proprietary.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you designed it.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Clear Standards are one honking great idea -- let's have more of those!

What are ‘dispositions’ in Urgent Care?

The majority of the content in this post was provided by a colleague.

In Urgent & Emergency Care, dispositions play a key part in helping us to categorise patients and ensure they get the right response for their clinical need.

‘Dispositions’ are defined here:

It is the third definition we’re interested in here:

3. the plan for continuing health care of a patient following discharge from a given health care facility.

My version of this, with the added context of how we use dispositions in NHS urgent & emergency care is:

A disposition packages the perceived clinical need of a patient in the form of a skill set and a time frame.

e.g. “Speak to a clinician | within 2 hours”
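In data terms, that “skill set plus time frame” packaging could be sketched as a small structure. This is purely my illustration – the field names are invented and not any NHS data standard:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class Disposition:
    """Sketch of a disposition: a skill set paired with a time frame.
    Field names are invented for illustration, not an NHS standard."""
    skillset: str
    timeframe: timedelta

    def __str__(self) -> str:
        hours = int(self.timeframe.total_seconds() // 3600)
        return f"{self.skillset} | within {hours} hours"

d = Disposition("Speak to a clinician", timedelta(hours=2))
print(d)  # Speak to a clinician | within 2 hours
```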

We allocate dispositions using information / input data from the patient combined with our clinical expertise. They are there to help consistently communicate a point-in-time assessment of a patient’s clinical need in terms of what needs to happen next; they are really only ever recommendations.

Although dispositions are essentially recommendations, some scenarios may have pre-defined dispositions that are considered appropriate.

For example – specific clinical conditions might require a disposition that denotes a response by an ambulance, or Key Performance Indicators (KPIs) such as National Quality Requirements (NQRs) might define a maximum amount of time within which a patient should receive a call back from a General Practitioner (GP).

The decision-making process taken to reach a recommended disposition (skillset and timeframe) will vary between entities too. Different organisations, regions, even clinical IT systems may use different reasoning dependent on specific external factors.

For example – a local region may have recently experienced a high number of clinical incidents with unwell children, and therefore decide that all children are seen by a specialist paediatric service, regardless of their presenting features.

E.g. Disposition of “See paediatric specialist | within 2 hours”

This is still a disposition representing a clinical recommendation – i.e. the recommendation that a child be seen within 2 hours by a paediatric specialist – however the decision on which disposition is appropriate was affected by different factors.

In all situations dispositions are based on a combination of expert opinion, relevant evidence, and situational factors, and therefore they are always subject to change and re-evaluation.

How are dispositions used?

Once a perceived skillset and timeframe have been “packaged” into a disposition, the response to the patient’s identified need can, and will, vary depending on the availability of services locally, the risk appetite of the responsible organisations or individuals, and the prioritisation within the services that are available.

How does this relate to prioritisation?

Prioritisation is the next step and can only ever be relative.

A “package” of information has led to the disposition but other factors will decide which of the patients assigned a similar disposition require priority.

More external factors come into play here, such as the age of the patient, their specific condition, and any co-morbidities they may have.

Again, it is ultimately a matter of expert opinion as to which factors have the greatest weighting when deciding priority (although as we do more with data this is likely to become more evidence-based and less dependent on pure expert opinion).
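Purely to illustrate the idea of weighted factors deciding relative priority, here is a toy sketch. The factors, weights, and scores are entirely invented – this is not a clinical algorithm:

```python
# Toy sketch of ranking patients who share a disposition by weighted
# factors. Factors, weights, and values are invented for illustration
# only - this is NOT a clinical prioritisation algorithm.
WEIGHTS = {"age_risk": 0.5, "condition_severity": 0.3, "comorbidities": 0.2}

def priority_score(factors: dict) -> float:
    """Weighted sum of normalised (0-1) factors; higher means sooner."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

patients = {
    "A": {"age_risk": 0.9, "condition_severity": 0.4, "comorbidities": 0.2},
    "B": {"age_risk": 0.3, "condition_severity": 0.8, "comorbidities": 0.1},
}
ranked = sorted(patients, key=lambda p: priority_score(patients[p]), reverse=True)
print(ranked)  # the patient with the higher weighted score comes first
```

The interesting (and contested) part in real life is exactly what the post says: who chooses the weights, and on what evidence.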

As long as expert opinion is a significant part of the decision-making process, there will be conflict between different expert perspectives and belief systems.

NHS Hack Day 17 – Manchester

"NHS Public Data" team

I have just spent the weekend at NHS Hack Day in Manchester, hosted by the Co-op in their new tech hub The Federation.

I wrote this blog post with the primary intention of sharing with my colleagues in NHS Digital to hopefully encourage some more people to get involved – but I don’t think there’s anything here that doesn’t apply to everyone.

It was a brilliant weekend, and there was a super mixture of people there – plenty of healthcare professionals, IT professionals, some senior management types (CIOs / CCIOs), general “techies” (professional, aspiring, and amateur), researchers, and a lawyer.

"Mobi-Alert" team

What’s NHS Hack Day like?

For those who aren’t familiar with NHS Hack Day, it goes something like this:

The event runs 9-5 Sat and Sun

Lunch is provided on both days and hot drinks are available throughout.

Saturday morning is spent using coffee to recover from the work week, chatting with people, and pitching ideas.

Pitches are 2 minutes each and you can pitch anything from a solid idea to an open-ended question (this time we even had someone who wanted to create a sci-fi story about healthcare in the year 2100).

Most people are nervous

Some people only decide to pitch an idea whilst watching the other pitches – the team that won last weekend only decided to pitch after getting confidence from the other pitches.

After pitching everyone has some time to go and talk to the pitchers, explore the ideas, and gradually teams are formed around the projects. Sometimes ideas are merged together, sometimes they’re split off into smaller projects.

The rest of Saturday, and most of Sunday is spent working on projects 

Different people work in different ways; some teams like to stick the headphones on and just chip away at a problem and others will spend a lot of time working through problems interactively. 

Sometimes people start building software etc. in the first hour, sometimes people don’t build at all.

On Sunday afternoon teams decide if they would like to present their project to everyone else – and if so submit their projects

This is completely optional, but it feels good and is encouraged – the community is friendly and rarely does a team not present something.

"Trendy" team

At about 15:30 everyone gets together and watches presentations

Each team gets 3 minutes to present their work, and 2 minutes to answer questions.

I’m always amazed at what people have managed to prepare – last weekend we had a Fresh Prince rap from one team, and a promotional video from another…

I had several conversations with people where they were not sure where they were expected to be in terms of progress at various points through the event. Superbly, there is no right answer.

The presentations are one of the most enjoyable bits for me – and last weekend had me smiling throughout every single presentation. There were so many great ideas, and every team had something interesting to show.

Teams will present anything from some paper mockups and a bit of narrative through to a fully working product with audience participation – it is dependent on the type of project, the team, and how good people are at getting up on Sunday mornings.


After presentations, there is a short period of evaluation where either a panel of invited judges, or the community, will vote for the top three projects – and those teams are given some prizes

We tried a new approach to voting this time where the community was given 3 votes each to vote for the three projects they were most excited by. We trusted the community to not vote for themselves, and to only vote three times – this simply doesn’t need policing.

People then help put the borrowed space back to how it was found, and head home feeling enthused 🙂 

The last 30 mins is spent clearing up, saying goodbye, exchanging contact details, plotting world domination, and just generally wrapping up an enjoyable weekend.


Why should I care?

If you’re thinking “well I’m sure you all had fun, but does this matter to me?”, here are a few of my thoughts:

There is absolutely no ‘right skill set’

In fact I shouldn’t need to explain that diversity always wins and this certainly includes diversity of skills.  The best outputs come from the teams with the most diversity, and there is no buzz quite like building something with a diverse team of techies, healthcare professionals, artists, and users.

On our team, we were all learners in one way or another so a large amount of our time was spent pairing, learning, explaining, and discovering – this is just as rewarding as having something shiny to present at the end of it.

This type of event is unconstrained thinking at its absolute best

As an embittered and tired NHS technology person, I go to these events to recharge my batteries. This kind of community is not subject to the organisational, political, and learned behavioural constraints that many of us are.

It’s incredibly rare that a pitch is binned because “we’ll never get it through the <insert your favourite bureaucratic restriction here> process” – people are there to busk and solve problems. The concept of a political mandate, or a 4:1 return on investment simply isn’t important here.

The ideas and outputs from these events are a map for the future

Maps are so useful – we are all pretty convinced of the benefit of roadmaps, and visions, and Google maps.

The ideas at these events give us a clue about what is around the corner for NHS technology. Many people at these events are recently qualified healthcare professionals, or are maybe only involved with NHS technology as users and see this as an opportunity to have a voice.

The things they want, and expect, are clues – to what we should be thinking about, to where we should be going, and to where we’re falling short.

It helps prove that the centre can engage, listen and help

Do not read subtext into this – I am not saying “NHS England / NHS Digital never engage with the community”.

But it should not raise eyebrows at these events when someone says that they work for NHS Digital or NHS England – people are, but shouldn’t be, pleasantly surprised.

Last weekend I think I counted the number of attendees from NHS England / NHS Digital on one hand – I’d love to see this go onto two hands.

People are enthused to see us there, and actually we can be really helpful as guides, navigators, and mentors. It encourages innovators just to know that we’ve considered it of value to take time to be there.

It’s not just about AI and mobile apps – people solve fundamental problems too

One of my favourite projects from last week: FastPass. A team of seasoned IT support professionals who were determined to sort out the drag of password resets – both for support staff and users. They built a working system for self-service password resets and they intend to take it forward within their local NHS trust.

My team tackled the challenge of collecting timely feedback from users of NHS services at a scale that would produce enough data to be significant.

Real problems, not flashy, that could genuinely make stuff better.


In conclusion

Next time there’s an NHS Hack Day near you, try and get along – if only for one day.

If you don’t like it then fair enough.

But you might do, and you might find it leads to you bringing a better, more energised self back into work the following Monday – and that can only be a good thing for your own organisation, and the NHS.

Visit for more information, or follow @NHSHackDay on Twitter.

If you are interested, and would like to ask some questions feel free to drop me an email at or @mattstibbs on Twitter – I’d be really happy to tell you more.


I recently visited family in Singapore – I’ve been lucky enough to visit several times now and always enjoy spending time there. The benefit of staying with family is that you get to see the place through local eyes – there’s a lot to notice as you walk to the local hawker centre for lunch.

I’m really interested in Singapore’s civic infrastructure and seem to notice new things every time I visit; there’s normally some technology involved.

I sometimes find myself asking “How come we don’t just have something like this here?” as if it is just that simple. In reality Singapore has an interesting setup (in many ways) which allows it to make things work that might not back here in the UK. I certainly don’t claim to deeply understand these differences, but I’m interested enough to keep learning about it.

For my own interest I decided that this time I would note a few things down; I get excited about some of these things but they are not necessarily as ground-breaking as they feel – this is as much for personal reference in the future as I maybe follow their developments. 

Whilst I was writing this blog post, I noticed this article published in the WSJ which talks about Singapore’s plans to take the ‘Smart City’ to a whole new level – this is intriguing and exciting, having seen first-hand the efficient way in which Singapore provides some of its civic services to citizens.

Driving in Singapore

From what I understand, Singapore is an incredibly expensive place to drive. For example, on initial registration of a car in Singapore there is a registration fee (tax) of 150% of the car’s market value – a car worth $40,000 will cost you an additional $60,000 to register. That’s before you even get started: you still have the standard running costs, road tax and so on. Once a car is 10 years old, there are additional licences you have to obtain to keep it on the road – consequently the large majority of cars in Singapore are less than 10 years old.

Singapore has a single way to pay driving-related charges – payments are facilitated by an In-vehicle Unit (IU). Any car wishing to use ‘priced roads’ in Singapore must be fitted with an IU (I don’t think I’ve yet seen a car without one in the windscreen…).



The IU takes a payment card against which it makes charges – generally this uses the EZ-Link stored value card although the more recent units also support NETS (a Singapore cashless payments company which offers more favourable arrangements for Singapore businesses and residents than the international players such as MasterCard / Visa).

The IU contains a radio transceiver which is activated by all sorts of roadside equipment. When a charge is made, the unit simply beeps and the fee is automatically deducted from the payment card.


Almost every car park around the city uses the IU for parking charges – the IU is read on the way in, and then as you pass through the exit barrier your device is automatically charged (surprisingly parking charges are actually very reasonable).

As you drive around the city, you notice Electronic Road Pricing (ERP) gantries projecting a bright white line of light onto the road surface – driving through this white line means you incur a toll, which varies depending on the time of day and the level of congestion.
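The variable pricing is essentially a time-of-day lookup. As a toy sketch of how a charge schedule like this might be modelled (the bands and amounts below are invented purely for illustration – they are not actual LTA rates):

```python
from datetime import time

# Purely illustrative time bands and charges (in SGD) – these are invented
# numbers for the sake of the example, not actual LTA ERP rates.
ERP_BANDS = [
    (time(8, 0), time(9, 0), 3.00),    # morning peak
    (time(9, 0), time(18, 0), 1.00),   # daytime
    (time(18, 0), time(20, 0), 2.00),  # evening peak
]

def toll_for(crossing_time):
    """Return the charge for passing under a gantry at the given time."""
    for start, end, charge in ERP_BANDS:
        if start <= crossing_time < end:
            return charge
    return 0.0  # free outside charging hours

print(toll_for(time(8, 30)))   # 3.0 – morning peak
print(toll_for(time(22, 0)))   # 0.0 – outside charging hours
```

In the real system the schedule is also adjusted periodically based on observed congestion, which is what makes the pricing dynamic rather than fixed.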


Driving into your apartment block, your IU device bleeps as the barrier identifies your car and lets you through – no need for a separate remote control or access card.

We have achieved similar things in the UK using Automatic Number Plate Recognition (ANPR) cameras – and I’m sure that as ANPR technology becomes cheaper and more widely available, we will see it used much more frequently for payment in local car parks, for instance. However, IU devices are ubiquitous in Singapore, and that ubiquity is one of the things that makes the system so efficient – it’s the fact that a single ‘standard’ has almost 100% penetration across the whole system that I find interesting.

Feedback, feedback, feedback, data!

I’ve always enjoyed giving feedback at passport control in Changi Airport. Two reasons for this:

  • it is run pretty efficiently and rarely takes a long time
  • the officers do this really cool emphatic ‘dance of the stamp’ as they adorn your passport.

I’ve always tapped the ‘very smiley face’ on the ‘feedback terminal’.


This time though, it struck me just how many ‘feedback terminals’ there are dotted around the airport…

Just a few opportunities for giving feedback that I noticed were:

  • After you’ve been through passport control
  • After you’ve used the toilets (feedback is assigned to the operative on duty)
  • After you’ve taken a walk around the Cactus Garden
  • After you’ve purchased something from duty free
  • After you’ve visited the Butterfly Garden
  • After you’ve taken a photo in front of the Photo Garden(?)
  • After you’ve bought refreshments from the Tip Top food stand (curry puffs and kopi are a must)

Take your time…

Whilst waiting at a pelican crossing, I noticed these boxes fitted to the crossing control. At first glance I thought it was some kind of payment terminal (I was prepared for the possibility that there might be a charge for using the crossing – anything is possible), however my sister explained that these Land Transport Authority (LTA) crossing controls are fitted with technology which allows people to request more time to cross the road.

Those who are eligible are issued with an RFID card that they present to the crossing control when activating it, triggering the green signal to stay on for longer.


I thought this was an ace idea but I did wonder whether this sort of thing would even really work in the UK – we certainly lack the same level of compliance when it comes to crossing roads.

AXS to services

We decided we wanted to have a BBQ at the East Coast Beach one evening – for this you have to book a BBQ pitch. As we were walking through a mall, we passed what looked like a cash machine. “Oh, hold on, I’ll just book our BBQ pitch.” Turns out it was an ‘AXS terminal’.

It seems you can do a whole host of ‘everyday things’ via an AXS terminal, and they are placed all over the city. You can pay fines, pay bills, buy tickets, access government services, top things up, book BBQs…

It’s not a completely novel concept – you can, I understand, top up your PAYG phone from some ATMs in the UK.

But you can find an AXS machine in most shopping centres, and each machine provides a whole range of services (over 150 apparently) – they are almost as ubiquitous as ATMs. The system is consistent, it provides a standardised platform for service providers to make transactions available to citizens, and citizens know how to use it to interact with the city.

Open Flood Data…

We managed to get caught in the first downpour Singapore had seen for several weeks – and this rainstorm came with conviction. We actually spent over an hour stranded in a cave surrounded by “The Ten Courts Of Hell” at Haw Par Villa whilst we waited for the storm to subside.

Whilst sitting there watching the sky empty itself, my sister said “It hasn’t rained like this for weeks – I wonder if the drains are coping”. She loaded her WeatherLah app and showed me a map of all the storm drains / channels around Singapore, and how full they were. The geek in me loved that I could see the status of the entire drainage network, in a single view, on a smartphone, from inside a cave.

Again, there’s nothing particularly ground-breaking about water level data being made available – we have this in the UK already via the Environment Agency’s real-time flood monitoring API; but for some reason it felt like I was looking at a ‘system’, as opposed to lots of monitoring stations dotted independently around the place. My mental model of Singapore is that of a single machine, and having subsequently read about the Smart City plans, this kind of makes sense.
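For what it’s worth, the Environment Agency API is publicly documented, and pulling the latest level for a monitoring station is straightforward. A minimal sketch (the station id is just an example value, and the response handling assumes the API’s documented JSON shape – check the docs before relying on this):

```python
# Sketch: query the Environment Agency real-time flood-monitoring API for a
# station's most recent water-level readings.
BASE = "https://environment.data.gov.uk/flood-monitoring"

def latest_readings_url(station_id):
    """Build the URL for a station's most recent readings (the ?latest filter)."""
    return f"{BASE}/id/stations/{station_id}/readings?latest"

def extract_levels(payload):
    """Pull (timestamp, value) pairs out of an API response document."""
    return [(item["dateTime"], item["value"]) for item in payload.get("items", [])]

# To fetch live data you would do something like:
#   import json, urllib.request
#   with urllib.request.urlopen(latest_readings_url("1491TH")) as resp:
#       print(extract_levels(json.load(resp)))

# Offline example with a response in the API's documented shape:
sample = {"items": [{"dateTime": "2020-02-03T10:00:00Z", "value": 1.23}]}
print(extract_levels(sample))  # [('2020-02-03T10:00:00Z', 1.23)]
```

The difference in Singapore, as far as I can tell, is not the data itself but the single joined-up presentation of it.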

Incidentally, I asked my sister why she was interested in the status of the drainage system around Singapore and she said “Oh, I’m not”. Singapore is known to flood occasionally, and WeatherLah advertises that the app will alert you to this in advance – so I would assume this data has proven useful for citizens.

Data, data, data

Singapore has bold aspirations when it comes to using technology and data to really make the state work for its citizens. Just this week they announced their new ‘open data portal’ (not dissimilar to the work the Office for National Statistics has been doing around access to and visualisation of data). The Singapore open data portal appears to target developers as a primary consumer of the data, and their blog uses the strapline “Understanding Singapore by exploring and visualising open data”. Again there is a focus on the idea of ‘Singapore as a system’ – I think it’s going to be really interesting and I’m certainly going to be watching with keen interest to see where it goes over the next couple of years.