Quickly installing Sun/Oracle Java in Linux Mint 15

Almost the same technique as yesterday, but a much bigger timesaver this time. Most Linux distributions come with the open-source OpenJDK installed. This is fine for most things, but I’ve noticed that graphically complex apps (PyCharm, for one) have rendering issues and high CPU usage.

You can install the Sun/Oracle Java instead, but this seems to be a pain to do from the download. There is another PPA for this:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer

It still downloads all however-many megabytes of installer, but it’s fire and forget. There’s no need to uninstall OpenJDK – the two can coexist.

Quickly installing Sublime Text 2 in Linux Mint 15

Sublime Text is my new favourite text editor. It has some really awesome functionality and a clean, modern interface.

Sometimes you just can’t be bothered extracting a tar, putting the binary in the right place, and working out how to get it onto the menu. I downloaded the Sublime Text 2 tar.bz2 and realised I would have to do all of this manually.

Ubuntu and derivatives like Mint allow you to use PPAs – software repositories that aren’t included by default but can easily be added, letting you install packages with apt. The most widely known one is for installing Java, but there is also one for Sublime Text 2.

You do this as follows:

sudo add-apt-repository ppa:webupd8team/sublime-text-2
sudo apt-get update
sudo apt-get install sublime-text

Job done – on the menu, with a short command line alias “subl” to use as well.

Don’t lose your data…

A bit of a different subject to normal posts. I’ve seen a lot of tweets recently from people who have lost irreplaceable data because they haven’t got a backup or their backups weren’t working properly.

Bruce Schneier recently said on his blog:

Remember the rule: no one ever wants backups, but everyone always wants restores.

This is the truth – it isn’t the backup that matters, it is the restore. You need to test it. If you are serious about your data, back it up!

I had a scare last year when my laptop’s SSD failed without warning, and then I found out my backups hadn’t been working properly. Luckily my elite data recovery skills meant I could get the data back.

I took this as a chance to implement a robust, dependable backup system that I knew I could rely on.


You need to decide what you are protecting

  • Photos – these, to me, are genuinely irreplaceable.
  • Projects – code, notes, datasheets, data, etc. I could redo these, but it would take time and effort.
  • Emails – again, I would have no way of recreating these.

And what you aren’t:

  • Media – TV, films, music. I’m not bothered about these – I can get them again.
  • Programs and OS – I can download these again.

At this point I should say that I am not a fan of “bare metal restore” or full disk imaging. Why?

  • Individual files are not easily accessible – it is far harder to determine if things are working correctly.
  • The file formats are often proprietary and undocumented – if it isn’t working, I am going to have a hard time fixing that.
  • Bare metal restores are difficult onto different hardware – they don’t handle changes well, even a different sized partition complicates this.
  • I would hope I need to restore infrequently enough that re-installing my OS and programs is a welcome clean-out rather than inconvenience.

You need to decide what you are protecting against:

  • Disk failure – this seems to be the biggest threat to my data. One external HD and two mSATA SSDs have failed in the past two years. My view is now that no single storage device can be trusted, especially SSDs.
  • Theft – my laptop, iPad, server or backup drives could be stolen.
  • Idiocy and mistakes – I could delete something I didn’t mean to at any point in time. Or simply change something I didn’t mean to.

It would be fair to say, my solution is belt and braces and then some.

Central storage

Instead of trusting my data to my individual mobile devices and backing those up, the primary store of data is on a central server located in our house.

This is an HP N40L server (often available for £100 with a cashback offer), running 2x3TB drives in a RAID1 configuration. RAID1 is otherwise known as “mirroring”, and I have implemented it in software (which means I can put the drives in any machine, unlike hardware RAID, where the chipset must be the same). All RAID1 does is protect against drive failure – nothing else. If the machine is stolen, I lose my data. If I delete my data, I lose my data. Don’t fall into the trap that many do and call RAID1 a backup. I have used it for convenience, and because these large drives are currently unproven in terms of reliability.

Although this is the primary store of data, I need to be able to work with this data quickly and when away from the house. Therefore everything is synced between the server and mobile devices periodically.

For Windows machines, I use SyncBack Pro to do this in near-realtime. It’s very effective and bi-directional.

Central storage backup

I run two of my own backups on the central storage.

Firstly, on a daily basis, an incremental backup is performed between the 2x3TB RAID1 array and an external 4TB USB drive. The incremental backup means I have 90 days of history on all of my files available immediately. The external USB drive means that there is a degree of isolation between the server and drive, and I can quickly remove it from the house if need be.

Secondly, at the beginning of each month, I plug in a second external 4TB USB drive. Again, this is an incremental backup, but less frequent. I then remove the drive and store it in my substantial safe. This protects me against hardware failure – even if the server decides to send 240V into all connected devices, this drive is not connected to the machine all of the time. It also protects me from theft and fire to a degree – only a determined burglar could open the safe.

Both of these use SyncBack Pro as well.

Offsite central storage backup

The entire central server is then backed up to the cloud using Crashplan. The most important feature of Crashplan is that it is offsite. Whatever happens to the hardware in the house, Crashplan will have the data.

Crashplan also allows friends and family to back up to my server and take advantage of all the other backups I perform.

Once a year I back up my photos to a portable USB hard drive and give this to a trusted third party (my parents) to look after.

Offsite laptop backup

Not content with that, I run Backblaze on my personal laptop. Backblaze is a competitor to Crashplan. This backs up everything on the laptop to the cloud.

(I’m not actually quite this paranoid – I used to use Backblaze on our old “server” running Windows 7. When I upgraded to the HP N40L, I found Backblaze doesn’t run on Windows server OS, so had to switch to Crashplan. I have another 18 months of Backblaze subscription left to use).

Dropbox and Github

The final aspect of backup is for all of my project work. All of it is on Dropbox. This isn’t primarily for backup – it is for access from wherever I want. All of my code goes onto Github.


A number of the devices mentioned above are encrypted using TrueCrypt, and the more sensitive documents are additionally encrypted before being sent to the cloud.


I regularly check that all of the above is working. I recently had an SSD failure, and initially noticed that one of the above mechanisms wasn’t working. It was quickly fixed.


This might be paranoid, but all this data is vital to me.

My photos, at the moment, are stored:

  1. On my laptop
  2. On the RAID array in the server
  3. On the permanently connected USB drive
  4. On the once-a-month USB drive
  5. On the offsite portable USB drive
  6. On Crashplan
  7. On Backblaze

The chance of all of this going wrong at the same time is virtually zero.

We need an antidote to the anti-code

In the last post, I briefly went over the process of reverse engineering the algorithm behind an anti-code generator for an alarm system.

It turned out that the algorithm was very simple indeed. For a given 5-digit numeric quote code, we can derive a 5-digit reset code using a “secret” 8-bit (256 possibilities) version number as a key. This has a lot in common with a keyed hash function or a message authentication code.

There are some pretty serious security implications with this mechanism.

5 digit numeric codes are never going to be strong

Even entering a PIN at random, a 5-digit numeric code has only 100,000 possibilities – I have a 1/100,000 chance of getting it right.

If we made this a 5-digit hexadecimal code, the chance would drop to 1/1,048,576 – over 10 times harder to guess.

Move up to a 6-digit alphanumeric code, and it is now 1/2,176,782,336 – over 20,000 times harder to guess than the numeric code.

It doesn’t take many alterations to the limits on codes to make them much more secure.
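The arithmetic above is easy to check with a quick Python sketch (the function name is mine, just for illustration):

```python
# Search-space sizes for the code formats discussed above.
def search_space(alphabet_size, length):
    """Number of possible codes: alphabet_size ** length."""
    return alphabet_size ** length

numeric_5 = search_space(10, 5)   # 5 digits, 0-9
hex_5 = search_space(16, 5)       # 5 hex digits, 0-F
alnum_6 = search_space(36, 6)     # 6 characters, 0-9 and A-Z

print(numeric_5, hex_5, alnum_6)
print(f"5-digit hex is {hex_5 / numeric_5:.1f}x harder to guess")
print(f"6-char alphanumeric is {alnum_6 / numeric_5:,.0f}x harder to guess")
```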

For this reason, it surprises me that alarms still use 4-digit PINs while most internet forums insist on 8-character passwords with letters, numbers and punctuation.

The algorithm isn’t going to stay secret

There is no way to reliably protect a computer application from reverse engineering. If you can run it at all, it is highly likely its operation can be observed and reversed. Relying on the secrecy of an algorithm or a key hidden within the software does not afford any real level of security.

Once we know the algorithm, the odds massively improve for an attacker

The algorithm takes a version number from 0-255. For a given quote code, I can try each version number, giving me a list of up to 256 potentially valid reset codes (sometimes, two version numbers will generate the same reset code).

If I enter a code from this list, I have a 1/256 chance of getting it right – vastly better odds for an attacker than the 1/100,000 of a purely random guess.

This is entirely due to the short version number used.
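To make the 1/256 figure concrete, here is a sketch of the attack. I am not publishing the real derivation function, so derive_reset() below is a stand-in – a truncated hash, which behaves like an arbitrary keyed function:

```python
import hashlib

# derive_reset() is a STAND-IN for the real (unpublished) algorithm:
# a truncated hash acts as an arbitrary keyed function for illustration.
def derive_reset(quote, version):
    digest = hashlib.md5(f"{quote}:{version}".encode()).hexdigest()
    return int(digest, 16) % 100000

def candidate_resets(quote):
    """Every reset code that some version number 0-255 could produce."""
    return {derive_reset(quote, v) for v in range(256)}

codes = candidate_resets(12345)
# At most 256 candidates out of 100,000 possible codes - so a single
# guess from this list succeeds with probability of roughly 1/256.
print(len(codes), "candidate reset codes")
```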

Given a quote/reset code, most of the time we can infer the version

It quickly became apparent that for most quote/reset pairs, there was only a single version number that could produce the pair. I’m awful at probability and decision maths, so I thought running a simulation would be better.

I like running simulations – generally when the number of simulations becomes large enough, the results tend towards the correct value. So I tried the following:

1. Generate a genuine quote/reset pair using a random quote.

2. Use a brute force method to see which version numbers can produce this pair

3. Record if more than one version number can produce this quote/reset pair.

I started doing this exhaustively. This would take a long time though… someone on the Crypto stack exchange answered my question with a neater, random simulation.
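The simulation can be sketched along these lines. derive_reset() here is a stand-in for the real (unpublished) algorithm – a truncated hash, which behaves like a random keyed function – so the exact percentage printed will differ from the real figure below:

```python
import hashlib
import random

def derive_reset(quote, version):
    # Stand-in for the real (unpublished) algorithm.
    digest = hashlib.md5(f"{quote}:{version}".encode()).hexdigest()
    return int(digest, 16) % 100000

def versions_producing(quote, reset):
    """Step 2: brute force which versions (0-255) map quote -> reset."""
    return [v for v in range(256) if derive_reset(quote, v) == reset]

def simulate(trials=2000, seed=0):
    rng = random.Random(seed)
    unique = 0
    for _ in range(trials):
        quote = rng.randrange(100000)                    # step 1: random quote
        reset = derive_reset(quote, rng.randrange(256))  # genuine pair
        if len(versions_producing(quote, reset)) == 1:   # step 3: record
            unique += 1
    return unique / trials

print(f"{simulate():.2%} of pairs pinpoint a single version number")
```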


I ran this test over 20 million times. It turns out that 99.75% of quote/reset code pairs will directly tell me the version number. Most of the remaining 0.25% yield two version numbers, and a tiny number (<0.001%) yield more than four. As a result, you are almost certain to know the version number after seeing two quote/reset pairs.

What does this mean in the real world?

The version number is treated as the secret, and I am informed that this secret is often constant across an entire alarm company. All ADT alarms or all Modern Security Systems alarms may use the same version number to generate reset codes.

This means I could get hold of any quote/reset pair, infer the version number, and then use that later to generate my own anti-codes for any ADT alarm. I could obtain these quote/reset pairs by visiting an accomplice’s house with an ADT alarm system, or by eavesdropping on communications.

With that anti-code I could either reset a system presenting a quote code, or impersonate an alarm receiving centre (there are other speech-based challenge-response requirements here to prove the caller is genuine, but I would imagine these are easily gamed).


A 5-digit reset code using an 8-bit key is never going to be secure.

When computer passwords are 8 characters and 128-bit keys are the norm, this anti-code mechanism seems woefully inadequate.

Reversing an anti-code

A contact in the alarm industry recently asked if I could take a look at a quick reverse engineering job. I’m trying to gain some credibility with these guys, so I naturally accepted the challenge.

Many alarms have the concept of an “anti-code”. Your alarm will go off and you will find it saying something like this on the display:


QUOTE 12345

The idea is that you call the alarm receiving centre and quote “12345”; they input this into a PC application, get a reset code back, and read it out to you; you then enter that code to reset the alarm. This forces you to communicate with the alarm receiving centre before the alarm can be reset.

Alarm manufacturers provide their own applications to generate these codes. This particular manufacturer provides a 16-bit MS-DOS command line executable, which will refuse to run on modern PCs. This is a pain – it’s not easy to run (you need to use a DOS emulator like DOS-BOX) and it doesn’t allow for automation (it would be convenient to call a DLL from a web-based system, for example).

So I was asked if I could work out the algorithm for generating the unlock codes. x86 reverse engineering is not my forté, especially older stuff, but I thought I would have a quick go at it.

Turns out it was easier than expected! I find documenting reverse engineering incredibly difficult in a blog format, so I’ll just cover some of the key points.

Step 1: Observe the program

First things first, let’s get the program up and running. DOS-BOX is perfect for this kind of thing.

The program takes a 5 digit input and produces a 5 digit output. There is also a version number which can be input which varies from 0-255.

I spent a while playing around with the inputs. Sometimes things like this are so basic you can infer the operation (say, if it is XORing with a fixed key, flipping the order of some bits or similar). It didn’t look trivial, but it was plain to see that there were only two inputs – the input code and version. There was no concept of time or a sequence counter.

At this stage, I’m thinking it might be easiest to just create a lookup for every single pin and version. It would only be 2,560,000 entries (10,000 * 256). That’s a bit boring though, and I don’t have any idea how to simulate user input with DOS-BOX.

Step 2: Disassemble the program

To disassemble a program is to take the machine code and transform it into assembly language, which is marginally more readable.

There are some very powerful disassemblers out there these days – the most famous being IDA. The free version is a bit dated and limited, but it allowed me to quickly locate a few things.

One area of code listens out for Q (quit) and V (version), and limits input characters to 0-9. Hex values in the normal ASCII range, along with getch() calls, are a giveaway.

Keyboard input
Another area of code appears to have two nested loops that each run from 0-4. That strongly suggests it is looping through the digits of the code.

Other areas of code add and subtract 0x30 from keyboard values – this is nearly always converting ASCII text numbers to integers (0x30 is 0, 0x31 is 1 etc. so 0x34 – 0x30 = 4)


A block of data, 256 items long, with values from 0-9. This ties in with the maximum value of the “version” above – might the version just be an offset for indexing into this data?

IDA’s real power is displaying the structure of the code – this can be a lot more telling than what the code does, especially for initial investigations.

Code structure
It’s still assembly language though, and I’m lazy…

Step 3: Decompile the program

Decompiling is converting machine code into a higher level language like C. It can’t recover things like variable names and data structures, but it does tend to give helpful results.

I used the free decompiler dcc to look at this program. I think because they are both quite old, and because dcc has signatures for the specific compiler used, it actually worked really well.

One procedure stood out – proc2, specifically this area of code:
dcc output

It’s a bit meaningless at the moment, but it looks like two nested while loops, moving through some kind of data structure, summing the results and storing them. This is almost certainly the algorithm that generates the reset code.
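As a rough illustration, the structure the decompiler output suggests looks something like this in Python – nested loops over the five digits, summing values pulled from a 256-entry table. The table contents and index arithmetic here are invented; only the shape is taken from the disassembly:

```python
# Illustrative reconstruction of the STRUCTURE only - the real table
# contents and index arithmetic are not being published here.
TABLE = [(i * 37) % 10 for i in range(256)]   # made-up 256-entry table, values 0-9

def reset_code(quote_digits, version):
    out = []
    for i in range(5):             # outer loop over output digits, 0-4
        total = 0
        for j in range(5):         # inner loop over input digits, 0-4
            total += TABLE[(version + quote_digits[j] + i * j) % 256]
        out.append(total % 10)     # keep each output digit in 0-9
    return out

print(reset_code([1, 2, 3, 4, 5], 42))
```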

Now, again, I could work through all of this and find out what the auto-named variables are (e.g. change loc4 to “i” and loc5 to “ptrVector”). Or I could step through the code in a debugger and not have to bother…

Step 4: Run the code in a debugger

A debugger allows you to interrupt execution of code and step through the instructions being carried out. It’s generally of more use when you have the source code, but it is still a helpful tool. DOS-BOX can be run in debug mode, generating a text file containing the sequence of assembly instructions along with the current registers and what is being read from and written to them. It’s heavy going, but combined with IDA and the output from dcc, it’s actually quite easy to work out what is going on!

Step 5: Write code to emulate the behaviour

Shortly after, I had an idea of how the algorithm worked. Rather than work it through by hand, I knocked up a quick Python program to emulate the behaviour. The first cut didn’t quite work, but a few debug statements and a couple of tweaks later, I was mirroring the operation of the original program.

Overall, it was only a few hours work, and I’m not really up on x86 at all.

I’m not releasing the algorithm or the software, as it could be perceived as a threat. In the next post, I am going to discuss some of my security concerns around the idea of an anti-code and this specific implementation.

What’s inside a WebWayOne SPT?

I managed to find a reasonable resolution image of a WebWayOne SPT (supervised premises transceiver, the device that communicates with the ARC (alarm receiving centre)). Just some quick notes about what is on it.

Annotated PCB


The Coldfire processors have a hardware encryption acceleration engine on them, which suggests that some fairly heavy duty encryption is happening.

Tomographic motion detection

Typical alarms use PIR (passive infrared), microwave or ultrasound detectors for motion detection. PIRs are by far the most common type of detector – they work by detecting changes in the infrared emitted by warm bodies. They are cheap, very reliable, and actually quite hard to beat.

Laser break beams are only really seen in films, though simple active infra-red break beams are often used on scaffolding alarms.

The problem with all of these is that they cannot see through objects. A common method of circumventing PIR detectors is to “mask” them – you either cover them with paint (or another infrared-opaque coating) or simply put something like a box in front of them. Higher security systems have “anti-masking” detectors, which use an active element to check that their view has not been obstructed.

It can mean that complex, cluttered, or continually changing spaces need a lot of PIRs to be adequately covered.

Step in a new type of motion detection – tomographic motion detection. This sounds really clever and innovative. You might have heard of tomography from the medical world – CT scan stands for computerised tomography. It means “imaging by cross-section”. Xandem have come to market with a new detector that uses 2.4GHz radio signals to detect motion in a space.

A group of wireless nodes form a mesh of connections, as shown in this image from the patent:

Mesh network


Each one of those lines represents a radio path. The system uses 2.4GHz signals, the same as with WiFi or Bluetooth. These are heavily attenuated by anything containing water – such as the human body. A human body placed in the radio path of any two nodes will reduce the received signal strength (RSS).

By carefully measuring the RSS from each node to each other and doing some clever processing, you should be able to build up an image of what the area usually looks like. Any significant disturbance would signal an alarm. Hence, motion of a human body can be detected.

This would work through walls, shelves, furniture and so on – as long as the signal isn’t attenuated too much.
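A minimal sketch of the detection idea – record a baseline RSS for every node-to-node link, then flag motion when enough links drop well below their baseline. The thresholds and readings below are made up for illustration:

```python
# Minimal sketch: flag motion when enough links lose signal strength
# versus their recorded baseline. Thresholds and data are invented.
def motion_detected(baseline, current, drop_db=6.0, min_links=2):
    """True if at least min_links links dropped drop_db dB or more."""
    attenuated = sum(
        1 for link, rss in current.items()
        if baseline[link] - rss >= drop_db
    )
    return attenuated >= min_links

baseline = {("A", "B"): -40.0, ("A", "C"): -45.0, ("B", "C"): -42.0}
quiet =    {("A", "B"): -41.0, ("A", "C"): -44.5, ("B", "C"): -42.5}
person =   {("A", "B"): -52.0, ("A", "C"): -58.0, ("B", "C"): -43.0}

print(motion_detected(baseline, quiet))   # False - normal fluctuation
print(motion_detected(baseline, person))  # True - two links attenuated
```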

This is clever stuff. Very easy to fit (though you do need power to each node), and probably very hard to beat. It is expensive though.

For those interested, here is a link to the patent:


And I have pulled a picture of the PCB from the FCC report on it:



The markings on the main IC are not visible, but based on the frequency, the size of the package, the crystal frequency, and the crystal and antenna connections, this is a TI CC2540 RF SoC – a sibling of the CC1110 RF SoC, using an 8051 core connected to an RF transceiver.

Interestingly there is a micro-USB and debugging connector on the board as well!

Why am I hacking your alarm?

Since I’ve started posting about alarm systems, a number of people have questioned my motives. I can understand why – these are security products, and I can see how many people would think poking around inside them is “dodgy”.

I’d say I have three main drivers:
  1. I love taking electronics apart and working out how they work. It’s much more challenging and interesting when someone has actively tried to stop you doing this – alarms are an ideal target because of this. I initially bought an alarm hoping it would contain a rolling code system for me to reverse, but it turned out to be far too basic. In the end I found a massive string of vulnerabilities anyway.
  2. I find security as a concept fascinating, from locks through access systems to human factors. I love how the perception of security is so often different from the reality. One of my current drivers is that I think the security economics around alarms is totally broken – it’s driven by outdated, rigid standards and insurance rather than actual security.
  3. I used to watch Bugs on BBC1 religiously, so I spent my teenage years watching people break into high-tech buildings using fancy electronics. Whilst I’m not doing the actual breaking in, having the means to disable alarm systems and bypass access controls is fun, and something I never thought would be possible.

It swings both ways, especially for RF comms

In a few of the previous posts, I’ve discussed some principles used in the radio communications in alarms. I’ve mentioned that some features are harder to implement well using one-way radios. What is the difference between one-way and two-way? What practical difference will it make?

Radio communications can be one-way or two-way, depending on how they have been designed.

A one-way system has a transmitter in each of the detectors and a receiver in the panel. This means that the detectors can send signals to the panel, but the panel cannot send signals to the detectors.

In a two-way system, each component has both a transmitter and receiver. This means that the detectors are now capable of receiving a signal from the panel.

It is fairly normal for the two-way systems to use a combined transmitter and receiver called a “transceiver”. Whilst not a strict limitation, most of these transceivers can only transmit or receive at any given moment in time (this is called half duplex). They can switch from receive to transmit very quickly, so from a user perspective they look like they are transmitting and receiving at the same time.

Most older systems use one-way radio. I suspect this is because cheap, easy-to-use integrated RF transceivers were not available 10 or 20 years ago. Often they use a simple AM transmitter built from discrete components, or one of the very old remote-control ICs that require an 8-bit address (these are still common in wirelessly controlled mains sockets).

A lot of newer systems use two-way radio. They will use one of the modern integrated RF transceivers like the TI CCxxxx, Si4432, or any of the Nordic Semi products. These do all of the hard RF work (and even a lot of the packet handling and encoding, sometimes even encryption) for you, and are controlled using a simple digital serial protocol. They are very cheap and versatile.

What are the practical limitations of one-way radios?

There are an awful lot of them – too many to list, really. Let’s cover a few key ones.

Detectors have no idea if the system is armed or not

There is no way for a detector to know if the system is armed or not as it cannot receive any information.

This means they always have to behave as if the system were armed, balancing fast alarm response against battery life. This trade-off is often handled by holding off alarm detection for a few minutes after an alarm has been raised.

It also means that they try to send supervisory “OK” status messages as infrequently as possible – per the standards, the interval between them can be up to 240 minutes.

This has practical implications for how responsive an alarm system can be.

The panel cannot ping the detectors when it is armed

Two-way panels all actively check the presence and status of detectors at the moment the system is armed. If any detectors are in tamper, have open contacts, are missing, or have low batteries, the user will be warned (and possibly prevented from arming the alarm). This is very similar to how a wired system works.

One-way systems have to rely on the last alarm or status message received, which could be from a long time prior and out of date.

Rolling code and encryption is much harder to do well

In a previous post, I discussed how rolling code systems can’t just accept the next code in the sequence – they need to accept codes over a wide window, possibly the next 256 valid codes. This is because the transmission is not guaranteed to be received and the transmitter hops forwards regardless.

With a two-way system, this window can be avoided. The keyfob can keep sending the same code in the sequence until the panel sends a message back confirming it has been received (this is a simplified explanation of how it could work; pure rolling code is rare in two-way systems).
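The difference can be sketched with a toy receiver. In a one-way system, the panel must accept a window of future counter values, because the fob advances on every press whether or not the panel heard it (the details here are illustrative, not taken from any specific product):

```python
# Toy one-way rolling-code receiver: the panel accepts any counter value
# within a forward window, then resynchronises to it. Illustrative only.
class Panel:
    def __init__(self, window=256):
        self.expected = 0
        self.window = window

    def receive(self, counter):
        # Accept if counter lies in [expected, expected + window)
        if self.expected <= counter < self.expected + self.window:
            self.expected = counter + 1   # resynchronise to the fob
            return True
        return False

panel = Panel()
print(panel.receive(0))     # True  - in sync
print(panel.receive(5))     # True  - 4 presses were missed, still in window
print(panel.receive(3))     # False - replay of an old, already-used code
print(panel.receive(500))   # False - too far ahead, outside the window
```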

Alongside this, one-way radio makes exchanging encryption keys difficult. A concept similar to the window of valid codes is needed to ensure that transmissions are received correctly after a key change. For this reason, encryption keys in one-way systems are most often fixed (though they may be exchanged during the initial pairing).

Conceptually, it’s exactly the same as two people trying to communicate reliably when one of them can only speak and the other can only listen. There’s also a two-year-old in the room who won’t shut up (interference), and another guy actively trying to make sure everything goes wrong (a malicious attacker).

This raises another interesting aside – alarm systems always need to find a balance between security and reliability of communications. There is little use in making communications completely secure if it means alarm messages do not make it through.

Security devices and product differentiation

An interesting subject has come up on the TSI forums: product differentiation in relation to encryption and security in alarm signalling systems.

As with alarms, there are different grades of signalling devices. These go from grade 1 (low risk, doesn’t seem to be used much or at all) to grade 4 (high risk, banks, jewellers). It’s common for the signalling device to be a higher grade than the alarm system, although this is not mandated.

Grade 4 requires encryption, protection from message substitution and replay etc. One provider, WebWayOne has built a system that uses several proven technologies like AES-128 and other widely known cryptographic fundamentals.

One of WebWayOne’s representatives said on the forum:

“Once these techniques are in place they may as well be deployed across all grades of system, it makes no sense not to.”

This is an awesome attitude to have and, to me, signals that these guys have actually understood the challenges in implementing a secure protocol. They are not weakening lower grade systems by weakening the cryptography and protocol.

Why do I think this is sound reasoning? It’s probably easier to argue why weakening the cryptography and protocol is not a good idea – here are some ways I have seen it done in other systems using cryptography (not alarm signalling systems – I am extending my reasoning from other products to apply to them).

Reducing key-length

Some products differentiate different grades of security by reducing key length. This tends to be a bad idea.

Practically all cryptographic techniques are vulnerable to brute-force attacks – simply trying every single key, one by one. It’s currently accepted that 40, 56 and 64-bit keys are not long enough to protect against brute force, while 112-bit (twice 56, as used in keying option 2 of triple DES) and 128-bit keys are. This will change in the future, but we are safe for a good few years yet.

Anything above 128 bits is therefore deemed wasteful – your highest grade product can use 128 bits and be secure. You could alter your lower grade product to use 64-bit keys. A lay person might think this would take half the time to brute force – but it is actually easier by a factor of 2^64 (18,446,744,073,709,551,616 times easier).

You could offer 127 bit encryption – this would take half the time to crack. But what would be the point? It would be product differentiation for no reason, and implementing a custom key length nearly always means you are “rolling your own” and will make mistakes.
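The arithmetic is worth spelling out, since it is so counterintuitive:

```python
# Halving the key length from 128 to 64 bits does not halve the
# brute-force effort - it divides it by 2**64.
keyspace_128 = 2 ** 128
keyspace_64 = 2 ** 64
keyspace_127 = 2 ** 127

print(keyspace_128 // keyspace_64)   # how much easier a 64-bit key is
print(keyspace_128 // keyspace_127)  # a 127-bit key is only twice as easy
```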

Altering the protocol

Changing the protocol in any way would also be an odd way to differentiate a lower grade.

Outside of key length, most aspects of a protocol are binary: secure or not secure. You can’t offer 50% of message authentication. You can’t offer 50% of a secure key exchange. Each is either present and secure, present and insecure, or not present at all.

If any aspect of a secure protocol is deemed insecure, it’s highly likely that the whole thing will fall apart. This isn’t always the case, but it’s fairly usual to see a theoretical vulnerability against a single part (say, the message authentication) turn into a full blown practical exploit against the whole thing. This means you need to tread carefully when trying to artificially weaken a protocol.

The hardware is there anyway

Signalling systems don’t have the same constraints as wireless detectors. They have plentiful power and space, which affords the use of comparatively powerful hardware.

Most detectors use 8-bit microcontrollers like the PIC, ATmega, or the 8051 built into the CC1110. They run at slow clock rates (lowering power consumption) and have limited RAM and register space. Implementing full-blown cryptographic schemes on these is not easy, especially when you move up to something like RSA with 1024-bit keys (RSA is public-key cryptography, which needs much longer keys to be secure than symmetric cryptography like AES).

I have not seen inside any IP signalling devices, but I would wager that they use modern, powerful 32-bit processors like ARM cores, with plentiful RAM and fast clocks. Cryptographic libraries are already available for these processors, making it easy to build a secure protocol.

This hardware is likely the same across all grades. Again, it just makes no sense to build a lower grade system using different hardware to artificially constrain it.


Properly pen testing products, as opposed to “test house” testing to standards, is a time-consuming, expensive and highly skilled job. Having two distinct products, even if they differ only slightly in hardware and software, would really require two distinct pen tests. This is a cost you do not need to bear. Test the grade 4 product, use the same hardware and software for grade 2, and you have tested both at once.

Differentiate on the tangible aspects

When it comes down to it, all of this doesn’t really matter to the customer. They just want something secure. So differentiate on the tangible things – how long the signalling takes to report issues, and the response to alarms.