## Chinese Certificate Authority ‘mistakenly’ gave out SSL Certs for GitHub Domains

A Chinese certificate authority (CA) appears to have made a significant security blunder by handing out duplicate SSL certificates for a base domain to anyone who controls one of its subdomains.

The certificate authority, named WoSign, issued a base certificate for the GitHub domains to an unnamed GitHub user. But how? First of all, do you know, the traditional Digital Certificate…

## CAAR Receives National Grant & Donates Additional Funds to Help Build a Permanent Albemarle …

The Association advocates for the protection of private property rights and provides tools and technology for members to achieve expertise in serving …

## Russian Lawmaker’s Son Convicted of Stealing 2.9 Million Credit Card Numbers

The son of a prominent Russian lawmaker has been found guilty in the United States of running a hacking scheme that stole and sold 2.9 million US credit card numbers using Point-of-Sale (POS) malware, costing financial institutions more than $169 Million. Roman Seleznev, 32, the son of Russian Parliament member Valery Seleznev, was arrested in 2014 while attempting to board a flight in the…

## Crypto 2016: Breaking the Circuit Size Barrier for Secure Computation Under DDH

The CRYPTO 2016 Best Paper Award went to a paper by Boyle et al. [1]. The paper provides several new protocols based on the DDH assumption, with applications to 2PC (two-party computation), private information retrieval, and function secret sharing.

Even more interestingly, the authors present a protocol where 2PC for branching programs is realized in such a way that the communication complexity depends only on the input size, while the computation remains linear in the circuit size.

The central idea develops around building an efficient evaluation of RMS (restricted multiplication straight-line) programs. The special feature of RMS programs is that they allow multiplications only between memory and input values; additions between memory values come for free. Although this class seems quite restrictive, it covers the class of branching programs (logarithmic-depth Boolean circuits with polynomial size and bounded input).

In the 2PC evaluation of RMS, suppose there is a linearly shared memory value $[y]$ between the parties $P_1$ and $P_2$. When $P_1$ wants to share an input value $x$ with $P_2$, it sends an ElGamal encryption of $x$, $g^{xc}$, where $c$ is a symmetric ElGamal key. Clearly, the encryption is homomorphic with respect to multiplication, but how can we perform any operation between a linear SS (secret-shared) value and an ElGamal encryption?
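As a toy illustration of the multiplicative homomorphism, here is a sketch of ElGamal "encryption in the exponent" (all parameter values, names, and the 61-bit group are my illustrative choices, not the paper's, and are far too small to be secure):

```python
# Toy sketch: ciphertexts of g^x1 and g^x2 multiply component-wise into a
# ciphertext of g^(x1+x2). Parameters are illustrative and insecure.
p = 2**61 - 1            # toy prime modulus
g = 3                    # toy generator
c = 987654321            # secret key
h = pow(g, c, p)         # public key h = g^c

def enc(x, r):
    """Encrypt g^x with randomness r: (g^r, h^r * g^x)."""
    return (pow(g, r, p), pow(h, r, p) * pow(g, x, p) % p)

def mul(ct1, ct2):
    """Component-wise product: encrypts the product of the plaintexts."""
    return (ct1[0] * ct2[0] % p, ct1[1] * ct2[1] % p)

def dec(ct):
    """Decrypt to g^x; recovering x itself would need a discrete log."""
    a, b = ct
    return b * pow(a, -c, p) % p

ct = mul(enc(2, 11), enc(3, 22))
assert dec(ct) == pow(g, 2 + 3, p)   # decrypts to g^(2+3)
```

Note that decryption only yields $g^x$, not $x$; turning such an exponent back into a shared value is exactly what the DLog conversion procedure described next is for.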

This is solved by introducing a distributed DLog procedure which converts the ElGamal ciphertexts into linear SS values. The method uses a truncated PRF and counts the number of steps until the PRF, evaluated on the ElGamal encryption, equals $0$. Unfortunately, this algorithm has some probability of outputting an incorrect result, but this can be fixed by evaluating multiple instances of the same protocol in parallel and then using an MPC protocol to select the majority result.
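A toy sketch of this conversion step (the helper names, parameters, and majority vote below are my illustrative choices, not the paper's notation): the parties hold multiplicative shares $g^a$ and $g^{a+x}$, each walks forward multiplying by $g$ until a pseudorandom "distinguished point" is hit, and the difference of their step counts is a subtractive share of $x$ whenever both walks stop at the same point.

```python
import hashlib

# Toy parameters (illustrative, not secure): a short walk in Z_p^*.
p = 2**61 - 1
g = 3

def distinguished(h, key):
    """PRF-style predicate: an element is 'distinguished' with density ~1/256."""
    return hashlib.sha256(f"{key}:{h}".encode()).digest()[0] == 0

def walk_length(w, key, max_steps=100000):
    """Multiply w by g until a distinguished point; return the step count."""
    steps = 0
    while not distinguished(w, key):
        w = w * g % p
        steps += 1
        if steps > max_steps:
            raise RuntimeError("no distinguished point found")
    return steps

a, x = 123456789, 5          # x is the small value being converted
w1 = pow(g, a, p)            # P1's multiplicative share: g^a
w2 = w1 * pow(g, x, p) % p   # P2's multiplicative share: g^(a+x)

# A single run can fail (the walks may stop at different points), so run
# several keyed instances and take the majority, as the protocol prescribes.
results = [walk_length(w1, k) - walk_length(w2, k) for k in range(15)]
majority = max(set(results), key=results.count)
print(majority)  # recovers x = 5 except with negligible probability
```

Each individual instance fails when a distinguished point falls in the gap of length $x$ between the two starting elements, which is why small values and majority voting matter here.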

Of course, there are some caveats at the beginning of the scheme, such as converting the key generation procedure to a public-key one and removing circular-key assumptions. These are presented gradually by the authors to ease the reader's understanding of the ideas.

What I find neat is that, at the end of the paper, we can easily see how to reduce the communication for general 'dense' arithmetic circuits by splitting them into multiple reduced-depth chunks and then applying the RMS programs for each gate (because an addition or multiplication gate can be represented as a branching program).

Of course, we can spot some open problems left as future work, such as:
1. Extending the protocols to larger classes of programs than branching programs.
2. The protocol only works for $2$ parties. Can we achieve constant communication for multiple parties without using FHE?
3. Can we make the protocol secure against malicious parties in some way other than a generic compiler as in [2]?

[1]: Boyle, Elette, Niv Gilboa, and Yuval Ishai. "Breaking the Circuit Size Barrier for Secure Computation Under DDH."
[2]: Ishai, Yuval, et al. "Cryptography with constant computational overhead." Proceedings of the fortieth annual ACM symposium on Theory of computing. ACM, 2008.

## Opera Browser Sync Service Hacked; Users’ Data and Saved Passwords Compromised

Opera has reset passwords of all users for one of its services after hackers were able to gain access to one of its Cloud servers this week.

Opera Software reported a security breach last night, which affects all users of the sync feature of its web browser.
So, if you’ve been using Opera’s Cloud Sync service, which allows users to synchronize their browser data and settings…

Well, we all know that the FBI has previously hosted porn on the Internet. I still remember the case of PlayPen, the world's largest dark-web child pornography site, which was seized by the FBI and run from the agency's own servers to uncover the site's visitors.

Now, one of the most popular sites owned and operated by the FBI has been serving porn as well. FBI-owned Megaupload.org and several…

## CHES 2016: On the Multiplicative Complexity of Boolean Functions and Bitsliced Higher-Order Masking

During the morning session on the final day of CHES 2016, Dahmun Goudarzi presented his paper, co-authored with Matthieu Rivain, on bit-sliced higher-order masking.

Bit-sliced higher-order masking of S-boxes is an alternative to higher-order masking schemes in which an S-box is represented by a polynomial over a binary finite field. The basic idea is to bit-slice the Boolean circuits of all the S-boxes used in a cipher round. Securing a Boolean AND operation, needed in the bit-sliced approach, is significantly faster than securing a multiplication over a binary finite field, needed in polynomial-based masking schemes. On the other hand, the number of AND operations required is significantly higher than the number of field multiplications in the polynomial approach. Still, the use of bit-slicing with relatively large registers (for instance, 32-bit registers) previously led the same authors to demonstrate significant improvements over polynomial-based masking schemes for specific block ciphers such as AES and PRESENT [GR16]. However, no generic method to apply bit-sliced higher-order masking to arbitrary S-boxes was previously known, and proposing such a method is one of the main contributions of the current work.

The running time and the randomness requirement of the bit-sliced masking technique mainly depend on the multiplicative complexity, i.e., the number of AND gates in the masked circuit. (A more precise measure is the parallel multiplicative complexity.) While previous works have shown how to obtain circuits optimal w.r.t. multiplicative complexity for small S-boxes by using SAT solvers, the same problem for 6-bit or larger S-boxes had remained open. In the current work, the authors propose a new heuristic method to obtain Boolean circuits of low multiplicative complexity for arbitrary S-boxes. The proposed method follows the same approach as a previous work [CRV14] that computes efficient polynomial representations of S-boxes over binary finite fields. The authors provide a heuristic analysis of the multiplicative complexity of their method that comes quite close to the experimental results for S-box sizes of practical relevance. Finally, they implement the bit-sliced masking technique on a 32-bit ARM architecture, evaluating sixteen 4-bit S-boxes in parallel in one implementation and sixteen 8-bit S-boxes in parallel in another. The timing results indicate that the bit-sliced masking method performs significantly better than polynomial-based masking methods once the number of shares exceeds a certain bound.
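To make the "securing a Boolean AND" step concrete, here is a minimal sketch of the classic ISW (Ishai-Sahai-Wagner) multiplication gadget applied to bit-sliced 32-bit words. This is a generic stand-in for the masked AND used in such schemes, not the paper's own implementation; the helper names are mine, and a real side-channel countermeasure would be written in leakage-aware assembly or C, not Python.

```python
import secrets

def xor_all(words):
    """XOR-fold a list of words."""
    r = 0
    for w in words:
        r ^= w
    return r

def share(x, n):
    """Split a 32-bit word into n random XOR-shares."""
    shares = [secrets.randbits(32) for _ in range(n - 1)]
    shares.append(x ^ xor_all(shares))
    return shares

def isw_and(a, b):
    """ISW masked AND on two share vectors of equal length n.
    Bit-sliced: each & below acts on 32 S-box instances at once.
    Consumes n*(n-1)/2 fresh 32-bit random words per call."""
    n = len(a)
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbits(32)
            c[i] ^= r
            c[j] ^= (r ^ (a[i] & b[j])) ^ (a[j] & b[i])
    return c

# The output shares recombine to the AND of the unmasked words.
x, y = 0xDEADBEEF, 0x0F0F0F0F
out = isw_and(share(x, 3), share(y, 3))
assert xor_all(out) == x & y
```

The quadratic loop over share pairs is exactly why minimizing the number of AND gates (the multiplicative complexity) drives both the running time and the randomness cost of the masked circuit.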

References:
[CRV14] Jean-Sébastien Coron, Arnab Roy, Srinivas Vivek: Fast Evaluation of Polynomials over Binary Finite Fields and Application to Side-Channel Countermeasures. CHES 2014 & JCEN 2015.

[GR16] Dahmun Goudarzi and Matthieu Rivain. How Fast Can Higher-Order Masking Be in Software? Cryptology ePrint Archive, 2016.

## This Open Source 25-Core Processor Chip Can Be Scaled Up to 200,000-Core Computer

Researchers have designed a new computer chip that promises to boost the performance of computers and data centers while processing applications in parallel.

Princeton University researchers have developed a 25-core open source processor, dubbed Piton, named after the metal spikes used by rock climbers, which has been designed to be flexible, highly scalable, fast and energy-efficient to…

## Apple releases ‘Emergency’ Patch after Advanced Spyware Targets Human Rights Activist

Apple has released the iOS 9.3.5 update for iPhones and iPads to patch three zero-day vulnerabilities after a piece of spyware was found targeting the iPhone used by renowned UAE human rights defender Ahmed Mansoor.

One of the world's most invasive software weapon distributors, called the NSO Group, has been exploiting three zero-day security vulnerabilities in order to spy on dissidents and…

## WhatsApp to Share Your Data with Facebook — You have 30 Days to Stop It

Nothing comes for Free, as “Free” is just a relative term used by companies to develop a strong user base and then use it for their own benefits.

The same has been done by the secure messaging app WhatsApp, which has now made it crystal clear that the popular messaging service will begin sharing its users’ data with its parent company, Facebook.

However, WhatsApp is offering a partial…

## Germany and France declare War on Encryption to Fight Terrorism

Yet another war on Encryption!

France and Germany are asking the European Union for new laws that would require mobile messaging services to decrypt secure communications on demand and make them available to law enforcement agencies.

French and German interior ministers this week said their governments should be able to access content on encrypted services in order to fight terrorism, the…

## Happy Birthday! LINUX Turns 25 Years Old Today

Linux has turned 25!

Dear all, today is August 25, 2016, and it is time for the celebration, as it’s the 25th Anniversary of the Linux project, announced by its creator, Finnish programmer Linus Torvalds, on August 25, 1991.

Who can forget one of the most famous messages in the computing world, posted by Torvalds exactly 25 years ago today, on 25 August 1991: Hello everybody out…

## Attack of the week: 64-bit ciphers in TLS

A few months ago it was starting to seem like you couldn’t go a week without a new attack on TLS. In that context, this summer has been a blessed relief. Sadly, it looks like our vacation is over, and it’s time to go back to school.

Today brings the news that Karthikeyan Bhargavan and Gaëtan Leurent out of INRIA have a new paper that demonstrates a practical attack on legacy ciphersuites in TLS (it’s called “Sweet32”, website here). What they show is that ciphersuites that use 64-bit blocklength ciphers — notably 3DES — are vulnerable to plaintext recovery attacks that work even if the attacker cannot recover the encryption key.

While the principles behind this attack are well known, there’s always a difference between attacks in principle and attacks in practice. What this paper shows is that we really need to start paying attention to the practice.

So what’s the matter with 64-bit block ciphers?

(figure source: Wikipedia)

Block ciphers are one of the most widely-used cryptographic primitives. As the name implies, these are schemes designed to encipher data in blocks, rather than a single bit at a time.

The two main parameters that define a block cipher are its block size (the number of bits it processes in one go), and its key size. The two parameters need not be related. So for example, DES has a 56-bit key and a 64-bit block. Whereas 3DES (which is built from DES) can use up to a 168-bit key and yet still has the same 64-bit block. More recent ciphers have opted for both larger blocks and larger keys.

When it comes to the security provided by a block cipher, the most important parameter is generally the key size. A cipher like DES, with its tiny 56-bit key, is trivially vulnerable to brute force attacks that attempt decryption with every possible key (often using specialized hardware). A cipher like AES or 3DES is generally not vulnerable to this sort of attack, since the keys are much longer.

However, as they say: key size is not everything. Sometimes the block size matters too.

You see, in practice, we often need to encrypt messages that are longer than a single block. We also tend to want our encryption to be randomized. To accomplish this, most protocols use a block cipher in a scheme called a mode of operation. The most popular mode used in TLS is CBC mode. Encryption in CBC looks like this:

(CBC mode; source: Wikipedia)

The nice thing about CBC is that (leaving aside authentication issues) it can be proven (semantically) secure if we make various assumptions about the security of the underlying block cipher. Yet these security proofs have one important requirement. Namely, the attacker must not receive too much data encrypted with a single key.

The reason for this can be illustrated via the following simple attack.

Imagine that an honest encryptor is encrypting a bunch of messages using CBC mode. Following the diagram above, this involves selecting a random Initialization Vector (IV) of size equal to the block size of the cipher, then XORing the IV with the first plaintext block (P), and enciphering the result (P ⊕ IV). The IV is sent (in the clear) along with the ciphertext.

Most of the time, the resulting ciphertext block will be unique — that is, it won’t match any previous ciphertext block that an attacker may have seen. However, if the encryptor processes enough messages, sooner or later the attacker will see a collision. That is, it will see a ciphertext block that is the same as some previous ciphertext block. Since the cipher is deterministic, this means the cipher’s input (P ⊕ IV) must be identical to the cipher’s previous input (P’ ⊕ IV’) that created the previous block.

In other words, we have (P ⊕ IV) = (P’ ⊕ IV’), which can be rearranged as (P ⊕ P’) = (IV ⊕ IV’). Since the IVs are random and known to the attacker, the attacker has (with high probability) learned the XOR of two (unknown) plaintexts!

What can you do with the XOR of two unknown plaintexts? Well, if you happen to know one of those two plaintext blocks — as you might if you were able to choose some of the plaintexts the encryptor was processing — then you can easily recover the other plaintext. Alternatively, there are known techniques that can sometimes recover useful data even when you don’t know both blocks.
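The collision argument above can be demonstrated end-to-end with a toy 16-bit "block cipher" — a seeded random permutation standing in for 3DES; every name and parameter here is an illustrative choice, not part of the real attack code. We encrypt many single-block messages in CBC mode, wait for a ciphertext collision, and read off the XOR of the two plaintexts from the public IVs alone:

```python
import random

rng = random.Random(1)               # seeded so the demo is reproducible

# Toy 16-bit block cipher: a fixed secret permutation (stand-in for 3DES).
perm = list(range(1 << 16))
rng.shuffle(perm)
def E(x):
    return perm[x]

# Honest encryptor: single-block CBC, C = E(P ^ IV), IV sent in the clear.
samples = []
for _ in range(2000):                # ~2^11 blocks, well past the 2^8 birthday bound
    P = rng.randrange(1 << 16)       # secret plaintext block
    IV = rng.randrange(1 << 16)      # random public IV
    samples.append((P, IV, E(P ^ IV)))

# Attacker sees (IV, C) pairs and waits for a repeated ciphertext block.
found = False
seen = {}
for P, IV, C in samples:
    if C in seen:
        P2, IV2 = seen[C]
        # E is a permutation, so C = C' implies P ^ IV = P' ^ IV',
        # hence P ^ P' = IV ^ IV' -- computable from the public IVs alone.
        assert P ^ P2 == IV ^ IV2
        found = True
        break
    seen[C] = (P, IV)
```

With a 16-bit block a few thousand encryptions suffice; with a 64-bit block the same logic is what drives the multi-gigabyte traffic requirements discussed next.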

The main lesson here is that this entire mess only occurs if the attacker sees a collision. And the probability of such a collision is entirely dependent on the size of the cipher block. Worse, thanks to the (non-intuitive) nature of the birthday bound, this happens much more quickly than you might think it would. Roughly speaking, if the cipher block is b bits long, then we should expect a collision after roughly 2^{b/2} encrypted blocks.

In the case of a 64-bit blocksize cipher like 3DES, this is somewhere in the vicinity of 2^32, or around 4 billion enciphered blocks.

(As a note, the collision does not really need to occur in the first block. Since all blocks in CBC are calculated in the same way, it could be a collision anywhere within the messages.)
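These numbers are easy to sanity-check with the standard birthday approximation (nothing here is from the paper; it is just the back-of-the-envelope arithmetic):

```python
import math

def collision_prob(n_blocks, block_bits):
    """Birthday approximation: P[collision] ~ 1 - exp(-n^2 / 2^(b+1))."""
    return 1 - math.exp(-n_blocks**2 / 2**(block_bits + 1))

blocks = 2**32                         # ~4 billion 64-bit blocks
print(collision_prob(blocks, 64))      # ~0.39: a collision is already likely
print(blocks * 8 / 2**30)              # 32.0 GiB of raw 3DES ciphertext
```

For comparison, a 128-bit block cipher like AES would need around 2^64 blocks to reach the same collision probability, which is why the attack targets 64-bit ciphers specifically.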

Whew. I thought this was a practical attack. 4 billion is a big number!

It’s true that 4 billion blocks seems like an awfully large number. In a practical attack, the requirements would be even larger — since the most efficient attack is for the attacker to know a lot of the plaintexts, in the hope that she will be able to recover one unknown plaintext when she learns the value (P ⊕ P’).

However, it’s worth keeping in mind that these traffic numbers aren’t absurd for TLS. In practice, 4 billion 3DES blocks works out to 32GB of raw ciphertext. A lot to be sure, but not impossible. If, as the Sweet32 authors do, we assume that half of the plaintext blocks are known to the attacker, we’d need to increase the amount of ciphertext to about 64GB. This is a lot, but not impossible.

The Sweet32 authors take this one step further. They imagine that the ciphertext consists of many HTTPS connections, consisting of 512 bytes of plaintext, in each of which is embedded the same secret 8-byte cookie — and the rest of the session plaintext is known. Calculating from these values, they obtain a requirement of approximately 256GB of ciphertext needed to recover the cookie with high probability.

That is really a lot.

But keep in mind that TLS connections are being used to encipher increasingly more data. Moreover, a single open browser frame running attacker-controlled Javascript can produce many gigabytes of ciphertext in a single hour. So these attacks are not outside of the realm of what we can run today, and presumably will be very feasible in the future.

How does the TLS attack work?

While the cryptographic community has been largely pushing TLS away from ciphersuites like CBC, in favor of modern authenticated modes of operation, these modes still exist in TLS. And they exist not only for modern ciphers like AES; they are often available for older ciphers like 3DES as well. For example, here’s a connection I just made to Google:

Of course, just because a server supports 3DES does not mean that it’s vulnerable to this attack. In order for a particular connection to be vulnerable, both the client and server must satisfy three main requirements:

1. The client and server must negotiate a 64-bit cipher. This is a relatively rare occurrence, but can happen in cases where one of the two sides is using an out-of-date client. For example, stock Windows XP* does not support any of the AES-based ciphersuites. Similarly, SSL3 connections may negotiate 3DES ciphersuites.
2. The server and client must support long-lived TLS sessions, i.e., encrypting a great deal of data with the same key. Unfortunately, most web browsers place no limit on the length of an HTTPS session if Keep-Alive is used, provided that the server allows the session. The Sweet32 authors scanned and discovered that many servers (including IIS) will allow sessions long enough to run their attack. Across the Internet, the percentage of vulnerable servers is small (less than 1%), but includes some important sites.
   (Figure: sites vulnerable to the attack; source: Sweet32 paper.)
3. The client must encipher a great deal of known data, including a secret session cookie. This is generally achieved by running adversarial Javascript code in the browser, although it could be done using standard HTML as well.

These caveats aside, the authors were able to run their attack using Firefox, sending at a rate of about 1500 connections per second. With a few optimizations, they were able to recover a 16-byte secret cookie in about 30 hours (a lucky result, given an expected 38 hour run time).

So what do we do now?

While this is not an earthshaking result, it’s roughly comparable to previous results we’ve seen with legacy ciphers like RC4.

In short, while these are not the easiest attacks to run, it’s a big problem that there even exist semi-practical attacks that succeed against the encryption used in standard encryption protocols. This is a problem that we should address, and papers like this one can make a big difference in doing that.

Notes:

* Note that by “stock” Windows XP, I’m referring to Windows XP as it was originally sold. According to Stefan Kanthak, Microsoft added AES support to SChannel via a series of updates on August 11, 2009. It’s not clear when these became “automatic install”. So if you haven’t updated your XP in a long time, that’s probably a bad thing.

A few months ago it was starting to seem like you couldn't go a week without a new attack on TLS. In that context, this summer has been a blessed relief. Sadly, it looks like our vacation is over, and it's time to go back to school.

Today brings the news that Karthikeyan Bhargavan and Gaëtan Leurent out of INRIA have a new paper that demonstrates a practical attack on legacy ciphersuites in TLS (it's called "Sweet32", website here). What they show is that ciphersuites that use 64-bit blocklength ciphers -- notably 3DES -- are vulnerable to plaintext recovery attacks that work even if the attacker cannot recover the encryption key.

While the principles behind this attack are well known, there's always a difference between attacks in principle and attacks in practice. What this paper shows is that we really need to start paying attention to the practice.

So what's the matter with 64-bit block ciphers?
 source: Wikipedia

Block ciphers are one of the most widely-used cryptographic primitives. As the name implies, these are schemes designed to encipher data in blocks, rather than a single bit at a time.

The two main parameters that define a block cipher are its block size (the number of bits it processes in one go), and its key size. The two parameters need not be related. So for example, DES has a 56-bit key and a 64-bit block. Whereas 3DES (which is built from DES) can use up to a 168-bit key and yet still has the same 64-bit block. More recent ciphers have opted for both larger blocks and larger keys.

When it comes to the security provided by a block cipher, the most important parameter is generally the key size. A cipher like DES, with its tiny 56-bit key, is trivially vulnerable to brute force attacks that attempt decryption with every possible key (often using specialized hardware). A cipher like AES or 3DES is generally not vulnerable to this sort of attack, since the keys are much longer.

However, as they say: key size is not everything. Sometimes the block size matters too.

You see, in practice, we often need to encrypt messages that are longer than a single block. We also tend to want our encryption to be randomized. To accomplish this, most protocols use a block cipher in a scheme called a mode of operation. The most popular mode used in TLS is CBC mode. Encryption in CBC looks like this:

 (source: wikipedia)
The nice thing about CBC is that (leaving aside authentication issues) it can be proven (semantically) secure if we make various assumptions about the security of the underlying block cipher. Yet these security proofs have one important requirement. Namely, the attacker must not receive too much data encrypted with a single key.

The reason for this can be illustrated via the following simple attack.

Imagine that an honest encryptor is encrypting a bunch of messages using CBC mode. Following the diagram above, this involves selecting a random Initialization Vector (IV) of size equal to the block size of the cipher, then XORing the IV with the first plaintext block (P), and enciphering the result (PIV). The IV is sent (in the clear) along with the ciphertext.

Most of the time, the resulting ciphertext block will be unique -- that is, it won't match any previous ciphertext block that an attacker may have seen. However, if the encryptor processes enough messages, sooner or later the attacker will see a collision. That is, it will see a ciphertext block that is the same as some previous ciphertext block. Since the cipher is deterministic, this means the cipher's input (PIV) must be identical to the cipher's previous input (P' ⊕ IV') that created the previous block.

In other words, we have (P ⊕ IV) = (P' ⊕ IV'), which can be rearranged as (P ⊕ P') = (IV ⊕ IV'). Since the IVs are random and known to the attacker, the attacker has (with high probability) learned the XOR of two (unknown) plaintexts!

What can you do with the XOR of two unknown plaintexts? Well, if you happen to know one of those two plaintext blocks -- as you might if you were able to choose some of the plaintexts the encryptor was processing -- then you can easily recover the other plaintext. Alternatively, there are known techniques that can sometimes recover useful data even when you don't know both blocks.

The main lesson here is that this entire mess only occurs if the attacker sees a collision. And the probability of such a collision is entirely dependent on the size of the cipher block. Worse, thanks to the (non-intuitive) nature of the birthday bound, this happens much more quickly than you might think it would. Roughly speaking, if the cipher block is b bits long, then we should expect a collision after roughly 2^{b/2} encrypted blocks.

In the case of a 64-bit blocksize cipher like 3DES, this is somewhere in the vicinity of 2^32, or around 4 billion enciphered blocks.

(As a note, the collision does not really need to occur in the first block. Since all blocks in CBC are calculated in the same way, it could be a collision anywhere within the messages.)

Whew. I thought this was a practical attack. 4 billion is a big number!

It's true that 4 billion blocks seems like an awfully large number. In a practical attack, the requirements would be even larger -- since the most efficient attack is for the attacker to know a lot of the plaintexts, in the hope that she will be able to recover one unknown plaintext when she learns the value (P ⊕ P').

However, it's worth keeping in mind that these traffic numbers aren't absurd for TLS. In practice, 4 billion 3DES blocks works out to 32GB of raw ciphertext. A lot to be sure, but not impossible. If, as the Sweet32 authors do, we assume that half of the plaintext blocks are known to the attacker, we'd need to increase the amount of ciphertext to about 64GB. This is a lot, but not impossible.

The Sweet32 authors take this one step further. They imagine ciphertext drawn from many HTTPS connections, each carrying 512 bytes of plaintext in which the same secret 8-byte cookie is embedded, with the rest of the session plaintext known. Calculating from these values, they obtain a requirement of approximately 256GB of ciphertext to recover the cookie with high probability.

That is really a lot.

But keep in mind that TLS connections are being used to encipher increasingly more data. Moreover, a single open browser frame running attacker-controlled Javascript can produce many gigabytes of ciphertext in a single hour. So these attacks are not outside of the realm of what we can run today, and presumably will be very feasible in the future.

How does the TLS attack work?

While the cryptographic community has largely been pushing TLS away from cipher modes like CBC, in favor of modern authenticated modes of operation, the older modes still exist in TLS. And they exist not only for use with modern ciphers like AES; they are often also available with older ciphers like 3DES. For example, here's a connection I just made to Google:

Of course, just because a server supports 3DES does not mean that it's vulnerable to this attack. In order for a particular connection to be vulnerable, both the client and server must satisfy three main requirements:
1. The client and server must negotiate a 64-bit cipher. This is a relatively rare occurrence, but can happen in cases where one of the two sides is using an out-of-date implementation. For example, stock Windows XP* does not support any of the AES-based ciphersuites. Similarly, SSL3 connections may negotiate 3DES ciphersuites.
2. The server and client must support long-lived TLS sessions, i.e., encrypting a great deal of data with the same key. Unfortunately, most web browsers place no limit on the length of an HTTPS session if Keep-Alive is used, provided that the server allows the session. The Sweet32 authors scanned and discovered that many servers (including IIS) will allow sessions long enough to run their attack. Across the Internet, the percentage of vulnerable servers is small (less than 1%), but includes some important sites.
   (Figure: sites vulnerable to the attack, from the Sweet32 paper.)
3. The client must encipher a great deal of known data, including a secret session cookie. This is generally achieved by running adversarial Javascript code in the browser, although it could be done using standard HTML as well.
These caveats aside, the authors were able to run their attack using Firefox, sending at a rate of about 1500 connections per second. With a few optimizations, they were able to recover a 16-byte secret cookie in about 30 hours (a lucky result, given an expected 38 hour run time).

So what do we do now?

While this is not an earthshaking result, it's roughly comparable to previous results we've seen with legacy ciphers like RC4.

In short, while these are not the easiest attacks to run, it's a big problem that even semi-practical attacks exist against the encryption used in standard protocols. This is a problem we should address, and papers like this one can make a big difference in doing that.

Notes:

* Note that by "stock" Windows XP, I'm referring to Windows XP as it was originally sold. According to Stefan Kanthak, Microsoft added AES support to SChannel via a series of updates on August 11, 2009. It's not clear when these became "automatic install". So if you haven't updated your XP in a long time, that's probably a bad thing.

## 6 ways Sophos Home can keep your kids safe this school year!


In many parts of the world right now we are right in the middle of back-to-school season. Kids are getting excited to see their friends again and head back to the classroom, and are preparing for the best possible experience in school this year.

But what about at home? With so much of a child’s social life, homework and playtime happening online nowadays, you want to make sure their experience on the internet is as safe and fun as possible.

We have a number of tips that your kids can use to be safer online and on social media. But no matter how careful or internet-savvy your kids might be, criminals are always coming up with new ways to cause problems and find their way into your home computers.

Thankfully, Sophos Home can help. It brings our commercial-grade security straight to your home computers, completely for free. And, it’s just been given a great rating by Tom’s Guide!

Here are 6 ways Sophos Home can protect your kids this school year:

1. Web filtering: Sophos Home offers 28 web filter options, giving you the ability to allow, warn, or block entire categories of websites from your children’s computers. Categories range from blogs and chat sites to gambling and pornography.
2. Web and phishing protection: Even if kids are surfing good sites online, bad things can still happen. We can prevent access to sites that have been unknowingly compromised by malware and prevent kids from being redirected to fake websites posing as the real thing.
3. Potentially Unwanted Application protection: Worried your kids could be downloading apps and programs on their computer that are full of ads, spyware and other issues? Sophos Home can detect and prevent these applications and programs from installing or running.
4. Antivirus/Antimalware: Sophos Home can stop and remove malware that would allow cybercriminals to steal information and spy on your children.
5. Management: Multiple kids in multiple places? Sophos Home can manage and protect up to 10 Macs and PCs on one account, no matter where in the world they are located.
6. Money: Kids cost a lot of money. Sophos Home can protect them all without a penny, shilling, kopek, cent or any other currency you can think of.

We’re a little bit biased, but we think Sophos Home is the ideal way to keep your kids safe online this year, and give you one less thing to worry about.

Try Sophos Home today!


## MIT Researchers Solve the Spectrum Crunch to make Wi-Fi 10 times Faster

While using your cell phone at a massive public event, like a concert, conference, or sporting event, you have probably experienced slow communication, poor performance or slow browsing speeds, as crowds arrive.

That's because of 'Spectrum Crunch': the interference of Wi-Fi signals with one another.

WiFi signals of all cell-phones in a large event interfere with each other because …


## Cisco Exploit Leaked in NSA Hack Modifies to Target Latest Version of Firewalls

A recently released NSA exploit from "The Shadow Brokers" leak that affects older versions of Cisco Systems firewalls can work against newer models as well.

Dubbed ExtraBacon, the exploit was restricted to version 8.4.(4) and earlier of Cisco's Adaptive Security Appliance (ASA), a line of firewalls designed to protect corporate and government networks and data centers.


## CRYPTO 2016 – Backdoors, big keys and reverse firewalls on compromised systems

The morning of the second day at CRYPTO 2016 started with a track on "Compromised Systems", consisting of three talks covering different scenarios and attacks usually disregarded in the vast majority of the cryptographic literature. They all shared, as well, a concern about mass surveillance.

Suppose Alice wishes to send a message to Bob privately over an untrusted channel. Cryptographers have worked hard on this scenario, developing a whole range of tools with different notions of security and setup assumptions, among which one of the most common is that Alice has access to a trusted computer with a proper implementation of the cryptographic protocol she wants to run. The harsh truth is that this is a naïve assumption. The Snowden revelations show us that powerful adversaries can and will corrupt users' machines via extraordinary means, such as subverting cryptographic standards, intercepting and tampering with hardware on its way to users, or using Tailored Access Operations units.

Nevertheless, the relevance of these talks was not just a matter of a "trending topic" or distrust of the authoritarian and unaccountable practices of intelligence agencies. More frequently than we would like, presumably accidental vulnerabilities (such as POODLE, Heartbleed, etc.) are found in popular cryptographic software, leaving the end user unprotected even when using honest implementations. In the meantime, as Paul Kocher recalled in his invited talk the day after, for most of our community it passes without notice that, when we design our primitives and protocols, we blindly rely on a mathematical model of reality that sometimes has little to do with it.

In the same way that people from the CHES community have become more aware (mainly also the hard way) that relying on the wrong assumptions leads to false confidence in the security of deployed systems and devices, I think those of us not that close to hardware should also try to step back and look at how realistic our assumptions are. This includes, as these talks addressed in different ways, starting to assume that some standards might (and most systems will) be compromised at some point, and understanding what can still be done in those cases.

What would a cryptography that worries not only about prevention, but also about the whole security cycle, look like? How can the cryptography and information security communities come closer?

### Message Transmission with Reverse Firewalls— Secure Communication on Corrupted Machines

The reverse firewalls framework was recently introduced by Mironov and Stephens-Davidowitz, with a paper that has already been discussed in our group’s seminars and this same blog. A secure reverse firewall is a third party that “sits between Alice and the outside world” and modifies her sent and received messages so that even if her machine has been corrupted, Alice’s security is still guaranteed.

Their elegant construction does not require the users to place any additional trust in the firewall, and relies on the underlying cryptographic schemes being rerandomizable. With this threat model and these rerandomization capabilities, they describe impossibility results as well as concrete constructions.

For example, in the context of semantically secure public-key encryption, in order to provide reverse firewalls for Bob, the scheme must allow a third party to rerandomize a public key and map ciphertexts under the rerandomized public key to ciphertexts under the original public key. In the same context, Alice’s reverse firewall must be able to rerandomize the ciphertext she sends to Bob, in such a way that Dec(Rerand(Enc(m)))=m.
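As a toy illustration of the rerandomization property Dec(Rerand(Enc(m))) = m, here is textbook ElGamal in a deliberately tiny group. The parameters and function names are illustrative choices of mine, not the paper's actual construction, and the group is far too small to be secure:

```python
import secrets

# Textbook ElGamal in a tiny prime-order subgroup. A third party can
# inject fresh randomness into a ciphertext without the secret key,
# and decryption is unaffected.
p, q, g = 467, 233, 4       # p = 2q + 1 is a safe prime; g has order q

x = secrets.randbelow(q - 1) + 1    # Alice's secret key
h = pow(g, x, p)                    # corresponding public key

def enc(m: int) -> tuple[int, int]:
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (m * pow(h, r, p)) % p

def rerand(ct: tuple[int, int]) -> tuple[int, int]:
    # The firewall injects fresh randomness s without knowing x or m.
    c1, c2 = ct
    s = secrets.randbelow(q - 1) + 1
    return (c1 * pow(g, s, p)) % p, (c2 * pow(h, s, p)) % p

def dec(ct: tuple[int, int]) -> int:
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - x, p)) % p    # c2 / c1^x (mod p)

m = pow(g, 5, p)            # a message encoded as a subgroup element
assert dec(rerand(enc(m))) == m
print("Dec(Rerand(Enc(m))) == m")
```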

### Big-Key Symmetric Encryption: Resisting Key Exfiltration

The threat addressed in Bellare's talk is that of malware that aims to exfiltrate a user's key, likely using her system's network connection. In their work, they design schemes that aim to protect against this kind of Advanced Persistent Threat by making secret keys so big that their undetected exfiltration by the adversary is difficult, while keeping the user's overhead almost exclusively in storage rather than speed.

Their main result is a subkey prediction lemma, which gives a nice bound on an adversary's ability to guess a modest-length subkey derived by randomly selecting bits of a big-key from which partial information has already been leaked. This approach, known as the Bounded Retrieval Model, has been, in the words of the authors, largely a theoretical area of research; in contrast, they give a fully concrete security analysis with good numerical bounds, constants included.
Other highlighted aspects of their paper were the concrete improvements over [ADW09] and the key encapsulation technique, carefully based on different security assumptions (random oracle, standard model).

### Backdoors in Pseudorandom Number Generators: Possibility and Impossibility Results

The last talk of the session focused on the concrete problem of backdoored Pseudorandom Number Generators (PRGs) and PRNGs with input, which are fundamental building blocks in cryptographic protocols that have already been successfully compromised, as we learnt when the DUAL_EC_DRBG scandal came to light.

In their paper, the authors revisit a previous abstraction of backdoored PRGs [DGA+15], which modeled the adversary (Big Brother) with weaker powers than it could actually have. By giving concrete backdoored PRG constructions, they show how that model fails. Moreover, they also study robust PRNGs with input, for which they show that Big Brother is still able to predict the values of the PRNG state backwards, as well as giving bounds on the number of previous phases it can compromise, depending on the state size of the generator.

[ADW09] J. Alwen, Y. Dodis, and D. Wichs. Leakage-resilient public-key cryptography in the bounded-retrieval model. In S. Halevi, editor, CRYPTO 2009, volume 5677 of LNCS, pages 36–54. Springer, Heidelberg, Aug. 2009.

[DGA+15] Y. Dodis, C. Ganesh, A. Golovnev, A. Juels, and T. Ristenpart. A formal treatment of backdoored pseudorandom generators. In E. Oswald and M. Fischlin, editors, EUROCRYPT 2015, Part I, volume 9056 of LNCS, pages 101–126. Springer, Heidelberg, Apr. 2015.


## CHES 2016: Flush, Gauss, and Reload – A Cache Attack on the BLISS Lattice-Based Signature Scheme


Leon Groot Bruinderink presented at CHES a cache-attack against the signature scheme BLISS, a joint work with Andreas Hulsing, Tanja Lange and Yuval Yarom.
The speaker first gave a brief introduction to BLISS (Bimodal Lattice Signature Scheme), a signature scheme whose security is based on lattice problems over NTRU lattices. Since such problems are believed to be hard even in the presence of quantum computers, BLISS is a candidate cryptographic primitive for the post-quantum world. In addition, its original authors proposed implementations, making BLISS a notable example of a post-quantum algorithm deployable in real use-cases.
Informally speaking, a message $\mu$ is encoded in a challenge polynomial $\mathbf{c}$, which is then multiplied by the secret key $\mathbf{s}$ according to the following formula: $$\mathbf{z} = \mathbf{y} + (-1)^b ( \mathbf{s} \cdot \mathbf{c} )$$ where the bit $b$ and the noise polynomial $\mathbf{y}$ are unknown to the attacker. It is easy to see that if the attacker gains information about the noise polynomial, some linear algebra would lead her to the secret key. The coordinates of $\mathbf{y}$ are independently sampled from a discrete Gaussian distribution, which can be implemented in several ways. The ones targeted in the paper are CDT and rejection sampling. Only the first method was covered during the talk, so I focus on that in this blog post.
The idea behind CDT sampling is to precompute a table according to the cumulative distribution function of the discrete Gaussian, draw a random element, and treat it as an index into the table; the element in the cell indexed by the random number is returned. In the end, elements returned by this procedure are distributed statistically close to a discrete Gaussian. Although fast, the method has the drawback of needing to store a large table, a fact that is known to make it vulnerable to cache attacks.
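A minimal sketch of CDT sampling in Python may help fix the idea. The standard deviation, tail cut, and names below are illustrative choices of mine, not the actual BLISS parameters:

```python
import bisect
import math
import random

# CDT sampling for a small discrete Gaussian; the table lookup at the
# end is the memory access that a cache attack can observe.
sigma, tail = 2.0, 12
support = list(range(-tail, tail + 1))
weights = [math.exp(-(z * z) / (2 * sigma * sigma)) for z in support]
total = sum(weights)

# Precompute the cumulative distribution table.
cdf, acc = [], 0.0
for w in weights:
    acc += w / total
    cdf.append(acc)
cdf[-1] = 1.0               # guard against floating-point rounding

def sample() -> int:
    # A uniform draw is used as an index into the precomputed table;
    # which cache lines this lookup touches is what leaks.
    u = random.random()
    return support[bisect.bisect_left(cdf, u)]

print([sample() for _ in range(10)])
```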
The peculiarity of the attack carried out by Bruinderink et al. is that, since the algorithm does not reveal the exact cache lines in which the sampling table is accessed, the equations learned are correct only up to a small error, say $\pm 1$. The authors managed to translate this issue into a shortest vector problem over lattices. They then ran the LLL algorithm to solve the problem and retrieve correct equations.

## Cyber-Crime and how businesses can better protect themselves

One in ten adults has been a victim of online fraud and cybercrime in the past year. Rob Sheldon, Partner in Technology Outsourcing and Privacy at Fieldfisher's Manchester office, takes a look at that shocking statistic and asks: what can we do to protect ourselves and businesses better?

At least two thirds of companies experienced a cyber-breach last year, and many were caused not by remote hackers but by employees who made an error or acted maliciously: for example, deliberately compromising employee data over a grievance (as in the Morrisons data breach) or, for commercial gain, selling information to third parties, which is criminal activity.

The UK government recognises this problem and for the past two years has sponsored an annual cyber-breach survey to record the experiences of SMEs and large corporates. The government is also doubling its investment in cyber-security to £1.9bn over the next five years and ranks cyber-security incidents among the top three highest risks to the nation. Cyber-breaches are a known risk at corporate level, and now we're seeing that it's a risk which filters down.

In the UK, unlike most US states, there is no obligation on companies to notify regulators of security breaches where personal data is compromised. Payment card data, bank account information and National Insurance numbers are all personal data. However, the new General Data Protection Regulation moves us closer to a US model and introduces an obligation to notify the regulator, and in high-risk cases, affected individuals of security breaches where their data is compromised. This regulation would also bring new rights for individuals to have greater control over the use of their data and greater powers for regulators.

New pan-EU laws require 'operators of essential services' to maintain minimum security requirements and to report cyber-security incidents. This will enable regulators to spot cyber-breach trends, and it will also add to transparency across the financial services, utilities, energy, transport and healthcare sectors. Similar requirements are also imposed on digital service providers, online marketplaces, search engines and so on.

How can we better protect our data?

Studies demonstrate that we're terrible at changing default passwords. Whilst there is significant legislative focus from government on organisations ensuring data is secure and on individuals taking responsibility for safeguarding their data, a few easy ways you can protect your data right now are:

• Don't use the same password for every account
• Don't write down passwords on slips of paper or post-it notes
• Take advantage of stronger authentication (e.g. biometric data, or eliminating passwords through push-based authentication systems linked to mobile devices)

So you’ve done all this but how can you and your business guard against a cyber-attack?

• Challenge and report suspicious activity (e.g. phishing attacks: e-mails from unknown individuals requesting information)
• Ensure privacy settings are used on mobile devices and social media (so you only share what you want to share, with whom you want to see it)
• Check statements for unusual transactions or unauthorised activity
• Use credit check tools to monitor for unusual activity
• Ignore e-mails that look too good to be true

And finally, always follow advice from the companies who hold our valuable information. For example, a bank won't contact you and ask for your details, particularly by e-mail; whilst you might get e-mails that look like they're from a bank, be suspicious of any request (however genuine it looks) to part with valuable personal information (e.g. date of birth, NI numbers, account numbers, sort codes).

Studies demonstrate that individuals are becoming more savvy about online behaviour but there are generational variances and some age-groups are more likely to give away more data than others.

My expectation is that we will see commercial organisations lead the way in improving security, particularly around mobile devices/IT equipment, and this will quickly find its way into the consumer mainstream.

If you have any questions about cyber-crime in UK, our tech team is always on hand to answer your questions, so get in touch via email, or Twitter.

One in ten adults have been a victim of online fraud and cybercrime in the past year. Rob Sheldon, Partner in Technology Outsourcing and Privacy at Fieldfisher’s Manchester office takes a look at that shocking statistic and asks what can we do to protect ourselves and business better?

At least two thirds of companies experienced a cyber-breach last year and many are not caused by remote hackers but by employees who make an error or act maliciously. For example, deliberately compromising employee data through a grievance (such as the Morrison’s data breech) or for commercial gain by selling information to third parties which is criminal activity.

The UK government recognises this problem and for the past two years has sponsored an annual cyber-breach survey to record the experiences of SME’s and large corporates.  The government is also doubling its investment in cyber-security to £1.9bn over the next five years and ranks cyber-security incidents in the top three highest risks to the nation. Cyber- breaches are a known risk at corporate level and now we’re seeing that it’s a risk which filters down.

In the UK, unlike most US states, there is no obligation on companies to notify regulators of security breaches where personal data is compromised. Payment card data, bank account information and National Insurance numbers are all personal data. However, the new General Data Protection Regulation moves us closer to a US model and introduces an obligation to notify the regulator, and in high-risk cases, affected individuals of security breaches where their data is compromised. This regulation would also bring new rights for individuals to have greater control over the use of their data and greater powers for regulators.

New pan-EU laws require 'operators of essential services' to maintain minimum security requirements and to report cyber-security incidents. This will enable regulators to spot cyber-breach trends, and it will also add to transparency across the financial services, utilities, energy, transport and healthcare sectors. Similar requirements are imposed on digital service providers, such as online marketplaces and search engines.

How can we better protect our data?

Studies demonstrate that we're terrible at changing default passwords. Whilst there is significant legislative focus from government on organisations to keep data secure, individuals must also take responsibility for safeguarding their own data. A few easy ways you can protect your data right now:

- Don’t use the same password for every account
- Don’t write down passwords on slips of paper or post-it notes
- Take advantage of stronger authentication (e.g. biometrics, or eliminating passwords through push-based authentication systems linked to mobile devices)
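To make the last point concrete, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238), one common form of stronger authentication. This is illustrative only — it uses the RFC's published test secret, and a real deployment should use a vetted library and secure secret storage rather than hand-rolled code:

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
# Illustrative only: real deployments should use a vetted library.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Derive a time-based one-time password from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // interval)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32), fixed time 59s.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # → 287082
```

Because the code depends on a shared secret and the current time, an attacker who steals a password alone cannot log in, which is the property that makes this stronger than a password by itself.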

So you’ve done all this but how can you and your business guard against a cyber-attack?

- Challenge and report suspicious activity (e.g. phishing attacks: e-mails from unknown individuals requesting information)
- Ensure privacy settings are used on mobile devices and social media, so you only share what you want with the people you want to see it
- Check statements for unusual transactions or unauthorised activity
- Use credit-check tools to monitor for unusual activity
- Ignore e-mails that look too good to be true

And finally, always follow advice from the companies that hold our valuable information. A bank won't contact you to ask for your details, particularly by e-mail; whilst you might get e-mails that look like they're from a bank, be suspicious of any request, however genuine it looks, to part with valuable personal information (e.g. date of birth, NI number, account number or sort code).

Studies demonstrate that individuals are becoming savvier about online behaviour, but there are generational variances, and some age groups are more likely to give away data than others.

My expectation is that we will see commercial organisations lead the way in improving security, particularly around mobile devices/IT equipment, and this will quickly find its way into the consumer mainstream.

If you have any questions about cyber-crime in the UK, our tech team is always on hand to answer them, so get in touch via email or Twitter.

## IT pros are worried about email security, says survey

Last week we released Sophos Email, our brand new secure email gateway solution, as an addition to our Email product range and our Sophos Central management platform.

It’s engineered to provide our leading threat and spam protection to users of Microsoft Exchange Online, Office 365, Google Apps for Work and many other email services. And from what we heard in our recent email security survey, users of cloud-based email services like these are in desperate need of the extra protection it delivers.

We conducted the survey among our Spiceworks community and readers of Naked Security. We’re just starting to analyze some of the results and they make for interesting reading.

First, they confirmed that businesses are rapidly shifting to cloud-based email with a total of 38% using it today as their primary email platform.

Many are also choosing to use the cloud for security too, with 43% of respondents using a cloud-based service for email security – almost double the percentage of those using the next most popular solution of dedicated email hardware appliances (22%).

18% of our respondents are using email protection as part of their UTM hardware appliance. Virtualized dedicated appliances (12%) were also a popular option, although only 5% were using a virtualized UTM.

When we asked users of the most popular cloud-based email platform – Microsoft Office 365 – about their biggest concerns, system reliability (fear of service downtime or outages) and lack of security came top of the list.

It’s not surprising reliability was a top concern – recent outages are clearly in people’s thoughts.

In terms of security, it’s clear that IT teams are not confident in the security they are getting with Office 365 – 50% of Office 365 users agreed that third party security solutions are essential to extend Office 365 security.

This tallies clearly with Gartner’s prediction that, by 2018, 40% of Office 365 deployments will rely on third party protection – an increase from under 10% in 2015.

Ransomware, malicious attachments, malicious URLs, viruses and phishing were the top five security threats people were worried about, with over half the respondents saying they were very concerned about these threats. Despite this recognition of email-borne threats, over a quarter of respondents admit that they rarely or never review their email security policy to check if it is effective.

We also asked our participants which features are most needed to improve their current solutions. It seems advanced sandboxing solutions, such as Sophos Sandstorm, and data protection features, including encryption and data loss prevention, are the most sought after.

Time-of-click protection, which checks URLs in emails as people click them and not just when the email is received, was also a highly requested addition to counter those worries about malicious URLs.
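As a rough illustration of the idea (all names, endpoints and the static blocklist below are hypothetical stand-ins, not Sophos's implementation), a time-of-click scheme rewrites the URLs in an inbound message so they pass through a checking gateway, which re-evaluates each link's reputation at the moment the recipient actually clicks:

```python
# Illustrative sketch of time-of-click URL protection. The gateway address
# and blocklist are hypothetical; a real product consults live threat feeds.
import re
import urllib.parse

GATEWAY = "https://redirect.example.com/check"   # hypothetical checking endpoint
URL_RE = re.compile(r"https?://[^\s<>\"]+")

def rewrite_links(body):
    """Replace each URL in the message body with a gateway redirect."""
    def wrap(match):
        original = match.group(0)
        return GATEWAY + "?url=" + urllib.parse.quote(original, safe="")
    return URL_RE.sub(wrap, body)

def on_click(wrapped_url, blocklist):
    """At click time, re-check the original URL against current intelligence."""
    query = urllib.parse.urlparse(wrapped_url).query
    original = urllib.parse.parse_qs(query)["url"][0]
    host = urllib.parse.urlparse(original).hostname or ""
    # A real gateway would query reputation services here, not a static set.
    return "block" if host in blocklist else original

msg = rewrite_links("Invoice attached: http://payroll-update.example.net/login")
print(on_click(msg.split()[-1], blocklist={"payroll-update.example.net"}))  # → block
```

The key point is the separation of rewrite time from check time: a URL that was clean when the email arrived can still be blocked later if its domain has since been flagged.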

We’ll look more closely into the numbers over the coming weeks. For now, the message is pretty clear – IT teams recognize that email is a primary threat vector, infrastructure and security are moving to the cloud, and businesses are looking for extra protection to make sure they don’t fall foul of the ever-increasing number and sophistication of threats.

So we believe Sophos Email is great for those who are looking to move their email infrastructure and security to the cloud.

And, because Sophos Email is part of Sophos Central, it can be managed right alongside Endpoint, Mobile, Server, Web and Wireless, meaning better security is matched by increased efficiency too.

If you haven’t moved to the cloud just yet, a Sophos Email Appliance or UTM solution may help give you peace of mind.

If you’d like to take a look at Sophos Email or any of our Email products, simply start a free 30-day trial.

And if you’d like to tell us about your email security hopes and fears our survey is still open!


## Unknown Bidder Buys 2,700 Bitcoins (worth $1.6 million) at US Government Auction

A winning anonymous bidder bought 2,700 Bitcoins (worth roughly $1.6 Million) in an auction held by the United States Marshals Service (USMS) on Monday.

The US government announced at the beginning of this month its plans to auction 2,719 Bitcoins tha…