X-XSS-Protection
Cross-Site Scripting (XSS) is an attack in which malicious scripts are injected into a page.
For example:
<h1>Hello, <script>alert('hacked')</script></h1>
This is a fairly obvious attack and something browsers can try to block: if part of the request shows up verbatim in the page source, it is probably an attack. The X-XSS-Protection header controls this behavior.
Values:
- 0: Filter disabled.
- 1: Filter enabled. If a cross-site scripting attack is detected, the browser will sanitize the page in order to stop the attack.
- 1; mode=block: Filter enabled. Rather than sanitizing the page, the browser will prevent rendering of the page when an attack is detected.
- 1; report=http://domain/url: Filter enabled. The browser will sanitize the page and report the violation. This is a Chromium function that uses CSP violation reports to send details to a URI of your choice.
Let's create a simple web server with node.js to play with this.
var express = require('express')
var app = express()

app.use((req, res) => {
  // Set X-XSS-Protection to whatever the xss query parameter says
  if (req.query.xss) res.setHeader('X-XSS-Protection', req.query.xss)
  // Reflect the user query parameter into the page: a deliberate XSS hole
  res.send(`<h1>Hello, ${req.query.user || 'anonymous'}</h1>`)
})

app.listen(1234)
I am using Google Chrome 55.
No header
http://localhost:1234/?user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E
Nothing happens. The browser successfully prevented this attack.
This is the default behavior in Chrome if no header is set, as you can see in the error message in the Console.
It even helpfully highlights it in the source.
X-XSS-Protection: 0
http://localhost:1234/?user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E&xss=0
Oh no! With the filter explicitly disabled, the injected script runs.
X-XSS-Protection: 1
http://localhost:1234/?user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E&xss=1
The attack was successfully blocked by sanitizing the page because of our explicit header.
X-XSS-Protection: 1; mode=block
http://localhost:1234/?user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E&xss=1;%20mode=block
The attack is blocked by simply not rendering the page.
X-XSS-Protection: 1; report=http://localhost:1234/report
http://localhost:1234/?user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E&xss=1;%20report=http://localhost:1234/report
The attack is blocked and also reported to an address of our choice.
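By the way, the demo server never actually implements the /report endpoint. Here's a minimal sketch of one, assuming Chrome POSTs the violation details in the request body (the exact payload format is Chromium-specific):

var express = require('express')
var app = express()

// Hypothetical collector for XSS violation reports
app.post('/report', (req, res) => {
  var body = ''
  req.on('data', (chunk) => { body += chunk })
  req.on('end', () => {
    console.log('XSS violation report:', body) // payload format is Chromium-specific
    res.status(204).end()
  })
})

app.use((req, res) => {
  res.setHeader('X-XSS-Protection', '1; report=http://localhost:1234/report')
  res.send(`<h1>Hello, ${req.query.user || 'anonymous'}</h1>`)
})

app.listen(1234)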
X-Frame-Options
This header allows you to prevent clickjacking attacks.
Imagine that an attacker has a YouTube channel and wants subscribers.
He can create a website with a button that says "Do not click", which means everyone will definitely click it. But there is a completely transparent iframe positioned on top of the button. When you click the button, you actually click the Subscribe button on YouTube. If you were logged into YouTube, you are now subscribed to the attacker's channel.
Let's illustrate this.
First, install the Ignore X-Frame headers extension.
Create this HTML file.
<style>
  button { background: red; color: white; padding: 10px 20px; border: none; cursor: pointer; }
  iframe { opacity: 0.8; z-index: 1; position: absolute; top: -570px; left: -80px; width: 500px; height: 650px; }
</style>
<button>Do not click this button!</button>
<iframe src="https://youtu.be/dQw4w9WgXcQ?t=3m33s"></iframe>
As you can see, I have cleverly positioned the viewport of the iframe over the Subscribe button. The iframe is on top of the button (z-index: 1), so when you try to click the button you actually click the iframe. In this example the iframe is not completely hidden, but I could do that with opacity: 0. In practice this does not work here, because you are not logged into YouTube inside the frame, but you get the idea.
You can prevent your website from being embedded as an iframe with the X-Frame-Options header.
Values:
- deny: No rendering within a frame.
- sameorigin: No rendering if the origin does not match.
- allow-from DOMAIN: Allows rendering if the embedding frame is loaded from DOMAIN.
We are going to use this webserver for experiments.
var express = require('express')

for (let port of [1234, 4321]) {
  var app = express()
  // /iframe embeds the site running on port 1234, forwarding the h query parameter
  app.use('/iframe', (req, res) => res.send(`<h1>iframe</h1><iframe src="//localhost:1234?h=${req.query.h || ''}"></iframe>`))
  app.use((req, res) => {
    // Set X-Frame-Options to whatever the h query parameter says
    if (req.query.h) res.setHeader('X-Frame-Options', req.query.h)
    res.send('<h1>Website</h1>')
  })
  app.listen(port)
}
No header
Everyone can embed our website at localhost:1234 in an iframe.
http://localhost:1234/iframe
http://localhost:4321/iframe
X-Frame-Options: deny
No one can embed our website at localhost:1234 in an iframe.
http://localhost:1234/iframe?h=deny
http://localhost:4321/iframe?h=deny
X-Frame-Options: sameorigin
Only we can embed our website at localhost:1234 in an iframe on our own pages.
An origin is defined as a combination of URI scheme, hostname, and port number.
http://localhost:1234/iframe?h=sameorigin
http://localhost:4321/iframe?h=sameorigin
X-Frame-Options: allow-from http://localhost:4321
It looks like Google Chrome ignores this directive, presumably because you can use Content Security Policy's frame-ancestors instead (see below).
Invalid 'X-Frame-Options' header encountered when loading 'http://localhost:1234/?h=allow-from%20http://localhost:4321': 'allow-from http://localhost:4321' is not a recognized directive. The header will be ignored.
It also had no effect in Microsoft Edge. Mozilla Firefox, however, does honor it.
http://localhost:1234/iframe?h=allow-from%20http://localhost:4321
http://localhost:4321/iframe?h=allow-from%20http://localhost:4321
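In production you would typically set this header on every response rather than taking it from a query parameter; here's a minimal sketch:

var express = require('express')
var app = express()

// Allow framing only from our own origin, on every response
app.use((req, res, next) => {
  res.setHeader('X-Frame-Options', 'sameorigin')
  next()
})

app.use((req, res) => res.send('<h1>Website</h1>'))
app.listen(1234)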
X-Content-Type-Options
This header prevents MIME confusion attacks (<script src="script.txt">) and unauthorized hotlinking (<script src="https://raw.githubusercontent.com/user/repo/branch/file.js">).
var express = require('express')
var app = express()

app.use('/script.txt', (req, res) => {
  if (req.query.h) res.header('X-Content-Type-Options', req.query.h)
  // Deliberately served as plain text, not as JavaScript
  res.header('content-type', 'text/plain')
  res.send('alert("hacked")')
})

app.use((req, res) => {
  res.send(`<h1>Website</h1><script src="/script.txt?h=${req.query.h || ''}"></script>`)
})

app.listen(1234)
No header
http://localhost:1234/
Even though script.txt is a text file with a content type of text/plain, it was still executed as if it were JavaScript.
X-Content-Type-Options: nosniff
X-Content-Type-Options: nosniff
http://localhost:1234/?h=nosniff
This time the content types do not match and the file was not executed.
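The practical takeaway: if you serve user-supplied files, always send nosniff along with an accurate Content-Type. A minimal sketch (the uploads directory is hypothetical):

var express = require('express')
var app = express()

// Never let the browser second-guess the declared content type of uploads
app.use('/uploads', (req, res, next) => {
  res.setHeader('X-Content-Type-Options', 'nosniff')
  next()
}, express.static('uploads'))

app.listen(1234)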
Content-Security-Policy
The Content-Security-Policy (CSP) HTTP response header helps you reduce XSS risks on modern browsers by declaring which dynamic resources are allowed to load. For example, you can ask the browser to ignore inline JavaScript and load JavaScript files only from your domain. Note that inline JavaScript means not only <script>...</script> but also inline event handlers like <h1 onclick="...">.
Let's see how it works.
var express = require('express')

for (let port of [1234, 4321]) {
  var app = express()
  // A script that rewrites the paragraph whose id is passed as a query parameter
  app.use('/script.js', (req, res) => {
    res.send(`document.querySelector('#${req.query.id}').innerHTML = 'changed by ${req.query.id} script'`)
  })
  app.use((req, res) => {
    // Set Content-Security-Policy to whatever the csp query parameter says
    var csp = req.query.csp
    if (csp) res.header('Content-Security-Policy', csp)
    res.send(`
      <html>
      <body>
        <h1>Hello, ${req.query.user || 'anonymous'}</h1>
        <p id="inline">is this going to be changed by inline script?</p>
        <p id="origin">is this going to be changed by origin script?</p>
        <p id="remote">is this going to be changed by remote script?</p>
        <script>document.querySelector('#inline').innerHTML = 'changed by inline script'</script>
        <script src="/script.js?id=origin"></script>
        <script src="//localhost:1234/script.js?id=remote"></script>
      </body>
      </html>
    `)
  })
  app.listen(port)
}
No header
http://localhost:4321
It works like you would normally expect it to.
Content-Security-Policy: default-src 'none'
http://localhost:4321/?csp=default-src%20%27none%27&user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E
default-src applies to all resources (images, scripts, frames, etc.), and the value 'none' doesn't allow anything. We can see it in action here, along with very helpful error messages in the Console. Chrome refused to load or execute any of the scripts. It also tried to load favicon.ico, even though that is prohibited too.
Content-Security-Policy: default-src 'self'
http://localhost:4321/?csp=default-src%20%27self%27&user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E
Now we can load scripts from our origin, but still no remote or inline scripts.
Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline'
http://localhost:4321/?csp=default-src%20%27self%27;%20script-src%20%27self%27%20%27unsafe-inline%27&user=%3Cscript%3Ealert(%27hacked%27)%3C/script%3E
This time we also allow inline scripts to run.
Note that our XSS attack was still prevented. It is not prevented, however, if you allow unsafe-inline and set X-XSS-Protection: 0 at the same time.
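There is also a Content-Security-Policy-Report-Only header, which reports violations without blocking anything; handy for trying a policy out before enforcing it. A minimal sketch with a report-uri directive and an endpoint that logs whatever report the browser POSTs:

var express = require('express')
var app = express()

// Log CSP violation reports; Chrome POSTs them as JSON
app.post('/csp-report', (req, res) => {
  var body = ''
  req.on('data', (chunk) => { body += chunk })
  req.on('end', () => {
    console.log('CSP violation:', body)
    res.status(204).end()
  })
})

app.use((req, res) => {
  // Report-Only: violations are logged but nothing is blocked
  res.setHeader('Content-Security-Policy-Report-Only', "default-src 'self'; report-uri /csp-report")
  res.send('<h1>Website</h1>')
})

app.listen(1234)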
Other
content-security-policy.com has nicely formatted examples.
- default-src 'self': allows everything, but only from the same origin
- script-src 'self' www.google-analytics.com ajax.googleapis.com: allows Google Analytics, the Google AJAX CDN, and the same origin
- default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self': allows images, scripts, AJAX, and CSS from the same origin, and does not allow any other resources to load (e.g. object, frame, media). It is a good starting point for many sites.
I have not tested this, but I think that:
- frame-ancestors 'none' should be equivalent to X-Frame-Options: deny
- frame-ancestors 'self' should be equivalent to X-Frame-Options: sameorigin
- frame-ancestors localhost:4321 should be equivalent to X-Frame-Options: allow-from http://localhost:4321
- script-src 'self' (i.e. without 'unsafe-inline') should be equivalent to X-XSS-Protection: 1
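If that is right, sending both headers gives you broader browser support; a minimal sketch:

var express = require('express')
var app = express()

// Older browsers read X-Frame-Options, newer ones prefer frame-ancestors
app.use((req, res, next) => {
  res.setHeader('X-Frame-Options', 'sameorigin')
  res.setHeader('Content-Security-Policy', "frame-ancestors 'self'")
  next()
})

app.use((req, res) => res.send('<h1>Website</h1>'))
app.listen(1234)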
If you take a look at the headers of facebook.com and twitter.com, you'll see they use CSP a lot.
Strict-Transport-Security
HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks.
Let's say that you want to go to facebook.com. Unless you type https://, the default protocol is HTTP and the default port is 80, so the request will be made to http://facebook.com.
$ curl -I facebook.com
HTTP/1.1 301 Moved Permanently
Location: https://facebook.com/
And then you are redirected to the secure version of Facebook.
If you were connected to a public WiFi network run by an attacker, they could hijack this request and serve a webpage that looks identical to facebook.com to collect your password. What you can do to prevent this is use this header to tell the browser that the next time the user wants to go to facebook.com, it should go to the HTTPS version instead.
$ curl -I https://www.facebook.com/
HTTP/1.1 200 OK
Strict-Transport-Security: max-age=15552000; preload
If you logged into Facebook at home and then went to facebook.com on the insecure WiFi, you'd be safe because the browser remembers this header.
But what if you used Facebook on the insecure network for the first time ever? Then you are not protected.
To fix this, browsers ship with a hard-coded list of domains known as the HSTS preload list that includes the most popular domain names that are HTTPS only.
If you want to, you could try to submit your own domain at hstspreload.org. It's also a handy website for testing whether your site is using this header correctly. Yeah, I know, mine doesn't.
Values (can be combined, separated by ;):
- max-age=15552000: The time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.
- includeSubDomains: If this optional parameter is specified, the rule applies to all of the site's subdomains as well.
- preload: Indicates that the site owner would like their domain included in the HSTS preload list maintained by Chrome (and used by Firefox and Safari).
What if you need to switch back to HTTP before max-age expires, or if you had preload set? You are out of luck: this header is very strictly enforced, and you'd need to ask all of your users to clear their browsing history and settings.
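To deploy it, you would redirect plain HTTP to HTTPS and send the header on the secure responses; a minimal sketch (the certificate paths are placeholders):

var fs = require('fs')
var http = require('http')
var https = require('https')
var express = require('express')

var app = express()

app.use((req, res, next) => {
  // Remember for 180 days: HTTPS only, including subdomains
  res.setHeader('Strict-Transport-Security', 'max-age=15552000; includeSubDomains')
  next()
})
app.use((req, res) => res.send('<h1>Website</h1>'))

// Plain HTTP only redirects to the HTTPS version
http.createServer((req, res) => {
  res.writeHead(301, { Location: 'https://' + req.headers.host + req.url })
  res.end()
}).listen(80)

https.createServer({
  key: fs.readFileSync('/etc/certs/example.key'), // placeholder paths
  cert: fs.readFileSync('/etc/certs/example.crt')
}, app).listen(443)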
Public-Key-Pins
HTTP Public Key Pinning (HPKP) is a security mechanism which allows HTTPS websites to resist impersonation by attackers using mis-issued or otherwise fraudulent certificates.
Values:
- pin-sha256="<sha256>": The quoted string is the Base64-encoded Subject Public Key Information (SPKI) fingerprint. It is possible to specify multiple pins for different public keys. Some browsers might allow other hashing algorithms than SHA-256 in the future.
- max-age=<seconds>: The time, in seconds, that the browser should remember that this site is only to be accessed using one of the pinned keys.
- includeSubDomains: If this optional parameter is specified, the rule applies to all of the site's subdomains as well.
- report-uri="<URL>": If this optional parameter is specified, pin validation failures are reported to the given URL.
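Setting the header itself is simple; a minimal sketch with placeholder pins (browsers require at least two, one of which should be a backup key that is not in the current certificate chain):

var express = require('express')
var app = express()

app.use((req, res, next) => {
  // Placeholder pins: Base64 SHA-256 hashes of the SPKI of your key and a backup key
  res.setHeader('Public-Key-Pins',
    'pin-sha256="PRIMARY_KEY_HASH_PLACEHOLDER="; ' +
    'pin-sha256="BACKUP_KEY_HASH_PLACEHOLDER="; ' +
    'max-age=2592000; includeSubDomains')
  next()
})

app.use((req, res) => res.send('<h1>Website</h1>'))
app.listen(1234) // in practice this must be served over HTTPS for pinning to apply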
Instead of the Public-Key-Pins header you can also use the Public-Key-Pins-Report-Only header. It only sends reports to the report-uri specified in the header and still allows browsers to connect to the webserver even if the pinning is violated.
That is what Facebook is doing:
$ curl -I https://www.facebook.com/
HTTP/1.1 200 OK
...
Public-Key-Pins-Report-Only:
max-age=500;
pin-sha256="WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18=";
pin-sha256="r/mIkG3eEpVdm+u/ko/cwxzOMo1bk4TyHIlByibiA5E=";
pin-sha256="q4PO2G2cbkZhZ82+JgmRUyGMoAeozA+BSXVXQWB8XWQ=";
report-uri="http://reports.fb.com/hpkp/"
Why do we need this? Isn't trusting Certificate Authorities enough?
An attacker could create their own certificate for www.facebook.com and trick me into adding it to my trusted root certificate store. Or it could be an administrator in your organization.
Let's create a certificate for www.facebook.com.
sudo mkdir /etc/certs
echo -e 'US\nCA\nSF\nFB\nXX\nwww.facebook.com\nno@spam.org' | \
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/certs/facebook.key \
-out /etc/certs/facebook.crt
And make it trusted on our computer.
# curl
sudo cp /etc/certs/*.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
# Google Chrome
sudo apt install libnss3-tools -y
certutil -A -t "C,," -n "FB" -d sql:$HOME/.pki/nssdb -i /etc/certs/facebook.crt
# Mozilla Firefox
#certutil -A -t "CP,," -n "FB" -d sql:`ls -1d $HOME/.mozilla/firefox/*.default | head -n 1` -i /etc/certs/facebook.crt
Let's create our own web server that uses this certificate.
var fs = require('fs')
var https = require('https')
var express = require('express')

// The certificate name (facebook or google) comes from the command line
var options = {
  key: fs.readFileSync(`/etc/certs/${process.argv[2]}.key`),
  cert: fs.readFileSync(`/etc/certs/${process.argv[2]}.crt`)
}

var app = express()
app.use((req, res) => res.send(`<h1>hacked</h1>`))
https.createServer(options, app).listen(443)
Switch to our server.
echo 127.0.0.1 www.facebook.com | sudo tee -a /etc/hosts
sudo node server.js facebook
Does it work?
$ curl https://www.facebook.com
<h1>hacked</h1>
Good. curl does validate certificates; it succeeded because we made our fake root trusted.
Because I've visited Facebook before and Google Chrome has seen the header, it should report the attack but still allow it, right?
Nope. Public-key pinning was bypassed by a local root certificate. Interesting.
Alright, what about www.google.com?
echo -e 'US\nCA\nSF\nGoogle\nXX\nwww.google.com\[email protected]' | \
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/certs/google.key \
-out /etc/certs/google.crt
sudo cp /etc/certs/*.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
certutil -A -t "C,," -n "Google" -d sql:$HOME/.pki/nssdb -i /etc/certs/google.crt
echo 127.0.0.1 www.google.com | sudo tee -a /etc/hosts
sudo node server.js google
Same. I guess this is a feature: pins are deliberately not enforced for certificate chains that terminate in a locally installed root, so that corporate proxies and debugging tools keep working.
Anyway, if you don't add these certificates to your store, you won't be able to visit these sites at all, because the option to add an exception in Firefox, or to proceed unsafely in Chrome, is not available for pinned sites.
Content-Encoding: br
The content is compressed with Brotli. It promises better compression density than gzip at comparable decompression speed, and it is supported by Google Chrome.
Naturally, there is a node.js module for it.
var shrinkRay = require('shrink-ray')
var request = require('request')
var express = require('express')

// Download a large text file (Pride and Prejudice) to use as a test payload
request('https://www.gutenberg.org/files/1342/1342-0.txt', (err, res, text) => {
  if (err) throw new Error(err)
  var app = express()
  app.use(shrinkRay()) // negotiates brotli or gzip based on Accept-Encoding
  app.use((req, res) => res.header('content-type', 'text/plain').send(text))
  app.listen(1234)
})
- Uncompressed: 700 KB
- Brotli: 204 KB
- Gzip: 241 KB
Timing-Allow-Origin
The Resource Timing API allows you to measure how long it takes to fetch resources on your page.
Because timing information can be used to determine whether or not a user has previously visited a URL (based on whether the content or DNS resolution are cached), the standard deemed it a privacy risk to expose timing information to arbitrary hosts.
<script>
  setTimeout(function() {
    console.log(window.performance.getEntriesByType('resource'))
  }, 1000)
</script>
<img src="http://placehold.it/350x150">
<img src="/local.gif">
It looks like you can get detailed timing information (domain lookup time, for example) only for resources on your own origin, unless Timing-Allow-Origin is set. Here's how you can use it:
Timing-Allow-Origin: *
Timing-Allow-Origin: http://foo.com http://bar.com
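A minimal sketch of serving assets with this header so that other origins can read their full timing data (the public directory is hypothetical):

var express = require('express')
var app = express()

// Let any origin read detailed resource timing for our assets
app.use((req, res, next) => {
  res.setHeader('Timing-Allow-Origin', '*')
  next()
})

app.use(express.static('public')) // hypothetical directory containing local.gif
app.listen(1234)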
Alt-Svc
Alternative Services allow an origin's resources to be authoritatively available at a separate network location, possibly accessed with a different protocol configuration.
This one is used by Google:
alt-svc: quic=":443"; ma=2592000; v="36,35,34"
It means that the browser can, if it wants to, use QUIC (Quick UDP Internet Connections, i.e. HTTP over UDP) on port 443 for the next 30 days (the max age is 2592000 seconds, or 720 hours, or 30 days). No idea what v stands for. Version?
- https://www.mnot.net/blog/2016/03/09/alt-svc
- https://ma.ttias.be/googles-quic-protocol-moving-web-tcp-udp/
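Emitting the header from Express is trivial, though the advertised alternative (a QUIC endpoint here) must actually exist for browsers to use it; a minimal sketch:

var express = require('express')
var app = express()

app.use((req, res, next) => {
  // Advertise a (hypothetical) QUIC endpoint on port 443 for 30 days
  res.setHeader('Alt-Svc', 'quic=":443"; ma=2592000')
  next()
})

app.use((req, res) => res.send('<h1>Website</h1>'))
app.listen(1234)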
P3P
Here's a couple of P3P headers I've seen:
P3P: CP="This is not a P3P policy! See https://support.google.com/accounts/answer/151657?hl=en for more info."
P3P: CP="Facebook does not have a P3P policy. Learn why here: http://fb.me/p3p"
Some browsers require third party cookies to use the P3P protocol to state their privacy practices.
The organization that established P3P, the World Wide Web Consortium, suspended its work on this standard several years ago because most modern web browsers don't fully support P3P. As a result, the P3P standard is now out of date and doesn't reflect technologies that are currently in use on the web, so most websites currently don't have P3P policies.
I did not do much research on this, but it looks like it is needed for IE8 to accept third-party cookies. Internet Explorer does act on it: for example, IE's "high" privacy setting blocks all cookies from websites that do not have a compact privacy policy, but cookies accompanied by P3P non-policies like those above are not blocked.