Research: Hacking the Chromebook (Part 1)

By now many in the bounty arena have heard of Google’s new Chromebook bounty totaling $100,000. While not as big as the infamous $1 million Zerodium bounty for iOS 9, this one comes with a crucial difference – it is offered by the manufacturer of the device in question rather than by a security company with possibly shady customers.

In this series of posts, we will explore our attempts to break into the Chromebook, beginning with some basic exploration of the Chromebook while in guest mode.

What is the Bounty For?

According to Google, the bounty is being offered for the following:

participants that can compromise a Chromebook or Chromebox with device persistence in guest mode (i.e. guest to guest persistence with interim reboot, delivered via a web page)

This precludes any kind of physical access method, as well as exploits delivered while logged in with a non-guest account. It also precludes exploits delivered in developer mode, which provides shell access.

Background

Chromebooks run Chrome OS, which is essentially a stripped-down version of Linux with the Chrome browser as its main interface. Most apps are either Chrome apps or Chrome extensions and can be installed from the app store. A main Linux user named “chronos” runs most of the underlying system, with some specialized users for certain services; individual user accounts, including the guest user, are located under “/home/chronos/u-XXXXX/”. A fuller description of the security model can be found in this paper from MIT.

While in guest mode, only default apps/extensions are available, and new ones cannot be installed. Another important point is that guest mode uses the tmpfs file system for storage, which is RAM-based and does not persist.
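The non-persistence point can be verified from the mount table: tmpfs mounts live in RAM and vanish at reboot. Below is a minimal Python sketch that resolves a path to its filesystem type; the sample /proc/mounts lines are illustrative, not captured from an actual Chromebook:

```python
def fs_type_for(path, mounts_text):
    """Return the filesystem type of the longest-prefix mount for `path`."""
    best = ("", None)
    for line in mounts_text.splitlines():
        device, mountpoint, fstype = line.split()[:3]
        # Longest matching mountpoint prefix wins (simplified; real code
        # should match on path components, not raw string prefixes)
        if path.startswith(mountpoint) and len(mountpoint) > len(best[0]):
            best = (mountpoint, fstype)
    return best[1]

# Illustrative mount entries in /proc/mounts format
SAMPLE = """\
/dev/root / ext2 ro 0 0
tmpfs /home/chronos/user tmpfs rw,nosuid,nodev 0 0"""

print(fs_type_for("/home/chronos/user/Downloads", SAMPLE))  # tmpfs
```

Anything written under a tmpfs mount like this is gone after a reboot, which is exactly why persistence has to land outside the guest home directory.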

Poking Around Chrome

At this initial stage, we explored the Chromebook to see what avenues of attack might be possible. We started by looking at the Chrome browser itself. Here are some interesting things we found:

  • Chrome is running in incognito mode
  • Only the default plug-ins are loaded (chrome://plugins/) including: Chrome’s PDF Reader, Native Client, Widevine decryption, and Adobe Flash. This is basically the same as Chrome out of the box on other platforms. Screenshot below:
chromebook1.png
  • No extensions are listed (chrome://extensions/), HOWEVER, that isn’t really true. If you try to open certain files, it is clear from the URLs that there are hidden extensions installed. We did not look into listing them, but they should be easy to find in the Chromium source. Screenshot below:
chromebook2.png
  • Extensions cannot be installed via the Chrome store, OR manually by downloading and dragging them in. For the Chrome store, the install button is simply absent. For manual installs, the message “Installation is not enabled” appears.
  • Access and changes to flags are allowed (chrome://flags), but they do not persist across reboots. Flags can be changed and take effect for the current session by restarting Chrome.
  • History, bookmarks, caches, etc. do not persist across reboots.

Download and Opening Files

You can download all files and open some of them:

  • Safe browsing is enabled and checks downloads against a blacklist.
  • Downloaded Office files (doc, xls, etc.) open via an extension inside the browser that looks like a scaled-down version of Google Docs.
  • Downloaded image files open in the browser but can also be opened with Gallery.
  • Downloaded text and HTML files open in the browser.
  • Downloaded sound and video files open via dedicated sound and video player apps that pop up above the taskbar.
  • Needless to say, potentially malicious files like shell scripts, JS files, etc. do not open, although we haven’t explored possible holes there yet.
  • There is no editor of any kind in guest mode; full users can install apps to edit.
  • File URLs are used for local files, and it is possible to introspect SOME directories (/tmp and /media).

Poking around the Desktop

Chromebooks also have a desktop of sorts which is really Chrome underneath. There isn’t much available other than Chrome itself, the Files application and Help.

  • The Files application gives access to the Downloads folder and any USB drives that get plugged in. It can rename, move, and delete files and folders, but not much more. Because it is restricted to the Downloads folder, it is not possible to see the rest of the file system. It is also clear via the Files application that RAM is being used, since the space available is less than 1 GB, versus much more for regular users (no quotas are enabled).
  • As mentioned earlier, Gallery, video and sound players are available by clicking on the right files.
  • There is screenshot functionality available via a hot key
  • Lower right corner provides access to settings such as Bluetooth, WiFi, etc.
  • There is a very basic and restricted shell (crosh) available by pressing CTRL-ALT-T, but it is just another Chrome extension with very few commands (in developer mode, it provides access to bash). Below is an example of top running in the shell (interestingly enough, the W command in top can write files):
chromebook3.png

Possible Avenues of Attack

First of all, as stated above, the guest user’s home directory uses tmpfs, which does not persist. This means we would need to execute code that persists OUTSIDE that directory and runs again upon reboot. Here are some possible entry points:

  • Default Chrome plugins – via malicious PDFs, Flash files, video/audio with DRM, or Native Client apps
  • Default extensions – these can be targeted via malicious files for the Office extension. Another possibility is to use Chrome’s built-in developer tools but that would probably be out of scope.
  • Chrome browser itself
  • Javascript APIs
  • Malicious video/sound files can target the built-in audio and video players
  • Malicious images can target Gallery
  • The various settings available to the user can be exploited across multiple users
  • Possibly via other processes running in the system and their users

Second, even if we manage to break in and execute code, it would still execute only in the context of the chronos user. We would then need to figure out how to elevate privileges to root.

Third, we would need to figure out how to get past verified boot. Google outlines some potential ways this may happen ONCE an attacker has gained super-user privileges.

Conclusion

In this post, we have briefly explored some pieces of the Chromebook software with an eye toward exploitation. In follow-up posts, we hope to continue digging further.

Advisory: Open Redirect on adwords.google.com

Overview

A marketing server at adwords.google.com contained an open redirect.

Background

Google’s main AdWords landing page takes a cd parameter indicating a specific country to target, and redirects to a country-specific page. Examples of such URLs are as follows:

Canada: https://adwords.google.com/?cd=ca
France: https://adwords.google.com/?cd=fr
Russia: https://adwords.google.com/?cd=ru

Details

The cd parameter, which specifies the country, was not checked against a list of valid values. Instead, the parameter was used to replace the “com” portion of the URL. For example:

https://adwords.google.com/?cd=fr

becomes

https://www.google.fr/adwords/

This could be used to redirect users to a malicious page. Example URL with malicious content:

https://adwords.google.com/?cd=some.evil.site.com

redirected to:

https://www.google.some.evil.site.com/adwords/
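The flawed substitution can be sketched as follows; the function names and the allow-list fix are our own illustration, not Google’s actual code:

```python
# Hypothetical allow-list of country codes the redirect should accept
ALLOWED_COUNTRIES = {"ca", "fr", "ru"}

def redirect_target_vulnerable(cd):
    # Blindly splices the attacker-controlled cd parameter into the hostname
    return "https://www.google.%s/adwords/" % cd

def redirect_target_fixed(cd):
    # Validates against a known list before building the URL
    if cd not in ALLOWED_COUNTRIES:
        cd = "com"
    return "https://www.google.%s/adwords/" % cd

print(redirect_target_vulnerable("some.evil.site.com"))
# https://www.google.some.evil.site.com/adwords/
print(redirect_target_fixed("some.evil.site.com"))
# https://www.google.com/adwords/
```

Because the parameter lands in the hostname rather than the path, any value ending in an attacker-controlled domain produces a redirect entirely off Google’s infrastructure.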

The vendor communicated that they consider this a low-severity issue and do not plan to track a fix. However, we have since confirmed that the issue was fixed prior to publication.

References

Google Security CID: 9–6197000008153
Google’s view on open redirects: https://sites.google.com/site/bughunteruniversity/nonvuln/open-redirect

Credits

Discovered and written by Yakov Shafranovich

Timeline

2015–08–07: Vendor notified
2015–08–07: Initial vendor response
2015–08–11: Vendor replicated the issue
2015–09–05: Follow up communications with vendor
2015–09–20: Fix confirmed
2015–10–12: Public disclosure

2016–03–14: Updated disclosure posted

Advisory: Content Injection in Google Calendar API (Google)

Overview

The Google Calendar embedding API allows injection of arbitrary content.

Background

Google Calendar offers an embedding API that allows calendars to be embedded in web pages. The API is accessible at the following URL:

https://www.google.com/calendar/embed?src=<url>

Details

If the src parameter contains an invalid value, the API mirrors that text back in the page. For example, the following URL:

https://www.google.com/calendar/embed?src=<url>

would show the following message:

Invalid calendar id: <url>

While we have not been able to demonstrate script execution, we believe this can still be used as a vector of attack, especially since it injects content into a page that is SSL-secured with a Google certificate. Interestingly, a similar content injection issue was considered severe enough by Microsoft to be fixed (as per our earlier blog post).

We would like to present the following attack scenarios:

Scenario #1 — Phishing attack

An attacker can use this to send email to Google Calendar users with the following URL:

https://www.google.com/calendar/embed?src=-We%20are%20having%20a%20problem%20with%20your%20account.%20Please%20call%201-800-BAD-GUYS%20x123%20and%20give%20them%20your%20password

Resulting in the following message:

Invalid calendar id: -We are having a problem with your account. Please call 1–800-BAD-GUYS x123 and give them your password

A sophisticated attack could issue a unique phone extension for each spammed address, bypassing the need for users to provide their email address over the phone. It would also be possible to automate such an attack with voice API providers like Twilio, further putting users at ease, since most phone phishing attempts use humans.
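Constructing such a URL is trivial: the message simply needs to be percent-encoded into the src parameter. A sketch in Python, reproducing the phishing URL above:

```python
from urllib.parse import quote

# The plain-text message to be reflected back by the embed endpoint
message = ("-We are having a problem with your account. "
           "Please call 1-800-BAD-GUYS x123 and give them your password")

# quote() percent-encodes spaces and other unsafe characters
url = "https://www.google.com/calendar/embed?src=" + quote(message)
print(url)
```

The result is a google.com URL, served over SSL with a Google certificate, that displays the attacker’s message verbatim.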

Scenario #2 — Stock manipulation attack

An attacker can use the following URL to show a fake Google Calendar event in an attempt to manipulate the stock market:

https://www.google.com/calendar/embed?src=-BEGIN%3AVEVENT%0ASUMMARY%3A%20%0AUID%3A12345%0ADESCRIPTION%3BSergey%20Brin%2FElon%20Musk%20to%20announce%20%24GOOG%20acquisition%20of%20%24TSLA%0ALOCATION%3AMountainview%2C%20CA%0ADTSTART%3BTZID%3D%2FUS%2FEastern%3A20151109T100000%0ADTEND%3BTZID%3D%2FUS%2FEastern%3A20151109T113000%0AURL%3Ahttp%3A%2F%2Fintranet.google.com%2Fcalendar%2Fsergerybrin%2Feventid%3D12345%0AEND%3AVEVENT

This would result in the following message:

Invalid calendar id: -BEGIN:VEVENT SUMMARY: UID:12345 DESCRIPTION;Sergey Brin/Elon Musk to announce $GOOG acquisition of $TSLA LOCATION:Mountainview, CA DTSTART;TZID=/US/Eastern:20151109T100000 DTEND;TZID=/US/Eastern:20151109T113000 URL:http://intranet.google.com/calendar/sergerybrin/eventid=12345 END:VEVENT

This would seemingly display a fake calendar event announcing the acquisition of Tesla Motors by Google. HOWEVER, given that the embed API can be used to embed any calendar, it would be trivial to create a real Google calendar with fake info and embed it instead.

Screenshot

(Please note the SSL icon)

google.png

Vendor Response

Vendor response is as follows:

There is no XSS demonstrated here, just content injection, which has legitimate use and is not a technical security bug for the purposes of our reporting program (phishing generally isn’t a “technical” enough attack scenario). All inputs appear to be properly sanitized, so if you want to demonstrate an actual XSS, you’d need to show proof of script execution

References

Google Security CID: 6–1050000008145
Google Calendar Embed docs: https://support.google.com/calendar/answer/41207?hl=en

Timeline

2015–08–07: Vendor notified
2015–08–07: Vendor response
2015–10–26: Public disclosure

Version Information

Version 1
Last updated on 2015–09–21

Research: Chrome For Android Reveals Phone Model and Build

[UPDATED: 2018-12-25 – we published an expanded post with fix information]

Overview

Google’s Chrome browser for Android discloses information that can be used to identify the hardware of the device it is running on. The problem is exacerbated by the fact that many Android applications use Chrome WebView or Chrome Custom Tabs to render web content.

Background — Chrome and Headers

The Chrome browser for Android is provided by Google as the built-in browser in the Android operating system for mobile devices. It is based on the Chromium open source project. It also provides the WebView and Custom Tabs APIs to other applications running on the Android platform, used for rendering web content within the apps themselves without opening a separate browser window.

Like all browsers, Chrome sends a variety of headers as part of every request to the web servers it communicates with. These headers are defined in the HTTP protocol, the latest standard of which can be found in RFCs 7230 through 7235. Among them is the User-Agent header, which is the subject of this post.

The “User-Agent” header in HTTP is defined by RFC 7231, section 5.5.3 as follows:

The “User-Agent” header field contains information about the user agent originating the request, which is often used by servers to help identify the scope of reported interoperability problems, to work around or tailor responses to avoid particular user agent limitations, and for analytics regarding browser or operating system use.

Background — Android Model and Build ID

Android devices have a built-in MODEL and BUILD ID identifying the phone model and Android build. They are defined in the android.os.Build.MODEL and android.os.Build.ID properties, and are further described in the Android Compatibility Definition document (section 3.2.2) as follows:

MODEL — A value chosen by the device implementer containing the name of the device as known to the end user. This SHOULD be the same name under which the device is marketed and sold to end users. There are no requirements on the specific form

ID — An identifier chosen by the device implementer to refer to a specific release, in human-readable format. This field can be the same as android.os.Build.VERSION.INCREMENTAL, but SHOULD be a value sufficiently meaningful for end users to distinguish between software builds. The value of this field MUST be encodable as 7-bit ASCII and match the regular expression “^[a-zA-Z0-9._-]+$”.

An attempt to map models to more descriptive names can be found on GitHub. A list of known build IDs for Nexus devices can be found here and here.

Details

As per the Chrome docs, the Chrome for Android user agent string includes the Android version number and build tag information. By default, this information is also sent when applications use Android’s WebView and Chrome Custom Tabs APIs to serve web content in their own applications. While Android does offer the ability to override this (via WebSettings.setUserAgentString() in WebView), most applications choose not to, relying on the default header to ensure compatibility.

Aggravating the issue, the user agent header is always sent, with both HTTP and HTTPS requests, often by processes running in the background. Also, unlike desktop Chrome, on Android no extensions or overrides are available to change the header, other than the “Request Desktop Site” option in the browser itself for the current session.

Here is an example user-agent header for Chrome Beta, on a Nexus 6, with Android v5.1.1:

Mozilla/5.0 (Linux; Android 5.1.1; Nexus 6 Build/LYZ28K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.34 Mobile Safari/537.36

When a user chooses the “Request Desktop Site” option, a generic Linux user agent header is sent instead. Here is an example for Chrome Beta, on a Nexus 6, with Android v5.1.1:

Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.34 Safari/537.36

The difference is that in mobile mode, the following extra string is present:

Android 5.1.1; Nexus 6 Build/LYZ28K

The fact that it identifies the operating system and its version is not unique; this generally follows what many other browsers do on desktop and mobile. The build tag is the problem. As described above, it identifies both the device name and its firmware build. For many devices, this can be used to identify not only the device itself, but also the carrier it is running on and, from that, the country.

An example can be seen above: build LYZ28K can easily be identified as a Nexus 6 running on T-Mobile, implying a US presence. It would also be trivial to use the build information to figure out the carrier based on which carriers are known to be at which build number. Build numbers are easily obtainable from manufacturer and phone carrier websites.
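A server (or an attacker controlling one) can pull the model and build out of such a header with a simple pattern match. The regular expression below is our own sketch, not taken from any particular analytics library:

```python
import re

UA = ("Mozilla/5.0 (Linux; Android 5.1.1; Nexus 6 Build/LYZ28K) "
      "AppleWebKit/537.36 (KHTML, like Gecko) "
      "Chrome/46.0.2490.34 Mobile Safari/537.36")

# Capture the Android version, device model, and build tag
m = re.search(r"Android ([\d.]+); (.+?) Build/(\S+)\)", UA)
if m:
    version, model, build = m.groups()
    print(model, build)  # Nexus 6 LYZ28K
```

Three capture groups are enough to recover the device model and firmware build from every request the device makes, with no cooperation from the user.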

Bug #494452 has been filed against Chromium for this before, but Google has chosen to keep the design of the user agent string intact.

Possible Mitigation by Android Applications Using WebView

As discussed above, application authors can use the WebSettings.setUserAgentString() method to override the user agent. While many are reluctant to do so for fear of losing compatibility, we would like to suggest the following approach: use the default user agent but erase the build information from it.
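As a sketch of the string transformation (in an actual Android app, the result would be passed to WebSettings.setUserAgentString(); the regular expression here is our own illustration):

```python
import re

def strip_build(ua):
    # Remove the "; <model> Build/<id>" segment while keeping
    # the Android version, which many sites use for compatibility
    return re.sub(r"; [^;)]+ Build/[^;)]+", "", ua)

UA = ("Mozilla/5.0 (Linux; Android 5.1.1; Nexus 6 Build/LYZ28K) "
      "AppleWebKit/537.36 (KHTML, like Gecko) "
      "Chrome/46.0.2490.34 Mobile Safari/537.36")
print(strip_build(UA))
# Mozilla/5.0 (Linux; Android 5.1.1) AppleWebKit/537.36 ...
```

The stripped header still identifies Android and its version, so mobile-optimized sites continue to work, but the device model and firmware build no longer leak with every request.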

Conclusion

Even the NSA has described user agents as identifying only browsers. Unfortunately, on Android they can also identify the device model, carrier, and more. In our opinion, this is simply too much information, as it reveals the underlying firmware. While user fingerprinting exists, it is less trivial to tie a specific piece of software to a specific piece of hardware with the granularity of carrier, build, and country. An analogy would be a desktop browser sending the vendor name and build number of the computer’s BIOS. Additionally, this information can be used to target users with malware attacks aimed at specific builds known to be vulnerable.

We suggest following the approach taken by Mozilla:

Adding a device identifier to the Firefox OS User Agent (UA) string is STRONGLY DISCOURAGED by Mozilla.

and:

Mozilla strives to provide greater privacy for users. Therefore, we have been working to reduce the level of “fingerprintability” of different browser configurations — that is to say, how uniquely identifiable a particular user’s browser is to sites through detection methods of which the user is unaware. (i.e. server-side methods) Adding e.g. hardware information to the UA reduces privacy by increasing fingerprintability.

Credits

Researched and written by Yakov Shafranovich.