Research: The Dangers of Proxying S3 Content

Background

It is common for organizations to use Amazon’s S3 service to host static assets and other content. The content within Amazon S3 is organized in “buckets”. Amazon also provides the ability to point custom domains at S3 buckets, either through virtual hosting or through the static website endpoints. In both cases, a CNAME mapping is created from the custom domain to an Amazon domain name.

However, SSL support is not available via the custom domain name. SSL is provided if either the “s3.amazonaws.com/<bucket-name>” URL or the “<bucket-name>.s3.amazonaws.com” domain name is used directly (as long as there are no periods in the bucket name). SSL does not work for the custom domain names because Amazon does not own those domains and therefore cannot provide certificates for them, and current S3 functionality does not allow custom certificates to be uploaded. For the “s3.amazonaws.com” domains, however, Amazon provides a wildcard certificate which works just fine. If you try to access the custom domain names over SSL, you will be served content with that same wildcard certificate, which of course does not match the domain names.

Possible SSL solution – CloudFront or Another CDN

One possible solution offered by AWS is to use their CDN offering, CloudFront. In that case, you set up two CloudFront distributions that sit in front of the S3 buckets and CNAME your domain names to them. This of course comes at a higher price and with a confusing set of options: you can use the cloudfront.net subdomains, a free SNI-based SSL certificate that is not compatible with older browsers, or a costly ($600/month per domain) option to upload your own SSL certificate. The data would then flow as follows:

[S3] >—-internal AWS network—-> [CloudFront] >—–SSL—-> [users]

Another set of solutions is to use a non-AWS CDN such as CloudFlare and have the CDN proxy the content with SSL. The setup would be similar to Amazon’s, with SNI and non-SNI SSL options available. The data flow would then look like this (for CloudFlare):

[S3] >—-HTTP—-> [CloudFlare] >—–SSL—-> [users]

What You Should Not Do – Proxy S3 Content Yourself

Of course, many developers would immediately react to this particular problem in the same way: I can do it better myself! The usual solution is to have a script or a webserver rule that automatically retrieves the content from S3 and displays it to the user.

Everything after “/static/” would then be retrieved from some S3 bucket, let’s say “marketing.example”, so a request would follow a path like this:
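For illustration, assuming a site served from the hypothetical domain “www.example.org” (the domain and file path here are made up):

[S3 “marketing.example”] >—-GET images/logo.png—-> [webserver www.example.org] >—-GET /static/images/logo.png over SSL—-> [users]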

Of course, this only lasts as long as there is only one bucket. Let’s say another bucket is now needed, called “support.example”. The script then typically grows to take the bucket name from the URL, along the lines of the sketch below:
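As an illustration of the anti-pattern being described (not code you should deploy), here is a minimal sketch using Java’s built-in HTTP server; the port, path layout, class name and bucket names are all hypothetical:

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;

// DANGEROUS by design: proxies whatever bucket name appears in the URL,
// e.g. /static/marketing.example/logo.png or /static/support.example/faq.html
public class NaiveS3Proxy {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/static/", NaiveS3Proxy::handle);
        server.start();
    }

    static void handle(HttpExchange exchange) throws IOException {
        // "/static/marketing.example/logo.png" -> "marketing.example/logo.png"
        String bucketAndKey = exchange.getRequestURI().getPath().substring("/static/".length());

        // The bucket name is taken straight from the URL with no validation -- this is the flaw.
        URL s3Url = new URL("https://s3.amazonaws.com/" + bucketAndKey);

        try (InputStream in = s3Url.openStream()) {
            exchange.sendResponseHeaders(200, 0);
            try (OutputStream out = exchange.getResponseBody()) {
                in.transferTo(out); // stream the S3 object back to the user
            }
        }
    }
}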

What will often happen at this point is that the developer does not realize that the bucket names need to be validated against a whitelist of valid buckets. Because S3 bucket names are not unique to one AWS user but share a global namespace across all S3 users, this script is able to retrieve data from any other user’s S3 bucket, as long as that bucket is public. This does not happen when using CloudFront or other CDNs, because each CDN distribution is mapped 1-to-1 to a specific S3 bucket.

What does this look like in practice? If an attacker figures out that the script takes arbitrary bucket names, they can go ahead and create a new bucket called “evil.example” and then retrieve content from it through the target site, for example:
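Reusing the hypothetical domain and path layout from above:

https://www.example.org/static/evil.example/malware.html (served from the attacker’s “evil.example” bucket, but under the target’s domain and SSL certificate)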

What can this be leveraged for? Some examples:

  • Serving malware, since the content will be served under the target domain and the target’s SSL certificate
  • Facilitating phishing attacks
  • XSS, since HTML / JS content will bypass the same-origin policy because it is served from the same domain as the target
  • Stealing session cookies, since the code will run in the same domain and have access to cookies
  • If the content is retrieved using the S3 APIs, an attacker could set up a “Requester Pays” bucket and make money off the target (although Amazon would probably catch this eventually)
  • [insert your exploit here]

Recommendations

  • Don’t re-invent the wheel; use an existing solution like CloudFront or some other CDN
  • If you must proxy content yourself, make sure you validate bucket names against a whitelist of valid buckets (see the sketch below), and use other techniques such as separate subdomains, HTTPOnly cookies, CSP headers, etc. to segregate the S3 content from the rest of the site
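Continuing the hypothetical sketch from earlier in this post, the whitelist check could look roughly like this (the bucket names are illustrative; the snippet extends the earlier sketch and needs one extra import):

import java.util.Set;

// Only buckets we actually own may be proxied.
static final Set<String> ALLOWED_BUCKETS = Set.of("marketing.example", "support.example");

static void handleSafely(HttpExchange exchange) throws IOException {
    String bucketAndKey = exchange.getRequestURI().getPath().substring("/static/".length());
    String bucket = bucketAndKey.split("/", 2)[0];

    if (!ALLOWED_BUCKETS.contains(bucket)) {
        exchange.sendResponseHeaders(404, -1); // unknown bucket: refuse to proxy
        exchange.close();
        return;
    }
    // ... proxy https://s3.amazonaws.com/<bucket>/<key> as before ...
}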

Research: Crashing Browsers Remotely via Insecure Search Suggestions

Summary

Intercepting insecure search suggestion requests from browsers and returning very large responses leads to browser crashes (but not RCE). Affected browsers are FireFox on the desktop and Android, and Chrome on desktop and Android – other Chromium- and FireFox-derived browsers may be affected. Internet Explorer and Safari are not affected. The issue is exploitable remotely, albeit not easily.

Background – Search Suggestions

Most browsers, desktop and mobile, support a feature which allows users to type either in the address bar or the search box and see a list of “search suggestions”. These are similar to the search suggestions provided by most search engines within their homepages and search bars. Examples of search suggestions in the browser and search engine webpage appear below:

The protocol that underlies this mechanism is the OpenSearch Suggestions extension, a JSON protocol running over HTTP (as defined here). This protocol allows browsers and other applications to send simple keyword queries to the search engine servers, which return JSON responses that the browser translates into results in the search bar. It should be noted that some search engines define their own APIs instead of OpenSearch, which browsers then implement.
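As a rough illustration (the endpoint and terms here are made up; the response is a JSON array containing the query and an array of completions, optionally followed by arrays of descriptions and URLs):

GET http://suggestions.example.org/suggest?q=fire

["fire", ["firefox", "fire tv", "firewall"]]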

Search engines can also publish OpenSearch description documents (as defined here) and embed those in their webpages, which browsers can automatically discover and use. The discovery of new search engines either happens automatically when the user visits a particular site (Chrome and IE Edge [SSL only]), or is triggered manually by the user via an icon in the search bar (FireFox). FireFox and Internet Explorer (prior to Edge) also support plugins and APIs for doing this.

An example of an OpenSearch description document defining the suggestion protocol, from AOL Search (original from here) – note the Url element with type “application/x-suggestions+json”, which defines the search suggestion endpoint:

<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
                       xmlns:moz="http://www.mozilla.org/2006/browser/search/">
    <ShortName>AOL Search</ShortName>
    <Description>The AOL Search engine delivers great search results so
    you can search less and discover more.</Description>
    <Language>en-us</Language>
    <InputEncoding>UTF-8</InputEncoding>
    <Image width="16" height="16" type="image/x-icon">http://search.aol.com/favicon.ico</Image>
    <Url type="text/html" method="get" template="http://search.aol.com/aol/search?q={searchTerms}&amp;s_it=opensearch"/>
    <Url type="application/x-suggestions+json" template="http://autocomplete.search.aol.com/autocomplete/get?output=json&amp;it=opensearch&amp;q={searchTerms}"/>
    <moz:SearchForm>http://search.aol.com/aol/webhome</moz:SearchForm>
</OpenSearchDescription>

Background – Search Engines and HTTPS

Even in the post-Snowden era, many popular search engines still do not support encryption (HTTPS). Other search engines may support HTTPS but still accept non-HTTPS connections and do not redirect users automatically to HTTPS. Because browser vendors tend to include the most popular search engines in specific countries by default, they end up shipping multiple search engines which do not use HTTPS. This also applies to the search suggestion endpoints used by browsers.

Some examples (for English locale, US only):

  • Android AOSP stock browser (source)
    • Bing (non-SSL version)
    • Yahoo (non-SSL version)
  • Chrome (desktop and Android) (source)
    • AOL Search (does not support SSL)
    • Ask.com (does not support SSL)
  • FireFox (desktop only) (source)
    • Ebay (does not support SSL)

Exploit Details

Because browsers include multiple non-HTTPS search engines with insecure search suggestion endpoints, an attacker at the network level can intercept the traffic flowing between the browser and the search engine endpoints and substitute their own response. If a very large response is returned (2+ GB), the browser can run out of memory and crash. This is because browsers do not check the size of search suggestion responses. Obviously, this is more of an issue for mobile devices, which have less memory than desktops.

For the Android AOSP browser and Chromium, this issue appears to be directly tied to the code that processes search engine responses. For FireFox, this is a more generic issue around large XMLHttpRequest responses, which is what the browser uses internally for search suggestions. Our bug reports with the vendors provide more details on which code is causing this. This reinforces the fact that network traffic SHOULD NEVER be trusted.

The following crashes were observed – we have not been able to cause an RCE or a buffer overflow:

  • Android AOSP stock browser on Android (v4.4) – application crashes
  • Chrome v51 on Android (v6.01) – application crashes
  • Chrome v51 on desktop Linux (Ubuntu v16.04) – the entire computer freezes and requires a reboot (this may be due to swapping being disabled on an SSD drive)
  • FireFox v47 on desktop Linux (Ubuntu v16.04) and Android (v6.01) – application crashes

Safari v9.1, Internet Explorer 11 and Edge appear not to be affected, although a similar bug has happened before with Safari. We did not test prior versions of either Safari or IE. We also did not test any other browsers derived from Chromium or FireFox.

The practical exploitation of this issue is mitigated by several factors:

  • The attacker must have control over DNS and the network traffic of the victim machine. This is most likely in cases of a rogue WiFi hotspot or a hacked router.
  • Most browsers have a rather short timeout for the search suggestion response, which does not allow sufficient time for the large response to be transferred over the network
  • Due to the very large response size needed to trigger this issue, it is only exploitable over broadband or local networks such as a rogue WiFi hotspot

Vendor Responses

Google response re: Android AOSP browser:

The team reviewed this issue and don’t believe there is a security vulnerability here. It seems the worse things that can happen is the browser crashes due to resource exhaustion. The phone is still usable so there isn’t a denial of service.

Google response re: Chromium:

We don’t consider DoS to be a security vulnerability. See the Chrome Security FAQ:

https://www.chromium.org/Home/chromium-security/security-faq#TOC-Are-denial-of-service-issues-considered-security-bugs-

The Mozilla / FireFox response has been to remove the security restriction on this bug, thereby indicating that this is not a security issue.

Steps to Replicate

(This is for Chrome but is similar for other browsers)

1. Install DNSMASQ and NGINX:
sudo apt-get install dnsmasq nginx
2. Modify the /etc/hosts file to add the following entries to map to the IP of the local computer (varies by vendor of the phone):
192.168.1.x ss.ask.com
3. Configure /etc/dnsmasq.conf file to listen on the IP:
listen-address=192.168.1.x
4. Restart DNSMASQ:
sudo /etc/init.d/dnsmasq restart
5. Use fallocate to create a large file named “query” in “/var/www/html/”:
sudo fallocate -l 5G query
6. Modify DNS settings on the test machine or the same machine to point to “192.168.1.x”. If on the same machine, modify /etc/resolv.conf as follows:
nameserver 192.168.1.x
7. Start Chrome, go to settings and choose “Ask.com” as the default search provider.
8. Open new tab and try to type something in the omnibox.

References

Android bug reports: 214784 and 214785
Chromium bug reports: 624779 and 624794 (patch accepted)
FireFox bug reports: 1283675 and 1283672
OpenSearch description document: doc here
OpenSearch Suggestions extension v1.1: doc here
Safari Search Suggestions bug: see ArsTechnica story here

Credits

Researched and written by Yakov Shafranovich.

Timeline

2016-06-30: Bug filed with Android
2016-06-30: Bug filed with Chromium
2016-06-30: Bug filed with Mozilla/FireFox
2016-06-30: Response from Chromium, Won’t Fix
2016-07-12: Response from Android, not a security issue
2016-07-13: Android team is ok with disclosure
2016-07-14: Mozilla removes security restrictions on the bug
2016-07-26: Public disclosure

Research: Securing Android Applications from Screen Capture


Summary — TL;DR

Apps on Android, as well as some platform services, are able to capture other apps’ screens by using the MediaProjection API. Because of the way this API implements “securing” sensitive screens, some security issues are possible. The best way to secure your Android app is to use FLAG_SECURE on sensitive screens and NOT to use the virtual keyboard on them (here is why).

MediaProjection API

Since Android 5.0, there exists a new MediaProjection API that allows apps to record videos and take screenshots of screens belonging to other apps. The API is described as follows:

Android 5.0 lets you add screen capturing and screen sharing capabilities to your app with the new android.media.projection APIs. This functionality is useful, for example, if you want to enable screen sharing in a video conferencing app. The new createVirtualDisplay() method allows your app to capture the contents of the main screen (the default display) into a Surface object, which your app can then send across the network. The API only allows capturing non-secure screen content, and not system audio. To begin screen capturing, your app must first request the user’s permission by launching a screen capture dialog using an Intent obtained through the createScreenCaptureIntent() method.

(On Android versions prior to 5.0, there are other methods such as undocumented APIs and ADB; here we are focusing on Android 5+.)
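For context, this is roughly how an app obtains permission to capture the screen; a minimal sketch inside an Activity (API 21+, imports omitted), where the request code is an arbitrary value:

// A sketch of requesting screen capture via MediaProjection.
private static final int CAPTURE_REQUEST = 1001;
private MediaProjectionManager mpm;

void requestCapture() {
    mpm = (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
    // Shows the system "start capturing?" dialog to the user.
    startActivityForResult(mpm.createScreenCaptureIntent(), CAPTURE_REQUEST);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == CAPTURE_REQUEST && resultCode == RESULT_OK) {
        // The projection can back a virtual display (createVirtualDisplay) that
        // receives the contents of the screen -- except FLAG_SECURE surfaces.
        MediaProjection projection = mpm.getMediaProjection(resultCode, data);
    }
    super.onActivityResult(requestCode, resultCode, data);
}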

This API also drives several other functions in the OS:

All of these functions, as well as the MediaProjection API itself, can take screenshots and videos of other apps. For apps to use the API, special permission is required; for platform features, no special permission is needed. Additionally, any application signed with the system key (Google apps) can use this API without permission as well.

A good open source example of an application that uses the API can be found here:

https://github.com/JakeWharton/Telecine

Secure and non-secure content

As mentioned in the Google docs above, “the API only allows capturing non-secure screen content”. What exactly is “secure” and “non-secure” content?

This refers to a special flag which can be applied to views in Android, called FLAG_SECURE. It is described in Android docs as follows:

Treat the content of the window as secure, preventing it from appearing in screenshots or from being viewed on non-secure displays

Setting this flag on an Android view prevents screenshots from being taken manually, and any other app or platform service that captures the screen will see a black screen in its place. This functionality is not global for the entire app; it can be set on specific, more sensitive screens and left off on others. There is no other mechanism or permission that can exempt an entire app, or any part of it, from screen capture or recording.
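Setting the flag is a one-liner in the Activity that hosts the sensitive screen, typically in onCreate() before setContentView(); a minimal sketch (the layout name is illustrative):

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Mark this window as secure: screenshots and non-secure displays
    // (including MediaProjection-based capture) get a black surface instead.
    getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
                         WindowManager.LayoutParams.FLAG_SECURE);
    setContentView(R.layout.activity_sensitive);
}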

NOTE: Even on views marked with FLAG_SECURE, the virtual keyboard is ALWAYS visible. This is due to a known Android bug which Google has so far refused to fix:

https://code.google.com/p/android/issues/detail?id=129285

How screen capture really works in Android

The term “secure” as used in this context does not mean that the content of the app cannot be captured, but rather that it cannot be “viewed on non-secure displays”. This is because screen capture, and the concept of secure / non-secure displays, are not what developers may think they are.

Behind the scenes, this API and the related platform services use the concept of casting (similar to AirPlay). Apps that capture screenshots and record videos must create a virtual display to which the device’s content is then cast. The FLAG_SECURE flag is likewise not about security in the application sense; it refers to copyrighted content in the context of DRM and displays: secure content would be something like a DVD, and a secure display would be an HDTV.

This is clear on the device itself — when an app begins to record the screen, the cast icon is turned on in the notification bar. This is also clear from the Android source code and this doc:

Display flag: Indicates that the display has a secure video output and supports compositing secure surfaces. If this flag is set then the display device has a secure video output and is capable of showing secure surfaces. It may also be capable of showing protected buffers. If this flag is not set then the display device may not have a secure video output; the user may see a blank region on the screen instead of the contents of secure surfaces or protected buffers.

That means that an Android device casting to a DRM-protected display like a TV would always show sensitive screens, since the concept of “secure” here really means “copyrighted”. For apps, Google forestalled this issue by preventing apps not signed with the system key from creating virtual “secure” displays, but the same is not true for physical devices. There is also an existing Android bug asking for the concepts of DRM and screen security to be separated into different flags:

https://code.google.com/p/android/issues/detail?id=93026

Security issues with the current API

First of all, a basic foundation of mobile app security is a clear separation between apps. One Android app should never be able to read the preferences or data of another app, or capture its cloud notifications. This paradigm breaks down in the case of screen capture/recording: an app that gains access to the MediaProjection API, or to any of the platform services using it, is able to capture screen output from other apps, including PIN numbers, passwords, credit card numbers, etc.

Second, because the mechanism relies on a flag meant for marking copyrighted content, the system is easier to subvert. Some ways this can be done include:

  • Gaining the ability to create a virtual display marked as secure would expose all secure content. Right now this is prevented by requiring the system key, but a rooted phone, or some other way of fooling the system into creating such a display, would bypass this protection.
  • Casting to a physical secure display, or perhaps a wireless one, would also show the content.

Third, even with the FLAG_SECURE in use, some parts of the screen can still be captured. The virtual keyboard is one existing example, but there may be others (perhaps notifications?).

[ADDED 06/24/2016: Mark Murphy from CommonsWare points out several other issues with FLAG_SECURE child objects – see his blog post here]

Fourth, there is no clear indication to the user that they are being recorded other than the cast icon. Clicking on the icon shows no devices, since virtual displays are not listed. A better warning may be needed.

A better solution, as suggested in this bug report, would be to define a separate flag whose output no app or system service may ever capture for any reason, and to blank out the entire screen even if other apps or services draw anything on it. An even better solution would be to make screen capture opt-in for apps, instead of opt-out.

Attack vectors

There are several possible avenues of attack which would result in an app being installed on a user’s phone recording their app activity. These include:

  • Malicious apps in the app store that masquerade as legit casting apps requiring record permission — since users don’t know that casting apps can also record their screen
  • Remote install via compromised desktop as described in this paper
  • Overlaying permission screens as described here

To record even secure (FLAG_SECURE) screens, the following could be tried, but they are not practically feasible:

  • On rooted phones, creating a virtual display marked as “secure”
  • Fooling the system into thinking that a given app is a system app, allowing it to create secure displays

All of these would result in an app sitting on the phone and recording user activity. However, other than via the last two methods, FLAG_SECURE views would not be recordable, although the virtual keyboard would be. The only indicator to the user would be the cast icon, but when they click on it, no devices would be listed.

Conclusion — Protect Your Apps

To protect your apps from being recorded by other apps, FLAG_SECURE should be used on any views containing sensitive data. Additionally, since the virtual keyboard is also vulnerable, it is recommended that these screens either use a custom keyboard or, if feasible, an on-screen custom layout (for numeric input like PIN numbers).

We have surveyed many of the top apps in the Google Play store, and many of them, including some Google-owned apps, do not use FLAG_SECURE or, if they do, do not secure the keyboard.

We also hope that Google would lock down this API and solve the issues highlighted in this article.

Credits

Researched and written by Yakov Shafranovich.

References

Google CID: 3–5606000008769

Research: Hacking the Chromebook (Part 1)

By now, many in the bounty arena have heard of Google’s new Chromebook bounty totalling $100,000. While not as big as the infamous Zerodium one-million-dollar bounty for iOS 9, this one comes with a crucial difference – it is being offered by the manufacturer of the device in question instead of a security company with possibly shady customers.

In this series of posts, we will explore our attempts to break into the Chromebook, beginning with some basic exploration of the Chromebook while in guest mode.

What is the Bounty For?

According to Google, the bounty is being offered for the following:

participants that can compromise a Chromebook or Chromebox with device persistence in guest mode (i.e. guest to guest persistence with interim reboot, delivered via a web page)

This would preclude any kind of physical access methods, or exploits delivered while logged in using a non-guest account. This also would preclude exploits delivered while in developer mode which provides shell access.

Background

Chromebooks run Chrome OS, which is essentially a stripped-down version of Linux with the Chrome browser as its main interface. Most apps are either Chrome apps or Chrome extensions and can be installed from the app store. There is a main Linux user named “chronos” that runs most of the underlying system, with some specialized users for certain services; individual user accounts, including the guest user, are located under the “/home/chronos/u-XXXXX/” folder. A fuller description of the security model can be found in this paper from MIT.

While in guest mode, only the default apps/extensions are available, and new ones cannot be installed. Another important point is that guest mode uses the tmpfs file system for storage, which is RAM-based and does not persist.

Poking Around Chrome

At this initial stage, we have explored the Chromebook to see what possible avenues of attack may be possible. We started with looking at the Chrome browser itself. Here are some interesting things we found:

  • Chrome is running in incognito mode
  • Only the default plug-ins are loaded (chrome://plugins/) including: Chrome’s PDF Reader, Native Client, Widevine decryption, and Adobe Flash. This is basically the same as Chrome out of the box on other platforms. Screenshot below:
chromebook1.png
  • No extensions are listed (chrome://extensions/); HOWEVER, that isn’t really true. If you try to open certain files, it is clear from the URLs that there are hidden extensions installed. We did not look into listing them, but they should be easy to find in the Chromium source. Screenshot below:
chromebook2.png
  • Extensions cannot be installed via the Chrome store OR manually by downloading and dragging them in. For the Chrome store, the install button is simply not there. For manual installs, the message “Installation is not enabled” comes up.
  • Access to and changes of flags are allowed (chrome://flags), but they do not persist across reboots. Flags can be changed and take effect by restarting Chrome within the current session.
  • History, bookmarks, caches, etc. do not persist across reboots.

Download and Opening Files

You can download all files and open some of them:

  • Safe browsing is enabled and checks downloads against a blacklist.
  • Downloaded Office files (doc, xls, etc) open via an extension inside the browser that looks like a scaled down version of Google Docs
  • Downloaded Image files open in the browser but also can be opened with Gallery
  • Downloaded text and HTML files open in the browser
  • Downloaded sound and video files open via a dedicated sound and video player apps that pop up above the taskbar
  • Needless to say, potentially malicious files like shell scripts, JS files, etc. do not open, although we haven’t explored any possible holes there yet
  • There is no editor of any kind in guest mode; full users can install apps to edit
  • File URLs are used for local files, and it is possible to introspect SOME directories (/tmp and /media)

Poking around the Desktop

Chromebooks also have a desktop of sorts which is really Chrome underneath. There isn’t much available other than Chrome itself, the Files application and Help.

  • The Files application gives access to the Downloads folder and any USB drives that get plugged in. It can rename, move and delete files and folders, but not much more than that. Because it is restricted to the Downloads folder, it is not possible to see the rest of the file system. It is also clear via the Files application that RAM is being used for storage, since the space available is less than 1 GB, versus much more for regular users (no quotas are enabled).
  • As mentioned earlier, Gallery, video and sound players are available by clicking on the right files.
  • There is screenshot functionality available via a hot key
  • Lower right corner provides access to settings such as Bluetooth, WiFi, etc.
  • There is a very basic and restricted shell (crosh) available by pressing CTRL-ALT-T but it is another Chrome extension with very few commands (in developer mode, it provides access to bash). Below is an example of top running in the shell (interestingly enough the W command in top can write files):
chromebook3.png

Possible Avenues of Attack

First of all, as stated above, the guest user’s home directory uses tmpfs, which does not persist. This means we would need to execute some code that persists in the system OUTSIDE that directory and comes back upon reboot. Here are some possible entry points:

  • Default Chrome plugins – via malicious PDFs, Flash files, video/audio with DRM or native client apps
  • Default extensions – these can be targeted via malicious files for the Office extension. Another possibility is to use Chrome’s built-in developer tools but that would probably be out of scope.
  • Chrome browser itself
  • Javascript APIs
  • Video/Sound can target the built-in audio or video player
  • Malicious images can target Gallery
  • The various settings available to the user can be exploited across multiple users
  • Possibly via other processes running in the system and their users

Second, even if we manage to break in and execute code, it would still only execute in the context of the chronos user. We would then need to figure out how to elevate privileges to reach root access.

Third, we would need to figure out how to get past verified boot. Google outlines some potential ways this may happen ONCE an attacker has gained super-user privileges.

Conclusion

In this post, we have briefly explored some of the pieces of the Chromebook software with the eye towards exploitation. In followup posts, we hope to continue digging in further.

Research: Chrome For Android Reveals Phone Model and Build

Overview

Google’s Chrome browser for Android tends to disclose information that can be used to identify the hardware of the device it is running on. This problem is further exacerbated by the fact that many applications on Android use Chrome WebView or Chrome Custom Tabs to render web content.

Background — Chrome and Headers

The Chrome browser for Android is provided by Google as the built-in browser in the Android operating system for mobile devices. It is based on the Chromium open source project. It also provides the WebView and Custom Tabs APIs to other applications running on the Android platform, which use them to render web content within the apps themselves without opening a separate browser window.

Like all browsers, Chrome sends a variety of headers as part of every request to the web servers it communicates with. These headers are defined in the HTTP protocol, the latest standard of which can be found in RFCs 7230, 7231, 7232, 7233, 7234 and 7235. Among these is the User-Agent header, which is the subject of this post.

The “User-Agent” header in HTTP is defined by RFC 7231, section 5.5.3 as follows:

The “User-Agent” header field contains information about the user agent originating the request, which is often used by servers to help identify the scope of reported interoperability problems, to work around or tailor responses to avoid particular user agent limitations, and for analytics regarding browser or operating system use.

Background — Android Model and Build ID

Android devices have a built-in MODEL and BUILD ID identifying the phone model and Android build. They are defined in the android.os.Build.MODEL and android.os.Build.ID properties, and are further described in the Android Compatibility Definition document (section 3.2.2) as follows:

MODEL — A value chosen by the device implementer containing the name of the device as known to the end user. This SHOULD be the same name under which the device is marketed and sold to end users. There are no requirements on the specific form

ID — An identifier chosen by the device implementer to refer to a specific release, in human-readable format. This field can be the same as android.os.Build.VERSION.INCREMENTAL, but SHOULD be a value sufficiently meaningful for end users to distinguish between software builds. The value of this field MUST be encodable as 7-bit ASCII and match the regular expression “^[a-zA-Z0-9._-]+$”.
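For reference, these values are read from the standard android.os.Build fields; the example values below match the Nexus 6 header shown later in this post:

// The values that end up in Chrome for Android's build tag.
String model   = android.os.Build.MODEL;  // e.g. "Nexus 6"
String buildId = android.os.Build.ID;     // e.g. "LYZ28K"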

An attempt to map models to more descriptive names can be found on GitHub. A list of known build IDs for Nexus devices can be found here and here.

Details

As per the Chrome docs, the Chrome for Android User Agent string includes the Android version number and build tag information. This information is also sent by default when applications use Android’s WebView and Chrome Custom Tabs APIs to serve web content in their own applications. While Android does offer the ability to override this (via WebSettings.setUserAgentString() in WebView), most applications choose not to do so, relying on the default header to assure compatibility.

Aggravating this issue is the fact that the user agent header is always sent, with both HTTP and HTTPS requests, often by processes running in the background. Also, unlike desktop Chrome, on Android no extensions or overrides are available to change the header, other than the “Request Desktop Site” option in the browser itself for the current session.

For example, here is a user-agent header for Chrome Beta on a Nexus 6 with Android v5.1.1:

Mozilla/5.0 (Linux; Android 5.1.1; Nexus 6 Build/LYZ28K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.34 Mobile Safari/537.36

When a user chooses the “Request Desktop Site” option, the user agent header sent is a generic Linux header instead. Here is an example for Chrome Beta, on Nexus 6, with Android v5.1.1:

Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.34 Safari/537.36

The difference is that in mobile mode, the following extra string is present:

Android 5.1.1; Nexus 6 Build/LYZ28K

The fact that it identifies the operating system and its version is not unique; this follows what many other browsers do on desktop and mobile. It is the build tag that is the problem. As described above, the build tag identifies both the device name and its firmware build. For many devices, this can be used to identify not only the device itself, but also the carrier on which it is running and, from that, the country.

An example can easily be seen above, where build LYZ28K identifies a Nexus 6 running on T-Mobile, implying a US presence. It would also be trivial to use the build information to figure out the carrier based on which carriers are known to be at which build number. Build numbers are easily obtainable from manufacturer and phone carrier websites.

Bug #494452 has been filed against Chromium for this before, but Google has chosen to keep the design of the user agent string intact.

Possible Mitigation by Android Applications Using WebView

As discussed above, application authors can use the WebSettings.setUserAgentString() method to override the user agent. While many are reluctant to do so for fear of losing compatibility, we would like to suggest the approach of taking the default user agent and erasing the build information from it, as in the sketch below.
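A minimal sketch of that approach, assuming a WebView instance named webView; the regular expression is illustrative and only meant to drop the device name and build tag:

WebSettings settings = webView.getSettings();
String defaultUa = settings.getUserAgentString();
// "Mozilla/5.0 (Linux; Android 5.1.1; Nexus 6 Build/LYZ28K) ..." becomes
// "Mozilla/5.0 (Linux; Android 5.1.1) ..."
String scrubbedUa = defaultUa.replaceAll("; [^;)]+ Build/[^;)]+", "");
settings.setUserAgentString(scrubbedUa);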

Conclusion

Even the NSA has described user agents as identifying only browsers. Unfortunately, on Android they can also identify the device model, carrier and more. In our opinion, this is simply too much information, as it reveals the underlying firmware. While user fingerprinting exists, it is less trivial to tie a specific piece of software to a specific piece of hardware with the granularity of carrier, build and country. An analogy would be a desktop browser sending the vendor name and build number of the BIOS in a desktop computer. Additionally, this information can be used to target users with malware attacks aimed at specific builds known to be vulnerable.

We suggest following the approach taken by Mozilla:

Adding a device identifier to the Firefox OS User Agent (UA) string is STRONGLY DISCOURAGED by Mozilla.

and:

Mozilla strives to provide greater privacy for users. Therefore, we have been working to reduce the level of “fingerprintability” of different browser configurations — that is to say, how uniquely identifiable a particular user’s browser is to sites through detection methods of which the user is unaware. (i.e. server-side methods) Adding e.g. hardware information to the UA reduces privacy by increasing fingerprintability.

Credits

Researched and written by Yakov Shafranovich.

Microsoft June Security Update Breaks Copy/Paste of Files in Windows

Overview

We recently looked at a peculiar problem on a Windows computer. The computer had Windows 8.1 installed, and as of last week everything was working fine. Over the weekend, Microsoft’s June updates were applied, and now the system cannot copy/paste files. Copy/paste works fine between regular programs, just not for files. The symptoms are:

  1. Selecting files, doing Ctrl-C, and then Ctrl-V in a different folder does not do anything.
  2. Right clicking with your mouse on the files, and selecting “Copy”, then right clicking in another folder and selecting “Paste” does not work.

“Send To” still works fine, but some users are also reporting copy/paste issues not just with files but also in Microsoft Office programs, including Outlook.

Further digging led me to the fact that a specific Windows update breaks copy/paste. I have verified this by un-installing and re-installing the update and checking copy/paste of files in between. The update is:

“Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Elevation of Privilege” — MS15-061 and KB3057839

The interesting question is: why is this happening?

Several possibilities:

1. This may be related specifically to SpectorSoft and its suite of surveillance programs (eBlaster, SpectorPro, etc.).

It is possible that SpectorSoft hooked into some Windows API that could potentially cause a security issue, and Microsoft closed that hole. It is also plausible that other spyware-like programs have been using this hole. According to some comments on Reddit, this may be directly related to the Duqu-like spyware that recently infected Kaspersky.

THIS MEANS THAT IF YOUR WINDOWS COMPUTER CANNOT COPY/PASTE FILES AFTER JUNE 9, 2015, YOU PROBABLY HAVE SURVEILLANCE SOFTWARE OR SPYWARE INSTALLED!!!

2. The security patch is for the clipboard.

The vulnerabilities disclosed by Microsoft seem to revolve around memory management. Perhaps Microsoft was concerned about attackers placing items in the clipboard and then moving that code into a different place in memory?