Gmail for Android Allows Screenshots in Confidential Mode

Google recently added a confidential mode to Gmail that allows the sender to restrict what recipients can do with their email, including forbidding printing, etc. However, it looks like Gmail for Android still allows screenshots to be taken of such emails. To replicate, try the following:

  1. Send a confidential email to another Gmail user.
  2. Open the email, press Power + Volume Down. A screenshot will be taken.

The underlying reason is that the Gmail app does not use “FLAG_SECURE” for such screens (more information on FLAG_SECURE can be found in our earlier blog post). By contrast, many Android apps with higher security requirements use it. For example, Chrome for Android prohibits screenshots when browsing in Incognito mode.

We filed a bug report with Google, and here is their response:

We can not prevent someone from taking a photograph of their phone, using a second camera for example. Preventing taking photos of the content is not part of the feature.

References:
Google bug report # 112838515

Five Tools for Starting Security Analysis of Android Apps

Here are five easy-to-use tools for starting security analysis of Android apps. While they are basic, they allow you to do initial checks for things like lack of SSL, sensitive files, broadcast issues and secrets in code. We also highly recommend buying a cheap Android device for testing instead of, or in addition to, an emulator.

As always, please obey all relevant laws and do not use these tools for illegal activity.

On-device MITM proxy – PacketCapture

An MITM proxy is used to inspect network traffic going to and from a particular mobile device, or perhaps a specific application on the device. Normally, an MITM proxy requires setting up a separate test machine running the proxy and then pointing traffic from the test device to that machine. However, PacketCapture is a free and easy-to-use MITM proxy that runs on the Android device itself; it can optionally inspect SSL traffic and can be selectively applied to a specific app. It lacks the bells and whistles of other proxies, but it is very easy to use. Behind the scenes, it works by creating a VPN connection from the device to itself.

One thing to keep in mind: the next version of Android (Android P) will block cleartext (non-TLS) traffic by default. Apps can still opt out via a network security policy (see here, and the sketch below). Once that change takes place, you are advised to check the app’s network security policy before trying this tool.
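
For reference, here is a minimal sketch of such an opt-out policy (the domain name is an assumption for illustration); the file lives under res/xml/ and is referenced from the app’s manifest via android:networkSecurityConfig:

<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/network_security_config.xml (hypothetical example) -->
<network-security-config>
    <!-- Permit cleartext HTTP only for this one test domain -->
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">test.example.com</domain>
    </domain-config>
</network-security-config>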

On-device Broadcasts Monitor – Android Broadcasts Monitor

One of the common pitfalls in Android development is using global broadcasts to exchange data between different components of the application. Because global broadcasts can be seen by other apps, they can leak sensitive data. An easy way to look for these is to install the Android Broadcasts Monitor app (Google Play link here), which will show you all global broadcasts as they happen. A sketch of the insecure pattern appears below.
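
As a hedged illustration of the pitfall (the action and extra names are hypothetical), the first call leaks the token to any app that registers a matching receiver, while the second stays inside the app by using LocalBroadcastManager from the support library:

import android.content.Context;
import android.content.Intent;
import android.support.v4.content.LocalBroadcastManager;

// BAD: a global broadcast; any app that registers a receiver for this
// action can read the session token from the extras.
Intent intent = new Intent("com.example.ACTION_LOGIN_DONE");
intent.putExtra("session_token", token);
context.sendBroadcast(intent);

// BETTER: an in-process broadcast that other apps cannot observe.
LocalBroadcastManager.getInstance(context).sendBroadcast(intent);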

On-device File Manager

Another useful tool in your toolbox is an on-device file manager. This can be used to check if a particular application leaves any sensitive data on the SD card where it can be accessed by other apps. In particular, you should inspect the “/Android/” directory. We are fans of the Amaze File Manager (source at GitHub) but you can use any other as well.

If you do end up using Amaze, it has a nice feature where you can backup an installed app to the SD card, which allows you to get an APK of an app for further analysis with tools like JADX.

On-device Video Recorder – Telecine

Recording on-device videos comes in really handy when making demos or doing bug bounties. One useful tool we use is Telecine by Jake Wharton, which can record all screen activity (except FLAG_SECURE screens). One useful tip is to use “ffmpeg” or a similar tool to shrink the resulting file, like this example (“-crf 40” lowers the video quality and “-an” drops the audio track):

ffmpeg -i Telecine_video.mp4 -crf 40 -an final.mp4
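
If you also want to downscale the resolution itself, a scale filter can be added; a sketch (the 720-pixel output width is an arbitrary choice):

ffmpeg -i Telecine_video.mp4 -vf scale=720:-2 -crf 28 -an final.mp4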

Android Decompiler – JADX

JADX is a Java decompiler which can take an Android APK and decompile it back to Java source code. One useful application is analyzing possible secrets that are included in the Android resources (not code). Often, sensitive data is easier to find there than by searching through source code. The “/strings” and “/raw” folders are usually the best places to start.
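
A minimal sketch of this workflow from the command line (the output directory and search term are illustrative):

jadx -d out target.apk
grep -ri "api_key" out/resources/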

Keep in mind that Android does not use a standard JVM; it has its own runtime (Dalvik, and ART on newer versions). Therefore, security-relevant areas like cryptography, SSL connections, etc. do not necessarily behave the same way as on regular JVMs.

Research: Auto-detection of Compressed Files in Apple’s macOS

Summary

Compressed files on macOS are auto-detected by the operating system even if they are renamed with certain other extensions. This can be used to fool users, and antivirus software that relies on file extensions, by packaging malicious code inside compressed files carrying different extensions. The vendor (Apple) does not consider this to be a security issue. Most antivirus vendors for macOS are not affected by this issue. This was originally discovered in macOS v10.11 (El Capitan) and v10.12 (Sierra); the latest version, macOS v10.13 (High Sierra), was not tested.

Background

[NOTE: This bug was originally discovered as a result of a different set of bugs in Google’s Chrome browser. While the impact of this particular issue isn’t high, it was interesting enough for us to pursue a coordinated disclosure process. Because of the large number of parties involved, the coordination took a long time, which is why this article took almost two years to publish.]

On Microsoft Windows, files are identified by their extension, which appears after the “.” in the filename. On macOS, metadata about the file may be available separately, and either a creator code, a type code or a Uniform Type Identifier (UTI) is used. However, on the Internet (in browsers and email clients), MIME media types are used instead of filenames, with a registry maintained by IANA on behalf of the IETF. Linux systems use a mix of extensions and media types, with some auto-detection / “sniffing” of media types based on file content. Some mappings do exist across the various systems as well.

For example, a ZIP archive would be identified as follows:

  • Windows – .zip extension
  • Internet/Linux – application/zip media type
  • macOS UTI – com.pkware.zip-archive

Additionally, on most desktop OSes, an association exists between a file type and the application that will open it by default. These associations are maintained differently from OS to OS, but at their core they tie a particular file-type identifier, such as an extension (Windows) or a media type (browsers), to a program assigned to open it by default. Users are used to this arrangement, and many security utilities such as antivirus programs will only look inside files that may be dangerous. For example, a ZIP file on Windows renamed to a different extension may not necessarily be scanned by default, because double clicking on it will not open it.

Another important point is that malware authors may sometimes try to disguise malicious code by compressing it inside an archive such as a ZIP file. The expectation is that when users download it, they will double click and open it using the default program on that platform, and then execute the malicious code. This is another reason why this functionality deserves a closer look.

Details

The following two things were discovered:

  1. The compression utility that is part of macOS will open files with any extension associated with that program and will try to “sniff” / auto-detect the original file type used. The following file extensions were tested:
    • ZIP Files when renamed as:
      • .AS
      • .CPGZ
      • .PAX
      • .XIP (a Gatekeeper warning will be shown for non-signed files)
    • DMG files when renamed as:
      • .CDR
      • .DART
      • .DC42
      • .DISKCOPY42
      • .DMG
      • .DMGPART
      • .DVDR
      • .IMG
      • .IMGPART
      • .ISO
      • .NDIF
      • .SMI
      • .SPARSEBUNDLE
      • .SPARSEIMAGE
      • .TOAST
      • .UDIF
  2. The OS itself (macOS) will open and execute some file formats even when they are renamed to a different extension. Gatekeeper protection is not bypassed. The following extensions are affected:
    • PKG
    • MPKG

To duplicate the first issue, create a ZIP file containing any content (we used the EICAR test file) and rename it with any of the compression-format extensions listed above for ZIP (AS, CPGZ, PAX or XIP). Send this file to a macOS computer via USB, email or a link; download and double click. The ZIP file will open correctly. You can also do the same thing with a DMG file for any of the DMG file formats listed above (DC42, ISO, etc).

To duplicate the second issue, create a PKG file containing some code, or take an existing one, rename it as .MPKG and transfer it to a macOS computer. Double click to execute.
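
A quick terminal sketch of both checks (file names are illustrative; “open” launches the default handler just like a double click):

# Issue 1: the ZIP is sniffed and unpacked despite the mismatched extension
zip sample.zip eicar.txt
mv sample.zip sample.cpgz
open sample.cpgz

# Issue 2: a renamed PKG still launches the Installer
mv installer.pkg installer.mpkg
open installer.mpkg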

All testing was done in May 2016 on a MacBook Pro running macOS v10.11.3 (El Capitan), and repeated in April 2017 on a MacBook running macOS v10.12.4 (Sierra). It is unclear whether later versions of macOS are affected, since we did not perform testing on versions past v10.12.4 (Sierra).

Recommendations

There are two issues:

  1. Human users and anti-malware software are not aware that macOS supports a large number of legacy compression file types and may not be properly looking out for them or scanning them.
  2. Because of the “sniffing” behavior, it would be trivial for an attacker to package malware inside a well-known format like ZIP or DMG and rename it to one of these extensions. Antivirus software may fail to scan such archives because it does not expect a ZIP file to be packaged that way.

The information in this article was originally discovered while analyzing Google’s Chrome browser (details here).

Our recommendations are as follows:

  • Apple should consider deprecating or adding a warning for these extensions and removing the “sniffing” support.
  • Anti-malware software for macOS should support all of these formats, as well as account for the possibility of one format being renamed as another.

Vendor Responses

The vendor (Apple) does not consider this to be a security issue, responding as follows:

After examining your report we do not see any actual security implications. All of the extensions provided in your report are supported disk image formats and will be treated equally.

After examining your report we do not see any actual security implications. Archive Utility opens archive files and the extensions you provided are archive extensions.

After examining your report we do not see any actual security implications. The Installer app makes it clear when executable code is running even if the file has been renamed.

As per advice of Apple’s security team, we also contacted multiple antivirus vendors that provide AV software for macOS to check if they are affected by this issue. Here is what we got back:

Vendors That Responded:

  • Avast – not affected
  • Avira – not affected
  • AVG – related bug for engine versions prior to 4668 has been fixed earlier (see CVE-2017-9977 and our blog post); other products not affected
  • BitDefender – not affected
  • Cisco – one product impacted, tracked by bug identifier CSCve34034 – Cisco has issued an advisory and is tracking this under CVE-2018-0237:
    • Cisco AMP Virtual Private Cloud Appliance – The Cisco AMP appliance does not rely on the file extension when processing ZIP archives or PKG install packages. However, older versions relied on the file extension to detect DMG files and so were susceptible to one of the scan evasion problems described in the advisory. The DMG portion is now fixed in software release 1.4.5.
  • ClamXAV (Canimaan Software) – not affected
  • Comodo – not affected
  • CyberByte – not affected
  • Dr. Web – not affected
  • ESet – not affected
  • F-Secure – not affected
  • Intego – not affected
  • Kaspersky – not affected
  • Malware Bytes – not affected
  • Protect Works – not affected
  • QuickHeal – not affected
  • Sophos – not affected
  • Symantec – not affected
  • Trend Micro – not affected
  • Webroot – not affected

Other Vendors:

  • 360 Total Security – pending
  • BullGuard – no response
  • EScanAV – no response
  • GData – pending
  • Google Chrome – safe browsing affected prior to M51 and M52 (see our blog post here)
  • MacKeeper – no response
  • McAfee – no response
  • Panda – no response
  • QuikAV – pending
  • Total Defense – pending

References

Apple Product Security Followup Numbers: 638059697, 640528823 and 640528841

Cisco Bug # CSCve34034 / CVE # CVE-2018-0237

Credits

Advisory written by Yakov Shafranovich.

Timeline Summary

2016-03-21: Report # 638059697 submitted
2016-05-04: Reports # 640528823 and 640528841 submitted
2016-05-21: Report # 640528823 rejected
2016-06-22: Report # 638059697 rejected
2016-06-23: Report # 640528841 rejected

2017-03-15: Advisory provided to the vendor for comment
2017-04-23: Retested on macOS Sierra, updated and resent to vendor for comment
2017-04-28: Reply from vendor received
2017-05-01: Retested on a fresh install of macOS Sierra, revised advisory sent to vendor for comment
2017-05-01: Notifications go out to AV vendors

2018-01-24: Second time that notifications go out to AV vendors
2018-02-10: Third and final time that notifications go out to AV vendors
2018-02-10: Final advisory shared with the vendor (Apple) for comment
2018-02-25: Public disclosure

2018-04-23: Updated with the new Cisco advisory and CVE

Experimental Tool: Public Cloud Storage Search

This was done as an experiment using Google’s Custom Search Engine. This tool provides access to publicly available content that Google indexes from major cloud providers such as AWS, Azure, DropBox, Google Cloud, etc.

More information here: https://wwws.nightwatchcybersecurity.com/tools/

Source here: https://github.com/nightwatchcybersecurity/public-cloud-storage-search

Live example here: https://cse.google.com/cse/publicurl?cx=002972716746423218710:veac6ui3rio


The Dangers of Plain HTTP Links in Mobile Apps

When dealing with browser security, there is a concept called “the line of death”: a user can only trust content that appears in the browser’s address bar or above it, and nothing below that line (there is an excellent article from Eric Lawrence, a Chrome developer, explaining this in detail). Users can safely act on content above that line, but not below it, since content appearing below the line may be faked or modified by an attacker. The rest of the browser UI, including menus, settings sections, the about box, etc., is static and should be safe (unless modified by extensions).

The same concept applies to mobile apps – the parts of the UI that are static should be safe as well, although it is harder to tell the static and non-static parts apart. This leads to the issue at hand: what happens when the static parts of the app contain hyperlinks that don’t use HTTPS? A user of the app would normally trust those links, but on a hostile network, clicking a plain HTTP link exposes them to a potential MITM attack via DNS hijacking or traffic interception. If an attacker controls the DNS or the network connection itself, these links can be easily hijacked. You can easily imagine a scenario where an attacker blocks WhatsApp or Facebook traffic but redirects users who follow the HTTP links to their own malicious site.

On the other hand, when HTTPS is used for these links, the mobile browser will validate the SSL certificate served for that link and check that it is signed by a trusted CA.
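
As a hedged sketch of a simple defensive measure (class and method names are ours, not from any particular app), an app can refuse to launch its static links unless they use HTTPS:

import android.content.Context;
import android.content.Intent;
import android.net.Uri;

// Sketch: only open hard-coded "static" links if they use HTTPS.
public final class SafeLinks {
    public static void open(Context context, String url) {
        Uri uri = Uri.parse(url);
        if (!"https".equalsIgnoreCase(uri.getScheme())) {
            // Fail loudly instead of exposing the user to an MITM attack.
            throw new IllegalArgumentException("Refusing non-HTTPS link: " + url);
        }
        // Assumes an Activity context; otherwise add FLAG_ACTIVITY_NEW_TASK.
        context.startActivity(new Intent(Intent.ACTION_VIEW, uri));
    }
}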

Thoughts?

Research: The Dangers of Proxying S3 Content

Background

It is common for organizations to use Amazon’s S3 service as a place to host static assets and other content. The content within Amazon S3 is organized in “buckets”. Amazon also provides the ability to point custom domains at S3 buckets through virtual hosting or the static website endpoints. In both cases, a CNAME mapping is created from the custom domain to an Amazon domain name.

However, SSL support is not available via the custom domain name; SSL is provided only if the “s3.amazonaws.com/<bucket-name>” URL or the “<bucket-name>.s3.amazonaws.com” domain name is used directly (as long as there are no periods in the bucket name). The reason SSL doesn’t work when accessing the custom domain names is that Amazon cannot provide certificates for them – Amazon is not the domain owner, and current S3 functionality does not allow custom certificates to be loaded. For “s3.amazonaws.com” domains, however, Amazon provides a wildcard certificate which works just fine. If you try to access a custom domain name over SSL directly, you will be served content with the same wildcard certificate, which of course will not match the domain name.
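
The mismatch is easy to see from the command line; in this sketch, “static.example.com” is a hypothetical custom domain CNAMEd to an S3 bucket, so the handshake fails because the *.s3.amazonaws.com wildcard certificate does not cover it:

# Certificate validation fails against the wildcard certificate
curl -v https://static.example.com/image.png

# Inspect the certificate actually being served
openssl s_client -connect static.example.com:443 -servername static.example.com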

Possible SSL solution – CloudFront or Another CDN

One possible solution offered by AWS is to use their CDN offering, CloudFront. In that case, you can set up CloudFront distributions that sit in front of the S3 buckets and CNAME your domain names to them. This of course comes at a higher price and with a confusing set of options: you can use the cloudfront.net subdomains, a free SNI-based SSL certificate not compatible with older browsers, or a costly ($600/month per domain) option to upload your own SSL certificate. The data would then flow as follows:

[S3] >—-internal AWS network—-> [CloudFront] >—–SSL—-> [users]

Another set of solutions is to use a non-AWS CDN like CloudFlare and have the CDN proxy the content with SSL. The setup would be similar to Amazon’s, with SNI and non-SNI SSL options available. The data flow would then look like this (for CloudFlare):

[S3] >—-HTTP—-> [CloudFlare] >—–SSL—-> [users]

What You Should Not Do – Proxy S3 Content Yourself

Of course, many developers will immediately react to this particular problem in the same way: I can do it better by myself! The usual solution is to have a script or a webserver rule that automatically retrieves the content from S3 and serves it to the user from the site’s own domain.

Everything after “/static/” in the request URL would then be retrieved from some S3 bucket, let’s say “marketing.example”.

Of course, this only lasts as long as there is only one bucket. Let’s say another bucket is now needed, called “support.example”. The script then grows to take the bucket name from the URL as well.

What will often happen at this point is that the developer does not realize that the bucket names need to be validated against a whitelist of valid buckets. Because S3 bucket names are not unique to one AWS user but share a global namespace across all S3 users, such a script can retrieve data from any other S3 bucket, as long as it is public. This cannot happen when using CloudFront or other CDNs because they are mapped 1-to-1 against a specific S3 bucket.

How would this look? If an attacker figures out that the script takes arbitrary bucket names, they can go ahead and create a new bucket called “evil.example” and then request URLs through the proxy to retrieve content from it; a sketch of the vulnerable pattern and the missing check appears below.
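
Here is a minimal sketch of the pattern as a Java servlet (class, bucket and path names are hypothetical, not from any real codebase); the whitelist check marked below is the piece that is commonly missing:

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Set;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Proxies /static/<bucket>/<key> to S3 (illustrative sketch only).
public class S3ProxyServlet extends HttpServlet {
    // Without this whitelist, ANY public S3 bucket can be served
    // from our domain, since bucket names share a global namespace.
    private static final Set<String> ALLOWED_BUCKETS =
            Set.of("marketing.example", "support.example");

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String[] parts = req.getPathInfo().substring(1).split("/", 2);
        String bucket = parts[0];
        if (!ALLOWED_BUCKETS.contains(bucket)) {  // the commonly missing check
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        URL url = new URL("https://s3.amazonaws.com/" + bucket + "/" + parts[1]);
        try (InputStream in = url.openStream()) {
            in.transferTo(resp.getOutputStream());
        }
    }
}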

What can this be leveraged for? Some examples:

  • Serving malware since the content will be served under the target domain and the target SSL certificates
  • Facilitating phishing attacks
  • XSS since HTML / JS content will bypass the same origin policy since it is served from the same domain as the target
  • Stealing session cookies since the code will run in the same domain and have access to cookies
  • If the content is retrieved using the S3 APIs, then an attacker could setup a “Requester Pays Bucket” and make money off the target (although Amazon would probably catch this eventually)
  • [insert your exploit here]

Recommendations

  • Don’t re-invent the wheel, use an existing solution like CloudFront, or some other CDN
  • If you must proxy content yourself, make sure you have a whitelist of valid buckets, and use other technologies like subdomains, HTTPOnly cookies, CSP headers, etc. to segregate the S3 content from the rest of the site

Research: Crashing Browsers Remotely via Insecure Search Suggestions

Summary

Intercepting insecure search suggestion requests from browsers and returning very large responses leads to browser crashes (but not RCE). Affected browsers are FireFox on desktop and Android, and Chrome on desktop and Android; other Chromium- and FireFox-derived browsers may be affected. Internet Explorer and Safari are not affected. The issue is exploitable remotely, albeit not easily.

Background – Search Suggestions

Most browsers, desktop and mobile, support a feature which allows users to type either in the address bar or the search box and see a list of “search suggestions”. These are similar to the search suggestions provided by most search engines on their homepages and search bars.

The protocol that underlies this mechanism is the OpenSearch Suggestions extension, a JSON protocol running over HTTP (as defined here). This protocol allows browsers and other applications to send simple keyword queries to the search engine’s servers, which return JSON responses that the browser translates into results in the search bar. It should be noted that some search engines define their own APIs instead of OpenSearch, which browsers then implement.
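
For example, a suggestions query for “fir” might return a response along the following lines (an illustrative sketch of the format: the query echoed back, then completions, optional descriptions and optional URLs):

["fir", ["firefox", "firewall", "fir tree"], [], []]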

Search engines can also publish OpenSearch description documents (as defined here) and embed them in their webpages, which browsers can automatically discover and use. The discovery of new search engines happens automatically in some browsers when the user visits a particular site (Chrome and IE Edge [SSL only]), or is triggered manually by the user via an icon in the search bar (FireFox). FireFox and Internet Explorer (prior to Edge) also support plugins and APIs for doing this as well.

An example of an OpenSearch description document defining the suggestion protocol, from AOL Search (original from here) – note the “application/x-suggestions+json” Url element that defines the search suggestion endpoint:

<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
                       xmlns:moz="http://www.mozilla.org/2006/browser/search/">
    <ShortName>AOL Search</ShortName>
    <Description>The AOL Search engine delivers great search results so
    you can search less and discover more.</Description>
    <Language>en-us</Language>
    <InputEncoding>UTF-8</InputEncoding>
    <Image width="16" height="16" type="image/x-icon">http://search.aol.com/favicon.ico</Image>
    <Url type="text/html" method="get" template="http://search.aol.com/aol/search?q={searchTerms}&amp;s_it=opensearch"/>
    <Url type="application/x-suggestions+json" template="http://autocomplete.search.aol.com/autocomplete/get?output=json&amp;it=opensearch&amp;q={searchTerms}"/>
    <moz:SearchForm>http://search.aol.com/aol/webhome</moz:SearchForm>
</OpenSearchDescription>

Background – Search Engines and HTTPS

Even in the post-Snowden era, many popular search engines still do not support encryption (HTTPS). Other search engines may support HTTPS but still accept non-HTTPS connections and do not redirect users automatically to HTTPS. Because browser vendors tend to include the most popular search engines in specific countries by default, they end up including multiple search engines which do not use HTTPS. This also applies to the search suggestion endpoints used by browsers.

Some examples (for English locale, US only):

  • Android AOSP stock browser (source)
    • Bing (non-SSL version)
    • Yahoo (non-SSL version)
  • Chrome (desktop and Android) (source)
    • AOL Search (does not support SSL)
    • Ask.com (does not support SSL)
  • FireFox (desktop only) (source)
    • Ebay (does not support SSL)

Exploit Details

Because browsers include multiple non-HTTPS search engines with insecure search suggestion endpoints, it is possible for an attacker at the network level to intercept the traffic flowing between the browser and the search engine endpoint and substitute their own response. If a very large response is returned (2+ GB), the browser can run out of memory and crash. This is due to the fact that browsers do not check the size of search suggestion responses. Obviously, this is more of an issue for mobile devices, which have less memory than desktops.

For the Android AOSP browser and Chromium, this issue appears to be directly tied to the code that processes search engine responses. For FireFox, it is a more generic issue with large XMLHTTPRequest responses, which is what the browser uses internally for search suggestions. Our bug reports with the vendors provide more details on which code is at fault. This reinforces the fact that network traffic SHOULD NEVER be trusted.

The following crashes were observed – we have not been able to cause an RCE or a buffer overflow:

  • Android AOSP stock browser on Android (v4.4) – application crashes
  • Chrome v51 on Android (v6.01) – application crashes
  • Chrome v51 on desktop Linux (Ubuntu v16.04) – the entire computer freezes and requires a reboot (this may be due to swapping being disabled with an SSD drive)
  • FireFox v47 on desktop Linux (Ubuntu v16.04) and Android (v6.01) – application crashes

Safari v9.1, Internet Explorer 11 and Edge appear not to be affected, although a similar bug has happened before in Safari. We did not test prior versions of either Safari or IE. We also did not test any other browsers derived from Chromium or FireFox.

The practical exploitation of this issue is mitigated by several factors:

  • The attacker must have control over DNS and the network traffic of the victim machine. This is most likely in cases of a rogue WiFi hotspot or a hacked router.
  • Most browsers have a rather short timeout for the search suggestions response, not allowing sufficient time for the large response to be transferred over the network
  • Due to the very large response size needed to trigger this issue, it is only exploitable over broadband or local networks, such as a rogue WiFi hotspot

Vendor Responses

Google response re: Android AOSP browser:

The team reviewed this issue and don’t believe there is a security vulnerability here. It seems the worse things that can happen is the browser crashes due to resource exhaustion. The phone is still usable so there isn’t a denial of service.

Google response re: Chromium:

We don’t consider DoS to be a security vulnerability. See the Chrome Security FAQ:

https://www.chromium.org/Home/chromium-security/security-faq#TOC-Are-denial-of-service-issues-considered-security-bugs-

The Mozilla / FireFox response was to remove the security restriction on this bug, thereby indicating that they do not consider this a security issue.

Steps to Replicate

(This is for Chrome but is similar for other browsers)

1. Install DNSMASQ and NGINX:
sudo apt-get install dnsmasq nginx
2. Modify the /etc/hosts file to add an entry mapping the suggestion endpoint to the IP of the local computer (the suggestion host varies by search engine):
192.168.1.x ss.ask.com
3. Configure the /etc/dnsmasq.conf file to listen on that IP:
listen-address=192.168.1.x
4. Restart DNSMASQ:
sudo /etc/init.d/dnsmasq restart
5. Use fallocate to create a very large file named “query” in “/var/www/html/”:
sudo fallocate -l 5G /var/www/html/query
6. Modify the DNS settings on the test machine (or the same machine) to point to “192.168.1.x”. If the same machine, modify resolv.conf as follows:
nameserver 192.168.1.x
7. Start Chrome, go to settings and choose “Ask.com” as the default search provider.
8. Open a new tab and try to type something in the omnibox.

References

Android bug reports: 214784 and 214785
Chromium bug reports: 624779 and 624794 (patch accepted)
FireFox bug reports: 1283675 and 1283672
OpenSearch description document: doc here
OpenSearch Suggestions extension v1.1: doc here
Safari Search Suggestions bug: see ArsTechnica story here

Credits

Researched and written by Yakov Shafranovich.

Timeline

2016-06-30: Bug filed with Android
2016-06-30: Bug filed with Chromium
2016-06-30: Bug filed Mozilla/FireFox
2016-06-30: Response from Chromium, Won’t Fix
2016-07-12: Response from Android, not a security issue
2016-07-13: Android team is ok with disclosure
2016-07-14: Mozilla removes security restrictions on the bug
2016-07-26: Public disclosure

Research: Securing Android Applications from Screen Capture (FLAG_SECURE)


Summary — TL;DR

Apps on Android, as well as some platform services, are able to capture other apps’ screens by using the MediaProjection API. Because of the way this API implements “securing” sensitive screens, some security issues are possible. The best way to secure your Android app is to use FLAG_SECURE on sensitive screens and DO NOT use the virtual keyboard (here is why).

MediaProjection API

Since Android 5.0, there is a MediaProjection API that allows apps to record videos and take screenshots of screens belonging to other apps. The API is described as follows:

Android 5.0 lets you add screen capturing and screen sharing capabilities to your app with the new android.media.projection APIs. This functionality is useful, for example, if you want to enable screen sharing in a video conferencing app. The new createVirtualDisplay() method allows your app to capture the contents of the main screen (the default display) into a Surface object, which your app can then send across the network. The API only allows capturing non-secure screen content, and not system audio. To begin screen capturing, your app must first request the user’s permission by launching a screen capture dialog using an Intent obtained through the createScreenCaptureIntent() method.

(On Android versions prior to 5, there are other methods, such as undocumented APIs and ADB; we are focusing on Android 5+.)

This API also drives several other functions in the OS, such as screen casting. All of these functions, as well as the MediaProjection API itself, can take screenshots and videos of other apps. For apps to use the API, a special permission is required; for platform features, no special permission is needed. Additionally, any application signed with the system key (Google apps) can use this API without permission as well.

A good open source example of an application that uses the API can be found here:

https://github.com/JakeWharton/Telecine

Secure and non-secure content

As mentioned in the Google docs above, “the API only allows capturing non-secure screen content”. What exactly is “secure” and “non-secure” content?

This refers to a special flag which can be applied to views in Android, called FLAG_SECURE. It is described in Android docs as follows:

Treat the content of the window as secure, preventing it from appearing in screenshots or from being viewed on non-secure displays

Setting this flag on an Android view will prevent screenshots from being taken manually, and any other app or platform service that captures the screen will see a black screen instead. This functionality is not global for the entire app; it can be set on specific screens which are more sensitive, and not set on others. There is no other way or permission that can exempt an entire app, or any part of it, from screen capture or recording.
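
As a hedged sketch of the typical usage (the activity and layout names are illustrative), the flag is set in onCreate() before the content view is attached:

import android.app.Activity;
import android.os.Bundle;
import android.view.WindowManager;

public class PinEntryActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Mark this window as secure: manual screenshots are blocked, and
        // MediaProjection-based capture shows a black screen instead.
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
                             WindowManager.LayoutParams.FLAG_SECURE);
        setContentView(R.layout.activity_pin_entry);
    }
}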

NOTE: Even on views marked with FLAG_SECURE, the virtual keyboard is ALWAYS visible. This is due to a known Android bug which Google has so far refused to fix:

https://code.google.com/p/android/issues/detail?id=129285

How screen capture really works in Android

The term “secure” as used in this context does not mean that the content of the app cannot be captured, but rather that it cannot be “viewed on non-secure displays”. This is because screen capture, and the concept of secure / non-secure, isn’t what developers may think it is.

Behind the scenes, this API and the related platform services use the concept of casting (similar to AirPlay). Apps that capture screenshots and record videos must create a virtual display, to which the device’s content is then cast. The FLAG_SECURE flag is likewise not about security; it refers to copyrighted content in the context of DRM and displays – i.e. secure content would be something like a DVD, and a secure display would be an HDTV.

This is clear on the device itself — when an app begins to record the screen, the cast icon is turned on in the notification bar. This is also clear from the Android source code and this doc:

Display flag: Indicates that the display has a secure video output and supports compositing secure surfaces. If this flag is set then the display device has a secure video output and is capable of showing secure surfaces. It may also be capable of showing protected buffers. If this flag is not set then the display device may not have a secure video output; the user may see a blank region on the screen instead of the contents of secure surfaces or protected buffers.

That means that an Android device casting to a DRM-protected display like a TV would always show sensitive screens, since “secure” here really means “copyrighted”. For apps, Google forestalled this issue by preventing apps not signed with the system key from creating virtual “secure” displays, but the same does not hold for physical displays. There is also an existing Android bug asking for the concepts of DRM and screen security to be separated into different flags:

https://code.google.com/p/android/issues/detail?id=93026

Security issues with the current API

First of all, a basic foundation of mobile app security is a clear separation between apps. One Android app should never be able to read the preferences or data of another app, or capture its cloud notifications. This paradigm breaks down in the case of screen capture/recording. An app gaining access to the MediaProjection API, or to any of the platform services using it, is able to capture screen output from other apps, including PIN numbers, passwords, credit card numbers, etc.

Second, because a flag meant for marking copyrighted content is used, it is easier to subvert the system. Some ways this can be subverted include:

  • Gaining the ability to create a virtual display marked as secure would expose all secure content. Right now this is prevented by requiring the system key, but a rooted phone, or some other way of fooling the system into creating such a display, would bypass this protection.
  • Casting to a physical secure display, or perhaps a wireless one, would also show the content

Third, even with FLAG_SECURE in use, some parts of the screen can still be captured. The virtual keyboard is one existing example, but there may be others (perhaps notifications?).

[ADDED 06/24/2016: Mark Murphy from CommonsWare points out several other issues with FLAG_SECURE child objects – see his blog post here]

Fourth, there is no clear indication to users that they are being recorded other than the cast icon. Clicking on the icon shows no devices, since virtual displays are not listed. A better warning may be needed.

A better solution, as suggested in this bug report, would be to define a separate flag, never allow any app or system service to see its output for any reason, and blank out the entire screen even if other apps or services display anything on it. An even better solution would be to make screen capture opt-in for apps, instead of opt-out.

Attack vectors

There are several possible avenues of attack which would result in an app being installed on a user’s phone recording their app activity. These include:

  • Malicious apps in the app store that masquerade as legit casting apps requiring record permission — since users don’t know that casting apps can also record their screen
  • Remote install via compromised desktop as described in this paper
  • Overlaying permission screens as described here

To record even FLAG_SECURE screens, the following can be tried, though they are not practically feasible:

  • On rooted phones, creating a virtual display marked as “secure”
  • Fooling the system into thinking that a given app is a system app, allowing it to create secure displays

All of these would result in an app sitting on the phone and recording user activity. However, except with the last two methods, FLAG_SECURE views would not be recordable, although the virtual keyboard would be. The only indicator to the user would be the cast icon, but when they click on it, no devices are listed.

Attacks in the Wild

Some examples of these type of attacks happening in the wild:

  • July 2018 – Panoptispy Study – see here, here, here and here
  • July 2018 – Anubis Android malware – see here and here

Conclusion — Protect Your Apps

To protect your apps from being recorded by other apps, FLAG_SECURE should be used on any views containing sensitive data. Additionally, since the virtual keyboard is also vulnerable, it is recommended that these screens either use a custom keyboard or, if feasible, an on-screen custom layout (for numeric input like PIN numbers).

We have surveyed many of the top apps in the Google Play store, and many of them, including some Google-owned apps, either do not use FLAG_SECURE or, if they do, do not secure the keyboard.

We also hope that Google would lock down this API and solve the issues highlighted in this article.

Credits

Researched and written by Yakov Shafranovich.

References

Google CID: 3–5606000008769