Exfiltrating data from remote access services via video and sound

Given the current situation, many of us are now working remotely all the time. Many such arrangements are facilitated via tools like Citrix, RDP, VNC, LogMeIn, etc. We have been researching ways to exfiltrate data via such setups. Here are some obvious choices:

  • File connections – if enabled
  • Remote USB connections – if enabled
  • Remote printing connections – if enabled
  • Exfiltrating via email or Internet connections at the remote desktop level

Most of these have obvious controls that an administrator can activate, which would leave attackers with very few channels. The two we were particularly interested in are video and sound, since the user can view their remote screen and many tools also relay sound from the remote desktop.

For exfiltration of data via video, we originally considered encoding the data in base64 using a tool such as the Windows certutil CLI command, then doing a screen capture on the host and running some sort of OCR against it, such as Tesseract. However, we ran across a much better tool from Pen Test Partners called PTP-RAT, which flashes the pixels on the screen to transfer information (see their blog post and GitHub repo).
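That original certutil-plus-OCR approach can be sketched as a toy roundtrip in Python. Note that certutil, the screen capture, and Tesseract are replaced here by in-memory strings to keep the example self-contained, and the function names are ours:

```python
import base64
import textwrap

def encode_for_display(data: bytes, width: int = 64) -> str:
    """Base64-encode data and wrap it into fixed-width lines,
    similar to what `certutil -encode` produces on Windows."""
    b64 = base64.b64encode(data).decode("ascii")
    return "\n".join(textwrap.wrap(b64, width))

def decode_from_ocr(text: str) -> bytes:
    """Reassemble base64 text recovered by OCR and decode it.
    Real OCR output would need error correction; this assumes a clean read."""
    return base64.b64decode("".join(text.split()))

secret = b"payroll.xlsx contents..."
displayed = encode_for_display(secret)   # shown on the remote screen
recovered = decode_from_ocr(displayed)   # captured and OCR'd on the host
assert recovered == secret
```

In practice the OCR step is the weak link, which is exactly why a purpose-built tool like PTP-RAT performs better than this naive pipeline.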

For exfiltration of data via sound, we originally considered writing a tool that would modulate the data into audio, the way modems did back in the 1980s/1990s. However, we ran across a much better suited tool from Roman Zeyde called amodem which does exactly this. While the tool is designed for exfiltrating data across a physical air gap, it should work the same way on a remote desktop: the data is converted into sound via the sound card, captured back on the host, and then decoded.
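amodem uses a far more sophisticated modulation scheme; the following toy binary-FSK sketch in pure Python (all names are ours) just illustrates the modulate/capture/demodulate roundtrip the technique relies on:

```python
import math

RATE = 8000          # samples per second
BIT_SAMPLES = 80     # 80 samples per bit -> 100 bits/sec
F0, F1 = 1000, 2000  # tone frequencies for bit 0 / bit 1

def modulate(data: bytes) -> list:
    """Turn each bit into a short sine burst at F0 or F1 (binary FSK)."""
    samples = []
    for byte in data:
        for i in range(8):
            f = F1 if (byte >> (7 - i)) & 1 else F0
            samples += [math.sin(2 * math.pi * f * n / RATE)
                        for n in range(BIT_SAMPLES)]
    return samples

def goertzel(block, freq):
    """Power of a single frequency in a sample block (Goertzel algorithm)."""
    k = 2 * math.cos(2 * math.pi * freq / RATE)
    s1 = s2 = 0.0
    for x in block:
        s1, s2 = k * s1 - s2 + x, s1
    return s1 * s1 + s2 * s2 - k * s1 * s2

def demodulate(samples: list) -> bytes:
    """Decide each bit by comparing tone power at F0 vs F1."""
    out, bits = bytearray(), []
    for i in range(0, len(samples), BIT_SAMPLES):
        block = samples[i:i + BIT_SAMPLES]
        bits.append(1 if goertzel(block, F1) > goertzel(block, F0) else 0)
        if len(bits) == 8:
            out.append(int("".join(map(str, bits)), 2))
            bits = []
    return bytes(out)

assert demodulate(modulate(b"exfil")) == b"exfil"
```

In the remote-desktop scenario, the modulated samples would be played through the remote machine's sound card and the demodulation would run against the audio captured on the host.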

P.S. For extra brownie points, you can also try enabling the webcam and microphone on the host, and transfer data from the host back to the remote desktop using the same mechanisms.

FLAG_SECURE and Accessibility Services (a11y)

About four years ago, we first blogged about FLAG_SECURE in Android applications and how it can be used to protect sensitive screens from being captured in screenshots. One of the things that prompted our original blog post is the fact that FLAG_SECURE was never intended as a screenshot-security mechanism – its screenshot-blocking behavior is rather a side effect of DRM.

Since then, we have reported the lack of FLAG_SECURE as an issue to multiple Android app vendors. Our most recent blog post was about the fact that Google’s 2FA application (“Google Authenticator”) lacked FLAG_SECURE. We disclosed that specific issue because of recent reports of malware attacking such applications using Android’s accessibility services (a11y). Yanick Fratantonio (who has a lot more experience with these services) went to the trouble of putting together a POC, as well as writing a truly excellent blog post about how FLAG_SECURE does NOT protect against attacks using these services.

After reviewing both our research and disclosures published by others, there seems to have been an assumption going around for a while that FLAG_SECURE should protect against such a11y attacks, but no actual data to back this up. To be clear, FLAG_SECURE should still protect against malware using the MediaProjection API to record video or take screenshots, but it DOES NOT protect against attacks using Android’s accessibility services (a11y).

We amended our earlier posts to make this distinction clear and would like to thank Yanick Fratantonio for bringing this up.

Another Tale of Personal Data Harvesting: Alsid, Lusha.co and Simpler Apps

After reading a recent post by Antoine Neuenschwander, we wanted to share a similar experience from one of our consultants regarding the sale and use of their personal data, featuring many of the same players as Antoine’s post.

Part 1 – The Sales Call from Alsid

One of the many scourges of modern work is that salespeople try to reach you all the time. In this particular case, our consultant was contacted on their work phone, via email, and on LinkedIn by someone from a French cybersecurity company called Alsid. Then, to their surprise, a call from France rang on their personal, US-based cell phone and left a voicemail. That got them curious: how did this company get hold of a personal cell phone number? Since the company is French, it is subject to GDPR, so they asked for a copy of their data held by the company. The company did provide a fairly extensive GDPR response, but the cell phone number wasn’t in it! After follow-up questioning, they eventually dug it out – the salesperson got it from Lusha.co:

Screen Shot 2020-02-10 at 11.07.33 PM

There are several privacy concerns here:

  • Why was the number not provided in the initial GDPR request?
  • Did the caller check to make sure the number wasn’t a wireless phone AND wasn’t on the Do Not Call List (since it is illegal to place telemarketing calls to such numbers in the US)?

Part 2 – The B2B Contact Enrichment Tool – Lusha.co

Lusha.co provides a set of plugins that enrich LinkedIn profiles – if you are looking at someone’s profile, they can supplement it with the person’s phone number or email from other sources. The website, marketing materials and privacy policy are pretty explicit about this:

Screen Shot 2020-02-10 at 11.12.56 PM

And (emphasis added):

Our Services are designed to help Users and vendors (e.g. HR professionals, B2B partners, sales platforms) validate and verify contact information and to find business profiles they seek in order to interact with relevant Contacts (as defined below), through access to business profiles retained in Lusha’s database (“Lusha Database”). 

A data request sent to Lusha.co resulted in the response below. Note the language around Simpler – specifically the last paragraph. It seems that Simpler provides mobile apps to be used for “verification”, then those apps slurp up the user’s contacts and share them back with Lusha.co:

Screen Shot 2020-02-10 at 11.25.52 PM

Excerpt of the text appears below:

Simpler also offers its users the opportunity to contribute to a collaborative security effort, meant to assist in authenticating the identifying attributes of an individual. This effort can assist in establishing a trusted channel of communication for online and offline interactions.

If a Simpler user consents to contribute to this effort, basic contact information (name and phone number) found within such user’s contacts may be shared with Lusha, which implements the security solution.

If you dig deeper into the Lusha.co materials, a lot of similar language appears there as well. Instead of a discussion of B2B contact data, it suddenly becomes a matter of “security”, “trust” and a “collaborative security effort”. When you look at their data page (emphasis added), note that the language quickly changes from “lead enrichment” or “B2B data” to a “collaborative security effort”:

Lusha’s core purpose is to make online engagement safe and efficient. In today’s fast-paced and multi-layered world, one of the main challenges to online users is trust. A major risk in online interactions is the risk of encountering fraud, whether by phishing attempts or by identity theft. Widespread fraud can lead to the loss of customer trust, extra costs of time and money required to manage fraud incidents, damages to the reputation of individuals and institutions, possible legal costs and many more negative outcomes.

Lusha’s unique solution is based on a collaborative security effort, effectively utilizing information to verify online identities. The Lusha service provides its users with valuable insights and assists in authenticating the identity of individuals in the online sphere.

There are several privacy concerns here:

  • Why is a tool that claims to provide B2B information obfuscated behind being “a security solution”?
  • How can “security” and “trust” justify essentially taking users’ address books from their mobile devices and selling that data for marketing?

Part 3 – The Mobile Address Books from Simpler

The link provided by Lusha actually leads to the Google Play Store, where two apps are listed – Simpler Caller ID and Dialer, each with more than 5 million installations. The link on the store listings leads to the company’s website, where a third app is listed – EasyBackup, a contacts backup manager (iOS only). That one is owned by a different company called “Top Floor”, which also makes an app called “Cleaner Pro” (for iOS) that claims to remove duplicate contacts. Mailing addresses for both companies go to co-working spaces: one in Brooklyn, NY and the other in Los Angeles, CA.

Here are the apps:

The Simpler Apps website still lists all of these apps as being theirs:

Screen Shot 2020-02-11 at 11.08.27 PM

A request was sent to Simpler for a copy of the data they collected, and the following response was received – no data. A follow-up request was sent and a response is still pending:

Screen Shot 2020-02-11 at 11.01.27 PM

At this point we are at a dead end with Simpler, but further research reveals the following nuggets buried in the terms of use and privacy policy:

By using our Services, you acknowledge and agree that we will share contact information with other users our affiliates and business partners for the purpose of ensuring that their current contact information is up to date. You acknowledge that you have the rights and permissions required to allow us to share such contact information.


We do not share your Personal Information with third parties except:

  • The Services are based on contact management and Caller ID (if applicable), therefore, we will use your number and contact for this purpose. This disclosure is also important to individuals that are not our users which may be identified by the caller ID. We enable an easy opt-out in the event you no longer wish to be identified, for more information see the User Right section below.

  • We may also share Personal Information with (i) our subsidiaries and business partners if needed; (ii) subcontractors and other third party service providers (e.g. payment processors, server, cloud management, etc.); and (iii) any potential purchasers or investors of the Company.

There are several privacy concerns here:

  • Why is a tool that claims to keep contact information up to date selling data for marketing?
  • What is the connection between Simpler and Lusha.co?

Part 4 – Tying It All Together: Lusha.co and Simpler

At this point, it is fairly clear what happened – a bunch of mobile apps slurp up contacts from their users’ address books and provide them to Lusha.co to be used by marketers and recruiters. This is being presented as a “collaborative security solution” while it is essentially just selling personal data, albeit with an opt-out available. What is frustrating is that regular users who are friends with the people being targeted install these apps thinking they are simple utilities, while all of their contacts are actually being sold behind their backs. At the same time, Lusha.co is claiming to be a security solution while it is clearly not.


But there is more… a set of simple Google searches shows that one of the co-founders of Lusha.co (“Yoni Tserruya”) is actually the original developer of all four of these apps (here, here, here and here). Furthermore, if you download the Android apps provided by Simpler and look at the signing keys via jadx, they are issued to the same person, as seen below:

Screen Shot 2020-02-11 at 7.40.10 AMScreen Shot 2020-02-11 at 7.40.25 AM

Now, these apps are published by companies other than Lusha.co – Simpler Apps and Top Floor – but are they subsidiaries of, or related to, Lusha.co? If they are, then the privacy policies seem to allow them to transfer data from these apps back to Lusha.co. Overall, the arrangement may be legal, but it is arguably unethical.

Bottom line: this example highlights yet another way personal data is harvested, sold and re-used for commercial purposes.

Vendor Responses

We reached out for comment to all of the companies mentioned in the article and will update the blog post with feedback or comments.

Mapping the Jan 2020 Java Security Patches Back to the Original Source Code Changes

(UPDATE [01/16/2020]: Added clarification as to the source of the information for the source commits)

Earlier today, Oracle released patches for various flavors of Java. At the same time, the open source version of Java – OpenJDK – released its patches as well. The problem is that it is virtually impossible to figure out what the actual bugs are. Here are partial screenshots from Oracle’s critical patch update advisory, its text version, and the OpenJDK advisory:

Screen Shot 2020-01-15 at 10.38.47 PM Screen Shot 2020-01-15 at 10.40.43 PM Screen Shot 2020-01-15 at 10.41.46 PM

The actual CVE descriptions are also cryptic – here is an example of one (CVE-2020-2583):

Vulnerability in the Java SE, Java SE Embedded product of Oracle Java SE (component: Serialization). Supported versions that are affected are Java SE: 7u241, 8u231, 11.0.5 and 13.0.1; Java SE Embedded: 8u231. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise Java SE, Java SE Embedded. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of Java SE, Java SE Embedded.

Note: This vulnerability applies to Java deployments, typically in clients running sandboxed Java Web Start applications or sandboxed Java applets (in Java SE 8), that load and run untrusted code (e.g., code that comes from the internet) and rely on the Java sandbox for security. This vulnerability can also be exploited by using APIs in the specified Component, e.g., through a web service which supplies data to the APIs.

CVSS v3.0 Base Score 3.7 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L). 

The problem is that not enough information is being provided in order to make an educated decision, especially in cases where there are limited resources for patching. How do we figure out what the vulnerability actually is?

It is possible to derive the source code changes from the Debian package updates and the OpenJDK release notes, which map the CVEs to the actual source code changesets in the OpenJDK repository (HG and GitHub). For example, as seen below, “CVE-2020-2583” links to source code change # 8224909:

Screen Shot 2020-01-15 at 11.33.13 PM

Looking at the HG repository for OpenJDK and the GitHub mirror it is pretty easy to figure out:

Screen Shot 2020-01-15 at 11.32.20 PM

Screen Shot 2020-01-15 at 11.31.47 PM

And the actual changes can now be examined in detail:

Screen Shot 2020-01-15 at 11.34.46 PM Screen Shot 2020-01-15 at 11.34.31 PM

Based on those sources, we can now reconstruct the actual bugs and their source code changes for JDK 11 (Debian message here, based on the OpenJDK release notes here):

CVE ID | Component | CVSSv3 | Commit | Description
CVE-2020-2604 | core-libs/ | 7.1 | 8231422 (HG / GitHub) | Better serial filter handling
CVE-2020-2601 | security-libs/ | 6.8 | 8229951 (HG / GitHub) | Better Ticket Granting Services
CVE-2020-2655 | security-libs/ | 4.8 | 8231780 (HG / GitHub) | Better TLS messaging support
CVE-2020-2593 | core-libs/ | 4.8 | 8228548 (HG / GitHub) | Normalize normalization for all
CVE-2020-2654 | security-libs/ | 3.7 | 8234037 (HG / GitHub) | Improve Object Identifier Processing
CVE-2020-2590 | security-libs/ | 3.7 | 8226352 (HG / GitHub) | Improve Kerberos interop capabilities
CVE-2020-2583 | client-libs/ | 3.7 | 8224909 (HG / GitHub) | Unlink Set of LinkedHashSets
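The mapping above can be captured as a small lookup, transcribed directly from the table (the dictionary and helper names are ours):

```python
# CVE -> (OpenJDK bug number, commit synopsis), for the Jan 2020 JDK 11 patches
JAN_2020_JDK11_FIXES = {
    "CVE-2020-2604": ("8231422", "Better serial filter handling"),
    "CVE-2020-2601": ("8229951", "Better Ticket Granting Services"),
    "CVE-2020-2655": ("8231780", "Better TLS messaging support"),
    "CVE-2020-2593": ("8228548", "Normalize normalization for all"),
    "CVE-2020-2654": ("8234037", "Improve Object Identifier Processing"),
    "CVE-2020-2590": ("8226352", "Improve Kerberos interop capabilities"),
    "CVE-2020-2583": ("8224909", "Unlink Set of LinkedHashSets"),
}

def bug_for_cve(cve: str) -> str:
    """Return the OpenJDK bug ID whose commit fixed the given CVE."""
    return JAN_2020_JDK11_FIXES[cve][0]

assert bug_for_cve("CVE-2020-2583") == "8224909"
```

Armed with the bug ID, the corresponding changeset can then be located in the OpenJDK HG repository or its GitHub mirror, as shown in the screenshots.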

Additionally, for CVE-2020-2604, the Oracle release notes include this (the JDK bug # corresponds to the commit # above):

Screen Shot 2020-01-15 at 11.57.13 PM


Written by Y. Shafranovich

Behavior Change in Chrome’s Download Protection Service Affecting Privacy

The Download Protection service in Chrome protects users from downloading malware by blocking files that are known to be unsafe according to Google’s Safe Browsing service. The way this works is that Chrome sends some basic information about the file being downloaded, including its checksum, to the Safe Browsing service and receives a verdict.

Historically, not all file types were checked – instead, a list of extensions to be checked or ignored was defined in the Chrome source code, and only those extensions would be checked; any extension not on the list was ignored. These lists can be found in the file “download_file_types.asciipb”.

Apparently, this has changed recently. As per information provided in a recent bug report (https://crbug.com/1039128), Chrome now checks ALL extensions except for the ones on the whitelist. That means that when you download almost any file, its checksum and some other information about the file are sent to Google. It is not clear how this impacts privacy. Since other browsers use this API, it may impact them as well.
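The behavior change can be modeled with a toy sketch (the extension lists and function names here are ours, not Chrome’s; the real lists live in download_file_types.asciipb):

```python
# Toy model of the Download Protection behavior change.
CHECKED_EXTENSIONS = {"exe", "zip", "msi", "dmg"}   # old opt-in list (illustrative)
SAFE_EXTENSIONS = {"txt", "jpg", "png"}             # new opt-out whitelist (illustrative)

def old_should_check(filename: str) -> bool:
    """Old behavior: only extensions on an explicit list were checked."""
    return filename.rsplit(".", 1)[-1].lower() in CHECKED_EXTENSIONS

def new_should_check(filename: str) -> bool:
    """New behavior: everything is checked unless the extension is whitelisted."""
    return filename.rsplit(".", 1)[-1].lower() not in SAFE_EXTENSIONS

# A previously-ignored file type is now reported to Safe Browsing:
assert not old_should_check("records.mrc")
assert new_should_check("records.mrc")
```

The privacy-relevant consequence is the flipped default: unknown file types used to stay local and are now reported.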

You can disable this behavior by disabling the Safe Browsing API entirely but that will also disable checks for malicious URLs. As of now, there doesn’t seem to be an option to disable download protection separately from safe browsing for URLs. The Safe Browsing API does include an option to download a list of dangerous files locally but it doesn’t appear that Chrome uses it yet.

You can see this information by typing the following in Chrome’s URL bar and checking the “Download Protection” tab.

Example of a MRC file being downloaded from the Internet Archive:

Screen Shot 2020-01-12 at 10.30.18 AM

For some files (primarily ZIPs) a hash is sent as well. You can see this when trying to download the EICAR test file (standard test file for antivirus software):

"archive_directory_count": 0,
"archive_file_count": 1,
"archive_valid": true,
"archived_binary": [ {
"digests": {
"sha256": "2546DCFFC5AD854D4DDC64FBF056871CD5A00F2471CB7A5BFD4AC23B6E9EEDAD"
"download_type": 6,
"file_basename": "eicar_com.zip",
"length": 184
} ],
"download_type": 5,
"file_basename": "eicarcom2.zip",
"length": 308,
"referrer_chain": [ {
"ip_addresses": [ "" ],
"is_retargeting": false,
"is_subframe_referrer_url_removed": false,
"is_subframe_url_removed": false,
"main_frame_url": "",
"maybe_launched_by_external_application": false,
"navigation_initiation": "RENDERER_INITIATED_WITH_USER_GESTURE",
"navigation_time_msec": 1.628252679503e+12,
"referrer_main_frame_url": "",
"referrer_url": "https://www.eicar.org/?page_id=3950",
"server_redirect_chain": [ ],
"type": "EVENT_URL",
"url": "https://secure.eicar.org/eicarcom2.zip"
}, {
"ip_addresses": [ "" ],
"is_retargeting": false,
"is_subframe_referrer_url_removed": false,
"is_subframe_url_removed": false,
"main_frame_url": "",
"maybe_launched_by_external_application": false,
"navigation_initiation": "RENDERER_INITIATED_WITH_USER_GESTURE",
"navigation_time_msec": 1.628252674199e+12,
"referrer_main_frame_url": "",
"referrer_url": "https://www.google.com/search?q=antivirustst+file&oq=antivirustst+file",
"server_redirect_chain": [ ],
"type": "LANDING_PAGE",
"url": "https://www.eicar.org/?page_id=3950"
}, {
"ip_addresses": [ "" ],
"is_retargeting": false,
"is_subframe_referrer_url_removed": false,
"is_subframe_url_removed": false,
"main_frame_url": "",
"maybe_launched_by_external_application": false,
"navigation_initiation": "BROWSER_INITIATED",
"navigation_time_msec": 1.62825267089e+12,
"referrer_main_frame_url": "",
"referrer_url": "",
"server_redirect_chain": [ ],
"url": "https://www.google.com/search?q=antivirustst+file&oq=antivirustst+file"
} ],
"request_ap_verdicts": false,
"url": "https://secure.eicar.org/eicarcom2.zip"


Written by Y. Shafranovich.

Insecure Defaults in Adobe’s Mobile SDKs


Example/default configuration files provided by Adobe within their mobile SDKs include several insecure options. These have also been found in the wild in multiple mobile applications. When these options are used insecurely, attackers can view or modify information transmitted by the application back to Adobe’s cloud services.

Application developers are encouraged to check the configuration files within their own applications to make sure these options are set correctly. The vendor has updated some of these files with secure alternatives – for others, new SDKs are available with secure defaults.

We also have a tool available (“truegaze“) that can be used for static scanning of mobile applications with insecure defaults in their Adobe configurations.


Adobe provides multiple mobile SDKs intended for integration into mobile applications across multiple platforms. These SDKs communicate between the mobile apps and the vendor-provided cloud services. Some of the example/default configuration files include insecure settings, which can lead to developers copying them into their own applications – and we have observed this behavior in the wild. We are also working on automated tools to detect these files with insecure settings within mobile applications.

The main configuration file for these SDKs is called “ADBMobileConfig.json” and is usually packaged within the application file. There are several insecure settings within this file which may lead to sensitive data being transmitted without SSL, where it can be seen or modified by an attacker with access to the network traffic. These include:

  • analytics -> ssl – enables (true) or disables (false) sending measurement data over SSL (HTTPS). Default is false. This is the one most commonly found and should be set to “true”.
  • mediaHeartbeat -> ssl – enables (true) or disables (false) sending heartbeat data over SSL (HTTPS). Default is false.

There are also additional settings which can be incorrectly set not to use SSL, but are not usually presented that way by default:

  • postback -> templateurl – configuration for the postback URL, which is used to send data collected by the SDK to a third-party server
  • remotes – defines the Adobe-hosted endpoints for dynamic configuration files including:
    • analytics.poi – endpoint for hosted POI configuration.
    • messages – endpoint for hosted in-app message configuration

This can also be configured via code as follows:

  • C/C++/Objective C – hbConfig.ssl = NO;
  • JS – MediaHeartbeatConfig.ssl = false

Here is an abbreviated example file with the insecure settings highlighted:

  "analytics": {
    "ssl": false,
  "messages": [
      "payload": {
        "templateurl": "http://example.com/subscriptions/{%mcid%}",
  "remotes": { 
        "analytics.poi": "http://assets.adobedtm.com/staging/42a6fc9b77cd9f29082cf19b787bae75b7d1f9ca/scripts/satellite-53e0faadc2f9ed92bc00003b.json", 
        "messages": "http://assets.adobedtm.com/staging/42a6fc9b77cd9f29082cf19b787bae75b7d1f9ca/scripts/satellite-53e0f9e2c2f9ed92bc000032.json" 

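A minimal static check along the lines of our truegaze tool could look like the following sketch (the function name is ours; absent ssl values are flagged because the SDK defaults to false, as described above):

```python
import json

def find_insecure_settings(config: dict) -> list:
    """Flag the insecure ADBMobileConfig.json settings described above."""
    issues = []
    if config.get("analytics", {}).get("ssl") is not True:
        issues.append("analytics.ssl is not true")
    if config.get("mediaHeartbeat", {}).get("ssl") is not True:
        issues.append("mediaHeartbeat.ssl is not true")
    for name, url in config.get("remotes", {}).items():
        if url.startswith("http://"):
            issues.append(f"remotes.{name} uses plain HTTP")
    return issues

sample = json.loads('{"analytics": {"ssl": false}, '
                    '"remotes": {"messages": "http://example.com/cfg.json"}}')
assert "analytics.ssl is not true" in find_insecure_settings(sample)
assert "remotes.messages uses plain HTTP" in find_insecure_settings(sample)
```

A real scanner would also unpack the APK/IPA to locate the configuration file and check the postback templates.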

The following examples/docs were reported to the vendor and were updated to have secure defaults:

The following have insecure defaults and are present within vendor-provided code, documentation or code samples. The vendor will not be fixing them:

Vendor Response and Mitigation

Application developers utilizing the Adobe SDK within their applications should check the configuration for the SDK to make sure all of the options are set securely.

The vendor provided the following response:

Thanks for reaching out to Adobe.  The configuration file you identified is an empty “sample” file, and we’re working with the owner to update that config to use SSL by default.  In practice, Adobe customers will either:

1. Download a file from Mobile Services (where SSL is on by default)
2. Engage Adobe professional services to create a configuration file (wherein SSL is recommended) or,  
3. Customers will create their own configuration (where the vast majority enable SSL)

Additionally, we’ve released a new version of the SDK (https://github.com/Adobe-Marketing-Cloud/acp-sdks), configurable in Launch, where SSL is always turned on by default. 

The vendor also fixed most of these issues and provided the following response regarding the remaining unfixed issues:

Adobe has announced end-of-support for these vulnerable SDKs and encourages customers to move to our new version of the SDK where SSL is the default:


Static Scanning Tools

We have developed an open source tool that can be used for static scanning of mobile applications with insecure defaults in their Adobe configurations. You can find it here:



Adobe tracker # PSIRT-9709
Vendor documentation: see here


Advisory written by Y. Shafranovich.


2019-03-04: Initial report to the vendor
2019-05-06: Followup communication with the vendor
2019-07-28: Draft blog post sent to the vendor for review
2019-08-01: Follow-up communication with the vendor
2019-08-09: Follow-up communication with the vendor
2019-10-08: Follow-up communication with the vendor
2019-10-29: Follow-up communication with the vendor
2019-10-30: Ok to publish received from the vendor
2019-11-06: Public disclosure

Media Coverage

Path-Style Model for AWS S3 Can Lead to Namespace Squatting


The path-style model for AWS S3 and other services supporting S3 APIs can lead to namespace squatting. An attacker can create a bucket that shares its name with a special filename like “robots.txt” or “crossdomain.xml”, and insert their own content via files placed in that bucket. Services that rely on filename-based verification of domain ownership and are not precise about checking the content of such files may end up incorrectly verifying ownership of the parent domain. We have not yet been able to confirm this via testing.

AWS will be deprecating this functionality as of September 30th, 2020.


Amazon Web Services (AWS) provides a storage service called Simple Storage Service (S3) which allows users to store data as files located inside separate locations called buckets (see docs). S3 currently supports two different addressing models: path-style and virtual-hosted style. The path-style looks like this:


The virtual-hosted style looks like this:


It is possible to name a bucket using a reserved name like “robots.txt”, “sitemap.xml” or “crossdomain.xml” and have it be available via path-style addressing. HOWEVER, the only thing that gets returned is an XML-type directory listing. An attacker can add additional files to that bucket to try to influence the directory listing, but most parsers would disregard the entire file since it is malformed. What may end up happening is that the attacker essentially squats this namespace.
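To illustrate, the two addressing styles can be sketched as simple URL templates (assuming the default s3.amazonaws.com endpoint; the function names are ours):

```python
def path_style(bucket: str, key: str) -> str:
    """Path-style addressing: the bucket name appears in the URL path."""
    return f"https://s3.amazonaws.com/{bucket}/{key}"

def virtual_hosted_style(bucket: str, key: str) -> str:
    """Virtual-hosted style: the bucket name appears as a subdomain."""
    return f"https://{bucket}.s3.amazonaws.com/{key}"

# A bucket named "robots.txt" squats the path-style namespace, so a request
# for https://s3.amazonaws.com/robots.txt returns that bucket's XML listing:
assert path_style("robots.txt", "x.html") == \
    "https://s3.amazonaws.com/robots.txt/x.html"
```

The virtual-hosted style has no such collision, since the bucket name never appears in the path of the shared domain.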

It is not possible to reserve anything in the “.well-known” directory since it starts with a period and bucket names must start with a lowercase letter or a number. Thus it would not be possible to get an SSL certificate issued this way.

Additionally, if a third-party service like Google Webmaster Tools, Bing, etc. uses a domain-validation approach to verify ownership by placing a file in the root directory, it may be possible to claim the “s3.amazonaws.com” domain as follows:

1. Create a bucket matching the verification name of the file.
2. Add the verification content as a key in that bucket.
3. Make the bucket public.

When the verification service hits the URL for “s3.amazonaws.com/verification.html”, it will hit the bucket listing that was created. If the service disregards the malformed XML and uses the value it finds, it may end up registering the service domain in the user’s account.

In our testing we have not yet found such a service – most services will not parse the XML file that the directory listing produces.

Vendor Response and Mitigation

The vendor provided the following response:

We do not believe the behavior you describe in this report presents a security concern, given what you have outlined is theoretical.

Additionally, AWS has announced that the path-style addressing model will be deprecated as of September 30th, 2020 (see here and here).


Text written by Y. Shafranovich.


2019-02-03: Initial report to the vendor
2019-02-06: Followup communication with the vendor
2019-02-12: Followup communication with the vendor
2019-02-18: Followup communication with the vendor
2019-02-19: Followup communication with the vendor
2019-05-03: Followup communication with the vendor
2019-07-28: Draft blog post sent to the vendor for review
2019-08-14: Public disclosure

Brief Notes on Gmail for Android and Confidential Mode

Recently Google launched “Confidential Mode” for Gmail which seeks to protect sensitive information from unauthorized access – details here.

Some brief notes:

  • On the web version of Gmail, when replying to a confidential message, the reply is also sent as confidential. However, when using Gmail for Android that is not true – instead you get a warning that the message will not be sent with confidential mode.
  • When viewing confidential mode emails with Gmail for Android, FLAG_SECURE is not used (see our post here). That means other applications on the same device with the screen capture permissions can capture this content as well. This was reported to Google (issue # 112838515) and they do not consider it a security issue.

XSS in SSI printenv command – Apache Tomcat – CVE-2019-0221


Apache Tomcat had a vulnerability in its SSI implementation which could be used to achieve cross-site scripting (XSS). This is only exploitable if SSI is enabled and the “printenv” directive is used, which is unlikely in a production system.

The vendor has rated this as a Low severity issue. A fix was released in versions 7.0.94, 8.5.40 and 9.0.19. Users are encouraged to upgrade as soon as possible. CVE-2019-0221 has been assigned to track this issue.

Vulnerability Details

Server Side Includes (SSI) is a simple server-side scripting language used by some web servers for functionality like including files, echoing the values of variables, and displaying basic information about files. Note that these variables ARE NOT environment variables but are specific to SSI. They are either set by the user or contain information about the incoming HTTP request (see the full list here).

The “echo” directive prints the value of a single variable, while the “printenv” directive prints the values of all variables. Both of these directives output HTML. The Apache Tomcat implementation correctly escapes output when using the “echo” directive, but not for the “printenv” directive. As a result, if an application uses this directive, an attacker can inject malicious input that is reflected unescaped in the output, causing XSS.
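The missing step is ordinary HTML entity encoding of variable values before they are written into the response. Tomcat’s actual fix is in Java; the following Python sketch (names ours) just illustrates the difference between escaped and unescaped output:

```python
from html import escape

def render_printenv(variables: dict, encode: bool) -> str:
    """Render SSI variables as HTML, with or without output encoding."""
    items = []
    for name, value in variables.items():
        items.append(f"{name}={escape(value) if encode else value}")
    return "<br>".join(items)

evil = {"QUERY_STRING_UNESCAPED": "<script>alert(1)</script>"}
# Unencoded output reflects the attacker's markup verbatim (the XSS):
assert "<script>" in render_printenv(evil, encode=False)
# Encoded output neutralizes it, as the patched "printenv" now does:
assert "&lt;script&gt;" in render_printenv(evil, encode=True)
```

This is the same transformation the Tomcat commit applies to each variable name and value before concatenating them into the page.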

Compare the code from the “echo” parameter which encodes the output correctly:

Screen Shot 2019-05-27 at 11.18.07 AM.png

Versus the code for the “printenv” parameter which DOES NOT encode the output:

Screen Shot 2019-05-27 at 11.21.13 AM.png

The fix is to add encoding as seen in this commit:

Screen Shot 2019-05-27 at 11.22.40 AM.png

In order to exploit this, several things have to be true:

  1. SSI support has to be enabled in Apache Tomcat – either globally or on a specific web application. It is NOT ENABLED by default.
  2. A file with the “printenv” SSI directive must exist within the web application (usually “.shtml”).
  3. That file must be accessible to the attacker.

Steps To Replicate

1. Install a Java Runtime Environment (JRE) in Windows.

2. Download a vulnerable version of Tomcat and extract.

3. Modify the conf\context.xml file on line 19, to enable privileged context (this can also be done on individual applications instead of globally):

<Context privileged="true">

4. Modify conf\web.xml to enable the SSI Servlet as per instructions here (this can also be done on individual applications instead of globally).

5. Put the following code in “webapps/ROOT/ssi/printenv.shtml”:

Echo test: <!--#echo var="QUERY_STRING_UNESCAPED" --><br/><br/>
Printenv test: <!--#printenv -->

6. Run Tomcat via the following command:

cd bin
catalina run

7. Call the following URLs to observe the XSS (you may need to use Firefox). Observe the difference between the “echo” directive, which escapes properly, and the “printenv” directive, which does not:




Screen Shot 2019-02-17 at 10.11.32 AM

Screen Shot 2019-02-17 at 10.10.55 AM.png

Vendor Response

This issue was responsibly reported to the vendor via the EU FOSSA bounty program operated by Intigriti. The vendor assigned CVE-2019-0221 to track this issue and provided a fix.

The vendor rated this issue as “Low Impact” on the following basis:

  • SSI is disabled by default
  • hardly anyone uses SSI
  • printenv is really a debug command that you would not expect to find
    used in a production system

The vendor also indicated that if there was a lower impact level, they would have used it as they consider the chances of a production system being exposed to this vulnerability to be very close to zero.

The vendor indicated that the following versions are vulnerable (no information is available on earlier versions):

  • Tomcat 9 – versions 9.0.0.M1 through 9.0.17 (9.0.18 is not affected)
  • Tomcat 8 – versions 8.5.0 to 8.5.39
  • Tomcat 7 – versions 7.0.0 to 7.0.93

Users are encouraged to upgrade to the following fixed versions or later:

  • Tomcat 9 – version 9.0.19 – details
  • Tomcat 8 – version 8.5.40 – details
  • Tomcat 7 – version 7.0.94 – details

Bounty Information

This report satisfied the requirement of the EU FOSSA bounty program and a bounty has been paid.


Apache SSI reference: see here – mod_include
CVE-ID: CVE-2019-0221
CVSS 2.0 Score: pending
CVSS 3.0 Score: pending
Tomcat SSI documentation: see here
Vendor advisory: see here


Text of the advisory written by Yakov Shafranovich.


2019-02-17: Initial report submitted to the platform
2019-02-19: Initial report validated by the platform
2019-03-12: Report accepted by the vendor
2019-05-17: Public advisory issued by the vendor
2019-05-27: Public disclosure by reporter