Path-Style Model for AWS S3 Can Lead to Namespace Squatting

Summary

The path-style addressing model for AWS S3 and other services supporting S3 APIs can lead to namespace squatting. An attacker can create a bucket that shares its name with a special filename like “robots.txt” or “crossdomain.xml”, and insert their own content as files placed inside that bucket. Services that rely on file-based verification of domain ownership, and are not precise about checking the content of such files, may end up incorrectly verifying ownership of the parent domain. We have not yet been able to confirm this via testing.

AWS will be deprecating this functionality as of September 30th, 2020.

Details

Amazon Web Services (AWS) provides a storage service called Simple Storage Service (S3) which allows users to store data as files located inside separate containers called buckets (see docs). S3 currently supports two different addressing models: path-style and virtual-hosted style. The path-style looks like this:

https://s3.amazonaws.com/bucket/file

The virtual-hosted style looks like this:

https://bucket.s3.amazonaws.com/file

It is possible to name a bucket using a reserved name like “robots.txt”, “sitemap.xml” or “crossdomain.xml” and have it be available via path-style addressing. HOWEVER, the only thing that would get returned is an XML-type directory listing. An attacker can add additional files into that bucket to try to influence the directory listing, but most parsers would disregard the entire file since it is malformed. The net effect is that the attacker essentially squats this namespace.

It is not possible to reserve anything in the “.well-known” directory since it starts with a period and bucket names must start with a lowercase letter or a number. Thus, it would not be possible to get an SSL certificate issued this way.

Additionally, if a third-party service like Google Webmaster Tools, Bing, etc. uses a domain validation approach that verifies ownership by placing a file in the root directory, it may be possible to claim the “s3.amazonaws.com” domain as follows:

1. Create a bucket matching the verification name of the file.
2. Add the verification content as a key in that bucket.
3. Make the bucket public.

When the verification service hits the URL for “s3.amazonaws.com/verification.html”, it will receive the bucket listing that was created. If the service ignores the fact that the response is XML and simply looks for the value it expects, it may end up registering “s3.amazonaws.com” under the attacker’s account.
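To make the weak check concrete, here is a minimal sketch (in Java) of the kind of naive verifier that could be fooled. The bucket name, file name and token are placeholders; the only assumption is that the verifier merely checks whether its token appears anywhere in the response body:

import java.io.InputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class NaiveVerifier {
    public static void main(String[] args) throws Exception {
        String token = "example-verification-token"; // hypothetical token
        // Path-style URL for a hypothetical verification file name
        URL url = new URL("https://s3.amazonaws.com/example-verification.html");
        try (InputStream in = url.openStream()) {
            String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
            // Weak check: the token only has to appear *somewhere* in the response.
            // If an attacker created a bucket named "example-verification.html"
            // containing a key named after the token, the S3 bucket listing XML
            // would include that key name and this check would pass.
            System.out.println(body.contains(token) ? "verified" : "not verified");
        }
    }
}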

In our testing we have not yet found such a service – most services will not parse the XML file that the directory listing produces.

Vendor Response and Mitigation

The vendor provided the following response:

We do not believe the behavior you describe in this report presents a security concern, given what you have outlined is theoretical.

Additionally, AWS has announced that the path-style addressing model will be deprecated as of September 30th, 2020 (see here and here).

Credits

Text written by Y. Shafranovich.

Timeline

2019-02-03: Initial report to the vendor
2019-02-06: Followup communication with the vendor
2019-02-12: Followup communication with the vendor
2019-02-18: Followup communication with the vendor
2019-02-19: Followup communication with the vendor
2019-05-03: Followup communication with the vendor
2019-07-28: Draft blog post sent to the vendor for review
2019-08-14: Public disclosure

Brief Notes on Gmail for Android and Confidential Mode

Recently Google launched “Confidential Mode” for Gmail which seeks to protect sensitive information from unauthorized access – details here.

Some brief notes:

  • On the web version of Gmail, when replying to a confidential message, the reply is also sent as confidential. However, when using Gmail for Android that is not true – instead you get a warning that the message will not be sent with confidential mode.
  • When viewing confidential mode emails with Gmail for Android, FLAG_SECURE is not used (see our post here). That means other applications on the same device with screen capture permissions can capture this content as well. This was reported to Google (issue # 112838515) and they do not consider it a security issue.

XSS in SSI printenv command – Apache Tomcat – CVE-2019-0221

Summary

Apache Tomcat had a vulnerability in its SSI implementation which could be used to achieve cross-site scripting (XSS). This is only exploitable if SSI is enabled and the “printenv” directive is used, which is unlikely in a production system.

The vendor has rated this as a Low severity issue. A fix was released in versions 7.0.94, 8.5.40 and 9.0.19. Users are encouraged to upgrade as soon as possible. CVE-2019-0221 has been assigned to track this issue.

Vulnerability Details

Server Side Includes (SSI) is a simple scripting language used by some web servers for functionality like including files, echoing the values of variables and displaying basic information about files. Note that SSI variables ARE NOT environment variables but are specific to SSI. They either have been set by the user or contain information about the incoming HTTP request (see full list here).

The “echo” directive prints out the value of a single variable while the “printenv” directive prints out the values of all variables. Both of these directives output HTML. The Apache Tomcat implementation correctly escapes the output of the “echo” directive but not of the “printenv” directive. As a result, if an application uses this directive, an attacker can inject malicious input that is reflected in the output without escaping, causing XSS.
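As a simplified illustration of the class of bug (this is a sketch, not the actual Tomcat source), the difference between the two code paths boils down to whether the variable values are HTML-encoded before being written to the response:

import java.util.LinkedHashMap;
import java.util.Map;

public class PrintenvSketch {
    // Minimal HTML-encoding helper, standing in for a real encoder
    static String htmlEscape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;")
                .replace(">", "&gt;").replace("\"", "&quot;");
    }

    public static void main(String[] args) {
        Map<String, String> vars = new LinkedHashMap<>();
        // QUERY_STRING_UNESCAPED is attacker-controlled via the URL
        vars.put("QUERY_STRING_UNESCAPED", "<script>alert('xss')</script>");

        // "echo"-style output: encoded, so the payload renders as harmless text
        System.out.println("echo: " + htmlEscape(vars.get("QUERY_STRING_UNESCAPED")));

        // "printenv"-style output before the fix: written as-is, so the
        // payload would execute in the victim's browser
        for (Map.Entry<String, String> e : vars.entrySet()) {
            System.out.println(e.getKey() + "=" + e.getValue());
        }
    }
}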

Compare the code for the “echo” directive, which encodes the output correctly:

[screenshot of the “echo” implementation]

Versus the code for the “printenv” directive, which DOES NOT encode the output:

[screenshot of the “printenv” implementation]

The fix is to add encoding as seen in this commit:

[screenshot of the fix commit]

In order to exploit this, several things have to be true:

  1. SSI support has to be enabled in Apache Tomcat – either globally or on a specific web application. It is NOT ENABLED by default.
  2. A file with the “printenv” SSI directive must exist within the web application (usually “.shtml”).
  3. That file must be accessible to the attacker.

Steps To Replicate

1. Install a Java Runtime Environment (JRE) in Windows.

2. Download a vulnerable version of Tomcat and extract.

3. Modify the conf\context.xml file on line 19, to enable privileged context (this can also be done on individual applications instead of globally):

<Context privileged="true">

4. Modify conf\web.xml to enable the SSI Servlet as per instructions here (this can also be done on individual applications instead of globally).

5. Put the following code in “webapps/ROOT/ssi/printenv.shtml”:

<html><head><title></title><body>
Echo test: <!--#echo var="QUERY_STRING_UNESCAPED" --><br/><br/>
Printenv test: <!--#printenv -->
</body></html>

6. Run Tomcat via the following command:

cd bin
catalina run

7. Call the following URLs to observe the XSS (you may need to use Firefox). Observe the difference between the “echo” directive, which escapes properly, and the “printenv” directive, which does not:

http://localhost:8080/ssi/printenv.shtml?%3Cbr/%3E%3Cbr/%3E%3Ch1%3EXSS%3C/h1%3E%3Cbr/%3E%3Cbr/%3E

http://localhost:8080/ssi/printenv.shtml?%3Cscript%3Ealert(%27xss%27)%3C/script%3E

Screenshots:

[screenshots]

Vendor Response

This issue was responsibly reported to the vendor via the EU FOSSA bounty program operated by Intigriti. The vendor assigned CVE-2019-0221 to track this issue and provided a fix.

The vendor rated this issue as “Low Impact” on the following basis:

  • SSI is disabled by default
  • hardly anyone uses SSI
  • printenv is really a debug command that you would not expect to find used in a production system

The vendor also indicated that if there was a lower impact level, they would have used it as they consider the chances of a production system being exposed to this vulnerability to be very close to zero.

The vendor indicated that the following versions are vulnerable (no information is available on earlier versions):

  • Tomcat 9 – versions 9.0.0.M1 through 9.0.17 (9.0.18 is not affected)
  • Tomcat 8 – versions 8.5.0 to 8.5.39
  • Tomcat 7 – versions 7.0.0 to 7.0.93

Users are encouraged to upgrade to the following fixed versions or later:

  • Tomcat 9 – version 9.0.19 – details
  • Tomcat 8 – version 8.5.40 – details
  • Tomcat 7 – version 7.0.94 – details

Bounty Information

This report satisfied the requirement of the EU FOSSA bounty program and a bounty has been paid.

References

Apache SSI reference: see here – mod_include
CVE-ID: CVE-2019-0221
CVSS 2.0 Score: pending
CVSS 3.0 Score: pending
Tomcat SSI documentation: see here
Vendor advisory: see here

Credits

Text of the advisory written by Yakov Shafranovich.

Timeline

2019-02-17: Initial report submitted to the platform
2019-02-19: Initial report validated by the platform
2019-03-12: Report accepted by the vendor
2019-05-17: Public advisory issued by the vendor
2019-05-27: Public disclosure by reporter

Exploring the File System via Jenkins Credentials Plugin Vulnerability – CVE-2019-10320

Summary

The recently fixed vulnerability in the Jenkins Credentials plugin (fixed in v2.1.19) allowed users with certain permissions to confirm the existence of files on the server’s file system. While this doesn’t allow an attacker to view file contents, the ability to obtain information about the file system can be leveraged for other attacks. In this post we will explain how to reproduce this vulnerability.

It is also possible to load credentials from a valid PKCS#12 file on the Jenkins server, and obtain access to the contents of those credentials via a job. That may be addressed in a future blog post.

PLEASE NOTE: This is only exploitable by users that have sufficient access to the Jenkins server to add or update credentials. Usually anonymous users do not have that level of access.

Prerequisites

You will need to download, install and initialize Jenkins following these instructions. DO NOT install any plugins during the installation process. When done, you should be able to log in to Jenkins via the following URL: “http://localhost:8080/“.

Installing the Vulnerable Plugin

1. Download the vulnerable plugin (v2.1.18) from the Jenkins update site as an HPI file:

[screenshot]

2. Go to the Jenkins plugin manager, and click the advanced tab (“http://localhost:8080/pluginManager/advanced“) to get to the manual plugin installation page. Select the HPI file downloaded in the previous step and install it. Restart the Jenkins server (“http://localhost:8080/restart“) after the plugin has been installed.

[screenshot]

3. Go to the Jenkins management page (“http://localhost:8080/manage“) and the plugin manager (“http://localhost:8080/pluginManager/“) to confirm that the vulnerable plugin has been installed.

[screenshots]

Getting to the Vulnerable Page

1. Log in to Jenkins, then go to “Credentials”, “System”, “Global Credentials”. Click the new option “Add Credentials” that appears on the left side. The user that you are using MUST have sufficient permissions to add or update credentials. You can also reach this page by going directly to “http://localhost:8080/credentials/store/system/domain/_/newCredentials“.

[screenshots]

2. In the “Kind” drop down box select “Certificate”, and from the two radio buttons select “From a PKCS#12 file on Jenkins master”.

[screenshot]

Exploitation

Enter a path in the “file” box and click anywhere on the page to trigger a refresh. You will get the error message “The file xxxx doesn’t exists” if the file is not present, OR “Could not load keystore” if the file does exist. This allows an attacker to explore the file system and confirm whether specific files exist or not. While file contents cannot be viewed (unless they are valid PKCS#12 files), the attacker can use this technique to help advance other attacks.
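The oracle exists because the form validation distinguishes a missing file from an unreadable keystore. Here is a simplified sketch of that kind of check (illustrative only, not the actual plugin code):

import java.io.File;
import java.io.FileInputStream;
import java.security.KeyStore;

public class KeystoreCheckSketch {
    // Returns a different message depending on whether the file exists,
    // which is what lets an attacker probe the file system.
    static String check(String path, char[] password) {
        File f = new File(path);
        if (!f.exists()) {
            return "The file " + path + " doesn't exist";  // file is absent
        }
        try (FileInputStream in = new FileInputStream(f)) {
            KeyStore ks = KeyStore.getInstance("PKCS12");
            ks.load(in, password);
            return "Keystore loaded";                       // file is a valid PKCS#12 keystore
        } catch (Exception e) {
            return "Could not load keystore";               // file exists but is not a keystore
        }
    }

    public static void main(String[] args) {
        System.out.println(check("/etc/hosts", new char[0]));
        System.out.println(check("/no/such/file", new char[0]));
    }
}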

[screenshots]

References

CVE-ID: CVE-2019-10320
Vendor advisory: see here

Remote Code Execution (RCE) in CGI Servlet – Apache Tomcat on Windows – CVE-2019-0232

Summary

Apache Tomcat has a vulnerability in the CGI Servlet which can be exploited to achieve remote code execution (RCE). This is only exploitable when running on Windows in a non-default configuration in conjunction with batch files.

The vendor released a fix in Tomcat versions 7.0.94, 8.5.40 and 9.0.19. Users are encouraged to upgrade as soon as possible. CVE-2019-0232 has been assigned to track this issue.

Vulnerability Details

Common Gateway Interface (CGI) is a standard protocol to allow web servers to execute command line programs / scripts via web requests. This protocol also allows passing of command line arguments to the script or program being executed via URL parameters. The protocol itself is defined in RFC 3875.

The following CGI request:

  • http://localhost:8080/cgi/test.bat?&dir

converts to:

  • test.bat &dir

Apache Tomcat supports execution of CGI scripts / programs in a non-default configuration via a special CGI servlet. This servlet also parses URL parameters and translates them into command line arguments. The actual execution of the CGI scripts happens via Java Runtime Environment (JRE)’s java.lang.Runtime class, exec() function.

When CGI support is enabled in Apache Tomcat on Windows, and command line argument passing is enabled, it is possible to cause command injection via parameter interpolation when calling a batch file (*.bat / *.cmd). This happens because “cmd.exe” performs interpolation on some special characters before execution, which can cause other shell commands to be called. Neither Apache Tomcat nor the Windows JRE performs any kind of input validation for these special characters. A partial list of these characters can be found here and here. Additional information about why this issue is specific to the Windows JRE can be found in this blog post by Markus Wulftange.
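As a rough illustration of the underlying behavior (assuming a Windows machine and a test.bat in the working directory that simply does “@echo %1”), the following sketch shows how a metacharacter passed as a single argument to a batch file gets re-interpreted by cmd.exe as a second command:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class BatchArgSketch {
    public static void main(String[] args) throws Exception {
        // "&dir" is passed as a single argument, but when Windows starts the
        // batch file via cmd.exe the metacharacter is re-parsed, so "dir"
        // runs as a second command.
        Process p = Runtime.getRuntime().exec(new String[] { "test.bat", "&dir" });
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);
            }
        }
        p.waitFor();
    }
}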

Steps To Replicate

1. Install a Java Runtime Environment (JRE) in Windows.

2. Download a vulnerable version of Tomcat and extract.

3. Modify the conf\context.xml file on line 19, to enable privileged context:

<Context privileged="true">

4. Modify conf\web.xml to enable the CGI Servlet by removing the comments around line 387 as follows and adding the following parameters (enableCmdLineArguments is only needed for Tomcat 9):

<servlet>
<servlet-name>cgi</servlet-name>
<servlet-class>org.apache.catalina.servlets.CGIServlet</servlet-class>
<init-param>
  <param-name>cgiPathPrefix</param-name>
  <param-value>WEB-INF/cgi</param-value>
</init-param>
<init-param>
  <param-name>executable</param-name>
  <param-value></param-value>
</init-param>
<init-param>
  <param-name>enableCmdLineArguments</param-name>
  <param-value>true</param-value>
</init-param>
<load-on-startup>5</load-on-startup>
</servlet>

5. Enable the CGI servlet mapping by removing the comments around it – you also need to change the URL pattern to “/cgi/*” so it matches the servlet configured in the previous step:

<servlet-mapping>
<servlet-name>cgi</servlet-name>
<url-pattern>/cgi/*</url-pattern>
</servlet-mapping>

6. Create a folder for the CGI files:

mkdir webapps\ROOT\WEB-INF\cgi

7. Place the following text into a batch file located in “webapps\ROOT\WEB-INF\cgi\test.bat”

@echo off
echo Content-Type: text/plain
echo.
echo Hello, World!

8. Run Tomcat via the following command:

cd bin
catalina run

9. Trigger the following URL and observe the dir command being run:

http://localhost:8080/cgi/test.bat?&dir

[screenshot]

Additional Notes – Environment Variables and Path

By default, Tomcat doesn’t pass all of the environment variables from the parent process that runs Tomcat itself. That means that if you run “set”, you will not see any environment variables other than those set by Tomcat itself. This also means that you need to spell out the full path of the command you are trying to run. However, if the “passShellEnvironment” parameter is set to true, the variables from the parent process will be passed through and you can call any command on the PATH as well as view those variables. If the command cannot be found, there will be an error in the console log: “XXXX is not recognized as an internal or external command”.

Example of trying to run a command without a full path, including the Tomcat console logs:

[screenshots]

Examples of running with the parameter set or with the full path spelled out:

[screenshots]

Example of trying to view the environment variables without and with the “passShellEnvironment” parameter set to “true”:

[screenshots]

Additional Notes – Memory Leaks / Denial of Service

If the command being executed is a long running command, it may be possible to cause a denial of service or a memory leak. This happens because Tomcat waits for the OS process to complete.

Here is an example of netstat being triggered:

[screenshots]

Additional Notes – Other Commands and STDERR

The “executable” parameter indicates which executable should be used to run the script. By default, this is set to “perl” with the expectation that the files being executed are Perl scripts. If this is set to empty, then it is possible to execute batch files since those are executed by “cmd.exe”. HOWEVER, it seems that the command interpolation only happens with batch files – if this is set to a real program, then command interpolation doesn’t necessarily occur and this vulnerability may not be exploitable.

Also, if the command being triggered outputs to STDERR instead of STDOUT, that output doesn’t get piped back to the web request – instead it goes to the Tomcat console log.

Here is an example when “java.exe” is set as the executable parameter and produces output to STDERR:

[screenshot]

Vendor Response

This issue was responsibly reported to the vendor via the EU FOSSA bounty program operated by Intigriti. Vendor analysis indicated that the core cause for this issue has to do with the way the Java Runtime Environment (JRE) interprets command arguments in Windows specifically and doesn’t impact Apache Tomcat when used with other operating systems. The vendor assigned CVE-2019-0232 to track this issue and provided a fix.

The vendor fix consists of two parts:

  • Disabling command line arguments from being passed to the CGI servlet in the default configuration (“enableCmdLineArguments” set to “false“) for Tomcat 7 and 8  – this was already disabled by default in Tomcat 9.
  • Adding a new configuration parameter (“cmdLineArgumentsDecoded“) to the default CGI configuration that will be used for input validation if passing of command line arguments is enabled and will be set to the following regular expression (OS specific). Note that if the user changes this parameter, they may become vulnerable again.
    • Windows - [[a-zA-Z0-9\Q-_.\\/:\E]+]
    • Other operating systems - [.*]
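As a quick way to see the effect of the new validation, the following sketch tests a few sample arguments against the Windows expression above (transcribed by hand into a Java regex literal; check the Tomcat documentation for the authoritative default value):

import java.util.regex.Pattern;

public class CmdArgFilterSketch {
    public static void main(String[] args) {
        // Windows default for "cmdLineArgumentsDecoded" as described above
        Pattern allowed = Pattern.compile("[a-zA-Z0-9\\Q-_.\\\\/:\\E]+");

        String[] samples = { "report.txt", "c:\\temp\\report.txt", "&dir", "foo|bar" };
        for (String s : samples) {
            // Arguments containing shell metacharacters fail the whitelist
            System.out.println(s + " -> " + (allowed.matcher(s).matches() ? "allowed" : "rejected"));
        }
    }
}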

Affected Versions and Mitigation

Apache Tomcat is only vulnerable to this issue if the following conditions are met:

  • Running on Windows
  • CGI support is enabled either via the web.xml for a specific web application or for the server as a whole (see documentation). This is disabled by default.
  • The “privileged” setting is set to “true” in the Context element. This is “false” by default.
  • Tomcat 9 only: “enableCmdLineArguments” is set to “true” (enabled by default in Tomcat 7 and 8)
  • The “executable” parameter is empty, and the CGI scripts being executed are batch files (either .bat or .cmd). It is not clear if other commands that use “cmd.exe” are vulnerable as well.

The vendor indicated that the following versions are vulnerable (no information is available on earlier versions):

  • Tomcat 9 – versions 9.0.0.M1 through 9.0.17 (9.0.18 is not affected)
  • Tomcat 8 – versions 8.5.0 to 8.5.39
  • Tomcat 7 – versions 7.0.0 to 7.0.93

Users are encouraged to upgrade to the following fixed versions or later:

  • Tomcat 9 – version 9.0.19
  • Tomcat 8 – version 8.5.40
  • Tomcat 7 – version 7.0.94

IMPORTANT NOTE: even when running a fixed version, you SHOULD NOT change the “cmdLineArgumentsDecoded” configuration parameter to a different value. If you do, your installation may become vulnerable. If an upgrade is not possible, users can apply one of the following mitigations:

  • Disable CGI support (it is disabled by default)
  • Or set the “enableCmdLineArguments” parameter to “false“. This setting will disable command line arguments from being passed via the CGI servlet.

Bounty Information

This report satisfied the requirement of the EU FOSSA bounty program and a bounty has been paid.

References

Blog post on JRE behavior: see here (Markus Wulftange)
CGI Standard: RFC 3875
CVE-ID: CVE-2019-0232
CVSS v2.0 Score: 9.3 – (AV:N/AC:M/Au:N/C:C/I:C/A:C)
CVSS v3.0 Score: 8.1 – (AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H)
Tomcat CGI Servlet source code: see GitHub
Vendor advisory: see here

Credits

Text of the advisory written by Yakov Shafranovich.

Timeline

2019-02-14: Initial report submitted to the platform
2019-02-17: Initial report validated by the platform
2019-03-03: Report acknowledged by the vendor
2019-03-14: Interim evaluation received from the vendor
2019-03-22: Communication with the vendor
2019-04-10: Public advisory issued by the vendor
2019-04-30: Public disclosure by reporter

Third Party Android App Storing Facebook Data Insecurely (Facebook Data Abuse Program)

Summary

A third-party Android application with Facebook API access was found to be copying user data into storage outside of Facebook, and storing it insecurely in two separate locations. This issue was reported to Facebook via their Data Abuse Bounty program and the insecure storage locations have been secured on November 12th, 2018. The Facebook app associated with this application has been removed from the Facebook platform but the Android application remains available in Google’s Play Store. The number of affected users is unknown. [ADDED: 02/15/2019 – Google has been notified].

Background

In April 2018, Facebook announced a new Data Abuse Bounty program that rewards “people with first-hand knowledge and proof of cases where a Facebook platform app collects and transfers people’s data to another party to be sold, stolen or used for scams or political influence“.

In September 2018, we found an Android application in the Google Play store that purports to provide additional functionality to Facebook users that is not available through the platform. At the time of writing, the application had more than 1,000,000 downloads. After downloading the application and examining it using JADX, we found that the application was using Facebook APIs to access data for the logged-in user and copying it to several storage locations outside of Facebook. Upon further examination, it was clear that at least two of those locations (a Firebase database and an API server) were making this data available without any authentication and without HTTPS. This would allow an attacker to mass download the user data accumulated by the application from its users.

We do not know for sure how many users have been impacted or exposed, but one of the databases accessed contained over 1,000,000 records. [ADDED: 02/15/2019 – The application purported to provide additional statistical information about the logged-in user’s Facebook account. There is a privacy policy within the application but it is ambiguous about the transfer of data].

Issue #1 – Storing user data in a public Firebase database

During our examination of the application, we located a Firebase database that the application was communicating with. The database was configured in test mode, which allowed anonymous public access by visiting the URL of “https://DATABASE.firebaseio.com/.json“. As seen in the attached screenshot, the database contained data obtained from Facebook. Aside from confirming the initial permission issue, we did not access or explore this database any further.

Screenshot:

[screenshot]

Issue #2 – Storing user data in a non-SSL server without authentication

During our examination of the application, it became clear that the server the application was communicating with did not use SSL and was being accessed without authentication. As seen below, this would allow an attacker to download the data collected by the application from Facebook via a regular browser, as well as spy on any connections between the application and the server. Aside from confirming the initial permission issue, we did not access or explore this server any further.

[screenshots]

Vendor Response and Mitigation

We contacted the Facebook Data Abuse Bounty program but did not contact the vendor directly. After Facebook completed its review, the two insecure locations have been secured on November 12th, 2018. The Facebook app associated with this application has been removed from the Facebook platform but the Android application remains available in Google’s Play Store. [ADDED: 02/15/2019 – Google has been notified].

This discovery qualified under the terms of the Facebook Data Abuse Bounty Program and a bounty payment has been received.

References

Facebook report # 10101718616795015
Google reference # 8-7487000025062

Credits

This advisory was written by Yakov Shafranovich.

Timeline

2018-09-17: Initial report submitted to Facebook, initial response received
2018-11-12: Issues fixed
2018-11-27: Bounty decision received; sent disclosure request
2018-11-30: Facebook asked for additional time before disclosure
2019-01-15: Investigation has been finalized, Facebook asked for a copy of the disclosure
2019-02-03: Draft disclosure shared for review
2019-02-14: Public Disclosure
2019-02-15: Minor updates; notification sent to Google

Thoughts on the MSI/JAR Authenticode Bypass

Earlier today, Google’s VirusTotal published a blog post about a new way to bypass code signing in Windows via JAR files:

Microsoft Windows keeps the Authenticode signature valid after appending any content to the end of Windows Installer (.MSI) files signed by any software developer. This behaviour can be exploited by attackers to bypass some security solutions that rely on Microsoft Windows code signing to decide if files are trusted. The scenario is especially dangerous when the appended code is a malicious JAR because the resulting file has a valid signature according to Microsoft Windows and the malware can be directly executed by Java.

In short, an attacker can append a malicious JAR to a MSI file signed by a trusted software developer (like Microsoft Corporation, Google Inc. or any other well-known developer), and the resulting file can be renamed with the .jar extension and will have a valid signature according Microsoft Windows. For example, via the command “copy /b signed.msi + malicious.jar signed_malicious.jar”. The victim can be infected with just a double-click in such a file.

Here are some quick thoughts from our research team – note that we were not involved in this effort and have no insider knowledge. This is entirely based on public sources.

How can ZIP and EXE files be combined?

First of all, how is it possible that the same exact file can be executable both by Windows and Java? The trick lies in how Windows executable files work – as described in Microsoft’s documentation. Basically, the OS reads the file from the beginning, looking at the magic value of “MZ” followed by headers, then followed by the file content. We are going to assume that there is a table in the file that tells the reader how long each segment is, and therefore it is possible to append arbitrary data to the end of the file without it breaking.

A JAR file, however, is essentially a ZIP file. ZIP files have their index, or central directory, at the end of the file, and it is possible to prepend data to the beginning of the file with the file remaining valid. That means you can combine a Windows executable, which is read from the beginning and relies on its headers and tables to tell the reader where to stop, with ZIP content at the end of the file. Both files remain valid while combined together. Also, while the example provided by VirusTotal is a JAR file, the same trick would work for other ZIP-based formats like Microsoft Office (DOCX/XLSX/etc), OpenOffice (ODT/ODS/etc), etc. Of course, this assumes that the software reading these files goes to the central directory of the ZIP and doesn’t check the magic value at the beginning.
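As a small experiment (the file names signed.msi and app.jar are placeholders for files you supply), the following sketch concatenates the two files and then shows that Java’s ZIP reader still parses the result, because it locates the central directory by scanning from the end of the file:

import java.io.FileOutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ConcatSketch {
    public static void main(String[] args) throws Exception {
        // Equivalent of: copy /b signed.msi + app.jar combined.jar
        try (FileOutputStream out = new FileOutputStream("combined.jar")) {
            out.write(Files.readAllBytes(Paths.get("signed.msi"))); // placeholder file
            out.write(Files.readAllBytes(Paths.get("app.jar")));    // placeholder file
        }

        // The ZIP central directory is located by scanning backwards from the
        // end of the file, so the prepended MSI content is simply ignored.
        try (ZipFile zip = new ZipFile("combined.jar")) {
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                System.out.println(entries.nextElement().getName());
            }
        }
    }
}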

Here is a modified example of the PE file format from Wikipedia, and a ZIP file format example from OASIS, showing the direction in which file content is read:

[diagrams: PE file layout and ZIP file layout]

Combined together:

[diagram: combined PE + ZIP layout]

What is Microsoft Code Signing / Authenticode?

As per the original blog post and other technical documentation from Microsoft, the code signing in question is Authenticode, which is used by Microsoft for Windows executables, drivers, and other files. The purpose is to make sure the file originated from a trusted publisher. There is also a command line tool included in Windows called “SignTool” which is used for signing and verifying files.

The way code signing works is described in a Microsoft technical document here. It is essentially a digital signature using PKCS#7 and special X.509 certificates (code signing certificates issued by CAs). It is connected to the same PKI infrastructure as SSL certificates, with some additional checks by CAs when issuing the certificate (not at signing time). Like all other digital signatures, it is essentially a hash signed by the private key of the holder of the certificate, which is then verified using the public key in the X.509 certificate. The certificate itself is verified against the public PKI infrastructure just like SSL.

Example appears below (from Microsoft documentation):

[screenshot]

Bypassing Code Signing

In a standard digital signature scenario such as PGP or S/MIME, the entire content of the message is hashed to produce a message digest using a function like SHA. That hash is then digitally signed using the sender’s private key. Note that the entire message is hashed – this allows the receiver to check whether any part of it was modified, not just bits and pieces.

One of the common refrains in security is “never roll your own crypto”, which in this case includes choosing what to hash. In the case of Authenticode, it appears that the file hash does not cover the entire file. As described in this document, information at the end of the file (after the second “remaining content” above) IS NOT included in the hash (emphasis added):

Information past of the end of the last section. The area past the last section (defined by highest offset) is not hashed. This area commonly contains debug information. Debug information can generally be considered advisory to debuggers; it does not affect the actual integrity of the executable program. It is quite literally possible to remove debug information from an image after a product has been delivered and not affect the functionality of the program. In fact, this is sometimes done as a disk-saving measure. It is worth noting that debug information contained within the specified sections of the PE Image cannot be removed without invaliding the Authenticode signature.

Implications

This means that it is trivial to simply append another file like a JAR to the end of a digitally signed file, rename it with a .jar extension, and have the resulting file look valid in Windows since the digital signature check will stop before reaching the content of the JAR file. At the same time, the file will be executable by Java since Java reads from the end and ignores the signed content at the beginning. The same would apply to other ZIP-based formats like Microsoft Word, and this may allow an attacker to send a malicious document while masquerading it as a legitimate one. Additionally, it appears from the blog post that some A/V and security products use the Authenticode signature as a shortcut to validate files so they don’t need to scan them.

Another possible use of this technique is to make attribution murky since some analysts may take the Authenticode signature at face value and not realize that the malware inside may not be from the publisher that signed the file.

An additional idea would be to use this trick to exfiltrate data out of an organization by putting the extra data at the end of the file. This assumes that DLP and similar tools monitoring outbound traffic rely on the Authenticode signature as well.

Microsoft’s code signing method isn’t the only one that exists. Similar methods exist for Java, Adobe AIR, Android, MacOS, Debian, etc. Further research is needed to see if similar issues exist in other code signing schemes.

(Written by Yakov Shafranovich)

Speaking @BSidesDE This Friday on Android Privacy Bugs (CVE-2018-9489, CVE-2018-9581 and CVE-2018-15835)

We will be giving a talk this Friday (11/09/2018) at 10:30 am at the BSides Delaware conference in Newark, Delaware. The talk will be given in Room A112 (Track 2). It is titled “A Tale of Three Brothers: Three Android Privacy Bugs” and will cover the three bugs in Android OS listed below. Two of them will be disclosed publicly for the first time during the talk. Slides, videos and full advisories should be posted next week.

Bugs covered: CVE-2018-9489, CVE-2018-9581 and CVE-2018-15835.

UPDATED: Slides and video have been added.

Gmail for Android Allows Screenshots in Confidential Mode

Google recently added confidential mode for Gmail, which allows the sender to restrict how their email can be used by the recipients, including forbidding printing, etc. However, it looks like Gmail for Android still allows screenshots to be taken of such emails. To replicate, try the following:

  1. Send a confidential email to another Gmail user.
  2. Open the email, press Power + Volume Down. A screenshot will be taken.

The underlying reason is that the Gmail app is not using “FLAG_SECURE” for such screens (more information on FLAG_SECURE can be found in our earlier blog post). By contrast, many Android apps with higher security requirements use it. For example, Chrome for Android prohibits screenshots when browsing in incognito mode.
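For reference, here is how an Android app can opt into that protection by setting FLAG_SECURE on the activity’s window before any content is displayed (a minimal example using the standard Android API):

import android.app.Activity;
import android.os.Bundle;
import android.view.WindowManager;

public class ConfidentialActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Prevents screenshots and screen recording of this window
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
                WindowManager.LayoutParams.FLAG_SECURE);
        // setContentView(...) would follow as usual
    }
}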

We did file a bug report with Google and here is their response:

We can not prevent someone from taking a photograph of their phone, using a second camera for example. Preventing taking photos of the content is not part of the feature.

References:
Google bug report # 112838515

Five Tools for Starting Security Analysis of Android Apps

Here are five easy-to-use tools for starting security analysis of Android apps. While they are basic, they allow you to do initial checks for things like lack of SSL, sensitive files, broadcast issues and secrets in code. We also highly recommend buying a cheap Android device for testing instead of, or in addition to, an emulator.

As always, please obey all relevant laws and do not use these tools for illegal activity.

On-device MITM proxy – PacketCapture

An MITM proxy is used to inspect network traffic going to/from a particular mobile device, or perhaps a specific application on the device. Normally, an MITM proxy requires setting up a separate test machine running the proxy and then pointing traffic from the test device to that machine. However, PacketCapture is a free and easy to use MITM proxy that runs on the Android device itself, can optionally inspect SSL traffic and can also be selectively applied to a specific app. It lacks the bells and whistles of other proxies, but it is very easy to use. Behind the scenes it works by creating a VPN connection from the device to itself.

One thing to keep in mind: the next version of Android (Android P) will block cleartext (non-TLS) traffic by default. Apps can still opt out via a network security policy (see here). Once that change takes place, you are advised to check the app’s network security policy before trying this tool.

On-device Broadcasts Monitor – Android Broadcasts Monitor

One of the common pitfalls in Android development is using global broadcasts when exchanging data between different components of the application. Because global broadcasts can be seen by other apps, they can leak sensitive data. An easy way to look for these is to install the Android Broadcasts Monitor app (Google Play link here) which will show you all global broadcasts as they happen.
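To make the pitfall concrete, here is a small sketch (the action name and extra are hypothetical) showing the kind of broadcast the monitor will surface, along with one simple way to keep the broadcast inside your own app:

import android.content.Context;
import android.content.Intent;

public class BroadcastSketch {
    // Risky pattern: a global broadcast carrying sensitive data can be
    // received by any app registered for this (hypothetical) action.
    static void leakyBroadcast(Context context, String authToken) {
        Intent intent = new Intent("com.example.ACTION_SESSION_UPDATED");
        intent.putExtra("auth_token", authToken);
        context.sendBroadcast(intent);
    }

    // Safer: restrict the broadcast to your own package so other apps
    // never see it (an explicit intent or an in-process event bus also works).
    static void scopedBroadcast(Context context, String authToken) {
        Intent intent = new Intent("com.example.ACTION_SESSION_UPDATED");
        intent.setPackage(context.getPackageName());
        intent.putExtra("auth_token", authToken);
        context.sendBroadcast(intent);
    }
}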

On-device File Manager

Another useful tool in your toolbox is an on-device file manager. This can be used to check if a particular application leaves any sensitive data on the SD card where it can be accessed by other apps. In particular, you should inspect the “/Android/” directory. We are fans of the Amaze File Manager (source at GitHub) but you can use any other as well.

If you do end up using Amaze, it has a nice feature where you can backup an installed app to the SD card, which allows you to get an APK of an app for further analysis with tools like JADX.

On-device Video Recorder – Telecine

Recording on-device videos comes in really handy when making demos or doing bug bounties. One useful tool we use is Telecine by Jake Wharton, which can record all screen activity (except for screens marked with FLAG_SECURE). One useful tip is to use “ffmpeg” or a similar tool to reduce the file size, as in this example:

ffmpeg -i Telecine_video.mp4 -crf 40 -an final.mp4

Android Decompiler – JADX

JADX is a Java decompiler which can take an Android APK and decompile it back to Java source code. One useful thing this can be used for is to look for possible secrets that are included in the Android resources (not code). Often, there is sensitive data there that is easier to find than by searching through the source code. The “/strings” and “/raw” folders are usually the best place to start.
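As a starting point (the path and patterns below are only examples), a quick scan of the decompiled resources for likely secrets can be done with a few lines of code:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class ResourceSecretScan {
    public static void main(String[] args) throws IOException {
        // Directory produced by JADX (placeholder path)
        Path resources = Paths.get("decompiled-app/resources");
        // Very rough heuristic for interesting strings
        Pattern suspicious = Pattern.compile("(?i)(api[_-]?key|secret|password|token)");

        try (Stream<Path> files = Files.walk(resources)) {
            files.filter(Files::isRegularFile).forEach(file -> {
                try {
                    for (String line : Files.readAllLines(file)) {
                        if (suspicious.matcher(line).find()) {
                            System.out.println(file + ": " + line.trim());
                        }
                    }
                } catch (IOException e) {
                    // skip files that can't be read as text
                }
            });
        }
    }
}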

Keep in mind that Android uses a custom runtime which is not the same as a standard JVM. Therefore, things relevant to security like cryptography, SSL connections, etc. do not necessarily behave the same way as in regular JVMs.