<h1 id="downloading-all-the-crates-on-crates-io">Downloading all the crates on crates.io</h1>
<p><em>Pietro Albini, 2020-04-09, <a href="/blog/downloading-crates-io/">/blog/downloading-crates-io/</a></em></p>
<p>There are a lot of reasons you might want to download all the crates ever
uploaded to <a href="https://crates.io">crates.io</a>, Rust's package registry: code analysis across the
whole public ecosystem, hosting a mirror for your company, or countless other
ideas and projects.</p>
<p>The team behind crates.io receives a lot of support requests asking for the
best and least impactful way to do this, so here is a little guide on how to do
it!</p>
<h2 id="getting-a-list-of-all-the-crates">Getting a list of all the crates</h2>
<p>crates.io <a href="https://crates.io/data-access">offers multiple way</a> to interact with its data: the
<a href="https://github.com/rust-lang/crates.io-index">crates.io-index</a> GitHub repository, experimental <a href="https://static.crates.io/db-dump.tar.gz">daily database dumps</a>
and the crates.io API.</p>
<p>The way I recommend getting the list of all the crates is to rely on the index:
the experimental database dumps are more heavyweight and are only updated
daily, while usage of the API is governed by the <a href="https://crates.io/policies#crawlers">crawlers policy</a>
(limiting you to one API call per second). If you <em>absolutely</em> need to use the
API please talk with us by emailing <a href="mailto:help@crates.io">help@crates.io</a>, and we'll figure out a
solution.</p>
<p>The index is <a href="https://github.com/rust-lang/crates.io-index">a git repository</a>, and the format of its content
is defined by <a href="https://rust-lang.github.io/rfcs/2141-alternative-registries.html#registry-index-format-specification">RFC 2141</a>. There are crates such as
<a href="https://crates.io/crates/crates-index">crates-index</a> that allow you to easily query its contents, and I recommend
using them whenever possible.</p>
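<p>If you'd rather parse the index yourself, each file in the repository contains one JSON object per line, one per published version. A minimal sketch of such a parser (keeping only the fields relevant to mirroring; error handling omitted):</p>

```python
import json

def parse_index_file(contents: str) -> list:
    """Parse one crates.io index file: each line is a JSON object
    describing a single published version of the crate."""
    versions = []
    for line in contents.splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        versions.append({
            "name": entry["name"],
            "version": entry["vers"],
            "checksum": entry["cksum"],
            "yanked": entry.get("yanked", False),
        })
    return versions
```

<p>Note that crates such as crates-index also handle the directory layout of the repository for you; a hand-rolled parser like this one is mostly useful if you want zero dependencies.</p>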
<h2 id="downloading-the-packages">Downloading the packages</h2>
<p>The best way to download the packages is to fetch them directly from our CDN.
Compared to calling the crates.io API, the CDN does not have rate limits and is
faster (as the API redirects you to the CDN after updating the download count).
The CDN URLs follow this pattern:</p>
<pre><code>https://static.crates.io/crates/{name}/{name}-{version}.crate
</code></pre>
<p>For example, <a href="https://static.crates.io/crates/serde/serde-1.0.0.crate">here is the link to download Serde 1.0.0</a>. Packages
are <code>tar.gz</code> files.</p>
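<p>Building the download URL is plain string formatting. A tiny helper (the function name is made up; shown in Python for brevity):</p>

```python
def crate_download_url(name: str, version: str) -> str:
    """Build the static.crates.io CDN URL for a published crate file."""
    return f"https://static.crates.io/crates/{name}/{name}-{version}.crate"
```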
<p>If you want to ensure the contents of the CDN were not tampered with, you can
verify the SHA256 checksum of the file you downloaded by comparing it with the
<code>cksum</code> field in the index.</p>
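<p>The verification is a straight SHA256 comparison, for example:</p>

```python
import hashlib

def crate_checksum_matches(data: bytes, cksum: str) -> bool:
    """Return True if the downloaded bytes hash to the index's cksum
    field (a lowercase hex-encoded SHA256 digest)."""
    return hashlib.sha256(data).hexdigest() == cksum
```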
<h2 id="keeping-your-local-copy-up-to-date">Keeping your local copy up to date</h2>
<p>The best way to keep your local copy up to date is to fetch a fresh list of
crates available on crates.io and check if all of them are present in the local
system, downloading the ones you're missing. I personally recommend this
approach as it's less error-prone, and it automatically heals your copy if for
whatever reason some changes were lost during a previous update.</p>
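<p>In other words, each update boils down to a set difference between what the index lists and what you already mirrored. A sketch of that core step:</p>

```python
def versions_to_download(index_versions, local_versions):
    """Given (name, version) pairs listed in the index and the pairs
    already mirrored locally, return what is still missing."""
    return sorted(set(index_versions) - set(local_versions))
```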
<p>Another interesting approach you could implement is to get the difference since
the last update of the index with <code>git diff</code>, parsing its output to get the
list of crates that were added. There are also third-party crates such as
<a href="https://crates.io/crates/crates-index-diff">crates-index-diff</a> that automate this process for you. This approach is more
fragile and error-prone, but it might be the only sensible solution if checking
whether you downloaded a crate or not is slow or expensive.</p>
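<p>If you do go the diff route, the new entries can be picked out of the <code>git diff</code> output: added lines start with a single <code>+</code> and each contains one JSON index entry. A rough sketch of the parsing step (the fetch itself is left to <code>git</code>):</p>

```python
import json

def versions_added_in_diff(diff_output: str):
    """Extract (name, version) pairs from unified `git diff` output."""
    added = []
    for line in diff_output.splitlines():
        # Skip the '+++ b/path' file headers; keep only added content lines.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        try:
            entry = json.loads(line[1:])
        except json.JSONDecodeError:
            continue
        added.append((entry["name"], entry["vers"]))
    return added
```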
<h2 id="common-issues-to-be-aware-of">Common issues to be aware of</h2>
<p>While the basics of downloading the contents of crates.io are simple, there are
a couple of issues to be aware of when implementing such tooling:</p>
<ul>
<li>
<p>The crates.io team strives to keep the registry as immutable as possible, but
we can't always keep that promise. The technology world doesn't exist in a
bubble, and there are laws everyone needs to abide by. Occasionally we
receive takedown requests due to trademark or copyright issues, and we have
to remove the crates both from the registry and the CDN: your tooling should
handle existing crates disappearing.</p>
</li>
<li>
<p>To reduce the download size for cargo users we regularly squash the index
repository into a single commit, and start the git history from scratch. The
previous history is kept in a separate branch. To account for this we
recommend running these commands to update the index:</p>
<pre><code>git fetch
git reset --hard origin/master
</code></pre>
</li>
</ul>
<h1 id="shipping-a-compiler-every-six-weeks">Shipping a compiler every six weeks</h1>
<p><em>Pietro Albini, 2019-11-23, <a href="/blog/shipping-a-compiler-every-six-weeks/">/blog/shipping-a-compiler-every-six-weeks/</a></em></p>
<p><em>This blog post is a slightly edited version of the live transcript of the talk
I gave at <a href="https://barcelona.rustfest.eu">RustFest 2019</a> in Barcelona on November 10th, 2019. As it's a
transcript some parts of it are a bit repetitive or badly worded, but I hope
the message behind the talk will be conveyed by this post anyway.</em></p>
<p><em>The original transcript was provided by the RustFest organizers, and it's
released under the <a href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareALike 4.0 International
License</a>.</em></p>
<p><em>You can also <a href="/blog/shipping-a-compiler-every-six-weeks/slides.pdf">download the slides</a> or <a href="https://www.youtube.com/watch?v=As1gXp5kX1M">watch the recording of the
talk</a>.</em></p>
<hr />
<p>Hi, everyone. In this talk I am going to shed a bit of light on how the Rust
release process works and why we do it that way. As they said, I'm Pietro, a
member of the Rust Release team and the co-lead of the Infrastructure Team.
I'm actually a member of other teams: I do a lot of stuff in the project.</p>
<p>I think everyone is aware by now that we got a release out a few days ago,
with some features everyone had awaited for a long time, but that's not the
only release. Six weeks earlier we released 1.38, which came out in September
and <a href="https://github.com/rust-lang/rust/compare/1.37.0...1.38.0">changed a hundred thousand lines of code</a>. Users reported just
<a href="https://gist.github.com/pietroalbini/b02cadb117cfe49ad17e0168ce543e2d#1380">5 regressions</a> after the release came out, and only two of them
broke valid code; the other ones were just performance regressions or worse
error messages.</p>
<p>Six weeks earlier, there was another release, 1.37, which changed <a href="https://github.com/rust-lang/rust/compare/1.36.0...1.37.0">tens of
thousands of lines of code</a>, and we got just <a href="https://gist.github.com/pietroalbini/b02cadb117cfe49ad17e0168ce543e2d#1370">3
regressions</a> reported. Unfortunately all of them broke valid
code, but it's a very small number. Even before that, we got 1.36 out in July
with just <a href="https://gist.github.com/pietroalbini/b02cadb117cfe49ad17e0168ce543e2d#1360">4 regressions</a> reported. I wanted to explain a little bit why
we do releases this fast, which creates a lot of problems for us, and how we
can prevent regressions and get so few reported after a release is out.</p>
<p>So why do we have this schedule? The question is interesting because it's
really unusual in the compiler world. I collected some stats on some of the
most popular languages. While there are some efforts to shorten the release
cycles (Python <a href="https://www.python.org/dev/peps/pep-0602/">recently announced</a> that they are going to
switch to a yearly schedule), Rust is the only compiler except for browsers
that's sort of popular and has a six-week release cycle. In the compiler world
that's pretty fast, but there is a simple reason why we do that: we have no
pressure to ship things.</p>
<p>If a feature is not ready, we can just delay it by a few weeks, and nobody is
going to care whether it gets stabilised today or in a month
and a half. And we actually do that a lot. The most obvious example is <a href="https://github.com/rust-lang/rust/pull/63209#issuecomment-520741844">a few
weeks ago</a>, when we decided that async/await wasn't ready
enough to be shipped in Rust 1.38: it turned out it wasn't actually
stabilised when the beta freeze happened, and there were blocking issues, so we
would have had to rush the feature and backport the stabilization, something
we would not love to do.</p>
<p>We actually tried long release cycles, especially with the edition, and it
turns out they don't work for us. The 2018 edition came out in early December,
and in September we still had questions on how to make the module system work.
We had <a href="https://github.com/rust-lang/rust/issues/53130#issuecomment-418824862">a proposal</a> in early September which was <a href="https://github.com/rust-lang/rust/issues/53130#issuecomment-418913061">not yet
implemented</a>, and that's what was actually released, but the proposal had no
time to bake on nightly and users didn't have much time to test it. It broke a
lot of our internal processes.</p>
<p>We actually did something I'm still not comfortable with: we <a href="https://github.com/rust-lang/rust/pull/56053">landed a
change</a> in the behaviour of the module system directly on the beta branch,
two weeks before the stable release came out. If we had made a mistake there
we would have had no way to roll it back before the next edition, and we don't
even know if we are going to do a 2021 edition yet. This PR broke almost all
the policies we have, but we had to do it, otherwise we would not have been
able to ship a working edition, and thankfully it ended well.</p>
<p>The 2018 edition works, and I'm not aware of any huge mistakes we made, but if
we had made any it would've been really bad: we would have had to wait a long
time to fix them, and we would be stuck in the 2018 edition with a broken
feature set for backward compatibility reasons.</p>
<p>So with such fast release cycles, how can we actually prevent regressions from
reaching the stable channel? Of course, the first answer is the compiler's test
suite, because rustc has a lot of tests. We have thousands of them that test
both the compiled output but also the error messages, and the tests run a lot:
we have 60 CI builders that run for each PR taking three to four hours. So, we
actually do a lot of testing, but that's not enough, because a test suite can't
really express everything the Rust language can do.</p>
<p>So we use the compiler to build the compiler itself: for every release we use
the previous one to build it. On nightly we use beta, on beta we use stable,
and on stable we use the previous stable. That allows us to catch some corner
cases, as the compiler codebase uses a lot of unstable features, and it's also
a bit old, so there are a lot of quirks in it. But still, that can't catch
everything.</p>
<p>We get bug reports from you all. We get them mostly from nightly, not from
beta, because people don't actually use beta. Asking our users to test beta is
something we can't really do: with such a fast cycle you don't have time to
test everything manually with the new compiler every six weeks. Languages with
long release cycles can afford to say "Hey, test the new beta release", but we
can't, and even when we ask, people don't really do that.</p>
<p>So we had an idea. Why don't we test our users' code ourselves? This is an idea
that seems really bad and seems to waste a lot of money but it actually works
and it's <a href="https://github.com/rust-lang/crater">Crater</a>.</p>
<p>Crater is a project that <a href="https://github.com/brson">Brian Anderson</a> started and that I now
maintain, which creates experiments testing all the source code
available on <a href="https://crates.io">crates.io</a> and <a href="https://github.com/rust-lang/rust-repos">all the Rust repositories on GitHub</a>
with a <code>Cargo.lock</code>, so if you create a "Hello World" repo on GitHub, or an
<a href="https://adventofcode.com/">Advent of Code</a> solutions repository, that's actually tested for every
release to catch regressions.</p>
<p>For each project we run <code>cargo test</code> twice, once with stable and once with
beta, and if <code>cargo test</code> passes on stable but fails on beta then that's a
regression, and we get a nice colourful report we can inspect.</p>
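<p>The comparison at the heart of an experiment can be sketched like this (a simplification: the real tool distinguishes many more result kinds):</p>

```python
def classify(stable_result: str, beta_result: str) -> str:
    """Crater's core comparison: a project that passes on stable but
    fails on beta is flagged as a regression."""
    if stable_result == "pass" and beta_result == "fail":
        return "regressed"
    if stable_result == "fail" and beta_result == "pass":
        return "fixed"
    return "unchanged"
```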
<p><a href="https://crater-reports.s3.amazonaws.com/beta-1.39-1/full.html">This is the actual report for 1.39</a>: we got just 46 crates
that failed, and those are regressions nobody reported before. The Release Team
goes through each one (I hope we didn't break any of yours), manually
checks the log and then files issues. The Compiler Team looks at the issues,
fixes them and ships the fix to you all.</p>
<p>1.39 went pretty well. <a href="https://crater-reports.s3.amazonaws.com/beta-1.38-1/full.html">This is 1.38</a> and we had 600 crates that
were broken, so if we didn't have Crater there is a good chance your project
wouldn't compile anymore when you updated, and this would break the trust you
have in the stable channel.</p>
<p>We know it's not perfect. We don't test any kind of private code, because of
course we don't have access to your source code. We also only test
crates.io and GitHub, and not other hosts such as GitLab, mostly because
nobody has gotten around to writing scrapers yet. Also, not every crate can be
built in a sandboxed environment (of course we have a sandbox; we can't just
run arbitrary code without protection, because it turns out people on the
Internet are bad).</p>
<p>Crater is not something we can scale forever: it already uses a lot of compute
resources, which thankfully are sponsored, but if the usage of Rust skyrockets
we are going to reach a point where it's not economically feasible to run
Crater in a timely fashion anymore.</p>
<p>Those are real problems, but for now it works great. It allows us to catch tens
of regressions that often affect hundreds of crates, and it's the real reason
why we can afford to make such fast releases. This is my personal opinion, but
I know it's shared by other members of the Release Team: I wouldn't be
comfortable making releases every six weeks without Crater, because they would
be so buggy I wouldn't use them myself.</p>
<p>So to recap, the fast release cycles that we have allow the team not to burn
out and to simply ignore deadlines, and that's great especially for a community
of mostly volunteers. And Crater is the real reason why we can afford to do
that. It's a tool that wastes a lot of money but actually gets us great
results.</p>
<p>So I'm going to be around the conference today, so if you have any questions,
or you want to implement support for other open source hosts, reach out to
me; I'm happy to talk to you all. Thanks!</p>
<h2 id="questions-from-the-audience">Questions from the audience</h2>
<p><strong>You were hinting that maybe the edition idea wasn't such a success for us.
Would you think that jeopardises a possible 2021 edition of the language?</strong></p>
<p>The main issue wasn't really the edition itself; it was that we started working
on it really late, so we went way over time implementing the features. This is
my personal opinion, not the official opinion of the project, but if we make
another edition I want explicit phases, where we won't accept any more changes
after a set date, and to actually enforce that, because we nearly burnt out
most of the team with the edition. There were people who spent months just
fixing regressions and bugs, and that's not sustainable, especially because
most of the contributors to the compiler are volunteers.</p>
<p><strong>Of course you cannot run Crater on private repositories, but how could
somebody with a private repository, a private crate setup, run Crater? Is that
possible now?</strong></p>
<p>Someone could just test on beta and create bug reports if they fail to compile.
We have some ideas on how to create a Crater for enterprises but it's just a
plan, an idea, and at the moment we don't have enough development resources to
actually do the implementation, test and documentation work that such a project
would require.</p>
<p><strong>A lot of crates have peculiar requirements about their environments. Can
you talk about how Crater handles that and specifically is it possible to
customise the environment in which my crates are built on Crater?</strong></p>
<p>The environment doesn't have any kind of network access, for obvious security
reasons, so you can't install dependencies yourself, but the build environment
runs inside Docker. We have <a href="https://github.com/rust-lang/crates-build-env">these big Docker images</a>,
4GB, which have most of the system dependencies used in the ecosystem
installed. You can easily check whether your crate works or not with <a href="https://docs.rs">docs.rs</a>:
it recently started using the same build code as Crater, so if it builds on
docs.rs it builds on Crater as well. And if it doesn't build, you can file an
issue (the <a href="https://github.com/rust-lang/docs.rs/issues">docs.rs issue tracker</a> is probably the best place),
and if there are Ubuntu 18.04 packages available we are just going to install
them in the build environment, and then your package will work.</p>
<p><strong>How long does it take to run Crater on all of the crates?</strong></p>
<p>Okay, so that actually varies a lot, because we are making constant changes to
the build environment and the virtual machines. I think at the moment running
<code>cargo test</code> on the entire ecosystem takes a week, and running <code>cargo check</code>,
which we actually do for some PRs, takes three days: if there is a pull request
that we know is risky and could break code, we usually run Crater on it
beforehand, and in those cases we usually do <code>cargo check</code> because it's
faster. The times really vary, mostly because we make a lot of changes to the
virtual machines.</p>
<p><strong>Is it possible to supply the Crater run with more runners to speed up the
process?</strong></p>
<p>I think we could. At the moment we are in a sweet spot: we have enough
experiments to fill the servers, we don't have any idle time, and the queue is
not that long. If we had more servers, they would sit idle part of the time, so
we would just be wasting resources. We actually have more sponsorship offers
from corporations, so if we reach a point where the queue is not sustainable
anymore we are going to get agents from them before asking the community. Also,
Crater is really heavy on resources: at the moment I think we have 24 cores,
48GB of RAM and 4 terabytes of disk space, so it's not something where you can
throw in some small virtual machine and get meaningful results out of it.</p>
<h1 id="my-wishlist-for-rust-in-2019">My wishlist for Rust in 2019</h1>
<p><em>Pietro Albini, 2019-01-02, <a href="/blog/rust-2019-wishlist/">/blog/rust-2019-wishlist/</a></em></p>
<p>It's starting to become a tradition to see a bunch of posts around the new year
on what the community wants to see from Rust. For the second year in a row, the
Rust core team <a href="https://blog.rust-lang.org/2018/12/06/call-for-rust-2019-roadmap-blogposts.html">asked for feedback for the 2019
roadmap</a>
and this is what I'd like: "rustfix all the things" and a better
infrastructure.</p>
<h2 id="add-rustfix-support-to-most-of-the-warnings">Add rustfix support to most of the warnings</h2>
<p>One of the features of Rust 2018 I don't see mentioned too often is
<a href="https://github.com/rust-lang-nursery/rustfix">rustfix</a>, the tool that fully migrates a project from Rust 2015 to
Rust 2018. The fact nobody talks about it is probably a good thing though,
since it means it works fine!</p>
<p>Rustfix is a really simple tool behind the scenes: it calls the compiler, gets
the suggestions from the warnings the compiler emits, and applies them.
That means all the fixing logic is inside the compiler, with full access to its
internals, and other tools (like IDEs) can also apply those suggestions without
reimplementing them.</p>
<p>In 2019 we should greatly increase the scope of the fixes applicable by
rustfix, from the edition migration to most of the warnings emitted by the
compiler. I'd love to see a day when a <code>cargo fix</code> makes most of the warnings
disappear.</p>
<h2 id="improve-the-rust-infrastructure">Improve the Rust infrastructure</h2>
<p>The Rust project has grown a lot in the past few years, but its infrastructure
is lagging behind. Last month there was a <a href="https://internals.rust-lang.org/t/homu-queue-woes-and-suggestions-on-how-to-fix-them/8954">big discussion on
internals</a> on improving the bors queue, and there are <strong>a lot</strong>
of other things we want to improve as the infrastructure team.</p>
<p>One of the biggest ones is switching away from <a href="https://travis-ci.org">Travis CI</a> for the
compiler repository. In the past year we had countless issues with them (both
small and big), and that's not acceptable when we're paying (a lot) for it. The
infra team is already planning to start the discussion on where we should
migrate in the coming weeks, and we'll involve the whole community in the
discussion when it happens.</p>
<p>Another thing I'd like to see is increased coverage for <a href="https://github.com/rust-lang-nursery/crater">Crater</a>, the
tool we use to test compiler changes across parts of the Rust ecosystem. There
are a lot of big wins we can make on it, like <a href="https://github.com/rust-ops/rust-repos/issues/20">testing repositories on
GitLab</a> or <a href="https://github.com/rust-lang-nursery/crater/issues/149">Windows support</a>, and any
contributor is welcome!</p>
<h2 id="looking-forward-for-the-next-year">Looking forward to the next year</h2>
<p>The past year has been a great one for both the Rust project and myself. We
(finally!) shipped the 2018 edition, we've grown a lot as a community, and we
have big features near the end of the pipeline (for example async/await).</p>
<p>Personally I joined the release and infrastructure teams, and it's great to be
a small part of this success. I met and worked with a lot of awesome people,
and I hope I'll be able to continue to do that in the future.</p>
<p>I look forward to a way better <strong>#Rust2019</strong>.<br />
Pietro.</p>
<h1 id="gandi-security-vulnerability-2fa-bypass">Gandi security vulnerability: 2FA bypass</h1>
<p><em>Pietro Albini, 2016-10-02, <a href="/blog/gandi-security-vulnerability-2fa-bypass/">/blog/gandi-security-vulnerability-2fa-bypass/</a></em></p>
<p><a href="https://www.gandi.net">Gandi</a> is a French domain name registrar I use for all
my domains, which also supports the volunteers behind some FLOSS projects. On
Tuesday, September 27th, 2016, I found a flaw in their login form that allowed
anyone to completely bypass two factor authentication (2FA) after inserting the
right handle and password.</p>
<h2 id="it-all-starts-with-a-broken-phone">It all starts with a broken phone</h2>
<p>The day before I found this vulnerability, my phone fell and hit the ground
corner-first. The display is now fully cracked and the touch screen doesn't
work anymore. I was very frustrated about this, because the only other phone I
have at home is an old, crappy Android phone (which doesn't work well).</p>
<p>Everything on the phone was backed up, except one thing: the seeds of my 2FA
tokens. For most of the sites I have backup codes saved in my password
manager, but Gandi doesn't provide those, and I was worried about being locked
out of my account.</p>
<h2 id="a-vulnerability-discovered-out-of-frustration">A vulnerability discovered out of frustration</h2>
<p>I knew my Gandi account had 2FA enabled so, the day after, I went to their
website looking for a way to access my account.</p>
<p>I inserted my handle and the correct password on the login form, and I was then
prompted for the 2FA token. Some of the websites I use provide a way to disable
2FA with either a backup code or an SMS to my phone number, but there was none
of that on the Gandi website.</p>
<p>Because some websites provide a "Reset password?" thing only after a number of
wrong tries, I inserted a dummy token (<em>123456</em>) and sent it. Obviously it
didn't work, but out of frustration I started clicking the button multiple
times.</p>
<p>After a few seconds of clicking that button with the wrong token, the website
logged me in and redirected me to my account page. It was the only time in my
life when clicking a button multiple times solved a problem, but it was also a
security vulnerability (you can't have only nice things, unfortunately).</p>
<h2 id="reporting-the-vulnerability">Reporting the vulnerability</h2>
<p>The bug didn't allow you to log into any account you wanted (as the 2011
<a href="https://blogs.dropbox.com/dropbox/2011/06/yesterdays-authentication-bug/">Dropbox one</a> did), because the correct handle and password
were still required and checked, but it made the whole 2FA setup useless, since
you were able to skip the check.</p>
<p>After replicating the bug on my account multiple times, I started looking for
the Gandi security team's email address to report the vulnerability.
I looked for five minutes on their website, with no luck. I then asked
their support team where I should report a security vulnerability, without
providing any details.</p>
<p>After a bit more than an hour I received their security team's email address
and instructions to encrypt the message with the GPG key found on the
keyservers. While the response time wasn't bad for customer support, it
would be better if there were a page on their website with all the details.</p>
<h2 id="the-gandi-s-response">Gandi's response</h2>
<p>After I sent the encrypted details to the security team's email address, I also
<a href="https://twitter.com/pietroalbini/status/780873928592003072">tweeted about a possible vulnerability I found</a>. A director of
Gandi's USA office noticed it, and replied to my email acknowledging the report
and saying their security team was based in Paris and asleep, and that nobody
in the USA had the team's GPG key.</p>
<p>As he requested, I re-encrypted the report with his key, and less than an hour
later I received confirmation the bug was found and fixed. They said they can't
currently afford a bug bounty program, but two days later they offered to send
me a cover for my next phone.</p>
<h2 id="the-cause-of-the-vulnerability">The cause of the vulnerability</h2>
<p>When I discovered this, I was quite confused: how is it possible that you can
bypass the 2FA check by clicking the submit button multiple times? After they
fixed the bug they told me the cause was a mix of two flaws in their website's
code:</p>
<ul>
<li>
<p>When the 2FA checker was unreachable, their website was coded to skip the
check and authenticate the user anyway, I guess to avoid blocking the login
functionality if there were problems with the 2FA checker, for example after
deploying a broken code change</p>
</li>
<li>
<p>Due to a problem in the network ACLs, one of the web backends wasn't
authorized to communicate with the 2FA checker, marking it as offline and
skipping the check because of the previous issue</p>
</li>
</ul>
<p>The combination of these two flaws meant when the load balancer redirected you
to the faulty backend, the 2FA check wasn't performed at all. This also
explains the "click the button multiple times" thing, since you needed to reach
that specific web backend in order to trigger the bug.</p>
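<p>A hypothetical reconstruction of the flawed logic, just to illustrate the fail-open pattern (this is not Gandi's actual code, which I never saw):</p>

```python
def login_allowed(password_ok: bool, check_2fa) -> bool:
    """check_2fa is a callable contacting the 2FA checker; it raises
    ConnectionError when the checker is unreachable."""
    if not password_ok:
        return False
    try:
        return check_2fa()
    except ConnectionError:
        # The flaw: an unreachable checker is treated as a passed check
        # (fail-open) instead of denying the login (fail-closed).
        return True
```

<p>A fail-closed version would return <code>False</code> in that <code>except</code> branch, at the cost of blocking all logins while the checker is down.</p>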
<h2 id="lessons-learned">Lessons learned</h2>
<p>This was the first security vulnerability I found on a company's website, so I
experienced the responsible disclosure process for the first time. Reading
other disclosures was always a good learning experience for me, because I
learnt how to prevent the vulnerabilities other people discovered and reported.</p>
<p>There are some horror stories about disclosures out there, but the Gandi people
were fast to reply and fix the bug. The only thing I hope they change is that
they don't have any security team contact on their website, so I had to go
through customer support.</p>
<p>I was told they don't want to receive spam in their security team's inbox (but
also don't want to lose emails to the spam filters), and their support team
is trained to forward reports to the security people, but even a "contact
the support team to report vulnerabilities" note somewhere on the website would
be great.</p>
<h2 id="disclosure-timeline">Disclosure timeline</h2>
<p>I live in Italy, so everything happened in the UTC+2 timezone.</p>
<ul>
<li><strong>2016/09/27 20:30</strong>: found the issue on the Gandi website</li>
<li><strong>2016/09/27 21:00</strong>: contacted Gandi support asking for a security
contact</li>
<li><strong>2016/09/27 22:15</strong>: received the contact information from Gandi support</li>
<li><strong>2016/09/27 22:45</strong>: sent detailed report to Gandi's security team</li>
<li><strong>2016/09/27 23:33</strong>: received the first ACK from Gandi</li>
<li><strong>2016/09/27 23:43</strong>: sent detailed report to Gandi's USA office</li>
<li><strong>2016/09/28 00:36</strong>: received confirmation the vulnerability was fixed</li>
</ul>