I’m on the train back from Paris, where I attended the 2016 edition of the RMLL Security Track. The RMLL or “Rencontres Mondiales du Logiciel Libre” is an annual event around free software. Amongst multiple tracks, there is always one dedicated to information security (around free software, of course). The global event was not scheduled this year, but the team behind the security track is a group of really motivated people, and they decided to organize the security track despite the cancellation of the main event. I already attended the previous editions (2013, 2014, 2015) and came back again this year.
The organisation of the security track was the same: free for everybody, a good size to facilitate networking, interesting talks, live streaming and a nice opportunity to meet good friends again… It was held at the Mozilla offices in the centre of the French capital, a nice place! After the welcome speech and some housekeeping rules, the first half-day started with a keynote by Ange Albertini: “Connecting Communities”. Ange started with a fact: it’s not always easy to share knowledge, and hackers are no exception. This can be summed up by the following quote: “Rage against the infosec circus”. For Ange, it is clear: don’t just have ideas, try them! Conferences are a nice way to share findings and the results of security research, and he mentioned two good examples of such initiatives.
Ange is an active contributor to PoC||GTFO and he gave more information about the magazine. Being printed first, there are hard deadlines to get things done; the electronic version always comes later. There is one issue per quarter, so there is no rush, and they definitely prefer quality over quantity (of the articles). Also, they don’t have any commitment regarding the number of pages. An article is often the result of exchanges between people.
The funny part of the electronic version: each issue has a proof-of-concept. It is delivered as a PDF but always contains a hidden part. Some examples from the previous editions:
- an MBR
- a TrueCrypt container
- a JPG image
- a Ruby web server serving the file itself (my preferred one)
About those PoCs, Ange’s response is just “because why not?”. The conclusion of this keynote: “We are looking for more people to share more knowledge”.
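Those hidden payloads can often be spotted by scanning the PDF’s bytes for well-known file-format signatures. Here is a minimal sketch of that idea; the signature table and the sample bytes are illustrative, not taken from any actual PoC||GTFO issue:

```python
# Sketch: scan a file's raw bytes for embedded file-format magic values,
# the kind of hidden payloads PoC||GTFO ships inside its PDFs.
# (A TrueCrypt container, notably, has no magic and would not be caught.)
SIGNATURES = {
    b"\xff\xd8\xff": "JPEG image",
    b"PK\x03\x04": "ZIP archive",
    b"\x7fELF": "ELF binary",
}

def find_embedded(data: bytes):
    """Return (offset, description) for every known magic found in data."""
    hits = []
    for magic, name in SIGNATURES.items():
        start = 0
        while (idx := data.find(magic, start)) != -1:
            hits.append((idx, name))
            start = idx + 1
    return sorted(hits)

if __name__ == "__main__":
    # Fake "polyglot" buffer for demonstration purposes only.
    sample = b"%PDF-1.5 ...\xff\xd8\xff\xe0 jpeg bytes ...PK\x03\x04 zip..."
    for offset, name in find_embedded(sample):
        print(f"0x{offset:04x}: {name}")
```

Of course, real polyglots are crafted so that several parsers accept the same bytes; a magic-byte scan only hints at where to start digging.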
- a Tor proxy
- an SSH proxy
- a password manager
- a safe file container
- a portable toolbox
- In the 80’s, the BIOS was stored in read-only memory
- In the 90’s, the hardware became more complex and the BIOS moved to read-write memory (e.g., for upgrade reasons)
- In the 2000’s, we saw run-time services
Paul reviewed how a computer boots and what the security issues are. There are different open source projects like Coreboot, U-Boot, Barebox or Libreboot. About the security issues, there are two different approaches: “to boot or not to boot” (verified boot) or “measured boot” (with a state indication). He reviewed how all the controls are implemented and how complex it is to implement a full open source boot environment. It was a very technical talk with many abbreviations, not always easy to understand if you don’t play with such tools every day.
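To give a feel for the “measured boot” approach, here is a toy sketch of the underlying idea: each boot stage is hashed into a running measurement before control is handed over, so the final state reflects everything that was executed. This is my own simplified illustration (the stage names are made up), not code from the talk:

```python
import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    """PCR-style extend operation: new = SHA-256(old || SHA-256(component)).
    The running state can only be extended, never rewritten, so any change
    in an earlier boot stage changes the final measurement."""
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

state = b"\x00" * 32  # initial measurement, all zeroes
for stage in (b"bootblock", b"romstage", b"payload"):  # illustrative stages
    state = extend(state, stage)
print(state.hex())
```

Verified boot, by contrast, checks a signature on each stage and refuses to continue on mismatch (“to boot or not to boot”), rather than merely recording what ran.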
- It walks through a file system and collects binary files
- It analyses them and saves results in a database
- It builds dependency graphs
Besides an inventory of (interesting) binaries, binmap is very helpful to detect whether applications installed on a system are vulnerable. If a component is vulnerable to CVE-xxx and this component is used by multiple applications (a good example is OpenSSL – just saying), you immediately “see” where the vulnerable applications are. Based on multiple scans, it is also possible to track changes and build a (kind of) versioning system. This is a nice tool that you should add to your personal toolbox!
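The first step of the process (walking a tree and collecting binaries) can be sketched in a few lines. This is a heavily simplified illustration of the idea, not binmap’s actual code; the real tool also parses the binaries to extract their dependencies:

```python
import os

ELF_MAGIC = b"\x7fELF"  # magic bytes at the start of every ELF binary

def collect_binaries(root):
    """Walk a file system tree and return the paths of ELF binaries,
    a (very) simplified version of binmap's collection step."""
    found = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    if fh.read(4) == ELF_MAGIC:
                        found.append(path)
            except OSError:
                continue  # unreadable file, skip it
    return found
```

From such an inventory, the dependency graph is built by reading each binary’s dynamic section (the libraries it links against), which is where the “one vulnerable component, many affected applications” view comes from.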
- Do not mix authoritative and recursive servers
- Mix different brands of DNS servers
- Hide your DNS master
- Do not invent new TLDs (this is especially important since the market of “funny” TLDs expanded)
A lot of information can be stored in DNS: there are many different record types. Amongst them, the TXT records may contain a lot of useful information like SPF and DKIM records, keybase.io validation records or Let’s Encrypt DNS challenges. Then Julien switched to more details about DNSSEC. This isn’t something brand new (the RFCs date from 2000) but it took a long time before being implemented: the root DNS servers have only been DNSSEC-aware since 2010. The goal is to prove the origin and integrity of zones via record signing. Julien reviewed how it works. While it sounds very interesting, honestly, DNSSEC is not very convenient to implement and mistakes are common. There are also constraints like the regular key renewal.
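As a small example of the kind of data carried in TXT records, here is a sketch that parses an SPF record string into its mechanisms. The record shown is a hypothetical example (using documentation address space), not a real domain’s record:

```python
def parse_spf(txt_record: str):
    """Split an SPF TXT record into its mechanisms.
    Returns None when the record is not SPF (no 'v=spf1' version tag)."""
    parts = txt_record.split()
    if not parts or parts[0] != "v=spf1":
        return None
    return parts[1:]

# Hypothetical record, similar to what a TXT query would return:
record = "v=spf1 ip4:192.0.2.0/24 include:_spf.example.net -all"
print(parse_spf(record))
```

In practice you would fetch the record first, e.g. with `dig +short TXT example.com`, then feed each returned string to a parser like this.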
- The “IDS” mode: working with pcap, AF_PACKET or NFLOG technologies
- The “IPS” mode: working with NFQUEUE, IPFW or AF_PACKET
The different modes and the available options to automatically block malicious traffic were reviewed. It is possible to implement very nice filters, such as dropping SSH connections from suspicious SSH clients. Powerful! The only remark I had was: how many organisations really implement such filters? Many of them can’t take the risk of blocking legitimate traffic for business reasons…
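As an illustration (my own, not from the talk), a Suricata rule dropping connections based on the SSH client banner might look like this. The sid and the banner string are made up, and the exact keyword syntax depends on your Suricata version:

```
# Hypothetical Suricata IPS rule: drop SSH sessions whose client software
# banner matches a suspicious string (banner and sid are illustrative).
drop ssh any any -> $HOME_NET 22 (msg:"Suspicious SSH client banner"; \
    ssh.softwareversion:"libssh"; sid:1000001; rev:1;)
```

Note that the `drop` action only takes effect in one of the IPS modes listed above; in IDS mode the same rule would merely alert.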
After a first coffee break, Sébastien Bardin presented a tool called BINSEC: “Binary level semantic analysis to the rescue”. It was very hard for me to follow the technical presentation. What you must keep in mind: the life of a program has the following stages: model > source code > assembler > executable code. To analyse the behaviour of a program, we do not always have access to its source code. Also, can we trust source code coming from external sources? Can we trust the compiler, which optimises the code itself? To illustrate this case, Sébastien reviewed CVE-2016-0777: the compiler decided, for optimisation reasons, to remove a memset() call which was clearing a memory zone containing sensitive information. The presentation was a deep introduction to BINSEC.
Ivan Kwiatkowski presented his tool called Manalyze. The goal of this tool is to analyse PE files (Microsoft Windows executables). The PE format can be very complex and is used by a lot of malware, so it is always interesting to have a deeper view of a file and “prevent annoyance of antivirus’ opaque decisions”, as Ivan said. The tool is available as a command-line utility, and a website exists to submit your own samples. Note that an intensive testing phase (fuzzing) was performed and a bug bounty organised to ensure the quality of the tool.
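To show why parsing PE files is delicate, here is a minimal sanity check of the two signatures every valid PE file must carry. This is my own sketch of the very first step; Manalyze itself goes far deeper (sections, imports, resources, authenticode signatures…):

```python
import struct

def looks_like_pe(data: bytes) -> bool:
    """Minimal PE sanity check: an 'MZ' DOS header, then the 'PE\\0\\0'
    signature at the offset stored in the e_lfanew field (at 0x3C)."""
    if len(data) < 0x40 or data[:2] != b"MZ":
        return False
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    # Slicing past the end of the buffer yields b"", which fails the compare.
    return data[e_lfanew:e_lfanew + 4] == b"PE\x00\x00"
```

Even this tiny check must guard against a truncated header or an out-of-range `e_lfanew`, which hints at how many hostile edge cases a full PE parser has to survive (hence the fuzzing and the bug bounty).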
After the lunch break, J.C. Jones from Mozilla presented the “Let’s Encrypt” project. This was not a technical talk; the project has already been analysed multiple times. J.C. came back to the challenges of becoming a certificate authority. Netscape introduced HTTPS in 1995 with the release 1.0N of its browser. Not long ago, only 40.01% of web traffic was based on HTTPS and, since the official launch of Let’s Encrypt, it increased by 8% (in only seven months). The main difficulty was to become trusted. It’s binary: you are or you aren’t! It is also based on a threat model: if someone issues a bad certificate, Let’s Encrypt will not be trusted anymore. J.C. reviewed the constraints they faced during the design and deployment of the platform. As an example, did you know that the data and states must be kept for at least 7.5 years? It was a very interesting talk.
Then, Julien Vehent, also from Mozilla, presented a talk about DevOps & security. It started with a fact: today, speed matters. You must be able to deploy new code in production in 15 minutes. The traditional cycle does not work anymore. In an ideal world, all deployments are automated and instantaneous, which can be an issue for the security people. That’s why Julien explained that security must be integrated INTO DevOps. Security tests must be implemented in the delivery pipeline. For example, a 30-minute meeting can be organised to perform the RRA (“Rapid Risk Assessment”), and some tests can be automated to prevent developers from making common mistakes (e.g., based on the OWASP Top Ten for web applications). As usual, plenty of ideas but, IMHO, not so easy to implement in the real world.
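To make the “automated tests in the pipeline” idea concrete, here is a toy example of a gate that fails a build when source lines look like hardcoded credentials. This is my own illustration of the concept, not Mozilla’s actual tooling, and the patterns are deliberately simplistic:

```python
import re

# Hypothetical pipeline gate: flag source lines that look like hardcoded
# credentials (patterns are illustrative, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]+['\"]"),
]

def scan_source(text: str):
    """Return the line numbers that match a secret pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(lineno)
    return hits

sample = 'debug = True\npassword = "hunter2"\n'
print(scan_source(sample))  # a non-empty result would fail the build
```

A real pipeline would chain several such checks (dependency audits, OWASP-style scans, linting) and block the deployment on any failure, which is exactly what lets security keep up with 15-minute deployments.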
Clément’s presentation was a review of the common authentication mechanisms for web applications. After the classic BasicAuth, DigestAuth and cookies, Clément reviewed some protocols developed by US universities.
As you can see, there is a lack of standardisation. Each protocol was reviewed with more focus on SAML and OpenID.
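For reference, the oldest of those mechanisms is also the simplest. Basic authentication (RFC 7617) is just a base64-encoded `user:password` pair in a header, which is why it must never travel without TLS. A minimal sketch, with made-up credentials:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the Authorization header value for HTTP Basic Auth
    (RFC 7617): 'Basic ' + base64('user:password')."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# Illustrative credentials only:
print(basic_auth_header("alice", "secret"))  # Basic YWxpY2U6c2VjcmV0
```

The fact that this is trivially reversible (it is encoding, not encryption) is precisely what motivated DigestAuth, cookies and the federated protocols reviewed in the talk.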
The last talk was about Ring, by Adrien Béraud. Today, people want to communicate privately. There are already plenty of applications available, but some of them are restricted to a limited set of supported devices and others are obscure; it’s not easy to choose the right one. Based on this fact, Ring was developed as an easy-to-use, free, distributed communication platform. It is secure, robust, built on top of open standards and distributed under a GPLv3 license. A Ring account is a key pair: no account is created on a central server. To communicate with a peer, create a new key, scan the QR code and… talk! Text, video and audio communications are available on multiple platforms. The quick demo was nice; it worked like a charm. Peers find each other via the OpenDHT protocol, connections are established using peer-to-peer TLS and calls are placed via SIP. The project is still ongoing and some major features are missing, like support for multiple devices per user (with a sub-key for each device) or user name registration. It looks promising, keep an eye on it! Finally a tool available on most platforms? To close the second day, a rump session (read: “lightning talks”) was organized with interesting topics.
- MISP Galaxy
- MISP Hashstore
- MISP Workbench
- Powerless (running entirely from the registry)
- Finding systems connected to a specific C&C
- Finding small mistakes (e.g., searching for files containing a key or password)
- Measuring security compliance (e.g., searching for “^passwordauthentication no$” in /etc/ssh/sshd_config)
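That last compliance check is easy to reproduce. Here is a small sketch that applies the same regular expression to an sshd_config, operating on a string so it can also run against configs pulled from many hosts; the sample config is made up:

```python
import re

def check_sshd_config(text: str) -> bool:
    """Return True when password authentication is explicitly disabled,
    mirroring the compliance check mentioned above. sshd options are
    case-insensitive, hence the IGNORECASE flag."""
    pattern = re.compile(r"^passwordauthentication no$",
                         re.IGNORECASE | re.MULTILINE)
    return pattern.search(text) is not None

# Illustrative config content:
config = "Port 22\nPasswordAuthentication no\n"
print(check_sshd_config(config))  # True
```

Note the anchors matter: without `^` and `$`, a commented-out `#PasswordAuthentication no` line would produce a false positive.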