100% Qualys SSL Test A+

Written by William Roush on April 1, 2014 at 10:41 pm
Obtaining 100/100/100/100 on Qualys SSL Server Test

For fun we’re going to poke at what it takes to score 100 across the board with Qualys SSL Server Test — however impractical this configuration may actually be.

Qualys SSL Server Test… What Is It?

Qualys SSL Server Test is an awesome web-based utility that will scan your website’s SSL/TLS configuration against Qualys best practices. It’ll run through the various SSL and TLS protocol versions, test all the cipher suites, and simulate negotiation with various browser/operating system setups. It’ll give you not only a good basis for understanding how secure your site’s SSL/TLS configuration is, but also whether it’s accessible to people on older devices (I’m looking at you, Windows XP and older IE versions!).

Getting 100/100/100/100

Late at night I was poking at some discussions on TLS, and wondered what it really took to score 100 across the board (I’ve been deploying sites that scored 100/90/100/90), so I decided to play with my nginx configuration until I scored 100, no matter how impractical this would be.

server {
  ssl_certificate /my_cert_here.crt;
  ssl_certificate_key /my_cert_here.key;

  # TLS 1.2 only.
  ssl_protocols TLSv1.2;

  # PFS, 256-bit only, drop bad ciphers.
  ssl_prefer_server_ciphers on;
  ssl_ciphers ECDH+AESGCM256:DH+AESGCM256:ECDH+AES256:DH+AES256:RSA+AESGCM256:RSA+AES256:!aNULL:!MD5:!kEDH;

  # Enable SSL session resume.
  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout 10m;

  location / {
    # Enable HSTS, enforce for 12 months.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
  }
}
Qualys wants only 256bit (or stronger) cipher suites.

This barely differs from our standard configuration (depending on whether you choose to mitigate BEAST instead of the RC4 issues).

100/100/100/100 comes at a high price.

To get all 100s we drop support for pretty much all but the most modern browsers… oops!

100s Not Realistic

It seems you’ll want to aim for 100/90/100/90 with an A+. This configuration will give your users the ability to take advantage of newer features (such as Perfect Forward Secrecy and HTTP Strict Transport Security) and stronger cipher suites while not locking out older XP users, and without exposing your users to too many SSL/TLS vulnerabilities (when supporting XP, you have to choose between protecting against BEAST or using the theoretically compromised cipher RC4).

So we’ll want to go with something a little more sane:

server {
  ssl_certificate /my_cert_here.crt;
  ssl_certificate_key /my_cert_here.key;

  ssl_protocols  SSLv3 TLSv1 TLSv1.1 TLSv1.2;

  # PFS + strong ciphers + support for RC4-SHA for older systems.
  ssl_prefer_server_ciphers on;
  ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:RC4-SHA:HIGH:!aNULL:!MD5:!kEDH;

  # Enable SSL session resume.
  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout 10m;

  location / {
    # Enable HSTS, enforce for 12 months.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
  }
}

Dan Kaminsky – Black Ops Of PKI

Written by William Roush on March 26, 2014 at 7:58 pm

Amazing talk by Dan Kaminsky discussing what is broken with X.509 (SSL). It’s an in-depth dive into how X.509 works, various exploits, and the impending problem of the Verisign MD2 root certificate that may be open to a preimage attack sometime in the near future.

Solid State Drives Are More Robust Than Spinning Rust

Written by William Roush on March 20, 2014 at 7:37 pm

A numbers breakdown of why the claim that "SSDs are unreliable" is silly.

I’ve been hearing some silly assumptions that magnetic drives are more "reliable" than Solid State Drives (SSDs), including ideas like "can I mirror my SSDs to regular magnetic disks?". Not only does that configuration completely defeat the purpose of having the SSDs (all disks in the mirror must flush their writes before additional writes can be serviced), but I’ll show you why the traditional magnetic drives in it would fail first.

For the sake of being picky about numbers, I’ll point out that a few of these are “back of the napkin” calculations. Getting all the numbers I need from a single benchmark is difficult (most people are interested in total bytes read/written, not operations served), and I don’t have months to throw a couple of SSDs at this right now.

A Very Liberal Lifetime Of A Traditional Magnetic Disk Drive

So we’re going to assume the most extreme possibilities for a magnetic disk drive: a high-performance, enterprise-grade drive (15k RPM) running at 100% load 24/7/365 for 10 years. This is borderline insane, and the drive would likely be toast under this much of a workload long before then, but it helps illustrate my point. The high end of what these drives can put out is 210 IOPS. So what we see on a daily basis is this:

210 * 60 * 60 * 24 =     18,144,000
18,144,000 * 365   =  6,622,560,000

x 10               = 66,225,600,000

So even at the most insane levels of load, performance and reliability, we expect the disk to perform about 66 billion operations in its lifetime.

The Expected Lifetime Of A Solid State Drive

Now I’m going to do the opposite (for the most part) and go with a consumer-grade triple-level cell (TLC) SSD. These drives have some of the shortest lifespans you can expect out of an SSD you can purchase off the shelf. Specifically we’re going to look at a Samsung 250GB TLC drive, which wrote 707TB of data before its first failed sector, at over 2,900 writes per sector.

250GB drive

250,000,000,000 / 4096 = ~61,000,000 sectors.
x2900 writes/sector = 176,900,000,000 write operations.

Keep in mind: the newer Corsair Force 240GB MLC-E drives claim a whopping 30,000 cycles before failure, but I’m going to keep this to "I blindly have chosen a random consumer grade drive to compete with an enterprise level drive", and not even look at the SSDs aimed at longer lifespans, such as enterprise-level SLC flash memory, which can handle over 100,000 cycles per cell!

So What Do You Mean More Robust?

The modern TLC drive from Samsung performed nearly three times the total work of the enterprise-level 15k SAS drive before dying. So if that is the case, why do people see SSDs as "unreliable"? The answer is simple: the Samsung drive will perform up to 61,000 write IOPS, whereas the magnetic disk will perform at best 210. It would take an array of 290 magnetic disks, in a theoretical optimal performance configuration (no failover), to match the performance of this single SSD.
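
Roughly, using the numbers above:

176,900,000,000 / 66,225,600,000 ≈ 2.7x the lifetime work
61,000 IOPS / 210 IOPS           ≈ 290 drives to match throughput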

Because of this additional throughput, the SSD burns through its lifespan much faster.

So I Should Just Replace My HDDs With SSDs?

Whoa, slow down there, not quite. Magnetic storage still has a solid place everywhere from your home to your data center. The $/GB ratio of magnetic storage is still far better than that of SSD storage. For home users this means the new hybrid (SSD/HDD) drives that have been showing up are an excellent choice; for enterprise systems you may want to look at storage platforms that let you use flash storage for read/write caching and data tiering.

PCI Compliant ScreenConnect Setup Using Nginx

Written by William Roush on February 19, 2014 at 9:26 pm

ScreenConnect’s Mono server fails PCI compliance for a list of reasons out of the box. We’re going to configure an Nginx proxy to make it compliant!

There are a few things we’ll want before configuring ScreenConnect: two public IP addresses (one for the ScreenConnect website, one for the ScreenConnect relay server) and a 3rd-party cert from your favorite certificate provider. I’m also going to assume you’re running Windows, so I’ll include extra instructions; skip those if you know what you’re doing and just need to get to the Nginx configuration.

Get Your Certificate

mkdir /opt/certs
cd /opt/certs

# Generate your server's private key.
openssl genrsa -out screenconnect.example.com.key 2048

# Make a new request.
openssl req -new -key screenconnect.example.com.key -out screenconnect.example.com.csr

Go ahead and log into your server using WinSCP and copy your .csr file to your desktop, then go get a certificate (.crt) from your Certificate Authority and upload it back to the server.
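
Once the signed .crt is back on the server, a quick optional sanity check (assuming the file names used above) is to confirm the certificate matches your private key; the two hashes printed should be identical:

# Both commands should print the same modulus hash.
openssl x509 -noout -modulus -in screenconnect.example.com.crt | openssl md5
openssl rsa -noout -modulus -in screenconnect.example.com.key | openssl md5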

Recommended ScreenConnect Configuration

In your ScreenConnect directory you have a "web.config" file. You’ll want to edit (or add if not found) the following properties under the "appSettings" section of the configuration file.

<add key="WebServerListenUri" value="http://127.0.0.1:8040/" />
<add key="WebServerAddressableUri" value="https://screenconnect.example.com" />

We want to configure the address the web server listens on (here, localhost) and pick a port that we’ll use for the internal proxy; I went ahead with the default port 8040. You’ll also need to set the addressable URI to the domain for your first IP (it should match the domain on your certificate).

<add key="RelayListenUri" value="relay://[2nd IP]:443/" />
<add key="RelayAddressableUri" value="relay://screenconnectrelay.example.com:443/" />

Additionally we’ll configure our relay server to listen on the second IP. We’ll set it to use port 443, which will help us punch through most firewalls, and we’ll set the addressable URI to a second domain name pointed at the IP address we specified.

Nginx Configuration

# Defining our ScreenConnect server.
upstream screenconnect {
  server 127.0.0.1:8040;
}

server {
  # Bindings
  listen [1st IP]:80;
  server_name screenconnect.example.com;

  location / {
    # Redirect all non-SSL to SSL-only.
    rewrite ^ https://screenconnect.example.com/ permanent;
  }
}

server {
  # Bindings
  listen [1st IP]:443 default_server ssl;
  server_name screenconnect.example.com;
  
  # Certificate information
  ssl_certificate /etc/ssl/certs/private/screenconnect.example.com.crt;
  ssl_certificate_key /etc/ssl/certs/private/screenconnect.example.com.key;

  # Limit ciphers to PCI DSS compliant ciphers.
  ssl_ciphers RC4:HIGH:!aNULL:!MD5:!kEDH;
  ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;

  location / {
    # Redirect to local screenconnect
    proxy_pass http://screenconnect;
    proxy_redirect off;
    proxy_buffering off;
    
    # We're going to set some proxy headers.
    proxy_set_header        Host            $host;
    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    
    # If we get these errors, we want to move to the next upstream.
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    
    # If there are errors we're going to intercept them.
    proxy_intercept_errors  on;
    
    # If there are any 400/500 errors, we'll redirect to the root page to catch the Mono error page.
    error_page 401 402 403 404 405 500 501 502 503 504 /;
  }
}

I’ve run a server with a similar setup through a Qualys PCI compliance scan (which the ScreenConnect server failed horribly prior to the changes), and it passed with flying colors.

Additionally, remember to lock down your iptables rules so you’re only open where you absolutely need to be: mainly 80 and 443 on your primary IP and 443 on your second IP. Add SSH into the mix if you use it to remotely connect to your servers (only accessible from inside of your company network though!).
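
A minimal iptables sketch of those rules (the bracketed addresses are placeholders, just like in the Nginx config above, and the office network range is something you’d fill in yourself):

# Allow loopback and already-established connections.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Web traffic on the primary IP, relay traffic on the second IP.
iptables -A INPUT -d [1st IP] -p tcp -m multiport --dports 80,443 -j ACCEPT
iptables -A INPUT -d [2nd IP] -p tcp --dport 443 -j ACCEPT

# SSH only from inside the company network.
iptables -A INPUT -s [office network] -p tcp --dport 22 -j ACCEPT

# Drop everything else inbound.
iptables -P INPUT DROP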

Statically Compiled LINQ Queries Are Broken In .NET 4.0

Written by William Roush on January 19, 2014 at 5:31 pm

Diving into how a minor change in error handling in .NET 4.0 has broken using compiled LINQ queries as per the MSDN documentation.

Query was compiled for a different mapping source than the one associated with the specified DataContext.

When working on high-performing LINQ code this error can cause a massive amount of headaches. This StackOverflow post blames the problem on using multiple LINQ mappings (and the same mapping used by different DataContexts will count as "different mappings"). In the example below, we’re going to use the same mapping but different instances, which is extremely common for short-lived DataContexts (and reusing DataContexts comes with a long list of problematic side effects).

namespace ConsoleApplication1
{
    using System;
    using System.Data.Linq;
    using System.Linq;

    class Program
    {
        protected static Func<MyContext, Guid, IQueryable<Post>> Query =
            CompiledQuery.Compile<MyContext, Guid, IQueryable<Post>>(
                (dc, id) =>
                    dc.Posts
                        .Where(p => p.AuthorID == id)
            );

        static void Main(string[] args)
        {
            Guid id = new Guid("340d5914-9d5c-485b-bb8b-9fb97d42be95");
            Guid id2 = new Guid("2453b616-739f-458f-b2e5-54ec7d028785");

            using (var dc = new MyContext("Database.sdf"))
            {
                Console.WriteLine("{0} = {1}", id, Query(dc, id).Count());
            }

            using (var dc = new MyContext("Database.sdf"))
            {
                Console.WriteLine("{0} = {1}", id2, Query(dc, id2).Count());
            }

            Console.WriteLine("Done");
            Console.ReadKey();
        }
    }
}

This example follows MSDN’s examples, yet I’ve seen people recommending you do this to resolve the changes in .NET 4.0:

protected static Func<MyContext, Guid, IQueryable<Post>> Query
{
    get
    {
        return
            CompiledQuery.Compile<MyContext, Guid, IQueryable<Post>>(
                 (dc, id) =>
                    dc.Posts
                        .Where(p => p.AuthorID == id)
            );
    }
}

Wait a second! I’m recompiling on every get, right? I’ve seen claims that it doesn’t. However, peeking at the IL code doesn’t support that; the process is as follows:

  • Check if the query is assignable from ITable, if so let the Lambda function compile it.
  • Create a new CompiledQuery object (just stores the Lambda function as a local variable called “query”).
  • Compile the query using the provider specified by the DataContext (always arg0).

At no point is there a cache check. The only place a cache could live is in the provider (and SqlProvider doesn’t have one), and it would be a complete maintenance mess if it were done that way.

Using a test application (code is available at https://bitbucket.org/StrangeWill/blog-csharp-static-compiled-linq-errors/; use the db.sql file to generate the database, and please use a local installation of MSSQL Server to give the best speed possible so that we can evaluate query compilation times), we’re going to force the CompiledQuery.Compile method to be invoked on every iteration (10,000 by default) by passing in delegates as opposed to passing in the resulting compiled query.

QueryCompiled Average: 0.5639ms
QueryCompiledGet Average: 1.709ms
Individual Queries Average: 2.1312ms
QueryCompiled Different Context (.NET 3.5 only) Average: 0.6051ms
QueryCompiledGet Different Context Average: 1.7518ms
Individual Queries Different Context Average: 2.0723ms

We’re no longer seeing roughly a quarter of the runtime that a cached compiled query gives you. The primary problem lies in this block of code found in CompiledQuery:

if (context.Mapping.MappingSource != this.mappingSource)
{
	throw Error.QueryWasCompiledForDifferentMappingSource();
}

This is where CompiledQuery checks and enforces that you’re using the same mapping source. The problem is that System.Data.Linq.Mapping.AttributeMappingSource doesn’t provide an Equals override, so it’s just comparing whether they’re the same instance of an object, as opposed to whether they’re equal.
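
To make that concrete, two separately constructed attribute mapping sources never compare equal, even though they describe the exact same mapping (a trivial illustration):

var a = new System.Data.Linq.Mapping.AttributeMappingSource();
var b = new System.Data.Linq.Mapping.AttributeMappingSource();

Console.WriteLine(a == b);      // False -- different instances.
Console.WriteLine(a.Equals(b)); // False -- no Equals override, falls back to reference equality.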

There are a few fixes for this:

  • Use the getter method, and understand that performance benefits will mainly be seen where the result from the property is cached and reused in the same context.
  • Implement your own version of the CompiledQuery class.
  • Reuse DataContexts (typically not recommended! You really shouldn’t…).
  • Stick with .NET 3.5 (ick).
  • Update: RyanF details sharing a MappingSource in the comments below; this is by far the best solution (sketched after this list).
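
A minimal sketch of that approach (SharedMapping is an illustrative name, and this assumes your MyContext exposes the standard DataContext constructor that accepts a MappingSource):

// One MappingSource shared by every DataContext instance, so CompiledQuery's
// reference comparison always sees the same mapping source.
public static class SharedMapping
{
    public static readonly System.Data.Linq.Mapping.MappingSource Source =
        new System.Data.Linq.Mapping.AttributeMappingSource();
}

// Then construct every context against the shared source:
using (var dc = new MyContext("Database.sdf", SharedMapping.Source))
{
    Console.WriteLine("{0} = {1}", id, Query(dc, id).Count());
}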

You May Pay Even If You Do Everything Right (CryptoLocker)

Written by William Roush on January 13, 2014 at 7:14 pm

Many people in the IT field are depending on various products to protect them from CryptoLocker and similar malware, but how realistic is that really?

Seth Hall over at tunnl.in wrote an article detailing how many parts of your system must fail in order for CryptoLocker to infect your network. The major problem I have with the article is that this level of trust in your systems to protect you is exactly how a lot of companies got bit by the CryptoLocker ransomware, along with the notion that "if you have these bases covered, you’re ok".

You’ll need an email server willing to send you infected executable attachments.

This assumes that CryptoLocker is going to come in a form that your email server will catch. One of the easiest ways to prevent your email server from blocking a piece of malware attached to an email is to password protect it, which CryptoLocker has been known to do [1] [2] [3]. This leaves a handful of options for detecting the email: either have a signature for the encrypted zip file (which won’t work if unique passwords are being used per email), or attempt to decrypt every zip by searching the body of the email for the password (which I don’t think any mail filtering service does).

And that is all dependent on the idea that you’re being infected by an already-detected derivative of CryptoLocker.

Your perimeter security solution will have to totally fail to spot the incoming threat.

Here Seth is talking about firewall-based anti-malware scanning. Again, this runs into all of the same problems as relying on your email server to protect you.

Your desktop security solution will have to totally fail.

This is the one everyone relies on the most: your desktop antivirus catching malware, and by far this is what bit almost everyone infected by CryptoLocker. In my previous post about CryptoLocker I talk about how it wasn’t until 2013-11-11 that antiviruses were preventing CryptoLocker. With PowerLocker on the horizon, these assumptions are dangerous.

Your user education program will have to be proven completely ineffective.

Now this is one of the most important parts of security, and by far one of the things that irks me most in IT. I’ll go into this more in a more business-oriented post, but it comes down to this: what happens when I allow someone into the building that doesn’t have an access card? Human Resources would have my head and I could very well lose my job (and rightfully so!). Why is it that IT’s policies get such lackluster enforcement at most places?

In general, IT policies and training are always fairly weak. Users often forget (in my opinion, because there is no risk in not committing it to memory), and training initiatives are rarely taken seriously. People who "don’t get computers" are often put into positions where they’ll be on one for 8 hours a day (I’m not talking IT-level proficiency, I’m talking "don’t open that attachment").

I feel this is mostly due to the infancy of IT in the workplace at many places, and will change as damages continue to climb.

Your perimeter security solution will have to totally fail, a second time.

It really depends on how you have your perimeter security set up. Some companies block large swaths of the internet in an attempt to reduce the noise from countries they do not do business with and from which they only receive attempts to break into their systems. That is pretty much the only circumstance in which your perimeter security will stop this problem.

Your intrusion prevention system [...] will have to somehow miss the virus loudly and constantly calling out to Russia or China or wherever the bad guys are.

This is a particularly dangerous assumption. CryptoLocker only communicates with a command and control server to fetch a public key to encrypt your files with. I’d be thoroughly impressed by a system that could catch a few kilobytes of encrypted data being requested from a foreign server without constantly triggering false alerts from normal use of the internet.

Your backup solution will have to totally fail.

This is, in my opinion, the only item on this list that is 100% your responsibility with a nearly 100% chance of success. Backups with multiple copies, stored cold and off-site, have nearly no chance of being damaged, lost or tampered with. Tested backups have nearly no chance of failing. Malware can’t touch what it can’t physically access, and this will always be your ace in the hole.

In Conclusion

And don’t take this post the wrong way! The list Seth gives is a great list of security infrastructure, procedures and policies that should be in place. However, I think it reads as if you won’t get infected as long as you follow his list, and that is not entirely accurate.

Using Windows Vault To Store TortoiseHg (Mercurial) Passwords

Written by William Roush on December 17, 2013 at 7:34 pm

Mercurial has a built-in method for storing passwords, but it stores them in plaintext in your settings file. For those of us bound by various compliance regulations, or just those of us who care about security, this is a huge no-no.

First you’ll want to clear your password from TortoiseHg’s authentication section for your repository if you haven’t already (this will remove your credentials from the “.hg\hgrc” settings file in the repo; you may want to manually confirm this).

Mercurial-Auth
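
If you do check “.hg\hgrc” by hand, stored credentials live in an [auth] block that looks roughly like the sketch below (the group name and values are illustrative); the password line is what should no longer be present:

[auth]
example.prefix = https://hg.example.com/myrepo
example.username = myuser
# This is the plaintext line you want gone:
example.password = my-plaintext-password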

Next you need to enable the Mercurial Keyring extension (it’s bundled with TortoiseHg, so a path is not required) by pasting the text below into your mercurial.ini file (which can be accessed via File > Settings > Edit File):

[extensions]
mercurial_keyring=

On the next push it’ll ask for your password; put it in and it should never ask again.

To confirm that your password was saved in the Windows Vault (or to update it), go to Control Panel > User Accounts > Manage your credentials.
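
If you prefer the command line, the same credential store can be listed with cmdkey, which ships with Windows:

cmdkey /list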

Mercurial-Windows-Vault

Bittorrent Sync – Decentralized Cloud File Sharing

Written by William Roush on December 2, 2013 at 8:25 pm

BitTorrent Sync

What Is BitTorrent Sync?

BitTorrent Sync is one of the latest public offerings from BitTorrent Labs. It provides the ability for you to share folders between multiple machines, including your mobile devices, using the same peer-to-peer technology behind the popular BitTorrent file sharing protocol. It’s almost exactly like a Dropbox-style offering, but without the centralized authority that has access to your data.

How Does It Stack Against Dropbox Like Offerings?

While BitTorrent Sync doesn’t offer any kind of online presence, it does offer a fairly feature-complete desktop and mobile solution. You can provide read-only share access and share multiple folders. You can also generate one-time secrets that can only be consumed once, so someone you give your secret to can’t continue to give it out to people they know.

Wait BitTorrent? Isn’t That Open File Sharing? Are My Files Secure?

Yes, they are secure! All files shared between machines are encrypted using your share key. Of course anyone with your share key will have access to your data, so keep it to yourself and share it only with those you want to have access to that data.

Getting Started

A new BitTorrent Sync Install

BitTorrent’s installer will run you right through the process without much interaction needed. After you’re inside the main application you’re ready to go: add a new folder, right click on it and click “Connect to mobile device”, scan the QR code with a phone that already has BitTorrent Sync installed, and you’ve got two devices syncing.

Adding to mobile is a snap!

Adding my Nexus to BitTorrent Sync.

Syncing Mobile Devices

BitTorrent on Android

Syncing works a bit better than OwnCloud does on my Android device (OwnCloud requires manual “refreshing” of folders). Additionally the device-to-device file transfer is very nice… assuming you’re transferring with someone that already has BitTorrent Sync. This really only makes sense for sensitive or large files where the security of BitTorrent Sync is mandatory; otherwise I’d just send e-mail attachments.

Sending files between mobile devices is easy: just scan the QR code at the end of this wizard.

Configuration

We’re going to dig a little deeper into the internal settings of BitTorrent Sync. Most people won’t ever need to touch these, and I’ll cover how they work in the technology dive (next section).

Folder settings

Right clicking a folder and clicking “Show folder preferences” will show a window with two tabs; we’re interested in the properties tab right now. Here we see these options:

  • Use relay server when required: leverages BitTorrent’s relay servers when sync devices cannot communicate directly over the internet.
  • Use tracker server: uses BitTorrent’s tracker servers to seek out BitTorrent Sync clients that match your secret.
  • Search LAN: searches the local area network for BitTorrent Sync clients that match your secret.
  • Search DHT network: searches the distributed hash table (DHT) network for BitTorrent Sync clients that match your secret.
  • Store deleted files in SyncArchive: the SyncArchive is a hidden folder that temporarily stores deleted files (30 days by default). You can access it by right clicking on a folder and selecting “Open SyncArchive”.
  • Use predefined hosts: manually add the addresses of your various BitTorrent Sync machines.

Technology

Peering

Peer-to-peer networking

BitTorrent has been around for over 12 years now. It allows multiple machines to share data between each other instead of getting it from a central server; this is known as peer-to-peer and commonly referred to as “P2P”. Many games, such as the popular MMORPG World of Warcraft, use a similar technology to share patches among their customers, reducing server load and improving the end user experience by reducing download times.

Trackers

Trackers in BitTorrent are servers that keep track of peers; this allows BitTorrent Sync to easily find all machines that share a common key and provides them with the information required to talk to each other. The upside of the tracker is that it provides a centralized authority to quickly determine who to peer with; the downside is that you depend on this tracker for your peering information.

Trackers, however, are not required. If you’re syncing within your home network, you can disable all trackers and enable LAN-only sync; at that point BitTorrent has none of your information on its tracker servers, and every time you bring any two devices together on the same network, they’ll sync up.

Relays

Relay servers are used when your device cannot properly punch the holes required for communication (usually done automatically by BitTorrent Sync via UPnP); in this case BitTorrent Sync will use one of BitTorrent’s servers to relay data back and forth.

Would I Use It?

I’m still heavily inclined to stick with OwnCloud. The web interface is useful and I don’t have to rely on a 3rd-party tracker. I’d possibly consider BitTorrent Sync if it had the option of configuring and running my own tracker and transfer relay (allowing me to run without the need for BitTorrent Labs’ servers). While settings.dat in your profile shows the trackers set as “udp://tracker.openbittorrent.com:80/announce” and “udp://tracker.publicbt.com:80/announce”, there is no way to change this in the UI, and I haven’t seen any discussion of support for your own tracker. Additionally, I don’t see any configuration for running your own relay servers.

Intro To SSDs – LOPSA East 2013 – Matt Simmons

Written by William Roush on November 18, 2013 at 7:27 pm

A bit of a lengthy video from Matt Simmons at LOPSA (League of Professional System Administrators) EAST 2013. Pretty much covers all you could ever want to know about the history of magnetic storage and the move to flash storage.

I’ve been considering making it to some of the LOPSA meetings up in Knoxville, and have even talked with some LOPSA guys about the possibility of starting a Chattanooga division…