ScreenConnect Review

Written by William Roush on July 16, 2014 at 9:00 pm

Looking for remote support software that won't break the bank? Open to self-hosted alternatives? ScreenConnect is a viable, feature-rich option with a very affordable price point.

ScreenConnect

What Is ScreenConnect?

ScreenConnect is self-hosted remote support software, an alternative to LogMeIn Rescue, GoToAssist, or TeamViewer. The largest difference between ScreenConnect and its competitors is that it is self-hosted: you deploy it on your own private servers.

Why Self-Hosted?

Self-hosting comes with a variety of benefits, the first being complete control over your traffic and environment. You can lock administration to internal access only, put it behind a reverse proxy, or require additional authentication. The sky is the limit.

However, the biggest benefit of self-hosting (at least in this case) is the price.

Licensing

The cost of ScreenConnect at the time of posting is $325.00 per license. Each license entitles you to one concurrent support session, defined as an active connection between a host and a guest. This means a session can float between a small team, where any one person can be supporting someone at a time. It also means multiple techs can be on with a single guest and still consume only one license.

Let's break down the cost of 3 years of ownership against some competitors:

Solution         Licensing Scheme                            1st Year   2nd Year   3rd Year   3-Year TCO
ScreenConnect    $325/seat + 20% support renewal/year        $325       $65        $65        $455
TeamViewer       $749 one-time (1 authorized workstation)    $749       $0         $0         $749
LogMeIn Rescue   $1,188/yr subscription                      $1,188     $1,188     $1,188     $3,564
GoToAssist       $660/yr subscription                        $660       $660       $660       $1,980

Requirements

A full list of ScreenConnect requirements can be found here. One of the biggest benefits is that you can run ScreenConnect on a variety of server platforms, including Windows, OS X, and Linux!

ScreenConnect achieves this by running a .NET application on top of the Mono platform. I've been wary of Mono before, but ScreenConnect's performance and stability have entirely changed my mind about how commercially ready Mono is.

Download And Installation On Debian 7

Installation is easy: download the latest tar.gz file, unpack it, run the install script, and follow the instructions:

root@screenconnect:~# cd /tmp
root@screenconnect:/tmp# wget http://www.screenconnect.com/Downloads/ScreenConnect_4.3.6563.5232_Release.tar.gz
root@screenconnect:/tmp# tar xvf ScreenConnect_4.3.6563.5232_Release.tar.gz
root@screenconnect:/tmp# cd ScreenConnect_4.3.6563.5232_Install/
root@screenconnect:/tmp/ScreenConnect_4.3.6563.5232_Install# ./install.sh
Welcome to the ScreenConnect Installer

The installer will do these things:
1) Prompt you for installation options
2) Display a list of actions to be taken
3) Prompt you for execution of the actions
4) Execute the actions

Where would you like to install ScreenConnect?
[/opt/screenconnect]

What would you like as the service name for this ScreenConnect installation?
[screenconnect]

The installation will perform the following actions:
- Install libavcodec-extra-53 with Advanced Package Tool (apt)
- Install libswscale2 with Advanced Package Tool (apt)
- Install libavutil51 with Advanced Package Tool (apt)
- Install libavformat53 with Advanced Package Tool (apt)
- Create service script at /etc/init.d/screenconnect
- Create startup links in /etc/rcX.d/ directories
- Copy files into /opt/screenconnect
- Initialize configuration files
- Start screenconnect service

Do you want to install ScreenConnect?
(Y/n): y

[[Removed installation output]]

Running 'Create service script at /etc/init.d/screenconnect'...
Running 'Create startup links in /etc/rcX.d/ directories'...
Running 'Copy files into /opt/screenconnect'...
Running 'Initialize configuration files'...
Running 'Start screenconnect service'...

Installation complete!

Trying to figure out the best URL for you to use...

To access your new ScreenConnect installation, open a browser and navigate to:

http://localhost:8040/Host

root@screenconnect:/tmp/ScreenConnect_4.3.6563.5232_Install#

Navigating to http://[your host's IP]:8040/Host will present a wizard that walks you through the rest of the installation process, including setting up your primary administration account and configuring your licensing information (if you need a trial license, visit http://www.screenconnect.com/Try-It-Now).
The setup wizard.

Hosting a Support Session

Hosting a support session is easy: click the plus button next to the “Support” header on the left, and you'll be greeted with a list of options for sending your support request out.

Lots of options, easy to use.


I generally use invitation-only sessions and generate URLs to send to people over chat/e-mail. ScreenConnect supports plugging into an SMTP server and sending mail for you, or leveraging your locally installed mail client to send e-mails (I prefer the latter configuration for this method).

Active sessions are displayed in a list form, easy to tell status and who is connected.


Your end user will be presented with instructions on how to connect. ScreenConnect supports a variety of methods to get the end user online, including ClickOnce and Java Web Start, the standard methods you'll see competitors using.

Easy to understand instructions for the end user.


From there it’s like any other remote desktop support software, with a large array of tools at the top of your screen.

Connection Information


Wide array of audio options, including listening and sending audio.


Screenshot capture and video capture.


Various file transfer options, nothing out of the ordinary.


Customizable toolbox, upload files that will be available between all sessions.


Display quality and management.


By far the biggest thing I love about ScreenConnect's UI is how well it manages multi-monitor clients. In most other software, switching between displays is clunky or seems sort of “out of the way”; ScreenConnect makes it feel right.

Various additional features.


Nothing out of the ordinary in terms of rescue features: device blanking, input blocking, safe mode support. A bunch of “must haves” have all been checked.

Meetings

Meetings are the inverse of support requests: a single presenter and multiple viewers. The UI is tweaked a bit to support this concept. I've had some minor workflow issues with handing the presenter role around being a little clumsy, but other than that it works well.

The only downside to using it for meetings over GoToMeeting or something similar is that ScreenConnect doesn't support plugging into a phone system (though I understand this isn't a trivial task from both the programming and logistics ends), so you'll either need to set up a conference room on your phone system or use the built-in VoIP functionality.

Administration

Administration is fairly straightforward; everything is done with role-based access. Though you can lock things down and prevent users from accessing specific groups of machines, the difficulty of doing so in the UI leaves much to be desired (though as I understand it, this is currently being worked on).

A nice server status screen showing general health of the application.


Funnily enough, the status screen shows a “Windows Firewall Check” even though I'm on a Linux host…

ScreenConnect supports theming, allowing you to bring it in line with your company's brand (be aware, though, that changing themes restarts the web site, so don't expect uninterrupted service if you're messing with that).

Additionally ScreenConnect keeps an audit log in the admin control panel, very useful if you need to track down changes or actions taken against the system.

Overall

ScreenConnect packs a ton of punch for a low cost, with a wide range of platform options, in a stable and rapidly developed software package. One of the most impressive things I've seen is the speed at which they've moved forward: providing more features, iterating on parts that were lacking, and delivering a stable, polished product every time.

In my opinion it is a must-have. With UPnP support, small-time technicians can purchase a copy, install it, and run it on their home machines with no effort at all, yet it includes the feature set and stability to be used at your SMB office (and probably beyond).

Passwordstate – Enterprise Password Management Review

Written by William Roush on May 30, 2014 at 4:40 pm

An end-user review of Passwordstate, shared web-based password list software that gets you all the additional features you wanted over KeePass and other equivalents.

Before we start… sorry about the large gap in posts, a mix of writer's block and working on reviews of a handful of things (Zultys PBX, ScreenConnect, etc.). There will be MUCH more to come soon!

I’d also love to write about more IT subjects in Chattanooga (locally developed software, startups, IT community, or businesses), if you have any suggestions feel free to throw them my way!

What is Passwordstate?

Passwordstate is a web-based password management tool written by Clickstudios. Think of it as KeePass on the web, but deployed inside your own private network.

Why Use it Over KeePass?

I personally love KeePass; I can't talk about it enough, and I wrote a post awhile ago all about it. However, as much as I like it, it falls short on some management features I need when working in a team with diverse responsibilities and access levels. While we could create a lot of process and hoop-jumping to resolve this, I'd rather avoid that if possible (plus, we're IT, we want software to do the hoop-jumping and process for us! That is what it is there for).

Prerequisites For Install

The requirements for installation are pretty straightforward: IIS 7+ and MSSQL 2005+. Once these requirements are met, the Passwordstate install is easy. I'm deploying it on IIS 8 and MSSQL 2012 Express on top of Windows 2012 R2 for this review.

Organization

Passwordstate makes everything pretty easy to get to. Unlike KeePass, passwords are kept in “password lists”; imagine these lists as folders in KeePass. These lists can have a long list of permissions and customizations added to them (see later in this review for those options). On top of password lists, you can create folders to store groups of password lists.

Navigating password lists is pretty simple.


In the example above we have a folder for development environment passwords; we could grant our storage admin access to “Storage Arrays”, our DBA to “Database”, and so on, allowing fine control over lists. Additionally I have a personal password list named “William's Password List”; more on personal password lists later.

Password Management

Creating and editing passwords is pretty straightforward: a handful of fields you'll be familiar with if you use a password vault. Nothing really special here other than a very nice UX design.

Auditing

By far the biggest benefit over a system like KeePass is the ability to audit access to passwords. Want to know who last updated the password on a service account? Whether a sysadmin scanned all passwords before leaving? KeePass won't tell me any of that.

Simple UI, easy to grab a password or check recent audit events.


Audit reports can be sent at regular intervals to your e-mail so you can stay on top of what is going on.

Further details on the state of your password lists.


Personal Password Lists

Passwordstate has a different kind of password list for personal use: you can make a list for yourself that has additional security features (while you can password-protect regular password lists, I can usually justify additional passwords on personal lists a lot easier). In this case I've put a password on it separate from my account's, requiring another step of authentication. These lists cannot be seen by administrators and stick with you.

Keeping personal passwords centralized has many benefits too.


The ability to keep your passwords in Passwordstate allows you to easily hand over all account passwords for various pieces of software (for example, if you hold a lot of licensing portal credentials on your personal e-mail account).

Password List Options

Another very powerful addition over KeePass is the customization behind your password lists.

A long list of configurable options helps make each list customized to its purpose.


You can have some lists sync with Active Directory, others with very strict password complexity requirements, some available only during work hours, and others with expiration dates.

Problems With Passwordstate

There are a handful of issues with Passwordstate. First and foremost, everything has to be done via the web UI. While Passwordstate is configured for SSL up front, I can understand the argument that browsers are one of the most exposed pieces of software we use on a daily basis, and putting our passwords in that basket may not be the best idea.

Additionally, if you lose your Passwordstate server, your passwords are unavailable. Passwordstate does provide high-availability options (at additional cost, though), but I'd throw an export of your password lists along with a DB backup into a fire safe and offsite every once in a while, just in case things get really bad.

A small annoyance is that I can't do upgrades unless I set up a backup path. When I'm backing up the entire machine with Veeam and do an upgrade after a snapshot, I really don't care if I have to roll the entire VM back, but I don't have that option. A really minor gripe though; I know why they've done it (for those that don't have good backups in place).

Overall

With it being free for up to 5 users, I don't see why small businesses wouldn't try it! Even beyond that, I'd say the additional safety and auditing is worth the relatively low price of $37/user (which lowers as you add more users) and tops out at $4,272 for unlimited-user installs. This is by far not an exhaustive list of what Passwordstate can do (we've just skimmed the surface), so go grab a 5-user license and try it out today!

100% Qualys SSL Test A+

Written by William Roush on April 1, 2014 at 10:41 pm
Obtaining 100/100/100/100 on Qualys SSL Server Test


For fun, we're going to poke at what it takes to score 100 across the board on the Qualys SSL Server Test, however impractical this configuration may actually be.

Qualys SSL Server Test… What Is It?

Qualys SSL Server Test is an awesome web-based utility that will scan your website's SSL/TLS configuration against Qualys' best practices. It'll run through the various SSL and TLS protocol versions, test all the cipher suites, and simulate negotiation with various browser/operating system setups. It'll give you not only a good basis for understanding how secure your site's SSL/TLS configuration is, but also whether it's accessible to people on older devices (I'm looking at you, Windows XP and older IE versions!).

Getting 100/100/100/100

Late one night I was poking at some discussions on TLS and wondered what it really took to score 100 across the board (I've been deploying sites that scored 100/90/100/90), so I decided to play with my nginx configuration until I scored 100, no matter how impractical.

server {
  ssl_certificate /my_cert_here.crt;
  ssl_certificate_key /my_cert_here.key;

  # TLS 1.2 only.
  ssl_protocols TLSv1.2;

  # PFS, 256-bit only, drop bad ciphers.
  ssl_prefer_server_ciphers on;
  ssl_ciphers ECDH+AESGCM256:DH+AESGCM256:ECDH+AES256:DH+AES256:RSA+AESGCM256:RSA+AES256:!aNULL:!MD5:!kEDH;

  # Enable SSL session resume.
  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout 10m;

  location / {
    # Enable HSTS, enforce for 12 months.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
  }
}
Qualys wants only 256bit (or stronger) cipher suites.



This barely differs from our standard configuration (depending on if you choose to mitigate BEAST instead of RC4 issues)

100/100/100/100 comes at a high price.


To get all 100s, we drop pretty much all but the most modern browsers… oops!

100s Not Realistic

It seems you'll want to aim for 100/90/100/90 with an A+. This configuration gives your users the ability to take advantage of newer features (such as Perfect Forward Secrecy and HTTP Strict Transport Security) and stronger cipher suites while not locking out older XP users, and without exposing your users to too many SSL/TLS vulnerabilities (when supporting XP, you have to choose between protecting against BEAST or using the theoretically compromised RC4 cipher).

So we’ll want to go with something a little more sane:

server {
  ssl_certificate /my_cert_here.crt;
  ssl_certificate_key /my_cert_here.key;

  ssl_protocols  SSLv3 TLSv1 TLSv1.1 TLSv1.2;

  # PFS + strong ciphers + support for RC4-SHA for older systems.
  ssl_prefer_server_ciphers on;
  ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:RC4-SHA:HIGH:!aNULL:!MD5:!kEDH;

  # Enable SSL session resume.
  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout 10m;

  location / {
    # Enable HSTS, enforce for 12 months.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
  }
}

Dan Kaminsky – Black Ops Of PKI

Written by William Roush on March 26, 2014 at 7:58 pm

Amazing talk by Dan Kaminsky discussing what is broken with X.509 (SSL). It's a great dive into how X.509 works, various exploits, and the impending problem of the Verisign MD2 root certificate, which may be open to a preimage attack sometime in the near future.

Solid State Drives Are More Robust Than Spinning Rust

Written by William Roush on March 20, 2014 at 7:37 pm

A number breakdown on why the idea that "SSDs are unreliable" is a silly statement.

I've been hearing some silly assumptions that magnetic drives are more "reliable" than solid state drives (SSDs). I've heard ideas such as "can I mirror my SSDs to regular magnetic disks?" While that configuration completely defeats the purpose of having the SSDs (all disks must flush their writes before additional writes can be serviced), I'll show you why the traditional magnetic drives in it would fail first.

For the sake of being picky about numbers, I'll point out that a few of these are "back of a napkin" calculations. Getting all the numbers I need from a single benchmark is difficult (most people are interested in total bytes read/written, not operations served), and I don't have months to throw a couple of SSDs at this right now.

A Very Liberal Lifetime Of A Traditional Magnetic Disk Drive

So we're going to assume the most extreme possibilities for a magnetic disk drive: a high-performance enterprise-grade drive (15k RPM), running at 100% load 24/7/365 for 10 years. This is borderline insane, and the drive would likely be toast under this much workload long before then, but it helps illustrate my point. The high end of the load these drives can put out is 210 IOPS. So what we see on a daily basis is this:

210 * 60 * 60 * 24 =     18,144,000
18,144,000 * 365   =  6,622,560,000

x 10               = 66,225,600,000

We expect that at the most insane levels of load, performance, and reliability, the disk can perform 66 billion operations in its lifetime.

The Expected Lifetime Of A Solid State Drive

Now I'm going to do the opposite (for the most part): a consumer-grade triple-level cell (TLC) SSD. These drives have some of the shortest lifespans you can expect out of an SSD you can purchase off the shelf. Specifically, we're going to look at a Samsung 250GB TLC drive, which wrote 707TB of information before its first failed sector, at over 2,900 writes per sector.

250GB drive

250,000,000,000 / 4096 = ~61,000,000 sectors.
x2900 writes/sector = 176,900,000,000 write operations.
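The back-of-the-napkin math above can be checked with shell arithmetic (same rounding as the article; the exact sector count is ~61,035,156, which nudges the SSD total slightly above the quoted 176.9 billion):

```shell
# Magnetic disk: 210 IOPS, 24 hours a day, 365 days a year, for 10 years.
hdd_daily=$((210 * 60 * 60 * 24))
hdd_lifetime=$((hdd_daily * 365 * 10))
echo "HDD lifetime operations: $hdd_lifetime"

# SSD: 250GB of 4KiB sectors, ~2,900 writes per sector before first failure.
ssd_sectors=$((250000000000 / 4096))
ssd_lifetime=$((ssd_sectors * 2900))
echo "SSD lifetime write operations: $ssd_lifetime"

# Roughly 2.7x the total work output of the enterprise drive.
```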

Keep in mind: the newer Corsair Force 240GB MLC-E drives claim a whopping 30,000 cycles before failure, but I'm going to keep this to "I blindly chose a random consumer-grade drive to compete with an enterprise-level drive", and not even look at the SSDs aimed at longer lifespans, including enterprise-level SLC flash memory, which can handle over 100,000 cycles per cell!

So What Do You Mean More Robust?

The modern TLC drive from Samsung performed nearly three times the total work of the enterprise-level 15k SAS drive before dying. So why do people see SSDs as "unreliable"? The answer is simple: the Samsung drive will perform up to 61,000 write IOPS, whereas the magnetic disk will perform at best 210; it would take an array of 290 magnetic disks in a theoretically optimal performance configuration (no failover) to match the performance of this single SSD.

Because of this additional throughput, the SSD wears out its lifespan much faster.

So Should I Just Replace My HDDs With SSDs?

Whoa, slow down there, not quite. Magnetic storage still has a solid place everywhere from your home to your data center. The $/GB ratio of magnetic storage is still much preferable to that of SSD storage. For home users this means the new hybrid drives (SSD/HDD) that have been showing up are an excellent choice; for enterprise systems you may want to look at storage platforms that let you use flash storage as read/write caches and for data tiering.

PCI Compliant ScreenConnect Setup Using Nginx

Written by William Roush on February 19, 2014 at 9:26 pm

ScreenConnect's Mono server fails PCI compliance scans from Qualys for a list of reasons out of the box. We're going to configure an Nginx proxy to make it compliant!

There are a few things we'll want before configuring ScreenConnect: two public IP addresses (one for your website, one for the ScreenConnect relay server) and a 3rd-party cert from your favorite cert provider. I'm also going to assume you're running Windows, so I'll include extra instructions; skip those if you know what you're doing and just need the Nginx configuration.

Get Your Certificate

mkdir /opt/certs
cd /opt/certs

# Generate your server's private key.
openssl genrsa -out screenconnect.example.com.key 2048

# Make a new request.
openssl req -new -key screenconnect.example.com.key -out screenconnect.example.com.csr
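As a side note, you can generate the CSR non-interactively with -subj and sanity-check it before sending it off; the filenames and domain are the placeholders used above:

```shell
cd /opt/certs

# Same request as above, but with the subject passed on the command line
# instead of answering the interactive prompts.
openssl req -new -key screenconnect.example.com.key \
  -out screenconnect.example.com.csr \
  -subj "/CN=screenconnect.example.com"

# Verify the CSR's signature and double-check the subject before
# submitting it to the CA.
openssl req -noout -verify -in screenconnect.example.com.csr
openssl req -noout -subject -in screenconnect.example.com.csr
```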

Go ahead and log into your server using WinSCP, copy your .csr file to your desktop, go get a certificate (.crt) from your Certificate Authority, and load it back onto the server.

Recommended ScreenConnect Configuration

In your ScreenConnect directory you have a “web.config” file. You’ll want to edit (or add if not found) the following properties under the “appsettings” section of the configuration file.

<add key="WebServerListenUri" value="http://127.0.0.1:8040/" />
<add key="WebServerAddressableUri" value="https://screenconnect.example.com" />

We want to configure the web server's listen address, and pick a port that we'll use for the internal proxy; I went with the default port 8040. You'll also need to set the addressable URI to the domain for your first IP (it should match the domain on your certificate).

<add key="RelayListenUri" value="relay://[2nd IP]:443/" />
<add key="RelayAddressableUri" value="relay://screenconnectrelay.example.com:443/" />

Additionally, we'll configure our relay server to listen on the second IP. We'll set it to use port 443, which will help us punch through most firewalls, and we'll set the URI to a second domain name pointed at the IP address we specified.

Nginx Configuration

# Defining our ScreenConnect server.
upstream screenconnect {
  server 127.0.0.1:8040;
}

server {
  # Bindings
  listen [1st IP]:80;
  server_name screenconnect.example.com;

  location / {
    # Redirect all non-SSL to SSL-only.
    rewrite ^ https://screenconnect.example.com/ permanent;
  }
}

server {
  # Bindings
  listen [1st IP]:443 default_server ssl;
  server_name screenconnect.example.com;

  # Certificate information
  ssl_certificate /etc/ssl/certs/private/screenconnect.example.com.crt;
  ssl_certificate_key /etc/ssl/certs/private/screenconnect.example.com.key;

  # Limit ciphers to PCI DSS compliant ciphers.
  ssl_ciphers RC4:HIGH:!aNULL:!MD5:!kEDH;
  ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;

  location / {
    # Redirect to local screenconnect
    proxy_pass http://screenconnect;
    proxy_redirect off;
    proxy_buffering off;

    # We're going to set some proxy headers.
    proxy_set_header        Host            $host;
    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;

    # If we get these errors, we want to move to the next upstream.
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

    # If there are errors we're going to intercept them.
    proxy_intercept_errors  on;

    # If there are any 400/500 errors, we'll redirect to the root page to catch the Mono error page.
    error_page 401 402 403 404 405 500 501 502 503 504 /;
  }
}

I’ve run a server with a similar setup through a Qualys PCI compliance scan (which the ScreenConnect server failed horribly prior to the changes), and it passed with flying colors.

Additionally, remember to lock down your iptables rules so you're only open where you absolutely need to be: mainly 80 and 443 on your primary IP and 443 on your secondary IP. Add SSH into the mix if you use it to connect to your servers remotely (make it accessible only from inside your company network though!).
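A minimal iptables sketch of that lockdown might look like the following; the addresses are placeholders for your primary IP, relay IP, and company network, and you'll still need your distribution's mechanism to persist the rules:

```shell
# Default-deny inbound, but keep loopback and established connections working.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Web traffic on the primary IP (placeholder 203.0.113.10).
iptables -A INPUT -d 203.0.113.10 -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -d 203.0.113.10 -p tcp --dport 443 -j ACCEPT

# ScreenConnect relay on the secondary IP (placeholder 203.0.113.11).
iptables -A INPUT -d 203.0.113.11 -p tcp --dport 443 -j ACCEPT

# SSH only from inside the company network (placeholder 198.51.100.0/24).
iptables -A INPUT -s 198.51.100.0/24 -p tcp --dport 22 -j ACCEPT
```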

Statically Compiled LINQ Queries Are Broken In .NET 4.0

Written by William Roush on January 19, 2014 at 5:31 pm

Diving into how a minor change in error handling in .NET 4.0 broke using compiled LINQ queries as per the MSDN documentation.

Query was compiled for a different mapping source than the one associated with the specified DataContext.

When working on high-performance LINQ code, this error can cause massive headaches. This StackOverflow post blames the problem on using multiple LINQ mappings (the same mappings from different DataContexts count as "different mappings"). In the example below, we're going to use the same mapping but different instances, which is extremely common for short-lived DataContexts (and reusing DataContexts comes with a long list of problematic side effects).

namespace ConsoleApplication1
{
    using System;
    using System.Data.Linq;
    using System.Linq;

    class Program
    {
        protected static Func<MyContext, Guid, IQueryable<Post>> Query =
            CompiledQuery.Compile<MyContext, Guid, IQueryable<Post>>(
                (dc, id) =>
                    dc.Posts
                        .Where(p => p.AuthorID == id)
            );

        static void Main(string[] args)
        {
            Guid id = new Guid("340d5914-9d5c-485b-bb8b-9fb97d42be95");
            Guid id2 = new Guid("2453b616-739f-458f-b2e5-54ec7d028785");

            using (var dc = new MyContext("Database.sdf"))
            {
                Console.WriteLine("{0} = {1}", id, Query(dc, id).Count());
            }

            using (var dc = new MyContext("Database.sdf"))
            {
                Console.WriteLine("{0} = {1}", id2, Query(dc, id2).Count());
            }

            Console.WriteLine("Done");
            Console.ReadKey();
        }
    }
}

This example follows MSDN's examples, yet I've seen people recommend you do this to resolve the changes in .NET 4.0:

protected static Func<MyContext, string, IQueryable<Post>> Query
{
    get
    {
        return
            CompiledQuery.Compile<MyContext, string, IQueryable<Post>>(
                 (dc, id) =>
                    dc.Posts
                        .Where(p => p.AuthorID == id)
            );
    }
}

Wait a second! I'm recompiling on every get, right? I've seen claims that it doesn't. However, peeking at the IL code doesn't hint at that; the process is as follows:

  • Check if the query is assignable from ITable, if so let the Lambda function compile it.
  • Create a new CompiledQuery object (just stores the Lambda function as a local variable called “query”).
  • Compile the query using the provider specified by the DataContext (always arg0).

At no point is there a cache check. The only place a cache could live is in the provider (and SqlProvider doesn't have one), and it would be a complete maintenance mess if it were done that way.

Using a test application (code is available at https://bitbucket.org/StrangeWill/blog-csharp-static-compiled-linq-errors/; use the db.sql file to generate the database, and please use a local installation of MSSQL Server for the best speed possible so we can evaluate query compilation times), we're going to force invoking the CompiledQuery.Compile method on every iteration (10,000 by default) by passing in delegates as opposed to passing in the resulting compiled query.

QueryCompiled Average: 0.5639ms
QueryCompiledGet Average: 1.709ms
Individual Queries Average: 2.1312ms
QueryCompiled Different Context (.NET 3.5 only) Average: 0.6051ms
QueryCompiledGet Different Context Average: 1.7518ms
Individual Queries Different Context Average: 2.0723ms

We're no longer seeing a quarter of the runtime that you get with the compiled query. The primary problem lies in this block of code found in CompiledQuery:

if (context.Mapping.MappingSource != this.mappingSource)
{
	throw Error.QueryWasCompiledForDifferentMappingSource();
}

This is where the CompiledQuery checks and enforces that you're using the same mapper. The problem is that System.Data.Linq.Mapping.AttributeMappingSource doesn't provide an Equals override! So it's just comparing whether or not they're the same instance of an object, as opposed to them being equal.

There are a few fixes for this:

  • Use the getter method, and understand that performance benefits will mainly be seen where the result from the property is cached and reused in the same context.
  • Implement your own version of the CompiledQuery class.
  • Reuse DataContexts (typically not recommended! You really shouldn’t…).
  • Stick with .NET 3.5 (ick).
  • Update: RyanF details sharing a MappingSource in the comments below. This is by far the best solution.

You May Pay Even If You Do Everything Right (CryptoLocker)

Written by William Roush on January 13, 2014 at 7:14 pm

Many people in the IT field depend on various products to protect them from CryptoLocker and similar malware, but how realistic is that?

Seth Hall over at tunnl.in wrote an article detailing how many parts of your system must fail in order for CryptoLocker to infect your network. The major problem I have with the article is that this level of trust in your systems to protect you is exactly how a lot of companies got bit by the CryptoLocker ransomware, along with the concept that "if you have these bases covered, you're OK".

You’ll need an email server willing to send you infected executable attachments.

This assumes that CryptoLocker is going to arrive in a form that your email server will catch. One of the easiest ways to keep an email server from blocking a piece of malware attached to an email is to password protect it, which CryptoLocker has been known to do [1] [2] [3]. That leaves a handful of options for detecting the email: either have a signature for the encrypted zip file (which won’t work if unique passwords are being used per email), or attempt to decrypt every zip by searching the body of the email for the password (which I don’t believe any mail filtering service does).

And all of that depends on the idea that you’re being infected by an already-detected derivative of CryptoLocker.

Your perimeter security solution will have to totally fail to spot the incoming threat.

Here Seth is talking about firewall-based anti-malware scanning. This falls into all of the same problems as relying on your email server to protect you.

Your desktop security solution will have to totally fail.

This is one of the big ones everyone relies on: your desktop antivirus catching malware. By far, this is what bit almost everyone infected by CryptoLocker. In my previous post about CryptoLocker I talk about how it wasn’t until 2013-11-11 that antivirus products were preventing CryptoLocker. With PowerLocker on the horizon, these assumptions are dangerous.

Your user education program will have to be proven completely ineffective.

Now this is one of the most important parts of security, and by far one of the things that irks me most in IT. I’ll go into this more in a more business-oriented post, but it comes down to this: what happens if I let someone into the building who doesn’t have an access card? Human Resources would have my head and I could very well lose my job (and rightfully so!). Why do IT’s policies get such lackluster enforcement at most places?

In general, IT policies and training are fairly weak. Users often forget (in my opinion, because there is no risk in not committing the material to memory), and training initiatives are rarely taken seriously. People who "don’t get computers" are often put into positions where they’ll be on one for 8 hours a day (I’m not talking about IT-level proficiency, I’m talking about "don’t open that attachment").

I feel this is mostly due to the infancy of IT in many workplaces, and it will change as damages continue to climb.

Your perimeter security solution will have to totally fail, a second time.

It really depends on how your perimeter security is set up. Some companies block large swaths of the internet to reduce the noise from countries they do no business with and from which they only receive break-in attempts. That is pretty much the only circumstance in which your perimeter security will stop this problem.

Your intrusion prevention system [...] will have to somehow miss the virus loudly and constantly calling out to Russia or China or wherever the bad guys are.

This is a dangerous assumption. CryptoLocker communicates with a command and control server only to fetch a public key to encrypt your files with. I’d be thoroughly impressed by a system that could catch a few kilobytes of encrypted data being requested from a foreign server without constantly raising false alerts from normal use of the internet.

Your backup solution will have to totally fail.

In my opinion, this is the only item on the list that is realistically "100% your responsibility, with a nearly 100% chance of success". Backups with multiple copies, stored cold and off-site, have almost no chance of being damaged, lost, or tampered with; tested backups have almost no chance of failing. Malware can’t touch what it can’t physically access, and this will always be your ace in the hole.

In Conclusion

Don’t take this post the wrong way! The list that Seth gives is a great rundown of security infrastructure, procedures, and policies that should be in place. However, it reads as if you won’t get infected as long as you follow his list, and that is not entirely accurate.

Using Windows Vault To Store TortoiseHg (Mercurial) Passwords

Written by William Roush on December 17, 2013 at 7:34 pm

Mercurial has a built-in method for storing passwords, but it stores them in plaintext in your settings file. For those of us bound by various compliance regulations, or just those of us who care about security, this is a huge no-no.

First, clear your password from TortoiseHg’s authentication section for your repository if you haven’t already. This removes your credentials from the repository’s “.hg\hgrc” settings file; you may want to confirm that manually.
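A cleaned-up [auth] section in “.hg\hgrc” would look something like this (the prefix and username are hypothetical examples):

```ini
[auth]
repo.prefix = hg.example.com
repo.username = wroush
; no "repo.password = ..." line should remain here
```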

Mercurial-Auth

Next, enable the Mercurial Keyring extension by pasting the text below into your mercurial.ini file (which can be accessed via File > Settings > Edit File). The extension is bundled with TortoiseHg, so a path is not required:

[extensions]
mercurial_keyring=

On the next push it’ll ask for your password; enter it and it should never ask again.

To confirm or update the password saved in the Windows Vault, go to Control Panel > User Accounts > Manage your credentials.

Mercurial-Windows-Vault