<p>John Louros blog. This is the John Louros blog feed (<a href="http://johnlouros.com/">johnlouros.com</a>).</p>
<h1><a href="http://johnlouros.com/blog/diagnose-windows-service-startup-dotnet-core">Diagnose ASP.Net Core startup issues when hosted as a Windows Service</a></h1>
<p>Published 2018-08-20 by John Louros</p>
<p>One of the great features released in ASP.Net Core 2.1 is the capability to host an application as a Windows Service. For Windows-based production environments, Windows Services tend to be the chosen execution strategy for smaller services. It's a bare-bones solution compared to IIS, but in a lot of cases, developers just need an open port to interact with their application.</p>
<p>Another core feature of .Net Core is the capability of compiling your application as a self-contained application. This means that you don't need to install .Net on the machine the application is intended to run on. The generated output will contain all the necessary files to run as a stand-alone application.</p>
<p>Another feature we need to praise .Net Core for is how easy it is to install. And why is this relevant, you may ask? If your Continuous Integration pipeline supports running PowerShell or shell scripts, follow the guide described in <a href="https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-install-script">this page</a> to learn how to install it. Then, if you combine the two previously mentioned features (run as a Windows Service and compile as a self-contained application), existing Windows Services dependent on the full .Net Framework can be swapped with .Net Core applications with minimal effort (assuming the same naming conventions are kept).</p>
<pre><code class="language-PowerShell"># installs .Net Core SDK 2.1.400, if it was not previously installed
if ((dotnet --version | Where-Object { $_ -match '^2.1.400$' } | Measure-object).Count -ne 1) {
Write-Output 'Installing latest version of .Net Core (2.1.400)'
Invoke-WebRequest 'https://dot.net/v1/dotnet-install.ps1' -o dotnet-install.ps1
.\dotnet-install.ps1 -Channel 2.1 -Version 2.1.400 -InstallDir 'C:/Program Files/dotnet/'
}
dotnet --version
</code></pre>
<p>However, the main focus of this article is how to troubleshoot Windows Service startup issues, in the context of a given .Net Core application, when you don't have access to the target machine. This was something I faced recently, so I thought it was worth sharing the PowerShell scripts I used.</p>
<p>Let's create an example application from scratch. The following code snippets were created with the assumption that PowerShell 5.1 and .Net Core SDK 2.1.400 are installed.</p>
<pre><code class="language-powershell">mkdir example_app; cd example_app
dotnet new web
dotnet add package Microsoft.AspNetCore.Hosting.WindowsServices --version 2.1.1
# edit Program.cs to enable host to run as service
dotnet publish --self-contained --runtime win7-x64 --output dist
</code></pre>
<p>The application is now compiled. Let's run it as a Windows Service.</p>
<pre><code class="language-powershell">New-Service -Name example-service -BinaryPathName ((Get-Item .\dist\example-app.exe).FullName)
Start-Service -Name example-service
Get-Service -Name example-service
Invoke-WebRequest 'http://localhost:5000' -UseBasicParsing
</code></pre>
<p>Everything should be working without issues. Now, for experimentation purposes, let's throw an exception. Open <em>Program.cs</em> in your favourite editor and copy-paste the following code.</p>
<pre><code class="language-csharp">using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Hosting.WindowsServices;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;
namespace example_app
{
public class Program
{
public static void Main(string[] args)
{
CreateWebHostBuilder(args).Build().RunAsService();
}
public static IWebHostBuilder CreateWebHostBuilder(string[] args)
{
var pathToExe = Process.GetCurrentProcess().MainModule.FileName;
var pathToContentRoot = Path.GetDirectoryName(pathToExe);
// let's make it crash
throw new Exception("forcing fatal exception...");
return WebHost.CreateDefaultBuilder(args)
.UseContentRoot(pathToContentRoot)
.UseStartup<Startup>();
}
}
}
</code></pre>
<p>The service previously created should still be running. Let's stop, delete, compile, install and start our application (now throwing an exception).</p>
<pre><code class="language-powershell">Stop-Service -Name example-service
sc.exe delete example-service
Remove-Item './dist/' -Force -Recurse
dotnet publish --self-contained --runtime win7-x64 --output dist
New-Service -Name example-service -BinaryPathName ((Get-Item .\dist\example-app.exe).FullName)
Start-Service -Name example-service
</code></pre>
<p>Now let's take a look at Event Viewer logs.</p>
<pre><code class="language-powershell">$applicationName = 'example-app'
Get-EventLog -Newest 10 -LogName System -Source "Service Control Manager" | ? { $_.Message -match $applicationName }
Get-EventLog -Newest 10 -LogName Application -Source @("Application Error", ".NET Runtime", "Windows Error Reporting") | ? { $_.Message -match $applicationName }
</code></pre>
<p>That's it. You should be able to find the root cause of the issue in the messages of the returned log entries.</p>
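<p>The default table output truncates long messages. Here is a small sketch (reusing the <code>$applicationName</code> variable from above) that dumps the full text of the most recent entries, which is where the exception details usually live:</p>
<pre><code class="language-powershell"># print the complete message text of the latest .NET Runtime / Application Error entries
Get-EventLog -Newest 5 -LogName Application -Source @("Application Error", ".NET Runtime") |
    Where-Object { $_.Message -match $applicationName } |
    Format-List TimeGenerated, Source, Message
</code></pre>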
<p>Finally, this is only an example, so let's clean up after ourselves. Notice the last step uses <em>C:\Windows\System32\sc.exe</em> instead of <code>Remove-Service</code>. The reason is that <code>Remove-Service</code> is a PowerShell 6 cmdlet and Windows 10 ships with PowerShell 5.1. Just to be safe, I prefer to call the Service Control executable.</p>
<pre><code class="language-powershell">Stop-Service -Name example-service
sc.exe delete example-service
</code></pre>
<p>References</p>
<ul>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/windows-service?view=aspnetcore-2.1">Host ASP.NET Core in a Windows Service</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-install-script">dotnet-install scripts reference</a></li>
</ul>
<h1><a href="http://johnlouros.com/blog/setup-security-headers-s3-host-website">Setup HTTP Security headers in an S3 hosted website</a></h1>
<p>Published 2018-05-23 by John Louros</p>
<p>In a previous <a href="/blog/using-CloudFront-to-serve-an-SPA-from-S3">blog post</a> I described how to host an Angular application using S3 and CloudFront. The combination of S3 and CloudFront is required to avoid 404 (page not found) errors when a user tries to access a dynamic route defined in Angular. By itself, S3 works as a static website host (you can think of it as a directory), so it's only prepared to serve the files it is hosting. CloudFront can be configured to intercept error responses (like 404 file not found) from S3 and return the root of your Angular app (index.html); it is then Angular's responsibility to navigate and display the correct page for the given route. This technique can be applied to other SPA frameworks, but for the sake of simplicity I will focus this blog post around Angular.</p>
<p>The described strategy works perfectly fine, but it has some limitations, like the inability to configure custom HTTP headers. This is problematic if you want to configure HTTP Security headers. If you are not familiar with HTTP Security headers, I strongly recommend a quick read of this <a href="https://www.keycdn.com/blog/http-security-headers">article</a>, or for a more in-depth look, check out this <a href="https://app.pluralsight.com/library/courses/browser-security-headers/table-of-contents">PluralSight course</a>. Additional articles about HTTP security headers can be found at the bottom of this post. Bottom line: we want to be able to configure custom HTTP headers.</p>
<p>To get around this problem we could spin up an EC2 instance, install a web server (i.e. <a href="http://nginx.org/">nginx</a>) and configure it ourselves. It's a totally valid solution, but the implication is a new virtual machine that you will need to manage.</p>
<p>The solution we are going to explore today uses CloudFront in combination with a Lambda@Edge function. CloudFront provides the capability to associate a Lambda function that will act as an HTTP interceptor. Conceptually, this technique can be interpreted as web application middleware, but applied to a cloud-native application. In summary, responses from S3 will be intercepted by a Lambda function and modified to include the HTTP headers defined by us. The following diagram describes the event flow of this solution.</p>
<p><img src="/content/img/blog/cloudfront-events-that-trigger-lambda-functions.png" alt="cloudfront events that trigger lambda functions" /></p>
<p>Before proceeding please be aware of the current <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-requirements-limits.html">AWS Lambda@Edge requirements</a>. Most notably, at the time of this writing, CloudFront can only be associated with Lambda functions created in the US East (N. Virginia) region. Keep an eye on the provided link for updates. Also, double-check that none of the blacklisted or read-only headers are being modified.</p>
<p>This article is a follow-up to two of my previous articles, therefore it's assumed:</p>
<ul>
<li><a href="/blog/host-your-angular-app-in-aws-s3">your Single-Page-Application is hosted in S3</a></li>
<li><a href="blog/using-CloudFront-to-serve-an-SPA-from-S3">AWS CloudFront is used to serve the application</a></li>
</ul>
<p>Now let's create our Lambda function. Here is a list of required settings, followed by the respective screenshot:</p>
<ul>
<li>set AWS region to US East (N. Virginia)</li>
<li>set runtime to Node.js 6.10 (works with 8.10 too)</li>
<li>ensure this Lambda function has read access to S3 (<em>"S3 object read-only permissions"</em>)</li>
<li>ensure it has <em>"Basic Lambda Edge permissions"</em></li>
</ul>
<p><img src="/content/img/blog/create_lambda_edge.png" alt="create lambda screen" /></p>
<p>Regarding the code, there are a few things you need to be aware of. At the moment, environment variables are not supported. If you use environment variables, the following error will show up when you try to associate the function with CloudFront: <code>com.amazonaws.services.cloudfront.model.InvalidLambdaFunctionAssociationException: The function cannot have environment variables. Function: arn:aws:lambda:us-east-1:999999999999:function:s3_response_interceptor_for_spa:1 (Service: AmazonCloudFront; Status Code: 400; Error Code: InvalidLambdaFunctionAssociation; Request ID: e9b7e605-5d45-11e8-940a-273897b66c49)</code></p>
<p>Additionally, the function needs to be published before it can be associated with a CloudFront event. Unfortunately, this makes testing a bit painful, since you would need to publish every change to be able to test it. Fortunately, we can leverage the Lambda test configuration to simulate a CloudFront origin response event. But let's leave testing for later. For now, let's check the actual Lambda function code. There are some inline comments to help explain the rationale behind it.</p>
<pre><code class="language-javascript">'use strict';
// function settings (since environment variables are not supported)
const s3BucketName = 'pwa.johnlouros.com';
const s3IndexFile = 'index.html';
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
exports.handler = (event, context, callback) => {
// modify response by intercepting CloudFront Origin Response event
let response = event.Records[0].cf.response;
const headers = response.headers;
// set HTTP Security headers or other custom headers (check the AWS docs for limitations)
headers['content-security-policy'] = [{
key: 'Content-Security-Policy',
value: "script-src 'self'"
}];
headers['x-content-type-options'] = [{
key: 'X-Content-Type-Options',
value: "nosniff"
}];
headers['x-frame-options'] = [{
key: 'X-Frame-Options',
value: "DENY"
}];
headers['x-xss-protection'] = [{
key: 'X-XSS-Protection',
value: "1; mode=block"
}];
headers['referrer-policy'] = [{
key: 'Referrer-Policy',
value: "same-origin"
}];
// handle 'Bad Request', 'Forbidden' or 'Not Found' responses
if (response.status === '400' || response.status === '403' || response.status === '404') {
// from S3 get the contents of 'index.html'
s3.getObject({
Bucket: s3BucketName,
Key: s3IndexFile
}, (err, data) => {
if (err) {
callback(err);
} else {
headers['content-type'] = [{
key: 'Content-Type',
value: "text/html"
}];
// prepare response message
response = {
headers: headers,
body: data.Body.toString('utf-8'),
status: '200',
statusDescription: 'OK'
}
callback(null, response);
}
});
} else {
callback(null, response);
}
};
</code></pre>
<p>We don't want to publish this version without testing. Let's create a test event to verify how this code behaves when S3 returns a 400 error. Screenshot and respective event JSON below:</p>
<p><img src="/content/img/blog/lambda_edge_configure_test.png" alt="configure Lambda test to mock CloudFront origin response event" /></p>
<pre><code class="language-json">{
"Records": [
{
"cf": {
"config": {
"distributionId": "EXAMPLE"
},
"response": {
"status": "400",
"headers": {
"last-modified": [
{
"value": "2016-11-25",
"key": "Last-Modified"
}
],
"vary": [
{
"value": "*",
"key": "Vary"
}
],
"x-amz-meta-last-modified": [
{
"value": "2016-01-01",
"key": "X-Amz-Meta-Last-Modified"
}
]
},
"statusDescription": "OK"
}
}
}
]
}
</code></pre>
<p>Assuming everything was properly defined, the test execution result should return a 200 status code and the body of your 'index.html'. Now let's publish this version.</p>
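<p>If you prefer the command line over the console, the AWS CLI can publish the new version too. A minimal sketch, assuming the AWS CLI is installed and configured, and using the function name that appears in the error message above:</p>
<pre><code class="language-powershell"># publish a new version of the function (Lambda@Edge functions live in us-east-1)
aws lambda publish-version --function-name s3_response_interceptor_for_spa --region us-east-1
</code></pre>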
<p><img src="/content/img/blog/lambda_edge_publish_version.png" alt="publish a new version of this Lambda function" /></p>
<p>To be able to associate an AWS Lambda@Edge function with a CloudFront event, grab the Lambda ARN from the published version.</p>
<p><img src="/content/img/blog/lambda_edge_view_published.png" alt="view lambda function published" /></p>
<p>Then navigate to your CloudFront distribution, select the 'Behaviors' tab and edit the main behavior.</p>
<p><img src="/content/img/blog/cloudfront_behaviors_page.png" alt="cloudfront behaviors page" /></p>
<p>Create a new Lambda Function Association, mapping 'Origin Response' to your published function version ARN.</p>
<p><img src="/content/img/blog/cloudfront_behaviors_edit_add_lamda_edge.png" alt="edit CloudFront behaviors" /></p>
<p>Finally, in a <a href="blog/using-CloudFront-to-serve-an-SPA-from-S3">previous blog post</a> it was described how CloudFront 'Error Pages' could be used to get around errors bubbled up from S3. Please make sure those custom error response mappings are deleted. From now on, the Lambda function will handle all your error responses from S3; to avoid any undesired behavior, ensure CloudFront doesn't handle the error responses itself.</p>
<p><img src="/content/img/blog/cloudfront_error_pages_strike.png" alt="remove error pages settings from CloudFront" /></p>
<p>References</p>
<ul>
<li><a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html">AWS Lambda@Edge documentation</a></li>
<li><a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-requirements-limits.html#lambda-header-restrictions">AWS Lambda@Edge headers restrictions</a></li>
<li><a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-limits.html#limits-lambda-at-edge">limits on Lambda@Edge</a></li>
<li><a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-event-structure.html#lambda-event-structure-response">CloudFront response event structure passed to Lambda</a></li>
<li><a href="https://app.pluralsight.com/library/courses/browser-security-headers">introduction to Browser Security Headers</a></li>
<li><a href="https://www.keycdn.com/blog/http-security-headers">explaining HTTP Security Headers</a></li>
<li><a href="https://report-uri.com/home/tools">tools to analyse and build security headers</a></li>
<li><a href="https://securityheaders.io/">analyse security headers</a></li>
</ul>
<h1><a href="http://johnlouros.com/blog/how-to-fix-VS2017-missing-XAML-tools-build-error">How to fix VS2017 missing XAML tools build error</a></h1>
<p>Published 2018-01-04 by John Louros</p>
<p>Typically, when an older project is opened for the first time in the latest version of Visual Studio, the migration manager kicks in and automatically corrects any inconsistencies between Visual Studio versions. However, other issues may still arise. Using the default install of Visual Studio 2017 Community Edition, trying to compile the solution I was working on returned the following error:
<img src="/content/img/blog/vs2017-missing-XAML-build-tools-error-description.png" alt="error description" /></p>
<p>Taking a closer look at the error: <code>The "Microsoft.Build.Tasks.Xaml.PartialClassGenerationTask" task could not be loaded from the assembly C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\amd64\XamlBuildTask.dll. Could not load file or assembly 'file:///C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\amd64\XamlBuildTask.dll' or one of its dependencies. The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask.</code>. It looks like there are a few XAML files that need to be compiled. Honestly, I was not familiar with this particular project so I was totally unaware of this dependency.</p>
<p>To solve this issue, modify your Visual Studio installation (using the Visual Studio Installer): in the "Individual components" tab, look for "Windows Workflow Foundation" and select it.
<img src="/content/img/blog/vs2017-installer-wwf-selected.png" alt="VS2017 installer" /></p>
<p>This might seem like a fairly obvious solution if you traditionally work with XAML or Windows Workflow Foundation. However, since I was not used to either, it took me some time to figure this out. Hopefully this article can help somebody else.</p>
<h1><a href="http://johnlouros.com/blog/uploading-a-angular-app-to-S3-with-npm">Uploading an Angular App to S3 with npm</a></h1>
<p>Published 2017-05-04 by John Louros</p>
<p>Lately I have been working on a few Angular projects. I would like to share some of the discoveries I have made, but honestly I'm lacking the inspiration to write. I would rather just share my code and move on, but some information is not that easy to convey through code. Depending on your interest, I might do both: force myself into writing more often, and keep a small shared Angular project with small, gradual, self-explanatory commits. For the upcoming blog posts, expect more Angular. When I say Angular I mean Angular 2 and above, not to be confused with the first version (now named AngularJS). If you are not familiar with the differences, please check this brilliant <a href="https://www.quora.com/What-is-the-difference-between-AngularJs-and-Angular-2">Quora answer</a>.</p>
<p>In a previous <a href="https://johnlouros.com/blog/host-your-angular-app-in-aws-s3">blog post</a>, I described how to host an Angular app in AWS S3, and then how to use it in combination with <a href="https://johnlouros.com/blog/using-CloudFront-to-serve-an-SPA-from-S3">AWS CloudFront to tackle some of the features S3 lacks</a>. Now that we know how to serve our SPA, what about deployment? There are multiple ways to 'skin this cat', especially if there's a team involved, but for this example let's make it as easy as possible. Our goal is to run a single command, <code>npm run deploy</code>, to deploy a compiled Angular app to an S3 bucket.</p>
<p>Let me walk you through the necessary tools, starting with the most obvious: <a href="https://nodejs.org/en/">node.js</a>. Make sure both node and npm are installed by opening your preferred command prompt and calling <code>node -v</code> and <code>npm -v</code>. For this example, I'm going to use <a href="https://cli.angular.io/">Angular-CLI</a>; keep in mind this is optional, I'm just going to use it to create a new Angular project. To install Angular-CLI run <code>npm install -g @angular/cli</code>, then check if it was properly installed by running <code>ng -v</code>.</p>
<p><img src="/Content/img/blog/spa-to-aws/ng-v.jpg" alt="check Angular-CLI version" /></p>
<p>The next step is to create a new Angular project: <code>ng new MyAngularApp</code>. I'm not going to change or add anything to the actual project; the objective is to compile it as it is and deploy it to S3.</p>
<p><img src="/Content/img/blog/spa-to-aws/ng-new.jpg" alt="create new project using Angular-CLI" /></p>
<p>Next, we need to install the AWS SDK: <code>npm install aws-sdk --save-dev</code>. This allows us to spawn a client that will interact with S3. Additionally, for every file uploaded to S3, the proper <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types">MIME type</a> needs to be set. If not specified, S3 will default to <em>'application/octet-stream'</em>, resulting in files being downloaded instead of being interpreted and rendered by the browser. To avoid manually mapping files to their respective MIME types, let's install a helper library: <code>npm install mime-types --save-dev</code></p>
<p><img src="/Content/img/blog/spa-to-aws/npm-i-aws-sdk.jpg" alt="install AWS-SDK using npm" /></p>
<p>You will also need to configure the AWS SDK to get access to your AWS account. For more details, please check the 'configure' section in the <a href="https://aws.amazon.com/sdk-for-node-js/">AWS SDK for Node.js instructions</a>.</p>
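<p>If you don't have credentials configured yet, the Node.js SDK will, among other options, pick them up from the shared credentials file. A minimal sketch for Windows (the key values are placeholders you need to replace with your own):</p>
<pre><code class="language-powershell"># create the shared AWS credentials file the SDK looks for (~/.aws/credentials)
New-Item -ItemType Directory -Force "$HOME\.aws" | Out-Null
@"
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
"@ | Set-Content "$HOME\.aws\credentials"
</code></pre>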
<p>Now let's create our deployment script. It will be a simple JavaScript file that will be interpreted and run by node, invoked through npm run-script. For now, just create a new folder named '<em>scripts</em>' at the root level of your project. Inside this folder create a file named '<em>deploy.js</em>'. The path from your project's root should look like this: '<em>./scripts/deploy.js</em>'. Grab the following code and paste it into your newly created file. The code itself is simple; the inline comments should help you understand it.</p>
<pre><code class="language-javascript">const AWS = require("aws-sdk"); // imports AWS SDK
const mime = require('mime-types') // mime type resolver
const fs = require("fs"); // utility from node.js to interact with the file system
const path = require("path"); // utility from node.js to manage file/folder paths
// configuration necessary for this script to run
const config = {
s3BucketName: 'your.s3.bucket.name',
folderPath: '../dist' // path relative to this script's location
};
// initialise S3 client
const s3 = new AWS.S3({
signatureVersion: 'v4'
});
// resolve full folder path
const distFolderPath = path.join(__dirname, config.folderPath);
// Normalize \\ paths to / paths.
function unixifyPath(filepath) {
return process.platform === 'win32' ? filepath.replace(/\\/g, '/') : filepath;
};
// Recurse into a directory, executing callback for each file.
function walk(rootdir, callback, subdir) {
// is sub-directory
const isSubdir = subdir ? true : false;
// absolute path
const abspath = subdir ? path.join(rootdir, subdir) : rootdir;
// read all files in the current directory
fs.readdirSync(abspath).forEach((filename) => {
// full file path
const filepath = path.join(abspath, filename);
// check if current path is a directory
if (fs.statSync(filepath).isDirectory()) {
walk(rootdir, callback, unixifyPath(path.join(subdir || '', filename || '')))
} else {
fs.readFile(filepath, (error, fileContent) => {
// if unable to read file contents, throw exception
if (error) {
throw error;
}
// map the current file with the respective MIME type
const mimeType = mime.lookup(filepath)
// build S3 PUT object request
const s3Obj = {
// set appropriate S3 Bucket path
Bucket: isSubdir ? `${config.s3BucketName}/${subdir}` : config.s3BucketName,
Key: filename,
Body: fileContent,
ContentType: mimeType
}
// upload file to S3 (the first callback argument is the error, if any)
s3.putObject(s3Obj, (err) => {
if (err) {
throw err;
}
console.log(`Successfully uploaded '${filepath}' with MIME type '${mimeType}'`)
})
})
}
})
}
// start upload process
walk(distFolderPath, (filepath, rootdir, subdir, filename) => {
console.log('Filepath', filepath);
});
</code></pre>
<p>To trigger the deployment we will need a task runner. Feel free to pick whatever you prefer; personally I tend to use <a href="https://docs.npmjs.com/cli/run-script">npm run-script</a>. Define a task name and the command in the project's '<em>package.json</em>' and use <code>npm run <task name></code> to execute it. Let's do it for our example project. Open your project's 'package.json'. In the 'scripts' section add the following entry: <code>"deploy": "node ./scripts/deploy.js"</code>, instructing node.js to execute './scripts/deploy.js', which will handle the deployment of the compiled Angular application.</p>
<p>Additionally, you can tell npm to compile the project before deploying it. If you create a new entry with the same task name prepended with <em>'pre'</em>, npm will run that task before executing the task you requested. You can create a multi-step chain using this technique. Add a new entry with the following <code>"predeploy": "ng build -prod -aot"</code>, instructing Angular-CLI to compile in production mode with ahead-of-time compilation enabled. Check out an excerpt of '<em>package.json</em>' containing only our modifications.</p>
<pre><code class="language-json">{
"name": "my-angular-app",
"scripts": {
(...) // existing scripts
"predeploy": "ng build -prod -aot", // will run before 'deploy' (notice 'pre')
"deploy": "node ./scripts/deploy.js" // tell node.js to execute './scripts/deploy.js'
},
"dependencies": {
(...) // existing dependencies
},
"devDependencies": {
(...) // existing devDependencies
"aws-sdk": "^2.48.0", // installed with npm
"mime-types": "^2.1.15" // installed with npm
}
}
</code></pre>
<p>Now you can try 'the whole shebang' by executing <code>npm run deploy</code>. Boom, your code is now deployed in S3.</p>
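<p>If you also have the AWS CLI installed, a quick sanity check is to list what ended up in the bucket (the bucket name is the one from the deploy script's config):</p>
<pre><code class="language-powershell"># list every object uploaded to the bucket
aws s3 ls s3://your.s3.bucket.name --recursive
</code></pre>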
<p><em>(update)</em> My thanks to Diego Arevalo for providing a script update to traverse subfolders.</p>
<p><img src="/Content/img/blog/spa-to-aws/npm-run-deploy.jpg" alt="executing npm run deploy" /></p>
<h1><a href="http://johnlouros.com/blog/using-CloudFront-to-serve-an-SPA-from-S3">Using AWS CloudFront to serve an SPA hosted on S3</a></h1>
<p>Published 2017-03-07 by John Louros</p>
<p>My <a href="/blog/host-your-angular-app-in-aws-s3">previous post</a> explained how AWS S3 could be configured to host a static website. However, AWS S3 static website hosting might not provide all the necessary options required by modern Single-Page Applications, nor the flexibility to handle custom domains or SSL certificates. This blog post will demonstrate how AWS CloudFront can sit on top (or in front) of AWS S3 to provide a more fine-tuned web service.</p>
<p>It's known that AWS S3 can be set up to host static websites. This is a great capability in terms of simplicity, but it's not the main focus of S3. Keep in mind that S3 is focused on file storage and distribution. Website hosting configurations were purposely kept to a set of basic options. A default document can be configured for the root navigation, an error document can be assigned for any error that occurs inside the S3 bucket, and that's it in terms of configuration. On top of that, to make it all work we need to allow read access to any (anonymous) user, meaning everybody will have read access to all files inside our S3 bucket. If log files are stored in the same bucket, everyone will be able to see them.</p>
<p>Going back to Single-Page Application requirements, most SPA frameworks nowadays support <a href="https://developer.mozilla.org/en-US/docs/Web/API/History_API#Adding_and_modifying_history_entries">HTML5 history.pushState()</a> to change the browser location without triggering a server request. This technique works perfectly if all users begin their journey from the root of our application or from '/index.html', but it fails when the user navigates directly to any other page (i.e. '/about', because 'about' is not a file present in S3). In the latter scenario, the error page defined in the S3 configuration will be returned. To solve this issue we need to make sure all incoming requests return our SPA entry point (usually 'index.html'), even when S3 returns a 404 or 403 HTTP error. Elaborating on the previous example, requests to '/about' by default return 404 because 'about' is not a file present in S3, or 403 if the file exists but S3 denies access to the anonymous identity.</p>
<p>What if we want more flexibility, while keeping our files stored in S3? This is where <a href="https://aws.amazon.com/cloudfront/">AWS CloudFront</a> comes into play. CloudFront provides the capability to define custom domains, set up HTTPS using your own SSL certificate or CloudFront's default certificate (might not be very useful if you want to use a custom domain name) and redirect HTTP error codes to specific locations (this will be used to redirect everything to our SPA entry point). Obviously CloudFront provides more options like caching, allowed HTTP verbs, CDN (edge location) configurations and so on, but they are not entirely relevant to the point I'm trying to get across.</p>
<p>Additionally, CloudFront can also help us define more fine-grained S3 bucket access. Commonly, we don't want to provide direct access to S3; ideally, all traffic should be routed through our custom domain. This way CloudFront can gather analytics for us and we will be able to define how our content is distributed, by leveraging Edge locations (the CloudFront CDN). This is done by creating a CloudFront 'Origin Access Identity' and only allowing S3 access to this principal.</p>
<p>The next sequence of screenshots demonstrates how CloudFront can be configured to serve an existing S3 bucket.</p>
<p>From AWS CloudFront, select 'Distributions' and click 'Create Distribution'
<img src="/Content/img/blog/aws-cloudfront/1.jpg" alt="AWS CloudFront screenshot 1" /></p>
<p>Click 'Get Started' from the Web delivery method section
<img src="/Content/img/blog/aws-cloudfront/2.jpg" alt="AWS CloudFront screenshot 2" /></p>
<p>Set the S3 bucket you want to be served and which folder should be the root (leave blank for everything)
<img src="/Content/img/blog/aws-cloudfront/3.jpg" alt="AWS CloudFront screenshot 3" /></p>
<p>Select 'Restrict Bucket Access' and create a new Origin Access Identity. If preferred, also select the option to update the S3 bucket policy (but you can do this manually later on). Save this configuration.
<img src="/Content/img/blog/aws-cloudfront/4.jpg" alt="AWS CloudFront screenshot 4" /></p>
<p>Verify the newly created Origin Access Identity (navigate from the left panel)
<img src="/Content/img/blog/aws-cloudfront/5.jpg" alt="AWS CloudFront screenshot 5" /></p>
<p>Navigate to your S3 bucket and click 'Edit bucket policy'
<img src="/Content/img/blog/aws-cloudfront/6.jpg" alt="AWS CloudFront screenshot 6" /></p>
<p>Ensure that only the Origin Access Identity associated with your distribution has access to your S3 bucket
<img src="/Content/img/blog/aws-cloudfront/7.jpg" alt="AWS CloudFront screenshot 7" /></p>
<p>Navigate back to CloudFront and select the distribution previously created.
<img src="/Content/img/blog/aws-cloudfront/8.jpg" alt="AWS CloudFront screenshot 8" /></p>
<p>Select 'Error Pages' tab and click 'Create Custom Error Response'
<img src="/Content/img/blog/aws-cloudfront/9.jpg" alt="AWS CloudFront screenshot 9" /></p>
<p>Create two custom error responses, one for 404 and one for 403, both returning '/index.html' with a 200 response code.
<img src="/Content/img/blog/aws-cloudfront/10.jpg" alt="AWS CloudFront screenshot 10" /></p>
<p>Now you just need to wait for CloudFront to finish provisioning (it usually takes around 20 minutes) and you're all set. Your SPA should work without a problem.</p>
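<p>A quick way to confirm the error-page mappings are doing their job is to request a route that doesn't exist as a file in S3 and check that it comes back as a 200 with your index page. A small sketch (the domain below is a placeholder for your distribution's domain name):</p>
<pre><code class="language-powershell"># a deep link should now return 200 and the SPA entry point instead of a 404/403
(Invoke-WebRequest 'https://d1234abcdef8.cloudfront.net/about' -UseBasicParsing).StatusCode
</code></pre>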
<p><img src="/Content/img/blog/cloudfront.jpg" alt="AWS CloudFront logo" /></p>
<h1><a href="http://johnlouros.com/blog/host-your-angular-app-in-aws-s3">How to host your Angular 2 application in AWS with S3</a></h1>
<p>Published 2017-01-31 by John Louros</p>
<p>Lately I have been working on a single-page application (<a href="https://en.wikipedia.org/wiki/Single-page_application">SPA</a>). From a high-level perspective, the goal is to provide a documentation portal for a set of RESTful APIs. The APIs themselves were designed with Swagger, so all information could be easily discovered and consumed from the Swagger output (a JSON file containing paths, resource definitions, security details, and so on). Additional information, not provided by <a href="http://swagger.io/">Swagger</a>, is written in markdown files and presented in a separate location (things like getting started guides, how to authenticate, and so on).</p>
<p>The client development is done in <a href="https://angular.io/">Angular 2</a> and it's packaged with <a href="https://webpack.github.io/">WebPack</a>. If you are not familiar with Angular do not worry, it's merely a technical detail. The output of packaging our Angular code is an index.html, a Cascading Style Sheet and a couple of JavaScript files (thanks to the magic of WebPack, the JavaScript is aggregated into a small set of output files). Since all markdown files are written up-front and the Swagger output is pre-generated, there's no need for a back-end service. That's it, a simple web application.</p>
<p>For such a simple application, hosting should be easy too, right? We could use Virtual Machines, but then we would have to set up and configure a Web Server, manage Operating System and software updates, manage load-balancing and all the fun that comes with it (like server rotation, load distribution strategies and testability), install security certificates, configure delegation permissions, user roles, and so on. Honestly, it's way too much work for such a simple application. Keep in mind we are just serving static files.</p>
<p>For this challenge, Amazon Web Services offers quite a neat solution: meet AWS S3, or Amazon Simple Storage Service. It is described by Amazon as an "object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web. It is designed to deliver 99.999999999% durability, and scale past trillions of objects worldwide". In other words, this service can be described as a very reliable, auto-scalable, highly-customizable Dropbox for developers (funnily enough, Dropbox uses S3 as a file storage facility). The pricing model revolves around file transfer and space used, but honestly, it's quite <a href="https://aws.amazon.com/s3/pricing/">cheap</a>. The rules are quite simple: just create a new S3 bucket with a unique name, upload your files, enable 'website hosting' and configure the bucket policy to allow read-only access for anybody. The following set of screenshots demonstrates how you can do this.</p>
<p>Login to your AWS account and navigate to <a href="https://console.aws.amazon.com/s3/">S3 console</a>
Create a new S3 bucket, just keep in mind the bucket name needs to be globally unique.
<img src="/Content/img/blog/aws-s3/create-bucket.jpg" alt="Create new S3 bucket" /></p>
<p>Highlight the newly created bucket, select 'Static Website Hosting', enable it and define the default index file (i.e. index.html)
<img src="/Content/img/blog/aws-s3/enable-website-hosting.jpg" alt="Enable website hosting" /></p>
<p>Define bucket policy by selecting 'Permissions'
<img src="/Content/img/blog/aws-s3/set-bucket-policy.jpg" alt="Define bucket policy" /></p>
<p>By default, S3 buckets are private and not accessible to unauthorized users, however we want to use it as a website. To allow any public anonymous user read access to all objects inside the 'www.johnlouros.com' bucket, the following policy can be used. The policy itself states that any action to get a bucket object, for any principal (or user), should be allowed on any object inside the 'www.johnlouros.com' bucket.</p>
<pre><code class="language-json">{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"AddPerm",
"Effect":"Allow",
"Principal": "*",
"Action":["s3:GetObject"],
"Resource":["arn:aws:s3:::www.johnlouros.com/*"]
}
]
}
</code></pre>
<p>From the bucket policy definition, let's take a closer look at the 'Resource' property. It can be broken down into the following sections:</p>
<ul>
<li>'arn:aws:s3:::' defines the Amazon Resource Name (arn) for the AWS S3 service (aws:s3)</li>
<li>'www.johnlouros.com' is the bucket name</li>
<li>'/*' applies it to all objects in the bucket</li>
</ul>
<p>That's all, now you can navigate to the endpoint provided in the 'Static Website Hosting' section to test your application. As a sanity check, open a new browser window in private mode to ensure you're not browsing the website as an authenticated AWS user.</p>
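<p>If you prefer scripting these steps over clicking through the console, the AWS CLI covers the same ground. A rough sketch, assuming the CLI is installed and configured, and that the bucket policy shown above is still applied afterwards:</p>
<pre><code class="language-powershell"># create the bucket, enable static website hosting and upload the compiled app
aws s3 mb s3://www.johnlouros.com
aws s3 website s3://www.johnlouros.com/ --index-document index.html
aws s3 sync ./dist s3://www.johnlouros.com
</code></pre>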
<p>What do you think of this approach? I can tell you that there are a few limitations, but I will leave those for an upcoming article.</p>
<p><img src="/Content/img/blog/Amazon_AngularJS.jpg" alt="Amazon, AngularJS" /></p>
<h1><a href="http://johnlouros.com/blog/unit-testing-in-powershell">Unit testing in PowerShell</a></h1>
<p>Published 2016-12-13 by John Louros</p>
<p>With the recurring need for PowerShell to manage infrastructure setup, monitoring and deployment, it's vastly important that we have the right tools in place in order to keep PowerShell scripts organized and properly maintained. Nowadays, the development of software applications requires a common set of strategies/practices so developers can feel confident about making new changes and/or refactoring existing code. Techniques like test-driven development are quite common in the object-oriented programming world. Unfortunately, when it comes to infrastructure code, developers tend to ignore the same strategies, mostly due to time constraints or the lack of reliable tools to test PowerShell, Shell or Batch scripts.</p>
<p>This article will focus solely on PowerShell unit testing and static code analysis, providing an option for each scenario. Although these tools are widely recommended, in Software Development there are no "silver bullets". They might fit some of your needs, but do not expect 100% unit test code coverage and/or unlimited flexibility. As in test-driven development, it's expected that a particular set of guidelines is followed in order to properly test most of your code. On the other hand, we want to test a very versatile and powerful scripting language, capable of anything from simple string formatting to provisioning a Virtual Machine in Azure. Obviously we want to be realistic and mindful about what we can and cannot test (do we really want to spin up a new Azure VM every time the unit tests run?).</p>
<h2>Unit test your PowerShell scripts with Pester</h2>
<p>As described on Pester's GitHub repository, "Pester provides a framework for running unit tests to execute and validate PowerShell commands from within PowerShell". Expect the common set of functionality you can find in any unit test framework: assertions; before/after test execution actions; test contexts; test cases (or triangulation). Also expect more advanced functionality like: mocking existing commands (your own or PowerShell commands like <code>Write-Host</code>); file operation isolation; code coverage metrics.</p>
<p>With Pester, both PowerShell script files (*.ps1) and PowerShell modules (*.psm1) can be tested. However, when testing script files, you must be aware that Pester will execute the entire file (this loads all defined functions). Any code defined outside of a function will always be executed, making it impossible to mock. Rule of thumb: move all of your code into functions. PowerShell caches modules to speed up dependency lookup. Keep that in mind when testing modules, since an older (cached) version might be used instead of the latest. Rule of thumb: when importing modules in test files, always use the <code>-Force</code> switch with the <code>Import-Module</code> cmdlet to force a reload of the latest module version.</p>
<h3>Getting started</h3>
<p>Assuming the following script is saved as 'numbers.ps1'</p>
<pre><code class="language-powershell">param([switch]$shouldWriteToHost)
function Get-RandomNumberBetweenZeroAndNine {
return Get-Random -Minimum 0 -Maximum 10
}
function Sum-Numbers {
param([int]$a, [int] $b)
return $a + $b
}
$a = Get-RandomNumberBetweenZeroAndNine
$b = Get-RandomNumberBetweenZeroAndNine
$result = Sum-Numbers $a $b
if($shouldWriteToHost) {
Write-Host "$a + $b = $result"
}
</code></pre>
<p>Noticeably a portion of the code is not wrapped in a function. As previously stated, it's highly recommended that we place everything in PowerShell functions.
Let's quickly refactor this script to improve testability.</p>
<pre><code class="language-powershell">function Get-RandomNumberBetweenZeroAndNine {
return Get-Random -Minimum 0 -Maximum 10
}
function Sum-Numbers {
param([int]$a, [int] $b)
return $a + $b
}
function Invoke-SumRandomNumbers {
param([switch]$shouldWriteToHost)
$a = Get-RandomNumberBetweenZeroAndNine
$b = Get-RandomNumberBetweenZeroAndNine
$result = Sum-Numbers $a $b
if($shouldWriteToHost) {
Write-Host "$a + $b = $result"
}
}
</code></pre>
<p>One particular problem with this newly refactored script is that it doesn't automatically execute. All functions are loaded, but no function gets called, so practically no code gets executed. <code>Invoke-SumRandomNumbers</code> needs to be explicitly called to achieve the same behaviour. Later on I will demonstrate how the same behaviour can be replicated by moving all the code into a PowerShell module. For now, let's focus on the tests.</p>
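<p>For reference, restoring the refactored script's original behaviour is just a matter of dot-sourcing it and calling the new entry function explicitly:</p>
<pre><code class="language-powershell"># load the functions into the current session and invoke the entry point
. .\numbers.ps1
Invoke-SumRandomNumbers -shouldWriteToHost
</code></pre>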
<p>Create a new PowerShell script file to hold your test definitions. As an example, let's mock <code>Get-RandomNumberBetweenZeroAndNine</code> to control its output and verify that it's called twice. Let's also check that <code>Write-Host</code> gets called with the expected result.</p>
<pre><code class="language-powershell">$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path) -replace '\.Tests\.', '.'
# execute all files that do not contain 'Tests' in the name (to load all necessary functions)
. "$here\$sut"
Describe "Invoke-SumRandomNumbers" {
Context "With input parameters" {
# arrange
Mock Get-RandomNumberBetweenZeroAndNine -MockWith { return 2 }
Mock Write-Host {}
# act
Invoke-SumRandomNumbers -shouldWriteToHost
# assert
It "Should call 'Write-Host' with the expected result" {
Assert-MockCalled Write-Host -Exactly 1 -ParameterFilter { $Object -eq '2 + 2 = 4' }
}
It "Should call 'Get-RandomNumberBetweenZeroAndNine' twice" {
Assert-MockCalled Get-RandomNumberBetweenZeroAndNine -Exactly 2
}
}
}
</code></pre>
<p>Finally, let's run our unit tests. If the Pester module is installed, just open a PowerShell console, navigate to the directory where the scripts are located and run <code>Invoke-Pester</code>. You should get a report similar to this:
<img src="/Content/img/blog/Pester-run-example.png" alt="Pester run example" /></p>
<p>Now, going back to how the same behaviour can be achieved with a PowerShell module: start by renaming 'numbers.ps1' to 'numbers.psm1' and define which functions should be public. In this case we are just interested in <code>Invoke-SumRandomNumbers</code>.</p>
<pre><code class="language-powershell">function Get-RandomNumberBetweenZeroAndNine {
return Get-Random -Minimum 0 -Maximum 10
}
function Sum-Numbers {
param([int]$a, [int] $b)
return $a + $b
}
function Invoke-SumRandomNumbers {
param([switch]$shouldWriteToHost)
$a = Get-RandomNumberBetweenZeroAndNine
$b = Get-RandomNumberBetweenZeroAndNine
$result = Sum-Numbers $a $b
if($shouldWriteToHost) {
Write-Host "$a + $b = $result"
}
}
Export-ModuleMember -Function Invoke-SumRandomNumbers
</code></pre>
<p>In the unit test definition file, import our module and wrap all test contexts with <code>InModuleScope numbers</code>.</p>
<pre><code class="language-powershell"># always use '-Force' to load the latest version of the module
Import-Module ".\numbers.psm1" -Force
Describe "Invoke-SumRandomNumbers" {
InModuleScope numbers {
Context "With input parameters" {
# arrange
Mock Get-RandomNumberBetweenZeroAndNine -MockWith { return 2 }
Mock Write-Host {}
# act
Invoke-SumRandomNumbers -shouldWriteToHost
# assert
It "Should call 'Write-Host' with the expected result" {
Assert-MockCalled Write-Host -Exactly 1 -ParameterFilter { $Object -eq '2 + 2 = 4' }
}
It "Should call 'Get-RandomNumberBetweenZeroAndNine' twice" {
Assert-MockCalled Get-RandomNumberBetweenZeroAndNine -Exactly 2
}
}
}
}
</code></pre>
<p>That's it! You're all set.</p>
<h2>Installation instructions</h2>
<pre><code class="language-powershell"># Installing items from the Gallery requires the latest version of the PowerShellGet module,
# which is available in Windows 10, in Windows Management Framework (WMF) 5.0, or in
# the MSI-based installer (for PowerShell 3 and 4).
# Check https://www.powershellgallery.com/ for more details.
# Open a PowerShell command prompt in Administrator mode
# add 'PSGallery' as a trusted PowerShell module repository
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
# Download and install modules from PSGallery
Install-Module -Name PSScriptAnalyzer
Install-Module -Name Pester
</code></pre>
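<p>The installation block above also pulls in PSScriptAnalyzer, the static code analysis option mentioned at the start of this article. Running it over the example script is a one-liner:</p>
<pre><code class="language-powershell"># run static code analysis over the example script (lists rule violations, if any)
Invoke-ScriptAnalyzer -Path .\numbers.ps1
</code></pre>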
<h2>References</h2>
<ul>
<li>Pester source code: https://github.com/pester/Pester</li>
<li>PowerShell Gallery is the central repository for PowerShell content (just like NuGet.org for NuGet packages): https://www.powershellgallery.com/</li>
<li>Pester module: https://www.powershellgallery.com/packages/Pester/</li>
<li>Windows Management Framework 5.0 download link (already pre-installed on Windows 10) https://www.microsoft.com/en-us/download/details.aspx?id=50395</li>
<li>PowerShellGet (previously known as OneGet) module cmdlets: https://technet.microsoft.com/en-us/library/dn807169.aspx</li>
</ul>
<h1><a href="http://johnlouros.com/blog/warm-greetings-from-london">Warm greetings from London</a></h1>
<p>Published 2016-12-12 by John Louros</p>
<p>As you might have noticed, I have been absent from this blog for the last couple of months. Not that I ran out of topics or don't want to share my opinions, it's just that "life got in the way". Putting it bluntly, my priorities shifted, so I chose to spend more time on other tasks. Obviously, I could spare a few minutes here and there to write a new blog post, but sometimes you just need a long period of reflection to observe and rethink what to do next.</p>
<p>So what exactly happened? By the end of the Summer I left the U.S. and moved back to Europe. Consequently, I had to find a new place, take care of a bunch of paperwork related to the move (which is far more cumbersome when you move to another country), left my old job, had to refresh my interview skills, and already started on a new gig. It was quite a change for just a couple of months. Now that I'm almost fully adapted to my new routine, I feel it's the right time to write again.</p>
<p>Before anybody starts wondering: no, I didn't leave the U.S. because of the current political situation, although I must admit that my timing could not be better. Let's not expand on this topic, so I don't stain this blog with political arguments. At the end of the day, it is what it is and my arguments can't change anything.</p>
<p>Anyway, this was just a “hey, I’m still here” and I will be back with more of my tech articles before the end of the week.</p>
<p><img src="/content/img/blog/london.jpg" alt="London" /></p>
<h1><a href="http://johnlouros.com/blog/basics-of-cryptography-with-open-ssl">Basics of cryptography with OpenSSL</a></h1>
<p>Published 2016-08-05 by John Louros</p>
<p>OpenSSL became publicly known (unfortunately) for the wrong reasons. Their development team got the typical backlash that System Administrators usually get: if everything is working fine nobody cares, but as soon as something bad happens everybody loses their mind. I'm obviously talking about the <a href="http://heartbleed.com/">Heartbleed bug</a>. Before Heartbleed was found, it was estimated that 61% of all Apache servers used OpenSSL to handle TLS/SSL connections. As soon as it was found, a lot of people freaked out. Successfully using this exploit could allow an attacker to read a target server's memory, extract its private key and ultimately mount a man-in-the-middle attack. On the other hand, the OpenSSL development team consisted of 11 contributors and a budget of less than $1 million a year (most of it from donations). In a world where even large Corporations, with almost unlimited resources, consistently release buggy software, a team of eleven developers should be allowed to make a few mistakes.</p>
<p>A few years passed and now it's water under the bridge. The original team eventually fixed this bug, while others forked the OpenSSL source code and made the fixes themselves (<a href="http://www.libressl.org/">LibreSSL</a>), and some were inspired by the original source code to create their own version (<a href="https://boringssl.googlesource.com/boringssl/">boringSSL</a>).</p>
<p>Anyway, on this blog post I want to step back a little and provide a quick overview of the basics of cryptography. There are tons of articles out there that explain every concept of cryptography in detail. However, I'm not a Mathematician, so the most you could get out of me is a blunt explanation. All I want to show you is how you can use the OpenSSL command line to encrypt and decrypt a file using symmetric and asymmetric cryptography.</p>
<p>Let's start by explaining the difference between symmetric and asymmetric encryption. Imagine the following scenario: Edward wants to send confidential information to Laura. If they choose to use symmetric encryption, both of them need to have exactly the same key to encrypt and decrypt messages. With asymmetric encryption, before any information is sent, Laura must generate a public and private key combination. Laura will keep the private key to herself and store it somewhere safe. Then Laura will send the public key to Edward, and Edward will use Laura's public key to encrypt the sensitive information he wants to share with Laura. Then Edward sends the encrypted information to Laura and Laura uses her private key to decrypt the information. In a very simplistic way, the public key is used to encrypt and the private key to decrypt. If I recall correctly, asymmetric encryption was used by Edward Snowden and Laura Poitras to talk about NSA practices and leak confidential documents. If you haven't seen it, I strongly recommend watching HBO's documentary <a href="http://www.imdb.com/title/tt4044364/">Citizenfour</a>, which highlights how Edward Snowden orchestrated the leak. But enough about theory, let's do it ourselves.</p>
<p>Download the OpenSSL binaries compiled for Windows at <a href="https://indy.fulgan.com/SSL/openssl-1.0.2h-x64_86-win64.zip">https://indy.fulgan.com/SSL/openssl-1.0.2h-x64_86-win64.zip</a>. Once you have it, navigate to the directory where you extracted OpenSSL binaries and run "openssl.exe version" to check what version you're using. On this blog post I'm going to use version 'OpenSSL 1.0.2h 3 May 2016'. Most likely you will also have to create a configuration file, otherwise every time OpenSSL is executed the following warning might be displayed: 'WARNING: can't open config file: /usr/local/ssl/openssl.cnf'. You can download a sample configuration file from <a href="http://docs.oracle.com/cd/E19509-01/820-3503/ggeyz/index.html">Oracle's website</a>. Save it in 'C:\usr\local\ssl\openssl.cnf', update it with proper directory references (target existing directories) and you're good to go.</p>
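<p>Alternatively, instead of creating the file at that hard-coded path, you can point OpenSSL at a configuration file anywhere on disk through the <code>OPENSSL_CONF</code> environment variable. A quick sketch from PowerShell (the path below is just an example):</p>
<pre><code class="language-powershell"># point OpenSSL at a configuration file of your choosing for the current session
$env:OPENSSL_CONF = 'C:\tools\openssl\openssl.cnf'
.\openssl.exe version
</code></pre>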
<p>If you are running Windows 10 with the Anniversary Update (version 10.0.10586 or above), you can use Bash on Ubuntu on Windows. Scott Hanselman has a great walk-through video on <a href="https://www.youtube.com/watch?v=DmsJHocTt84">how to run Linux on Windows 10</a>. OpenSSL should come pre-installed. As usual I tend to prefer PowerShell, so for this post I will be calling 'OpenSSL.exe' from PowerShell. But the commands will be the same, no matter what 'prompt' you prefer.</p>
<p>Ok, let's create a new text file (named 'plain.txt') to test encryption. Keep in mind that I'm using PowerShell.</p>
<pre><code class="language-powershell">Set-Content -Path 'plain.txt' -Value 'Testing encryption with OpenSSL'
</code></pre>
<p>Now let's use AES-256 (symmetric encryption) to encrypt our plain text file.</p>
<pre><code class="language-powershell">.\openssl.exe enc -aes-256-ctr -in plain.txt -out aesEncryptedTxt.bin
</code></pre>
<p>You will be prompted to enter a password. I will be using 'test1' as our password (just for future reference).
To decrypt the file, use the following command (passing the password as an argument is optional).</p>
<pre><code class="language-powershell">.\openssl.exe enc -aes-256-ctr -d -in aesEncryptedTxt.bin -pass pass:test1
</code></pre>
<p>Easy right? As you might have noticed, the only thing protecting our text is the selected cipher and the provided password. Realistically this is not good enough. For the sake of simplicity, I'm not providing an AES key. Just keep in mind that providing an explicit AES key adds an additional level of protection against brute-force attacks.</p>
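<p>For reference, here is roughly what providing the key material explicitly looks like. A sketch only: it uses the <code>-K</code>/<code>-iv</code> options of <code>openssl enc</code> with a randomly generated hex key and IV, which you would then need to store and share securely:</p>
<pre><code class="language-powershell"># generate a random 256-bit key and 128-bit IV (hex encoded)
$key = .\openssl.exe rand -hex 32
$iv  = .\openssl.exe rand -hex 16
# encrypt and decrypt using the raw key/IV instead of a password
.\openssl.exe enc -aes-256-ctr -in plain.txt -out aesEncryptedTxt.bin -K $key -iv $iv
.\openssl.exe enc -aes-256-ctr -d -in aesEncryptedTxt.bin -K $key -iv $iv
</code></pre>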
<p>For asymmetric encryption, let's start by generating a 2048-bit RSA public/private key pair.</p>
<pre><code class="language-powershell">.\openssl.exe genrsa -out myKeyPair.pem 2048
</code></pre>
<p>As previously mentioned, the private key must be kept in a secure place. Even better if it's encrypted. Let's use AES-256 to encrypt our key pair (you will be prompted to enter a password).</p>
<pre><code class="language-powershell">.\openssl.exe rsa -in myKeyPair.pem -aes-256-ctr -out myKeyPair-Encrypted.pem
</code></pre>
<p>Now your friend Edward wants to send you a confidential file. He asks you for a public key. Here's how we can generate a public key for Edward.</p>
<pre><code class="language-powershell">.\openssl.exe rsa -in myKeyPair-Encrypted.pem -pubout -out pubKeyForEdward.pem
</code></pre>
<p>Key distribution is a whole problem by itself; for the sake of simplicity let's skip that part. For now, imagine that you gave Edward a flash drive with the public key. Edward will now use your public key to encrypt the file he wants to send. Using your public key, Edward will be able to encrypt the file, but won't be able to decrypt it (but who cares, he has the original file).</p>
<pre><code class="language-powershell">.\openssl.exe rsautl -encrypt -in plain.txt -pubin -inkey pubKeyForEdward.pem -out edwardEncryptedFile.bin
</code></pre>
<p>Edward has sent the encrypted file to you; to decrypt it, just run:</p>
<pre><code class="language-powershell">.\openssl.exe rsautl -decrypt -in edwardEncryptedFile.bin -inkey myKeyPair-Encrypted.pem
</code></pre>
<p>Simple right? Honestly, this was incredibly over-simplified. In my defense, most tutorials about cryptography dump a ton of information on newcomers, which can be quite overwhelming. My approach is the opposite: by providing the basics I want the reader to feel comfortable and to pique their interest in this subject. Just keep in mind that I purposely skipped some concepts for the sake of simplicity.</p>
<p><img src="/content/img/blog/openssl-logo.png" alt="OpenSSL logo" /></p>
http://johnlouros.com/blog/solving-simple-problems-with-client-side-web-appsSolving simple problems with client-side web applications2016-06-07T00:00:002016-06-07T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>Whether we like it or not, JavaScript has been exponentially growing in popularity. While the benefits can be obvious, there as some side-effects that are usually overlooked. As more and more features are being pushed to modern web browsers (making it compelling to build web applications that work in almost every device), on the other hand feature fragmentation is getting worst every day. We can point fingers at all the different browser vendors and their constant push of new updates. To make matters worse, the same browser might have different versions for Desktop and Mobile devices (Chrome != Chrome for Android != Android Browser). Some of this problems could be avoided if everyone always ran the latest versions (and Google Chrome started off with great ideas to make that possible) but the reality is very different. Take a look at <a href="http://caniuse.com/">can I use</a> to have a rough idea of feature reach and disparity. At a higher level, consider how many people still use Windows 7 (and uses IE)... Does everyone upgrade to the latest iPhone or Samsung Galaxy as soon as they come out? Most Android phones aren't even compatible with the latest Android version. If fragmentation is gigantic in the most used mobile OS, imagine how big that is problem if you account for all available web browsers. To be fair fragmentation is an issue if your audience uses a diverse variety of browsers and you are using brand new browser features. Both problems are easily identifiable, fixing them might not be so easy. Anyway, just be aware of this issues when you're working on web applications. Now that we are done the warnings, let's jump into a practical example.</p>
<p>Imagine a family member just asked for your help to solve a simple problem: analyze each row of a given <a href="https://en.wikipedia.org/wiki/Comma-separated_values">csv file</a> (comma-separated values; looks like a spreadsheet separated by commas and line breaks, a very simple way to structure data) to verify if the element in the last column (let's call it the 'pivot') is smaller than all elements in the first six columns and, if that condition is met, add a new column at the end of the row containing the difference between the 'pivot' and the value in the sixth column. Quite simple, right? Anybody could do it by hand, but we are developers, so let's make the machine do the hard work for us.</p>
<p>For this particular case we could develop a Desktop or Mobile application, but let's try to create a Web Application. Plus, this problem is so simple that server-side processing won't be required at all; let's do everything in client-side JavaScript. In other words, our friend's web browser will do all the work, we just provide the set of instructions it needs to run.</p>
<p>This problem is quite simple to solve; even so, let's break it down into a list of straightforward requirements:</p>
<ul>
<li>the user needs to upload a csv</li>
<li>instead of a file upload dialog, drag-drop is preferred</li>
<li>figure out a way to read the contents of the file provided by the user</li>
<li>analyze the csv file</li>
<li>return the resulting csv file to the user (user must download it)</li>
<li>it must run on the latest Google Chrome (for Desktop), don't worry about any other browser</li>
<li>all code must run on the client browser (no file uploads to our web server)</li>
</ul>
<p>Due to the simplicity of this problem, let's jump straight into code. Just check the comments for more details. Let's start with the HTML:</p>
<pre><code class="language-html"><!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="description" content="Simple Match prem analysis">
<meta name="author" content="John Louros">
<title>Basic csv analysis</title>
<style type="text/css">
body { font: normal 16px/20px "Helvetica Neue", Helvetica, sans-serif; background: rgb(237, 237, 236); margin: 0; margin-top: 40px; padding: 0; }
section, header, footer { display: block; }
header, article > * { margin: 20px; }
h1 { padding-top: 10px; }
[contenteditable]:hover:not(:focus) { outline: 1px dotted #ccc; }
#wrapper { width: 600px; margin: 0 auto; background: #fff url('data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD/7QBAUGhvdG9zaG9wIDMuMAA4QklNBAQAAAAAACQcAVoAAxslRxwCAAACAAIcAkYAEFBpeGVsbWF0b3IgMS40LjH/4QCpRXhpZgAATU0AKgAAAAgABgESAAMAAAABAAEAAAEaAAUAAAABAAAAVgEbAAUAAAABAAAAXgEoAAMAAAABAAIAAAExAAIAAAARAAAAZodpAAQAAAABAAAAdwAAAAAAAABIAAAAAQAAAEgAAAABUGl4ZWxtYXRvciAxLjQuMQAAA6ABAAMAAAAB//8AAKACAAQAAAABAAACV6ADAAQAAAABAAAA0QAAAAD/2wBDAAICAgICAQICAgICAgIDAwYEAwMDAwcFBQQGCAcICAgHCAgJCg0LCQkMCggICw8LDA0ODg4OCQsQEQ8OEQ0ODg7/2wBDAQICAgMDAwYEBAYOCQgJDg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg4ODg7/wAARCADRAlcDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD9/KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooqOgCSiiigApNw96ZRQBJRSbh70bh70ALRUdP8A+BUALRRRQAUUVHQBJSbh70bh70ygCSio6KAJKKj+SpKACik3D3paACiiigAoqOigCSik3D3o3D3oAWk3D3paKAE3D3pajooAkopNw96WgAopP+A0tABRRRQAUVHRQBJRUdFAElFFFABRRSbh70ALRRRQAUUUUAFFFFABRUe/3ooAkoqOigCSiik3D3oAWim/NTqACik3D3paACik3D3paAE3D3paTaPemUASUVHRQBJRUdG/3oAkopNw96T5aAHUUVHQBJRUdFABRTN30o3v7/nQA+imbvpSUASUb/eo6buoAm3+9G/3pm76UbvpQA+iq9O3UATUVDup+76UAP3PRv8AembvpRu+lAD6KjooAkoqOigCSiq9SUASUVHRQBJv96N/vUdFAC7vpT9/vUdN3UATb/ejf71XooAsUVXqXd9KAH0Uzd9KSgCTf70VHRQBJRv96h3U/d9KAEqSmbvpSUASUVDup1AEm/3oqOm7qAH7vpRu+lM3U2gCxRUdFAElG/3qOigCTf70VHRQBJRv96jpd30oAfn/AG6KZu+lG76UAPo3+9M3fSkoAkopm76UbvpQA/f70b/eq9SUASb/AHo3+9R0UASb/ejf71HRQAu76U/f71HRQBJv96N/vUdFAEm/3o3+9M3fSkoAk3+9FR0UASUVHRQBJv8Aeio6KAG7qZu+tM3+9G/3oAkoqOjf70ASUm760zf70UASUVHv96N/vQBJSbvrRu+tM3+9AElFR0UATbqN1Q0UATbqN1Q0UATbqZu+tMooAm3U2o6N/vQBJSbvrTN/vRv96AH7vrS1Hv8AeigCSjn+5UdHyUAP3fWl3r/k1Hv96N/vQBJRUdFAElJu+tM3+9FAD931pajo3+9AElFR7ko3+9BmSUVHRQaElFJu+tM3+9AElFR7/ejf70ASUVHRv96AJKKj3+9G/wB6AH7vrRu+tMooAm3Ubqh3+9G/3oAm3Uzd9aZv96KAH7vrRu+tMooAkoqOjf70ATbqZu+tM3+9G/3oAkoqPf70b/egCbdTaTd9aN31oAN31o3fWmUUAP3fWjd9aZRQA/d9aN31plFAElFR0b/egCSio9/vRv8AegCSnbqho3+9AE26iod/vRQBHS7vpUO760bvrQBNu+lG76VFSbvrQAtSVFu+tG760ATbvpRu+lQ7vrS0AS7vpSVHRQBJRUdJu+tAEtFN3Uzd9aAJaXd9Kh3fWloAl3fSjd9Kip26gB+76UlR0UASUVHRQBLu+lG76VFSbvrQA/dT930qHd9aN31oAm3fSoqTd9aN31oAloqOigCSio6KAJKbuptFADt1P3fSoqTd9aAJt30o3fSod31o3fWgCbd9KN30qKnbqAH7vpRu+lQ7vrRu+tAE276UbvpUO760bvrQBLS7vpUO760bvrQA/dTqi3fWjd9aAJt30o3fSoqKDMkpd30qKk3fWgCWl3fSod31o3fWg0H7qfu+lQ7vrRu+tBmTbvpRu+lRUUGgUUm760bvrQAtO3U2igB26nVFu+tG760AS0u76VDu+tLQBJRUdJu+tAE276UzdTaTd9aAH7qdUW760bvrQBLRUW760bvrQBNu+lFRUUAR7/ejf71Duo3UATb/AHo3+9Q7qN1AE2/3oqHdTaDMsUVXp26gCbf70b/embvpTN1AE2/3oqHdRuoAmo3+9Q7qN1AE2/3o3+9V6duoAm3+9G/3qHdRuoNCbf70b/eod1NoMyxv96Kr07dQBNv96N/vUO6jdQaE2/3o3+9Q7qN1BmTb/ejf71Duo3UATbv9miq9O3UGhNRv96h3UbqDMmoqHdRuoAmoqHdTaDQsb/ejf71Xp26gzJt/vTN30pm6m0GhY3+9G/3qvTt1BmTb/ejf71Duo3UGhNv96N/vTN30pm6gCbf70b/eod1G6gzJt/vRUO6jdQBNRv8Aeod1G6gCaiq9FBoS7vpRu+lM3UbqAH7vpT9/vUO6m0GZYo/3ah3UbqAJvno3+9Q7qN1BoTUVDuo3UGZNv96N/vUO6jdQBNRUO6jdQaE1G/3qHdRuoAm3+9M3fSmbqbQBY3+9G/3qHdRuo
Afu+lFM3UUAM3fWjd9agpd30oAm3fWlqDd9KN30oMyek3fWod30pKAJ931o3fWoKXd9KAH7/en7vrUO76UbvpQBNu+tMpm76UlAElFR0UASb/en7vrUO76U/f70AP3fWjd9ah3fSjd9KAJ6Td9ah3fSjd9KAJt31pm/3pm76UlAE+760bvrUO76UbvpQBPUe/3pm76UfP7UAP3+9P3fWod30p+/3oAfu+tG761Du+lJQBPu+tLVepN/vQA/d9aN31qCigCfd9aN31qHd9KN30oAm3fWjd9ah3fSjd9KAH7/AHo3+9R0u76UATbvrRu+tQ7vpSUAT7vrRu+tQUUAT7vrRu+tQ7vpRu+lAE2760bvrUO76UbvpQBNu+tG761BS7vpQA/f70/d9ah3fSj/AIEaAJt31o3fWoKKALFFV6Xd9KAJt31o3fWod30o3fSgCbd9aN31qCigCfd9aN31qCig0J931o3fWoKKDMn3fWjd9aZUdAE+760bvrUO76UbvpQBPRUG76UbvpQBPSbvrUO76UbvpQBPSbvrUFFAE+760VDu+lFADN1NoqOgCbdRuqGigCbdTajo3+9AE26jdTN31o3fWgB+6jdUNG/3oAm3UbqhqSgB26jdUO/3p+760AP3UbqZu+tG760ALRUdFAElO3VDRv8AegCbdRupm760f8CFBoP3UbqhooAkp26od/vUlBmO3Ubqho3+9AE26jdTN31o3fWgB+6jdTN31paAHbqN1Q0UASUUVHv96AJKKj3+9G/3oAkopN31paACnbqbSbvrQAtFR0UATbqN1M3fWmUATbqN1Q7/AHp+760AP3UbqZu+tG760AP3UbqZu+tG760AP3Ubqh3+9FAElO/4FTaTd9aAFopN31o3fWgB+6jdTN31plAE26jdUNSUAO3U2o6N/vQBJTt1M3fWjd9aAH7qN1M3fWloAduo3Uzd9aN31oAWnbqZu+tLQA7dTaTd9aN31oAWiiigCDd9KN30qKpKAF3fSn1Duo3UATUb/eo6KAF3fSjd9Kip26gCbf70zd9KZuptAEu76U/f71HS7vpQAlFR0UASUVHUlAC7vpRu+lRU7dQA6l3fSoqKAJd30pKjqSgBd30o3fSoqKAJKkqvTt1AE2/3pm76UzdRuoAfu+lG76VFRQBLu+lPqvTt1AE1FV6KAJd30o3fSoqduoAm3+9R03dRuoAfu+lG76VFRQBLu+lG76VFRQBJRTd1G6gB+76UbvpSVHQBLu+lG76VFRQBJRTd1G6gB+76UbvpTN1NoAl3fSjd9KiqSgAoopu6gB+76U/f71HRQAu76UbvpTN1G6gB+76U+od1G6gCbf70zd9KZuo3UAOpd30qKigCSio6KAJd30o3fSoqKAJd30pKjooAl3fSjd9KiooAkoqOigAoqruf+/Ruf+/QBaoqruf+/TPNb+9+tAF2nbqoNLL/AHqrNPL/AM9aANaisNridf8AlrULXV1/DcSUAdFRXIyXt/8A893qnJf6j/z9SUAd1RXm0mqap/z+zpVOTV9Z/wCf+egD1WivHpNZ17+HUZ6rNrniH/oIz/8AjlAHtVFeHtr3iP8A6Cl3/wCOVC2veI/+gvd/+OUAe7UV4M2veI/+gvdf+OUxvEPib/oLXf8A45QB75RXgn/CQeJf+gvd/wCf+AUf8JB4l/6C93/n/gFAHvdFeCf8JB4l/wCgvd/5/wCAUf8ACQeJf+gvd/5/4BQB73RXgn/CQeJf+gvd/wCf+AUf8JB4l/6C93/n/gFAHvdFeCf8JB4l/wCgvd/5/wCAUf8ACQeJf+gvd/5/4BQB73RXgn/CQeJf+gvd/wCf+AUf8JB4l/6C93/n/gFAHvdFeCf8JB4l/wCgvd/5/wCAUf8ACQeJf+gvd/5/4BQB73RXgn/CQeJf+gvd/wCf+AUf8JB4l/6C93/n/gFBoe90V4J/wkHiX/oL3f8An/gFH/CQeJf+gvd/5/4BQZnvdFeCf8JB4l/6C93/AJ/4BR/wkHiX/oL3f+f+AUAe90V4J/wkHiX/AKC93/n/AIBR/wAJB4l/6C93/n/gFBoe90V4J/wkHiX/AKC93/n/AIBR/wAJB4l/6C93/n/gFBme90V4J/wkHiX/AKC93/n/AIBR/wAJB4l/6C93/n/gFAHvdFeCf8JB4l/6C93/AJ/4BR/wkHiX/oL3f+f+AUAe90V4J/wkHiX/AKC93/n/AIBR/wAJB4l/6C93/n/gFBoe90V4J/wkHiX/AKC93/n/AIBR/wAJB4l/6C93/n/gFBme90V4J/wkHiX/AKC93/n/AIBR/wAJB4l/6C93/n/gFAHvdFeCf8JB4l/6C93/AJ/4BR/wkHiX/oL3f+f+AUAe90V4J/wkHiX/AKC93/n/AIBR/wAJB4l/6C93/n/gFAHvdFeCf8JB4l/6C93/AJ/4BR/wkHiX/oL3f+f+AUAe90V4J/wkHiX/AKC93/n/AIBR/wAJB4l/6C93/n/gFBoe90V4H/wkPiX/AKC93T/+Eg8R/wDQWuv/ABygzPeaK8J/t7xH/wBBa7/8cp6694j/AOgpd/8AjlAHudFeKrrfiH/oIz1Mus69/wBBGegD2SivJ49X1v8A5/56uR6pq3/P5PQB6ZRXn8eo6p/HeSVfjvb/APiupKAOxormlur3/n4eplurr/nrQB0O6m1krcS/89amWWX+/QBoUVS82X+/T1dv71AFqiqu5/79FADaKKKDQKRulLRQBHULJU1FAFNlqFl/uVfZah2/WgDNZarSRVq7KY0VAGDJb/7FU5LWukaKoWioA5hrWoWtf9iuna36VC1rQBzDWaf3Ki+xGupa1pjWtAHK/Y6PsX+xXV/ZV96i+y/7P60Acz9g9qPsHtXTfZf9n9aPsv8As/rQBzP2D2o+we1dN9l/2f1o+y/7P60Acz9g9qPsHtXTfZf9n9aPsv8As/rQBzP2D2o+we1dN9l/2f1o+y/7P60Acz9g9qPsHtXTfZf9n9aPsv8As/rQBzP2D2o+we1dN9l/2f1o+y/7P60Acz9g9qPsHtXTfZf9n9aPsv8As/rQBzP2D2o+we1dN9l/2f1o+y/7P60Acz9g9qPsHtXTfZf9n9aPsv8As/rQBzP2D2o+we1dN9l/2f1o+y/7P60Acz9g9qPsHtXTfZf9n9aPsv8As/rQBzP2D2o+we1dN9l/2f1o+y/7P60Acz9g9qPsHtXTfZf9n9aPsv8As/rQBzP2D2o+we1dN9l/2f1o+y/7P60Acz9g9qPsHtXTfZf9n9aPsv8As/rQBzP2D2o+we1dN9l/2f1o+y/7P60Acz9g9qPsHtXTfZf9n9aPsv8As/rQBzP2D2o+we1dN9l/2f1o+y/7P60Acz9g9qPsHtXTfZf9n9aPsv8As/rQBzP2D2o+we1dN9l/2f1o+y/7P60Acz9i/wBij7HXTfZf9n9al+yr70AcutlT1sl/u10i2v8AsU9bWgDnltf9ipltf9it5bWplt+tAGItrVyO3rSWDmplioApx2/+zVlYttWViqZUWgCFYqsqtPVa
eqUACpU1FFABUlIvSloAKKKKACiiigAooooAKj2fWpKKAI6KkpNv1oAh2UzyvrVnb9aZs+tAFZoqZ5XtVyigCh5VMaKtLZ9aZtX/ACaAKHlfWmeVWltX/Jo2r/k0AZvlUeRWl5SUeUlAGb5FHkVpeUlHlJQBm+RR5FaXlJR5SUAZvkUeRWl5SUeUlAGb5FHkVpeUlHlJQBm+RR5FaXlJR5SUAZvkUeRWl5SUeUlAGb5FHkVpeUlHlJQBm+RR5FaXlJR5SUAZvkUeRWl5SUeUlAGb5FHkVpeUlHlJQBm+RR5FaXlJR5SUAZvkUeRWl5SUeUlAGb5FHkVpeUlHlJQBm+RR5FaXlJR5SUAZvkUeRWl5SUeUlAGb5FHkVpeUlHlJQBm+RR5FaXlJR5SUAZvkUeRWl5SUeUlAGb5FHkVpeUlHlJQBm+RR5FaXlJR5SUAZvkUeVWl5SUeUlAGb5VHlVpbKNq/5NAFDyvrT/Kq5tX/Jp+1KAKaxULFVyjZ9aAIVSnqtP2fWjZ9aAGbKfT9v1paAI6ft+tLRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQB//2Q==') repeat-x center bottom; border-radius: 10px; border-top: 1px solid #fff; padding-bottom: 76px; }
#holder { border: 10px dashed #ccc; width: 300px; min-height: 300px; margin: 20px auto; }
#holder.hover { border: 10px dashed #0c0; }
#saveOutputFileBtn { margin: 0 auto; display: block; }
.errorMessage { color: #c00 }
.fail { background: #c00; padding: 2px; color: #fff; }
.hidden { display: none !important; }
</style>
<script src="http://code.jquery.com/jquery-2.2.4.min.js"></script>
<!-- source code for jQuery CSV can be found here https://github.com/evanplaice/jquery-csv -->
<script src="https://cdn.rawgit.com/evanplaice/jquery-csv/c99aa27290e103cc4cf76136c86f60d4985dd0d6/src/jquery.csv.min.js"></script>
<script src="js/main.js"></script>
</head>
<body>
<section id="wrapper">
<header>
<h1>Drag and drop, automatic upload</h1>
</header>
<article>
<div id="holder"></div>
<p id="filereader">
<span class="errorMessage">File API &amp; FileReader API not supported</span>
</p>
<p id="statusMessage"></p>
</article>
<article>
<button id="saveOutputFileBtn">save result</button>
</article>
</section>
</body>
</html>
</code></pre>
<p>Finally the JavaScript:</p>
<pre><code class="language-javascript">$(document).ready(function () {
var processedData;
var outputFileName;
// initial state (right after the page loads)
$("#statusMessage").text("Drag an csv file from your desktop on to the drop zone above to kick-off the analysis.");
$("#saveOutputFileBtn").attr("disabled", "disabled");
// check if this browser supports 'FileReader'. Required to read csv contents
if((typeof FileReader != 'undefined') === false) {
$("#filereader").addClass("fail");
} else {
$("#filereader").addClass("hidden");
}
// check if this browser supports drag & drop and wire it up
if('draggable' in document.createElement('span')) {
var holder = document.getElementById('holder');
holder.ondragover = function () { this.className = 'hover'; return false; };
holder.ondragend = function () { this.className = ''; return false; };
holder.ondrop = function (e) {
this.className = '';
e.preventDefault();
readfiles(e.dataTransfer.files);
}
}
// read uploaded files
function readfiles(files) {
if (files.length === 0) {
return;
}
// set output file name
outputFileName = files[0].name.replace(".", "_result.");
var reader = new FileReader();
reader.onloadend = function (evt) {
if (evt.target.readyState == FileReader.DONE) { // DONE == 2
var csvData = evt.target.result;
try {
runCsvAnalysis(csvData);
} catch(err) {
$("#statusMessage").text("Unsupported file or unexcepted error processing input. Try another file.");
}
}
};
reader.readAsText(files[0]);
$("#saveOutputFileBtn").text("save result '" + outputFileName + "'");
$("#statusMessage").text("File upload successful!");
}
function runCsvAnalysis(csvContent) {
// parse csv content to array
var dataArr = $.csv.toArrays(csvContent);
// create a copy of the array
var output = dataArr.slice();
// check if the first row contains headers or values
var firstRow = isNaN(parseInt(dataArr[0][0])) ? 1 : 0;
if (firstRow > 0) {
// add new column name
output[0][12] = "Result";
}
// crappy algorithm, since it does not run any validation and
// expects the values in a particular order. For demo purposes only.
for (var idx = firstRow, len = dataArr.length; idx < len; idx++) {
var isMatch = false;
var result = " ";
var row = dataArr[idx];
var pivot = parseInt(row[11]);
if (pivot < parseInt(row[0])
&& pivot < parseInt(row[1])
&& pivot < parseInt(row[2])
&& pivot < parseInt(row[3])
&& pivot < parseInt(row[4])
&& pivot < parseInt(row[5])
) {
isMatch = true;
result = row[5] - pivot;
}
output[idx][12] = result;
}
// save output to processedData
processedData = output;
$("#statusMessage").text("Analysis complete, press the save button below to download the result.");
$("#saveOutputFileBtn").removeAttr("disabled");
}
$("#saveOutputFileBtn").bind("click", function () {
// prepare results to be downloaded
var csvContent = "data:text/csv;charset=utf-8,";
processedData.forEach(function (infoArray, index) {
var dataString = infoArray.join(",");
csvContent += index < processedData.length - 1 ? dataString + "\n" : dataString;
});
// create an anchor element to automatically download the results file
var encodedUri = encodeURI(csvContent);
var link = document.createElement("a");
link.setAttribute("href", encodedUri);
link.setAttribute("download", outputFileName);
link.click(); // This will trigger file download
});
});
</code></pre>
<p>To exercise the code use the following test csv:</p>
<pre><code class="language-csv">Column A,Column B,Column C,Column D,Column E,Column F,Column G,Column H,Column I,Column J,Column K,Column L
49,66,92,86,80,56,100,48,65,46,58,55
96,75,35,93,92,21,35,81,80,30,66,43
92,75,79,54,97,9,85,87,41,75,55,82
10,48,80,45,32,34,72,15,97,0,44,23
8,67,14,34,85,16,33,73,22,64,79,46
67,70,89,66,91,65,33,96,59,61,64,1
25,92,27,6,30,78,33,10,89,91,10,92
42,63,44,53,59,33,61,41,82,66,100,79
45,49,23,24,74,94,38,53,6,31,76,17
</code></pre>
<p>Quite simple right? Keep in mind that this code was only tested in Chrome. This is a very basic example but you get the idea. Hopefully this will serve as inspiration for someone. Please send any questions or comments that you might have.</p>
<p><img src="/content/img/blog/gifs/basic-csv-analyzer.gif" alt="web services" /></p>
http://johnlouros.com/blog/manage-your-build-process-with-cake-buildManage your build workflow with Cake-Build 2016-05-16T00:00:002016-05-16T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>The first step of any robust development workflow relies on a structured and well-defined build process. Having a manageable build process can spare your development team from wasted time, headaches and unnecessary complexity. If you're handling a handful of projects with low compilation overhead, this particular topic might be irrelevant to you. Today's article is mainly targeted at those who must work with multiple projects of different kinds (Web applications, Scheduled tasks, Console applications, Mobile applications, database scripts, and so on) and deploy multiple artifact types.</p>
<p>Before we jump into my suggestion, just keep in mind that your build strategy might have an impact on your deployment workflow. Always be aware of artifact inter-dependencies, versioning, parameterization and deployment requirements. Ideally artifacts should be self-contained (they should not require any external dependencies), however that's not always the case. Besides the typical environment dependencies, like (as an example) requiring Windows Server 2012 R2, .Net Framework 4.5.1, Web Deploy 3.5 and IIS with URL Rewrite installed, some other artifact inter-dependencies might exist, like (another example) adding a particular set of DLLs to the global assembly cache <a href="https://msdn.microsoft.com/en-us/library/yf1d93sz.aspx">GAC</a> before enabling a set of scheduled tasks. My point is, try to fully understand your artifact dependencies, be aware of the deployment workflow and adapt your build process to make your provisioning and deployment processes as simple as possible. However, today I'm just going to focus on the build process.</p>
<p>As far as I'm aware, most development teams rely on proprietary build management systems to define their build process (examples: <a href="https://www.jetbrains.com/teamcity/">TeamCity</a>, <a href="https://www.atlassian.com/software/bamboo">Bamboo</a>, <a href="https://www.visualstudio.com/en-us/products/tfs-overview-vs.aspx">TFS</a>, <a href="https://jenkins.io/">Jenkins</a>). However, build definitions set up in these systems are not interchangeable, meaning that you cannot set up a build definition in TeamCity and later on move it to Bamboo. To be honest, most of the time this is not an issue. Let's be real, how many times do you expect your build system to change? Nonetheless, another approach might be desirable. So what if all build definitions could live right beside your code? On top of retaining history of all build definition changes, developers could easily debug build issues without needing access to a build agent and quickly modify build definitions, on their own branch, without interfering with anybody else's work. Obviously my previous arguments could be seen as negatives, since you're giving more power to the developers, which might be a bad idea depending on how disciplined they are. Don't get me wrong, sometimes too much power can be problematic.</p>
<p>Now, what do I have to propose? <a href="http://cakebuild.net/">Cake-Build</a>! As it is self-described: <em>"Cake (C# Make) is a cross platform build automation system with a C# DSL to do things like compiling code, copy files/folders, running unit tests, compress files and build NuGet packages."</em>. In other words you can write C# to manage your build tasks, so your build definition is fully written in C#! Isn't that awesome!? To get started I would recommend following their <a href="http://cakebuild.net/docs/tutorials/getting-started">getting started guide</a>. If you're looking for something even quicker, follow these steps:</p>
<ol>
<li>Copy the 3 required files to the root of your repository:</li>
</ol>
<ul>
<li><a href="https://github.com/cake-build/example/blob/master/build.ps1">build.ps1</a></li>
<li><a href="https://gist.github.com/jlouros/def8b01afa58c623077220bf0f6881ac">build.cake</a></li>
<li><a href="https://github.com/cake-build/example/blob/master/tools/packages.config">tools/packages.config</a></li>
</ul>
<ol start="2">
<li>Open <em>build.cake</em> in a text editor and search-and-replace all "./src/Example.sln" entries to match your *.sln location (always use forward slashes)</li>
<li>Open PowerShell, navigate to the root of your repository and execute <em>build.ps1</em></li>
</ol>
<p>Simple right? This should be enough to get you started. For the next steps I suggest reading the "Fundamentals" section in the <a href="http://cakebuild.net/docs/tutorials/getting-started">getting started page</a> and installing Cake's Visual Studio Code <a href="https://marketplace.visualstudio.com/items?itemName=cake-build.cake-vscode">extension</a> to edit *.cake files. I know this is a bit raw, but it should be enough to spark your interest. Seriously, give it a try and let me know what you think.</p>
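<p>For reference, bootstrapping a repository from PowerShell could look like the sketch below. The raw file URLs are assumed from the example repository linked above, and the '-Target' parameter name is assumed from the example bootstrapper; adjust both to your setup.</p>
<pre><code class="language-powershell"># grab the three bootstrap files into the repository root (URLs assumed from the example repository)
Invoke-WebRequest 'https://raw.githubusercontent.com/cake-build/example/master/build.ps1' -OutFile 'build.ps1'
Invoke-WebRequest 'https://raw.githubusercontent.com/cake-build/example/master/build.cake' -OutFile 'build.cake'
New-Item -ItemType Directory -Path 'tools' -Force | Out-Null
Invoke-WebRequest 'https://raw.githubusercontent.com/cake-build/example/master/tools/packages.config' -OutFile 'tools/packages.config'
# edit build.cake to point at your solution, then kick off a build
.\build.ps1 -Target Default
</code></pre>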
<p>References:</p>
<ul>
<li><a href="http://cakebuild.net/">Cake-Build website</a></li>
<li><a href="http://cakebuild.net/dsl">reference documentation</a></li>
<li><a href="https://github.com/cake-build/cake">source code</a></li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=cake-build.cake-vscode">Cake - Visual Studio Code extension</a></li>
</ul>
<p><img src="/content/img/blog/cake-build.png" alt="Cake Build screenshot" /></p>
http://johnlouros.com/blog/how-to-encrypt-web-config-sectionsHow to encrypt web.config sections2016-05-03T00:00:002016-05-03T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>Here's another quick tip for anybody interested in protecting sensitive information declared in your Web application's web.config. In this example I'm going to use the <a href="https://msdn.microsoft.com/en-us/library/ms995355.aspx">Windows Data Protection API (DPAPI)</a> to encrypt connection strings and session state SQL connection strings in all web.configs found under 'C:\inetpub' (the default location for web applications running on IIS).</p>
<p>Web.config sections encrypted with DPAPI can only be decrypted on the machine where you originally ran the encryption method. In other words, you won't be able to copy-paste your DPAPI-encrypted web.config files to a different server. If you intend to run the encryption once and move the web.configs to different servers, you must use RSA encryption instead. There are a few additional commands you will need to invoke, but it's very straightforward (see the RSA reference article at the end of this post).</p>
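<p>To give you a rough idea of the RSA route, the sketch below creates an exportable key container, moves it to another server and encrypts with it. The container name, export path, application pool account and provider name are all placeholders; the provider itself has to be declared in the web.config 'configProtectedData' section, pointing at the key container, as covered in the RSA reference article listed at the end of this post.</p>
<pre><code class="language-powershell">$regiis = "$env:windir\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe"
# create an exportable RSA key container and export it, including the private key
& $regiis -pc 'MyRsaKeys' -exp
& $regiis -px 'MyRsaKeys' 'C:\temp\MyRsaKeys.xml' -pri
# on each target server: import the container and grant the application pool account access to it
& $regiis -pi 'MyRsaKeys' 'C:\temp\MyRsaKeys.xml'
& $regiis -pa 'MyRsaKeys' 'IIS APPPOOL\DefaultAppPool'
# encrypt a section using the RSA provider declared in web.config for the 'MyRsaKeys' container
& $regiis -pef 'connectionStrings' 'C:\inetpub\wwwroot\MyApp' -prov 'MyRsaProvider'
</code></pre>
<p>With that detour out of the way, here's the DPAPI script that encrypts every web.config found under 'C:\inetpub':</p>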
<pre><code class="language-powershell"># search for all 'web.config' located under 'C:\inetpub\'
Get-ChildItem 'C:\inetpub\' -Filter 'web.config' -Recurse | ForEach-Object {
$directory = $_.Directory.FullName
$filePath = $_.FullName
$webConfig = [xml](Get-Content $filePath)
# check if there are any connection string sections declared
if($webConfig.SelectSingleNode('//connectionStrings').HasChildNodes) {
Write-Output "encrypting '$filePath' connection strings..."
# let's tell 'aspnet_regiis' to encrypt it
& $env:windir\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -pef 'connectionStrings' $directory -prov 'DataProtectionConfigurationProvider'
}
# check if there are any session state sections with the attribute 'sqlConnectionString' defined
if(-not ([string]::IsNullOrWhiteSpace($webConfig.SelectSingleNode('//system.web/sessionState[@sqlConnectionString]')))) {
Write-Output "encrypting '$filePath' session state..."
# let's tell 'aspnet_regiis' to encrypt it
& $env:windir\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -pef 'system.web/sessionState' $directory -prov 'DataProtectionConfigurationProvider'
}
}
</code></pre>
<p>Here's the original connection string section:</p>
<pre><code class="language-xml"><connectionStrings>
<add name="mainConnStr" connectionString="Data Source=db.host.name;Initial Catalog=MainDb;User ID=SqlDbo;Password=SqlDbo!;Network Library=DBMSSOCN" providerName="System.Data.SqlClient" />
</connectionStrings>
</code></pre>
<p>And the same connection string section after being encrypted:</p>
<pre><code class="language-xml"><connectionStrings configProtectionProvider="DataProtectionConfigurationProvider">
<EncryptedData>
<CipherData>
<CipherValue>AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAAc7hBUpoi9ku74ZbnX4J4qwQAAAACAAAAAAADZgAAwAAAABAAAACPqFkO0wnEWkEA9BMJ77SMAAAAAASAAACgAAAAEAAAALoHB6Bdff35S/FrrupWBLfoAQAA7ZfFtIwvwshcqcd29HBzEpkX5g3JViNda/SeEHvxrEfcMXfJVYeMv8e+gnhMASutpyTNsnYouc3pA3WuI/zrtiy8fIF4qaABhLj6CLAyaSTaDhajHvw/rC9Zv8+JjF6Z1ZWl5XqIxJ0Ia/Ba2/j23I1pwvb1DncTHWh8zI49FmXBWBivbDn+VWPLgPL7Z2trfVVNdJlZG0JysSeLzvv6EAU0BE5neOxPw1NYfzKzih9sVJfNSeKMASZ5CxSAw15ubTmdK8i2fPJCUtfpgIfmUqHtS2H3t+01kVBqaF93+B/fdTkI/B5WQzw/FZmt7m2ns152qmt5tohnddR8ggrryazaBkXqlPSBavK+yK4K9vNobxUMf18Y1EWgMaXiWjAc0Z/Pa+ZlmaA5iZH2+nPUsdoqtho7x2jdjmqNGtPn3EOxEwxtUQWq904ejj0g5bL6Sx8pCPvr8ddjsglPZPAajtiGc8UAq8K5VnT4BPRlOXWx4LVVcdaXrltr3/vDxOLMj5iMXvyU0EjafeuXRpHYBZkwjnQSr0SDH81YRgvpwn3bkAhLFhIEyTEmRk7oNq1+u5mDwBDocbP0NMDqoZkltHENDVEJ+CQayhH0sIL1c6xOJTQ2PJjo6qBDXUv4ZqY5zHRu+KYj5H4UAAAA7cJgigwdSaG4+Jxyeq8xmjMYHhg=</CipherValue>
</CipherData>
</EncryptedData>
</connectionStrings>
</code></pre>
<p>Decrypting them is equally easy: simply run the same command without the provider specified and use "-pdf" as the command instead.</p>
<pre><code class="language-powershell">& $env:windir\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -pdf 'connectionStrings' $directory
</code></pre>
<p>Additionally, only users with elevated privileges will be able to run this command, so obviously you are not hiding connection string details from System Administrators. On the other hand, if an attacker is able to get elevated privileges, connection strings might be the least of your concerns (assuming the SQL user defined in your connection string doesn't have Database Owner access). This method simply adds another small barrier to your application's security.</p>
<p>For reference, the following sections usually contain sensitive information that you need to encrypt:</p>
<ul>
<li><em><appSettings></em>. This section contains custom application settings.</li>
<li><em><connectionStrings></em>. This section contains connection strings.</li>
<li><em><identity></em>. This section can contain impersonation credentials.</li>
<li><em><sessionState></em>. This section contains the connection string for the out-of-process session state provider.</li>
</ul>
<p>Here are the reference articles; even though they mention ASP.NET 2.0, they are compatible with any version above 2.0:</p>
<ul>
<li><a href="https://msdn.microsoft.com/en-us/library/ff647398.aspx">How To: Encrypt Configuration Sections in ASP.NET 2.0 Using DPAPI</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/ff650304.aspx">How To: Encrypt Configuration Sections in ASP.NET 2.0 Using RSA</a></li>
</ul>
<p><img src="/content/img/blog/iis-logo.png" alt="web services" /></p>
http://johnlouros.com/blog/wsdl-min-occursTips to avoid breaking existing SOAP APIs2016-04-15T00:00:002016-04-15T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>These days it might be a bit uncommon to find anybody creating new SOAP (Simple Object Access Protocol) web services. However that does not mean SOAP web services are dead. Due to public perception, Software companies avoid mentioning components that might be considered "old" (or not trendy). In a highly competitive market, where companies keep fighting for the best Developers, referencing older technologies might throw some candidates off. Still, that does not mean components developed with "older" technologies do not require maintenance.</p>
<p>Doing API changes while keeping backwards compatibility can be a real challenge, which is why in this blog post I will share a few ideas to help avoid breaking existing SOAP APIs. Before continuing, just keep in mind that I will focus on the .Net implementation; different frameworks might have different behaviors and/or rules. Here are a few tips that might be helpful.</p>
<p><em>Mark new properties as optional.</em> If you are adding new properties to the classes used in your web service, make sure they are optional. This way, new API changes won't break any functionality for clients that are not using the latest version of your web service. In this particular case we need to clearly understand the difference between the 'minOccurs' and 'nillable' element attributes in the WSDL XML schema. The attribute names are pretty much self-explanatory, but what's the actual difference between one and the other? 'nillable' states whether the property is <a href="https://msdn.microsoft.com/en-us/library/1t3y8s4s.aspx">Nullable</a>; keep in mind that most C# primitive types are not nullable (bool, int, decimal, and so on). 'minOccurs' defines whether the property must always be present or not. For new properties make sure 'minOccurs' is equal to '0' and 'nillable' is true.</p>
<p>Marking a new property as Nullable is easy, but how can we make sure it's skippable? By default non-primitive types can be omitted, but that's not the case for Nullable primitives. To solve this, simply add a new boolean property, decorate it with "XmlIgnoreAttribute" to ensure it's not part of the WSDL, give it exactly the same name as your new property with "Specified" appended (something like "{new property name}Specified"), and lastly define the getter to return whether your new property is null or not. This is not that easy to follow in words, so check the example presented below.</p>
<pre><code class="language-cs">// our new nullable property
public float? NewProp { get; set; }
// this statement ensures "NewProp" can be omitted. Remember to append "Specified" to the end of the property name
[System.Xml.Serialization.XmlIgnoreAttribute]
public bool NewPropSpecified { get { return NewProp != null; } }
</code></pre>
<p><em>Preserve property ordering unchanged.</em> Ensure property ordering remains unchanged in any classes used by your web service. This is important because ordering is preserved when the WSDL XML schema is generated. New properties must always go at the bottom. Here's an example Web service:</p>
<pre><code class="language-cs">public class WsInput
{
public string Str { get; set; }
public decimal Dec { get; set; }
public int Num { get; set; }
// our new property, marked as nullable and added to bottom
public float? NewProp { get; set; }
// this statement ensures "NewProp" can be omitted. Append "Specified" to the end of the property name
[System.Xml.Serialization.XmlIgnoreAttribute]
public bool NewPropSpecified { get { return NewProp != null; } }
}
public class WsOutput
{
public string Msg { get; set; }
}
[System.Web.Services.WebService(Namespace = "http://tempuri.org/")]
[System.Web.Services.WebServiceBinding(ConformsTo = System.Web.Services.WsiProfiles.BasicProfile1_1)]
[System.ComponentModel.ToolboxItem(false)]
public class MyWebService : System.Web.Services.WebService
{
[System.Web.Services.WebMethod]
public WsOutput DummyWsMethod(WsInput input)
{
return new WsOutput();
}
}
</code></pre>
<p><em>New changes shall not have any impact on previously existing calls.</em> Compare the WSDL before and after your changes are applied. Make sure you create a test suite that fully exercises your existing web service. After applying your changes, run the test suite without updating the client WSDL. Everything should pass flawlessly.</p>
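<p>A quick way to do that comparison is to capture the WSDL before and after deploying the change and diff the two files. The endpoint URL below is just an example; point it at your own web service.</p>
<pre><code class="language-powershell"># capture the WSDL before deploying the change (example endpoint)
Invoke-WebRequest 'http://localhost/MyWebService.asmx?WSDL' -UseBasicParsing -OutFile 'wsdl-before.xml'
# ... deploy the new version, then capture it again and compare both captures
Invoke-WebRequest 'http://localhost/MyWebService.asmx?WSDL' -UseBasicParsing -OutFile 'wsdl-after.xml'
Compare-Object (Get-Content 'wsdl-before.xml') (Get-Content 'wsdl-after.xml')
</code></pre>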
<p><img src="/content/img/blog/web-services.jpeg" alt="web services" /></p>
http://johnlouros.com/blog/how-to-automate-windows-security-prompt-inputHow to automate Windows Security prompt input2016-03-21T00:00:002016-03-21T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>Here's another post about automation. This time let's automate text input for Windows Security prompts. <em>Disclaimer</em>: use your best judgment to figure out when you should or shouldn't use these scripts. Keep in mind that in the next set of examples passwords are kept in plain text. If you plan to actually use these files, at least use <em>SecureStrings</em> <a href="https://technet.microsoft.com/en-us/library/hh849818.aspx">(check the reference here)</a>. Please be careful, I won't be responsible for any misuse of the scripts presented in this post.</p>
<p>Moving on from the security disclaimers, some might ask why automate password prompts? Well, most Software development companies have local domain Virtual Machine farms that are for internal development purposes only. Assuming all VMs are created with the same Administrator account credentials, it would be neat if we could automate the password prompt every time a developer wants to Remote Desktop into a VM or explore its file system. In my opinion, this is the only acceptable scenario where password prompts can be automated. Anyway, let's take a look at the scripts.</p>
<p>Automating Remote Desktop access</p>
<pre><code class="language-powershell"><#
.SYNOPSIS
Start Remote Desktop with 'local\administrator' credentials
.NOTES
Useful to rapidly connect to a Go Lab VM
#>
Param(
[string]$hostname = "dev.vm-001.local"
)
Process
{
# helper function to locate an open program by a given Window name
Function FindWindow([string]$windowName, [int]$retries = 5, [int]$sleepInterval = 1000) {
[int]$currentTry = 0;
[bool]$windowFound = $false;
Do {
$currentTry++;
Start-Sleep -Milliseconds $sleepInterval
Try {
[Microsoft.VisualBasic.Interaction]::AppActivate($windowName)
$windowFound = $true;
} Catch {
Write-Host " [$currentTry out of $retries] failed to find Window with title '$windowName'" -ForegroundColor Yellow
$windowFound = $false;
}
} While ($currentTry -lt $retries -and $windowFound -eq $false)
return $windowFound;
}
# import required assemblies
Add-Type -AssemblyName Microsoft.VisualBasic
Add-Type -AssemblyName System.Windows.Forms
# test if the provided hostname is valid
$testedHostname = Test-Connection $hostname -Count 1 -ErrorAction SilentlyContinue
if($testedHostname -eq $null) {
Write-Error "the provided hostname could not be resolved '$hostname'" -ErrorAction Stop
}
$vmIp = $testedHostname.IPV4Address.IPAddressToString
# open Remote Desktop with 'local\administrator'
Write-Host "starting connection to '$testedHostname' using 'local\administrator' credentials!"
cmdkey /generic:TERMSRV/$vmIp /user:local\administrator
mstsc /v:$vmIp
# first prompt to enter the password
if(FindWindow("Windows Security")) {
Start-Sleep -Milliseconds 500
[System.Windows.Forms.SendKeys]::SendWait('Password1{ENTER}')
}
# second prompt to accept the certificate
if(FindWindow("Remote Desktop Connection")) {
Start-Sleep -Milliseconds 250
[System.Windows.Forms.SendKeys]::SendWait('Y')
}
Write-Host "done!"
}
</code></pre>
<p><img src="/content/img/blog/run-remote-desktop-ps1.gif" alt="automated Remote Desktop interaction" /></p>
<p>Automate opening Windows Explorer on a remote machine's file system</p>
<pre><code class="language-powershell"><#
.SYNOPSIS
Open Windows Explorer using the 'local\administrator' credentials
.NOTES
Quickly open a new Windows Explorer window using 'local\administrator' credentials
#>
param(
[string]$hostname = "dev.vm-001.local"
)
Process
{
# helper function to locate an open program by a given Window name
Function FindWindow([string]$windowName, [int]$retries = 5, [int]$sleepInterval = 1000) {
[int]$currentTry = 0;
[bool]$windowFound = $false;
Do {
$currentTry++;
Start-Sleep -Milliseconds $sleepInterval
Try {
[Microsoft.VisualBasic.Interaction]::AppActivate($windowName)
$windowFound = $true;
} Catch {
Write-Host " [$currentTry out of $retries] failed to find Window with title '$windowName'" -ForegroundColor Yellow
$windowFound = $false;
}
} While ($currentTry -lt $retries -and $windowFound -eq $false)
return $windowFound;
}
# import required assemblies
Add-Type -AssemblyName Microsoft.VisualBasic
Add-Type -AssemblyName System.Windows.Forms
# test if the provided hostname is valid
$testedHostname = Test-Connection $hostname -Count 1 -ErrorAction SilentlyContinue
if($testedHostname -eq $null) {
Write-Error "the provided hostname could not be resolved '$hostname'" -ErrorAction Stop
}
$vmRootLocation = Join-Path "\\$($testedHostname.Address)" "\C$\"
Write-Host "opening Windows Explorer at '$vmRootLocation' using 'local\administrator' credentials!"
explorer /root,$vmRootLocation
# handle the security prompt to enter username and password
if(FindWindow("Windows Security")) {
Start-Sleep -Milliseconds 250
[System.Windows.Forms.SendKeys]::SendWait('local\administrator{TAB}')
[System.Windows.Forms.SendKeys]::SendWait('Password1{ENTER}')
}
Write-Host "done!"
}
</code></pre>
<p><img src="/content/img/blog/run-explorer-ps1.gif" alt="open Windows Explorer on a remote location" /></p>
<p>Open SQL Server Management Studio or Profiler, using different user credentials. Useful if you want to login using Windows Authentication.</p>
<pre><code class="language-powershell"><#
.SYNOPSIS
Opens a local instance of the selected SQL tool, using 'local\administrator' credentials
.NOTES
Useful for profiling Go Lab environments since 'local\administrator' is the default local administrator of those machines
#>
Param([Switch] $SqlProfiler)
Process
{
# a new Command Prompt Window should be opened with this title
$newCmdlineWindowTitle = "C:\Windows\System32\cmd.exe"
# import required assemblies
Add-Type -AssemblyName Microsoft.VisualBasic
Add-Type -AssemblyName System.Windows.Forms
Start-Process "cmd.exe"
Start-Sleep -Milliseconds 1000
[Microsoft.VisualBasic.Interaction]::AppActivate($newCmdlineWindowTitle)
if($SqlProfiler) {
# open SQL Server Profiler
Write-Output "about to open SQL Server Profiler"
[System.Windows.Forms.SendKeys]::SendWait('runas /netonly /user:local\administrator "C:\Program Files {(}x86{)}\Microsoft SQL Server\110\Tools\Binn\PROFILER.EXE"{ENTER}')
}
else {
# open SQL Server Management Studio
Write-Output "about to open SQL Server Management Studio"
[System.Windows.Forms.SendKeys]::SendWait('runas /netonly /user:local\administrator "C:\Program Files {(}x86{)}\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Ssms.exe"{ENTER}')
}
Start-Sleep -Milliseconds 500
[System.Windows.Forms.SendKeys]::SendWait('Password1{ENTER}')
# close command prompt window
Start-Sleep -Milliseconds 500
[Microsoft.VisualBasic.Interaction]::AppActivate($newCmdlineWindowTitle)
[System.Windows.Forms.SendKeys]::SendWait('exit{ENTER}')
Write-Host "done!"
}
</code></pre>
<p><img src="/content/img/blog/run-sql-tool-ps1.gif" alt="open SQL Server using a different user account" /></p>
<p>You can also check out these scripts on Github by following <a href="https://github.com/jlouros/PowerShell-toolbox/tree/master/Misc/Automate%20Windows%20Security%20prompt%20input">this link</a>. I hope these scripts are useful to you, just be careful and don't leave any personal passwords in plain text!</p>
http://johnlouros.com/blog/create-a-schedule-task-to-periodically-run-a-powershell-scriptCreate a schedule task to periodically run a PowerShell script2016-03-09T00:00:002016-03-09T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>Lately, the anti-virus used by the company I work for has been giving me a few headaches. In a nutshell, every time Visual Studio is opened or the Visual Studio test framework tries to find tests, the anti-virus starts its virus scan. By itself that shouldn't be a problem, since the scan shouldn't consume too many resources. The reality is that the scan completely gets out of hand with the amount of resources it uses, making my computer unusable while the scan is being performed. On top of that, Visual Studio is on hold while the scan is running. Here are a few screenshots of the Task Manager while the anti-virus is running.</p>
<p><img src="/content/img/blog/slow-Anti-Virus-Task-Manager-Tab1.png" alt="slow anti-Virus Task manager tab process" />
<img src="/content/img/blog/slow-Anti-Virus-Task-Manager-Tab2.png" alt="slow anti-Virus Task manager tab performance" /></p>
<p>Obviously this problem was reported to our IT department, but as usual all I got was a generic response stating the issue would be investigated, which is the same as saying "whatever dude, we might look into it whenever we have some free time to spare". What's upsetting is that I don't have control over the anti-virus settings. On the bright side, I have administrator access to my computer, so I just had to figure out a solution on my own. It's known that every time a new instance of Visual Studio is opened the anti-virus process spawns and starts running. All I need to do is find the anti-virus process and kill it. It's a bit extreme, but Visual Studio is my bread and butter, so yes, I'm willing to compromise security in favor of productivity. Plus, the problem was reported and nothing was done to resolve it, and waiting 30-45 minutes every time I need to open a new Visual Studio instance is absolutely ridiculous. By the way, in this blog post the name of the anti-virus used won't be mentioned, since it's not relevant to the message I want to pass.</p>
<p>How can this problem be solved? We already know the root cause. Every time Visual Studio is opened, the anti-virus process must be killed and everything comes back to normal. Easy right? But doing this every time is incredibly boring, so let's automate it. Let's create a schedule task, triggered at user logon, that every five minutes checks if the anti-virus is running and kills it.</p>
<p>To create a scheduled task, the traditional <em>'schtasks'</em> command <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/bb736357.aspx">(reference)</a> or PowerShell Cmdlets can be used <a href="https://technet.microsoft.com/en-us/library/jj649816.aspx">(reference)</a>. However, if either of those is used, some options are intentionally not available, like defining a trigger that starts at logon and is invoked every five minutes. I don't know why, but most likely those options are blocked for compatibility reasons (old systems might not support them). To bypass this limitation, simply create and fully configure a new schedule task using the UI <em>'taskschd.msc'</em> and export it (right-click, Export...). An XML file will be saved containing the entire schedule task definition. Later you can use this schedule task definition file to re-create the schedule task using PowerShell or the Windows Command Prompt. Just provide the task name and the XML location.</p>
<pre><code class="language-xml"><?xml version="1.0" encoding="UTF-16"?>
<Task version="1.4" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
<RegistrationInfo>
<Date>2016-03-02T12:51:46.404796</Date>
<Author>DOMAIN\USER</Author>
</RegistrationInfo>
<Triggers>
<LogonTrigger>
<Repetition>
<Interval>PT5M</Interval>
<StopAtDurationEnd>false</StopAtDurationEnd>
</Repetition>
<ExecutionTimeLimit>PT30M</ExecutionTimeLimit>
<Enabled>true</Enabled>
<Delay>PT30S</Delay>
</LogonTrigger>
</Triggers>
<Principals>
<Principal id="Author">
<UserId>DOMAIN\USER</UserId>
<RunLevel>HighestAvailable</RunLevel>
</Principal>
</Principals>
<Settings>
<MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
<DisallowStartIfOnBatteries>false</DisallowStartIfOnBatteries>
<StopIfGoingOnBatteries>true</StopIfGoingOnBatteries>
<AllowHardTerminate>true</AllowHardTerminate>
<StartWhenAvailable>false</StartWhenAvailable>
<RunOnlyIfNetworkAvailable>false</RunOnlyIfNetworkAvailable>
<IdleSettings>
<StopOnIdleEnd>true</StopOnIdleEnd>
<RestartOnIdle>false</RestartOnIdle>
</IdleSettings>
<AllowStartOnDemand>true</AllowStartOnDemand>
<Enabled>true</Enabled>
<Hidden>false</Hidden>
<RunOnlyIfIdle>false</RunOnlyIfIdle>
<DisallowStartOnRemoteAppSession>false</DisallowStartOnRemoteAppSession>
<UseUnifiedSchedulingEngine>false</UseUnifiedSchedulingEngine>
<WakeToRun>false</WakeToRun>
<ExecutionTimeLimit>PT1H</ExecutionTimeLimit>
<Priority>7</Priority>
</Settings>
<Actions Context="Author">
<Exec>
<Command>C:\StartupScripts\Kill-Process.vbs</Command>
</Exec>
</Actions>
</Task>
</code></pre>
<p>To create a schedule task using the XML presented above, simply save the XML in a known location and execute the following PowerShell:</p>
<pre><code class="language-powershell">& schtasks.exe /create /TN "Kill Process Task" /XML ".\ScheduleTask-To-Kill-Process.xml"
</code></pre>
<p>The PowerShell script to actually kill the anti-virus process:</p>
<pre><code class="language-powershell">#Requires -RunAsAdministrator
# define the process name here
$target = "PROCESS-NAME-TO-KILL"
$process = Get-Process $target -ErrorAction SilentlyContinue
if ($process -ne $null)
{
$process.Kill()
}
</code></pre>
<p>One of the downsides of calling PowerShell from a schedule task is that every time the task is triggered, you will see a new command prompt being opened and closed right away. Obviously this is quite distracting. To avoid this, you can create a VBScript that invokes your PowerShell script from a hidden command prompt shell.</p>
<pre><code class="language-vbscript">Dim objShell,objFSO,objFile
Set objShell=CreateObject("WScript.Shell")
Set objFSO=CreateObject("Scripting.FileSystemObject")
'enter the path for your PowerShell Script
strPath="C:\StartupScripts\Kill-Process.ps1"
'verify file exists
If objFSO.FileExists(strPath) Then
'return short path name
set objFile=objFSO.GetFile(strPath)
strCMD="powershell -nologo -command " & Chr(34) & "&{" & objFile.Path & "}" & Chr(34)
'Uncomment next line for debugging
'WScript.Echo strCMD
'use 0 to hide window
objShell.Run strCMD,0
Else
'Display error message
WScript.Echo "Failed to find " & strPath
WScript.Quit
End If
</code></pre>
http://johnlouros.com/blog/fix-typescript-file-encoding-in-win10-app-developmentFix TypeScript file encoding in Win10 app development2016-02-08T00:00:002016-02-08T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>Allowing developers to pick the programming language they feel most comfortable with when writing Windows 10 applications has been Microsoft's strategy to appeal to more developers to join their ecosystem. Since JavaScript's popularity has increased tremendously in the last few years, it's only natural that Microsoft supports application development using HTML and JavaScript. In this article I'll cover how you can use TypeScript for HTML/JavaScript Windows 10 development and how you can avoid the file encoding problems reported by the 'Windows App Certification Kit'.</p>
<p>Visual Studio 2015 provides out-of-the-box support for TypeScript. On build, Visual Studio will automatically parse your TypeScript files and generate the corresponding JavaScript files. When writing Windows 10 applications with HTML/JavaScript the same behavior is triggered; however, files compiled from TypeScript will make your application unfit to be submitted to the Windows Store. You won't notice this issue during development since you will be able to run and debug your application without a problem.</p>
<p>The problem arises once you run the 'Windows App Certification Kit', since JavaScript files generated from TypeScript are saved in UTF-8 without BOM, while they must be saved in UTF-8 with BOM. Unfortunately, as of today, using Visual Studio 2015 with Update 1 you won't be able to modify the encoding of the files resulting from TypeScript compilation. At least no option can be found in 'Project Settings' -> 'TypeScript compile'. Hopefully Microsoft will fix this in the near future. Also, keep in mind that manually changing the encoding of the generated JavaScript won't help, since the file will be re-generated on each build. Here's a screenshot of a failed 'Windows App Certification Kit' report, highlighting the problem previously mentioned:</p>
<p><img src="/content/img/blog/ts-uwa-fix/WACK-screenshot.png" alt="WACK screenshot" /></p>
<p>This issue would be easily solvable if Windows 10 projects supported <a href="https://github.com/Microsoft/TypeScript/wiki/tsconfig.json">'tsconfig.json' (TypeScript configuration)</a>. Using 'tsconfig.json', developers are able to define TypeScript compiler options like ignore comments; allow var (anonymous types); define target ECMAScript; and so on. For this particular problem, we can specify whether the output JavaScript files should be saved with a Byte-Order-Mark by changing "emitBOM".</p>
<p>Knowing the problem can be fixed using 'tsconfig.json', as a workaround we can create a new project that supports 'tsconfig.json' and target the output JavaScript files to the directory of our Windows 10 project. Here's a step by step tutorial.</p>
<p><strong>Step 1.</strong> create a new Windows 10 HTML project:</p>
<p><img src="/content/img/blog/ts-uwa-fix/step-1-create-new-project.gif" alt="Step 1" /></p>
<p>Now let's create a new 'ASP.Net Core 1 (previously known as ASP.Net 5) console app'. Create a new folder named 'scripts' and add your TypeScript files there.</p>
<p><strong>Step 2.</strong> create a new ASP .Net Core 1.0 console application and add your TypeScript files:</p>
<p><img src="/content/img/blog/ts-uwa-fix/step-2-create-console-app.gif" alt="Step 2" /></p>
<p>Create a new 'tsconfig.json' in the same location, open it and point the output directory (or output file, if you want to combine all compiled TypeScript files into a single JavaScript file) to the 'js' folder of your Windows 10 project. Also remember to set 'emitBOM' to true.</p>
<p><strong>Step 3.</strong> use TypeScript configuration file to define the output of the compiled JavaScript:</p>
<p><img src="/content/img/blog/ts-uwa-fix/step-3-configure-tsconfig.gif" alt="Step 3" /></p>
<p>Build the console app to trigger TypeScript compilation. This will generate the JavaScript file(s) that must be included in your Windows 10 application, so locate them (in Visual Studio click on "Show All Files") and include them in the project. Now you are all set! To make this approach completely flawless, set the console application project as a dependency of your Windows 10 project. This way, every build will build the console app first, before compiling the Windows 10 project.</p>
<p>Just a quick side note: your Windows 10 application won't be aware of your TypeScript files (it only knows about the compiled JavaScript), so debugging will be trickier since Visual Studio will direct you to the compiled JavaScript during debugging sessions. To make debugging easier, simply tell the TypeScript compiler to generate source maps (set 'sourceMap' to true in tsconfig.json). This way, the debugger will redirect you to the respective location in your TypeScript code. Feel free to include "*.js.map" files in your project. Just keep in mind that even with the "emitBOM" setting enabled the source maps will be saved in UTF-8 without BOM. However, this won't be a problem since JavaScript source maps are ignored by the 'Windows App Certification Kit'.</p>
http://johnlouros.com/blog/typescript-presentationTypeScript presentation2016-01-25T00:00:002016-01-25T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>Recently I spent some time working on a presentation about TypeScript for my current company. The main purpose was to study the pros and cons and validate whether it would make sense to use it in our Web Applications. The presentation itself is targeted at a broader audience, with little or no experience with JavaScript. It begins by explaining the origin and evolution of JavaScript, concluding with how TypeScript can help developers avoid common mistakes and tackle some flaws inherited from JavaScript's specification.</p>
<p>The slides themselves don't contain much information; I just used them to guide and emphasize my arguments. They might not be very useful without the presentation script, but I thought it was worth sharing anyway.</p>
<p>Here's a link to the slides. If you have any questions or comments about either my presentation or TypeScript, please let me know. <a href="/projects/TypeScriptPresentation/">TypeScript presentation slides</a></p>
<p><img src="/content/img/blog/typescript.jpg" alt="TypeScript" /></p>
http://johnlouros.com/blog/enabling-strong-cryptography-for-all-dot-net-applicationsEnabling strong cryptography for all .Net applications2016-01-06T00:00:002016-01-06T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>In my previous post <a href="/blog/disabling-cryptographic-protocols-for-pci-compliance">'Disabling cryptographic protocols for PCI compliance (focused on SSL 3.0 and TLS 1.0)'</a> I mentioned how you can disable incoming SSL 3.0 and TLS 1.0 connections by tweaking <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/aa380123.aspx">schannel</a> settings in the Windows registry. Along with it, I also mentioned how to tweak ServicePointManager security settings to modify which cryptographic protocols shall be used for outgoing connections. In this post, I'm going to demonstrate another possible solution for this problem by modifying the <em>strong cryptography</em> settings of all .Net based applications.</p>
<p>As seen in my previous post, <a href="https://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.aspx">ServicePointManager</a> changes are applied per <a href="https://msdn.microsoft.com/en-us/library/yb506139.aspx">AppDomain</a>, so if you have multiple applications hosted in different domains, you will have to manage ServicePointManager for each one of them. Depending on the use case, this might not be ideal.
To globally modify the available cryptographic protocols for all .Net applications (versions 4 and above), just enable 'strong cryptography' on the Windows registry.
If strong cryptography is disabled, only SSL 3.0 and TLS 1.0 will be used for secure connections. Otherwise TLS 1.0, TLS 1.1 and TLS 1.2 will be used.</p>
<p>To verify how this registry setting is being used, just dig into <a href="http://referencesource.microsoft.com/#System/net/System/Net/SecureProtocols/SslStream.cs,121">.Net source code</a>.
As you can see, when strong cryptography is disabled, the default protocols used for SslStreams are SSL 3.0 and TLS 1.0.</p>
<pre><code class="language-cs">// extracted from .Net source code, link above
private SslProtocols DefaultProtocols()
{
SslProtocols protocols = SslProtocols.Tls12 | SslProtocols.Tls11 | SslProtocols.Tls;
if (ServicePointManager.DisableStrongCrypto)
{
protocols = SslProtocols.Tls | SslProtocols.Ssl3;
}
return protocols;
}
</code></pre>
<p>Following the <em>ServicePointManager.DisableStrongCrypto</em> <a href="http://referencesource.microsoft.com/#System/net/System/Net/ServicePointManager.cs,eba3fbc0f3f0e767">implementation</a> we can see how this flag gets its value from the 'SchUseStrongCrypto' property set in the Windows registry ('HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319: SchUseStrongCrypto').
If this registry setting does not exist, in .Net version 4.5.2 and below the value will be set to 'True', while in .Net 4.6.1 and above it will be set to 'False'.
Given that, if you just install .Net 4.6.1 and do not change the strong cryptography Windows registry property, SSL 3.0 will be disabled and TLS 1.0, TLS 1.1 and TLS 1.2 will be used.
The opposite happens for all version 4 releases of .Net below 4.6.1.</p>
<p>Anyway, if you want to make sure strong cryptography is enabled, just run the following PowerShell script with elevated privileges.</p>
<pre><code class="language-powershell"># set strong cryptography on 64 bit .Net Framework (version 4 and above)
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
# set strong cryptography on 32 bit .Net Framework (version 4 and above)
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
</code></pre>
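<p>To double check that the change took effect, you can read the value back (if the property is missing, the framework default described above applies):</p>
<pre><code class="language-powershell"># read the value back; no output means the property is missing and the framework default applies
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -ErrorAction SilentlyContinue |
Select-Object -ExpandProperty 'SchUseStrongCrypto'
</code></pre>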
<p><img src="/content/img/blog/abstract-security.jpg" alt="Security" /></p>
http://johnlouros.com/blog/leveraging-multi-subnet-failoverLeveraging Multi-Subnet failover2015-12-04T00:00:002015-12-04T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>MSDN has a ton of information about this topic, from database setup to SqlClient configuration; however, if you are fairly new to this topic you might get overwhelmed with all the available information. My goal for this blog post is to simplify the idea behind this concept so anybody can understand the basics. Just keep in mind that the network setup can be far more complex, but the fundamentals will be the same.</p>
<p>Starting with the definition, MultiSubnetFailover is a .Net framework feature for [System.Data].SqlClient (SQL Server data provider) that enhances how SqlClient interacts with an AlwaysOn Availability Group, providing faster detection of and connection to the (currently) active server.</p>
<p>In order to keep things simple, the example I'm going to use consists of 1 Web Application server, 1 DNS server, and 2 databases with data replication setup between then (DataSync on the diagram). In this case, it's assumed the Web Application can pick any of the available databases, given that data replication will ensure data inserted, updated or deleted to one database, is properly reflected to the other. There's way more to database replication than this, for now just keep in mind that doesn't matter what database the web application picks. Diagram representation below:
<img src="/content/img/blog/multisubnetfailover-diagram.png" alt="Multi-subnet-failover test environment diagram" />
On the DNS server, a new hostname will be defined, 'msfqamockdb', that will point to '10.200.200.89' (msfm-db1) and '10.200.200.239' (msfm-db2). Doing 'nslookup' on 'msfqamockdb' should return the following:
<img src="/content/img/blog/multisubnetfailover-nslookup.png" alt="Multi-subnet-failover environment, nslookup" />
If both servers are available, no problem, the web application can pick whichever it prefers. If I'm not wrong, it will pick the first result from the list of IPs assigned to the given hostname. For the sake of argument, let's say the '10.200.200.89' database is picked. While that database is up and running, everything will be OK, but imagine that someone unexpectedly pulls the plug on it. By default (with MultiSubnetFailover disabled) the SqlClient will get the list of IP addresses and serially try to re-connect to each of them, with each connection attempt timing out after 21 seconds (by default). With MultiSubnetFailover the connection attempts will be performed in parallel (as close to parallel as possible) without waiting for TCP ACK (acknowledgment); the first server to respond will be picked to establish the connection, resulting in much faster reconnect times.</p>
<p>MultiSubnetFailover has a set of limitations that you must be aware of:</p>
<ul>
<li>Hostnames with more than 64 IP addresses are not supported</li>
<li>Only supported using TCP protocol</li>
<li>Doesn't support SQL named instances</li>
<li>Connecting to a mirrored SQL Server instance won't work</li>
</ul>
<p>Additionally, as a best practice, try to keep your SQL operations as atomic as possible and avoid leaving SQL connections open for a long time, mostly because MultiSubnetFailover will only do its magic during [System.Data.SqlClient].SqlConnection.Open(). Also, do not expect 100% availability. Even performing the simplest query, if the database goes down while a connection is open, you will get an SQL exception.</p>
<p>To enable this feature, simply append 'MultiSubnetFailover=true' to a connection string (usually found in either 'App.config' or 'Web.config'). As mentioned before, connecting to a mirrored SQL Server instance won't work, so make sure the connection string does not have 'Failover Partner' set. If you are using .Net framework 4.6.1, you don't need to do anything, since SqlClient transparently detects whether your application is connecting to an AlwaysOn Availability Group. You can read more about it <a href="http://blogs.msdn.com/b/dotnet/archive/2015/11/30/net-framework-4-6-1-is-now-available.aspx">here</a>.</p>
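<p>As a minimal sketch of what that looks like in practice (the 'msfqamockdb' hostname and 'ExampleDb' database below are just example values, adjust them to your environment):</p>
<pre><code class="language-powershell"># open a connection with MultiSubnetFailover enabled
# 'msfqamockdb' and 'ExampleDb' are hypothetical names used for illustration
$connectionString = 'Server=msfqamockdb;Database=ExampleDb;Integrated Security=true;MultiSubnetFailover=true'
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
try {
    # MultiSubnetFailover only helps during Open(), so keep connections short-lived
    $connection.Open()
    Write-Output "Connected to '$($connection.DataSource)'"
}
finally {
    $connection.Dispose()
}
</code></pre>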
<p>If you need proof that this feature works properly, feel free to download my test application from my <a href="https://github.com/jlouros/MultiSubnetFailover-TestApp">GitHub</a>. It contains .Net 4.5 and .Net 4.6.1 versions, but they use exactly the same code. The goal was to test how MultiSubnetFailover behaves in both framework versions.</p>
<p>To know more about this feature, here's a couple of useful references:</p>
<ul>
<li><a href="https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.connectionstring(v=vs.110).aspx">SqlConnection.ConnectionString property (MSDN)</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/hh205662(v=vs.110).aspx">SqlClient Support for High Availability, Disaster Recovery (MSDN)</a></li>
<li><a href="https://technet.microsoft.com/en-us/library/ms165614(v=sql.90).aspx">SQL Server named instances reference (TechNet)</a></li>
<li><a href="https://github.com/dotnet/corefx/tree/master/src/System.Data.SqlClient">System.Data.SqlClient source code (GitHub)</a></li>
</ul>
http://johnlouros.com/blog/disabling-cryptographic-protocols-for-pci-complianceDisabling cryptographic protocols for PCI compliance (focused on SSL 3.0 and TLS 1.0)2015-12-01T00:00:002015-12-01T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>PCI DSS (Payment Card Industry Data Security Standard) requires that cryptographic protocols with known vulnerabilities be disabled (a requirement recently introduced in revision 3.1). This includes SSL 2.0, SSL 3.0 and TLS 1.0, meaning that after June of 2016, any environment supporting those protocols will automatically fail a PCI audit. At the time of this writing, only TLS 1.1 and TLS 1.2 should be enabled (TLS 1.3 still in draft phase).</p>
<p>In a Windows environment, cryptographic protocols can be managed in the registry. By default, all common protocols (from SSL 2.0 and above) are enabled for incoming and outgoing connections, however you might not be able to find them in the registry, since they are hidden by default. That just means we need to check if the registry entry exists before applying any modification. If it doesn't exist, it must be created first.
Additionally, for each protocol there are 'Server' and 'Client' settings. To simplify it, they can be interpreted as 'incoming' (server) and 'outgoing' (client) connection settings. E.g. your Web-server can reject incoming TLS 1.0, but allow outgoing TLS 1.0 connections. This configuration makes sense if your web application needs to fetch information from an external resource using TLS 1.0, but wants to block any incoming TLS 1.0 connection.</p>
<p>For compatibility reasons, multiple protocols can be enabled. On encrypted connections, before data starts flowing between client and server, they must agree on what protocol should be used. Today, any modern browser will automatically handle this negotiation step, retrying all available protocols until they find one that both can communicate over. However that doesn't mean that all applications/frameworks will do the same. .Net in particular (by default in version 4.5) will try to use TLS 1.0 (if you have it enabled on the client) and close the connection if the server doesn't support it. If you're trying to make an HTTPS request using .Net (e.g. using HttpClient) you could encounter the following error:
<img src="/content/img/blog/tls-connection-failure.png" alt=".Net HttpClient HTTPS connection failure" /></p>
<p>It simply states that if the server can't 'speak' in TLS 1.0, the connection can't be established. Why the framework doesn't try to negotiate another protocol, only Microsoft can tell; I'm sure they have a good reason, for now just be aware of that.
If you want to force .Net to retry the connection with different protocols, simply add the following statement before making the request.</p>
<pre><code class="language-cs">System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Tls | System.Net.SecurityProtocolType.Tls11 | System.Net.SecurityProtocolType.Tls12;
</code></pre>
<p>If you're configuring the server, use the following PowerShell script to fully disable SSL 2.0 and SSL 3.0; enable TLS 1.1 and TLS 1.2; block incoming but allow outgoing TLS 1.0 connections.
You will have to reboot the machine to have these changes applied.</p>
<pre><code class="language-powershell">$protocols = @{
'SSL 2.0'= @{
'Server-Enabled' = $false
'Client-Enabled' = $false
}
'SSL 3.0'= @{
'Server-Enabled' = $false
'Client-Enabled' = $false
}
'TLS 1.0'= @{
'Server-Enabled' = $false
'Client-Enabled' = $true
}
'TLS 1.1'= @{
'Server-Enabled' = $true
'Client-Enabled' = $true
}
'TLS 1.2'= @{
'Server-Enabled' = $true
'Client-Enabled' = $true
}
}
$protocols.Keys | ForEach-Object {
Write-Output "Configuring '$_'"
# create registry entries if they don't exist
$rootPath = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\$_"
if(-not (Test-Path $rootPath)) {
New-Item $rootPath
}
$serverPath = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\$_\Server"
if(-not (Test-Path $serverPath)) {
New-Item $serverPath
New-ItemProperty -Path $serverPath -Name 'Enabled' -Value 4294967295 -PropertyType 'DWord'
New-ItemProperty -Path $serverPath -Name 'DisabledByDefault' -Value 0 -PropertyType 'DWord'
}
$clientPath = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\$_\Client"
if(-not (Test-Path $clientPath)) {
New-Item $clientPath
New-ItemProperty -Path $clientPath -Name 'Enabled' -Value 4294967295 -PropertyType 'DWord'
New-ItemProperty -Path $clientPath -Name 'DisabledByDefault' -Value 0 -PropertyType 'DWord'
}
# set server settings
if($protocols[$_]['Server-Enabled']) {
Set-ItemProperty -Path $serverPath -Name 'Enabled' -Value 4294967295
Set-ItemProperty -Path $serverPath -Name 'DisabledByDefault' -Value 0
} else {
Set-ItemProperty -Path $serverPath -Name 'Enabled' -Value 0
Set-ItemProperty -Path $serverPath -Name 'DisabledByDefault' -Value 1
}
# set client settings
if($protocols[$_]['Client-Enabled']) {
Set-ItemProperty -Path $clientPath -Name 'Enabled' -Value 4294967295
Set-ItemProperty -Path $clientPath -Name 'DisabledByDefault' -Value 0
} else {
Set-ItemProperty -Path $clientPath -Name 'Enabled' -Value 0
Set-ItemProperty -Path $clientPath -Name 'DisabledByDefault' -Value 1
}
}
</code></pre>
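<p>Before rebooting, you can sanity-check what the script wrote to the registry. Here's a minimal sketch that dumps the 'Enabled' and 'DisabledByDefault' values for every 'Server' and 'Client' key it finds under the SCHANNEL protocols key used above:</p>
<pre><code class="language-powershell"># list the configured SCHANNEL protocol settings
$protocolsRoot = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
Get-ChildItem -Path $protocolsRoot -Recurse |
    Where-Object { $_.PSChildName -match '^(Server|Client)$' } |
    ForEach-Object {
        # only the 'Server' and 'Client' sub-keys hold the actual settings
        Get-ItemProperty -Path $_.PSPath |
            Select-Object PSParentPath, PSChildName, Enabled, DisabledByDefault
    }
</code></pre>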
<p>If you want to test what protocols are enabled, use the following console application code. Just create an empty .Net 4.5 console application; keep in mind that you might need to add a reference to System.Net.Http.</p>
<pre><code class="language-cs">using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Threading.Tasks;
using System.Net.Http;
namespace Playground
{
class Program
{
static string _httpsTestUrl;
static void Main(string[] args)
{
Console.ForegroundColor = ConsoleColor.Green;
_httpsTestUrl = "https://";
Console.Write("'enter test Url': https://");
_httpsTestUrl += Console.ReadLine();
Console.WriteLine();
Task.Run(async delegate
{
await RunTests().ConfigureAwait(false);
}).Wait();
Console.WriteLine("\n\nPress any key to exit!");
Console.ReadKey();
}
static async Task RunTests()
{
await PerformRequest(SecurityProtocolType.Ssl3);
await PerformRequest(SecurityProtocolType.Tls);
await PerformRequest(SecurityProtocolType.Tls11);
await PerformRequest(SecurityProtocolType.Tls12);
}
static async Task PerformRequest(SecurityProtocolType protocol)
{
string protocolName = Enum.GetName(typeof(SecurityProtocolType), protocol);
try
{
using (HttpClient httpClient = new HttpClient())
{
// sets the protocol to be used
ServicePointManager.SecurityProtocol = protocol;
// performs the request
HttpResponseMessage response = await httpClient.GetAsync(_httpsTestUrl);
Console.ForegroundColor = ConsoleColor.White;
Console.WriteLine(string.Format("'{0}': test passed!", protocolName));
}
}
catch (Exception ex)
{
Console.ForegroundColor = ConsoleColor.White;
Console.WriteLine(string.Format("'{0}': test failed!", protocolName));
List<string> errors = new List<string>();
do
{
errors.Add(ex.Message);
ex = ex.InnerException;
} while (ex != null);
string errMessage = errors.Distinct().Aggregate((x, y) => string.Format("{0}\n >>> {1}", x, y));
// write the error message to the console
Console.ForegroundColor = ConsoleColor.Gray;
Console.WriteLine(string.Format(" >>> {0}", errMessage));
}
}
}
}
</code></pre>
<p>Start the application, enter the URL of the server you want to test and let the application do the rest.
<img src="/content/img/blog/test-tls-console-app.png" alt="test TLS .Net console application screenshot" /></p>
<p>If you are looking for more detailed information, you can use <a href="https://nmap.org/">nmap</a>. Download the command line version for Windows from <a href="https://nmap.org/dist/nmap-7.00-win32.zip">https://nmap.org/dist/nmap-7.00-win32.zip</a> or check other download options at <a href="https://nmap.org/download.html">https://nmap.org/download.html</a>.
Run the nmap executable, use the 'ssl-enum-ciphers' script and specify the ports and Url you want to test.</p>
<pre><code class="language-powershell">.\nmap.exe --script ssl-enum-ciphers -p 443 google.com
</code></pre>
<p>Here's an example:
<img src="/content/img/blog/nmap-ssl-enum-ciphers.png" alt="nmap ssl enum cipher screenshot" /></p>
<p>This article just focuses on the insecure cryptographic protocols mentioned in the latest revision of PCI DSS. For more information about PCI DSS, please download the full document from <a href="https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-1.pdf">https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-1.pdf</a></p>
http://johnlouros.com/blog/empowering-social-interactions-in-your-organizationEmpowering social interactions in your organization2015-11-23T00:00:002015-11-23T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>Social networks are, without a question, one of the greatest technological achievements of the last decade. It's hard to imagine a world without them. Even if you spend no time keeping up with your virtual persona, and you despise the ones that do, you must acknowledge the impact of Social networks in today’s society. It's around us in the real world; just look at all the junk mail from last week and the abundance of "like us on Facebook", or "follow us on Twitter", or "check us out on Google+"… Turn on the TV and check the little hashtag at the corner of the screen. Go to a restaurant and do a FourSquare check-in to get the latest deal. It's literally everywhere.</p>
<p>It's very common to criticize Social networks and I agree with most complaints. They're too invasive, our private data is not respected, some folks are way too dependent on them, and they create a false illusion of reality. In fact, social networking can be seen as a constant pissing contest, or a great stalking platform, but on this post I want to focus on the good parts and how it can be a beneficial tool for your company.</p>
<p>Take a moment and check your work email account. If you work for a medium or large corporation, most likely, the vast majority of the emails present in your inbox are either irrelevant to you or not urgent. Perhaps you work for an unrealistically well-organized company, but my personal experience is a little bit different. Using some imprecise statistics to demonstrate my point: 45% of the emails are food related, like "cookies at my desk, get them while they last", "lunch is ready at room xyz", "some leftovers from today's meeting", and so on. I'm not complaining about the food, who doesn't love free food, but is that message really that important? Should I stop everything and run straight to the "cookie jar"? Are those communications improving your work productivity? Promoting these interactions could improve work morale and perhaps allow more personal contact, but it would not be considered high importance from a managerial standpoint. Another 35% of emails can be accounted as "non critical events and announcements"; this includes messages from one of the "Chiefs", "Directors", "Vice Presidents", human resources or IT, that are interesting information, but non urgent communication. For example, "Welcome Jane Doe to her new position as Director of 'Department'", "Announcing internal changes to 'the Department that you don't have any interaction with and that you didn't even know existed'", "Running OS updates on our pre-production environment", "Customer 'x' will be onsite tomorrow, please smile", "We just hired some hot-shot from our competitor, say hi and give him your warm welcome" and so on. Not that these messages are total rubbish, they are definitely more important than food related subjects, however are they critical? The vast majority of people will skim and ignore them. This leaves 20% of your inbox to messages relevant to you and/or your team. This accounts for meetings, documentation, questions, important notices from your colleagues and blocking issues. Many of us create a bunch of inbox rules to get rid of the clutter, however the right solution would not require this extra leg work.</p>
<p>What if the organization provided a platform where anybody from the company could dispense all this information? This would avoid the constant disruption from unnecessary email notifications. You guessed it right, on a social network. Just like Facebook, each employee manages their level of engagement. Anyone can post whatever they find relevant or interesting, and discuss those topics, in a more casual platform with less impact on each person's workflow. However, the most commonly known social networks, like Facebook, were not designed for work interactions. Additionally most of us want to keep our personal lives separate from our work interactions. Totally understandable, after all your boss doesn't need to know how drunk you were last weekend...</p>
<p>Since Facebook isn't the right solution, what is? Personally I have had some experience with <a href="https://www.yammer.com/">Yammer</a> and I am really happy with it. It solves the problem mentioned before and I find it particularly useful to share tech articles with my team and to have discussions about them. Let's say somebody found an article about a new version of a software component that our company uses internally; Yammer makes it easy to share and comment on that piece. That component might never be used, but at least we talked about it and we have a log of that discussion. In the future, if the company ever considers using/updating that component, they already have some pre-research data that they can use and some points to consider before working on it. Yammer also has a set of applications that provide real-time notifications, for those who need to get the latest news the quickest way possible. For myself, I just check the web application a couple of times a day and on my commute back home I might read that really long article that somebody posted. It's totally up to you to decide when and how you want to be interrupted, with the benefit of less clutter in your inbox.</p>
<p>Another tool worth mentioning is <a href="http://kudosnow.com/">Kudos</a>. It's self-described as an employee recognition program and corporate social network designed to engage your teams with enhanced communication, collaboration, appreciation, recognition, and rewards. Timely recognition and meaningful feedback are absolutely crucial for cultivating and maintaining an engaged team. Using an interface similar to popular social networks, Kudos enhances communication in your organization. Build and promote organizational culture by facilitating communication with every member of your organization. Team recognition is a powerful tool that most companies are not using well to create engagement at work. Sadly, most organizations do not have a well-thought-out strategy to engage their teams, or if they do, they are not satisfied with the outcome.<br />
To conclude, I do not have any affiliation with the vendors of the tools mentioned before. This article mainly reflects my experience with them. Do you know any alternatives that might be worth sharing? Feel free to leave your comment below.</p>
<p><img src="/content/img/blog/social-networking-for-business.jpg" alt="social networking for business" /></p>
http://johnlouros.com/blog/prototyping-the-right-wayPrototyping the right way2015-09-09T00:00:002015-09-09T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>A common trend that I see with Software Development teams is the absolute need to over-engineer prototype solutions. "Hey Mark, can you build me a website where I can upload the photos from my phone?". Two months later Mark comes back with a budget estimate of two million dollars to cover hiring and infrastructure costs and a three year roadmap... "Dude, I just want a website to upload my photos... It will cost me 3 years and 2 million?". Seriously Mark, why couldn't you focus on the goal, keep it small and simple? Why did you have to go "all-in" right from the start?</p>
<p>To be fair, this reaction can sometimes be explained when Developers work for unreasonable, non-technical decision makers. Almost everybody has heard stories about entire Engineering departments getting blamed (sometimes even fired), when the accountable Engineers clearly stated that the product was not ready to be released; even so, the upper management ignored their assessment, forced them to release the product and inevitably things went sideways; now the upper, upper management is asking questions and who gets the blame? That's right, the Engineers, who can't justify their case because at the end of the day they handled the release. In other words, they had the keys to the atomic bomb...</p>
<p>For anybody that has to deal with this kind of management, it's understandable that all scenarios must be covered right from the start. At the same time, any kind of battle will be lost against this type of management. So respectfully give them a book about modern Software Development strategies or tell them to attend an Agile workshop, because their management style will not fit the current market.</p>
<p>Anyway, let's put this scenario aside and imagine that you are working with reasonable people. A prototype is just like any typical development task, except that you can ignore the process and development guidelines/rules to get the necessary answers as fast as possible. So "divide and conquer" is usually useful to break down the problem into smaller chunks, but you can also take a more holistic approach: start by identifying the two main areas that can be handled individually (like user interface and functionality). Each major area can be broken down into smaller pieces like: user experience, data persistence, testability, application logic, and so on (areas might vary depending on the research you're performing). Additionally, and most importantly, identify who you are doing this for and what the expectations are. Knowing your target audience (in the context of a prototype demo) is absolutely essential to keep you on track.</p>
<p>Are you building this prototype for yourself? Great! What do you want from it? What do you absolutely need? Just to rephrase, try to keep it simple. The whole idea is keeping focused on your goal. But at the end of the day, you own it, so you do whatever feels appropriate.</p>
<p>However, if you're building the prototype for somebody else, you will need to focus on what the target audience is expecting. A lot of people just focus 100% on the user-interface and user-experience, mostly because "eye-candy" sells (just look at what Apple has been able to achieve). To be fair, it doesn't look as lazy as a PowerPoint and it gives the team a lot of breathing room, since users can gaze at a fancy UI while the backend is being developed. It also gives the users the opportunity to experience some product features upfront, even if nothing is wired up. They will probably come back with a ton of ridiculous suggestions but at least you "kept the ball rolling". Just take everything that they say with a grain of salt, otherwise you might get stuck doing UI tweaks and never get the actual product working. Realistically, if you just focus on the UI, at the end of the day you just have a fancier interactive slide-show. It might look better than a PowerPoint, but it's still just a "fancier" PowerPoint. And yes, I'm saying that the customer isn't always right. As a paid professional you should guide the customer to the right path and not blindly follow their orders.</p>
<p>Besides the UI, there are two other concepts that are commonly over-engineered right "out of the gate": data persistence and dependency injection. "Dude, it's a prototype!", you can worry about it later. "But I really need it!". So save whatever you need to a file; you don't need a full blown SQL server, ORM and data access layer right from the start.</p>
<p>Anyway, I think I went a little off-road there, so let me get back on track. Let me give you a real example, something that I have seen before. The CTO goes to the Engineering team with a new challenge: "Alright team, let's see if we can build the next version of our current website, however for this project let's figure out if we can use a RESTful API to communicate with the backend. Our main concern is the search functionality. We don't know how it will work using REST, so do your research and present your findings in 2 weeks!". Pretty clear right? Based on an existing website, let's create a RESTful API. Since the search functionality is the boss's main concern, let's start there. Does the team have any other concerns? Great, write them down and try to find solutions for those problems after you've dealt with the search issue. What actually happened was quite interesting: the team started with the setup of the build controller, which to me was a bit pointless since they should be able to build and test any prototype on their computer. Then they created a solution for the project (obviously you need this) but instead of just creating a project for the new website, right from the start they created projects for unit testing, integration tests, UI tests, core and shared functionality, data access layer, logging, tracking and analytics handlers. Keep in mind, we don't have any functionality, but right out of the gate we have a bunch of projects that work mainly as placeholders at this point (first indication of over-engineering). Then they downloaded and wired up a dependency injection framework (which is a bit funny when there is no application logic); they moved on to set up a SQL Server instance, pick an ORM, create the data access layer and wire it up; then they picked a logging framework, started conversations about monitoring and error handling, threw tracing and analytics concerns in the middle and the challenges kept coming. However what about the CTO's main concern? Was anybody working on that? (Team's response) "Well, we don't know how we will handle the search piece, but we have the initial setup ready..."; as you might imagine the CTO was quite upset, since nobody came up with a solution for the questions he had.</p>
<p>A really good rule to keep your prototypes lean is "once you're done, you have to throw it away". Gather whatever notes you need, just don't copy any code. This technique is usually enforced if you follow a Test-Driven-Development methodology, since you lift all process constraints to get your prototype/research done, but after showing the findings the prototype has to be deleted. That way you don't have to worry about any technical debt created by your prototype development.</p>
<p>There are multiple interpretations of what a prototype should be and what their purpose is. In this article I consider a prototype a research task that will help you figure out answers for some of the unknowns. It might help you prove that you can do a certain thing the way you envisioned, or even the opposite. It might help you figure out that a particular strategy won't work on mobile devices (just an example). This is vastly different from a product in alpha phase. A prototype should be something you would be willing to throw away. Anyhow, just keep in mind that prototypes must focus on what's absolutely essential.</p>
<p><img src="/content/img/blog/prototype.jpg" alt="prototype meme" /></p>
http://johnlouros.com/blog/batch-download-images-from-a-websiteBatch download images from a website2015-06-05T00:00:002015-06-05T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>Time for another <em>quick tip</em>. Today let's write a simple PowerShell script to download a bunch of images from a website. As an example, I will use Bing to search for '<em>funny pictures</em>' where the resulting images are 'Free to share and use commercially'. This can be quite useful when you are looking for images to use on your projects, presentations, websites, whatever. Just a small disclaimer, anyone is absolutely free to use and modify the PowerShell script I have written for this post, however I won't be held responsible for how it's used. Any license infringements will be the responsibility of the person executing the script.</p>
<p>Before we jump into the script, let's break down the problem into tiny little actions. If you had to do this manually, these would be the steps you would need:</p>
<ol>
<li>open a web browser (Chrome, FireFox, IE, Edge or however you prefer)</li>
<li>go to Bing.com</li>
<li>select 'Images'</li>
<li>set the license filter to 'Free to share and use commercially'</li>
<li>search for 'funny pictures'</li>
<li>select an image (by clicking on it)</li>
<li>right-click on the image</li>
<li>click on 'Save image as...'</li>
<li>select a folder to download the image</li>
<li>repeat steps 6 to 9 until you get all the images you want</li>
</ol>
<p><img src="/content/img/blog/bing-funny-pictures.jpg" alt="Bing funny pictures search results" /></p>
<p>This is a lot of work, so let's use PowerShell to do this job for us. If you aren't familiar with PowerShell, you can simply open <em>"Windows PowerShell ISE"</em> on your computer, copy-paste my script and run it. Now, let's break down the same problem into smaller actions that PowerShell will execute:</p>
<ol>
<li>we will need a Web Client to download HTML and image from Bing</li>
<li>we will also need to construct the URL with desired search options</li>
<li>download the HTML from Bing's results page</li>
<li>use a regular expression to look for URLs terminating in <em>'.jpg'</em> or <em>'.png'</em></li>
<li>create a folder on your computer, to store the downloaded images</li>
<li>download each image individually</li>
</ol>
<p>Now, here's the resulting script:</p>
<pre><code class="language-powershell"># script parameters, feel free to change it
$downloadFolder = "C:\Downloaded Images\"
$searchFor = "funny pictures"
$nrOfImages = 12
# create a WebClient instance that will handle Network communications
$webClient = New-Object System.Net.WebClient
# load System.Web so we can use HttpUtility
Add-Type -AssemblyName System.Web
# URL encode our search query
$searchQuery = [System.Web.HttpUtility]::UrlEncode($searchFor)
$url = "http://www.bing.com/images/search?q=$searchQuery&first=0&count=$nrOfImages&qft=+filterui%3alicense-L2_L3_L4"
# get the HTML from resulting search response
$webpage = $webclient.DownloadString($url)
# use a 'fancy' regular expression to finds Urls terminating with '.jpg' or '.png'
$regex = "[(http(s)?):\/\/(www\.)?a-z0-9@:%._\+~#=]{2,256}\.[a-z]{2,6}\b([-a-z0-9@:%_\+.~#?&//=]*)((.jpg(\/)?)|(.png(\/)?)){1}(?!([\w\/]+))"
$listImgUrls = $webpage | Select-String -pattern $regex -Allmatches | ForEach-Object {$_.Matches} | Select-Object $_.Value -Unique
# let's figure out if the folder we will use to store the downloaded images already exists
if((Test-Path $downloadFolder) -eq $false)
{
Write-Output "Creating '$downloadFolder'..."
New-Item -ItemType Directory -Path $downloadFolder | Out-Null
}
foreach($imgUrlString in $listImgUrls)
{
[Uri]$imgUri = New-Object System.Uri -ArgumentList $imgUrlString
# this is a way to extract the image name from the Url
$imgFile = [System.IO.Path]::GetFileName($imgUri.LocalPath)
# build the full path to the target download location
$imgSaveDestination = Join-Path $downloadFolder $imgFile
Write-Output "Downloading '$imgUrlString' to '$imgSaveDestination'..."
$webClient.DownloadFile($imgUri, $imgSaveDestination)
}
</code></pre>
<p>You can also view the script on <a href="https://github.com/jlouros/PowerShell-toolbox/blob/2fd875ae8c878956b154da5f5955d3d2eb45f1a0/Web/Get-ImagesFromWebsite.ps1">GitHub</a>. Enjoy!</p>
http://johnlouros.com/blog/list-and-kill-remote-desktop-connectionsList and kill remote desktop connections2015-05-28T00:00:002015-05-28T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>Today I am marking the official start of my new blog series entitled <em>quick tips</em>. For this series I will focus on simple things that developers might find handy; from scripts, to commands, hot-keys and other tips that maybe you already know, maybe not. Most of the posts from this series will be direct, quick and simple. Personally, I just want to share the notes I have been gathering through the years. For my first <em>quick tip</em> post, I will show how you can list all remote desktop connections and kill a particular session.</p>
<p>Sometimes you might encounter the following error when trying to establish a remote desktop connection: <em>” The terminal server has exceeded the maximum number of allowed connections”</em>. This happens because there’s a maximum limit of allowed remote connections. On top of that, maybe somebody forgot to log off their remote desktop connection, and their inactive session might be occupying a spot that you could use.
<img src="/content/img/blog/ts-max-connections.png" alt="maximum terminal connections error" /></p>
<p>Let’s begin by opening the <em>command prompt</em> (or PowerShell) using: <strong>[Win]</strong> + <strong>[r]</strong>; type <strong>cmd</strong> (or <strong>powershell</strong>) and press <strong>[enter]</strong></p>
<p>Now we are going to use <code>qwinsta</code> to (paraphrasing documentation) <em>“ Display information about Remote Desktop Services sessions.”</em>. If you use the command without any additional arguments, information about your local computer sessions will be displayed. However, most likely you want to target a remote computer; to do that simply enter the server name, or machine IP, using the <em>/SERVER:</em> argument. Example <code>qwinsta /SERVER:mywebserver</code> or <code>qwinsta /SERVER:192.168.1.15</code>
<img src="/content/img/blog/qwinsta.png" alt="qwinsta screenshot" /></p>
<p>To disconnect, or reset, a particular session, just use <code>rwinsta</code> and supply the server name and the session Id you want to reset. Session Ids are displayed in <code>qwinsta</code>'s resulting output. Example <code>rwinsta /SERVER:mywebserver 70</code></p>
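<p>Putting the two commands together, here's a minimal sketch of the full workflow ('mywebserver' and session Id 70 are just the example values used above):</p>
<pre><code class="language-powershell"># list the Remote Desktop sessions on the target machine
qwinsta /SERVER:mywebserver
# reset the stale session, using the session Id shown in the output above
rwinsta /SERVER:mywebserver 70
</code></pre>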
<p>For more information about these two commands, please take a look at their TechNet documentation pages: <a href="https://technet.microsoft.com/en-us/library/cc731503.aspx">qwinsta</a> ; <a href="https://technet.microsoft.com/en-us/library/cc754785.aspx">rwinsta</a>.</p>
http://johnlouros.com/blog/is-your-website-mobile-friendlyIs your website mobile-friendly? It better be...2015-05-17T00:00:002015-05-17T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>As you might have heard, last month (April of 2015) Google announced some changes to their search algorithm whereby "unfriendly" mobile websites would get demoted in future searches. As Google claims, recent research suggests that a poor mobile user experience tends to transmit a careless/sloppy impression of both the website owner and the device being used. This decision is highly understandable since web traffic coming from mobile devices is rapidly growing and Android (owned by Google), being the most used mobile OS, is responsible for providing an enjoyable mobile experience not only with Apps but with web browsing too. While Google can't fix websites to be mobile friendly, they can <em>”derank”</em> them from their search results.</p>
<p>However, Google is not alone in this fight against those unruly websites. Microsoft just announced that their search engine (Bing) will apply the same strategy (you can read more about it in this Engadget <a href="http://www.engadget.com/2015/05/14/microsoft-bing-mobile-friendly-results/">article</a>). When two of the top three search engines declare war, we can only imagine that there will be casualties... “Death to Desktop only!”. Seriously now. You might ask who doesn’t have a mobile-friendly website these days? Just check this article from <a href="http://techcrunch.com/2015/04/21/googles-mobile-friendly-update-could-impact-over-40-of-fortune-500">TechCrunch</a>, where they claim 40% of Fortune 500 websites might be impacted by this decision. And they are just talking about Fortune 500, now imagine everyone else. If this turns out to be as impactful as expected, in the upcoming months there will be an increase of job openings for Web Developers.</p>
<p>All of this sounds great, and in fact it is. No user should have a terrible browsing experience just because website owners don't care about mobile optimization. However, this strategy won’t fix all the problems. There are two different concerns that should be analyzed separately: mobile friendliness and mobile performance. Mobile-friendliness is the capability of viewing a website without the need for scrolling, zooming in/out, pinching, and doing all kinds of crazy interactions just to read the website content. So a mobile-friendly website can be roughly described as capable of adapting to device dimensions. On the other hand, mobile performance is defined by the time a website takes to load, navigate, interact and respond. From what I have read, Google’s decision is uniquely targeting mobile-friendliness and not performance. While this is not perfect, it is a great first step. A great mobile experience relies on viewability, responsiveness and speed. If you are working on mobile optimization, please consider all these metrics.</p>
<p>In combination with this announcement Google also released a tool to analyze how mobile-friendly a website is. Please test your website at <a href="https://www.google.com/webmasters/tools/mobile-friendly/">https://www.google.com/webmasters/tools/mobile-friendly/</a>, the resulting analysis can give you a great set of suggestions to make your website mobile-friendly.</p>
<p><img src="/content/img/blog/mobile-optimazation.jpg" alt="mobile optimization meme" /></p>
http://johnlouros.com/blog/creating-a-chrome-extension-using-Visual-Studio-CodeCreating a Chrome extension using Visual Studio Code2015-05-10T00:00:002015-05-10T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>With the announcement of Visual Studio Code last week, I thought the best way to try it out was to write a small tutorial. In this post, I'm going to describe how to create a simple Google Chrome extension, that generates a QR code for the currently opened browser tab. Then, any extension user can scan the code and view the opened website on his mobile device. As expected, for this tutorial, I am going to use Visual Studio Code.</p>
<p>Before we start, ensure that you have <a href="https://code.visualstudio.com/">Visual Studio Code</a> installed on your machine (I am using version 0.1.0, the first release after //Build 2015/); and <a href="https://www.google.com/chrome/browser/desktop/">Google Chrome</a> (for this tutorial I am using version 42.0.2311.135).
For this tutorial I will be using Windows 10, but both products work on Linux and Mac OS, so feel free to use whatever you like.</p>
<p>If you are not familiar with Chrome extensions, don't worry; they can be described as web applications that extend Google Chrome's base functionality. If you already know HTML, CSS and JavaScript, this will be a piece of cake.</p>
<p>Let's start by creating the necessary folder structure. Create a new folder called "QR generator for Chrome", the image below shows how to do it with PowerShell, but you can do it with Explorer, Finder, Shell, or any other way you prefer<img src="/content/img/blog/chrome-ext-tutorial/step1.png" alt="create workspacefolder" /></p>
<p>Open up Visual Studio Code. We want to work on our recently created folder, so go to "File" -> "Open Folder..." or use [Alt] + [F]; [F] and select our "QR generator for Chrome" folder. This will be our workspace folder.</p>
<p>Now that you have the VSCode pointing to your workspace folder, create a new folder called "app". This folder will contain all the code necessary for our extension. <img src="/content/img/blog/chrome-ext-tutorial/step2.png" alt="create folder in VSCode" /></p>
<p>Any Chrome extension requires a JSON manifest file in which the developer defines a common set of extension properties like: name, version, permissions, icon, and so on. So let's create a new file called "manifest.json". <img src="/content/img/blog/chrome-ext-tutorial/step3.png" alt="create manifest.json" /></p>
<p>However, we don't know anything about Chrome extension manifest files. Wouldn't it be helpful if we had some kind of auto-complete and validation? The good news is somebody already thought about this. We can add a JSON schema file to enable auto-complete and validation for Chrome extension manifest files. Go to <a href="http://schemastore.org/json/">http://schemastore.org/json/</a> and locate the JSON schema file for the Chrome extension manifest <a href="http://json.schemastore.org/chrome-manifest">http://json.schemastore.org/chrome-manifest</a> <img src="/content/img/blog/chrome-ext-tutorial/step4.png" alt="JSON Schema Store website" /></p>
<p>To associate this schema with manifest.json files, let's open Visual Studio Code workspace settings. Open the "command palette..." using [Ctrl] + [Shift] + [P]; type "settings" and select "Preferences: Open Workspace Settings". <img src="/content/img/blog/chrome-ext-tutorial/step5.png" alt="open Workspace settings" /></p>
<p>On the left panel you can view the default settings. Those we can't override there (and we don't want to). On the right side we have our own "settings.json" which overrides any of the default settings on this workspace (these are project settings, not global settings). On the "settings.json", type "json.", wait for the auto-complete and select "json.schemas". <img src="/content/img/blog/chrome-ext-tutorial/step6.png" alt="defining workspace settings" /></p>
<p>This will copy the entire "json.schemas" section of the default settings. Now you can add, remove and modify whatever you want. For this workspace, just add a reference to <a href="http://json.schemastore.org/chrome-manifest">http://json.schemastore.org/chrome-manifest</a> for any files that match "manifest.json". Once you save these changes, Visual Studio Code will create a new file at the root of the project ".settings/settings.json" where any workspace settings are stored. <img src="/content/img/blog/chrome-ext-tutorial/step7.png" alt="adding a new JSON schema to our project" /></p>
<p>Let's get back to our manifest file to define the base settings for our extension. The screenshot below highlights what we want to define, but besides the self-describing properties (like name, description and version), let's talk about <em>"permissions"</em> and <em>"browser_action"</em>. Since we want to generate a QR code for the current tab, we need to ask Chrome about the active tab Url. To do that, we have to ask permission to access it. Just add <em>"activeTab"</em> to the <em>"permissions"</em> array to enable it. On the <em>"browser_action"</em> section, you will define the extension icon, title and default popup. <em>"default_popup"</em> is your "index.html" or the main page, and since the extension will be displayed as a popup, I think the name is definitely appropriate. For now, just create a simple (and valid) HTML file with "Hello world" and create a 19x19 png file for your icon.<img src="/content/img/blog/chrome-ext-tutorial/step8.png" alt="final manifest.json" /></p>
<p>To test your extension locally, open Google Chrome and type "chrome://extensions" in the address bar; enable "Developer mode" by clicking the check-box in the top right corner; next click on "Load unpacked extension..." and select the "app" folder located inside your project, ex. "C:\QR generator for Chrome\app". Now, you should be able to see your Chrome extension in the top right corner of Chrome, right beside the "hamburger button". <img src="/content/img/blog/chrome-ext-tutorial/step9.png" alt="loading unpacked extension to Google Chome" /></p>
<p>Now that you got the basics of creating a Chrome extension, let's take care of the QR code generator. Since there are plenty of open-source implementations of QR code generators written in JavaScript, let's pick one from GitHub. I found a very nice project from Shim Sangmin called qrcodejs. You can check it out at <a href="http://davidshimjs.github.io/qrcodejs/">http://davidshimjs.github.io/qrcodejs/</a>. We will only need the minified version of his QRCode solution, so download it and add it to your project inside a folder called "qrcode". We will also need to create a JavaScript file, where we will write the code to handle: the event when the user opens up the extension; the request to get the current tab Url; and finally, the call to generate the QR code. Just create a new JavaScript file called "popup.js". These are all the JavaScript files we need. Now we need to reference them in our HTML file. Additionally, we need a placeholder for the QR code, so in the HTML body create a <em>"div"</em>, set the id to "qrcode" and the style to 100px width, 100px height (margin-top is optional).
<img src="/content/img/blog/chrome-ext-tutorial/step10.png" alt="extension HTML and QR library code" /></p>
<p>As mentioned before, our JavaScript file will have to handle a particular set of events to make everything work. First, when the user opens up the extension (in our case when the DOM content is loaded), we will need to ask Chrome what the current tab Url is (using the getCurrentTabUrl function). Once Chrome replies back, we will call the QRCode library to generate a QR code for the given Url; we also specify in which DOM element the QR code should be placed. I tried to comment the "popup.js" the best I could, for the space I had available. <img src="/content/img/blog/chrome-ext-tutorial/step11.png" alt="extension JavaScript code" /></p>
<p>And here's the final result (I'm using this blog as example) <img src="/content/img/blog/chrome-ext-tutorial/step12.png" alt="final result" /></p>
<p>You can find the source code for this project on <a href="https://github.com/jlouros/QrCodeGenerator-ChromeExtension">GitHub</a>. It's under MIT license so feel free to use it as you like it.</p>
http://johnlouros.com/blog/build-2015Build 20152015-05-03T00:00:002015-05-03T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>This has been a week full of events, from Kentucky Derby, to Mayweather versus Pacquiao and Chelsea becoming the Premier League Champion, just to name a few. In the Technology world, Microsoft held their biggest developers conference called "Build", in San Francisco. Everybody was anxious to see what Microsoft had to offer, since the company was determined to prove that they don't want to keep trailing behind Apple and Google. That message was clearly stated when, recently, they announced cross-platform support for .Net framework (or a subset of .Net), a new unified truly universal experience for Windows and the ground-breaking holographic device called Hololens.</p>
<p>Obviously I couldn't go by without talking about this event. However it would require an entire book to talk about everything that was presented at Build, so I will focus on some of the things that really caught my attention. Or to be fair, the things that I'm most interested in.</p>
<p>With Windows 10, Microsoft's plan is very ambitious! Create one truly universal Operating System, with the capability of running on the most powerful Desktop machine down to a tiny portable device like the Raspberry Pi, from a device with a touch-screen to a device with no screen at all. For developers this means we can create one application that can be used (in its own way) on every possible device that runs Windows 10. This is phenomenal since developers can rely on a common set of API's and a common design language to code against, instead of having a different project targeting each particular device. The idea is to develop an adaptive application where the same code runs against any device type.</p>
<p><img src="/content/img/blog/universal-apps-devices.png" alt="Universal Apps" /></p>
<p>Hololens, this was what everybody was waiting to see and try out. Microsoft's holographic device shows great potential and can be the stepping stone for merged-reality experiences. For anyone curious about Hololens development, simply start learning more about developing apps for Windows 10 and 3D development (learning Unity can be a great starting point). For details, please check out the official website <a href="https://www.microsoft.com/microsoft-hololens">HoloLens</a></p>
<p>Regarding Visual Studio, some of the features presented for version 2015 were quite impressive, including an extensible toolset for code analysis, a completely revamped XAML development experience and some great debugging improvements, just to name a few. However, what really surprised us all was the announcement of a new cross-platform code editor called Visual Studio Code. Built with <a href="http://www.typescriptlang.org/">TypeScript</a> and <a href="http://electron.atom.io/">Electron</a>, with IntelliSense provided by <a href="https://github.com/dotnet/roslyn">Roslyn</a> and <a href="http://www.omnisharp.net/">OmniSharp</a>, Visual Studio Code is the best editor to work on your .Net projects outside of the Windows environment. Just don't expect the same type of experience as the full blown Visual Studio; this is a lightweight editor with support for git, debugging and IntelliSense.</p>
<p>Another interesting announcement was Windows 10 compilation support for Android and iOS code. In a way to fight the "App Gap" between Windows and the competition (Android and iOS), Microsoft announced two new projects (one for each platform) where you can simply grab your existing Java or Objective-C projects, open them in Visual Studio and compile them for Windows 10 with a few changes. There aren't details about general availability and how this new kind of application will perform, but at least it's a good step to close the "App gap" so many complain about; let's just see if it works.</p>
<p><img src="/content/img/blog/build-2015.png" alt="Build 2015" /></p>
http://johnlouros.com/blog/aiming-for-a-streamlined-development-processAiming for a streamlined development process2015-04-26T00:00:002015-04-26T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>A well defined development process is key for a successful Software Development team. Transparency and common understanding, will help new developers get up to speed more quickly and allow more flexibility for further modifications. This may sound a bit chaotic, but your development process should be prepared to allow constant improvements (modifications). Obviously you shouldn't change something just for the sake of it; identify what value does your modification bring and make a conscious decision before applying it; ask your team members, or even the whole department, what do they think about your change and how it will help them; also, things that look obvious to you, might not seem so obvious to someone else, so think about the impact your tweak will have on your team workflow. What I am saying can be interpreted as common sense, and it should be, however you would be amazed by how often people disregard common sense. And I can tell you, changing something without a reason can be as dangerous as not changing at all.</p>
<p>As a fundamental concept, for a well defined development process you must: identify all the requirements; gather information about the tools and components used; define what the team should aim for; and define the expected action when a particular event occurs. This exercise helps to set expectations, or more simplistically, helps you identify what questions must be answered: "does my code build?", "do all the unit tests pass?", "am I able to deploy this code?", "does my code meet the stakeholders' expectations?", and so on.</p>
<p>Today I want to share an example set of requirements, components, tools and concepts you should aim for. Actions and steps require a more involved analysis and deserve a post of their own. Meanwhile here's a simplified guideline you can follow:</p>
<p><strong>Requirements:</strong></p>
<ul>
<li>all tests must pass</li>
<li>existing tests can't be ignored</li>
<li>test code coverage shouldn't drop</li>
<li>new code follows the defined coding standards</li>
<li>technical debt should not increase</li>
<li>all applications should be properly versioned</li>
<li>set versions should use the same conventions</li>
<li>new code was properly reviewed and approved</li>
<li>modifications don't create new security issues</li>
<li>an application package is unique and should be built only once</li>
<li>an application should only have one build/package process</li>
<li>tasks, issues or any work items must be associated with the corresponding "changeset"</li>
<li>application performance should meet the defined standards</li>
<li>every team should use the same workflow (i.e.: git-flow)</li>
<li>the defined process should be enforced by default and prevent any workarounds</li>
</ul>
<p><strong>Components:</strong></p>
<ul>
<li>version control system</li>
<li>version control system repository</li>
<li>integrated development environment (IDE)</li>
<li>build controller
<ul>
<li>build agents</li>
</ul>
</li>
<li>build artifact repository</li>
<li>deployment manager
<ul>
<li>deployment agent</li>
</ul>
</li>
<li>virtualization manager</li>
<li>feature management system</li>
<li>issue tracking system</li>
<li>code review system</li>
<li>test execution system</li>
<li>performance measuring tools</li>
</ul>
<p><strong>Tools:</strong></p>
<ul>
<li>git</li>
<li>GitHub</li>
<li>reviewboard</li>
<li>Visual Studio</li>
<li>ReSharper</li>
<li>Jenkins</li>
<li>Artifactory</li>
<li>SonarQube</li>
<li>Go (ThoughtWorks)</li>
<li>Confluence</li>
<li>JIRA</li>
<li>MS System Center</li>
<li>YSlow</li>
<li>OWASP ZAP</li>
<li>HipChat</li>
</ul>
<p><strong>Aim for:</strong></p>
<ul>
<li>consistent application versioning
<ul>
<li>all applications must be versioned</li>
<li>use the same schema to generate versions (ex: 2015.02.25.2) #{Year}.{Month}.{Day}.{Build counter} (see the sketch after this list)</li>
</ul>
</li>
<li>applications must be independently deployable. Reject any hardcoded environment settings</li>
<li>use exactly the same package on any environment. Build once, use the same binaries on Dev, QA, Staging, Production environments.</li>
<li>each application should be built using the same build process/build definition. Do not break it down to a build definition for each branch. If you want to tweak the process, you just have to change it in one place. Avoid unnecessary complexity.</li>
<li>each "changeset" must be associated with a correspondent task. This way the company can easily track the investment and maintenance cost of a particular feature.</li>
<li>try to keep a consistent set of tools. Avoid multiple different build controllers or issue tracking systems. Whatever tools were picked, keep them and if you need to change it, do it across the board.</li>
<li>clear and transparent process. Everyone should have an easy way to view the entire process workflow and the current state of the process. If the team is waiting on an approval, it should be clear who they should contact.</li>
<li>enforced and automated workflow. A well defined process is great, but it's even better if you can automate it. Wikis are a great place for documentation, but they can't enforce a particular workflow. Use proper tools to manage your workflow.</li>
</ul>
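<p>As a minimal sketch of the versioning schema mentioned in the list above (the build counter would normally come from your build controller; 2 is just an example value):</p>
<pre><code class="language-powershell"># generate a version string following {Year}.{Month}.{Day}.{Build counter}
$buildCounter = 2
$version = '{0:yyyy.MM.dd}.{1}' -f (Get-Date), $buildCounter
Write-Output $version   # e.g. 2015.02.25.2
</code></pre>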
<p><img src="/content/img/blog/agile-process-meme.jpg" alt="agile process meme" /></p>
http://johnlouros.com/blog/my-presentation-about-PowerShell-5My presentation about PowerShell 52015-04-19T00:00:002015-04-19T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>Last week I did a presentation about "PowerShell 5 and OneGet" for my company. It was an introductory level session where I talked about some of the new features of PowerShell 5 and what OneGet is and how to use it. It was a time boxed session set to last no more than thirty minutes, so I had to pick the topics I thought were the most relevant to developers. Small disclaimer, I did my presentation in April of 2015, so keep in mind that some of the things I am going to talk about today might be out of date by the time you read this post. Additionally, you may be wondering where the PowerPoint for my presentation is; well, in my presentations I try to stay away from PowerPoints as much as I can, in a way to provide more demonstrations and engage everybody in an interactive session. On this blog post, I want to highlight the most relevant things I mentioned on my presentation.</p>
<p>Let's start with PowerShell; one of the best features included in version 5 is the support for classes and enums. It might not sound like a big deal, but at least for me, classes are far "cleaner" and easier to manage than a swirl of functions. To be fair, before PowerShell 5 you could create a C# script with your class definition and instantiate it by calling <code>New-Object</code>, but this approach doesn't look very clean and can get quite challenging to maintain in no time, simply because you write your class in plain text and tell PowerShell to recognize it as a new type. Here's an example:</p>
<p>Defining a class before PowerShell 5:</p>
<pre><code class="language-powershell">$myClassScript = @"
public class MyClass
{
private string _myName;
public MyClass(string myName)
{
_myName = myName;
}
public static int Add(int a, int b)
{
return (a + b);
}
public int Multiply(int a, int b)
{
return (a * b);
}
public string GetName()
{
return _myName;
}
}
"@
# add our custom Type
Add-Type -TypeDefinition $myClassScript
# call 'MyClass' static method
[MyClass]::Add(4, 3)
# create a new object of type 'MyCalss'
$myClasstObject = New-Object MyClass "Jane Doe"
# call method Multiple from a existing instance
$myClasstObject.Multiply(5, 2)
$myClasstObject.GetName()
</code></pre>
<p>Defining a class in PowerShell 5:</p>
<pre><code class="language-powershell">Class MyClass
{
#Properties
[string]$_myName
#Constructor
MyClass([string]$myName)
{
$this._myName = $myName
}
#Methods
Static [int] Add([int]$a, [int]$b)
{
return $a + $b
}
[int] Multiply([int]$a, [int]$b)
{
return $a * $b
}
[string] GetName()
{
return $this._myName
}
}
[MyClass]::Add(5,9)
$myClassObject = [MyClass]::New("Jane Doe")
$myClassObject.Multiply(5,4)
$myClassObject.GetName()
</code></pre>
<p>Another awesome feature in PowerShell 5 is static code analysis (or script analysis). This feature was included in "Windows Management Framework 5.0 Preview February 2015" and it's still in experimental mode (download it <a href="http://blogs.msdn.com/b/powershell/archive/2015/02/18/windows-management-framework-5-0-preview-february-2015-is-now-available.aspx">here</a>). With PowerShell repositories becoming larger and more complex to manage, there is a need for coding standards and guidelines so all scripts are easier to interpret and maintain across developers. The perfect place for code analysis is as a step in your development lifecycle, so that on every commit to the main remote repository, code analysis runs to validate the scripts; a minimal invocation sketch follows below.</p>
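<p>As a rough sketch, and assuming the experimental PSScriptAnalyzer module shipped with that preview (cmdlet and parameter names may still change before release), running the analysis could look like this:</p>
<pre><code class="language-powershell"># load the experimental script analysis module
Import-Module PSScriptAnalyzer
# analyze a single script against the built-in rules
Invoke-ScriptAnalyzer -Path .\example_script.ps1
# or analyze an entire repository, for example as a CI step on every commit
Invoke-ScriptAnalyzer -Path .\my-powershell-repo -Recurse
</code></pre>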
<p>The last feature I want to talk about is OneGet. This is not necessarily tied to PowerShell 5 development, but since it will be supported out-of-the-box in version 5, I thought it was worth mentioning. To briefly explain it, imagine it as the evolution of NuGet. As you may know, NuGet is the main package manager used in Visual Studio. It's incredibly useful and easy to use. For example, do you want to include Entity Framework in a project? Just right-click on the project name, click "Manage NuGet Packages...", search for it, install it and "bam", you're done; the package manager will download the required DLLs for you and execute any required scripts (defined by the package maker). Well, somebody thought, why not use this concept for applications? After all, Linux has had this for years (check apt-get) and that's how <a href="https://chocolatey.org/">Chocolatey</a> was born. Then the folks at Microsoft thought, "why don't we add support for a package manager for PowerShell modules?" Well, since the list was growing, they decided to create the OneGet project, which will be the supporting platform for all of their package providers. If you want to know more, check out their <a href="https://github.com/OneGet/oneget">GitHub project</a>. A short usage sketch follows below.</p>
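<p>Here is a quick, hedged sketch of what working with the OneGet cmdlets looks like; parameter names were still in flux in the preview builds, and the package name below is only an example:</p>
<pre><code class="language-powershell"># list the package providers OneGet knows about (NuGet, Chocolatey, PSGallery, ...)
Get-PackageProvider
# search for a package by name
Find-Package -Name 'notepadplusplus'
# install it; -Force skips the confirmation prompt
Install-Package -Name 'notepadplusplus' -Force
</code></pre>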
<p>I hope this information was somehow useful to you, or at least that it got you interested in knowing more about PowerShell. Before I conclude this post, I would like to share some references, including the GitHub repository where the PowerShell scripts used during the presentation are hosted.</p>
<ul>
<li><a href="https://github.com/jlouros/PowerShell-toolbox/tree/master/Misc/PowerShell%205%20Presentation%20Scripts%20%28April%202015%29">Presentation demo scripts</a></li>
<li><a href="http://blogs.msdn.com/b/powershell/archive/2014/05/20/setting-up-an-internal-powershellget-repository.aspx">Setting up an Internal PowerShellGet Repository</a></li>
<li><a href="http://learn-powershell.net/2014/04/11/setting-up-a-nuget-feed-for-use-with-oneget/">Create Oneget repository (NuGet Server)</a></li>
<li><a href="http://www.howtogeek.com/200610/more-details-about-oneget-windows-10s-package-management-manager/">More Details About OneGet, Windows 10’s Package-Management-Manager</a></li>
<li><a href="http://blogs.msdn.com/b/powershell/">Official PowerShell blog</a></li>
<li><a href="http://www.powertheshell.com/isesteroids2/">ISESteroids 2.0 (PowerShell ISE extension)</a></li>
<li><a href="http://www.jsnover.com/blog/2013/12/07/write-host-considered-harmful/">Write-Host Considered Harmful</a></li>
<li><a href="https://www.powershellgallery.com/">Official PowerShell module gallery</a></li>
<li><a href="https://technet.microsoft.com/en-us/library/hh857339.aspx#BKMK_new50">new PowerShell 5.0 features (TechNet)</a></li>
<li><a href="http://www.powershellmagazine.com/">PowerShell Magazine</a></li>
<li><a href="http://www.dotnetrocks.com/default.aspx?showNum=1113">.Net Rocks “Managing an IT codebase with Steve Evans”</a></li>
<li><a href="https://github.com/jlouros/PowerShell-toolbox">my personal PowerShell toolbox</a></li>
</ul>
<p><img src="/content/img/blog/PowerShell-WhatsNew.png" alt="Whats new in PowerShell" /></p>
http://johnlouros.com/blog/how-to-fix-VS2015-CTP6-NuGet-installation-failureHow to fix VS2015 CTP6 NuGet installation failure2015-04-09T00:00:002015-04-09T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>As soon as I saw the first peek of Windows 10 <strong>adaptive UX</strong> displayed at <a href="https://www.youtube.com/watch?v=dDhHIkWKoWw">Mobile World Congress 2015</a>, I couldn't wait to try it out. I was already using Visual Studio 2015 CTP5, but the requirements clearly stated that I needed Visual Studio 2015 CTP6 and Windows 10 Technical Preview SDK (and Windows 10 installed, obviously), you can check the full requirements <a href="http://dev.windows.com/en-US/windows-10-developer-preview-tools">here</a>.</p>
<p>So I began the VS2015 CTP6 installation, but at the end the installer returned a warning with the following statement <strong>"Microsoft NuGet - Visual Studio 2015 Package failed"</strong>. Since it was a warning I wasn't too worried. But I wanted to make sure everything was OK before installing the Windows 10 SDK, so I opened Visual Studio and <em>boom</em>, VS crashed. Obviously the NuGet installation had something to do with it.</p>
<p>The problem happened because I used the Visual Studio 2015 web installer and it automatically downloaded the latest version of NuGet from the <a href="https://visualstudiogallery.msdn.microsoft.com/5d345edc-2e2d-4a9c-b73b-d53956dc458d">Visual Studio Extensions Gallery</a>; however, the latest version at that time (3.0.60225.100, released on 2/26/2015) was broken in VS2015 CTP6.</p>
<p>However, the solution is quite simple; just follow these steps (a consolidated PowerShell sketch follows the list):</p>
<ol>
<li>close all your Visual Studio instances</li>
<li>download Visual Studio 2015 CTP6 ISO (the full 4GB ), here's the <a href="http://go.microsoft.com/?linkid=9875733">link</a></li>
<li>mount the ISO and find the NuGet package installer at <em>"D:\packages\WPT\nuget14_VisualStudio.cab"</em>
<img src="/content/img/blog/VS2015-CTP6-ISO-NuGetInstallerLocation.png" alt="NuGet installer location" /></li>
<li>extract the contents of the ".cab" file to a directory, let's use <em>"C:\VS2015-CTP6-NuGet-Fix"</em> as a example
<img src="/content/img/blog/NuGetInstaller-Extracted.png" alt="NuGet install extracted" /></li>
<li>open <em>"Visual Studio 2015 Developer command prompt"</em> usually found on the start menu, under Visual Studio 2015 folder
<img src="/content/img/blog/VS2015-DevCmd.png" alt="VS2015 dev cmd" /></li>
<li>uninstall NuGet by typing the following command <code>VSIXInstaller /u:NuGet.0d421874-a3b2-4f67-b53a-ecfce878063b</code>
<img src="/content/img/blog/Uninstall-NuGet-VSIX.png" alt="uninstall NuGet cmd" /></li>
<li>close the developer command prompt and re-open it</li>
<li>manually install the <em>".vsix"</em> extracted in step 3 using the following command <code>VSIXInstaller c:\VS2015-CTP6-NuGet-Fix\NuGet.Tools.vsix /admin</code>
<img src="/content/img/blog/Manual-NuGet-VSIX-Install.png" alt="install NuGet cmd" /></li>
</ol>
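<p>For convenience, here is a rough PowerShell sketch that consolidates steps 4, 6 and 8. It assumes the example paths shown above and that <code>VSIXInstaller.exe</code> is on the PATH (which it is inside the Developer Command Prompt); adjust as needed:</p>
<pre><code class="language-powershell"># directory used as an example in the steps above
$fixDir = 'C:\VS2015-CTP6-NuGet-Fix'
New-Item -ItemType Directory -Path $fixDir -Force | Out-Null
# step 4: extract the .cab from the mounted ISO (expand.exe ships with Windows)
expand.exe -F:* 'D:\packages\WPT\nuget14_VisualStudio.cab' $fixDir
# step 6: uninstall the broken NuGet extension
VSIXInstaller.exe /u:NuGet.0d421874-a3b2-4f67-b53a-ecfce878063b
# step 8: install the NuGet extension shipped with the ISO
# (the steps above recommend re-opening the command prompt between the two VSIXInstaller calls)
VSIXInstaller.exe "$fixDir\NuGet.Tools.vsix" /admin
</code></pre>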
<p>Open Visual Studio 2015 and <em>voila</em>.
I hope this helps.</p>
http://johnlouros.com/blog/an-odd-letter-from-itAn odd letter from IT2015-04-05T00:00:002015-04-05T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>Today I want to comment on an e-mail that was sent by the IT department of a technology company. To give you some context, the official operating system used by the company was Windows 7. However, due to some development needs, some Software Engineers and IT folks were using Windows 8. It wasn't an officially supported OS, but a reasonable number of people were using it and honestly no problems or incompatibilities were found. Anyway, I copied the e-mail below so you can read it and draw some conclusions of your own. Some of the content was changed to keep it anonymous, but the general idea remains the same. Here it is:</p>
<p><em>"Hi {company employees}:</em></p>
<p><em>{Company name} has standardized on Windows 7 for the company provided Dell laptops. As a company, {Company name} has decided to skip the Windows 8 upgrade. There are some occasions that have merited a Windows 8 installation, and for those the IT department has handled the operating systems installations.</em></p>
<p><em>ALL operating system installations/upgrades are to be managed by IT. You have been allowed certain privileges on your company provided machines so that your productivity isn’t stifled, but NO employee should take it upon themselves to upgrade the OS of their laptops or any of the virtual machines they have access to. As the IT department is ultimately responsible for managing these devices, and not the employees, IT is also responsible for the management of the operating systems. Clearly, running the updates for the devices is OK, but never… never… should an employee take it upon themselves to upgrade the operating systems of any company provided machine.</em></p>
<p><em>If you are running any Windows Operating system other than Windows 7, or if you are running Mac Yosemite OS, please contact IT so that we can place you back on the corporate approved Operating systems. We will find out eventually anyway.</em></p>
<p><em>Team Leads please make sure that your employees are adhering to all corporate policies… including this one.</em></p>
<p><em>If you have any questions, comments, or concerns regarding this email please reach out to me and we can discuss this further."</em></p>
<p>Now let me give you the full version of the story. The entire company got this e-mail after one of the Software Engineers purposely installed Windows 8 on his machine and requested some assistance from IT to join the company’s domain group. Keep in mind that this is a simple operation and the company's domain rules were quite loose; the only real requirement was running McAfee Anti-Virus software, which was not a problem. Anyway, the reason why the Engineer needed Windows 8 on his machine was simply because he needed some virtualization software compatible with Windows Azure, so he could create a custom Virtual Machine image and then upload it to Azure. That way he could easily spin up new instances of that Virtual Machine image in Azure. You might not know this, but the Windows 8 Professional edition comes with Hyper-V virtualization software, which is free and compatible with Azure. To make matters a little bit more annoying, in the development group some of the Engineers were already running Windows 8. Some of them installed it on their own, others actually got some help from the IT department. However, on that particular Friday morning the Chief of IT probably had a bad night, came up with this new mandate and rapidly wrote the e-mail I posted above… The way I see it, if there was an additional cost (which was not the case since all the developers had active MSDN subscriptions), some security concern (which there wasn't, since Windows 8 is far more secure than Windows 7), or if this was experimenting with unreleased/unsupported software, I would totally understand his email. However, the developer had a specific need to use Windows 8, but now, because of the bad mood of their IT leader, he wouldn't be able to use the tools he wanted to solve his problem. On top of all this, the developer was called irresponsible, irrational and disrespectful to IT, when he actually had a valid reason to make the upgrade and when others before him had already done the same. But the worst part was that IT, instead of providing support and assisting the developer's needs, fought them and made an unfounded decision that caused a significant impact on the developer's productivity.</p>
<p>What upset me the most about this episode was the outcome. The company's leadership stood behind IT's decision and reinforced the idea by saying it would reduce costs and maintenance effort. A decision that I saw as an excuse for IT's laziness, and that impacted the developer's productivity, was accepted by the leaders. Now, instead of just the IT folks dragging down developers' productivity, management came to drag us down even more. This is something I seriously didn't understand. I always thought IT was a task force created to aid everybody's needs and to improve the company's productivity, not to fight them.</p>
<p><img src="/content/img/blog/IT-department.png" alt="IT department joke" /></p>
http://johnlouros.com/blog/resignation-letter-templateResignation letter template2015-03-29T00:00:002015-03-29T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>Today I just want to share a template of the letter of resignation I recently wrote. I thought it could be useful for somebody else, so why not share it. You can take a sneak peek below, or download it from my OneDrive public share (check the link at the bottom of this post).</p>
<p><em>"{Current date (ex: January 7th, 2015)}</em></p>
<p><em>Dear {your superior name, or HR responsible},</em></p>
<p><em>Please accept this letter as a formal notification that I am resigning from my position as {your role} with {company name}. My last day will be {date of your last day (ex: January 23rd, 2015)}, as previously discussed.</em></p>
<p><em>Thank you so much for the opportunity to work in this position for the past {amount of time you have been working for this company}. I sincerely appreciate the opportunities I have had to grow in the original position and the trust you had in my abilities. All the experiences and skills acquired here, as well as the knowledge about the technology industry, will shape my future career.</em></p>
<p><em>During my last weeks, I will do everything possible to wrap up my duties and train other team members. Please let me know if there is anything else I can do to aid during the transition.</em></p>
<p><em>I wish the company continued success, and I hope to stay in touch in the future."</em></p>
<p>Here's a link to a Word template of my <a href="http://1drv.ms/1HW5hjF">letter of resignation</a>. Feel free to download it and use it as you please, as long as you don't hold me responsible for any further modifications. I am not encouraging anybody to resign, so I shall not be accountable for any decisions or actions made by somebody else. This template is provided as is.</p>
<p><img src="/content/img/blog/letter-of-resignation-meme.jpg" alt="letter of resignation meme" /></p>
http://johnlouros.com/blog/the-problem-with-todays-software-developer-interviewsThe problem with today’s Software Developer interviews2015-03-22T00:00:002015-03-22T00:00:00John Louroshttp://johnlouros.com/UnhandledXcept@outlook.com<p>During the month of March of 2015 I was actively looking for a new job opportunity in Boston. I had just moved there and I am now sharing my experience. Boston’s marketplace is quite intriguing, something worth writing about. This post just highlights my point of view on how interviews for Software Developers are conducted. Hopefully you will find some useful information for a possible future job search, and I would be even more amazed if I could change some interviewers' perspectives.</p>
<p>Let me give you some context before I begin. I can’t say that I am an expert in this subject; after all I only worked for two different U.S. companies in the last 3 years. In both of them, besides fulfilling my role as a Software Developer, I was responsible for conducting some technical interviews. Also, both of those companies were located in Philadelphia, which is a totally different market from Boston.</p>
<p>Now, regarding Boston. As you may know, Boston is also referred to as the Silicon Valley of the East. Boston is not bigger than other cities in the Northeast, like Philadelphia or New York, however all the major technology companies have some kind of development office here (Microsoft, Google, Amazon, Apple, Twitter, Akamai, you name it). On top of that, Boston is also the hometown of some of the best U.S. colleges, like M.I.T., Harvard and Boston University. So here’s my first point: a small city, big tech companies and great colleges equal a demanding market. Now, there’s also a great need for Software Developers here, but companies can hold themselves to high standards simply because every year there’s a new wave of great minds walking into those colleges and universities. Some of them eventually move to other locations (like Seattle, San Francisco, or New York), but a lot of them will also stay. So a large company in Boston can raise its quality standards and simply wait until it finds somebody who will pass its challenging technical tests. To be fair, some of these technical tests are universal, meaning no matter where you are, you will be presented with the same test. However, what I want to talk about is how this affects other, smaller technology companies located in Boston, more specifically what I call “the Google interview syndrome”.</p>
<p>Now here’s some additional context if you’re not familiar with Google’s interview process. If you are familiar, feel free to skip this paragraph. The concept is quite simple: no matter what development position you are applying for, you are given a set of problems that require some actionable knowledge of algorithms, data structures and optimization. This is a transparent process; before they schedule an interview, you will receive a bunch of material to get prepared. Anyway, in a nutshell, you know it will be a challenging process. The premise is, if you can solve these generic problems you will be able to execute whatever task they may ask of you in the future. However, taking a simplistic view makes it look like “one size fits all”. To be fair, there must be tons of different development roles at Google, so having a generic set of questions simplifies the interviews and reduces their maintenance cost quite a lot. At the same time, isn’t this process quite unfair? Imagine that you are an incredible high-level coder (like a JavaScript developer) with a decent set of open-source projects, a few professionally executed projects, some years of professional experience under your belt and a knowledgeable business perspective. Now you are applying for a Front-end developer position, nothing fancy, just the typical website HTML, CSS, JavaScript development (no framework, or dev-kit development). However, if in the midst of a 5-hour interview process (and considering that you are a nervous wreck), you choose the least adequate algorithm for the problem presented, you are out. Now, I don’t want to criticize their interview process; they have a reason to do this, and the reason is that a developer should be able to fulfill any role. And I think they are right, after all they're Google; tomorrow they might want to reinvent the entire Internet, and that would require all of their developers to refocus to meet that goal.</p>
<p>The following image, tweeted by <a href="https://twitter.com/KeLuKeLuGames">@kelukelugames</a>, perfectly describes my point.
<img src="/content/img/blog/job-search-joke.png" alt="job hunting as a developer" /></p>
<p>Back to the Boston companies. Let’s first agree on something: there are few companies like Google. I would say the truly comparable companies are Microsoft and Amazon. For these three, I would agree with an interview process similar to Google's. Anybody else, probably not so much (this is a generalization, but you get the point). Now let me ask you one thing: do you think a company with a very specific goal, a very specific business and market, should use these highly generic “Googleish” type interviews? Shouldn’t they present problems more focused on their challenges and more relevant to their business, instead of the generalized questions presented by Google? Probably yes. Well, this wasn't my recent experience. Honestly it felt like a lot of the interviewers spent quite some time studying for Google interviews, and either they failed the interview process or worked for these companies before, and now this process of recruitment seems to be the only chosen one. I am not dismissing Google's interview process; a generalized set of questions works for a big company with hundreds of different positions. But a smaller company could, and should, take advantage of its smaller size and customize each position for the exact requirements needed. This allows each new addition to fulfill a specific role in the company, more easily meeting their business needs.</p>
<p>To conclude, I would like to quickly describe what I consider a good technical interview. First, keep in mind the interviewee might be nervous, so try to provide a comfort zone. Respectfully, treat them like a friend; both of you are there with the same goal, solving a problem. I believe you should present a real problem; something you solved before that can be presented simply, without much business knowledge. Allow, and indeed encourage, them to define some assumptions; after all, they might not know the business well enough to correctly solve the presented problem. But most importantly, communicate and be engaged in the interview. Try to make it a teamwork exercise where both of you are designing a solution for the presented problem. Also, one key point: when the interviewee is talking to you, look at them; don't be on your laptop working on something else. This actually happened to me, and you can't believe how disrespected I felt, so please don't do that.</p>
<p>Disclaimer, this blog represents my personal perspective. All of the writing here is of my responsibility and only mine. It does not reflect any opinions of the company I work for, or any of the others I had worked for. Feel free to disagree or even correct me if you share a different perspective.</p>