tips & tricks

In many cases I need to run applications as administrator in Windows. I also like pressing the Windows key on the keyboard and typing the name of the application.

The problem is that to run the application as an administrator, I have to switch to the mouse: right-click on the name of the application and select Run As Administrator.

To avoid this I found this shortcut: Shift + Ctrl + Enter. It displays the privilege escalation confirmation dialog and I can simply tab and press Enter. So all this without reaching for the mouse!

tips & tricks

Like many people, I use Chrome as my main browser. I like the experience both as a user and as a developer. But as a developer, I constantly bump into caching issues: I fail to see my changes because everything is cached in the browser.

One way of clearing cache is:

  1. Open Developer tools
  2. Go to Application tab
  3. Clear all application data

This was my preferred method before I got a tip from a colleague:

  1. Open Developer tools
  2. Right click on the refresh icon

It shows three options to clear the cache. Note that these options are only available while the developer tools are open.


tips & tricks

Using multiple monitors is great, but quite often I have to disconnect them and move my laptop around. When I connect them back, sometimes I find that some windows act weirdly. For example, a minimized window goes to a non-existent monitor when I try to maximize it.

To fix that issue, I got this quick tip from a colleague:

Instead of clicking on the application in the task bar, Shift + Right-click it. In the context menu there’s a Maximize option which brings the lost window back to the main monitor.

tips & tricks

I like LINQPad for prototyping C# applications and trying out short snippets. In many scenarios I have to see the output of what I’m trying out. I used to treat my snippets as if they were small console applications and use Console.WriteLine() statements to display the output.

Not anymore! In a tech video on YouTube I saw this option and loved it:

In LINQPad, there’s a generic extension method called Dump(). It renders its input in the results panel, exactly like Console.WriteLine output but in a much more concise way, and it returns its input so calls can be chained.

For example:

var nums = new[] { 1, 2, 3, 4, 5, 6 };
var sum = nums.Aggregate((a, b) => a + b);
Console.WriteLine(sum);

This displays 21, and it can be shortened with the Dump method:

var nums = new[] { 1, 2, 3, 4, 5, 6 };
nums.Aggregate((a,b) => a + b).Dump();

It shows the same result in a shorter way.


devops

Some time ago I developed a script to back up my GitHub account and blogged about it here. The idea is to always have a backup copy and some redundancy.

But what if, instead of a local copy, we had two online copies? I’ve learned that this is very easy to achieve, as Git supports pushing to multiple repositories at the same time.

For this I created a repository in AWS CodeCommit, which is free up to 50GB.

The secret to achieve this is the following command:

git remote set-url --add --push

Steps to push to multiple repositories

01. First find the current push URL by running:

git remote -v

02. Copy the URL listed for push and re-add it with the following command (this is needed because the first added push URL replaces the default one, so the original must be added back explicitly):

git remote set-url --add --push origin {push URL from above}

03. Then run it again with the second remote, like this:

git remote set-url --add --push origin ssh://{repo_name}

Testing the new repo

When you run git remote -v again you should see something like this:

origin	{repo_name}.git (fetch)
origin	{repo_name}.git (push)
origin	ssh://{repo_name} (push)

Now put a test file in the local repo and push it with

git push -u origin master

It should appear in both remote repositories. It also works well with GUI tools (I use SourceTree).
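To see the whole flow end to end, here’s a sketch using two local bare repositories as stand-ins for GitHub and CodeCommit (all paths and names below are made up for illustration):

```shell
#!/bin/sh
# Sketch: push one local repo to two remotes at once.
# Two local bare repos stand in for the real remote servers.
set -e
rm -rf /tmp/multipush && mkdir -p /tmp/multipush && cd /tmp/multipush
git init --bare remote-a.git
git init --bare remote-b.git
git init work && cd work
git config user.email "demo@example.com" && git config user.name "Demo"
git remote add origin ../remote-a.git
# Re-add the original push URL first, then add the second remote:
git remote set-url --add --push origin ../remote-a.git
git remote set-url --add --push origin ../remote-b.git
echo test > file.txt && git add . && git commit -m "test"
git push -u origin HEAD
# Both bare repos now contain the same commit:
git --git-dir=../remote-a.git log --oneline
git --git-dir=../remote-b.git log --oneline
```

A single `git push` now updates both remotes, which is exactly why GUI tools keep working unchanged.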

This way I get an online backup without too much hassle, and I can change my provider whenever I want.


dev, aws, ses

It’s important to read the titles on AWS console pages. The reason I decided to post this is that I saw other people in the forums making the same mistake. In this case, I’m talking about SES rule sets.

The problem with this user interface design is that you can easily see the “Create a New Rule Set” button, create your rules inside it, save it and expect it to take effect instantly, but this new rule set would be inactive. So basically, if you want your rule to start working right away, make sure to click on the blue button first. This way you add your rule to the Active Rule Set.

No other AWS service has this kind of distinction in the console. I guess that’s what’s causing this confusion for me as well as other people. Hope this post comes in handy for someone who’s having this issue, or at least creates some awareness.

dev, aws, iam

Normally the login URL for IAM users is in this format:

https://{Account Id}.signin.aws.amazon.com/console

But it is possible to make this URL more memorable and user-friendly.

In order to achieve this, click on the Customize link which is to the right of the link:

This brings up the Create Account Alias dialog box. Since the name we provide is used in the URL, it needs to be globally unique.

If you click on Customize again it asks if you would like to delete the alias and use the account number again. This way you can revert your changes.


dev, csharp

Today I was looking at a code sample and noticed a rather unusual syntax in a while condition. It looked like this:

int T = Int32.Parse(Console.ReadLine());
while (T --> 0)
{
    // Do stuff
}

I started looking around for a --> operator, assuming it was a new addition to the language, but it wasn’t listed in the C# operators list.

Playing around with this code in a sample console application, I realized that it’s just a decrement operator and a greater-than comparison grouped together. So it’s essentially equivalent to this:

int n = 4;
while ( n-- > 1 )
    Console.WriteLine($"Hello World! {n}");

But it looks better when written as if it were a single operator. The same can be done with the increment operator and a less-than comparison, but it looks a bit uglier:

int n = 4;
while ( n ++< 8 )
    Console.WriteLine($"Hello World! {n}");

It can also be used in a for loop:

var items = new int[] { 1, 2, 3, 4, 5 }; 
var m = 4;
for (; m --> 1 ;)
    Console.WriteLine($"Hello World! {m} {items[m]}");

which would print

Hello World! 3 4
Hello World! 2 3
Hello World! 1 2

I don’t think I would use this notation in my own applications, but it might be helpful when reading somebody else’s code.


sysops

Yesterday I had to find the count of objects in a folder in an S3 bucket. I only had access to AWS via the command line and was working on a Windows Server.

After a bit of digging around, I found a solution using PowerShell’s Measure-Object cmdlet.

The solution to get the object count was:

aws s3 ls s3://{bucket}/path/to/files | Measure-Object

This works on local folders as well. It can also be used to get the minimum / maximum / average / total size of a folder, so it’s quite handy for getting some quick stats about a folder or bucket.

While trying it out again, I had to install the AWS CLI on my Mac. So, just for future reference, to use the AWS CLI on macOS you can install it with Homebrew as follows:

brew install awscli
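On macOS there’s no Measure-Object, but piping the same listing through wc -l gives the object count. A minimal sketch, using a local folder to stand in for the bucket listing (the demo path is made up):

```shell
#!/bin/sh
# Count "objects" by counting lines of a listing, like Measure-Object does.
set -e
rm -rf /tmp/s3-count-demo && mkdir -p /tmp/s3-count-demo
touch /tmp/s3-count-demo/a.txt /tmp/s3-count-demo/b.txt /tmp/s3-count-demo/c.txt
ls /tmp/s3-count-demo | wc -l    # prints the object count (3 here)
# Against a real bucket the pipeline would be:
#   aws s3 ls s3://{bucket}/path/to/files | wc -l
```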


sysops

Today I learned an easy way to retrieve stored Wi-Fi passwords on a machine without installing an external application. The original article is in the resources section.

All it takes is two commands.

First, list all the previous Wi-Fi connections with this command:

netsh wlan show profiles

Then copy the connection name and replace ConnectionName in the command below:

netsh wlan show profile name="ConnectionName" key=clear


utility

I often need to combine PDFs when I do my weekly planning. As a Windows user I didn’t have an easy way of doing it (free applications generally come with bloatware), so I developed a small console application in C# to merge PDFs into one. When I decided to switch to a Mac for daily use, I was again in need of a way to achieve the same task. Fortunately, the solution is already built into the Mac.

Apparently, the Preview tool allows you to drag and drop other documents into a PDF as pages, so that’s an easy way of doing it. But I wanted to automate the process and found this article, which shows how to do it via the command line.

The solution

So here’s how it works:

Basically, there’s a Python script shipped with OS X that does the job:

"/System/Library/Automator/Combine PDF Pages.action/Contents/Resources/join.py" -o Output.pdf /SomePath/Input1.pdf /SomePath/Input2.pdf /SomePath/*.pdf

This is essentially it. To make it a bit easier, a soft link can be created:

cd /usr/local/bin
sudo ln -s "/System/Library/Automator/Combine PDF Pages.action/Contents/Resources/join.py" PDFconcat

The article doesn’t mention the symbolic link parameter (-s), and I was getting an “Operation not permitted” error without it. I found this parameter in the comments section; it worked this way.

So now the usage becomes much simpler:

./PDFconcat -o Week.pdf ./Days/*.pdf

It looks like the order of the parameters matters. For instance, the following doesn’t work:

./PDFconcat  ./Days/*.pdf -o week.pdf

It doesn’t show any errors, but it doesn’t produce the output either.
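The symlink mechanism itself is easy to verify with a stand-in script (everything below is made up for illustration; a temp directory plays the role of /usr/local/bin):

```shell
#!/bin/sh
# Verify the pattern: symlink a script with a long, space-containing path
# into a bin directory and call it by its short name.
set -e
rm -rf /tmp/pdfconcat-demo
mkdir -p "/tmp/pdfconcat-demo/bin" "/tmp/pdfconcat-demo/Combine PDF Pages.action/Resources"
printf '#!/bin/sh\necho "would merge: $@"\n' > "/tmp/pdfconcat-demo/Combine PDF Pages.action/Resources/join.py"
chmod +x "/tmp/pdfconcat-demo/Combine PDF Pages.action/Resources/join.py"
ln -s "/tmp/pdfconcat-demo/Combine PDF Pages.action/Resources/join.py" /tmp/pdfconcat-demo/bin/PDFconcat
/tmp/pdfconcat-demo/bin/PDFconcat -o Week.pdf a.pdf b.pdf
```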


devops, aws, route53

One of my favourite AWS services is Route53. Going through my hosted zones on Route53, I decided to take a closer look into routing policies.

Especially after I read Loggly’s blog post on using Route53 as a load balancer, I thought I should give Weighted Routing a shot.

If you assign an equal weight to all nodes, it works in a round-robin fashion and distributes the load equally across all nodes. But the nice thing is you can customize the weights based on your configuration.

Basically, the idea is to create duplicate record sets (e.g., two A records for the same name pointing to different IP addresses), each with a unique identifier and a weight.
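As a sketch, a weighted pair like that can be expressed as a change batch for the aws route53 change-resource-record-sets command (the domain, IPs and identifiers below are made up for illustration):

```json
{
  "Comment": "Two weighted A records acting as a simple load balancer",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "server-1",
        "Weight": 50,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.10" }]
      }
    },
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "server-2",
        "Weight": 50,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.20" }]
      }
    }
  ]
}
```

With equal weights the answers rotate evenly; skewing the weights (say 70/30) shifts traffic accordingly.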


dev

This is actually a very old feature, but I never bothered to check it out before: in Visual Studio you can select from up to 20 of the last-copied items on the clipboard. Instead of Ctrl + V, using Ctrl + Shift + V cycles through those items, starting from the most recent and going backwards.

Also, as SQL Server Management Studio is based on Visual Studio, it works in SSMS too!

It’s a handy feature and I will try to utilise it more often…


dev, security

I’ve been playing around with SSL certificates for years, but it never occurred to me to ask what the purpose of Extended Validation is. Primarily because they are so far out of my price range, I didn’t even consider them. Today I learned that one benefit of having an EV cert is having the company name displayed next to the green padlock.

I don’t think it’s worth the effort and a whole lot of extra money, but it sure looks cool compared to just a small padlock icon!


dev, database

A few weeks ago I needed to migrate data from one SQL Server instance on Azure to another. I looked around a bit but couldn’t find a satisfactory solution, so I had to export the data as INSERT scripts and run them on the target server. Yesterday a colleague told me a far better way to do this using Visual Studio 2015: SQL Server Object Explorer.

It’s a fantastic tool for such scenarios. All you have to do is create two connections, one for the source and one for the target.

Then you right-click on one of them and select Data Comparison, which pops up a comparison dialog.

The rest is just following the comparison wizard: select the source, select the destination and compare. It displays all the differences, and you can select which ones to apply to the destination and run. That’s all.

dev, security

I used wildcard certificates in the past but hadn’t come across SAN (Subject Alternative Name) certificates before. Apparently, they allow you to define multiple domains within a single certificate.

  • A wildcard certificate works for multiple (unlimited) subdomains of a single domain.
  • A SAN certificate works for multiple different domains. It has a limitation though: you can initially add up to 4 domain names, and if you exceed that, the next limit is 25.

If I had to choose, I’d still go with wildcards as they are cheaper, but SAN certificates seem to have their uses too:

Wildcard certs are great for protecting multiple subdomains on a single domain. In many cases, the wildcard cert makes more sense than a SAN because it allows for unlimited subdomains and you don’t need to define them at the time of purchase. You could provision a wildcard cert, and if at any time during the life of the certificate you decided to add another subdomain, that cert would just work, no reissue required.

Then there is something called a UCC (Unified Communications Certificate). This is a Microsoft thing, which means:

A Unified Communications Certificate (UCC) is an SSL certificate that secures multiple domain names and multiple host names within a domain name. A UCC lets you secure a primary domain name and up to 99 additional Subject Alternative Names (SANs) in a single certificate. UCCs are ideal for Microsoft® Exchange Server 2007, Exchange Server 2010, and Microsoft Live® Communications Server.

UCCs are compatible with shared hosting. However, the site seal and certificate "Issued To" information will only list the primary domain name. Please note that any secondary hosting accounts will be listed in the certificate as well, so if you do not want sites to appear 'connected' to each other, you should not use this type of certificate.


dev, devops

I have an unused PC at home which I’m planning to turn into a server for trivial stuff. The question in my mind was: should I first install OpenStack and then use Docker on it, or do I need OpenStack at all? And if not, how do I “host” containers? I found the answer nicely described in the article below:

Docker relies on a built-in feature of the Linux operating system named LXC (Linux containers).* LXC utilizes the built-in operating system features of process isolation for memory, and to a lesser degree, CPU and networking resources. Docker images do not require a complete boot of a new operating system, and as a result, provide a much lighter alternative for packaging and running applications on shared compute resources. In addition, it allows direct access to the device drivers which makes I/O operations faster than with a hypervisor approach. The latter makes it possible to use Docker directly on bare metal which, often times causes people to ask whether the use of a cloud such as OpenStack is really necessary if they’re already using Docker.

So I’ll just install a fresh Linux distro and try Docker on top of it straight away.


networking, dev

I’ve just started to use Let’s Encrypt on a Windows server. Going through tutorials, I was wondering how I would generate a wildcard certificate, since it’s free! To my surprise, I discovered that they don’t support wildcard certs at all, for the time being at least. This is from their FAQ:

Will Let’s Encrypt issue wildcard certificates?

We currently have no plans to do so, but it is a possibility in the future. Hopefully wildcards aren’t necessary for the vast majority of our potential subscribers because it should be easy to get and manage certificates for all subdomains.

But looking at the heated discussion here, there’s clearly a big demand for it.


networking

I was researching HTTP/2 yesterday (which I ended up blogging about here). While analysing the traffic to capture HTTP/2 packets, I noticed a bunch of packets with a protocol named QUIC. I had not heard about it before, so I looked it up. Apparently it’s an experimental protocol developed by Google.

The documentation describes it as “TCP+TLS+HTTP2 but implemented on top of UDP”.

One neat feature it supports is multiplexing. In TCP, when a packet is lost it needs to be retransmitted, and until then no streams on the connection can continue. But with QUIC, “healthy” streams can continue without being delayed by the streams with lost packets.

It looks like it’s currently only supported by Google’s servers and by Chrome on the client side.

There is a Chrome extension called HTTP/2 and SPDY indicator which also shows QUIC sessions.


dev

Recently I needed to export some data from SQL Server into JSON files. I found out that this is now a trivial task in SQL Server 2016, as it has built-in support for JSON.

The easiest way to get results is using the AUTO keyword with the FOR JSON clause:

SELECT * FROM Customers
FOR JSON AUTO