My personal views, thoughts and opinions.

Friday, April 13, 2018

Installing Windows on external USB hard-drive from Linux

Create VirtualBox machine with Windows:
  • Download Windows
  • Create the VM and install Windows
  • Install the Extension Pack
  • For Windows XP you might need to install a USB 3.0 driver

In the Windows VM:
  1. Download and install Win2USB
  2. Download the Windows installation image
  3. Connect the external HDD and enable it for the VirtualBox guest
  4. Start Win2USB and select installation to the USB HDD
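The VirtualBox side of step 3 can be sketched with VBoxManage. A minimal sketch, assuming the VM is named "win-installer" and the external HDD has the vendor/product IDs shown (check yours with `VBoxManage list usbhost`):

```shell
# Make sure USB (and USB 2.0 via the Extension Pack) is enabled for the VM
VBoxManage modifyvm "win-installer" --usb on --usbehci on

# Add a USB filter so the external HDD is captured by the guest when plugged in.
# Vendor and product IDs below are illustrative - substitute your drive's IDs.
VBoxManage usbfilter add 0 --target "win-installer" \
  --name "external-hdd" --vendorid 1058 --productid 25a2
```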

Wednesday, October 04, 2017

Node.js and MongoDB with Cloud Foundry & SAP CP

I recently created a step-by-step guide on how to use Node.js and MongoDB in SAP Cloud Platform. It focuses on Cloud Foundry and on Node.js development practices I've gathered while developing several applications.

The source of the sample app is available here:

Monday, February 20, 2017

Apple TimeCapsule from Linux Mint 18

To connect to Apple's TimeCapsule from Linux Mint 18.1 you need to add the following to the [global] section of /etc/samba/smb.conf:
# Enable to access Apple TimeCapsule
client use spnego = no
Then restart Samba with sudo service smbd restart and point Nemo to //timecapsule_address/Data
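The whole sequence as commands, assuming the TimeCapsule is reachable as timecapsule.local and the share is named Data (adjust both to your setup; the append only works if [global] is the last section of smb.conf, otherwise edit the file by hand):

```shell
# 1. Add the option to smb.conf
sudo sh -c 'printf "\n# Enable to access Apple TimeCapsule\nclient use spnego = no\n" >> /etc/samba/smb.conf'

# 2. Restart Samba so it picks up the new setting
sudo service smbd restart

# 3. Open the share in Nemo
nemo smb://timecapsule.local/Data
```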

Saturday, January 21, 2017

No USB devices in VirtualBox

I use several VMs with VirtualBox for testing and for running old but useful programs. I even had my Canon printer working in the guest OS.

After I updated to the latest Linux Mint last week, the printer was no longer visible.

I did some searching and came across the following thread that helped me solve the issue:

In short I had to:
  • install the matching Extension Pack to get USB support
  • add my user to the vboxusers group
  • log off and back in
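The three fixes as commands, assuming Linux Mint with the Ubuntu VirtualBox packages (the Extension Pack version must match the output of `VBoxManage --version`):

```shell
# 1. Install the Extension Pack (provides USB 2.0/3.0 passthrough)
sudo apt install virtualbox-ext-pack

# 2. Add the current user to the vboxusers group
sudo usermod -aG vboxusers "$USER"

# 3. Group changes apply on next login - log off and back in, then verify:
id -nG | grep -q vboxusers && echo "USB devices should be visible again"
```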

Wednesday, November 16, 2016

Abacus: Concourse pipelines

If you work with Cloud Foundry you have probably noticed that most of the development is powered by Concourse CI.

Abacus has three Concourse pipelines:

Test pipeline

This pipeline builds Abacus and runs the tests against PouchDB, CouchDB and MongoDB. The last step deploys Abacus on Cloud Foundry with in-memory PouchDB as a smoke test.

Future plans: automatic promotion of changes from the develop to the master branch, so we can ensure that master is always stable.
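For reference, a pipeline like this is registered with Concourse via the fly CLI. A sketch with illustrative target, pipeline and file names (not the real Abacus ones):

```shell
# Log in to the Concourse instance
fly -t abacus login -c https://concourse.example.com

# Upload (or update) the pipeline definition, with secrets in a vars file
fly -t abacus set-pipeline -p abacus-test \
  -c pipeline/test.yml -l pipeline/credentials.yml

# New pipelines start paused - unpause to run
fly -t abacus unpause-pipeline -p abacus-test
```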

Deploy pipeline

Deploys the Abacus large profile and:
  • creates and binds DB service to Abacus applications
  • maps a route to each Abacus collector and reporting instance

The pipeline uses route mapping to enable monitoring of specific Abacus instances through the CF router.

Abacus configuration is externalized in an abacus-config directory that should be provided to the pipeline. The pipeline supports templates to:
  • automatically fill the application manifest.yml files
  • extract sensitive information in one central place
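The service-binding and route-mapping steps above boil down to a handful of cf CLI calls. A rough sketch; service, application and domain names are illustrative, not the real pipeline values:

```shell
# Create a MongoDB service instance and bind it to an Abacus application
cf create-service mongodb default abacus-db
cf bind-service abacus-usage-collector abacus-db
cf restage abacus-usage-collector

# Map a dedicated route to one collector instance so it can be
# monitored individually through the CF router
cf map-route abacus-usage-collector cf.example.com --hostname abacus-collector-0
```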

Future plans: blue-green deployment.

Monitoring pipeline

Pings all Abacus applications with an OPTIONS request. Sets up a Grafana dashboard and reports the results to Riemann.
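A minimal version of that availability check, assuming an Abacus application is reachable at the (illustrative) URL below:

```shell
URL="https://abacus-usage-collector.cf.example.com"

# -X OPTIONS sends the OPTIONS request; -o /dev/null -w '%{http_code}'
# discards the body and prints only the HTTP status code
STATUS=$(curl -s -o /dev/null -w '%{http_code}' -X OPTIONS "$URL")
[ "$STATUS" = "200" ] && echo "up" || echo "down: $STATUS"
```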


Future plans: pull more metrics from Abacus.

All pipelines work with custom Docker images we build with the Dockerfiles here.

Friday, November 11, 2016

Abacus: MongoDB support

Abacus initially supported CouchDB. However, we wanted to use MongoDB because SAP had existing MongoDB installations and expertise.

We also wanted to see whether, in the long run, we could use Mongo's aggregation capabilities: pipeline and map-reduce operations.

That was the reason I started porting the existing dbclient module, which supported basic CouchDB operations like get, put, remove, allDocs and bulkDocs, to Mongo.

Since we already had a specification in the form of unit tests for the couchclient, I started implementing the client in a semi-TDD manner. It took me a week to understand all the details.

The main difficulty came from the fact that I wanted to keep the CouchDB-based behaviour in the Mongo port. I had to mimic the old behaviour in the new mongoclient module to reduce the impact on the existing Abacus code base. This brought to life two simulated features:

Once the unit tests ran fine I started the integration suite. As I expected, there were problems. The first one: we wanted to create a DB document with the eureka port number as "$". Well, Mongo does not allow field names starting with $ in documents. Fixed.

The integration tests led to more simulated features in the Mongo port:

For all of the new features I added unit tests to the existing specification of the dbclient module.

Now I had a working Mongo client for Abacus. I hurried to try it on a Cloud Foundry installation and to my surprise it didn't work.

The problem: I was using a MongoDB that supported only one DB and I didn't have the permissions to create new ones. I implemented custom partitioning that used collections instead of databases.

Everything was fine until we needed a bigger, highly-available instance of Mongo. Then we found out we didn't support replica sets, in particular in combination with collections. A new test and a new feature.

Time for integration in the project. Jean-Sebastien Delfino came up with the idea to provide a minimalistic "layer" that selects the Couch or Mongo client based on an environment variable. This played really well when he combined it with a set of scripts for selecting the DB in development.
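A minimal sketch of the idea (the real Abacus scripts, module names and variable names differ): pick the DB client module from an environment variable, defaulting to Couch.

```shell
# DB selects the backing store; defaults to couchdb when unset
DB=${DB:-couchdb}

if [ "$DB" = "mongodb" ]; then
  DBCLIENT=abacus-mongoclient
else
  DBCLIENT=abacus-couchclient
fi

echo "selected client: $DBCLIENT"
```

The rest of the code base only ever sees the selected client module, which is what keeps the Couch/Mongo split out of the application logic.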

If you want to use Abacus with MongoDB, check out our configuration page and the Concourse pipeline (my next blog).

Sunday, July 03, 2016

Abacus: Enabling contribution

In late 2015, together with my colleague Georgi Sabev, I started investigating how to generate application usage for Cloud Foundry applications. We needed metering and aggregation of that usage across the Cloud Foundry entities (application, space, organization).

We found the newly created CF-Abacus project to be a perfect fit for what we were looking for:
  • open source
  • part of the Cloud Foundry Foundation
  • running as a set of applications/micro-services
  • supporting arbitrary usage

Cloud Foundry applications

We soon found that although Abacus could handle a wide variety of measures, metrics and formulas, there was nothing in place that could get the application state from Cloud Foundry and translate it into usage. We had to create our own component to do the job.

We went on and started creating the cf-bridge, a component that should do three simple things:

Well, the idea sounded simple enough. However, for both Georgi and me this was the first clash with Node.js. And we both hated JavaScript. I always associated it with the endless struggle to get some code working equally well (or equally badly) on all browsers. Even on Internet Explorer 6.

We started poking around and copy-pasting stuff to get something working.

To our surprise the code in Abacus looked and read well (with the exception of the variable names). So we quickly borrowed some code snippets and bundled them to form the scaffold of the cf-bridge.

Now what? We needed to try it out. After some reading on how to do it, we finally managed to craft a set of commands. We started the app and got a lot of errors. First clash with JSLint, "The JavaScript Code Quality Tool". Since there was no easy way around it, we simply fixed our code to comply with the project settings.

Finally we had our app ... well, not really working, but at least starting. We had previously worked on extending Diego, and Diego is a TDD project, so the idea of having a test before continuing came quite naturally. The Mocha test framework was also pretty familiar to us: it looked and behaved like Ginkgo or RSpec.

Time-based metrics

Since this is not the end of the blog, you might expect we had some more troubles. The next obstacle we faced was the lack of a good way to measure usage for Cloud Foundry applications. We could sample the usage, but we did not like that idea much.

The next thing we knew, we were writing to the Abacus team on Slack. After several long threads in which Jean-Sebastien Delfino tried to explain to us how things work, he decided to rid us of our misery and proposed a special kind of metrics that could handle our use case: the "time-based metrics".

The Abacus team even created a sample plan to help us go forward and implement our bridge.

Contribution enablers

We managed to create a working resource provider for metering CF application memory - the cf-bridge. But it wouldn't have been that easy (or even possible) without a few things that enabled us to contribute to Abacus:
  • clean project code
Abacus is an open-source project. Having clean code is pretty important for contributors. They have to be able to understand the intention of a snippet without additional documentation, or at least without a detailed doc on the subject.
  • code quality tools
What kills your app teaches you a thing or two. We didn't have much experience with JS and wrote code that looked machine-translated from Java or Go. JSLint helped us get up to speed on JS syntax. Plus it raises the code conventions to a higher level (almost where Go is). Code quality tools are a must for a project that wants to enable as many contributors as possible.
  • ECMAScript 6
Abacus used some ECMAScript 6 features long before official support was available in Node.js. Most of them are just syntax sugar, but some made our life easier and our code cleaner. All these new and shiny features made us think that we might be wrong about JavaScript.
  • test tooling
Mocha is a great framework, but Abacus added a lot more. We found out that the chai and sinon modules were automatically prepared for us. What's more, we liked the code coverage reports generated on every test run, which helped us see what was left to be tested.
  • dev turnaround 
The dev process with Abacus is one of the easiest I've experienced. Partially because we used a dynamic language, partially because of Node.js and npm, but mostly because of the tooling around the project.
  • welcoming community
You may have the perfect tooling, but in the end you are working with people. And if these people are not welcoming, willing to help and encouraging contribution, you cannot do much. Even though it was obvious that we were not experienced in the matter, the Abacus committers provided tremendous help. Without them we would not have been able to create the code and donate it to the project.

Bottom line

Both the tooling and the community helped us do our job. Without them it would have been pretty hard to contribute. Not really a surprise, I guess.

But if you have only one of them, you'll need to balance somehow and stake a lot more on the other. For example, bad or missing tooling can be compensated for by a good community. But what about the other way around?

Well, sometimes you need to change your way of thinking as well. Then only a good community can help. As is the case with the Cloud Foundry Dojo program.
