My personal views, thoughts and opinions.

Wednesday, November 16, 2016

Abacus: Concourse pipelines

If you work with Cloud Foundry, you have probably noticed that most of the development is powered by Concourse CI.

Abacus has three Concourse pipelines:

Test pipeline

This pipeline builds Abacus and runs tests against Pouch, Couch and Mongo. The last step is to deploy Abacus on Cloud Foundry with in-memory PouchDB as a smoke test.

Future plans: automatic promotion of changes from develop to master branch so we can ensure that master is always stable.

Deploy pipeline

Deploys Abacus with the large profile and:
  • creates and binds DB service to Abacus applications
  • maps a route to each Abacus collector and reporting instance

The pipeline uses route mapping to enable monitoring of specific Abacus instances through the CF router.

Abacus configuration is externalized in an abacus-config directory that has to be provided to the pipeline. The pipeline supports templates to
  • automatically fill the application manifest.yml files
  • extract sensitive information in one central place

Future plans: blue-green deployment.

Monitoring pipeline

Pings all Abacus applications with an OPTIONS request, sets up a Grafana dashboard and reports the results to Riemann.


Future plans: pull more metrics from Abacus.


All pipelines work with custom Docker images we build with the Dockerfiles here.

Friday, November 11, 2016

Abacus: MongoDB support

Abacus initially supported CouchDB. However, we wanted to use MongoDB as well, because SAP had existing MongoDB installations and expertise.

We also wanted to see whether, in the long run, we could use Mongo's aggregation capabilities: the aggregation pipeline and map-reduce operations.

That was the reason I started porting the existing dbclient module, which supported basic CouchDB operations such as get, put, remove, allDocs and bulkDocs, to Mongo.

Since we already had a specification in the form of unit tests for the couchclient, I started implementing the client in a semi-TDD manner. It took me a week to understand all the details.

The main difficulties came from the fact that I wanted to keep the CouchDB-based behaviour in the Mongo port. I had to mimic the old behaviour in the new mongoclient module to reduce the impact on the existing Abacus code base, which meant simulating two CouchDB features.

Once the unit tests ran fine, I started the integration suite. As I expected, there were problems. The first one: we wanted to create a DB document storing the eureka port number under a "$" field. Well, Mongo does not allow field names starting with $ in documents. Fixed.

The integration tests led to more simulated features in the Mongo port.


For all of the new features I added unit tests to the existing specification of the dbclient module.

Now I had a working Mongo client for Abacus. I hurried to try it on a Cloud Foundry installation and, to my surprise, it didn't work.

The problem: I was using a MongoDB installation that supported only one DB, and I didn't have the permissions to create new ones. So I implemented custom partitioning that used collections instead of databases.

Everything was fine until we needed a bigger, highly-available Mongo instance. Then we found out we didn't support replica sets, in particular in combination with collections. A new test and a new feature.

Time for integration in the project. Jean-Sebastien Delfino came up with the idea to provide a minimalistic "layer" that selects the Couch or Mongo client based on an environment variable. This played really well when he combined it with a set of scripts to select the DB in development.

If you want to use Abacus with MongoDB check out our configuration page and the Concourse pipeline (my next blog).

Sunday, July 03, 2016

Abacus: Enabling contribution

In late 2015, together with my colleague Georgi Sabev, I started investigating how to generate application usage for Cloud Foundry applications. We needed metering and aggregation of that usage across the Cloud Foundry entities (application, space, organization).

We found the newly created CF-Abacus project to be a perfect fit to what we were looking for:
  • open source
  • part of the Cloud Foundry Foundation
  • running as a set of applications/micro-services
  • support for arbitrary usage


Cloud Foundry applications

We soon found that although Abacus could handle a wide variety of measures, metrics and formulas, there was nothing in place that could get the applications' state from Cloud Foundry and translate it into usage. We had to create our own component to do the job.

We went on and started creating the cf-bridge, a component that should do three simple things.

Well, the idea sounded simple enough. However, for both Georgi and me this was our first clash with Node.js. And we both hated JavaScript. I had always connected it with the endless struggle to get some code working equally well (or equally badly) on all browsers. And even on Internet Explorer 6.

We started poking around and copy-pasting stuff to get something working.

To our surprise the code in Abacus looked and read well (with the exception of the variable names). So we quickly borrowed some code snippets and bundled them to form the scaffold of the cf-bridge.

Now what? We needed to try it out. After some reading on how to do it, we finally managed to craft a set of commands. We started the app and got a lot of errors: our first clash with JSLint, "The JavaScript Code Quality Tool". Since there was no easy way around it, we simply fixed our code to comply with the project settings.

Finally we had our app ... well, not really working, but at least starting. We had previously worked on extending Diego, and Diego was a TDD project, so the idea of writing a test before continuing came quite naturally. The Mocha test framework was also pretty familiar to us: it looked and behaved like Ginkgo or RSpec.


Time-based metrics

Since this is not the end of the post, you might expect we had some more troubles. The next obstacle we faced was the lack of a good way to measure usage for Cloud Foundry applications. We could sample the usage, but we did not like this idea much.

The next thing we knew, we were writing to the Abacus team on Slack. After several long threads in which Jean-Sebastien Delfino tried to explain to us how things work, he decided to put us out of our misery and proposed a special kind of metric that could handle our use case: the "time-based metrics".

The Abacus team even created a sample plan to help us go forward and implement our bridge.


Contribution enablers

We managed to create a working resource provider for metering CF application memory: the cf-bridge. But it wouldn't have been that easy (or even possible) without a few things that enabled us to contribute to Abacus:
  • clean project code
Abacus is an open-source project. Having clean code is pretty important for contributors. They have to be able to understand the intention of a snippet without additional documentation. Or at least without a detailed doc on the subject.
  • code quality tools
What kills your app teaches you a thing or two. We didn't have much experience with JS and wrote code that looked machine-translated from Java or Go. JSLint helped us get up to speed with JS syntax. It also puts the code conventions on a higher level (almost where golang is). Code quality tools are a must for a project that wants to enable as many contributors as possible.
  • ECMAScript 6
Abacus used some ECMAScript 6 features long before official support was available in Node.js. Most of them are just syntactic sugar, but some made our life easier and our code cleaner. All these new and shiny features made us think that we might be wrong about JavaScript.
  • test tooling
Mocha is a great framework, but Abacus added lots more. We found out that the chai and sinon modules were automatically prepared for us. What's more, we liked the code coverage reports generated on every test run that helped us see what was left to be tested.
  • dev turnaround 
The dev process with Abacus is one of the easiest I've experienced. Partially because we used a dynamic language, partially because of Node.js and npm, but mostly because of the tooling around the project.
  • welcoming community
You may have the perfect tooling, but in the end you are working with people. And if these people are not welcoming, willing to help and encouraging your contribution, you cannot do much. Even though it was obvious that we were not experienced in the matter, the Abacus committers provided tremendous help. Without them we would not have been able to create the code and donate it to the project.
 
Bottom line
 
Both the tooling and the community helped us do our job. Without them it would have been pretty hard to contribute. Not really a surprise, I guess.

But if you lack one of them, you'll need to balance somehow and stake a lot more on the other. For example, bad or missing tooling can be compensated for by a good community. But what about vice versa?

Well, sometimes you need to change your way of thinking as well. Then only a good community can help, as is the case with the Cloud Foundry Dojo program.

Sunday, January 10, 2016

Unreadable PDF under Linux

I receive an email with a PDF document every week. While I was using Windows everything was OK, but now that I'm a happy Linux Mint user I found out that the content is unreadable.

I started my Windows VM, only to find out that the PDF was unreadable under Chrome in exactly the same manner as under Firefox on Linux.

Then I noticed the "Open with" option and decided to give Adobe Reader a try. It showed the content correctly. I had a look at the fonts used in the document and found "Verdana" among them.

Then it all fell into place. Microsoft fonts are licensed, so they are not included by default in Linux distros.

A search revealed that there is a special package called "MS Core Fonts Installer". A search in Mint's Software Manager revealed the exact name, and after two clicks, and my promise that I won't sell the precious Microsoft fonts for billions of dollars while everyone can download them for free, I got the files on my system.

A quick check and ... a 50/50 result. I could open the PDF with Document Viewer, but Chrome and Firefox still failed to show any meaningful data. Fair enough. At least I can now check the content with an additional click.

Saturday, January 09, 2016

Canon Pixma B200 error

My Canon Pixma MP550 printer ran out of ink. I bought new cartridges, and the first thing I did when I got home was try to change them.

I started the printer to check which cartridge was empty. I tried to get to the ink level menu, but the printer showed the B200 error, calling for a real technician and instructing me to unplug the power cord and leave it be.

I decided the error didn't matter, since I could change all the cartridges (they happened to run out of ink simultaneously), and opened the service cover. Needless to say, the printer simply showed B200 again, as if hesitant to bother with anything.

After a short search I came upon various videos on how to fix this. One of the videos contained a comment with a great link to Tom's Hardware blog.

Inside I found the following steps to work around the issue:
  1. Turn off the power.
  2. Open the print head bay (as though you were about to change inks).
  3. Turn on the power.
  4. Wait for the print carriage to start moving to the left and let it go past halfway.
  5. Before the print carriage reaches the left-hand side (but after going halfway across), shut the cover.
  6. Leave the printer turned on.
  7. Good to go.

Monday, September 07, 2015

xip.io and your home dd-wrt router

I installed Cloud Foundry on my old 8-core Linux box. Everything went fine up to the point where I tried to access the installation.

Targeting the API failed with "host not found". I tried to dig api.10.244.0.34.xip.io but this failed as well.

The DNS settings on my machine came from my home router, flashed with DD-WRT. I checked the DNS servers and found out that I was using my Internet provider's DNS servers. Blaming them, I added Google's 8.8.8.8 as a third address.

I restarted the router and the computer (just in case), but the problem persisted. Searching the net provided a clue that DD-WRT has a problem with external DNS providers. I went for a solution that would allow only xip.io to resolve host names, so I would keep good protection against DNS rebind attacks.

However, it turned out that adding rebind-domain-ok=xip.io to DD-WRT's DNSMasq config caused the DD-WRT DNS to stop functioning altogether.

In the end it turned out that DD-WRT for my router does not work correctly with a custom DNSMasq config, and a lot of people have reported the same.

So I ended up setting "No DNS Rebind" to Disable in the Services menu of DD-WRT.

Tuesday, April 28, 2015

Diego's Docker Registry in Cloud Foundry

Into Cloud Foundry

In the middle of 2014, together with Georgi Sabev, I started working on a new project inside SAP. We wanted to have a private Docker Registry in SAP HANA Cloud Platform (HCP). But priorities shifted, and at the end of 2014 we adopted Cloud Foundry to enable us to add some new features to HCP.

Being part of the Cloud Foundry Foundation, SAP wanted to contribute back and at the same time eat its own dog food. So it was decided that we would learn the Pivotal way by participating in the Cloud Foundry Dojo.

Diego

The Dojo was extremely helpful. It allowed us to meet great people at Pivotal and in particular the guys working on Diego.

If you want to know more about Diego, you can check Onsi's talk from last year's CF Summit. And if this is not enough, go see the design notes.

New Project

Being 10 hours away from where the action is (875 Howard), we started implementing a new feature in Diego, aiming to:
  • guarantee that we fetch the same layers when the user scales an application up
  • guarantee uptime (if Docker Hub goes down, we should still be able to start a new instance)
  • support private docker images (accessing them requires credentials)
Further goals included highly available backends and effective image management.

How?
  
We decided to simply cache the docker images in Docker Registry.
 
Why?

Because by caching the images we can ensure:
  • consistent and faster scale-up
  • uptime without Docker Hub
  • support for private docker images

Private Docker Images

The caching really helps with supporting private images, although this is not obvious.

Normally we would need credentials to access Docker Hub and pull the image on every scale-up request. This would mean storing credentials in a DB or requiring them on every user request. That is insecure and inconvenient for users, operators and developers.

But we already have the image cached, so can we simply pass the credentials to Docker Hub on staging and then throw them away? Sure: this way all subsequent scale-up requests will not need access to the Hub.

MVP0

The MVP consists of two parts. To try it locally you need to install CF, then Diego and finally the Docker Registry. You may follow the short guide we provide, which will redirect you to the Diego readme for the CF and Diego installation.

You can check the registry readme on how to push & cache your docker image. MVP0 requires you to opt in for caching of your docker image, because we need your feedback about the feature before we make it production-ready in Diego.

If you are lazy like me and just want to see it in action, watch this full-length (1:23) video.

The Docker Registry also provides a small test suite to check whether the registry works correctly.

CF Summit 2015

Georgi Sabev will present the Docker Registry at CF Summit 2015. If you want to know more, get in touch about a bug or feature, or buy him a drink, please join the session.
