Sunday, June 1, 2014

TypeError: unicode does not have the buffer interface


You will encounter this error if you installed the latest pip and setuptools (for example, if you followed the instructions at http://pip.readthedocs.org/en/latest/installing.html to install pip).

The error should look similar to the one in this gist: https://gist.github.com/saurabhsurana/db696248f5dddbfbd23d

The issue appeared when a new version of setuptools (setuptools 4.0) was released on June 1st.

The workaround is to revert to setuptools 3.6, after which things should work fine for you:

pip install -U "setuptools==3.6"
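A quick way to confirm which version you ended up with (assuming pip is on your PATH; the output shown here is just an example):

$ pip show setuptools | grep -i version
Version: 3.6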

Sunday, March 16, 2014

OpenStack TripleO

OpenStack TripleO (OOO, OpenStack On OpenStack) is a project aimed at simplifying OpenStack installation.
TripleO is a combination of multiple small services: it combines OpenStack Disk Image Builder (image building) and OpenStack Heat (the orchestration service) to deliver a simplified OpenStack installation process. The process involves a set of scripts and Mustache templates for the configuration files, which are baked in with Disk Image Builder, and a set of services that read data from the Heat metadata service and apply it to the Mustache templates while the instance is running.
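As a rough illustration of the template idea (the file path and keys here are made up and not taken from the actual TripleO elements), a Mustache-style template for a config file baked into an image might look like this, with the placeholders filled in from Heat metadata on the running instance:

# /etc/mysql/my.cnf.mustache -- hypothetical template baked into the image
[mysqld]
bind-address = {{mysql.bind_address}}
max_connections = {{mysql.max_connections}}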
TripleO uses a few keywords very frequently, which can be quite confusing at first.

Seed Cloud/Bootstrap/TripleO CD:

The Seed Cloud is the host onto which the TripleO CD or bootstrap disk is loaded; it allows you to set up and run the TripleO UnderCloud installation. Once the UnderCloud is up and running, the Seed Cloud can continue to exist or it can go away, as the sole purpose of the Seed Cloud is to help set up the UnderCloud layer.

UnderCloud:

This is the bootstrap host (a single standalone host in most cases) that runs all the major OpenStack services (Nova, Glance, Keystone, Heat, etc.). It can be considered an OpenStack installation in itself, but instead of creating VMs for you, it installs OpenStack on your bare-metal servers. In simple terms, it treats your bare-metal machines like any other Nova resource and loads them with pre-built images of the specific services you want on each host. UnderCloud hosts/services are used by the admin, directly or indirectly (with the help of some scripts), to stand up the actual OpenStack cloud. End users of your intended OpenStack service never have to deal with the UnderCloud and may not even know about its existence, although it continues to run to help you manage your OpenStack installation (say you want to re-image an existing host or add a new host to the setup).

OverCloud:

This is the real OpenStack installation that is set up by the UnderCloud. Once the UnderCloud has set up all the bare-metal servers with the necessary OpenStack services, you have a real OpenStack cloud, which is referred to as the OverCloud. This is the OpenStack cloud service your users will use to create OpenStack/Nova resources like instances, networks, and volumes.

InCloud:

This is the set of resources that end users create using the OverCloud OpenStack services. These are actual virtual resources, like Nova compute instances or Cinder volumes.


Sunday, October 20, 2013

SaltStack Multiple Salt Master Setup

SaltStack supports configuring salt minions to talk to multiple salt masters. This allows us to have an HA salt master setup, which means that if one of the salt masters dies, the minions are still reachable from the second salt master. This is very helpful if you are running your SaltStack setup in the cloud.

Configuring Salt Minion:

To configure a minion to talk to multiple masters, the minion's configuration (/etc/salt/minion) should look something like this:
master:
  - <IP Of Salt Master Host-1>
  - <IP Of Salt Master Host-2>

With this in place, when the salt minion service starts up, the minion establishes a connection with both masters (it performs the key exchange with each master separately). For things to work correctly, both masters need to accept the minion's key; if for any reason one of the masters rejects the key, the minion will not be able to talk to either master. A small example of accepting the key is shown below.
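For example, after the minion connects, you would accept its key on each master separately (the minion id 'web01' here is just a placeholder):

# run on salt master 1 and again on salt master 2
$ salt-key -L            # list pending and accepted keys
$ salt-key -a web01      # accept the key for minion 'web01'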


Configuring Salt Master:


Both salt masters should of course look identical in terms of configuration and the data (states/pillar) available on them. In its current form, however, SaltStack doesn't provide any built-in functionality for keeping these salt masters in sync, so you will have to create your own setup for that. Your best bets right now are some combination of rsync and using SaltStack itself to manage the salt masters; refer to this post.
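A minimal sketch of the rsync approach, assuming the default /srv/salt and /srv/pillar locations and SSH access between the masters (the hostname below is a placeholder):

# run periodically (e.g. from cron) on the primary master
$ rsync -az --delete /srv/salt/   salt-master-2:/srv/salt/
$ rsync -az --delete /srv/pillar/ salt-master-2:/srv/pillar/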

Monday, October 7, 2013

Managing salt master as minion


Although SaltStack provides a simplified approach to infrastructure management, one of the most critical pieces of the setup, the salt master itself, also needs to be managed. One of the challenges of maintaining a salt master is being able to reproduce its state in case of failure, or to keep multiple salt masters in identical configuration over a period of time.
A simple trick for solving this problem is to install a minion on the master itself and manage that salt minion just like any other node in the infrastructure. This not only helps us maintain the same state on multiple salt masters but also helps us reproduce the state of the machine if required.
The only caveat I see in managing the master as any other minion is to avoid managing the salt-master service itself, and if possible to avoid managing the salt master configuration file. Apart from these two things, I think we should be able to manage anything else on the salt master with SaltStack. A minimal sketch of the setup follows.
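A rough sketch, assuming a Debian/Ubuntu master whose minion id ends up as 'saltmaster' (adjust the package manager and id for your environment), and that the default minion config has no master entry yet:

# on the salt master itself
$ apt-get install salt-minion
$ echo "master: 127.0.0.1" >> /etc/salt/minion
$ service salt-minion restart
# accept the new minion's key, then target it like any other node
$ salt-key -a saltmaster
$ salt 'saltmaster' state.highstate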

Sunday, September 29, 2013

SaltStack Introduction

SaltStack is an awesome tool that can be used for configuration management, remote execution, infrastructure automation, and much more. Moreover, it is open source, very powerful, easily extensible, and used by large organizations for managing huge infrastructure setups; what else do you want? Simply put, it is just an awesome tool and its powers go beyond your expectations. The thing I like best about SaltStack is that, if configured correctly, you never have to log in to multiple machines; you always log in to the salt master, and from there you can pretty much control everything in your whole environment.
A SaltStack setup usually has one master and multiple minions. The salt master has states and pillar data configured for every node in the infrastructure. Every node that needs to be managed by salt has a salt minion installed on it and is configured to talk to the salt master. When a minion starts talking to the master, the minion presents its public RSA key to the master. If the master accepts the minion's key, it then starts identifying that minion by a specific id.

State:

A state is configuration in YAML format that defines what packages should be installed, what configuration files should be copied, what services should be enabled/started, and so on. SaltStack has built-in states for general-purpose use, but for specific cases you can extend it and add your own states. In the state files that we define on the salt master, we simply configure these built-in or custom states to suit our needs. A small example follows.
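For example, a small state that installs nginx, drops a config file, and keeps the service running might look like this (the file paths and source location are illustrative, not from any particular setup):

# /srv/salt/nginx/init.sls
nginx:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: nginx
      - file: /etc/nginx/nginx.conf

/etc/nginx/nginx.conf:
  file.managed:
    - source: salt://nginx/files/nginx.conf
    - require:
      - pkg: nginx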

Pillar:

Salt pillar is a set of data that can be used to configure states more effectively. Pillar holds data that might change over time: things like IP addresses of remote machines, credentials, and package versions can be configured as pillar data. If you plan on using the same states in multiple environments, you can use pillar data to configure things per environment, as in the sketch below.
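A tiny sketch of pillar data (the keys, values, and file path here are made up, and the file would still need to be assigned to minions in the pillar top.sls):

# /srv/pillar/app.sls
app:
  db_host: 10.0.0.5
  version: 1.2.3

A state can then reference these values through Jinja, for example with {{ pillar['app']['version'] }} as the version argument of a pkg.installed state or inside a managed file template.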

Modules:

SaltStack modules are sets of commands that you can run on a minion to control it remotely. Modules are powerful and allow you to control every node in your infrastructure from the master. You can use the large set of built-in salt modules, or write your own.
Salt has much more than just states and pillar. It has grains, which can be used to get more information about every minion and to configure your states/pillar accordingly. It has returners, which allow you to capture the output of every salt command. A few typical invocations are shown below.
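For example, run from the master (the target '*' matches all minions; these are standard built-in modules):

$ salt '*' test.ping                    # check which minions respond
$ salt '*' pkg.install nginx            # run the pkg module remotely
$ salt '*' cmd.run 'uptime'             # run an arbitrary shell command
$ salt '*' grains.item os osrelease     # query grains on every minion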

Sunday, June 16, 2013

OpenStack: Authentication Model

OpenStack has multiple services that work together to provide an end-to-end cloud computing experience.
Keystone is an OpenStack project that provides Identity, Token, Catalog and Policy services for all the projects/services in OpenStack.
The overall authentication process works like this:
  1. In order to use any of the OpenStack services, the user first needs to get an auth token from the Keystone service. The user provides credentials (username/password) in the POST body to the Keystone token service. After acquiring the token, the user can keep using the same token until the end of the specified token validity period. (Most of the OpenStack CLIs do not take advantage of this; instead of storing the token, they go back to Keystone and get a new token for every CLI command. If you are writing a script you can optimize this, as the standard token validity is at least a few hours.) If you want to see the token, you can use the 'nova credentials' CLI call to get a token and check its validity. (A curl sketch of this flow is shown after this list.)
  2. Once the user gets the token from Keystone, this same token is used in all subsequent calls to any of the OpenStack services. The token goes in the request header “X-Auth-Token”.
  3. The service's API layer then extracts this token, validates it with Keystone, and ensures that the token is not expired and was issued to the tenant that made the API call.
  4. In many cases, when one OpenStack service needs to talk to another OpenStack service to complete the user's action (like Nova talking to Cinder while creating an instance with a volume), it will use the same token that it extracted from the user's request header and pass that header along in the request to the next service.
  5. The next service will again validate this token with Keystone.
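As a rough sketch of steps 1 and 2 with curl (the endpoint addresses, tenant, and credentials below are placeholders), the token request and a subsequent Nova call look something like this:

# step 1: get a token from Keystone (v2.0 API)
$ curl -s -X POST http://keystone.example.com:5000/v2.0/tokens \
    -H "Content-Type: application/json" \
    -d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "demo", "password": "secret"}}}'

# step 2: use the returned token id against another service, e.g. Nova
$ curl -s http://nova.example.com:8774/v2/<tenant_id>/servers \
    -H "X-Auth-Token: <token_id>"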

Monday, May 13, 2013

OpenStack Nova Instance Security Groups

Once you create an instance with OpenStack Nova, you need a way to access that instance.
A security group is sort of a local firewall for your Nova instance (behind the scenes it is a set of iptables rules on the compute host). It lets you configure access to your instance through the REST API, where you can define inbound access
  • to an individual port or a range of ports,
  • for a particular network protocol (TCP, UDP, ICMP, etc.),
  • from a network CIDR (a single IP address or a range of IP addresses).
For the Ubuntu instance we booted earlier, we will enable SSH access with the help of the security group attached to the instance. The security group associated with the instance can be identified with the 'nova show' command, which lists all the attributes of the Nova instance.
Once you have a security group associated with the instance, there are security group rule commands to help you manage the group. To enable SSH access, we have to allow access to port 22 over the TCP protocol.
$ nova secgroup-add-rule dev tcp 22 22 10.20.4.29/32
+-------------+-----------+---------+---------------+--------------+
| IP Protocol | From Port | To Port | IP Range      | Source Group |
+-------------+-----------+---------+---------------+--------------+
| tcp         | 22        | 22      | 10.20.4.29/32 |              |
+-------------+-----------+---------+---------------+--------------+

This enables access to port 22 from the IP address 10.20.4.29. Instead of a single IP, you can specify a range with the appropriate CIDR to allow access from a specific network. To review what a group currently allows, see the listing below.
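The rules in a group can be listed like this (the output here is illustrative):

$ nova secgroup-list-rules dev
+-------------+-----------+---------+---------------+--------------+
| IP Protocol | From Port | To Port | IP Range      | Source Group |
+-------------+-----------+---------+---------------+--------------+
| tcp         | 22        | 22      | 10.20.4.29/32 |              |
+-------------+-----------+---------+---------------+--------------+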

Security groups are a very good way of controlling access to your instances running in the OpenStack cloud, and you should lock these rules down to only what is strictly required: for example, open access only from your own network and avoid using the CIDR 0.0.0.0/0, which allows access from anywhere.

For production services, it is always advisable to also consider using iptables on your Nova instances themselves. Although security groups are implemented as iptables rules, they are applied on the compute host and not directly on your instance.

Sunday, April 28, 2013

Creating instance on OpenStack

After you get your Nova/OpenStack account, the first thing you will want to do is create/boot a cloud instance in that account.
Here are the steps that will help you create your first OpenStack/Nova instance.

Selecting the image to boot your instance from:

To create a Nova instance we need to specify the OS image that we want to boot the instance from. Usually there are some (public) images available on Nova; 'nova image-list' lists the available images, and from this list you can pick the image of your choice:
$ nova image-list
+--------+--------------------------------------------------+--------+---------+
| ID     | Name                                             | Status | Server  |
+--------+--------------------------------------------------+--------+---------+
| 54021  | CentOS 5.8 Server                                | ACTIVE |         |
| 78265  | Debian Squeeze 6.0.3 Server 64-bit               | ACTIVE |         |
| 209316 | Fedora 18 Server 64-bit                          | ACTIVE |         |
| 80812  | Ubuntu Precise 12.04 LTS Server 64-bit           | ACTIVE |         |
| 190306 | Ubuntu Quantal 12.10 Server 64-bit               | ACTIVE |         |
| 367479 | Ubuntu Raring 13.04 Server 64-bit                | ACTIVE |         |
+--------+--------------------------------------------------+--------+---------+ 

Selecting the key_name:

To be able to SSH into the instance after it is created, we should specify the key_name with which the instance should be booted. 'nova keypair-list' gives the list of existing keys available on Nova; while choosing a key name from the list, make sure you have the corresponding private key (if you don't have a keypair yet, see the example after the listing):
$ nova keypair-list
+-------------------+------------------------------------------------------------+
| Name              | Fingerprint                                                |
+-------------------+------------------------------------------------------------+
| dev               | 75:56:d7:06:49:33:53:8a:bb:9e:03:34:6c:56:34:12            |
+-------------------+------------------------------------------------------------+
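If you need a new keypair, you can create one and save the private key locally (the name 'dev' and the file path are just examples):

$ nova keypair-add dev > ~/.ssh/dev.pem
$ chmod 600 ~/.ssh/dev.pem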

Selecting Security Group:

When the instance is running, there should be a security group associated with it that defines access to the instance. A Nova security group is a set of rules that defines port access from a set of IP ranges. 'nova secgroup-list' shows the list of security groups available on Nova.
$ nova secgroup-list
+----------+-------------+
| Name     | Description |
+----------+-------------+
| default  | default     |
| dev      | dev         |
+----------+-------------+

Selecting Instance Flavor:

While starting a new instance we have to specify the flavor, which defines the resources (CPU, memory, disk, etc.) for the instance being started. Choose your flavor wisely; this is the setting that determines the cost of your instance.
$ nova flavor-list
+-----+--------------------+--------------+------+-------------+------+---------+
| ID  | Name               | Memory_MB    | Disk | Ephemeral   | Swap | VCPUs   | 
+-----+--------------------+--------------+------+-------------+------+---------+
| 100 | standard.xsmall    | 1024         | 30   | N/A         | 0    | 1       |  
| 101 | standard.small     | 2048         | 60   | N/A         | 0    | 2       |  
| 102 | standard.medium    | 4096         | 120  | N/A         | 0    | 2       |  
| 103 | standard.large     | 8192         | 240  | N/A         | 0    | 4       | 
| 104 | standard.xlarge    | 16384        | 480  | N/A         | 0    | 4       | 
| 105 | standard.2xlarge   | 32768        | 960  | N/A         | 0    | 8       |
+-----+--------------------+--------------+------+-------------+------+---------+
Now that we have identified the minimum items we need, let's boot an instance. The 'nova boot' command takes these parameters and starts a Nova instance with those settings.
$ nova boot --flavor 101 --image 75845  --security_group dev  --key_name dev Dev
+------------------+------------------------------------------------------------+
| Property         | Value                                                      |
+------------------+------------------------------------------------------------+
| image            | Ubuntu Precise 12.04 LTS Server 64-bit                     |
| flavor           | standard.small                                             |
| id               | 1923229                                                    |
| security_groups  | [{u'name': u'dev',  u'links':                              |
|                  | [{u'href': u'http://192.168.1.1/v1.1/1234/os-security-groups/314739',|   
|                  |   u'rel': u'bookmark'}], u'id': 314739}]                   |
| user_id          | 4567                                                       |
| uuid             | c5de77e2-5e0a-4cf1-827e-dc92e5af4cb9                       |  
| accessIPv4       |                                                            |
| accessIPv6       |                                                            |
| metadata         | {}                                                         |
| status           | BUILD(scheduling)                                          |
| updated          | 2013-07-20T18:39:26Z                                       |
| hostId           |                                                            |
| key_name         | dev                                                        |
| name             | Dev                                                        |
| created          | 2013-07-20T18:39:26Z                                       |
| tenant_id        | 1234                                                       |
| adminPass        | bYUcg4vSArNqfEZT                                           |
| config_drive     |                                                            |
+------------------+-------------------------------------------------------------+
When we run 'nova list' we can see the instance status and other details like the IP address.
$ nova list
+---------+-----------+----------+--------------------------------------------+
| ID      | Name      | Status   | Networks                                   |
+---------+-----------+----------+--------------------------------------------+
| 1923229 | Dev       | ACTIVE   | private=10.4.45.149, 15.185.126.57         |
+---------+-----------+----------+--------------------------------------------+
'nova list' shows the instance and other details like status and networks. The Networks column lists the IP addresses with which you can reach the instance. There are usually two IP addresses in the 'nova list' output: the first one is the private IP (accessible only within the network) and the second one is the public IP, accessible over the internet. If you have a floating IP assigned to the instance, that will be the third IP in the list. When the instance status is ACTIVE we can try logging in to the instance over SSH. If you are in the same network as the instance, you might be able to use the private IP; if you are reaching it over the internet, use the public IP.
$ ssh -i <Path of SSH Private Key> ubuntu@15.185.126.57

Saturday, April 20, 2013

Using Nova CLI

To install the nova client, we need pip installed on the machine (for installing pip, see http://pip.readthedocs.org/en/latest/installing.html).
$ pip install python-novaclient
Using Environment Variables:

After the nova client is installed, export the following environment variables to point the CLI to the correct endpoint with the required credentials:

export OS_PASSWORD=demo_password
export OS_AUTH_URL=http://10.9.128.184:5000/v2.0
export OS_USERNAME=demo
export OS_TENANT_NAME=demo
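It can be convenient to keep these in a small rc file and source it in each new shell (the file name below is just an example):

$ cat > ~/novarc <<'EOF'
export OS_USERNAME=demo
export OS_PASSWORD=demo_password
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://10.9.128.184:5000/v2.0
EOF
$ source ~/novarc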
If your credentials and URL are correct, then at this stage you should at least be able to get a valid token back:

$ nova credentials
+------------------+----------------------------------------------------------+
| User Credentials | Value                                                    |
+------------------+----------------------------------------------------------+
| id               | 4330bcb01bbb45c8af382b72a1f46fff                         |
| name             | admin                                                    |
| roles            | [{"name": "demo"}, {"name": "member"}, {"name": "demo"}] |
| roles_links      | []                                                       |
| username         | admin                                                    |
+------------------+----------------------------------------------------------+
+-----------+------------------------------------------------------------------+
| Token     | Value                                                            |
+-----------+------------------------------------------------------------------+
| expires   | 2013-04-21T05:05:55Z                                             |
| id        | MIIQQwYJKoZIhvcNAQcCoIIQNDCCEDACAQExCTAHBgUrDgMCGjCCDpkGCSqGSIb3 |
|           | ............                                                     |
|           | WqY8IykYMUIn8zLkOUP0U68noC8GN3XqnBnbWJdKc31On3aI3NnrGNI          |
| issued_at | 2013-04-21T04:05:55.189632                                       |
| tenant    | {"enabled": true, "description": null, "name": "demo", "id":     |
|           | "31afefb2955b4575aea1ccb4729380ea"}                              |
+-----------+------------------------------------------------------------------+
From the above output you get details about your account: the user ID, roles, tenant ID, and authentication token (which will be used when talking to the other OpenStack services like nova-api, cinder, etc.).


Monday, January 30, 2012

Amazon RDS

 
Amazon RDS allows us to quickly create a relational database instance and flexibly scale the associated compute resources and storage capacity to meet our application demand. Amazon RDS manages the database instance on our behalf by performing backups, handling failover, and maintaining the database software. By using Amazon RDS we offload DB management tasks (resizing, replication, patch management) to Amazon, which maintains the network infrastructure and datacenters, leveraging the same industry-leading practices and procedures that keep its web properties running optimally.

Security:
By default all ports to Amazon RDS are blocked, and we can authorize access to the RDS port from a specific list of IP addresses. RDS also supports SSL connections to encrypt the traffic between the database server and the client. Moreover, RDS can be hosted inside a VPC. One slightly annoying aspect of RDS is that at creation time it asks for a master username and password, but that account is not really a root account for the database and has somewhat stricter permissions than the default root on a locally installed database.

Reliability and Backup:
Amazon's Multi-AZ deployment model enhances database availability while protecting our latest database updates against unplanned outages. When we create or modify a DB instance to run as a Multi-AZ deployment, Amazon RDS automatically provisions and manages a “standby” replica in a different Availability Zone (independent infrastructure in a physically separate location, but in the same region). Database updates are made synchronously on the primary and standby resources to prevent replication lag. In the event of planned database maintenance, a DB instance failure, or an Availability Zone failure, Amazon RDS automatically fails over to the up-to-date standby so that database operations can resume quickly without administrative intervention.

What makes it tempting:
  • Automatic backups: you can configure the backup retention policy in terms of days.
  • Choice of backup/maintenance window: you can select the backup and maintenance windows for your DB instance, so that those operations are performed only within those windows and the uptime of your service is protected.
This information can be updated without restarting the Amazon RDS DB instance, giving us 24/7 availability and seamless control of database access. Amazon RDS also generates an SSL certificate for each DB instance, allowing us to encrypt DB instance connections for enhanced security.
At deletion time, the DB instance is marked for deletion, and once the instance no longer shows the 'deleting' status, it has been removed. At this point the instance is no longer accessible and, unless a final snapshot copy was requested, it cannot be restored and will not be listed by any of the tools or APIs.
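As a rough sketch using the AWS CLI (not part of the original workflow described here; the identifier, password, class, and windows below are made up), creating a Multi-AZ MySQL instance with a backup retention policy and explicit backup/maintenance windows might look like this:

$ aws rds create-db-instance \
    --db-instance-identifier mydb \
    --engine mysql \
    --db-instance-class db.m1.small \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password 'change-me' \
    --multi-az \
    --backup-retention-period 7 \
    --preferred-backup-window 03:00-04:00 \
    --preferred-maintenance-window sun:05:00-sun:06:00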

In our setup, Amazon RDS serves traffic originating from an Amazon VPC (Virtual Private Cloud). Amazon VPC allows us to provision a private, isolated section of the Amazon Web Services (AWS) cloud where we can launch AWS resources in a virtually defined network. With Amazon VPC, we can define a virtual network topology that closely resembles a traditional network that anyone might operate in their own datacenter. By doing this, we gain complete control over our virtual networking environment, including selection of our own IP address range, creation of subnets, and configuration of route tables and network gateways.

For example, we have a public-facing subnet for the Expressway Service Gateway (ESG), which serves legitimate traffic from the Internet with a mutual authentication mechanism, and we place our backend systems, such as the tenant-manager and the tenants hosted on bootstrap agents, on a private-facing subnet with no Internet access. We leverage multiple layers of security, including security groups and network access control lists, to help control access to the Amazon EC2 instances in each subnet. Any traffic originating from this point (either via the tenant-manager or the tenants) is routed to RDS via the Amazon VPC internet gateway over an HTTPS channel.

Using Amazon CloudWatch we can monitor activity on Amazon RDS. Amazon CloudWatch provides monitoring for AWS cloud resources and the applications we run on AWS. A system administrator can use it to collect and track metrics, gain insight, and react immediately to keep applications and businesses running smoothly, and can also monitor the metrics generated by RDS. With Amazon CloudWatch, we gain system-wide visibility into resource utilization, application performance, and the operational health of RDS.
Links for reference:
http://aws.amazon.com/rds/
http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
http://d36cz9buwru1tt.cloudfront.net/AWS_Running_Databases_in_the_Cloud.pdf
http://aws.amazon.com/rds/#features