- Boris Renski, How to Monetize the OpenStack Wave
- Boris Renski, The New Open Source Superpower
- Boris Renski, Some Brutally Honest Thoughts on Citrix’s Defection
- Boris Renski, I hear the Essex Train a-coming
- Boris Renski, Long Live Enterprise Clouds
- Oleg, Under the hood of Swift. The Ring
- Artem Andreev, Introducing OpenStackAgent for Xen-based Clouds. What?
- Oleg, Diablo RPM repository
- Boris Renski, Meet & Drink: OpenStack in Production – Event Highlights
- Boris Renski, Converging OpenStack with Nexenta
- Boris Renski, OpenStack Meet & Drink: Toast to Diablo – Event Highlights
- Yury Taraday, What is this Keystone anyway?
- Alexander Gordeev, Cloudpipe Image Creation Automation
- Boris Renski, Cloud Accelerates Open Source Adoption
- Boris Renski, Tracing the IT Evolution from the Big Bang to the Big Crunch
- Boris Renski, Our Contribution to the Vegas Economy
- Yury Taraday, LDAP identity store for OpenStack Keystone
- Roman Bogorodskiy, vCider Virtual Switch Overview
- Boris Renski, Bay Area OpenStack Meet & Drink Highlights
- Oleg, Clustered LVM on DRBD resource in Fedora Linux
- Alexander Sakhnov, OpenStack Nova: basic disaster recovery
- Yury Taraday, OpenStack Nova and Dashboard authorization using existing LDAP
- Oleg, Shared storage for OpenStack based on DRBD
- Max Lvov, OpenStack Deployment on Fedora using Kickstart
- Boris Renski, Make your bet on open source infrastructure computing
Thursday, May 17, 2012
Our blog has moved!
Friday, May 4, 2012
How to Monetize the OpenStack Wave
Thursday, April 12, 2012
The New Open Source Superpower
Today is yet another important day in the history of OpenStack. The initial list of founding organizations for the independent OpenStack foundation has been announced and we, at Mirantis, are proud to be on that list.
While there is a lot to talk about regarding what this means for the infrastructure cloud market, I’d like to focus on what it illustrates about the sheer momentum of the OpenStack beast. The non-profit legal entity that will house OpenStack has not yet been formed, but 18 organizations have already pledged significant financial (and not only financial) support to the foundation. The current financing model calls for a $500K/year commitment from a Platinum sponsor and $50K-$200K/year from a Gold sponsor. Judging by the current composition of the supporting organizations, it is clear that the new foundation will launch with an initial budget north of $5M.
So how does this measure up against the rest of the FLOSS ecosystem? Well, there is a reason why OpenStack has been repeatedly tagged as the Linux of the cloud. With a $5M annual budget, the newly formed OpenStack foundation takes the second spot in the entire FLOSS world. And it is second only to… you guessed it… the Linux Foundation itself. According to the Form 990 filed by the Linux Foundation in 2010, its operating revenues were $9.6M. Yes, the Linux Foundation budget is still double that of OpenStack’s… but… come on… Linux is close to 20% of the server market. It also happens to power the majority of all mobile devices. OpenStack = Linux was a vision… judging by these numbers, this vision may soon be realized.
Another interesting thing these numbers reveal is why OpenStack (unlike CloudStack) has opted to create its own foundation, rather than surrendering everything to the governance of the Apache Foundation. With the Apache Foundation’s budget under $1M, OpenStack eats it for breakfast.
Now, many of you will argue that none of this matters. The Apache Foundation houses many great projects that are far more mature and popular than OpenStack… true. But can you tell me how many of these are truly vendor agnostic? And I am not talking about developer tools like Ant, Maven, Beehive, etc. All Apache projects fall into two categories: they are either developer tools or vendor-centric enterprise products. Tomcat has VMware, Hadoop has Cloudera, and Cloud.com will now have Citrix =).
In my opinion, there is a reason for this, and it is somewhat tied to foundation budgets. Open source is heavily driven by marketing. The number one open source company, Red Hat, spends 2-3x more on marketing relative to its revenue than any of its closed-source competitors. Ultimately, it is the marketing spend on an open source project that heavily affects its vendor independence status. If the entire spend comes from a single pocket, a single vendor dominates the product.
Unlike most Apache open source projects, OpenStack (while still under Rackspace) was backed by a significant marketing and PR budget. Consequently, when foundation plans were being discussed, it was the desire to continue this centralized marketing effort that precluded OpenStack from considering the Apache Foundation as its home. A significant chunk of the $5M raised will be spent by the foundation to promote and protect the OpenStack brand and the projects that the foundation will house. In a sense, this implies that for anyone to derail the vendor-independent status of OpenStack, one will need a marketing budget comparable to the $5M the foundation has raised… I say that is a decent barrier to start with.
Thursday, April 5, 2012
Some Brutally Honest Thoughts on Citrix’s Defection
When I first heard the announcement about Cloud.com being spun off into the Apache foundation, my initial reaction was to interpret the event as a hostile move by one of the OpenStack community insiders. Citrix is one of the founding members of OpenStack, with representation on the project policy board; the company has been quite active evangelizing the community through various events and code contributions. So why, all of a sudden, a move that may appear to undermine the OpenStack momentum?
Let’s take a look at the history. When Citrix bought Cloud.com for more than $200 million in July 2011, insider information suggested the company had revenue of only a few million. While high valuations were not uncommon in the cloud space, a 40x revenue multiple is quite unusual. Why did Citrix do it? The only answer that comes to mind is that it wanted to quickly gain credibility in the cloud market.
I believe that corporate politics and relationships also played a role in this deal. Cloud.com was backed by Redpoint Ventures, which had an existing track record of selling its portfolio companies to Citrix. But, more importantly, Cloud.com founder and CEO Sheng Liang was also the founder and CTO of Teros Networks, a web security company that was acquired by the very same Citrix just a few years before Cloud.com was founded. In fact, I am pretty sure that, in some sense, Cloud.com was Citrix’s skunk-works project; acquisition by Citrix was a key part of the Cloud.com business plan. While there is nothing wrong with the approach, and I can only compliment the strategy, the early connection between Citrix and Cloud.com was key to its successful exit and the events that followed.
Just one year before the acquisition of Cloud.com, OpenStack was announced at OSCON, and nobody knew what to think of it. It took the open source community by storm, and it soon became evident to all those competing for open cloud dominance that simply ignoring the OpenStack phenomenon was not an option. “Open cloud strategy” soon became synonymous with “OpenStack strategy”. Citrix, itself a founding member of OpenStack, was in a bit of a tight spot. One choice was to abandon its Cloud.com project; given the OpenStack momentum at the time, this would inevitably have translated into the swift death of Cloud.com and $17 million in losses for the VCs backing it. Alternatively, Citrix could go all in: acquire the Cloud.com community to boost its credibility in the open source cloud space and take a stab at creating the dominant distribution of OpenStack, ultimately becoming to OpenStack what Red Hat has become to Linux. In the end, the scales tipped towards the latter option. In May 2011, Citrix announced its distribution of OpenStack, Project Olympus. Two months thereafter, the Cloud.com acquisition was announced.
However, when the dust settled, it became evident that Citrix’s involvement with Cloud.com and OpenStack (Project Olympus), instead of being complementary as Citrix had anticipated, was perceived as strange and surprising. CloudStack is Java-based, whereas OpenStack is all Python. On the compute side, CloudStack focused on Xen, whereas the dominant hypervisor for OpenStack so far has been KVM. CloudStack was licensed under the GPL, and OpenStack under Apache 2.0. Ultimately, Citrix’s Cloud.com acquisition sent confusing messages to both communities and to Citrix’s customer base. A few months after the acquisition, the Cloud.com community had little momentum left. At the same time, the OpenStack community remained wary of Citrix due to its involvement with CloudStack. Consequently, not much happened with Project Olympus between its announcement over a year ago and its official abandonment with the latest announcement.
Today, Citrix announced that Cloud.com will find a new home with the Apache Foundation. Is it a hostile move that will undermine OpenStack? I see it more as an act of desperation. Clearly, that wasn’t the initial plan when Citrix first acquired Cloud.com. Citrix failed to build a community around Cloud.com, miscalculated the synergies between the two communities, got trumped by OpenStack’s momentum, and dumped what’s left of Cloud.com on the Apache Foundation. Citrix has already announced twice before that CloudStack would be open source, yet has received no outside contributions to date. The last commit to Cloud.com on GitHub by a non-Citrix employee is dated several months ago.
At this point, Citrix has a spotty history when it comes to open source. Open source is built on trust, and Citrix is hard to trust right now. Having burned bridges with its last two communities (Xen/Linux) and now OpenStack, it is going to be a big challenge for the company to revive CloudStack from its present semi-dead state.
Saturday, March 31, 2012
I hear the Essex Train a-coming
With the Essex train in the wilds of testing, and the intended Essex release date less than 10 days away, we are pretty excited about everyone descending on San Francisco -- practically our home town -- for the Design Summit and Conference.
Here at Mirantis, the company famous across the OpenStack community for distributing vodka bottles at OpenStack meetups, we are gearing up in a big way for the summit and conference. If you haven't seen the agenda, here's what we've got teed up:
(1) We’ll start the frenzy with just-in-time-training: we have a few seats left at our 2-day OpenStack Boot Camp, crammed into the weekend of April 14-15, right before the summit and conference. REGISTER HERE and come to the event fully prepared to torment the presenters with insidious technical questions about OpenStack technology and its future.
(2) Our team will participate in / moderate a few exciting sessions during the conference: OpenStack and Block Storage, OpenStack and High Performance Computing, Expanding the Community. Please be sure to pay us a visit.
(3) …and just to show how happy we are to have you here, we invite the community at the conference to join the Mirantis Summit Kick-Off Party. Click for a preview of what’s to come! Vodka bottles and fun times, in the best tradition of all our events, are guaranteed. Be sure not to miss it.
Looking forward to receiving everyone at the 2012 OpenStack Design Summit and Conference.
Tuesday, March 13, 2012
Long Live Enterprise Clouds
Two views are popular these days: that open clouds are good while enterprise clouds are bad, and that application infrastructure is destined to become a standardized commodity, like electricity. I disagree with both of these.
So let’s start with the “open cloud is good, enterprise cloud is bad” stance. In my view, making this comparison is like saying a kitchen knife is better than a Swiss Army knife. A kitchen knife is simple, sharp, and has no moving parts. Just like an open cloud, it is designed to be a simple solution to a single, concrete problem. Ever try to prepare a full meal with a Swiss Army knife? Sure, when you go camping, it’s probably fine. But when your mother-in-law is coming to dinner?
The fundamental difference between enterprise and open clouds is in their approach to the problem. The open cloud mentality comes from a service provider view of the world, where the cloud is not built to support the business but rather IS THE BUSINESS. The approach is to build from the bottom up for a narrow problem, just as you would if you were a software company and the cloud were your product, aimed at capturing a chunk of some market.
In the open cloud world, apps do not dictate the infrastructure; you start with the infrastructure, and go up the stack. In other words, the information technology infrastructure dictates the way the application solves a business problem.
In the enterprise world, it’s precisely the other way around: IT exists to support the business. The applications are king, and they dictate the IT underneath. Is this the best scenario when it comes to simplifying the infrastructure and containing its cost? Definitely not! Is there an alternative for the enterprise? Definitely not! The reason? There’s an irreducible domain knowledge gap between IT and business. In the case of AWS, Salesforce, or Google, IT is the business. In the enterprise, IT’s job is to salute the business and support it.
Let’s take a concrete example (a real-life story, by the way). We are now enjoying the dawning of the age of big data. Say some entrepreneur decides to take advantage of Hadoop and Lucene to build a new engine for parsing and aggregating bioinformatics data, one that can extract results that were never before possible. He then sells his marvelous, vertically focused innovation to pharma companies. If I’m at Pfizer and I don’t buy it, but my rivals at Roche do, I get left behind. But say my IT does not do Hadoop and Lucene, and I can’t run the solution in a public cloud because of regulatory compliance. Now what do I do?
If you guessed that I call my CIO and tell him to stand up an environment that will support this, you’re right. IT has to follow the lead of the business, or the whole business fails. This happens over and over again, so over time IT has to support an extremely diverse environment. Conceivably, the gap may shrink as IT becomes an increasingly dominant business process in every vertical, but don’t plan on it happening this month. Or even next month.
Now, there is a common view that it doesn’t have to be this way, which stems from an elegant but very one-dimensional comparison between IT infrastructure and electricity: application infrastructure is a commodity, just like electricity. All apps should be built to run on top of this common, standardized infrastructure, and just as we all have the same shape of electrical outlets (except for, well, the Europeans), this is where we’ll all be soon.
It sounds great, but I’m sad to say that I have to call bullshit. Electricity and application infrastructure are not the same. Unlike with electricity, there is massive innovation at the bottom of the application stack. We didn’t even use virtualization until recently. Yesterday it was all about disks; today it is all about SSDs.
We don’t know what new paradigms will emerge in the coming years. This innovation shakes the entire stack. Going back to my example: not too long ago, Hadoop was not widely used. Had it not existed, the new app would not have been possible, and IT would not have had to buy new infrastructure and deploy unfamiliar middleware on it. But because it does exist, IT has to adjust. And tomorrow there will be a new paradigm, and IT will have to adjust again and again.
Commoditization and standardization can only happen in stagnant industries like electricity generation and distribution, or growing potatoes, where the world has pretty much stopped. Until that kind of stable stagnation becomes a common theme in the application infrastructure space, there will always be expensive enterprise clouds alongside open, inexpensive, commodity clouds. The enterprise will be constantly configuring its Swiss Army knife, aimed at minimizing the pain of dealing with diversity in the stack.
Tuesday, February 14, 2012
Under the hood of Swift. The Ring
There are three types of entities that Swift recognizes: accounts, containers, and objects. Each type has a ring of its own, but all three rings are built the same way: Swift services use the same source code to create and query all three. Two Swift classes are responsible for these tasks: RingBuilder and Ring, respectively.
Ring data structure
Each of the three rings in Swift is a structure consisting of three elements:
- a list of devices in the cluster, also known as devs in the Ring class;
- a list of lists of device ids indicating partition-to-data assignments, stored in a variable named _replica2part2dev_id;
- an integer number of bits to shift an MD5-hashed account/container/object path by, to calculate the partition index for the hash (the partition shift value, part_shift).
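To make this concrete, here is a rough sketch of the three elements in Python (field names follow the Ring class; the values are illustrative, not from a real cluster):

```python
from array import array

# Conceptual shape of a ring's data; a sketch, not Swift's exact classes.
ring_data = {
    'devs': [],                            # device dicts (see the table below)
    '_replica2part2dev_id': [array('H'),   # one array('H') per replica,
                             array('H'),   # each mapping a partition index
                             array('H')],  # to a device id
    'part_shift': 32 - 18,                 # e.g. for a partition power of 18
}
```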
List of devices
A list of devices includes all storage devices (disks) known to the ring. Each element of this list is a dictionary of the following structure:

| Key | Type | Value |
|---|---|---|
| id | integer | Index of the device in the devices list |
| zone | integer | Zone the device resides in |
| weight | float | The relative weight of the device compared to the other devices in the ring |
| ip | string | IP address of the server containing the device |
| port | integer | TCP port the server uses to serve requests for the device |
| device | string | Disk name of the device on the host system, e.g. sda1. Used to identify the disk mount point under /srv/node on the host system |
| meta | string | General-use field for storing arbitrary information about the device. Not used by servers directly |
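For illustration, a single entry in the device list might look like this (all values are made up):

```python
device = {
    'id': 0,              # index of this entry in the devs list
    'zone': 1,            # zone the device resides in
    'weight': 100.0,      # relative weight vs. the other devices
    'ip': '10.0.0.101',   # IP of the storage node holding the disk
    'port': 6002,         # TCP port the server listens on
    'device': 'sdb1',     # disk name; mounted under /srv/node/sdb1
    'meta': '',           # free-form field, not used by servers
}
```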
Partitions assignment list
This data structure is a list of N elements, where N is the replica count for the cluster (the default replica count is 3). Each element of the partitions assignment list is an array('H'), a compact and efficient Python array of unsigned short integers. These values are indexes into the list of devices (see the previous section), so each array('H') in the partitions assignment list represents a mapping of partitions to device IDs.

The ring takes a configurable number of bits from a path's MD5 hash and converts it to an integer. This number is used as an index into the array('H'), pointing to the element that designates the ID of the device to which the partition is mapped. The number of bits kept from the hash is known as the partition power, and 2 to the partition power is the partition count.
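A simplified sketch of this calculation in Python (illustrative only; the real code in swift.common.ring also mixes a cluster-wide hash path suffix into the hashed value):

```python
import struct
from hashlib import md5

def get_partition(path, part_shift):
    """Map an /account[/container[/object]] path to a partition index."""
    digest = md5(path.encode('utf-8')).digest()
    # Read the first 4 bytes of the digest as a big-endian unsigned
    # integer, then drop the low part_shift bits, keeping only the
    # top (32 - part_shift) bits, i.e. the partition power.
    return struct.unpack_from('>I', digest)[0] >> part_shift

# With a partition power of 18, the shift value is 32 - 18 = 14:
print(get_partition('/account/container/object', 32 - 18))
```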
For a given partition number, each replica's device will not be in the same zone as any other replica's device. Zones can be used to group devices based on physical locations, power separations, network separations, or any other attribute that could make multiple replicas unavailable at the same time.
Partition Shift Value
This is the number of bits taken from the MD5 hash of the '/account[/container[/object]]' path to calculate the partition index for that path. The partition index is calculated by translating the binary portion of the hash into an integer.
Ring operation
The structure described above is stored as a pickled (see Python pickle) and gzipped (see Python gzip.GzipFile) file. There are three files, one per ring, and usually their names are:
account.ring.gz
container.ring.gz
object.ring.gz
These files must exist in the /etc/swift directory on every Swift cluster node, both proxy and storage, as services on all these nodes use them to locate entities in the cluster. Moreover, the ring files on all nodes must have the same contents for the cluster to function properly.

There are no internal Swift mechanisms that guarantee that the ring is consistent, i.e. that the gzip file is not corrupt and can be read. Swift services also have no way to tell whether all nodes have the same version of the rings. Maintenance of the ring files is the administrator's responsibility, though these tasks can of course be automated by means external to Swift itself.
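One simple external check, for example, is to compare ring file checksums across nodes (the hostname is assumed for illustration):

```sh
# run locally and on each node; the sums must match everywhere
md5sum /etc/swift/*.ring.gz
ssh storage01 md5sum /etc/swift/*.ring.gz
```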
The ring allows any Swift service to identify which storage nodes to query for a particular storage entity. The method Ring.get_nodes(account, container=None, obj=None) is used to identify the target storage nodes for a given path (/account[/container[/object]]). It returns a tuple of the partition and a list of node dictionaries. The partition is used for constructing the local path to the object file or the account/container database. The node dictionaries have the same structure as the devices in the list of devices (see above).
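As an illustration, a minimal sketch of querying the ring (the account, container, and object names are made up; assumes Swift is installed and the ring file is in place):

```python
from swift.common.ring import Ring

# Load the object ring from its serialized file.
object_ring = Ring('/etc/swift/object.ring.gz')

# partition is an integer; nodes is a list of dicts with the same
# fields as the device list entries described above.
partition, nodes = object_ring.get_nodes('account', 'container', 'object')
for node in nodes:
    print(node['ip'], node['port'], node['device'], partition)
```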
Ring management
Swift services cannot change the ring; the ring is managed by the swift-ring-builder script. When a new ring is created, the administrator first specifies the builder file and the main parameters of the ring: the partition power (or partition shift value), the number of replicas of each partition in the cluster, and the time in hours before a specific partition can be moved in succession.
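For example, creating a builder file for the account ring with a partition power of 18, 3 replicas, and at least 1 hour between moves of any given partition (a sketch; the file name and values are illustrative):

```sh
swift-ring-builder account.builder create 18 3 1
```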
When the temporary builder file structure is created, the administrator should add devices to the ring. For each device, the required values are the zone number, the IP address of the storage node, the port on which the server is listening, the device name (e.g. sdb1), optional device metadata (e.g. model name, installation date, or anything else), and the device weight. The device weight is used to distribute partitions between devices: the greater the weight, the more partitions are assigned to that device. The recommended initial approach is to use devices of the same size across the cluster and set a weight of 100.0 for each; for devices added later, the weight should be proportional to capacity. At this point, all devices that will initially be in the cluster should be added to the ring.
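For instance, adding a single device in zone 1 on a storage node at 10.0.0.101 (the address, port, and device name are assumed for illustration; the general argument form is z&lt;zone&gt;-&lt;ip&gt;:&lt;port&gt;/&lt;device&gt;_&lt;meta&gt; &lt;weight&gt;):

```sh
swift-ring-builder account.builder add z1-10.0.0.101:6002/sdb1 100.0
```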
The consistency of the builder file can be verified before creating the actual ring file. After successful verification, the next step is to distribute partitions between the devices and create the actual ring file; this is called rebalancing the ring. The process is designed to move as few partitions as possible, to minimize the data exchange between nodes, so it is important that all necessary changes to the ring are made before rebalancing it.
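Continuing the sketch with the same account.builder file:

```sh
# check the consistency of the builder file
swift-ring-builder account.builder validate

# distribute partitions across devices and write out account.ring.gz
swift-ring-builder account.builder rebalance
```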
The whole procedure must be repeated for all three rings: account, container, and object. The resulting .ring.gz files should be pushed to all nodes in the cluster. The builder files are also needed for future changes to the rings, so they should be backed up and kept in a safe place; one approach is to store them in the Swift storage itself as ordinary objects.
Physical disk usage
A partition is essentially a block of data stored in the cluster. This does not mean, however, that disk usage is constant across partitions. The distribution of objects between partitions is based on the hash of the object path, not on the object size or other parameters. Objects are not partitioned; an object is kept as a single file in the storage node's file system (except for very large objects, greater than 5 GB, which can be uploaded in segments; see the Swift documentation).

A partition mapped to a storage device is actually a directory in the structure under /srv/node/&lt;dev_name&gt;. The disk space used by this directory may vary from partition to partition, depending on the size of the objects that have been placed in the partition by mapping the hash of the object path to the ring.
In conclusion, it should be said that the Swift ring is a beautiful structure, though it lacks a degree of automation and synchronization between nodes. I'm going to write about how to solve these problems in one of the following posts.
More information
More information about the Swift ring can be found in the following sources:
- Official Swift documentation: the base source for the description of the data structure
- Swift ring source code on GitHub: the code base of the Ring and RingBuilder classes
- Blog of Chmouel Boudjnah: contains useful Swift hints