Thursday, December 29, 2011

Diablo RPM repository

Recently we deployed the OpenStack Diablo release for one of our customers. The target operating system happened to be CentOS 6.0. During deployment testing we stumbled upon a number of bugs in the OpenStack RPMs we tried to use.

All of the existing OpenStack RPMs we found contained problems that prevented the components from working correctly with each other:
1. Incompatible protocol in the packaged version of Keystone (already fixed): https://lists.launchpad.net/openstack/msg04876.html
2. JSON template bug (already fixed): https://bugs.launchpad.net/keystone/+bug/865448/
3. iSCSI target management troubles: https://bugzilla.redhat.com/show_bug.cgi?id=737046

In addition, there was no packaged nova-vnc in the CentOS repositories.
So we fixed these bugs and set up our own repository for OpenStack Diablo. The packages added there have been tested in a real-world deployment.

You can easily install the repository on your CentOS system using wget:

$ sudo wget -O /etc/yum.repos.d/epel-mirantis.repo http://download.mirantis.com/epel-el6-mirantis/epel-mirantis.repo
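Once the repo file is in place, the packages can be installed with yum as usual. A quick sketch (the package names below are assumptions; check the repository listing linked below for the exact set, including the nova-vnc package):

$ sudo yum clean metadata
$ sudo yum install openstack-nova openstack-glance openstack-keystone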


You can browse the repository here: Mirantis OpenStack Diablo

Tuesday, December 20, 2011

Meet & Drink: OpenStack in Production – Event Highlights

As has become tradition at this point, we offer a photo report covering the OpenStack Meetup event series hosted by Mirantis and the Silicon Valley Cloud Center. Our December 14th event focused on sharing experience around running OpenStack in production. I moderated a panel consisting of Ken Pepple – director of cloud development at Internap, Ray O’Brien – CTO of IT at NASA, and Rodrigo Benzaquen – R&D director at MercadoLibre.

This time we went all out and even recorded the video of the event:




For those who are not in the mood to watch the 50-minute panel video, here is a quick photo report:


We served wine and beer with pizza, salad and desserts...



...While people ate, drank, and mingled...



…and then they drank some more…



We started the panel with me saying smart stuff about OpenStack. After the intro we kicked off with questions to the panelists.



The panelists talked...



...and talked...



...and then talked some more.



Meanwhile, the audience listened...



...and listened.



Everyone in our US team was sporting these OpenStack shirts.



At the end we gave out five signed copies of "Deploying OpenStack", the book written by one of our panelists – Ken Pepple. Roman (pictured above) did not get a copy.

Thursday, November 24, 2011

Converging OpenStack with Nexenta

For those folks who missed our webcast on using OpenStack Compute with NexentaStor to manage VM volumes, the recording is below.

Please note, you can download the NexentaStor driver for OpenStack here: http://www.nexentastor.org/projects/osvd/files.

You can also read additional information about this project here: http://wiki.openstack.org/NexentaVolumeDriver
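For context, once the driver is installed on the volume node, it is typically enabled through nova.conf. The snippet below is only a sketch: the driver class path and option names are assumptions on my part, so check the driver's documentation for the real ones.

# nova.conf (Diablo-style flag file), illustrative values only
--volume_driver=nova.volume.nexenta.volume.NexentaDriver
--nexenta_host=192.168.1.100
--nexenta_user=admin
--nexenta_password=secret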



If you need help installing / troubleshooting the Nexenta driver for OpenStack, please do contact us.

Thursday, September 29, 2011

OpenStack Meet & Drink: Toast to Diablo – Event Highlights

As usual, here are the highlights from the last Bay Area OpenStack Meet & Drink: Toast to Diablo – September 28th, 2011. Thanks to WiredRE for hosting us, Dave Nielsen – for helping to organize, and all the attendees – for coming. Once again, this was the biggest MeetUp thus far, with 150 in attendance. For those of you who didn’t come – here is what you missed:




We started our Diablo release celebration with wine, beer and pizza. Fun mingling with fellow stackers. As people kept arriving it got almost too crowded.




Mirantis founder – Alex Freedland – passionately explaining something to David Allen.




Mike Scherbakov from Mirantis, Josh McKenty from Piston and Eric from Cloudscaling debating OpenStack with noticeable vigor.




Eric Windisch proudly sporting his uber cool Cloudscaling shirt, listening to Mike Scherbakov from Mirantis.




While the crowd was mingling, Dave Nielsen took people on datacenter tours. The datacenter basically looked like this.




As usual, I opened with some thank-yous and acknowledgements to our sponsors and organizers. Marc Padovani of HP Cloud Services – clapping and anxiously waiting for his turn to tell the crowd about the OpenStack-based hpcloud.com.




With 150 stackers in attendance, we didn’t have quite enough chairs to accommodate everyone.




Dave Nielsen talking about our venue host – WiredRE.




Chris Kemp – CEO and founder of Nebula – announced the OpenStack Silicon Valley LinkedIn group that Nebula recently started.




…meanwhile, Josh McKenty was waiting for his turn to speak…




Don’t remember why, but for some reason Josh’s presentation involved talking about O-Ren Ishii from Kill Bill. Whatever it was, Chris Kemp got a kick out of it.




Everybody likes Kill Bill, so the crowd was cheering.




Geva Perry shared his perspective on why OpenStack’s strength is in its ecosystem of developers and partners.




Jason Venner of X.com talked about OpenStack and CloudFoundry. He was careful not to reveal anything with respect to the upcoming “October 13th” announcement of the X.commerce platform.

In closing we had Marc Padovani from HP talk about hpcloud and HP’s commitment to OpenStack. The presentation quickly turned into a Q&A grilling session, with stackers expressing their suspicions that hpcloud.com is a smoke screen rather than a real offering. Marc did his best to address the questions without incriminating his big corporation… My wife got too tired of taking pictures at that point, so there are none of Marc… sorry Marc.





Hungry stackers drank most of the wine and ate most of the food. Whatever was left over, people took home. We kept one last bottle of Cloud Wine. I intend to give it as a gift to our 500th MeetUp member – Ilan Rabinovich. Ilan – if you read this, ping me on twitter @zer0tweets to claim your prize!

Thank you to everyone and we’ll do it again in 3 months.

Friday, September 23, 2011

What is this Keystone anyway?

The simplest way to authenticate a user is to ask for credentials (login+password, login+keys, etc.) and check them against some database. But when there are lots of separate services, as in the OpenStack world, we have to rethink that. The main problem is that a single user identity cannot be used for authorization everywhere. For example, a user expects Nova to take their credentials and create or fetch images in Glance or set up networks in Quantum on their behalf. This cannot be done without a central authentication and authorization system.

So now we have one more OpenStack project: Keystone. It is intended to hold all the common information about users and their capabilities across the other services, along with a list of those services themselves. We have spent some time explaining to our friends what it is, why it exists, and how it works, and now we've decided to blog about it. What follows is an explanation of every entity that drives Keystone’s life. Of course, this explanation may become outdated in no time, since the Keystone project is very young and is developing very fast.

The first basic entity is the user. Users are users; they represent someone or something that can gain access through Keystone. Users come with credentials that can be checked, such as passwords or API keys.

The second is the tenant. It represents what is called a project in Nova: something that aggregates a number of resources in each service. For example, a tenant can have some machines in Nova, a number of images in Swift/Glance, and a couple of networks in Quantum. Users are always bound to some tenant by default.

The third and last authorization-related kind of object is the role. A role represents a group of users that is assumed to have some access to resources, e.g. some VMs in Nova and a number of images in Glance. A user can be added to a role either globally or within a tenant. In the first case, the user gains the access implied by the role to the resources of all tenants; in the second case, that access is limited to the resources of the corresponding tenant. For example, a user can be an operator of all tenants and an admin of their own playground.

Now let’s talk about service discovery capabilities. With the first three primitives, any service (Nova, Glance, Swift) can check whether or not a user has access to its resources. But to access some service in a tenant, the user has to know that the service exists and find a way to reach it. So the basic objects here are services. They are really just distinguished names. The roles we talked about above can be not only global but also bound to a specific service. For example, when Swift requires administrator access to create some object, it should not require the user to have administrator access to Nova as well. To achieve that, we create two separate Admin roles, one bound to Swift and another bound to Nova. After that, admin access to Swift can be granted to a user with no impact on Nova, and vice versa.

To access a service, we have to know its endpoint. So there are endpoint templates in Keystone that provide information about all existing endpoints of all existing services. One endpoint template provides a list of URLs for accessing an instance of a service: a public, a private, and an admin URL. The public one is intended to be accessible from the outside world (like http://compute.example.com), the private one can be used for access from a local network (like http://compute.example.local), and the admin one is used when administrative access to the service is separated from common access (as it is in Keystone).
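As an illustration, an endpoint template for a compute service boils down to a record along these lines (the field names here are paraphrased for readability, not Keystone’s exact schema):

{
    "service": "nova",
    "region": "RegionOne",
    "publicURL": "http://compute.example.com:8774/v1.1/",
    "privateURL": "http://compute.example.local:8774/v1.1/",
    "adminURL": "http://compute.example.local:8774/v1.1/"
}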

Now we have a global list of the services that exist in our farm, and we can bind tenants to them. Every tenant can have its own list of service instances; this binding entity is called the endpoint, which “plugs” the tenant into one service instance. This makes it possible, for example, to have two tenants that share a common image store but use distinct compute servers.

That is a long list of entities involved in the process, but how does it actually work?

  1. To access some service, users provide their credentials to Keystone and receive a token. The token is just a string that Keystone internally associates with a user and a tenant. This token travels between services with every user request, and with every request one service sends to another while processing the user's request. (A rough sketch of this request follows the list below.)
  2. The user then finds the URL of the service they need. If the user, for example, wants to spawn a new VM instance in Nova, they can find the URL for Nova in the list of endpoints provided by Keystone and send an appropriate request.
  3. After that, Nova verifies the validity of the token with Keystone and creates an instance from the image identified by the provided image ID, plugging it into some network.
    • First, Nova passes the token to Glance to fetch the image stored there.
    • Then it asks Quantum to plug the new instance into a network; Quantum verifies in its own database whether the user has access to that network, and checks access to the VM's interface by requesting information from Nova.
    All along the way, the token travels between services so that they can ask Keystone or each other for additional information or actions.
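To make step 1 concrete, here is a rough sketch of a token request against the Keystone service API. The host, port, and credentials are made up, and the exact payload format varied between early Keystone revisions, so treat this as illustrative only:

$ curl -s -H "Content-Type: application/json" \
    -d '{"auth": {"passwordCredentials": {"username": "joeuser", "password": "secrete"}, "tenantName": "demo"}}' \
    http://keystone.example.com:5000/v2.0/tokens

The response carries the token ID plus a service catalog listing the public, private and admin URLs of the tenant's endpoints, which is how step 2 finds the Nova URL.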

Here is a rough diagram of this process:

Friday, September 16, 2011

Cloudpipe Image Creation Automation


Cloudpipe is used in OpenStack to provide access to a project’s instances when using VLAN networking mode. It is just a custom virtual machine (VM) prepared in a special way, i.e. with an appropriately configured OpenVPN and startup scripts. More details on what cloudpipe is and why it is needed are available in the OpenStack documentation.
The process of creating the image involves a lot of manual steps that beg to be automated. To simplify them, I wrote a simple script that uses some libvirt features to provide a fully automated solution, so you don't even have to bother with preparing the base VM manually.
The solution can be found on GitHub and consists of three parts:
  • The first, ubuntukickstart.sh, is the main part and the only one you should execute. When you run it, it configures the virtual network and PXE, then starts a new VM that installs a minimal Ubuntu server via kickstart, so the installation is fully automated and unattended.
  • The second, cloudpipeconf.sh, turns the minimal Ubuntu server into a cloudpipe image. It is executed once the VM is ready for this transformation.
  • The last, ssh.fs, is used to SSH into the VM and shut it down.
So, if you need a cloudpipe image, just run ubuntukickstart.sh and wait: you'll get the cloudpipe image without a single mouse click or keystroke! A rough usage sketch follows.
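For reference, a minimal usage sketch (assuming libvirt/KVM and the prerequisites from the README are already in place; the clone directory name is just an example):

$ git clone <repository URL from the link above> cloudpipe-image-automation
$ cd cloudpipe-image-automation
$ sudo ./ubuntukickstart.sh    # sets up the virtual network and PXE, then kickstarts the VM
# cloudpipeconf.sh and ssh.fs are invoked by the main script; when it finishes,
# the resulting cloudpipe image is ready to be registered with your cloud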
More detailed information about how it works can be found in the README file.
Don’t hesitate to leave a comment if you have any questions or concerns.

Thursday, September 8, 2011

Cloud Accelerates Open Source Adoption

Historically, commercial software provided enterprises with reliability and scalability, especially for mission-critical tasks. No one wanted to risk failure in finance, operations, or any essential or enterprise-wide areas. So, enterprises considered open source technology only for less important, more tactical purposes.

Recently, however, many large IT organizations have developed significant open source strategies. Cisco, Dell, NASA, and Rackspace came together to give birth to OpenStack. VMware acquired SpringSource and shortly thereafter, announced Cloud Foundry, their open source PaaS. Amazon, salesforce.com, and others built solutions entirely on an open source stack. Whole categories of technologies, such as noSQL databases, made their way to mass adoption shortly after being open sourced by Google and Facebook. There has been more activity in open source during the last two years than in the preceding decade. So what’s going on here?

Without a doubt, cloud is the IT topic that’s been grabbing headlines and investment dollars in the past few years. The recent high level of activity in open source noticeably correlates with the cloud movement, because there is a deep, synergetic relationship between the two. In fact, cloud is the primary driver for the increased adoption of open source.

In general, open source projects typically require two components to get community uptake. First, the nature of the project itself has to be technologically challenging. Successful open source projects are largely about solving a set of complex technological tasks vs. just writing a lot of code to support complex business processes, as is the case with building enterprise software. Linux, MySQL and BitTorrent are all good examples here. Second, it requires a high rate of end user adoption. The more people and organizations that start using the open source technology at hand, the more mature the community and the technology itself become.

Cloud has created an enormous amount of technologically challenging fodder for the open source community. The adoption of cloud translates to greater scale at the application infrastructure layer. Consequently, all cloud vendors, from infrastructure to application, are forced to innovate and build proprietary application infrastructure solutions aimed at tackling scale-driven complexity. Facebook’s Cassandra and Google’s Google File System/Hadoop/BigTable stack are prime examples of this innovation.

However, it is important to note that neither Facebook nor Google is in the business of selling middleware. Both make money on advertising. Their middleware stack may be a competitive advantage, but it is by no means THE competitive advantage. Because companies want to keep IT investments as low as possible, the natural way to drive down the costs associated with scale-driven complexity is to have the open developer community help with at least some of the work of supporting and growing the stack. The result? Instances like Facebook’s open sourcing of Cassandra and Rackspace contributing its object storage code to OpenStack. Ultimately, cloud drives complexity while cloud vendors channel that complexity down to the open developer community.

What about end user adoption? Historically, enterprises were slow to adopt open source. Decades of lobbying by vendors of proprietary software have drilled the idea of commercial software superiority deep into the bureaucracy of enterprise IT. Until recently, the biggest selling point for commercial enterprise software was reliability and scalability for mission-critical tasks; open source was “OK” for less important, more tactical purposes. Today, after leading cloud vendors like Amazon, Rackspace, and Google built solutions on top of an open source stack, the argument that open source is unfit for mission-critical operations or incapable of supporting the required scale is no longer valid.

But the wave of open source adoption is not just about the credibility boost it received in recent years. It is largely about the consumption model. Cloud essentially refers to the new paradigm for delivery of IT services. It is an economic model that revolves around “pay for what you get, when you get it.” Surprisingly, it took enterprises a very long time to accept this approach, but last year was pivotal in showing that it is gaining traction and is the way of the future. Open source has historically been monetized with a model that is much closer to “cloud” than that of commercial software. In the case of commercial software, you buy the license and pay for implementation upfront. If you are lucky enough to get it implemented, you continue to pay for a subscription that is sold in various forms – support, service assurance, etc. With open source, you are free to implement first, and if it works, you may (or may not) buy commercial support, which is also frequently sold as a subscription to a particular SLA. The cloud hype has helped initiate a shift in the standard consumption model for IT services. As enterprises wrap their minds around cloud, they shy further away from the traditional commercial software model and move closer to the open source / services-focused model.

It is also important to note that the consumption model issue is not simply a matter of perception. There are concrete, tactical drivers behind it. As the world embraces the services model, it is becoming increasingly dominated by service-level agreements (SLAs). People are no longer interested in licensing software products that are just a means to an end. Today, they look for meaningful guarantees where vendors (external providers or internal IT) assure a promised end result. This shift away from end user licensing agreements (EULAs) and toward SLAs is important. If you are a cloud vendor such as Salesforce.com, you are in the business of selling SLA-backed subscription services to your customers. If, at the same time, you rely on a third-party vendor for a component of your stack, the SLA of your vendor has to provide the same or better guarantees than the ones you pass on to your client. If your vendor doesn’t offer an SLA or only offers an end user license agreement, you end up having to bridge the gap. The gaps that an organization is forced to bridge ultimately affect its enterprise value. As we move away from the EULA-driven economy and more towards SLAs, open source stands to benefit.

Ultimately, as cloud continues to mature, we will continue to see more and faster growth in open source. While the largest impact so far has been in the infrastructure space, open source popularity will eventually start spreading up the stack towards the application layer.

Thursday, August 25, 2011

Tracing the IT Evolution from the Big Bang to the Big Crunch

How enterprises are progressing from overgrown, difficult-to-manage IT systems to high performance open source infrastructure

Over the history of computing, we can trace a pattern of continuous decomposition, from a single system into disparate components. Early on, these individual parts made it easier to design, program and maintain systems, and meet the fast-growing demand for more power and more capacity.

The industry began with the mainframe, where the entire stack from hardware to application logic was contained in a single box. The next phase was the move from mainframe to client-server. This was followed by SOA (service-oriented architecture). This process of decomposition is a natural byproduct of growth in scale. As we consume increasingly more computing and storage, efficiency gains are achieved through specialization.

Such continuous decomposition is a typical pattern of many industries. Several centuries ago, the model was subsistence farming, where every family as a single unit grew all of their own crops. Today, food production has decomposed into a collection of highly specialized industries.

However, this process of decomposition in IT injects complexity. At a certain scale, highly decomposed systems become extremely challenging to manage. This then drives a pressing need to abstract away from some of the individual components to a higher level. This is largely what we are observing today with infrastructure computing. The complex mammoth of enterprise IT, today comprised of a spaghetti mix of application servers, relational and noSQL databases, messaging queues, caching and search services, etc., is no longer manageable.

Gartner labeled 2011 as the year of cloud platforms or PaaS. Thinking of PaaS, we intuitively think Heroku, Force.com, and Google App Engine, all off-premise cloud platforms. But the cloud movement is not just about on-premise versus off-premise. It's about creating an effective means to abstract away from application infrastructure complexity. As mainframes exploded into myriad sub-components, we experienced sort of a Big Bang in enterprise IT. What we are starting to observe now is the Big Crunch, turning application infrastructure back into a more unified, manageable artifact.

OpenStack

OpenStack is one of the most interesting initiatives topping the headlines during the last several months, and it's directly related to the Big Crunch. An open source project with the promise to help consolidate the many disparate components of application infrastructure, OpenStack is only a year old and is far from fulfilling this promise today. However, I believe that OpenStack for application infrastructure will eventually become what Linux became to application logic many years ago - a single interface unifying all application infrastructure components and exposing a standardized set of APIs to applications running on top of it.

Open Source Cloud Projects and How They Differ

OpenStack is not the first open source cloud project. Eucalyptus, OpenNebula, and Cloud.com all emerged before OpenStack and all of them are still very much alive. However, OpenStack is different from these others because it's the only one that has gained enough critical mass to get on a steady course to mass adoption.

What enabled OpenStack to reach this point was not an accident, but a clever strategy by Rackspace and the other founding members. Rather than following a more common, vendor-centric approach to building an open source community, like Eucalyptus and Cloud.com did, Rackspace quickly figured out that getting a "cloud operating system" to mass adoption would require more marketing muscle than any single vendor has. So it positioned OpenStack as a decentralized, community-driven project from the very beginning and set out to get the support of big players in the application infrastructure space, namely Dell, Cisco, and Citrix. It didn't go after just any infrastructure player, but specifically focused on those who were arguably late to the cloud game and aching to make up the distance they had lost to the likes of VMware and IBM. Ultimately, OpenStack's blitz to success is the result of unleashing an enormous amount of marketing energy in a short period of time, carefully coordinated between a number of application infrastructure powerhouses.

Following Amazon to Open Source Infrastructure

Today, OpenStack is focused on low-level infrastructure services - compute, storage, image service, etc. - and much work still remains to be done by the community in that area. However, we know the trend and have already seen it with Amazon Web Services (AWS). AWS initially started as Infrastructure as a Service (IaaS) with its EC2 and S3 offerings; it then evolved into a full-blown Platform as a Service (PaaS). The value in solving application infrastructure complexity in a broader sense, by embedding higher-level services like automated deployment, message queues, map reduce, and monitoring, is simply too compelling. At some point, we expect to see OpenStack creeping into the PaaS space, the same way AWS is doing today.

This gradual transition from simply being a compute and storage infrastructure orchestrator into a complete cloud operating system will happen naturally for OpenStack. It will be driven by infrastructure vendors of all sizes that are looking to plug their solutions into the OpenStack ecosystem. With more than 100 member companies on board already today, we see various announcements to this effect right and left: Gluster contributes its file system, Dell builds deployment services, CloudCruiser builds a cost management solution, etc.

What's Ahead for OpenStack

The openness and decentralized nature of OpenStack is central to the realization of its vision of the cloud operating system. Instead of trying to solve all application infrastructure complexity inside one monolithic system, such as with the VMware stack, OpenStack harnesses the naturally occurring decomposition in the infrastructure space. This is the Big Bang in infrastructure we've all experienced. Individual vendors with competence in one particular area of application infrastructure can plug their solutions (storage, caching, monitoring, etc.) into OpenStack. As OpenStack continues to gain adoption, it will become a channel for infrastructure vendors to sell their offerings in the same way that the Apple app store is a channel for mobile app developers. At the same time, OpenStack will help abstract end users and resident applications away from the complexity of disparate infrastructure solutions.

Today we are still in the early days of OpenStack. It's far from being the ultimate platform. It may also be less feature-rich than competing offerings from Microsoft or VMware. However, this is unimportant today. What's important is that the need for the Big Crunch that will decrease application infrastructure complexity is obvious. The magnitude of effort required to make this happen is not something any single vendor could credibly pull off. Ultimately, it's not OpenStack features that matter, but the "idea" behind this project and the degree of uptake it has already received in the community. When many people come together to realize a sensible vision, that vision inevitably becomes a reality.

Tuesday, August 16, 2011

Our Contribution to the Vegas Economy

Here are the highlights on our corporate team-building in Vegas last week. Special thanks to Rachel and Athena for making this party happen. Thank you to all who participated and helped make it fun.





We started by warming up with some drinks in the airport bar on the way over to Vegas.





The luggage belt broke upon our arrival and it took over an hour to get our luggage. By then, the buzz from the airport bar session started to wear off… =(.





The taxi line outside the airport was loooong… so we decided to embellish our Vegas experience immediately by taking a limo to the hotel.





Finally arrived; herding around the Aria hotel entrance.

After a brief bite to eat in Vettro café in Aria (which, by the way, is a horrible restaurant… don’t go there), we split up into two groups - the strong and the weak. The weak went to sleep or gamble. The strong went clubbing and came back to the hotel room only at 4am.





The next morning, we woke up to this view; 51st floor in Aria. Don’t get too excited – as with many Vegas hotels, they don’t have floors 40-50 in Aria.





Breakfast… some people slept in late, so our ranks were slim at breakfast.





Ilya enjoyed his fries enormously!





Next stop – quintessential Vegas pool party at Liquid Lounge. $5 to anyone who can spot Mike Scherbakov and Julia Varigina in the crowd.





Why would anyone herd in the pool with 100 people in it, music blasting and no seating space, when you can quietly lounge next to one of 10 other pools in the hotel? The point of the pool party only comes to you after a few drinks… as you can see from the stampede by the bar, we were not the only ones to feel that way.





Once you get a drink in your hand – it’s BLAST OFF!





No comment.





Winding down at the pool… next stop: corporate dinner.





It was dark and all we had was a point and shoot… so not so many pictures at the dinner. But basically this is what it looked like.





After the dinner we went to watch a show – Absinthe. This group picture was taken immediately after.

11:48pm – time to split up again into gamblers and partiers. Since I belonged to the party group, you don’t get to see the pictures of the gamblers… sorry.





Second night of clubbing looked like this. 1:45am and Mike is asleep on a couch at Tryst. This is called a SHUT DOWN!





And my SHUT DOWN happened in the airport on the way back.

Friday, August 12, 2011

LDAP identity store for OpenStack Keystone

After some time working with an OpenStack installation that uses an existing LDAP server for authentication, we encountered one big problem: the latest Dashboard code dropped support for the old bare authentication in favor of the Keystone-based one. At that time Keystone had no support for multiple authentication backends, so we had to develop this feature.
Now we have basic support for LDAP authentication in Keystone, which provides a subset of the functionality that was present in Nova. Currently, the main limitation is the inability to integrate with an existing LDAP tree due to limitations in the backend, but it works fine in an isolated corner of LDAP.
So, after a long stretch of coding and fighting with the new upstream workflows, we can give you a chance to try it out.
To do it, one should:
  1. Make sure that all necessary components are installed. They are Nova, Glance, Keystone and Dashboard.

    Since the latter two are still in incubation, you’ll have to download them from their source repositories:
  2. Set up Nova to authorize requests in Keystone:

    This assumes that you’re in the same directory where you downloaded the Keystone sources. Replace the nova.conf path if it differs in your Nova installation.
  3. Add schema information to your LDAP installation.

    This heavily depends on your LDAP server. There are a common .schema file and an .ldif for the latest version of OpenLDAP in the keystone/keystone/backends/ldap/ directory. For a local OpenLDAP installation, this will do the trick (if you haven’t changed the directory after the previous steps):

  4. Modify the Keystone configuration at keystone/etc/keystone.conf to use the LDAP backend:
    • add keystone.backends.ldap to the backends list in the [DEFAULT] section;
    • remove Tenant, User, UserRoleAssociation and Token from the backend_entities list in the [keystone.backends.sqlalchemy] section;
    • add a new section (don’t forget to change the URL, user and password to match your installation; see the rough sketch after this list):
  5. Make sure that the ou=Groups,dc=example,dc=com and ou=Users,dc=example,dc=com subtrees exist, or point the LDAP backend at other ones by adding the tenant_tree_dn, role_tree_dn and user_tree_dn parameters to the [keystone.backends.ldap] section of the config file.
  6. Run Nova, Keystone and Dashboard as usual.
  7. Create some users, tenants, endpoints, etc. in Keystone by using the keystone/bin/keystone-manage command, or just run keystone/bin/sample-data.sh to add test ones.

  8. Now you can authenticate in Dashboard using the credentials of one of the created users. Note that from this point on, all user, project and role management should be done through Keystone, using either the keystone-manage command or the syspanel in Dashboard.
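For reference, here is a rough, combined illustration of the fragments that steps 2-4 above refer to. Treat it as a sketch only: exact file names and option names changed quickly in that period, so verify everything against your own Keystone checkout.

# Step 2 (assumption): point Nova at the Keystone-enabled API paste config shipped
# with the Keystone sources, e.g. via a flag in nova.conf (Diablo-style flag file):
--api_paste_config=keystone/examples/paste/nova-api-paste.ini

# Step 3 (local OpenLDAP example): copy the schema next to the other OpenLDAP schemas
# and include it from slapd.conf, or load the provided .ldif into a cn=config setup:
$ sudo cp keystone/keystone/backends/ldap/*.schema /etc/openldap/schema/
$ sudo ldapadd -Y EXTERNAL -H ldapi:/// -f keystone/keystone/backends/ldap/*.ldif

# Step 4: the new section in keystone/etc/keystone.conf; the connection option names
# below are assumptions, and the *_tree_dn values mirror step 5:
[keystone.backends.ldap]
ldap_url = ldap://localhost
ldap_user = cn=Manager,dc=example,dc=com
ldap_password = secret
user_tree_dn = ou=Users,dc=example,dc=com
tenant_tree_dn = ou=Groups,dc=example,dc=com
role_tree_dn = ou=Groups,dc=example,dc=com
backend_entities = ['Tenant', 'User', 'UserRoleAssociation', 'Token']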

Thursday, June 30, 2011

Bay Area OpenStack Meet & Drink Highlights

For those of you who weren’t able to make it yesterday, and maybe for those of you who want to reminisce about the events of last night: the Bay Area OpenStack Meet & Drink was probably the most well-attended OpenStack meetup in the valley to date, outside of the OpenStack summit this spring. A diverse crowd of over 120 stackers showed up – ranging from folks just learning the basics of OpenStack to hardcore code committers.





We originally planned on hosting a 30-40 person tech meetup session in a small cozy space at the Computer History Museum. However, with over 100 RSVPs we had to go all out and rent out Hahn Auditorium, making space for all of those wanting to participate.





The first 40 minutes – people eating, drinking, and mingling. The food line was a bit overwhelming.





Cloud wine was served with dinner.





Joe Arnold from Cloudscaling brought a demo server running Swift for people to play around with.





I opened the ceremony with a 5-minute intro – polling the audience on their experience with OpenStack, saying a few words about Mirantis and upcoming events, as well as introducing Mirantis team members.





Meanwhile, Joe was getting all too excited to do his Swift pitch.





Joe did his 10-minute talk on “Swift in the Small.” You can read up on the content that was presented in Joe’s blog: http://joearnold.com/2011/06/27/swift-in-the-small/. You can also view the slides here: http://bit.ly/mMRcpt. And the live recording of the presentation can be found here: http://bit.ly/mJOr2R





We gave out Russian Standard vodka bottles at the meetup as favors. To complete the theme and give the audience a taste of Russian hospitality, we had an accordionist perform a 5-minute stunt immediately after Joe’s pitch on Swift (see his performance here: http://bit.ly/iiYveN).





Party time…





Mike Scherbakov from our team of stackers talked about implementing Nova in Mirantis’ internal IT department, taking quite a few questions from the audience. The deck of his presentation is here: http://slidesha.re/jyS4WL. The recording of the talk can be found here: part 1; part 2; part 3; and part 4.

I’d like to thank everyone for coming and we’ll appreciate any comments or suggestions on the event. We plan to have our next meetup at the end of September. If you would like to help organize, present your OpenStack story, or offer any ideas on how to make the experience better, please ping me on twitter @zer0tweets or send me an email – borisr at mirantis dot com.