tag:blogger.com,1999:blog-91202068292100522092024-03-13T10:39:29.298-07:00Mirantis Official BlogYury Koldobanovhttp://www.blogger.com/profile/15542385579516592101noreply@blogger.comBlogger26125tag:blogger.com,1999:blog-9120206829210052209.post-62215123265100104152012-05-17T17:47:00.001-07:002012-05-18T12:30:46.352-07:00Our blog has moved!<div style="text-align: left;">
<span style="line-height: 20px;"><br /></span></div>
<div style="text-align: left;">
<span style="line-height: 20px;">To help simplify search and navigation in the context of our main website, mirantis.com, we've moved our blog to mirantis.com/blog. You'll find a list of the posts we've moved below for your convenient reference. See you at <a href="http://www.mirantis.com/blog">www.mirantis.com/blog</a></span></div>
<ul>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/how-to-monetize-the-openstack-wave.php">How to Monetize the OpenStack Wave</a></li>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/the-new-open-source-superpower.php">The New Open Source Superpower</a></li>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/some-brutally-honest-thoughts-on-citrixs-defection.php">Some Brutally Honest Thoughts on Citrix’s Defection</a></li>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/i-hear-the-essex-train-acoming.php">I hear the Essex Train a-coming</a></li>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/long-live-enterprise-clouds.php">Long Live Enterprise Clouds</a></li>
<li>Oleg, <a href="http://mirantis.com/blog/mirantis-blog/under-the-hood-of-swift-the-ring.php">Under the hood of Swift. The Ring</a></li>
<li>Artem Andreev, <a href="http://mirantis.com/blog/mirantis-blog/introducing-openstackagent-for-xenbased-clouds-what.php">Introducing OpenStackAgent for Xen-based Clouds. What?</a></li>
<li>Oleg, <a href="http://mirantis.com/blog/mirantis-blog/diablo-rpm-repository.php">Diablo RPM repository</a></li>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/meet-drink-openstack-in-production-event-highlights.php">Meet & Drink: OpenStack in Production – Event Highlights</a></li>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/converging-openstack-with-nexenta.php">Converging OpenStack with Nexenta</a></li>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/openstack-meet-drink-toast-to-diablo-event-highlights.php">OpenStack Meet & Drink: Toast to Diablo – Event Highlights</a></li>
<li>Yury Taraday, <a href="http://mirantis.com/blog/mirantis-blog/what-is-this-keystone-anyway.php">What is this Keystone anyway?</a></li>
<li>Alexander Gordeev, <a href="http://mirantis.com/blog/mirantis-blog/cloudpipe-image-creation-automation.php">Cloudpipe Image Creation Automation</a></li>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/cloud-accelerates-open-source-adoption.php">Cloud Accelerates Open Source Adoption</a></li>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/tracing-the-it-evolution-from-the-big-bang-to-the-big-crunch.php">Tracing the IT Evolution from the Big Bang to the Big Crunch</a></li>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/our-contribution-to-the-vegas-economy.php">Our Contribution to the Vegas Economy</a></li>
<li>Yury Taraday, <a href="http://mirantis.com/blog/mirantis-blog/ldap-identity-store-for-openstack-keystone.php">LDAP identity store for OpenStack Keystone</a></li>
<li>Roman Bogorodskiy, <a href="http://mirantis.com/blog/mirantis-blog/vcider-virtual-switch-overview.php">vCider Virtual Switch Overview</a></li>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/bay-area-openstack-meet-drink-highlights.php">Bay Area OpenStack Meet & Drink Highlights</a></li>
<li>Oleg, <a href="http://mirantis.com/blog/mirantis-blog/clustered-lvm-on-drbd-resource-in-fedora-linux.php">Clustered LVM on DRBD resource in Fedora Linux</a></li>
<li>Alexander Sakhnov, <a href="http://mirantis.com/blog/mirantis-blog/openstack-nova-basic-disaster-recovery.php">OpenStack Nova: basic disaster recovery</a></li>
<li>Yury Taraday, <a href="http://mirantis.com/blog/mirantis-blog/openstack-nova-and-dashboard-authorization-using-existing-ldap.php">OpenStack Nova and Dashboard authorization using existing LDAP</a></li>
<li>Oleg, <a href="http://mirantis.com/blog/mirantis-blog/shared-storage-for-openstack-based-on-drbd.php">Shared storage for OpenStack based on DRBD</a></li>
<li>Max Lvov, <a href="http://mirantis.com/blog/mirantis-blog/openstack-deployment-on-fedora-using-kickstart.php">OpenStack Deployment on Fedora using Kickstart</a></li>
<li>Boris Renski, <a href="http://mirantis.com/blog/mirantis-blog/make-your-bet-on-open-source-infrastructure-computing.php">Make your bet on open source infrastructure computing</a></li>
</ul>Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-224697695912485952012-05-04T15:00:00.000-07:002012-05-07T11:49:43.132-07:00How to Monetize the OpenStack Wave<br />
<div class="MsoNormal" style="text-align: left;">
Since OpenStack was announced at OSCON in the summer of 2010, the momentum behind this new open source platform has been nothing short of spectacular. Startups and enterprises alike have placed their strategic bets to monetize the OpenStack wave in various ways. As an ecosystem insider and one of the founding sponsors of the OpenStack Foundation, I wanted to offer my views on how various organizations are looking to skin this cat.</div>
<div class="MsoNormal">
<o:p></o:p></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
I’d like to focus on three of the many efforts currently
underway. These three, in particular, happen to be the most vocal about their position and represent three distinct strategy camps. They
are Nebula with its OpenStack appliance, Piston with its PentOS cloud operating
system, and Dell’s Crowbar, an OpenStack installer.<o:p></o:p></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
While all these approaches appear radically different on the surface, all three ultimately offer exactly the same final product to the customer: a cloud-enabled rack of servers. Moreover, in all three cases, the customer is required to purchase a particular type of hardware for the solution to work: the Nebula appliance in the case of Nebula; an Arista switch with Silicon Mechanics servers in the case of Piston; and Dell PowerEdge C servers in the case of Crowbar.</div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Let’s take a closer look at these three approaches and the key
differences between them. </div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Nebula, founded by NASA employees who were originally behind the OpenStack compute (aka Nova) project, has opted to deliver an integrated OpenStack appliance. Advertised as an open solution (…OpenStack, OpenFlow, Open Compute – you name it), in actuality it is a proprietary network switch pre-loaded with software that will automatically deploy an OpenStack-like environment onto a set of servers that plug into it. While maybe not the most open option, Nebula’s offering is probably the most straightforward. Grab the appliance, plug in some Open Compute-compliant servers and storage and, <span style="font-family: Georgia, serif; font-size: 10pt; line-height: 115%;">voilà! You</span> have a cloud (or so they claim).</div>
<div class="MsoNormal" style="margin-bottom: 9.0pt; mso-outline-level: 2;">
<o:p></o:p></div>
<div class="MsoNormal">
Piston, also founded by NASA alums, went the software route. Instead of building a physical network switch, the company chose to deliver PentOS, a cloud operating system based on OpenStack that is hardware-agnostic … kind of. At this point, there is only one certified reference architecture, which specifically requires Arista switches and servers from Silicon Mechanics. Longer term, there is a vision to support most commodity hardware configurations.</div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Finally, there is Dell and Crowbar. Dell’s approach to
riding the OpenStack wave is, perhaps, the most creative. Crowbar is neither a
hardware appliance nor an enterprise version of OpenStack. It is a
configuration management tool built around OpsCode’s Chef, designed
specifically to deploy OpenStack on Dell servers. Crowbar effectively serves as
the glue between the hardware and any distribution of OpenStack (and not only
OpenStack). To generalize these
approaches, I would classify Nebula as belonging to the “hardware camp,” Piston
to the “OpenStack distro camp,” and Dell to the “tools camp.” </div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Aside from the three players I explicitly described above, there are other members in each of these three camps. Nebula’s hardware camp is shared by MorphLabs, with its converged OpenStack infrastructure offering. The OpenStack distro camp, which Piston belongs to, has folks like CloudScaling and StackOps under its roof. Finally, the Dell Crowbar camp is shared by Canonical, with its Puppet-based Juju installer. I believe Rackspace CloudBuilders will eventually join the tools camp in some shape or form. </div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
The key difference between these three camps is in the degree of openness and rigidity of coupling between hardware and software. Ultimately, it all comes down to the age-old tradeoff between simplicity and flexibility. These three camps are generalizations of approaches where companies have placed their bets at different points on the simplicity vs. flexibility tradeoff scale. The hardware camp is betting on simplicity, at the cost of openness. Just like an iPad, Nebula’s appliance may be very simple to use and beautifully designed, but nobody has a clue about what’s inside except for the people who built it. The tools camp is betting on openness, but does so at the expense of ease of use. The OpenStack distro camp is somewhere in the middle.</div>
<div class="MsoNormal">
<o:p></o:p></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
I believe that the tools approach is more effective in the
context of large scale deployments, such as enterprise-wide cloud rollouts,
cloud enablement of service providers, or infrastructure refactoring for SaaS
vendors. Traditionally, the scale of application infrastructure is inversely
related to one’s propensity to use proprietary, closed products. Leveraging third-party
solutions for massive infrastructure build-outs simply doesn’t yield the
agility and economics required to be successful. For that exact reason, when
one examines Facebook, Amazon, or Google infrastructure, virtually no third
party, proprietary solutions can be found inside the stack. <o:p></o:p></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
The simplicity approach targeted by the OpenStack hardware camp and the OpenStack distro camp is what appeals to smaller, department-level enterprise deployments – situations where IT is not the core competency of the organization and application infrastructure is fragmented into a series of diverse silos of smaller scale. These are the scenarios where organizations don’t care about lock-in or cost at scale; they need an out-of-the-box solution that works and a button to push for support when it doesn’t. </div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Now, if we look at the three approaches from a purely business perspective, there is an interesting dynamic at play. (As a co-founder of an OpenStack systems integrator closely partnered with Dell, my opinion on the business aspect is completely biased.) Nevertheless, here is how I see it… </div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
On the one hand, diverse enterprise IT silos make up the bigger chunk of the overall application infrastructure pie, at least for now. On the other hand, when it comes to simple, proprietary enterprise solutions aimed at these diverse IT silos, the line of vendors offering them is out the door and around the corner, with VMware controlling the bulk of the market. </div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Historically, the philosophy of the OpenStack community in general has centered on the very notions of openness, scale, and flexibility. Rackspace, a company with a service provider mentality, has dominated the project technical board since inception and has driven the OpenStack brand forward. Their build-for-scale, build-for-flexibility approach has spilled out into the community and affected both technical and marketing decisions. Consequently, the majority of OpenStack deployments we hear about today – Internap, Korea Telecom, MercadoLibre, Yahoo, etc. – are all in the service provider / Internet application vendor space. I don’t see OpenStack as something that was built for enterprise IT silos. </div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Monetizing OpenStack by betting on openness at the expense of simplicity, as the companies in the tools camp have done, may not be about penetrating the biggest market, but at least it is in line with the current momentum of the OpenStack community. On the other hand, leveraging the still young and fairly immature OpenStack platform, architecturally aimed at Web-scale deployments, to build proprietary offerings that target enterprise IT silos and
rival VMware, Citrix, and Microsoft, is a risky business proposition. </div>Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-67111039017591600522012-04-12T08:28:00.002-07:002012-04-12T11:29:53.536-07:00The New Open Source Superpower<p class="MsoNormal">Today is yet another important day in the history of OpenStack. The initial list of founding organizations for the independent OpenStack foundation has been announced and we, at Mirantis, are proud to be on that list.</p> <p class="MsoNormal">While there is a lot to talk about on what this means for the infrastructure cloud market, I’d like to focus on what it says about the sheer momentum of the OpenStack beast. The non-profit legal entity that will house OpenStack has not yet been formed, but 18 organizations have already pledged significant financial (and not only financial) support to the foundation.<a href="http://wiki.openstack.org/Governance/Foundation/Funding"> The current financing model</a> calls for a $500K/year commitment from a Platinum sponsor and $50-$200K/year from a Gold sponsor. Judging by the current composition of the supporting organizations, it is clear that the new foundation will launch with an initial budget north of $5M. </p> <p class="MsoNormal">So how does this measure up to the rest of the <a href="http://en.wikipedia.org/wiki/Free_and_open_source_software#FLOSS">FLOSS ecosystem</a>? Well, there is a reason why OpenStack has been repeatedly tagged as the Linux of the cloud. With a $5M annual budget, the newly formed OpenStack foundation takes the second spot in the entire FLOSS world. And it is second only to… you guessed it… the Linux Foundation itself. <a href="http://www.itworld.com/it-managementstrategy/260688/nonprofit-open-source-organizations-booming">According to the Form 990 filed by the Linux Foundation in 2010, its operating revenues were $9.6M.</a> Yes, the Linux Foundation budget is still double that of OpenStack…but…come on…Linux is close to 20% of the server market. It also happens to power the majority of all mobile devices. OpenStack = Linux was a vision… judging by these numbers, this vision may soon be realized. </p> <p class="MsoNormal">Another interesting thing that these numbers portray is why OpenStack (<a href="http://mirantis.blogspot.com/2012/04/some-brutally-honest-thoughts-on.html">unlike CloudStack</a>) has opted to create its own foundation, rather than surrendering everything to the governance of the Apache Foundation. <a href="http://www.apache.org/foundation/records/990-2010.pdf">With the Apache Foundation budget under $1M</a>, OpenStack eats it for breakfast. </p> <p class="MsoNormal">Now many of you will argue that none of this matters. The Apache Foundation houses many great projects that are far more mature and popular than OpenStack… true. But can you tell me how many of these are truly vendor-agnostic? And I am not talking about developer tools like Ant, Maven, Beehive, etc. All Apache projects fall into two categories – they are either developer tools or vendor-centric enterprise products: Tomcat – VMware, Hadoop – Cloudera, Cloud.com – will now be Citrix =). </p> <p class="MsoNormal">In my opinion, there is a reason for it, and it is somewhat tied to foundation budgets. Open source is heavily driven by marketing. 
The number one open source company – Red Hat – <a href="http://saviorodrigues.wordpress.com/2006/12/07/obvious-untruths-open-source-advertising-not-necessary/">spends 2-3x more on marketing relative to its revenue than any of its closed-source competitors</a>. Ultimately, it is the marketing spend on an open source project that heavily affects its vendor independence status. If the entire spend comes from a single pocket, there is a single vendor that dominates that product. </p> <p class="MsoNormal">Unlike most Apache open source projects, OpenStack (while still under Rackspace) was backed by a significant marketing and PR budget. Consequently, when foundation plans were being discussed, it was the desire to continue this centralized marketing effort that precluded OpenStack from considering the Apache Foundation as its home. A significant chunk of the $5M raised will be spent by the foundation to promote and protect the OpenStack brand and the projects that the foundation will house. In a sense, this implies that for anyone to derail the vendor-independent status of OpenStack, one will need a marketing budget comparable to the $5M the foundation has raised… I say this is a decent barrier to start with. </p>Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-47507518280541561772012-04-05T13:36:00.003-07:002012-04-05T13:55:27.468-07:00Some Brutally Honest Thoughts on Citrix’s Defection<p class="MsoNormal">When I first heard <a href="http://www.datacenterknowledge.com/archives/2012/04/04/roundup-cloudstacks-move-to-apache/">the announcement about Cloud.com being spun off into the Apache foundation</a>, my initial reaction was to interpret the event as a hostile move by one of the OpenStack community insiders. Citrix is one of the founding members of OpenStack, with representation on the project policy board; the company has been quite active evangelizing the community through <a href="http://www.meetup.com/openstack/events/47512732/">various events</a> and code contributions. So why, all of a sudden, a move that may appear to undermine the OpenStack momentum? </p> <p class="MsoNormal">Let’s take a look at the history. When Citrix bought Cloud.com for more than $200 million<span class="MsoHyperlink"> in July 2011</span>, insider information suggested the company had revenue of only several million. While high valuations were not uncommon in the cloud space, a 40x revenue multiple is quite unusual. Why did Citrix do it? The only answer that comes to mind is that it wanted to quickly gain credibility in the cloud market. </p> <p class="MsoNormal">I believe that corporate politics and relationships also played a role in this deal. Cloud.com was backed by Redpoint Ventures, which had an existing track record of selling its <a href="http://www.citrix.com/english/ne/news/news.asp?newsid=33887">portfolio companies to Citrix</a>. But, more importantly, Cloud.com founder and CEO – Sheng Liang – was also the founder and CTO of Teros Networks, a Web security company that was acquired by <a href="http://www.citrix.com/English/ne/news/news.asp?newsID=22707">the very same Citrix just a few years before Cloud.com was founded.</a> In fact, I am pretty sure that in some sense Cloud.com was Citrix’s skunk works project; acquisition by Citrix was the key part of the Cloud.com business plan. 
While there is nothing wrong with the approach, and I can only compliment the strategy, the early connection between Citrix and Cloud.com was key to its successful exit and the events that followed. </p> <p class="MsoNormal">Just one year before the acquisition of Cloud.com, OpenStack was announced at OSCON and nobody knew what to think of it. It took the open source community by storm, and it soon became evident to all those competing for open cloud dominance that simply ignoring the OpenStack phenomenon was not an option. “Open cloud strategy” soon became synonymous with “OpenStack strategy”. Citrix, a founding member of OpenStack itself, was in a bit of a tight spot. One choice was to abandon its Cloud.com project. Given the OpenStack momentum at the time, this would inevitably translate into the swift death of Cloud.com and $17 million in losses to the VCs backing it. Alternatively, Citrix could go all in, acquire the Cloud.com community to boost its credibility in the open source cloud space and take a stab at creating the dominant distribution of OpenStack, ultimately becoming to OpenStack what Red Hat has become to Linux. In the end, the scales tipped towards the latter option. In May 2011, <a href="http://www.citrix.com/English/ne/news/news.asp?newsID=2311980">Citrix announced its distribution of OpenStack – project Olympus</a>. Two months thereafter, the Cloud.com acquisition was announced. </p> <p class="MsoNormal">However, when the dust settled, it became evident that Citrix’s involvement with Cloud.com and OpenStack (Project Olympus), <a href="http://blogs.citrix.com/2011/07/18/bringing-cloudstack-and-openstack-together/">instead of being complementary as Citrix had anticipated</a>, was perceived as strange and surprising. CloudStack is Java-based, whereas OpenStack is all Python. On the compute side, CloudStack focused on Xen, whereas the dominant hypervisor for OpenStack so far has been KVM. CloudStack was licensed under the GPL, and OpenStack under Apache 2.0. Ultimately, Citrix’s Cloud.com acquisition was sending confusing messages to both communities and Citrix’s customer base. A few months after Citrix’s acquisition, the Cloud.com community had little momentum left. At the same time, the OpenStack community remained wary of Citrix due to its involvement with CloudStack. Consequently, not much has happened with Project Olympus since its announcement over a year ago, until it was officially abandoned with the latest announcement. </p> <p class="MsoNormal">Today, Citrix announced that Cloud.com will find a new home with the Apache foundation. Is it a hostile move that will undermine OpenStack? I see it more as an act of desperation. Clearly, that wasn’t the initial plan when Citrix first acquired Cloud.com. Consequently, Citrix has failed to build the community around Cloud.com, miscalculated the synergies between the two communities, got trumped by OpenStack momentum, and dumped what’s left of Cloud.com to the Apache foundation. They have already announced CloudStack would be open source twice before, yet have received no outside contributions to date. The last commit to Cloud.com on GitHub by a non-Citrix employee is dated several months ago. </p> <p class="MsoNormal">At this point, Citrix has a spotty history when it comes to open source. Open source is built on trust, and they are hard to trust right now. 
Having burned bridges at their last two communities (Xen / Linux) and now OpenStack, it is going to be big challenge for them to revive CloudStack from its present semi-dead state. <o:p></o:p></p><p></p> <p class="MsoNormal"><br /></p><p></p>Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-49810255253993952572012-03-31T03:13:00.002-07:002012-03-31T09:06:32.023-07:00I hear the Essex Train a-coming<p class="MsoPlainText">With Essex train in the wilds of testing, and the Essex release intended date less than 10 days away, we are pretty excited about everyone descending on San Francisco -- practically our home town -- for the Design Summit and conference.</p> <p class="MsoPlainText">Here at Mirantis, the company famous across OpenStack community for distributing vodka bottles at OpenStack meetups, we are gearing up in a big way for the summit and conference. If you haven't seen the <a href="http://openstack.org/conference/san-francisco-2012/sessions/">agenda</a>, here's what we've got teed up:</p> <p class="MsoPlainText" style="margin-left:.5in;text-indent:-.25in;mso-list:l0 level1 lfo1"><!--[if !supportLists]-->(1)<span style="font-family: 'Times New Roman'; font-size: 7pt; "> </span><!--[endif]-->We’ll start the frenzy with just-in-time-training: we have a few seats left at our 2-day OpenStack Boot Camp, crammed into the weekend of April 14-15, right before the summit and conference. <a href="http://www.mirantis.com/training">REGISTER HERE</a> and come to the event fully prepared to torment the presenters with insidious technical questions about OpenStack technology and its future. <span style="font-size: 100%; "> </span></p> <p class="MsoPlainText" style="margin-left:.5in;text-indent:-.25in;mso-list:l0 level1 lfo1"><!--[if !supportLists]-->(2)<span style="font-family: 'Times New Roman'; font-size: 7pt; "> </span><!--[endif]-->Our team will participate in / moderate a few exciting sessions during the conference: <a href="http://openstackconferencespring2012.sched.org/event/f09acb72f11eb4cc126f74622c7d1e86?iframe=yes&w=990&sidebar=no&bg=no#?iframe=yes&w=990&sidebar=no&bg=no">OpenStack and Block Storage</a>, <a href="http://openstackconferencespring2012.sched.org/event/3fb937d7faf553b7746bd1510f6c45fe?iframe=yes&w=990&sidebar=no&bg=no#?iframe=yes&w=990&sidebar=no&bg=no">OpenStack and High Performance Computing</a>, <a href="http://openstackconferencespring2012.sched.org/event/eda8d9162bfd04eb1074ef07daeed890?iframe=yes&w=990&sidebar=no&bg=no#sched-body-outer">Expanding the Community</a>. Please be sure to pay us a visit.</p> <p class="MsoPlainText" style="margin-left:.5in;text-indent:-.25in;mso-list:l0 level1 lfo1"><!--[if !supportLists]-->(3)<span style="font-family: 'Times New Roman'; font-size: 7pt; "> </span><!--[endif]-->…and just to show how happy we are to have you here, we invite the community at the conference to join <a href="http://openstacksummitkickoff2012.eventbrite.com/">Mirantis Summit Kick-Off Party</a>. <a href="http://www.youtube.com/watch?v=zUd8tmrhH5k&feature=related">Click for a preview of what’s to come!</a> Vodka bottles and fun times in the best traditions of all our events are guaranteed. 
Be sure not to miss.</p> <p class="MsoPlainText">Looking forward to receiving everyone at the 2012 OpenStack Design Summit and Conference.<o:p></o:p></p>Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-55370867904043557852012-03-13T17:28:00.002-07:002012-03-13T19:22:55.399-07:00Long Live Enterprise CloudsOne active arena of cloud blog battles lately was about open vs. enterprise clouds. Randy Bias of Cloudscaling is on the forefront of open cloud advocacy with a clear stance that <a href="http://www.cloudscaling.com/blog/cloud-computing/clouds-are-complex-but-simplicity-scales-a-winning-strategy-for-cloud-builders/">anything that is not an open cloud - is a legacy cloud.</a> Massimo, in his <a href="http://it20.info/2012/02/the-cloud-magic-rectangle-tm/">recent blog</a> did a great job categorizing different types of clouds. He, too, gently implies that as organizations’ IT strategies mature, they will be making a shift away from “orchestrated clouds” to “design for fail clouds.” Much of the “cool cloud thinking” today is based on the notion that a) open clouds are good; b) writing applications for infrastructure (rather than tuning infrastructure for the apps) is the future of IT.<br /><br />I disagree with both of these.<br /><br />So let’s start with “open cloud is good, enterprise cloud is bad” stance. In my view, making this comparison is like saying a kitchen knife is better than a swiss-army knife. A kitchen knife is simple, sharp and has no moving parts. Just like an open cloud it is designed to be a simple solution to a single, concrete problem. Ever try to prepare a full meal with a swiss-army knife? Sure, when you go camping, it’s probably fine. But when your mother-in-law is coming to dinner?<br /><br />The fundamental difference between enterprise and open clouds is in their approach to the problem. Open cloud mentality comes from a service provider view of the world where the cloud is not built to support the business, but rather IS THE BUSINESS. The approach is to build from the bottom up for a narrow problem, just like one would if you were a software company and the cloud was your product, aimed at capturing a chunk of some market. <br /><br />In the open cloud world, apps do not dictate the infrastructure; you start with the infrastructure, and go up the stack. In other words, the information technology infrastructure dictates the way the application solves a business problem. <br /><br />In the enterprise world – it’s precisely the other way around . IT exists to support the business. The applications are king and they dictate the IT underneath. Is this the best scenario when it comes to simplifying the infrastructure and containing its cost? Definitely not! Is there an alternative for the enterprise? Definitely not! Reason? There’s an irreducible domain knowledge gap between IT and business. In the case of AWS, Salesforce or Google – IT is the business. In enterprise – IT’s job is to salute the business and support it.<br /><br />Let’s take a concrete example (real life story by the way). We are now enjoying the dawning of the age of big data. Say some entrepreneur decides to take advantage of Hadoop and Lucene and build a new engine for parsing and aggregating bioinformatics data, and can extract results that were never before possible. He then sells his marvelous, vertically focused innovation to Pharma companies. 
If I’m at Pfizer and I don’t buy it, but my rivals at Roche buy – I get left behind. But say my IT does not do Hadoop and Lucene and I can’t take the solution to run in a public cloud because of regulatory compliance. Now what do I do?<br /><br />If you guessed that I call my CIO and tell him to stand up the environment that will support this, you’re right. IT has to follow the lead of the business, or the whole business fails. This happens over and over again. Over time, IT has to support an extremely diverse environment. Conceivably, the gap may shrink over time as <a href="http://www.cio.com/article/701222/How_Cloud_Computing_Is_Forcing_IT_Evolution">IT becomes an ever increasingly dominant business process in any vertical</a>, but don’t plan on it happening this month. Or even next month.<br /><br />Now, there is a common view that it doesn’t have to be that way, a view that stems from an elegant but very one-dimensional comparison between IT infrastructure and electricity. I.e., application infrastructure is a commodity, just like electricity. All apps should be built to run on top of this common, standardized infrastructure, and just like we all have the same shape of electrical outlets (except for, well, the Europeans), this is where we’ll all be soon.<br /><br />It sounds great, but I’m sad to say that I have to call bullshit. Electricity and application infrastructure are not the same. Unlike with electricity, there is massive innovation at the bottom part of the application stack. We didn’t even use virtualization until recently. Yesterday it was all about disk, today it is all about SSD.<br /><br />We don’t know what new paradigms will emerge in the coming years. This innovation shakes the entire stack. Going back to my example, Hadoop was not widely used until not too long ago. Had it not existed, the new app would not have been possible and IT would not have had to buy new infrastructure and deploy unknown middleware on it. But because it does, IT has to adjust. And tomorrow there will be a new paradigm, and IT will have to adjust again and again.<br /><br />Commoditization and standardization can only happen in stagnant industries like electricity generation and distribution or growing potatoes, where the world has pretty much stopped. Until that kind of stable stagnation becomes a common theme in the application infrastructure space, there will always be expensive enterprise clouds and open, inexpensive, commodity clouds. The enterprise will be constantly configuring its swiss army knife, aimed at minimizing the pain of dealing with diversity in the stack.Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-5864116897232123922012-02-14T05:51:00.000-08:002012-02-16T12:25:07.458-08:00Under the hood of Swift. The RingThis is the first post in a series that summarizes our analysis of the Swift architecture. We've tried to highlight some points that are not clear enough in the official documentation. Our primary base was an in-depth look into the source code. The Ring is a vital part of the Swift architecture. This half database, half configuration file keeps track of where all data resides in the cluster. For each possible path to any stored entity in the cluster, the Ring points to the particular device on the particular physical node.<br /><br />There are three types of entities that Swift recognizes: accounts, containers and objects. Each type has a ring of its own, but all three rings are put together the same way.
Swift services use the same source code to create and query all three rings. Two Swift classes are responsible for these tasks: <tt>RingBuilder</tt> and <tt>Ring</tt> respectively.<br /><br /><h3>Ring data structure</h3>Each of the three rings in Swift is a structure that consists of three elements:<br /><ul><li>a list of devices in the cluster, also known as <tt>devs</tt> in the <tt>Ring</tt> class;</li><li>a list of lists of device IDs indicating partition-to-device assignments, stored in a variable named <tt>_replica2part2dev_id</tt>;</li><li>an integer number of bits to shift an MD5-hashed path to the account/container/object to calculate the partition index for the hash (partition shift value, <tt>part_shift</tt>).</li></ul><h5>List of devices</h5>The list of devices includes all storage devices (disks) known to the ring. Each element of this list is a dictionary of the following structure:<br /><table cellpadding="1" cellspacing="1" border="0"><tbody><tr><th width="15%">Key</th><th width="15%">Type</th><th>Value</th></tr><tr><td>id</td><td>integer</td><td>Index in the devices list</td></tr><tr><td>zone</td><td>integer</td><td>Zone the device resides in</td></tr><tr><td>weight</td><td>float</td><td>The relative weight of the device to the other devices in the ring</td></tr><tr><td>ip</td><td>string</td><td>IP address of the server containing the device</td></tr><tr><td>port</td><td>integer</td><td>TCP port the server uses to serve requests for the device</td></tr><tr><td>device</td><td>string</td><td>Disk name of the device in the host system, e.g. <tt>sda1</tt>. It is used to identify the disk mount point under <tt>/srv/node</tt> on the host system</td></tr><tr><td>meta</td><td>string</td><td>General-use field for storing arbitrary information about the device. Not used by servers directly</td></tr></tbody></table>Some device management can be performed using values in the list. First, for removed devices, the <tt>'id'</tt> value is set to <tt>'None'</tt>. Device IDs are generally not reused. Second, setting <tt>'weight'</tt> to 0.0 disables the device temporarily, as no partitions will be assigned to that device.<br /><h5>Partitions assignment list</h5>This data structure is a list of N elements, where N is the replica count for the cluster. The default replica count is 3. Each element of the partitions assignment list is an <tt>array('H')</tt>, a compact and efficient Python array of short unsigned integer values. These values are actually indexes into the list of devices (see the previous section). So, each <tt>array('H')</tt> in the partitions assignment list represents a mapping of partitions to device IDs.<br /><br />The ring takes a configurable number of bits from a path's MD5 hash and converts it to a long integer. This number is used as an index into the <tt>array('H')</tt>. This index points to the array element that designates the ID of the device to which the partition is mapped. The number of bits kept from the hash is known as the partition power, and 2 to the partition power is the partition count.<br /><br />For a given partition number, each replica's device will not be in the same zone as any other replica's device. Zones can be used to group devices based on physical locations, power separations, network separations, or any other attribute that could make multiple replicas unavailable at the same time.<br /><h5>Partition Shift Value</h5>This is the number of bits taken from the MD5 hash of the <tt>'/account/[container/[object]]'</tt> path to calculate the partition index for the path.
The partition index is calculated by translating the binary portion of the hash into an integer.<br /><br /><h3>Ring operation</h3>The structure described above is stored as a pickled (see <a href="http://docs.python.org/library/pickle.html">Python <tt>pickle</tt></a>) and gzipped (see <a href="http://docs.python.org/library/gzip.html#gzip.GzipFile">Python <tt>gzip.GzipFile</tt></a>) file. There are three files, one per ring, and usually their names are:<br /><pre><code>account.ring.gz
container.ring.gz
object.ring.gz</code></pre>These files must exist in the <tt>/etc/swift</tt> directory on every Swift cluster node, both Proxy and Storage, as services on all these nodes use them to locate entities in the cluster. Moreover, the ring files on all nodes in the cluster must have the same contents, so the cluster can function properly.<br /><br />There are no internal Swift mechanisms that can guarantee that the ring is consistent, i.e. that the gzip file is not corrupt and can be read. Swift services have no way to tell if all nodes have the same version of the rings. Maintenance of the ring files is the administrator's responsibility. These tasks can be automated by means external to Swift itself, of course.<br /><br />The Ring allows any Swift service to identify which Storage node to query for a particular storage entity. The method <tt>Ring.get_nodes(account, container=None, obj=None)</tt> is used to identify the target Storage node for a given path (<tt>/account[/container[/object]]</tt>). It returns a tuple of the partition and the node dictionaries. The partition is used for constructing the local path to the object file or the account/container database. The node dictionaries have the same structure as the devices in the list of devices (see above).<br /><br /><h3>Ring management</h3>Swift services cannot change the Ring. The Ring is managed by the swift-ring-builder script. When a new Ring is created, the administrator should first specify the builder file and the main parameters of the Ring: the partition power (or partition shift value), the number of replicas of each partition in the cluster, and the minimum time in hours before a specific partition can be moved in succession:<br /><br /><textarea cols="60" rows="2">swift-ring-builder <builder_file> create <part_power> <replicas> <min_part_hours></textarea><br />When the temporary builder file structure is created, the administrator should add devices to the Ring. For each device, the required values are the zone number, the IP address of the Storage node, the port on which the server is listening, the device name (e.g. <tt>sdb1</tt>), optional device metadata (e.g., model name, installation date or anything else) and the device weight:<br /><br /><textarea cols="60" rows="2">swift-ring-builder <builder_file> add z<zone>-<ip>:<port>/<device_name>_<meta> <weight></textarea><br />The device weight is used to distribute partitions between the devices: the greater the weight, the more partitions are going to be assigned to that device. The recommended initial approach is to use devices of the same size across the cluster and set the weight of each device to 100.0. For devices added later, the weight should be proportional to their capacity. At this point, all devices that will initially be in the cluster should be added to the Ring. The consistency of the builder file can be verified before creating the actual Ring file:<br /><br /><textarea cols="60" rows="2">swift-ring-builder <builder_file></textarea><br />If verification succeeds, the next step is to distribute partitions between devices and create the actual Ring file. This is called 'rebalancing' the Ring. This process is designed to move as few partitions as possible to minimize the data exchange between nodes, so it is important that all necessary changes to the Ring are made before rebalancing it:<br /><br /><textarea cols="60" rows="2">swift-ring-builder <builder_file> rebalance</textarea><br />The whole procedure must be repeated for all three rings: account, container and object. The resulting <tt>.ring.gz</tt> files should be pushed to all nodes in the cluster. 
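<br /><br />Once the ring files are distributed, every Swift service uses them in the same way. As a recap of the lookup described in the "Ring operation" section, here is a minimal, self-contained Python sketch of the idea (this is not the actual Swift code, and it ignores details such as the hash path suffix; the names and values are illustrative only): the path is MD5-hashed, the top "partition power" bits of the hash become the partition index, and that index selects one device per replica from <tt>_replica2part2dev_id</tt>:<br /><pre><code>import struct
from hashlib import md5

# Illustrative values only: 2**16 partitions in the cluster.
PART_POWER = 16
PART_SHIFT = 32 - PART_POWER   # bits to drop from the 32-bit hash prefix

def get_partition(account, container=None, obj=None):
    """Map a storage path to its partition index, Ring-style."""
    path = '/' + '/'.join(p for p in (account, container, obj) if p)
    digest = md5(path.encode('utf-8')).digest()
    # Take the first 4 bytes of the hash, keep only PART_POWER bits.
    return struct.unpack_from('>I', digest)[0] >> PART_SHIFT

def get_nodes(replica2part2dev_id, devs, part):
    """Return the device dictionary for every replica of a partition."""
    return [devs[part2dev[part]] for part2dev in replica2part2dev_id]

part = get_partition('AUTH_demo', 'photos', 'cat.jpg')</code></pre>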
Builder files are also needed for future changes to the rings, so they should be backed up and kept in a safe place. One approach is to put them into Swift storage as ordinary objects.<br /><br /><h3>Physical disk usage</h3>A partition is essentially a block of data stored in the cluster. This does not mean, however, that disk usage is constant for all partitions. The distribution of objects between the partitions is based on the object path hash, not the object size or other parameters. Objects are not partitioned, which means that an object is kept as a single file in the storage node's file system (except very large objects, greater than 5 GB, which can be uploaded in segments - see <a href="http://docs.openstack.org/trunk/openstack-object-storage/admin/content/using-swift-to-manage-segmented-objects.html">the Swift documentation</a>).<br /><br />The partition mapped to a storage device is actually a directory in the structure under <tt>/srv/node/<dev_name></tt>. The disk space used by this directory may vary from partition to partition, depending on the size of the objects that have been placed in this partition by mapping the hash of the object path to the Ring.<br /><br />In conclusion, it should be said that the Swift Ring is a beautiful structure, though it lacks a degree of automation and synchronization between nodes. I'm going to write about how to solve these problems in one of the following posts.<br /><br /><h3>More information</h3>More information about the Swift Ring can be found in the following sources:<br /><a href="http://swift.openstack.org/overview_ring.html">Official Swift documentation</a> - the base source for the description of the data structure<br /><a href="https://github.com/openstack/swift/tree/master/swift/common/ring">Swift Ring source code on GitHub</a> - code base of the <tt>Ring</tt> and <tt>RingBuilder</tt> Swift classes.<br /><a href="http://blog.chmouel.com/">Blog of Chmouel Boudjnah</a> - contains useful Swift hintsAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-24303108588343868542012-01-30T05:28:00.000-08:002012-01-30T06:34:53.866-08:00Introducing OpenStackAgent for Xen-based Clouds. What?<div dir="ltr" style="text-align: left;" trbidi="on">
<h2>
What it is all about</h2>
Not long ago we were working on the deployment of an OpenStack Cactus-based public cloud using Xen as the underlying hypervisor. One of the problems we faced was Windows guest instances failing to set their administrator password to the one generated by nova on instance creation. As it turned out, the overall process of compute-to-guest-instance communication in an OpenStack-Xen environment is rather tricky (see the illustration). One of the core components of the process is the so-called guest agent - a special user-space service which runs within the guest OS and executes commands provided from outside. Originally we used the guest agent implementation provided by Rackspace. One can find the source code both for *nix and Windows OS on the <a href="https://launchpad.net/openstack-guest-agents" target="_blank">Launchpad page</a>. Although the project seemed quite stable at the time, the service built from the C# code and combined with the Cactus version of the nova plugin for Xen was unable to set the password for Windows instances. Deep log analysis revealed a problem at the stage of cryptography engine initialization. It should be noted that the procedure for resetting the administrator’s password is itself complex. It starts with a Diffie-Hellman key exchange between compute and the guest agent. Next, the password is encrypted for the sake of security and sent to the agent via the public channel, i.e. the Xen Store. Since the deadline was only several hours away, we had no time to set up a proper environment for debugging, so we decided to take a rather immature step which afterwards turned out to be a success. Hastily, we implemented our own guest agent service using the pywin32 library. Later on, it acquired several additional features, including an MSI installer, and grew into a separate project named OpenStackAgent. And now we would like to introduce it to the community.<br />
<a name='more'></a><a href="http://2.bp.blogspot.com/-I9pPJISIoQQ/TyaUpch9ajI/AAAAAAAAAAM/q0PlspKQENs/s1600/openstackagent.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="335" src="http://2.bp.blogspot.com/-I9pPJISIoQQ/TyaUpch9ajI/AAAAAAAAAAM/q0PlspKQENs/s400/openstackagent.png" width="400" /></a>
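<br /><br />To give a feel for the password-change flow shown above, here is a toy Python sketch of the idea only (this is not the actual agent protocol: the real prime, key sizes and cipher used by nova and the guest agents are different, and a real implementation uses a proper symmetric cipher rather than the XOR toy below). Both sides derive a shared secret via Diffie-Hellman, after which the password travels over the public Xen Store channel in encrypted form:<br /><pre><code>import hashlib
import random

# Toy parameters for illustration only.
PRIME = 2 ** 107 - 1   # a Mersenne prime; real deployments use standard DH groups
BASE = 5

def dh_keypair():
    """Generate a Diffie-Hellman private/public key pair."""
    private = random.randrange(2, PRIME - 1)
    return private, pow(BASE, private, PRIME)

def dh_shared(private, other_public):
    """Both sides compute the same shared secret."""
    return pow(other_public, private, PRIME)

def xor_crypt(data, secret):
    """Toy 'cipher': XOR with a hash-derived keystream (not for real use)."""
    key = hashlib.sha256(str(secret).encode()).digest()
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

# nova-compute and the guest agent each generate a key pair;
# the public halves are exchanged via the Xen Store.
compute_priv, compute_pub = dh_keypair()
agent_priv, agent_pub = dh_keypair()

secret = dh_shared(compute_priv, agent_pub)
assert secret == dh_shared(agent_priv, compute_pub)

# Compute encrypts the new admin password; the agent decrypts and applies it.
blob = xor_crypt(b'new-admin-password', secret)
assert xor_crypt(blob, secret) == b'new-admin-password'</code></pre>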
<br />
<h2>
What it is currently capable of</h2>
<ul>
<li>Password changing, both at instance spawn time and at run time.</li>
<li>Updating itself from the network when an “update” command is received.</li>
<li>Running and logging :)</li>
</ul>
The capabilities look quite limited, don’t they? However, it is only at version 0.0.5 and development is under way. So visit the <a href="http://github.com/Mirantis/osagent/wiki" target="_blank">project homepage</a> soon to find new features.
<br />
<h2>
What else is going to be implemented</h2>
<ul>
<li>Support *nix OSes as well. We’re planning to share the same code base for all types of guest OSes.</li>
<li>Switch to the PyInstaller extension for distribution package creation. Get rid of py2exe to make building truly cross-platform.</li>
<li>Support guest network adapter configuration commands and file injection as well.</li>
<li>Tests, tests and once again tests.</li>
</ul>
<h2>
What does one need to use it</h2>
<b>Building</b><br />
In order to build the agent, one will need the following software to be installed:
<br />
<ul>
<li>Python interpreter of version 2.7 or higher</li>
<li><a href="http://www.voidspace.org.uk/python/modules.shtml#pycrypto" target="_blank">PyCrypto version 2.x. Pre-compiled binary distribution for Windows</a>.</li>
<li><a href="http://sourceforge.net/projects/py2exe/files/" target="_blank">py2exe extension for Python</a></li>
<li><a href="http://pypi.python.org/pypi/py2exe2msi" target="_blank">py2exe2msi extension</a>. Easily installable from PyPi. with <i>easy_install py2exe2msi</i> command</li>
</ul>
After everything is ready to build, run <i>python guest_agent/setup.py py2exe2msi</i> and find the compiled MSI package in the current working directory.<br />
<br />
<b>Running</b><br />
In order to run the compiled service, the following requirements have to be satisfied on the target machine:
<br />
<ul>
<li><a href="http://www.blogger.com/%5Bhttp://www.microsoft.com/download/en/details.aspx?displaylang=en&id=29" target="_blank">Microsoft Visual C 2008 SP1 Runtime</a>. </li>
<li>The latest version of Xen Guest Utilities installed</li>
</ul>
Just install the MSI package from the "Building" step and the service will be started automatically. In order to troubleshoot, look up the system application event log or the log file located at <i>%WINDIR%\Logs\OpenStackAgent.log</i>
<br />
<br />
<b>Updating</b><br />
Just install the MSI package of a newer version into the system. It will automatically replace all the required components and restart services.<br />
<h2>
What should one do to contribute</h2>
Fork it, update it, merge it using the <a href="http://github.com/Mirantis/osagent" target="_blank">GitHub repository</a> but make sure you follow the Apache 2.0 license.</div>Artem Andreevhttp://www.blogger.com/profile/02872709165936773918noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-57057275379760970072011-12-29T03:41:00.000-08:002012-01-09T22:34:10.040-08:00Diablo RPM repositoryRecently we've deployed OpenStack Diablo release for one of our customers. The target operating system happened to be CentOS 6.0. During deployment testing we've stumbled upon a number of bugs in OpenStack RPMs that we've tried to use.<br /><br />All existing RPMs of OpenStack that we've found contained problems that prevented components from operating correctly with each other:<br />1. Incompatible protocol in packaged version of Keystone (already fixed): <a href=https://lists.launchpad.net/openstack/msg04876.html>https://lists.launchpad.net/openstack/msg04876.html</a><br />2. Json template bug (already fixed): <a href=https://bugs.launchpad.net/keystone/+bug/865448/>https://bugs.launchpad.net/keystone/+bug/865448/</a><br />3. ISCSI target management troubles: <a href=https://bugzilla.redhat.com/show_bug.cgi?id=737046>https://bugzilla.redhat.com/show_bug.cgi?id=737046</a><br /><br />In addition, there was no packaged <b>nova-vnc</b> in CentOS repositories.<br />So we've fixed these bugs and established our own repository for OpenStack Diablo. Packages added there have been tested in a real-world deployment.<br /><br />You can easily install the repository on your CentOS system using wget:<br /><br /><pre>$ sudo wget -O /etc/yum.repos.d/epel-mirantis.repo http://download.mirantis.com/epel-el6-mirantis/epel-mirantis.repo</pre><br /><br />You can browse the repository here: <a href=http://download.mirantis.com/epel-el6-mirantis/>Mirantis OpenStack Diablo</a>Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-46561769231723198062011-12-20T15:06:00.001-08:002011-12-21T17:36:06.369-08:00Meet & Drink: OpenStack in Production – Event Highlights<font style="font-family:arial"; size="2">As a matter of tradition at this point, we offer a photo report, covering OpenStack Meetup event series hosted by Mirantis and Silicon Valley Cloud Center. Our December 14th event focused on sharing experience around running OpenStack in production. 
I moderated a panel consisting of Ken Pepple – director of cloud development at Internap, Ray O’Brian – CTO of IT at NASA and Rodrigo Benzaquen – R&D director at MercadoLibre.<br /><br />This time we went all out and even recorded the video of the event:</font> <br /><br /><iframe src="http://player.vimeo.com/video/33982906?title=0&byline=0&portrait=0" width="400" height="225" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe><br /><br /><font style="font-family:arial"; size="2">For those that are not in the mood to watch this 50 minute panel video, here is a quick photo report:</font> <br /><br /><a href="http://3.bp.blogspot.com/-qW9GtzK79cM/TvEaRAgzWzI/AAAAAAAAAI4/RW7zBKxEc4I/s1600/1_small.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://3.bp.blogspot.com/-qW9GtzK79cM/TvEaRAgzWzI/AAAAAAAAAI4/RW7zBKxEc4I/s400/1_small.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5688356683843328818" /></a><br /><font style="font-family:arial"; size="2">We served wine and beer with pizza, salad and deserts...</font> <br /><br /><br /><a href="http://4.bp.blogspot.com/-TuZXU4JlR64/TvEZJAPE4fI/AAAAAAAAAIs/Nkktb5uTgWE/s1600/2_small.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://4.bp.blogspot.com/-TuZXU4JlR64/TvEZJAPE4fI/AAAAAAAAAIs/Nkktb5uTgWE/s400/2_small.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5688355446818398706" /></a><br /><font style="font-family:arial"; size="2">...While people ate, drank, and mingled...</font> <br /><br /><br /><a href="http://3.bp.blogspot.com/-YeEvqfQQYn0/TvEaeG0bdlI/AAAAAAAAAJE/wT88LX_Oim4/s1600/3_small.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://3.bp.blogspot.com/-YeEvqfQQYn0/TvEaeG0bdlI/AAAAAAAAAJE/wT88LX_Oim4/s400/3_small.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5688356908874561106" /></a><br /><font style="font-family:arial"; size="2">…and then they drank some more…</font> <br /><br /><br /><a href="http://3.bp.blogspot.com/-RP1VDFpFyuc/TvEaiV6uCLI/AAAAAAAAAJQ/rYOdO5kYkEg/s1600/4_small.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://3.bp.blogspot.com/-RP1VDFpFyuc/TvEaiV6uCLI/AAAAAAAAAJQ/rYOdO5kYkEg/s400/4_small.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5688356981646952626" /></a><br /><font style="font-family:arial"; size="2">We started the panel with myself saying smart stuff about OpenStack. 
After the intro we kicked off with questions to the panel.</font> <br /><br /><br /><a href="http://4.bp.blogspot.com/-HOQQqNw7y50/TvEamRznWhI/AAAAAAAAAJc/4tBzKSbU2oc/s1600/5_small.jpg"><img style="cursor:pointer; cursor:hand;width: 267px; height: 400px;" src="http://4.bp.blogspot.com/-HOQQqNw7y50/TvEamRznWhI/AAAAAAAAAJc/4tBzKSbU2oc/s400/5_small.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5688357049262889490" /></a><br /><font style="font-family:arial"; size="2">The panelists talked...</font><br /><br /><br /><a href="http://1.bp.blogspot.com/-T3_hX5-Ly_E/TvEap_Aed0I/AAAAAAAAAJo/DS6L1C7vRZs/s1600/6_small.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://1.bp.blogspot.com/-T3_hX5-Ly_E/TvEap_Aed0I/AAAAAAAAAJo/DS6L1C7vRZs/s400/6_small.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5688357112936036162" /></a><br /><font style="font-family:arial"; size="2">...and talked...</font><br /><br /><br /><a href="http://3.bp.blogspot.com/-q9xv16MmRZ4/TvEatqqlsPI/AAAAAAAAAJ0/8j19bboFx9k/s1600/7_small.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://3.bp.blogspot.com/-q9xv16MmRZ4/TvEatqqlsPI/AAAAAAAAAJ0/8j19bboFx9k/s400/7_small.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5688357176195002610" /></a><br /><font style="font-family:arial"; size="2">...and then talked some more.</font><br /><br /><br /><a href="http://2.bp.blogspot.com/-2gqtZIPU5vI/TvEawNqu2XI/AAAAAAAAAKA/wGUkiSnQ2Cg/s1600/8_small.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://2.bp.blogspot.com/-2gqtZIPU5vI/TvEawNqu2XI/AAAAAAAAAKA/wGUkiSnQ2Cg/s400/8_small.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5688357219950582130" /></a><br /><font style="font-family:arial"; size="2">Meanwhile, the audience listened...</font><br /><br /><br /><a href="http://2.bp.blogspot.com/-p6MUXd5F034/TvEazFkyjNI/AAAAAAAAAKM/kZ-BSlwb_1U/s1600/9_small.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://2.bp.blogspot.com/-p6MUXd5F034/TvEazFkyjNI/AAAAAAAAAKM/kZ-BSlwb_1U/s400/9_small.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5688357269317782738" /></a><br /><font style="font-family:arial"; size="2">...and listened.</font><br /><br /><br /><a href="http://3.bp.blogspot.com/-PUm_vc75aDk/TvEa2f4eQ7I/AAAAAAAAAKY/kB0WBa_LOJA/s1600/10_small.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://3.bp.blogspot.com/-PUm_vc75aDk/TvEa2f4eQ7I/AAAAAAAAAKY/kB0WBa_LOJA/s400/10_small.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5688357327919268786" /></a><br /><font style="font-family:arial"; size="2">Everyone in our US team was sporting these OpenStack shirts.</font><br /><br /><br /><a href="http://2.bp.blogspot.com/-4EQ93P2yIAk/TvEa5sVkCvI/AAAAAAAAAKk/LcIh0Dfn33U/s1600/11_small.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://2.bp.blogspot.com/-4EQ93P2yIAk/TvEa5sVkCvI/AAAAAAAAAKk/LcIh0Dfn33U/s400/11_small.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5688357382802115314" /></a><br /><font style="font-family:arial"; size="2">At the end we gave out 5 signed copies of "Deploying OpenStack" books, written by one of our panelists - Ken Pepple. 
Roman (pictured above) did not get a copy.</font>Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-48108789672544802522011-11-24T15:10:00.000-08:002011-11-24T15:17:12.758-08:00Converging OpenStack with NexentaFor those folks that have missed our webcast on using OpenStack Compute with NexentaStor for managing VM volumes, recording is below. <br /><br />Please note, you can download the NexentaStor driver for OpenStack here: <a href="http://www.nexentastor.org/projects/osvd/files">http://www.nexentastor.org/projects/osvd/files</a>. <br /><br />You can also read additional information about this project here: <a href="http://wiki.openstack.org/NexentaVolumeDriver">http://wiki.openstack.org/NexentaVolumeDriver</a><br /><br /><iframe src="http://player.vimeo.com/video/32498061?title=0&byline=0&portrait=0" width="400" height="320" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe><br /><br />If you need help installing / troubleshooting the Nexenta driver for OpenStack, please do <a href="mailto: info@mirantis.com">contact us. </a>Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-22014510695090931402011-09-29T15:01:00.000-07:002011-09-29T15:38:00.777-07:00OpenStack Meet & Drink: Toast to Diablo – Event Highlights<span style="font-family:arial;">As usual, here are the highlights from the last Bay Area OpenStack Meet & Drink: Toast to Diablo – September 28th, 2011. Thanks to WireRE for hosting us, Dave Nielsen – for helping to organize, and all the attendees – for coming. Once again, this was the biggest MeetUp thus far with 150 in attendance. For those of you that didn’t come – here is what you missed:</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/-I92M3a_WGnM/ToTufnZ2dRI/AAAAAAAAAGg/NF4kNiYuYmc/s1600/01.JPG"><img style="cursor:pointer; cursor:hand;width: 268px; height: 400px;" src="http://3.bp.blogspot.com/-I92M3a_WGnM/ToTufnZ2dRI/AAAAAAAAAGg/NF4kNiYuYmc/s400/01.JPG" alt="" id="BLOGGER_PHOTO_ID_5657909258804950290" border="0" /></a><br /><br /><span style="font-family:arial;">We started our Diablo release celebration with wine, beer and pizza. Fun mingling with fellow stackers. 
As people kept arriving it got almost too crowded.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-rtl5KFWJ-NA/ToTuiqN14PI/AAAAAAAAAGo/2jyFKOzEj0M/s1600/02.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 268px;" src="http://2.bp.blogspot.com/-rtl5KFWJ-NA/ToTuiqN14PI/AAAAAAAAAGo/2jyFKOzEj0M/s400/02.JPG" alt="" id="BLOGGER_PHOTO_ID_5657909311099494642" border="0" /></a><br /><br /><span style="font-family:arial;">Mirantis founder – Alex Freedland – passionately explaining something to David Allen.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-D1wI1vKOja8/ToTul0z3cJI/AAAAAAAAAGw/7QatRBv2ltY/s1600/03.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 268px;" src="http://1.bp.blogspot.com/-D1wI1vKOja8/ToTul0z3cJI/AAAAAAAAAGw/7QatRBv2ltY/s400/03.JPG" alt="" id="BLOGGER_PHOTO_ID_5657909365482942610" border="0" /></a><br /><br /><span style="font-family:arial;">Mike Scherbakov from Mirantis, Josh McKenty from Pison and Eric from CloudScaling debating OpenStack with noticeable vigor.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-zd60OcrCgd0/ToTuory4GPI/AAAAAAAAAG4/w2WQ0_aBXJ0/s1600/04.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 268px;" src="http://1.bp.blogspot.com/-zd60OcrCgd0/ToTuory4GPI/AAAAAAAAAG4/w2WQ0_aBXJ0/s400/04.JPG" alt="" id="BLOGGER_PHOTO_ID_5657909414602479858" border="0" /></a><br /><br /><span style="font-family:arial;">Eric Windisch proudly sporting his uber cool CloudScaling shirt, listing to Mike Scherbakov from Mirantis.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/-syCwsU6sXQ8/ToTus1_dSmI/AAAAAAAAAHA/Nl5tqg4CkcU/s1600/05.JPG"><img style="cursor:pointer; cursor:hand;width: 267px; height: 400px;" src="http://3.bp.blogspot.com/-syCwsU6sXQ8/ToTus1_dSmI/AAAAAAAAAHA/Nl5tqg4CkcU/s400/05.JPG" alt="" id="BLOGGER_PHOTO_ID_5657909486059080290" border="0" /></a><br /><br /><span style="font-family:arial;">While the crowd was mingling, Dave Nielsen took people on datacenter tours. The datacenter basically looked like this.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-olVO3oR8Mck/ToTvAV_J89I/AAAAAAAAAHQ/4LWz2RKF3po/s1600/06.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 268px;" src="http://1.bp.blogspot.com/-olVO3oR8Mck/ToTvAV_J89I/AAAAAAAAAHQ/4LWz2RKF3po/s400/06.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5657909821065262034" /></a><br /><br /><span style="font-family:arial;">As usual, I opened with some thank you's and acknowledgements to our sponsors and organizers. 
Marc Padovani of HP Cloud Services – clapping and anxiously waiting his turn to tell the crowd about OpenStack based hpcloud.com.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-NIbljwSH5EQ/ToTvDomLnJI/AAAAAAAAAHY/p6CslYYfZwI/s1600/07.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://4.bp.blogspot.com/-NIbljwSH5EQ/ToTvDomLnJI/AAAAAAAAAHY/p6CslYYfZwI/s400/07.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5657909877600394386" /></a><br /><br /><span style="font-family:arial;">With 150 stackers in attendance, we didn’t have quite enough chairs to accommodate everyone.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/-WTOcv855j7A/ToTvGRZTJII/AAAAAAAAAHg/YFNl0h-8ctw/s1600/08.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 268px;" src="http://3.bp.blogspot.com/-WTOcv855j7A/ToTvGRZTJII/AAAAAAAAAHg/YFNl0h-8ctw/s400/08.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5657909922911954050" /></a><br /><br /><span style="font-family:arial;">Dave Nielsen talking about our venue host – WiredRE.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-yuPM9JqrLE8/ToTvJsrxgJI/AAAAAAAAAHo/aEJDVD5yN5E/s1600/09.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://2.bp.blogspot.com/-yuPM9JqrLE8/ToTvJsrxgJI/AAAAAAAAAHo/aEJDVD5yN5E/s400/09.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5657909981776806034" /></a><br /><br /><span style="font-family:arial;">Chris Kemp – CEO and Founder of Nebula announced the OpenStack Silicon Valley LinkedIn group that Nebula recently started.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-VgwAoq0IjQA/ToTvMG87IYI/AAAAAAAAAHw/t6BN2uJtDhM/s1600/10.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 268px;" src="http://2.bp.blogspot.com/-VgwAoq0IjQA/ToTvMG87IYI/AAAAAAAAAHw/t6BN2uJtDhM/s400/10.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5657910023187800450" /></a><br /><br /><span style="font-family:arial;">…meanwhile, Josh McKenty was waiting for his turn to speak…</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-Jjr4ZTALNm0/ToTvPEozB4I/AAAAAAAAAH4/6D9wRcMG674/s1600/11.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://2.bp.blogspot.com/-Jjr4ZTALNm0/ToTvPEozB4I/AAAAAAAAAH4/6D9wRcMG674/s400/11.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5657910074106120066" /></a><br /><br /><span style="font-family:arial;">Don’t remember why, but for some reason Josh’s presentation involved talking about O-Ren Ishi from Kill Bill. 
Whatever it was, Chris Kemp got a kick out of it.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-WqZMm0zefqM/ToTvRVjMB3I/AAAAAAAAAIA/kbd_tVxEPEQ/s1600/12.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 268px;" src="http://4.bp.blogspot.com/-WqZMm0zefqM/ToTvRVjMB3I/AAAAAAAAAIA/kbd_tVxEPEQ/s400/12.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5657910113005733746" /></a><br /><br /><span style="font-family:arial;">Everybody likes Kill Bill, so the crowd was cheering.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/-_R0MP5C9cH0/ToTvTqsKZuI/AAAAAAAAAII/pQ2PlYwOv3A/s1600/13.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://3.bp.blogspot.com/-_R0MP5C9cH0/ToTvTqsKZuI/AAAAAAAAAII/pQ2PlYwOv3A/s400/13.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5657910153040258786" /></a><br /><br /><span style="font-family:arial;">Geva Perry shared his perspective on why OpenStack’s strength is in its ecosystem of developers and partners.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-9lq_HpK7VBQ/ToTvV7z-U2I/AAAAAAAAAIQ/MGTEdhVtMRc/s1600/14.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 268px;" src="http://4.bp.blogspot.com/-9lq_HpK7VBQ/ToTvV7z-U2I/AAAAAAAAAIQ/MGTEdhVtMRc/s400/14.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5657910191996162914" /></a><br /><br /><span style="font-family:arial;">Jason Venner of X.com talked about OpenStack and CloudFoundry. He was careful not to reveal anything with respect to the upcoming “October 13th” announcement of X.commerce platform.<br /><br />In closing we had Marc Padovani from HP talk about hpcloud and HP’s commitment to OpenStack. The presentation quickly turned into a Q&A grilling session, with stackers expressing their suspicions over hpcloud.com being a smoke screen, rather than real offering. Marc did his best to address the questions without incriminating his big corporation… My wife got too tired of taking pictures at that point, so there are none of Marc… sorry Marc.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-8fZTGWueG9E/ToTvYTCginI/AAAAAAAAAIY/mtGiq_IeRcY/s1600/15.JPG"><img style="cursor:pointer; cursor:hand;width: 268px; height: 400px;" src="http://4.bp.blogspot.com/-8fZTGWueG9E/ToTvYTCginI/AAAAAAAAAIY/mtGiq_IeRcY/s400/15.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5657910232590879346" /></a><br /><br /><span style="font-family:arial;">Hungry stackers drank most of the wine and ate most of the food. Whatever was left over, people took home. We kept one last bottle of Cloud Wine. I intend to give it as a gift to our 500th MeetUp member – Ilan Rabinovich. Ilan – if you read this, ping me on twitter @zer0tweets to claim your prize!<br /><br />Thank you to everyone and we’ll do it again in 3 months.</span>Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-64804252894454761532011-09-23T01:16:00.000-07:002011-09-23T01:24:10.534-07:00What is this Keystone anyway?<p>The simplest way to authenticate a user is to ask for credentials (login+password, login+keys, etc.) and check them over some database. 
But when it comes to many separate services, as in the <a href="http://openstack.org/">OpenStack</a> world, we have to rethink that approach. The main problem is that there is no single user identity that can be authorized everywhere. For example, a user expects <a href="http://nova.openstack.org/">Nova</a> to take their credentials and, on their behalf, create or fetch images in <a href="https://launchpad.net/glance">Glance</a> or set up networks in <a href="http://wiki.openstack.org/Quantum">Quantum</a>. This cannot be done without a central authentication and authorization system.</p>
<p>So now we have one more OpenStack project - <a href="http://wiki.openstack.org/keystone">Keystone</a>. It is intended to hold all the common information about users and their capabilities across the other services, along with a list of those services themselves. We have spent some time explaining to our friends what it is, why it exists, and how it works, so we decided to blog about it. What follows is an explanation of every entity that drives Keystone’s life. Of course, this explanation may become outdated in no time, since the Keystone project is very young and is developing very fast.</p>
<p>The first concept is the user. Users are just that: they represent someone or something that can gain access through Keystone. Users come with credentials that can be checked, such as passwords or API keys.</p>
<p>The second one is the tenant. It corresponds to what Nova calls a project, meaning something that aggregates a number of resources in each service. For example, a tenant can have some machines in Nova, a number of images in Swift/Glance, and a couple of networks in Quantum. Users are always bound to some tenant by default.</p>
<p>The third and last authorization-related kind of object is the role. A role represents a group of users that is assumed to have some access to resources, e.g. some VMs in Nova and a number of images in Glance. Users can be added to a role either globally or within a tenant. In the first case, the user gains the access implied by the role to resources in all tenants; in the second case, that access is limited to the resources of the corresponding tenant. For example, a user can be an operator of all tenants and an admin of their own playground.</p>
<p>Now let’s talk about service discovery capabilities. With the first three primitives, any service (Nova, Glance, Swift) can check whether or not a user has access to its resources. But to access some service in a tenant, the user has to know that the service exists and find a way to reach it. So the basic objects here are services; they are really just distinguished names. The roles we’ve just talked about can be not only global but also bound to a particular service. For example, when Swift requires administrator access to create some object, it should not also require the user to have administrator access to Nova. To achieve that, we create two separate Admin roles - one bound to Swift and another bound to Nova. After that, admin access to Swift can be given to a user with no impact on Nova, and vice versa.</p>
<p>To access a service, we have to know its endpoint. For that, Keystone stores endpoint templates that provide information about all existing endpoints of all existing services. One endpoint template provides the list of URLs used to access an instance of a service: a public, a private, and an admin one. The public URL is intended to be accessible from the outside world (like http://compute.example.com), the private one is used for access from a local network (like http://compute.example.local), and the admin one is used when administrative access to the service is separated from common access (as it is in Keystone itself).</p>
<p>Now we have a global list of the services that exist in our farm, and we can bind tenants to them. Every tenant can have its own list of service instances, and this binding entity is called the endpoint; it “plugs” the tenant into one particular service instance. This makes it possible, for example, to have two tenants that share a common image store but use distinct compute servers.</p>
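<p>Purely as an illustration (the field names below are our own placeholders, not necessarily what Keystone stores), an endpoint template for a compute service could carry something like this:</p>
<pre><code># illustrative endpoint template entry; field names are made up
service:     nova (type: compute)
public_url:  http://compute.example.com
private_url: http://compute.example.local
admin_url:   http://compute.example.com/admin
</code></pre>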
<p>That is a long list of entities involved in the process, but how does it all actually work?</p>
<ol>
<li>To access any service, a user provides their credentials to Keystone and receives a token. The token is just a string that Keystone internally associates with the user and tenant. This token travels between services with every user request, as well as with the requests one service sends to another while processing the user’s request.</li>
<li>The user then finds the URL of the service they need. If, for example, the user wants to spawn a new VM instance in Nova, they can find the Nova URL in the list of endpoints provided by Keystone and send the appropriate request.</li>
<li>After that, Nova verifies the validity of the token with Keystone; it then needs to create an instance from the image with the provided image ID and plug it into some network. <ul>
<li>First, Nova passes the token to Glance to fetch the image stored there. </li>
<li>After that, it asks Quantum to plug the new instance into a network; Quantum checks in its own database whether the user has access to that network, and asks Nova for information to verify the user’s access to the VM’s interface.</li>
</ul>
Along the way, the token travels between the services so that they can ask Keystone or each other for additional information or actions.</li>
</ol>
<p>Here is a rough diagram of this process:<a href="https://docs.google.com/drawings/d/12xmhLS3Jwqr3IbDkXj9Ta223fH49vRcZSLl23rjtL8A/edit?hl=en_US"><img src="https://docs.google.com/drawings/pub?id=12xmhLS3Jwqr3IbDkXj9Ta223fH49vRcZSLl23rjtL8A&w=716&h=554" width="100%" /></a></p>
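<p>To make this flow a bit more tangible, here is a minimal sketch of what the first two steps can look like from the command line. All hosts, ports, names and passwords are made up, and the exact URL paths and JSON payload depend on the Keystone and Nova versions you run, so treat it strictly as an illustration:</p>
<pre><code># 1. exchange credentials for a token (the response also carries the service
#    catalog with the public/private/admin URLs of the available services)
curl -s -H "Content-Type: application/json" \
     -d '{"auth": {"passwordCredentials": {"username": "joeuser", "password": "secret"},
                   "tenantName": "playground"}}' \
     http://keystone.example.com:5000/v2.0/tokens

# 2. pass the received token along with a request to a service, e.g. Nova
curl -s -H "X-Auth-Token: TOKEN_ID_FROM_STEP_1" \
     http://compute.example.com:8774/v1.1/TENANT_ID/servers
</code></pre>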
Unknownnoreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-84244591828750862152011-09-16T12:06:00.000-07:002011-09-16T12:06:35.680-07:00Cloudpipe Image Creation Automation<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<br /></div>
Cloudpipe is used in OpenStack to provide access to a project’s instances when using VLAN networking mode. It is just a custom Virtual Machine (VM) prepared in a special way, i.e. it comes with appropriately configured
openvpn and startup scripts. More details on what cloudpipe is and why it is needed are available in the <a href="http://docs.openstack.org/cactus/openstack-compute/admin/content/cloudpipe-per-project-vpns.html">OpenStack documentation</a>.
<br />
The process of creating an image involves <a href="http://nova.openstack.org/devref/cloudpipe.html">a lot of manual steps</a> that beg to be automated.
To simplify these steps, I wrote a simple script that uses some libvirt features to provide a fully automated solution, so that you don’t even have to bother with preparing the base VM manually.
<br />
The solution can be found <a href="https://github.com/Mirantis/cloudpipe-image-auto-creation">on GitHub</a> and consists of three parts:
<br />
<ul>
<li>The first, <span style="font-family: monospace;">ubuntukickstart.sh</span>, is the main part and the only one you need to execute yourself.
When you run it, it configures the virtual network and PXE,
then starts a new VM that installs a minimal Ubuntu server via
kickstart, so the installation is fully automated and unattended.
</li>
<li>The second, <span style="font-family: monospace;">cloudpipeconf.sh</span>, turns the minimal Ubuntu server into a cloudpipe image. It is executed once the VM is ready for this conversion.
</li>
<li>
The last part, <span style="font-family: monospace;">ssh.fs</span>, is used to ssh into the VM and shut it down.
</li>
</ul>
So, if you need a cloudpipe image, just run <span style="font-family: monospace;">ubuntukickstart.sh</span> and
wait. You’ll get the cloudpipe image without a single mouse click or
keystroke!
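<br />For the record, here is roughly what the whole procedure looks like in a terminal; this is only a sketch, so see the README linked below for details and prerequisites:
<br /><pre><code># fetch the scripts
git clone https://github.com/Mirantis/cloudpipe-image-auto-creation.git
cd cloudpipe-image-auto-creation

# kick off the fully automated build; run as root if your libvirt
# and network setup requires it
./ubuntukickstart.sh
</code></pre>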
<br />
More detailed information about how it works can be found in the <a href="https://github.com/Mirantis/cloudpipe-image-auto-creation/blob/master/README.markdown">README</a> file.
<br />
Don’t hesitate to leave a comment if you have any questions or concerns.</div>
Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-33990792063848851582011-09-08T16:05:00.001-07:002011-09-08T16:20:32.565-07:00Cloud Accelerates Open Source Adoption<span style="font-family:arial;">Historically, commercial software provided enterprises with reliability and scalability, especially for mission-critical tasks. No one wanted to risk failure in finance, operations, or any essential or enterprise-wide areas. So, enterprises considered open source technology only for less important, more tactical purposes.<br /><br />Recently, however, many large IT organizations have developed significant open source strategies. Cisco, Dell, NASA, and Rackspace came together to give birth to OpenStack. VMWare acquired SpringSource and shortly thereafter, announced Cloud Foundry, their open source PaaS. Amazon, salesforce.com, and others built solutions entirely on an open source stack. Whole categories of technologies, such as noSQL databases, made their way to mass adoption shortly after being open sourced by Google and Facebook. There has been more activity in open source during the last two years than in the preceding decade. So what’s going on here?<br /><br />Without a doubt, cloud is the IT topic that’s been grabbing headlines and investment dollars in the past few years. The recent high level of activity in open source noticeably correlates with the cloud movement, because there is a deep, synergetic relationship between the two. In fact, cloud is the primary driver for the increased adoption of open source.<br /><br />In general, open source projects typically require two components to get community uptake. First, the nature of the project itself has to be technologically challenging. Successful open source projects are largely about solving a set of complex technological tasks vs. just writing a lot of code to support complex business process, such as the case with building enterprise software. Linux, MySQL and BitTorrent are all good examples here. Second, it requires a high rate of end user adoption. The more people and organizations that start using the open source technology at hand, the more mature the community and the technology itself becomes.<br /><br />Cloud has created an enormous amount of technologically challenging fodder for the open source community. The adoption of cloud translates to greater scale at the application infrastructure layer. Consequently, all cloud vendors, from infrastructure to application, are forced to innovate and build proprietary application infrastructure solutions aimed at tackling scale-driven complexity. Facebook’s Cassandra and Google’s Google File System/Hadoop/BigTable stack are prime examples of this innovation.<br /><br />However, it is important to note that neither Facebook, nor Google are in the business of selling middleware. Both make money on advertising. Their middleware stack may be a competitive advantage, but it is by no means THE competitive advantage. Because companies want to keep IT investments as low as possible, a the natural way to drive down costs associated with scale-driven complexity is to have the open developer community help address at least some of the issues to support and growing the stack. The result? Instances like Facebook’s open sourcing of Cassandra and Rackspace contributing its object storage code to OpenStack. Ultimately, cloud drives complexity while cloud vendors channel that complexity down to the open developer community.<br /><br />What about end user adoption? 
Historically, enterprises were slow to adopt open source. Decades of lobbying by vendors of proprietary software have drilled the idea of commercial software superiority deep into the bureaucracy of enterprise IT. Until recently, the biggest selling point for commercial enterprise software was reliability and scalability for mission-critical tasks; open source was “OK” for less important, more tactical purposes. Today, after leading cloud vendors like Amazon, Rackspace, and Google built solutions on top of an open source stack, the case against open source for mission-critical operations or incapable of supporting the required scale is no longer valid.<br /><br />But the wave of open source adoption is not just about the credibility boost it received in recent years. It is largely about the consumption model. Cloud essentially refers to the new paradigm for delivery of IT services. It is an economic model that revolves around “pay for what you get, when you get it.” Surprisingly, it took enterprises a very long time to accept this approach, but last year was pivotal in showing that it is tracking and is the way of the future. Open source historically has been monetized leveraging a model that is much closer to “cloud” than that of commercial software. In the case of commercial software, you buy the license and pay for implementation upfront. If you are lucky to implement, you continue to pay for a subscription that is sold in various forms – support, service assurance, etc. With open source, you are free to implement first, and if it works, you may (or may not) buy commercial support, which is also frequently sold as a subscription to a particular SLA. The cloud hype has helped initiate the shift in the standard for the IT services consumption model. As enterprises wrap their minds around cloud, they shy further away from the traditional commercial software model and move closer to the open source / services-focused model.<br /><br />It is also important to note that the consumption model issue is not simply a matter of perception. There are concrete, tactical drivers behind it. As the world embraces the services model, it is becoming increasingly dominated by service-level agreements (SLAs). People are no longer interested in licensing software products that are just a means to an end. Today, they look for meaningful guarantees where vendors (external providers or internal IT) assure a promised end result. This shift away from end user licensing agreements (EULAs) and toward SLAs is important. If you are a cloud vendor such as Salesforce.com, you are in the business of selling SLA-backed subscription services to your customer. If, at the same time, you rely on a third party vendor for a component of your stack, the SLA of your vendor has to provide the same or better guarantees that you pass on to your client. If your vendor doesn’t offer an SLA or only offers an end user license agreement, you end up having to bridge the gap. These gaps that an organization is forced to bridge ultimately affects its enterprise value. As we move away from the EULA-driven economy and more towards SLAs, open source stands to benefit.<br /><br />Ultimately, as cloud continues to mature, we will continue to see more and faster growth in open source. 
While the largest impact so far has been in the infrastructure space, open source popularity will eventually start spreading up the stack towards the application layer.</span>Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-74183460774353260082011-08-25T14:13:00.000-07:002011-08-25T14:29:54.779-07:00Tracing the IT Evolution from the Big Bang to the Big Crunch<i>How enterprises are progressing from overgrown, difficult-to-manage IT systems to high performance open source infrastructure</i>
<br />
<br /><span style="font-family:arial;">Over the history of computing, we can trace a pattern of continuous decomposition, from a single system into disparate components. Early on, these individual parts made it easier to design, program and maintain systems, and meet the fast-growing demand for more power and more capacity.
<br />
<br />The industry began with the mainframe, where the entire stack from hardware to application logic was contained in a single box. The next phase was the move from mainframe to client-server. This was followed by SOA (service-oriented architecture). This process of decomposition is a natural byproduct of growth in scale. As we consume increasingly more computing and storage, efficiency gains are achieved through specialization.
<br />
<br />Such continuous decomposition is a typical pattern of many industries. Several centuries ago, the model was subsistence farming, where every family as a single unit grew all of their own crops. Today, food production has decomposed into a collection of highly specialized industries.
<br />
<br />However, this process of decomposition in IT injects complexity. At a certain scale, highly decomposed systems become extremely challenging to manage. This then drives a pressing need to abstract away from some of the individual components to a higher level. This is largely what we are observing today with infrastructure computing. The complex mammoth of enterprise IT, today comprised of a spaghetti mix of application servers, relational and noSQL databases, messaging queues, caching and search services, etc., is no longer manageable.
<br />
<br />Gartner labeled 2011 as the year of cloud platforms or PaaS. Thinking of PaaS, we intuitively think Heroku, Force.com, and Google App Engine, all off-premise cloud platforms. But the cloud movement is not just about on-premise versus off-premise. It's about creating an effective means to abstract away from application infrastructure complexity. As mainframes exploded into myriad sub-components, we experienced sort of a Big Bang in enterprise IT. What we are starting to observe now is the Big Crunch, turning application infrastructure back into a more unified, manageable artifact.
<br />
<br /><h3>OpenStack</h3>OpenStack is one of the most interesting initiatives topping the headlines during the last several months, and it's directly related to the Big Crunch. An open source project with the promise to help consolidate the many disparate components of application infrastructure, OpenStack is only a year old and is far from fulfilling this promise today. However, I believe that OpenStack for application infrastructure will eventually become what Linux became to application logic many years ago - a single interface unifying all application infrastructure components and exposing a standardized set of APIs to applications running on top of it.
<br />
<br /><h3>Open Source Cloud Projects and How They Differ</h3>OpenStack is not the first open source cloud project. Eucalyptus, OpenNebula, and Cloud.com all emerged before OpenStack and all of them are still very much alive. However, OpenStack is different from these others because it's the only one that has gained enough critical mass to get on a steady course to mass adoption.
<br />
<br />What enabled OpenStack to reach this point was not an accident, but a clever strategy by RackSpace and other founding members. Rather than following a more common, vendor-centric approach to building an open source community, like Eucalyptus and Cloud.com did, RackSpace quickly figured out that getting a “cloud operating system” to mass adoption would require more marketing muscle than any single vendor has. So it positioned OpenStack as a decentralized, community-driven project from the very beginning and set out to get the support of big players in the application infrastructure space, namely Dell, Cisco, and Citrix. It didn’t go after just any infrastructure player, but specifically focused on those who were arguably late to the cloud game and aching to make up the distance they lost to the likes of VMware and IBM. Ultimately, OpenStack’s blitz to success is a result of unleashing an enormous amount of marketing energy in a short period of time, carefully coordinated between a number of application infrastructure powerhouses.
<br />
<br /><h3>Following Amazon to Open Source Infrastructure</h3>Today, OpenStack is focused on low level infrastructure services - compute, storage, image service, etc., and much work still remains to be done by the community in that area. However, we know the trend and have already seen it with Amazon Web Services (AWS). AWS initially started as Infrastructure as a Service (IaaS) with EC2 and S3 offerings; it then evolved into a fully blown Platform as a Service (PaaS). The value in solving application infrastructure complexity in a broader sense, by embedding higher level services like automated deployment, message queues, map reduce, and monitoring, is simply too compelling. At some point, we expect to see OpenStack creeping into the PaaS space, the same way AWS is doing today.
<br />
<br />This gradual transition from simply being a compute and storage infrastructure orchestrator into a complete cloud operating system will happen naturally for OpenStack. It will be driven by infrastructure vendors of all sizes that are looking to plug their solutions into the OpenStack ecosystem. With more than 100 member companies already on board today, we see various announcements to this effect right and left: Gluster contributes its file system, Dell builds deployment services, CloudCruiser builds a cost management solution, etc.
<br />
<br /><h3>What's Ahead for OpenStack</h3>The openness and decentralized nature of OpenStack is central to the realization of its vision of the cloud operating system. Instead of trying to solve all application infrastructure complexity inside one monolithic system, such as with the VMware stack, OpenStack harnesses the naturally occurring decomposition in the infrastructure space. This is the Big Bang in infrastructure we've all experienced. Individual vendors with competence in one particular area of application infrastructure can plug their solutions (storage, caching, monitoring, etc.) into OpenStack. As OpenStack continues to gain adoption, it will become a channel for infrastructure vendors to sell their offerings in the same way that the Apple app store is a channel for mobile app developers. At the same time, OpenStack will help abstract end users and resident applications away from the complexity of disparate infrastructure solutions.
<br />
<br />Today we are still in the early days of OpenStack. It's far from being the ultimate platform. It may also be less feature-rich than competing offerings from Microsoft or VMware. However, this is unimportant today. What's important is that the need for the Big Crunch that will decrease application infrastructure complexity is obvious. The magnitude of effort required to make this happen is not something any single vendor could credibly pull off. Ultimately, it's not OpenStack features that matter, but the "idea" behind this project and the degree of uptake it has already received in the community. When many people come together to realize a sensible vision, that vision inevitably becomes a reality.</span>Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-60399410911283376952011-08-16T16:15:00.000-07:002011-08-16T16:43:18.361-07:00Our Contribution to the Vegas Economy<span style="font-family:arial;">Here are the highlights on our corporate team-building in Vegas last week. Special thanks to Rachel and Athena for making this party happen. Thank you to all who participated and helped make it fun.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-1hrgKbUMShI/Tkr8SrMS8bI/AAAAAAAAAEU/gOpwLuIZbgg/s1600/1.JPG"><img style="cursor:pointer; cursor:hand;width: 300px; height: 400px;" src="http://1.bp.blogspot.com/-1hrgKbUMShI/Tkr8SrMS8bI/AAAAAAAAAEU/gOpwLuIZbgg/s400/1.JPG" alt="" id="BLOGGER_PHOTO_ID_5641598880997110194" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">We started by warming up with some drinks in the airport bar on the way over to Vegas.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-d2lvHt7EDP4/Tkr8hd7G5uI/AAAAAAAAAEc/IzYzehInIfw/s1600/2.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://4.bp.blogspot.com/-d2lvHt7EDP4/Tkr8hd7G5uI/AAAAAAAAAEc/IzYzehInIfw/s400/2.JPG" alt="" id="BLOGGER_PHOTO_ID_5641599135133394658" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">The luggage belt broke upon our arrival and it took over an hour to get our luggage. By then, the buzz from the airport bar session started to wear off… =(.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-9nii8HrPsLE/Tkr8n_SVGFI/AAAAAAAAAEk/4aPS2ka9u0Y/s1600/3.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://1.bp.blogspot.com/-9nii8HrPsLE/Tkr8n_SVGFI/AAAAAAAAAEk/4aPS2ka9u0Y/s400/3.JPG" alt="" id="BLOGGER_PHOTO_ID_5641599247168378962" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">The taxi line outside the airport was loooong… so we decided to embellish our Vegas experience immediately by taking a limo to the hotel.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-uoVZYublMI0/Tkr8svKyzzI/AAAAAAAAAEs/OkN24_ohcUo/s1600/4.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://1.bp.blogspot.com/-uoVZYublMI0/Tkr8svKyzzI/AAAAAAAAAEs/OkN24_ohcUo/s400/4.JPG" alt="" id="BLOGGER_PHOTO_ID_5641599328741150514" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">Finally arrived; herding around the Aria hotel entrance.</span>
<br />
<br /><span style="font-family:arial;">After a brief bite to eat in Vettro café in Aria (which, by the way, is a horrible restaurant… don’t go there), we split up into two groups - the strong and the weak. The weak went to sleep or gamble. The strong went clubbing. Came back to the hotel room only at 4am.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-r1DrGuStlOo/Tkr8weiTrtI/AAAAAAAAAE0/m2ExlK9YbwE/s1600/5.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://1.bp.blogspot.com/-r1DrGuStlOo/Tkr8weiTrtI/AAAAAAAAAE0/m2ExlK9YbwE/s400/5.JPG" alt="" id="BLOGGER_PHOTO_ID_5641599392995847890" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">The next morning, we woke up to this view; 51st floor in Aria. Don’t get too excited – as with many Vegas hotels, they don’t have floors 40-50 in Aria.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/-l-41KSsiumM/Tkr8_5Wf-8I/AAAAAAAAAE8/rVDHgCUbyUQ/s1600/6.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://3.bp.blogspot.com/-l-41KSsiumM/Tkr8_5Wf-8I/AAAAAAAAAE8/rVDHgCUbyUQ/s400/6.JPG" alt="" id="BLOGGER_PHOTO_ID_5641599657892117442" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">Breakfast… some people slept in late, so our ranks were slim at breakfast.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/--9tHy0wF1qQ/Tkr9FLGeJdI/AAAAAAAAAFE/berMYTmSX4E/s1600/7.JPG"><img style="cursor:pointer; cursor:hand;width: 300px; height: 400px;" src="http://3.bp.blogspot.com/--9tHy0wF1qQ/Tkr9FLGeJdI/AAAAAAAAAFE/berMYTmSX4E/s400/7.JPG" alt="" id="BLOGGER_PHOTO_ID_5641599748556072402" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">Ilya enjoyed his fries enormously!</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/-O2NmXR-17-Q/Tkr9JiVIrPI/AAAAAAAAAFM/P2n8qz0KTps/s1600/8.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://3.bp.blogspot.com/-O2NmXR-17-Q/Tkr9JiVIrPI/AAAAAAAAAFM/P2n8qz0KTps/s400/8.JPG" alt="" id="BLOGGER_PHOTO_ID_5641599823511071986" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">Next stop – quintessential Vegas pool party at Liquid Lounge. $5 to anyone who can spot Mike Scherbakov and Julia Varigina in the crowd.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/-3g1PU3KGUCw/Tkr9PYiN3dI/AAAAAAAAAFU/Fqsrz-4KO6E/s1600/9.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://3.bp.blogspot.com/-3g1PU3KGUCw/Tkr9PYiN3dI/AAAAAAAAAFU/Fqsrz-4KO6E/s400/9.JPG" alt="" id="BLOGGER_PHOTO_ID_5641599923960798674" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">Why would anyone herd in the pool with 100 people in it, music blasting and no seating space, when you can quietly lounge next to one of 10 other pools in the hotel? The point of the pool party only comes to you after a few drinks… as you can see from the stampede by the bar, we were not the only ones to feel that way.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-I1UC12eOnjs/Tkr9ZrDS2BI/AAAAAAAAAFk/MRccIrydWI4/s1600/10.JPG"><img style="cursor:pointer; cursor:hand;width: 300px; height: 400px;" src="http://1.bp.blogspot.com/-I1UC12eOnjs/Tkr9ZrDS2BI/AAAAAAAAAFk/MRccIrydWI4/s400/10.JPG" alt="" id="BLOGGER_PHOTO_ID_5641600100730066962" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">Once you get a drink in your hand – it’s BLAST OFF!</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-BCjmcEgL1yg/Tkr9eXJMETI/AAAAAAAAAFs/A5eKXVe-JlE/s1600/11.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://2.bp.blogspot.com/-BCjmcEgL1yg/Tkr9eXJMETI/AAAAAAAAAFs/A5eKXVe-JlE/s400/11.JPG" alt="" id="BLOGGER_PHOTO_ID_5641600181285425458" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">No comment.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-xSS1yMpoyUs/Tkr9lBVXs_I/AAAAAAAAAF0/Lae_Js4xOmo/s1600/12.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://4.bp.blogspot.com/-xSS1yMpoyUs/Tkr9lBVXs_I/AAAAAAAAAF0/Lae_Js4xOmo/s400/12.JPG" alt="" id="BLOGGER_PHOTO_ID_5641600295690023922" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">Winding down at the pool… next stop: corporate dinner.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-rhnvGhY_ttw/Tkr9q4K4GSI/AAAAAAAAAF8/lx6tJftCkkE/s1600/13.JPG"><img style="cursor:pointer; cursor:hand;width: 300px; height: 400px;" src="http://4.bp.blogspot.com/-rhnvGhY_ttw/Tkr9q4K4GSI/AAAAAAAAAF8/lx6tJftCkkE/s400/13.JPG" alt="" id="BLOGGER_PHOTO_ID_5641600396309305634" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">It was dark and all we had was a point and shoot… so not so many pictures at the dinner. But basically this is what it looked like.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-tV2FyRMSFus/Tkr9vw6SL7I/AAAAAAAAAGE/_hCD18_arJo/s1600/14.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://2.bp.blogspot.com/-tV2FyRMSFus/Tkr9vw6SL7I/AAAAAAAAAGE/_hCD18_arJo/s400/14.JPG" alt="" id="BLOGGER_PHOTO_ID_5641600480260009906" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">After the dinner we went to watch a show – Absinthe. This group picture was taken immediately after.</span>
<br />
<br /><span style="font-family:arial;">11:48pm – time to split up again into gamblers and partiers. Since I belonged to the party group, you don’t get to see the pictures of the gamblers… sorry.</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-APphFJyFmAs/Tkr-VTq7B2I/AAAAAAAAAGM/1DtVxPAPMwc/s1600/15.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://1.bp.blogspot.com/-APphFJyFmAs/Tkr-VTq7B2I/AAAAAAAAAGM/1DtVxPAPMwc/s400/15.JPG" alt="" id="BLOGGER_PHOTO_ID_5641601125245978466" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">Second night of clubbing looked like this. 1:45am and Mike is asleep on a couch at Tryst. This is called a SHUT DOWN!</span>
<br />
<br />
<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-E40Iuz-3ZAI/Tkr-eUw-xaI/AAAAAAAAAGU/7rWW4K6qDkc/s1600/16.JPG"><img style="cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="http://4.bp.blogspot.com/-E40Iuz-3ZAI/Tkr-eUw-xaI/AAAAAAAAAGU/7rWW4K6qDkc/s400/16.JPG" alt="" id="BLOGGER_PHOTO_ID_5641601280158647714" border="0" /></a>
<br />
<br />
<br /><span style="font-family:arial;">And my SHUT DOWN happened in the airport on the way back.</span>
<br />
<br />Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-33848879937763339592011-08-12T06:19:00.000-07:002011-08-15T11:54:08.692-07:00LDAP identity store for OpenStack KeystoneAfter some time working with an <a href="http://openstack.org">OpenStack</a> installation that uses an existing LDAP directory for authentication, we ran into one big problem. The latest <a href="http://wiki.openstack.org/Projects/IncubatorApplication/OpenStackDashboard">Dashboard</a> code dropped support for the old bare authentication in favor of a <a href="http://wiki.openstack.org/Projects/IncubatorApplication/Keystone">Keystone</a>-based one. At that time Keystone had no support for multiple authentication backends, so we had to develop this feature.
<br />Now we have basic support for LDAP authentication in Keystone, which provides a subset of the functionality that was present in <a href="http://nova.openstack.org/">Nova</a>. Currently, the main limitation is the inability to integrate with an existing LDAP tree due to limitations in the backend, but it works fine in an isolated corner of the LDAP directory.
<br />So, after a lot of coding and fighting with the new upstream workflows, we can finally give you a chance to try it out.
<br />To do so, one should:
<br /><ol><li>Make sure that all necessary components are installed. They are Nova, Glance, Keystone and Dashboard.
<br />
<br />Since the latter two are still in incubation, you’ll have to download them from the source repositories:
<br /><script src="https://gist.github.com/1142034.js"> </script></li><li>Set up Nova to authorize requests in Keystone:
<br /><script src="https://gist.github.com/1142037.js"> </script>
<br />This assumes that you’re in the same dir where you’ve downloaded the Keystone sources. Replace the nova.conf path if it differs in your Nova installation.
<br /></li><li>Add schema information to your LDAP installation.
<br />
<br />This heavily depends on your LDAP server. There is a common .schema file and an .ldif for the latest version of OpenLDAP in the keystone/keystone/backends/ldap/ dir. For a local OpenLDAP installation, this will do the trick (if you haven’t changed the dir after the previous steps):
<br /><script src="https://gist.github.com/1142040.js"> </script>
<br /></li><li>Modify the Keystone configuration at <tt>keystone/etc/keystone.conf</tt> to use the LDAP backend (a rough sketch of the resulting config is shown after this list):
<br /><ul><li>add <tt>keystone.backends.ldap</tt> to the <tt>backends</tt> list in the <tt>[DEFAULT]</tt> section;
<br /></li><li>remove <tt>Tenant</tt>, <tt>User</tt>, <tt>UserRoleAssociation</tt> and <tt>Token</tt> from the <tt>backend_entities</tt> list in the <tt>[keystone.backends.sqlalchemy]</tt> section;
<br /></li><li>add a new section (don’t forget to change the URL, user and password to match your installation):
<br /><script src="https://gist.github.com/1142041.js"> </script></li></ul></li><li>Make sure that the <tt>ou=Groups,dc=example,dc=com</tt> and <tt>ou=Users,dc=example,dc=com</tt> subtrees exist, or point the LDAP backend at other subtrees by adding the <tt>tenant_tree_dn</tt>, <tt>role_tree_dn</tt> and <tt>user_tree_dn</tt> parameters to the <tt>[keystone.backends.ldap]</tt> section of the config file.
<br /></li><li>Run Nova, Keystone and Dashboard as usual.
<br /></li><li>Create some users, tenants, endpoints, etc. in Keystone using the keystone/bin/keystone-manage command, or just run keystone/bin/sample-data.sh to add a set of test ones.</li></ol>
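<br />As a rough recap of step 4 (the authoritative snippet is in the gist above), the <tt>keystone.conf</tt> changes could end up looking something like the sketch below. The connection option names here are placeholders of ours, so take the real ones from the gist and your Keystone version:
<br /><pre><code>[DEFAULT]
# ... existing options ...
# the LDAP backend appended to the list of backends
backends = keystone.backends.sqlalchemy,keystone.backends.ldap

[keystone.backends.sqlalchemy]
# backend_entities with Tenant, User, UserRoleAssociation and Token removed -
# those entities are now served by the LDAP backend
backend_entities = ['...']

[keystone.backends.ldap]
# placeholder names for the connection settings - see the gist for the real ones
ldap_url = ldap://localhost
ldap_user = cn=Admin,dc=example,dc=com
ldap_password = secret
# optional (step 5): point the backend at your own subtrees
tenant_tree_dn = ou=Groups,dc=example,dc=com
user_tree_dn = ou=Users,dc=example,dc=com
</code></pre>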
<br />Now you can authenticate in Dashboard using credentials of one of created users. Note that from this point all user, project and role management should be done through Keystone using either keystone-manage command or syspanel on Dashboard.Unknownnoreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-81338215194128643612011-06-30T15:06:00.000-07:002011-07-05T11:38:50.851-07:00Bay Area OpenStack Meet & Drink Highlights<span style="font-family:arial;">For those of you that weren’t able to make it yesterday and maybe for those of you who want to reminisce about the events of last night, Bay Area OpenStack Meet & Drink was probably the most well-attended OpenStack meetup in the valley to date, outside of the OpenStack summit this spring. A diverse crowd of over 120 stackers showed up – ranging from folks just learning the basics of OpenStack to hardcore code committers.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-_j2QB__r_xg/Tgz1E-nI_rI/AAAAAAAAADE/6UO4I-0rHi8/s1600/1.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://2.bp.blogspot.com/-_j2QB__r_xg/Tgz1E-nI_rI/AAAAAAAAADE/6UO4I-0rHi8/s400/1.jpg" alt="" id="BLOGGER_PHOTO_ID_5624139500554354354" border="0" /></a><br /><br /><br /><span style="font-family:arial;">We originally planned on hosting a 30-40 person tech meetup session in a small cozy space at the Computer History Museum. However, with over 100 RSVPs we had to go all out and rent out Hahn Auditorium, making space for all of those wanting to participate.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-t6sXxnhQ4Vo/Tgz1adia7qI/AAAAAAAAADM/bxH4yvCcwuY/s1600/2.jpg"><img style="cursor:pointer; cursor:hand;width: 266px; height: 400px;" src="http://2.bp.blogspot.com/-t6sXxnhQ4Vo/Tgz1adia7qI/AAAAAAAAADM/bxH4yvCcwuY/s400/2.jpg" alt="" id="BLOGGER_PHOTO_ID_5624139869633310370" border="0" /></a><br /><br /><br /><span style="font-family:arial;">First 40 minutes – people eating drinking and mingling. 
The food line was a bit overwhelming.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-FS94_4XeepM/Tgz1mJnSXcI/AAAAAAAAADU/ddPWW9GUiyY/s1600/3.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 299px;" src="http://1.bp.blogspot.com/-FS94_4XeepM/Tgz1mJnSXcI/AAAAAAAAADU/ddPWW9GUiyY/s400/3.jpg" alt="" id="BLOGGER_PHOTO_ID_5624140070443441602" border="0" /></a><br /><br /><br /><span style="font-family:arial;">Cloud wine was served with dinner.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-BDTYakqb1nk/Tgz1zXEKCZI/AAAAAAAAADc/cKUJvJI7zSI/s1600/4.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://2.bp.blogspot.com/-BDTYakqb1nk/Tgz1zXEKCZI/AAAAAAAAADc/cKUJvJI7zSI/s400/4.jpg" alt="" id="BLOGGER_PHOTO_ID_5624140297392490898" border="0" /></a><br /><br /><br /><span style="font-family:arial;">Joe Arnold from Cloudscaling brought a demo server, running SWIFT for people to play around with.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/-t5nDsSWDRC0/Tgz2FpV6Z9I/AAAAAAAAADk/eSaW-LOJqnY/s1600/5.jpg"><img style="cursor:pointer; cursor:hand;width: 268px; height: 400px;" src="http://3.bp.blogspot.com/-t5nDsSWDRC0/Tgz2FpV6Z9I/AAAAAAAAADk/eSaW-LOJqnY/s400/5.jpg" alt="" id="BLOGGER_PHOTO_ID_5624140611536447442" border="0" /></a><br /><br /><br /><span style="font-family:arial;">I opened the ceremony with a 5-minute intro – polling the audience on their experience with OpenStack, saying a few words about Mirantis and upcoming events, as well as introducing Mirantis team members.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-g32bl2Un4Yw/Tgz2PJrVkLI/AAAAAAAAADs/8N-Wz4FfEQk/s1600/6.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://4.bp.blogspot.com/-g32bl2Un4Yw/Tgz2PJrVkLI/AAAAAAAAADs/8N-Wz4FfEQk/s400/6.jpg" alt="" id="BLOGGER_PHOTO_ID_5624140774835065010" border="0" /></a><br /><br /><br /><span style="font-family:arial;">Meanwhile, Joe was getting all too excited to do his pitch of SWIFT.</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-Hb0YECvTCX4/Tgz2YyxfjTI/AAAAAAAAAD0/Z4cNvwI-K98/s1600/7.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 266px;" src="http://1.bp.blogspot.com/-Hb0YECvTCX4/Tgz2YyxfjTI/AAAAAAAAAD0/Z4cNvwI-K98/s400/7.jpg" alt="" id="BLOGGER_PHOTO_ID_5624140940485561650" border="0" /></a><br /><br /><br /><span style="font-family:arial;">Joe did his 10-minute talk on “Swift in the Small.” You can read up on the content that was presented in Joe’s blog: <a href="http://joearnold.com/2011/06/27/swift-in-the-small/">http://joearnold.com/2011/06/27/swift-in-the-small/</a>. You can also view the slides here: <a href="http://bit.ly/mMRcpt">http://bit.ly/mMRcpt</a>. 
And the live recording of the presentation can be found here: <a href="http://bit.ly/mJOr2R">http://bit.ly/mJOr2R</a></span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-XoqTJfbfVU0/Tgz2kSMx2PI/AAAAAAAAAD8/y3j8SR8pWUI/s1600/8.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 266px;" src="http://1.bp.blogspot.com/-XoqTJfbfVU0/Tgz2kSMx2PI/AAAAAAAAAD8/y3j8SR8pWUI/s400/8.jpg" alt="" id="BLOGGER_PHOTO_ID_5624141137900067058" border="0" /></a><br /><br /><br /><span style="font-family:arial;">We gave out Russian Standard vodka bottles at the meetup as favors. To complete the theme and give the audience a taste of Russian hospitality, we had an accordionist perform a 5-minute stunt immediately after Joe’s pitch on Swift (see his performance here: <a href="http://bit.ly/iiYveN">http://bit.ly/iiYveN</a>).</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-K0rwK5K0ywM/Tgz20VsGJeI/AAAAAAAAAEE/w3LCSlADqts/s1600/9.jpg"><img style="cursor:pointer; cursor:hand;width: 400px; height: 267px;" src="http://2.bp.blogspot.com/-K0rwK5K0ywM/Tgz20VsGJeI/AAAAAAAAAEE/w3LCSlADqts/s400/9.jpg" alt="" id="BLOGGER_PHOTO_ID_5624141413714634210" border="0" /></a><br /><br /><br /><span style="font-family:arial;">Party time…</span><br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-ioZFz2tNIsU/Tgz3AXkZJ_I/AAAAAAAAAEM/WGUDIYOnMl8/s1600/10.jpg"><img style="cursor:pointer; cursor:hand;width: 267px; height: 400px;" src="http://1.bp.blogspot.com/-ioZFz2tNIsU/Tgz3AXkZJ_I/AAAAAAAAAEM/WGUDIYOnMl8/s400/10.jpg" alt="" id="BLOGGER_PHOTO_ID_5624141620377626610" border="0" /></a><br /><br /><br /><span style="font-family:arial;">Mike Scherbakov from our team of stackers talked about implementing Nova in Mirantis’ internal IT department, taking quite a few questions from the audience. The deck of his presentation is here: <a href="http://slidesha.re/jyS4WL">http://slidesha.re/jyS4WL</a>. The recording of the talk can be found here: <a href="http://bit.ly/lo6s7a">part 1</a>; <a href="http://bit.ly/kTDn8z">part 2</a>; <a href="http://bit.ly/jZMStc">part 3</a>; and <a href="http://bit.ly/kDoTnn">part 4</a>.</span><br /><br /><span style="font-family:arial;">I’d like to thank everyone for coming and we’ll appreciate any comments or suggestions on the event. We plan to have our next meetup at the end of September. If you would like to help organize, present your OpenStack story, or offer any ideas on how to make the experience better, please ping me on twitter @zer0tweets or send me an email – borisr at mirantis dot com.</span>Boris Renskihttp://www.blogger.com/profile/06261736815703853427noreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-73726357538317727062011-06-30T03:47:00.000-07:002011-07-08T12:00:34.821-07:00vCider Virtual Switch Overview<span class="Apple-style-span">A couple of months ago, Chris Marino, CEO at <a href="http://www.vcider.com/">vCider</a>, stopped by the Mirantis office and gave a very interesting presentation on the vCider networking solution for clouds. A few days later, he kindly provided me with beta access to their product.<br /><br />A few days ago, vCider announced public availability of the product. 
So now it's a good time to blog about my experience concerning it.<br /><h3><br /></h3><h3>About vCider Virtual Switch<br /></h3>To make a long story short, vCider Virtual Switch allows you to build a virtual <a href="http://en.wikipedia.org/wiki/OSI_model#Layer_2:_Data_Link_Layer">Layer 2</a> network across several Linux boxes; these boxes might be Virtual Machines (VMs) on a cloud (or even in different clouds), or it might be a physical server.<br /><br />The flow is pretty simple: you download a package (DEBs and RPMs are available on the site) and install it to all of the boxes for which you will create a network. No configuration is required except for creating a file with an account token.<br /><br />After that, all you have to do is to visit the vCider Dashboard and create networks and assign nodes to them.<br /><br />So to start playing with that, I created two nodes on Rackspace and created a virtual network for them for which I used <tt>192.168.87.0/24</tt> address space.<br /><br />On both boxes two new network interfaces appeared:<br /><br />On the first box:<br /><br /><pre><code>5: vcider-net0: <broadcast,multicast,up,lower_up> mtu 1442 qdisc pfifo_fast state UNKNOWN qlen 1000<br />link/ether ee:cb:0b:93:34:45 brd ff:ff:ff:ff:ff:ff<br />inet 192.168.87.1/24 brd 192.168.87.255 scope global vcider-net0<br />inet6 fe80::eccb:bff:fe93:3445/64 scope link<br /> valid_lft forever preferred_lft forever<br /></broadcast,multicast,up,lower_up></code></pre><br />and on the second one:<br /><br /><pre><code>7: vcider-net0: <broadcast,multicast,up,lower_up> mtu 1442 qdisc pfifo_fast state UNKNOWN qlen 1000<br />link/ether 6e:8e:a0:e9:a0:72 brd ff:ff:ff:ff:ff:ff<br />inet 192.168.87.4/24 brd 192.168.87.255 scope global vcider-net0<br />inet6 fe80::6c8e:a0ff:fee9:a072/64 scope link<br /> valid_lft forever preferred_lft forever<br /></broadcast,multicast,up,lower_up></code></pre><br />tracepath output looks like this:<br /><br />root@alice:~# tracepath 192.168.87.4<br />1: 192.168.87.1 (192.168.87.1) 0.169ms pmtu 1442<br />1: 192.168.87.4 (192.168.87.4) 6.677ms reached<br />1: 192.168.87.4 (192.168.87.4) 0.338ms reached<br />Resume: pmtu 1442 hops 1 back 64<br />root@alice:~#<br /><br />arping also works fine:<br /><br />novel@bob:~ %> sudo arping -I vcider-net0 192.168.87.1<br />ARPING 192.168.87.1 from 192.168.87.4 vcider-net0<br />Unicast reply from 192.168.87.1 [EE:CB:0B:93:34:45] 0.866ms<br />Unicast reply from 192.168.87.1 [EE:CB:0B:93:34:45] 1.030ms<br />Unicast reply from 192.168.87.1 [EE:CB:0B:93:34:45] 0.901ms<br />^CSent 3 probes (1 broadcast(s))<br />Received 3 response(s)<br />novel@bob:~ %><br /><br /><h3>Performance</h3>One of the most important questions is performance. 
First, I used <tt>iperf</tt> to measure bandwidth on the public interfaces:<br /><br /><pre><code>novel@bob:~ %> iperf -s -B xx.yy.94.250
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address xx.yy.94.250
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34231
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.3 sec 12.3 MBytes 9.94 Mbits/sec
[ 5] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34232
[ 5] 0.0-20.9 sec 12.5 MBytes 5.02 Mbits/sec
[SUM] 0.0-20.9 sec 24.8 MBytes 9.93 Mbits/sec
[ 6] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34233
[ 6] 0.0-10.6 sec 12.5 MBytes 9.92 Mbits/sec
[ 4] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34234
[ 4] 0.0-10.6 sec 12.5 MBytes 9.94 Mbits/sec
[ 5] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34235
[ 5] 0.0-10.5 sec 12.4 MBytes 9.94 Mbits/sec
[ 6] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34236
[ 6] 0.0-10.6 sec 12.6 MBytes 9.94 Mbits/sec
[ 4] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34237
[ 4] 0.0-10.7 sec 12.6 MBytes 9.94 Mbits/sec
[ 5] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34238
[ 5] 0.0-10.6 sec 12.6 MBytes 9.93 Mbits/sec</code></pre><br />So it gives an average bandwidth of ~9.3 Mbit/sec.<br /><br /></span><span class="Apple-style-span">And here's the same test over the vCider network:<br /><br /><pre><code>novel@bob:~ %> iperf -s -B 192.168.87.4
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 192.168.87.4
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60977
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.5 sec 11.4 MBytes 9.10 Mbits/sec
[ 5] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60978
[ 5] 0.0-10.5 sec 11.4 MBytes 9.05 Mbits/sec
[ 6] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60979
[ 6] 0.0-10.6 sec 11.4 MBytes 9.03 Mbits/sec
[ 4] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60980
[ 4] 0.0-10.4 sec 11.2 MBytes 9.03 Mbits/sec
[ 5] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60981
[ 5] 0.0-10.5 sec 11.4 MBytes 9.06 Mbits/sec
[ 6] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60982
[ 6] 0.0-10.4 sec 11.3 MBytes 9.05 Mbits/sec
[ 4] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60983
[ 4] 0.0-20.8 sec 11.2 MBytes 4.51 Mbits/sec
[SUM] 0.0-20.8 sec 22.4 MBytes 9.05 Mbits/sec
[ 5] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60984
[ 5] 0.0-10.5 sec 11.3 MBytes 9.03 Mbits/sec</code></pre><br />It gives an average bandwidth of 8.5 Mbit/sec, which is about 91% of the original bandwidth -- not bad, I believe.<br /><br />For the sake of experimenting, I tried to emulate <a href="http://en.wikipedia.org/wiki/TAP_%28network_driver%29">TAP</a> networking using <a href="http://openvpn.net/">openvpn</a>. I chose the quickest configuration possible and just ran openvpn on the server this way:<br /><br /><pre><code># openvpn --dev tap0</code></pre><br />and on the client:<br /><br /><pre><code># openvpn --remote xx.yy.94.250 --dev tap0</code></pre><br />As you might guess, openvpn runs in user space, and it tunnels traffic over the public interfaces on the boxes I use for the tests.
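One step worth spelling out: with nothing but <tt>--dev tap0</tt>, openvpn does not assign any addresses to the tunnel interfaces, so the <tt>192.168.37.0/24</tt> addresses used in the test below had to be added by hand, along these lines (the exact commands aren't in my notes, so treat this as an illustration):<br /><br /><pre><code># on bob (the end reachable at xx.yy.94.250)
ip addr add 192.168.37.4/24 dev tap0
ip link set tap0 up

# on alice
ip addr add 192.168.37.1/24 dev tap0
ip link set tap0 up</code></pre>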
And I conducted another <tt>iperf</tt> test:<br /><br /><pre><code>novel@bob:~ %> iperf -s -B 192.168.37.4
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 192.168.37.4
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53923
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.5 sec 11.2 MBytes 8.97 Mbits/sec
[ 5] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53924
[ 5] 0.0-10.5 sec 11.1 MBytes 8.88 Mbits/sec
[ 6] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53925
[ 4] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53926
[ 6] 0.0-10.4 sec 11.1 MBytes 8.90 Mbits/sec
[ 4] 0.0-20.6 sec 10.8 MBytes 4.38 Mbits/sec
[SUM] 0.0-20.6 sec 21.8 MBytes 8.90 Mbits/sec
[ 5] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53927
[ 5] 0.0-10.4 sec 11.0 MBytes 8.87 Mbits/sec
[ 6] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53928
[ 6] 0.0-10.3 sec 10.9 MBytes 8.90 Mbits/sec
[ 4] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53929
[ 4] 0.0-10.5 sec 11.1 MBytes 8.88 Mbits/sec
[ 5] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53930
[ 5] 0.0-10.3 sec 10.9 MBytes 8.88 Mbits/sec</code></pre><br />It gives an average bandwidth of 8.3 Mbit/sec, which is 89% of the original bandwidth. That's just a little slower than vCider Virtual Switch -- very good for openvpn -- but I have to note it's not quite a fair comparison:<br /><br /></span><ul><li><span class="Apple-style-span">I don't use encryption in my openvpn setup</span></li><li>A real-world openvpn configuration will be much more complex</li><li>I believe openvpn will scale significantly worse as the number of machines in the network grows, since openvpn works in client/server mode while vCider works in p2p mode and uses a central service only to fetch metadata such as routing information</li></ul><br /><br />The vCider team apparently considers the comparison with openvpn relevant as well, as they have a note on it in the <a href="http://www.vcider.com/developers/frequently-asked-questions#software">FAQ</a> -- be sure to check it out.<br /><br /><h3>Support</h3>It's a pleasure to note that the vCider team is very responsive. Since I started testing the product at quite an early stage, I spotted some issues, though none of them were critical. It was a great pleasure to see them all fixed in the next version.<br /><br /><h3>Conclusion</h3>vCider Virtual Switch behaves as expected, performs well, is fully documented, and is easy to use. The vCider team provides good support as well.<br /><br />It seems that for relatively small setups within a single trusted environment, e.g. about 5-8 VMs within a single cloud provider, where traffic encryption and performance are not that critical, one could go with an openvpn setup.
However, when either security or performance becomes important, or the size of the setup grows, vCider Virtual Switch would be a good choice.<br /><br />I am looking forward to new releases; specifically, I'm very curious about multicast support and an exposed API for managing networks.<br /><br /><h3>Further reading</h3>* <a href="http://www.vcider.com/">vCider Home Page</a><br />* <a href="http://www.vcider.com/developers/frequently-asked-questions">vCider Virtual Switch FAQ</a><br />* <a href="http://en.wikipedia.org/wiki/OSI_model">Wikipedia article on OSI model</a><br />* <a href="http://openvpn.net/">OpenVPN Home Page</a>Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-5385691462895891732011-06-09T01:10:00.000-07:002011-06-09T01:28:46.551-07:00Clustered LVM on DRBD resource in Fedora LinuxAs <a href="http://fghaas.wordpress.com/">Florian Haas</a> has <a href="http://mirantis.blogspot.com/2011/05/shared-storage-for-openstack-based-on.html?showComment=1306528806951#c5028478804503884300">pointed out</a> in a comment on my previous post, our shared storage configuration requires special precautions to avoid data corruption when the two hosts connected via DRBD try to manage LVM volumes simultaneously. Generally, these precautions concern locking LVM metadata operations while running DRBD in 'dual-primary' mode.<br /><br />Let's examine it in detail. The LVM locking mechanism is configured in the [global] section of <i>/etc/lvm/lvm.conf</i>. The 'locking_type' parameter is the most important here. It defines which locking mechanism LVM uses while changing metadata. It can be set to:<br /><br /><ul><li>'0': disables locking completely - it's dangerous to use;</li><li>'1': default, local file-based locking. It knows nothing about the cluster and possible conflicting metadata changes;</li><li>'2': uses an external shared library, defined by the 'locking_library' parameter;</li><li>'3': uses built-in LVM clustered locking;</li><li>'4': read-only locking, which forbids any metadata changes.</li></ul><br /><br />The simplest way is to use local locking on one of the drbd peers and to disable metadata operations on the other one. This has a serious drawback, though: Volume Groups and Logical Volumes created on the active peer won't be activated automatically on the other, 'passive' peer. That's not good for a production environment and cannot be automated easily.<br /><br />But there is another, more sophisticated way. We can use <a href="http://www.linux-ha.org/doc/users-guide/users-guide.html">Linux-HA</a> (Heartbeat) coupled with the <a href="http://linux-ha.org/doc/man-pages/re-ra-LVM.html">LVM Resource Agent</a>. It automates activation of newly created LVM resources on the shared storage, but it still provides no locking mechanism suitable for 'dual-primary' DRBD operation.<br /><br />Full support for clustered LVM locking can be achieved with the <b>lvm2-cluster</b> Fedora RPM package from the standard repository. It contains the <b>clvmd</b> service, which runs on all hosts in the cluster and controls LVM locking on the shared storage. In our case, the cluster consists of just the two drbd peers.
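To show where we are headed: once everything below is in place, the actual switch to clustered locking is a one-line change in the global section of <i>/etc/lvm/lvm.conf</i> on both nodes, roughly like this (a minimal sketch, not a complete file; the exact change is repeated in context further down):<br /><br /><pre><code>global {
    # 1 = local file-based locking (default), 3 = built-in clustered locking via clvmd
    locking_type = 3
}</code></pre>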
<br /><br /><b>clvmd</b> requires a cluster engine in order to function properly. It's provided by the <b>cman</b> service, which is installed as a dependency of <b>lvm2-cluster</b> (other dependencies may vary from installation to installation):<br /><br /><script src="https://gist.github.com/1010188.js"> </script><br /><br />The only thing we need the cluster for is the use of clvmd; the configuration of the cluster itself is pretty basic. Since we don't need advanced features like automated <a href="https://fedorahosted.org/cluster/wiki/Fence">fencing</a> yet, we specify manual handling. As we have only 2 nodes in the cluster, we can tell cman about it. The configuration for <b>cman</b> resides in the <i>/etc/cluster/cluster.conf</i> file:<br /><br /><script src="https://gist.github.com/1010213.js"> </script><br /><br />The <b>clusternode name</b> should be a fully qualified domain name, resolvable by DNS or present in <i>/etc/hosts</i>. The number of <b>votes</b> is used to determine the <b>quorum</b> of the cluster. In this case, we have two nodes, one vote per node, and expect one vote to be enough for the cluster to run (to have a quorum), as configured by the <b>expected_votes</b> attribute of the <b>cman</b> element.<br /><br />The second thing we need to configure is the cluster engine (<b>corosync</b>). Its configuration goes to <i>/etc/corosync/corosync.conf</i>:<br /><br /><script src="https://gist.github.com/1010226.js"> </script><br /><br />The <b>bindnetaddr</b> parameter must contain a <i><b>network</b></i> address. We configure <b>corosync</b> to work on the <b>eth1</b> interfaces, which connect our nodes back-to-back over a 1 Gbps network. Also, we should configure <b>iptables</b> to accept multicast traffic on both hosts.<br /><br />Note that these configurations must be identical on both cluster nodes.<br /><br />After the cluster has been prepared, we can change the LVM locking type in <i>/etc/lvm/lvm.conf</i> on both drbd-connected nodes:<br /><br /><script src="https://gist.github.com/1010243.js"> </script><br /><br />Start the <b>cman</b> and <b>clvmd</b> services on both drbd peers to get our cluster ready for action:<br /><br /><script src="https://gist.github.com/1010247.js"> </script><br /><br />Now, as we already have a Volume Group on the shared storage, we can easily make it cluster-aware:<br /><br /><script src="https://gist.github.com/1010256.js"> </script><br /><br />Now we see the 'c' flag in the VG attributes:<br /><br /><script src="https://gist.github.com/1010259.js"> </script><br /><br />As a result, Logical Volumes created in the <i>vg_shared</i> volume group will be active on both nodes, and clustered locking is enabled for operations with volumes in this group. LVM commands can be issued on both hosts, and <b>clvmd</b> takes care of possible concurrent metadata changes.
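A quick way to see the result in action (the volume name and size here are arbitrary):<br /><br /><pre><code># on the first node
lvcreate -n lv_test -L 1G vg_shared

# on the second node -- the new volume shows up active right away
lvs vg_shared</code></pre>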
Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-82630829404921883082011-06-06T03:09:00.000-07:002011-06-07T16:54:14.260-07:00OpenStack Nova: basic disaster recovery<div style="text-align: justify;">Today, I want to take a look at some possible issues that may be encountered while using <a href="http://openstack.org/">OpenStack</a>. The purpose of this post is to share our experience dealing with the hardware and software failures that will inevitably be faced by anyone who attempts to run <a href="http://openstack.org/">OpenStack</a> in production.<br /><br /><h3>Software issue</h3><br />Let's look at the simplest, but possibly the most frequent issue. For example, if we need to upgrade the kernel or other software that requires a host reboot on one of the compute nodes, the best option is to migrate all virtual machines running on this server to other compute nodes. Unfortunately, sometimes this is impossible for several reasons, such as a lack of shared storage to perform the migration, or insufficient CPU/memory resources to accommodate all the VMs. The only option then is to shut down the virtual machines for the maintenance period. But how should they be started correctly after the reboot? Of course, you may set the special flag in <i><b>nova.conf</b></i> so that instances start automatically on host system boot:<br /><br /><script src="https://gist.github.com/1009910.js"> </script><br />However, you may want to disable it (in fact, setting this flag is a bad idea if you use the <i><b>nova-volume</b></i> service).<br /><br />There are many ways to start virtual machines. Probably the simplest one is to run:<br /><br /><script src="https://gist.github.com/1009923.js"> </script><br />It will recreate and start the libvirt domain using the instance XML. This method works well if you don't have remotely attached volumes; otherwise, <i><b>nova boot</b></i> will fail with an error. In that case, you'll need to start the domain manually using the <i>virsh</i> tool, connect the iSCSI device, create an XML file and attach it to the instance, which is a nightmare if you have lots of instances with volumes.<br /><br /><h3>Hardware issue</h3><br />Imagine another situation. Assume the server hosting a compute node experiences a hardware issue that we can't eliminate in a short time. The bad thing is that this often happens unpredictably, without any chance to transfer the virtual machines to a safe place. Yet, if you have shared storage, you won't lose the instances' data; however, the way to recover may be pretty unclear. Going into technical details, the procedure can be described by the following steps:<br /><ul><li>update the host information in the DB for the recovered instance</li><br /><li>spawn the instance on the compute node</li><br /><li>search the database for any attached volumes</li><br /><li>look up the volume device path and connect to it via iSCSI or some other driver if necessary</li><br /><li>attach it to the guest system</li></ul><br /><h3>Solution</h3><br />For this and the previous situation, we developed a Python script that starts a virtual machine on the host where the script is executed. You can find it in our git repository: <a href="https://github.com/Mirantis/openstack-utils/blob/master/nova-compute">openstack-utils</a>. All you need to do is copy the script to the compute node where you want to recover the virtual machine and execute:<br /><br /><script src="https://gist.github.com/1009969.js"> </script><br />You can look up the <i>instance_id</i> using the <i><b>nova list</b></i> command. The only limitation is that the virtual machine should be available on the host system.<br /><br /><br />Of course, in everyday <a href="http://openstack.org/">OpenStack</a> usage, you will be faced with lots of problems that can't be solved by this script. For example, you may have a storage configuration that mirrors data between two compute nodes, and you need to recover the virtual machine on a third node that doesn't contain it on its local hard drives.
The more complex issues require more sophisticated solutions, and we are working to cover most of them.</div>Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-38568926680112165042011-05-27T02:04:00.000-07:002011-05-27T15:37:16.265-07:00OpenStack Nova and Dashboard authorization using existing LDAPOur current integration task involves using <a href="http://www.gosa-project.org/">goSA</a> as the central management utility. goSA internally uses an LDAP repository for all of its data. So we had to find a solution to make both <a href="http://openstack.org/">OpenStack</a> <a href="http://openstack.org/projects/compute/">Nova</a> and <a href="http://wiki.openstack.org/OpenStackDashboard">Dashboard</a> authenticate and authorize users using goSA's LDAP structures.<br /><br /><br /><br /><h3>LDAP in Nova</h3>Nova stores its users, projects and roles (global and per-project) in LDAP. The necessary schema files are in the nova/auth directory of the Nova source distribution. The following describes how Nova stores each of these object types.<br /><br />Users are stored as objects with a <tt>novaUser</tt> class. They have mandatory <tt>accessKey</tt>, <tt>secretKey</tt> and <tt>isNovaAdmin</tt> (self-explanatory) attributes, along with customizable attributes set by the flags <em>ldap_user_id_attribute</em> (<tt>uid</tt> by default) and <em>ldap_user_name_attribute</em> (<tt>cn</tt>). To make the latter usable, Nova assigns the <tt>person</tt>, <tt>organizationalPerson</tt> and <tt>inetOrgPerson</tt> classes to all newly created users. All users are stored and searched for in the LDAP subtree defined by <em>ldap_user_subtree</em> and <em>ldap_user_unit</em>.<br /><br />If you want to manage user creation and deletion from some other place (such as <a href="http://www.gosa-project.org/">goSA</a> in our case), you can set the <em>ldap_user_modify_only</em> flag to <tt>True</tt>.<br /><br />Projects are objects with the widely used <tt>groupOfNames</tt> class in the subtree defined by the <em>ldap_project_subtree</em> flag. Nova uses the <tt>cn</tt> attribute for the project name, <tt>description</tt> for the description, <tt>member</tt> for the list of members' DNs, and <tt>owner</tt> for the project manager's DN. All of these attributes are commonly used for managing groups of users (or any other objects), so it's easy to integrate Nova projects with an existing user group management system (e.g. <a href="http://www.gosa-project.org/">goSA</a>).<br /><br />Roles are also stored as <tt>groupOfNames</tt>, with similar <tt>cn</tt>, <tt>description</tt> and <tt>member</tt> attributes. Nova has hard-coded roles: <tt>cloudadmin</tt>, <tt>itsec</tt>, <tt>sysadmin</tt>, <tt>netadmin</tt>, <tt>developer</tt>. Global roles are stored in a subtree defined by <em>role_project_subtree</em>; their <tt>cn</tt>'s are defined by the <em>ldap_cloudadmin</em>, <em>ldap_itsec</em>, <em>ldap_sysadmin</em>, <em>ldap_netadmin</em> and <em>ldap_developer</em> flags, respectively. Per-project roles are stored right under the project's DN with <tt>cn</tt> set to the role's name.<br /><br /><br /><br /><h3>LDAP in Dashboard</h3>To make Dashboard authorize users against LDAP, I use the <a href="http://pypi.python.org/pypi/django-auth-ldap">django-auth-ldap</a> module.<br />First, you need to install it using your preferred package manager (<tt>easy_install django-auth-ldap</tt> is sufficient). Second, you need to add it to Dashboard's <tt>local_settings.py</tt> in <tt>AUTHENTICATION_BACKENDS</tt>, set <tt>AUTH_LDAP_SERVER_URI</tt> to your LDAP URI, and set <tt>AUTH_LDAP_USER_DN_TEMPLATE</tt> to a Python string template for your users' DNs; in our case, it should be <tt>"<em>ldap_user_id_attribute</em>=%(user)s,<em>ldap_user_subtree</em>"</tt>.
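Put together, the relevant additions to <tt>local_settings.py</tt> look roughly like this -- a sketch with placeholder values, where the server URI and DN template must match your LDAP tree and the <em>ldap_user_id_attribute</em>/<em>ldap_user_subtree</em> flags used by Nova:<br /><br /><pre><code># local_settings.py (illustrative values only)
AUTH_LDAP_SERVER_URI = "ldap://ldap.example.com"
AUTH_LDAP_USER_DN_TEMPLATE = "uid=%(user)s,ou=people,dc=example,dc=com"

# '=' drops the default ModelBackend entirely; use '+=' to keep it
AUTHENTICATION_BACKENDS = ('django_auth_ldap.backend.LDAPBackend',)</code></pre>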
<br /><br />Note that in <tt>local_settings.py</tt> you override default settings, so if you want to just add a backend to <tt>AUTHENTICATION_BACKENDS</tt>, you should use <tt>+=</tt>. Also, if you want to totally disable <tt>ModelBackend</tt>, as we did, you can use <tt>=</tt> instead.<br /><br />Also note that to make Dashboard work, you'll have to create an account in Nova with admin privileges and a project with the same name as the account. You can either set all the parameters in LDAP by hand or add the account using <tt>nova-manage user admin</tt> with one of the usernames from LDAP.<br /><br /><br /><br /><h3>Configuration examples</h3>Let's say goSA is managing the organization <tt>exampleorg</tt> in the domain <tt>example.com</tt> on LDAP at <tt>ldap://ldap.example.com</tt>. To make use of its users and groups for Nova's users, projects and roles, we wrote configs like this:<br /><br /><br /><script src="https://gist.github.com/995105.js"> </script><br /><br /><br /><script src="https://gist.github.com/995111.js"> </script><br /><br /><br />By the way, to make <a href="http://www.gosa-project.org/">goSA</a> the central user management utility, we created a special plugin that manages Nova users. The plugin can be found <a href="https://github.com/Mirantis/gosa-openstack">here</a>. It looks like this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-T1aqhk1cXtw/Td-ijzZHFqI/AAAAAAAAAAM/GDTWtZUxHDY/s1600/gosa-nova.png"><img style="CURSOR: hand" id="BLOGGER_PHOTO_ID_5611382396701578914" border="0" alt="" src="http://4.bp.blogspot.com/-T1aqhk1cXtw/Td-ijzZHFqI/AAAAAAAAAAM/GDTWtZUxHDY/s640/gosa-nova.png" /></a>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-59464663230325251192011-05-19T05:22:00.001-07:002011-05-26T04:40:39.639-07:00Shared storage for OpenStack based on DRBD<div style="TEXT-ALIGN: left" dir="ltr" trbidi="on">Storage is a tricky part of the cloud environment. We want it to be fast, network-accessible, and as reliable as possible. One way is to go to the shop and buy yourself a SAN solution from a prominent vendor for serious money. Another way is to take commodity hardware and use open source magic to turn it into distributed network storage. Guess what we did?<br /><br />We have several primary goals. First, our storage has to be reliable. We want to survive both minor and major hardware crashes - from an HDD failure to a host power loss. Second, it must be flexible enough to be sliced quickly and easily, with slices resized as we like. Third, we will manage and mount our storage from the cloud nodes over the network. And, last but not least, we want decent performance from it.<br /><br />For now, we have decided on the DRBD driver for our storage. <a style="OUTLINE-STYLE: none; COLOR: #003366; TEXT-DECORATION: none" class="external-link" href="http://www.drbd.org/" rel="nofollow">DRBD</a>® refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a whole block device via an assigned network. DRBD can be understood as network-based RAID-1.
It has lots of <a style="OUTLINE-STYLE: none; COLOR: #003366; TEXT-DECORATION: none" class="external-link" href="http://www.drbd.org/home/feature-list/" rel="nofollow">features</a>, has been tested and is reasonably stable.<br /><br />DRBD has been supported by the Linux kernel since version 2.6.33. It is implemented as a kernel module and included in the mainline. We can install the DRBD driver and command line interface tools using the standard package distribution mechanism; in our case, it is Fedora 14:<br /><br /><script src="https://gist.github.com/988288.js"> </script><br />The DRBD configuration file is <i>/etc/drbd.conf</i>, but usually it contains only 'include' statements. The configuration itself resides in <i>global_common.conf</i> and <i>*.res</i> files inside <i>/etc/drbd.d/.</i> An important parameter in <i>global_common.conf</i> is '<b>protocol</b>'. It defines the synchronization level of the replication:<br /><br /><ul><li>A (async). Local write operations on the primary node are considered completed as soon as the local disk write has occurred and the replication packet has been placed in the local TCP send buffer. Data loss is possible in case of fail-over.</li><br /><br /><li>B (semi-sync or memory-sync). Local write operations on the primary node are considered completed as soon as the local disk write has occurred and the replication packet has reached the peer node. Data loss is unlikely unless the primary node is irrevocably destroyed.</li><br /><br /><li>C (sync). Local write operations on the primary node are considered completed only after both the local and the remote disk write have been confirmed. As a result, loss of a single node is guaranteed not to lead to any data loss. This is the default replication mode.</li></ul><br /><br />Other sections of the common configuration are usually left blank and can be redefined in per-resource configuration files. To create a usable resource, we must create a configuration file for our resource in <i>/etc/drbd.d/drbd0.res</i>. The basic parameters for a resource are:<br /><br /><ul><li>The name of the resource, defined with the '<b>resource</b>' parameter, which opens the main configuration section.</li><br /><br /><li>The '<b>on</b>' directive opens a host configuration section. Only 2 '<b>on</b>' host sections are allowed per resource. Common parameters for both hosts can be defined once in the main resource configuration section.</li><br /><br /><li>The '<b>address</b>' directive is unique to each host and must contain the IP address and port number on which the DRBD driver listens.</li><br /><br /><li>The '<b>device</b>' directive defines the path of the device created on the host for the DRBD resource.</li><br /><br /><li>'<b>disk</b>' is the path to the back-end device for the resource. This can be a hard drive partition (e.g. <i>/dev/sda1</i>), a software or hardware RAID device, an LVM Logical Volume or any other block device configured by the Linux device-mapper infrastructure.</li><br /><br /><li>'<b>meta-disk</b>' defines how DRBD stores meta-data. It can be '<b>internal</b>', when the meta-data resides on the same back-end device as user data, or '<b>external</b>', when it resides on a separate device.</li></ul><br /><br /><span class="Apple-style-span" style="font-size:medium;">Configuration Walkthrough</span><br /><br />We are creating a relatively simple configuration: one DRBD resource shared between two nodes. On each node, the back-end for the resource is the software RAID-0 (stripe) device <i>/dev/md3</i> made of two disks. The hosts are connected back-to-back via Gigabit Ethernet interfaces with private addresses.
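To make the directives above concrete, here is a skeleton of such a resource file (<i>/etc/drbd.d/drbd0.res</i>). The host names, addresses and port are placeholders -- the actual configuration we used is embedded below:<br /><br /><pre><code>resource drbd0 {
  on host1 {
    device    /dev/drbd0;
    disk      /dev/md3;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on host2 {
    device    /dev/drbd0;
    disk      /dev/md3;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}</code></pre>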
<br /><br /><script src="https://gist.github.com/988292.js"> </script><br /><br />As we need write access to the resource on both nodes, we must make it 'primary' on both of them. A DRBD device in the primary role can be used unrestrictedly for read and write operations. This mode is called 'dual-primary' mode, and it requires additional configuration. In the 'startup' section, the 'become-primary-on' directive is set to 'both'. In the 'net' section, the following is recommended:<br /><br /><script src="https://gist.github.com/988295.js"> </script><br /><br />The '<b>allow-two-primaries</b>' directive allows both ends to send data.<br />Next, three parameters define I/O error handling.<br />The '<b>sndbuf-size</b>' is set to 0 to allow dynamic adjustment of the TCP buffer size.<br /><br />The resource configuration with all of these considerations applied will be as follows:<br /><br /><script src="https://gist.github.com/988356.js"> </script><br /><br /><span class="Apple-style-span" style="font-size:medium;">Enabling Resource For The First Time</span><br /><br />To create the device <i>/dev/drbd0</i> for later use, we use the <b>drbdadm</b> command:<br /><br /><script src="https://gist.github.com/988357.js"> </script><br /><br />After the front-end device is created, we bring the resource up:<br /><br /><script src="https://gist.github.com/988362.js"> </script><br /><br />This command set must be executed on both nodes. We may collapse the steps <b>drbdadm attach</b>, <b>drbdadm syncer</b>, and <b>drbdadm connect</b> into one by using the shorthand command <b>drbdadm up</b>.<br />Now we can observe the <i>/proc/drbd</i> virtual status file and get the status of our resource:<br /><br /><script src="https://gist.github.com/988364.js"> </script><br /><br />We must now synchronize the resource on both nodes. If we want to replicate data that is already on one of the drives, it's important to run the next command on the host that contains the data. Otherwise, it can be issued on either of the two hosts.<br /><br /><script src="https://gist.github.com/988369.js"> </script><br /><br />This command puts the node <b>host1</b> in 'primary' mode and makes it the synchronization source. This is reflected in the status file <i>/proc/drbd</i>:<br /><br /><script src="https://gist.github.com/988373.js"> </script><br /><br />We can adjust the syncer rate to make the initial and background synchronization faster. To speed up the initial sync, the <b>drbdsetup</b> command is used:<br /><br /><script src="https://gist.github.com/988376.js"> </script><br /><br />This allows us to consume almost all of the Gigabit Ethernet bandwidth. The background syncer rate is configured in the corresponding config file section:<br /><br /><script src="https://gist.github.com/988377.js"> </script><br /><br />The exact rate depends on the available bandwidth and should be about 30% of the bandwidth of the slowest I/O subsystem (network or disk); DRBD seems to slow the synchronization down if it interferes with the data flow.<br /><br /><span class="Apple-style-span" style="font-size:medium;">LVM Over DRBD Configuration</span><br /><br />Configuring LVM over DRBD requires changes to <i>/etc/lvm/lvm.conf</i>. First, a Physical Volume is created:<br /><br /><script src="https://gist.github.com/988378.js"> </script><br /><br />This command writes LVM Physical Volume data on the <b>drbd0</b> device and also on the underlying <b>md3</b> device. This can pose a problem, as LVM's default behavior is to scan all block devices for LVM PV signatures, which means <i>two</i> devices with the same UUID would be detected and an error issued. This can be avoided by excluding <i>/dev/md3</i> from scanning in the <i>/etc/lvm/lvm.conf</i> file by using the 'filter' parameter.
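Schematically, such a filter accepts DRBD devices as Physical Volumes and rejects everything else -- an illustrative fragment of the <i>devices</i> section (the exact line we used is in the embedded snippet below):<br /><br /><pre><code>devices {
    # accept DRBD devices, reject everything else (including /dev/md3)
    filter = [ "a|drbd.*|", "r|.*|" ]
}</code></pre>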
<br /><br /><script src="https://gist.github.com/988380.js"> </script><br /><br />The <strong>vgscan</strong> command must be executed after the file is changed. It forces LVM to discard its configuration cache and re-scan the devices for PV signatures.<br />Different 'filter' configurations can be used, but they must ensure that: 1. DRBD devices used as PVs are accepted (included); 2. the corresponding lower-level devices are rejected (excluded).<br /><br />It is also necessary to disable the LVM write cache:<br /><br /><script src="https://gist.github.com/988384.js"> </script><br /><br />These steps must be repeated on the peer node. Now we can create a Volume Group using the configured PV <i>/dev/drbd0</i>, and a Logical Volume in this VG. Execute these commands on one of the nodes:<br /><br /><script src="https://gist.github.com/988386.js"> </script><br /><br />To make use of this VG and LV on the peer node, we must activate them there:<br /><br /><script src="https://gist.github.com/988389.js"> </script><br /><br />When a new PV is configured, it is possible to proceed with adding it to the Volume Group or creating a new one from it. This VG can be used to create Logical Volumes as usual.<br /><br /><span class="Apple-style-span" style="font-size:medium;">Conclusion</span><br />We are going to install <a href="http://www.openstack.org">OpenStack</a> on nodes with shared storage as a private cloud controller. The architecture of our system presumes that storage volumes will reside on the same nodes as <b>nova-compute</b>. This makes it very important to have some level of disaster survival on the cloud nodes.<br /><br />With DRBD we can survive any I/O errors on one of the nodes. DRBD's internal error handling can be configured to mask any errors and go into <i>diskless</i> mode. In this mode, all I/O operations are transparently redirected from the failed node to the replica. This gives us time to restore the faulty disk system.<br /><br />If we have a major system crash, we still have all of the data on the second node. We can use it to restore or replace the failed system. A network failure can put us into a 'split brain' situation, where data differs between the hosts. This is dangerous, but DRBD has rather powerful mechanisms to deal with these kinds of problems as well.</div>Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-9120206829210052209.post-50748987964141732242011-05-18T08:33:00.000-07:002011-05-31T03:07:13.794-07:00OpenStack Deployment on Fedora using Kickstart<div style="TEXT-ALIGN: left" dir="ltr" trbidi="on"><span style="FONT-WEIGHT: bold;font-size:19;" class="Apple-style-span" >Overview</span><br /><br /><br />In this article, we discuss our approach to performing an OpenStack installation on Fedora using our RPM repository and Kickstart. When we first started working with <a class="external-link" href="http://openstack.org/" rel="nofollow">OpenStack</a>, we found that the most popular platform for deploying OpenStack was Ubuntu, which seemed like a viable option for us, as there are packages available for it, as well as plenty of documentation.
However, because our internal infrastructure is running on Fedora, instead of migrating the full infrastructure to Ubuntu, we decided to make OpenStack Fedora-friendly. The challenge with Fedora is that there aren't any packages, nor is there much documentation available. Details of how we worked around these limitations are discussed below.<br /><br /><br /><h3><a href="http://www.blogger.com/" name="OpenStackDeploymentonFedorausingKickstart-OpenStackRPMRepository"></a>OpenStack RPM Repository</h3><br /><br />Of course, installing everything from sources and bypassing the system's package manager is always an option, but this approach has some limitations:<br /><br /><ul><li>OpenStack has a lot of dependencies, so it's hard to track them all</li><li>Installations that bypass the system's package manager take quite some time (compared to executing a single Yum installation)</li><li>When some packages are installed from repositories, and some are installed from sources, managing upgrades can become quite tricky</li></ul><br /><br />Because of these limitations, we decided to create RPMs for Fedora. In order to avoid reinventing the wheel, we've based these RPMs on the <a class="external-link" href="https://github.com/griddynamics/openstack-rhel" rel="nofollow">RHEL6 OpenStack Packages</a>, as RHEL6 and Fedora are fairly similar. There are two sets of packages available for the various OpenStack versions:<br /><br /><ul><li><a class="external-link" href="http://download.mirantis.com/cactus/" rel="nofollow">Cactus</a> - click here for the latest official release</li><li><a class="external-link" href="http://download.mirantis.com/repo/" rel="nofollow">Hourly</a> - click here for hourly builds from trunk</li></ul><br /><br />There are two key metapackages:<br /><br /><ul><li><b>node-full:</b> installs a complete cloud controller infrastructure, including RabbitMQ, dnsmasq, etc.</li><li><b>node-compute:</b> installs only the node-compute services</li></ul><br />To use the repository, just install the RPM:<br /><br /><script src="https://gist.github.com/978391.js"> </script><br /><br /><br />In addition to installing everything with a single "yum install" command, we also need to perform the configuration. For a bare metal installation, we've created a Kickstart script. <a href="http://fedoraproject.org/wiki/Anaconda/Kickstart">Kickstart</a> by itself is a set of answers for the automated installation of the Fedora distribution. We use it for automated host provisioning with <a href="http://en.wikipedia.org/wiki/Preboot_Execution_Environment">PXE</a>. The post-installation part of the Kickstart script was extended to include the OpenStack installation and configuration procedures.<br /><br /><br /><h4><a href="http://www.blogger.com/" name="OpenStackDeploymentonFedorausingKickstart-CloudController"></a>Cloud Controller</h4><br /><br />To begin with, you can find the post-installation part of the Kickstart file for deploying a cloud controller below.<br />There are a few basic settings you will need to change. In our case, we are using a MySQL database.<br /><br /><script src="https://gist.github.com/978418.js"></script><br /><br /><br />Your server must be accessible by hostname, because RabbitMQ uses "node@host" identification. Also, because OpenStack uses hostnames to register services, if you want to change the hostname, you must stop all nova services and RabbitMQ, and then start them again after making the change. So make sure you set a resolvable hostname.
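On Fedora 14 this boils down to a few lines that can go straight into the kickstart %post section -- the host name and addresses below are placeholders, of course:<br /><br /><pre><code># make the FQDN permanent and resolvable locally
sed -i 's/^HOSTNAME=.*/HOSTNAME=cloud1.example.com/' /etc/sysconfig/network
hostname cloud1.example.com
echo "192.168.0.10  cloud1.example.com cloud1" >> /etc/hosts
hostname -f   # should print cloud1.example.com</code></pre>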
<br /><br />Add the required repos and install the cloud controller.<br /><br /><script src="https://gist.github.com/978466.js"> </script><br /><br /><br />qemu 0.14+ is needed to support creating custom images.<br />(Update: the Fedora 15 release already has qemu 0.14.0 in its repository.)<br /><br /><script src="https://gist.github.com/978474.js"> </script><br /><br /><br />If you're running nova under a non-privileged user ("nova" in this case), the libvirt configs should be changed to give the nova services access to the libvirtd unix socket. Access over TCP is required for live migration, so all of our nodes should have read/write access to the TCP socket.<br /><br /><script src="https://gist.github.com/978477.js"> </script><br /><br /><br />Now we can apply our db credentials to the nova config and generate the root certificate.<br /><br /><script src="https://gist.github.com/978481.js"> </script><br /><br /><br />And finally, we add the services to "autostart", prepare the database, and run the migration. Don't forget to set up the root password for the MySQL server.<br /><br /><script src="https://gist.github.com/978487.js"> </script><br /><br /><br /><br /><h4><a href="http://www.blogger.com/" name="OpenStackDeploymentonFedorausingKickstart-ComputeNode"></a>Compute Node</h4><br />The Compute Node script is much simpler:<br /><br /><script src="https://gist.github.com/978490.js"> </script><br /><br /><br />The config section differs very little; there is a cloud controller IP variable, which points to the full nova infrastructure and other supporting services, such as MySQL and RabbitMQ.<br /><br /><script src="https://gist.github.com/978494.js"> </script><br /><br /><br />This code is very similar to the cloud controller's, except that it installs the openstack-nova-node-compute package instead of node-full.<br /><br /><script src="https://gist.github.com/978497.js"> </script><br /><br /><br />You must change the Cloud Controller IP address (the CC_IP variable) for the Compute Node installation.<br /><br />IMPORTANT NOTE: All of your compute nodes must have their time synchronized with the cloud controller for heartbeat control.</div>Anonymousnoreply@blogger.com