Merge branch 'master' into master

Chris Aniszczyk, 2018-02-01 11:46:13 +00:00 (committed by GitHub)
commit 93e0515418
5 changed files with 472 additions and 7 deletions


@@ -21,7 +21,7 @@ List below is the official list of TOC contributors, in alphabetical order:
* Alex Chircop, StorageOS (alex.chircop@storageos.com)
* Andy Santosa, Ebay (asantosa@ebay.com)
* Ara Pulido, Bitnami (ara@bitnami.com)
* Bassam Tabbara, Quantum (bassam@tabbara.com)
* Bassam Tabbara, Upbound (bassam@upbound.io)
* Bob Wise, Samsung SDS (bob@bobsplanet.com)
* Cathy Zhang, Huawei (cathy.h.zhang@huawei.com)
* Chase Pettet, Wikimedia Foundation (cpettet@wikimedia.org)
@@ -29,6 +29,7 @@ List below is the official list of TOC contributors, in alphabetical order:
* Clinton Kitson, Dell (Clinton.Kitson@dell.com)
* Dan Wilson, Concur (danw@concur.com)
* Darren Ratcliffe, Atos (darren.ratcliffe@atos.net)
* Deyuan Deng, Caicloud (deyuan@caicloud.io)
* Doug Davis, IBM (dug@us.ibm.com)
* Drew Rapenchuk, Bloomberg (drapenchuk@bloomberg.net)
* Dustin Kirkland, Canonical (kirkland@canonical.com)
@@ -38,6 +39,7 @@ List below is the official list of TOC contributors, in alphabetical order:
* Ghe Rivero, Independent (ghe.rivero@gmail.com)
* Gou Rao, Portworx (gou@portworx.com)
* Ian Crosby, Container Solutions (ian.crosby@container-solutions.com)
* Jeyappragash JJ, Independent (pragashjj@gmail.com)
* Jonghyuk Jong Choi, NCSoft (jongchoi@ncsoft.com)
* Joseph Jacks, Independent (jacks.joe@gmail.com)
* Josh Bernstein, Dell (Joshua.Bernstein@dell.com)
@@ -53,8 +55,10 @@ List below is the official list of TOC contributors, in alphabetical order:
* Quinton Hoole, Huawei (quinton.hoole@huawei.com)
* Randy Abernethy, RX-M LLC (randy.abernethy@rx-m.com)
* Rick Spencer, Bitnami (rick@bitnami.com)
* Sarah Allen, Google (sarahallen@google.com)
* Timothy Chen, Hyperpilot (tim@hyperpilot.io)
* Xu Wang, Hyper (xu@hyper.sh)
* Yaron Haviv, iguazio (yaronh@iguaz.io)
* Yong Tang, Infoblox (ytang@infoblox.com)
* Yuri Shkuro, Uber (ys@uber.com)

PRINCIPLES.md (new file)

@@ -0,0 +1,147 @@
# CNCF TOC Principles
_Version 1.0, Nov 27, 2017_
Approved by TOC on: Nov 27, 2017
Approved by GB on: Dec 5, 2017
[TOC Operating Principles](#toc-operating-principles)
[We Are Project-Centric](#we-are-project-centric)
[Projects Are Self-Governing](#projects-are-self-governing)
[What We're Looking For](#what-were-looking-for)
[No Kingmakers & One Size Does Not Fit All](#no-kingmakers--one-size-does-not-fit-all)
[Not a Standards Body](#not-a-standards-body)
[We Want a Comprehensive Toolchain](#we-want-a-comprehensive-toolchain)
[Above All We Want To Help Projects](#above-all-we-want-to-help-projects)
## TOC Operating Principles
Now that CNCF has been active for over a year we want to start writing down what we have learned. Future TOCs can still make changes, but there will at least be documented precedent.
## We Are Project-Centric
_Principle: If it can be on a modern public source code control system, then it can be a project. And we put projects front and center._
CNCF is a home for several kinds of “project” where community collaboration furthers the goals of the CNCF community:
1. Open source software projects, e.g., Prometheus.
1. Projects that develop interface and/or schema specifications (e.g., [CNI](https://github.com/containernetworking/cni)), reference implementations, conformance tests, adaptors, etc., in order to facilitate interoperability.
1. Reference materials, such as architectures, stacks, guides, docs.
## Projects Are Self-Governing
_Principle: Minimal Viable Governance_
Our expectations around governance and support are all predicated on the notion that a CNCF project works like a typical, modern “community-owned” open source software project, such as a person might discover hosted on GitHub. That means that it has committers and shared ownership using source code control, etc. People who want the CNCF to support their thing need to make it into a project and support “GitHub-style” communities. (Though please note that CNCF projects don't actually need to live on GitHub.)
In the GitHub era, open projects are able to get a lot “done” without outside help. The CNCF does not want to get in the way of that. This starts with “minimal viable governance”.
- The CNCF, TOC, and GB are available for help if it is asked for.
- But: we do not want to impose bureaucracy on projects because that will slow them down.
- Minimal viable governance also means that the TOC does not step in at a tactical level to overrule project leads' decisions.
- There are some basics, like the [Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) (see the Draft Statement below), including dealing with problematic leads & maintainers.
- There is a formal & regulated system of [Graduation Criteria](https://www.cncf.io/projects/graduation-criteria/) for CNCF Projects
- The TOC/CNCF want the ability to intervene if things go really wrong - i.e., project leads are stuck and cannot fix things.
- Provide a template for new projects, a set of best practices to help jump-start the task of setting up a new project.
### Draft Public Statement for website
The CNCF is committed to helping its member projects succeed, but without dictating or micromanaging how the projects are run. To that end, it requires only minimal viable governance criteria: a [Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) with neutral processes for resolving conflicts, a documented governance model that includes a contribution-based process by which contributors can become committers or maintainers, and a clear definition of the top-level project leadership, with which the foundation will engage and inform and from which it may receive requests for funding and support. Once a project has graduated from incubation, new governance requirements cannot be imposed without consent of the project, except where legally required.
## What We're Looking For
We are looking for high-quality, high-velocity projects that fit cloud native.
_Principle: Great projects already have many ingredients to succeed. First: do no harm._
Identify projects that have a real shot at being a useful tool in the evolving box of cloud native technology. This is a mix of mature and early-stage projects. Early-stage projects may not have all the criteria we want: diverse contributor bases, formalized governance, interoperability, cloud-native designs, quality bar, etc.
Some considerations:
- Transparent, consistent technical and governance quality bar for [graduation](https://www.cncf.io/projects/graduation-criteria/) from incubation
- Has users, preferably in production; is a high-quality, high-velocity project (for incubation and graduated projects). The inception level targets earlier-stage projects, to cultivate a community/technology
- Has a committed and excited team that appears to understand the challenges ahead and wishes to meet them
- Has a fundamentally sound design without obvious critical compromises that will inhibit potential widespread adoption
- Is useful for cloud native deployments & ideally, is architected in a cloud native style
- Has an affinity for how CNCF wants to operate
- Charter Section 9(e): _New open source projects initiated in CNCF shall complete a project proposal template adopted by the TOC and be approved by the TOC for inclusion in CNCF. The TOC members shall be afforded sufficient time to discuss and review new project proposals. New project proposals shall include details of the roles in the project, the governance proposed for the project and identify alignment with CNCF's role and values_
## No Kingmakers & One Size Does Not Fit All
_Preamble:_
1. Many problems in technology have more than one solution. Markets are good at finding them. There may be multiple good OSS projects for solving a problem that are optimized for different edge cases.
1. Often multiple solutions get widespread use, for example, because they are optimized for different constraints. We don't want to get in the way of that by insisting that one technology is “the answer” for each functional gap that we can identify today. We believe that the market and user community provide a good mechanism for pushing the most appropriate projects forward over time. We want projects to enjoy the support of the CNCF during that process.
1. There is no “one true stack”: cloud native applications cover many different use cases with different needs. Many architectures are reasonable: from 12-factor to microservice, to stateful or data-intensive, to others. And there are many scales from one node to many, from low to high latency, etc. So “one size does not fit all”.
_Principles:_
1. No kingmakers. The TOC picks projects with a real chance of achieving widespread use, and it does not pick a winner in each category. Similar or competitive projects are not excluded for reasons of overlap.
1. No one stack. The TOC does not pick a “winning stack” - i.e., a vertically integrated set of projects as a solution for multiple application problems. Instead, by encouraging interop, we hope that a range of patterns & “stacks” will emerge.
Via the “no kingmakers” principle and “what is a project”, the CNCF may support several projects which show how a stack is a solution to certain use cases. For example, some stacks might use a container orchestrator. Other stacks might show how to integrate monitoring with other “observability” technologies, for cloud native apps regardless of their orchestration model.
This means that the CNCF is not promoting a single, monolithic stack of technologies.
The CNCF is a badge of quality and velocity. CNCF projects should be on a path to being tools that users can trust, that broadly work together, and that meet other cloud native criteria. But the CNCF badge does not mean “this is the standard tool”.
- Overlapping projects are ok, especially where they make significantly different design tradeoffs
- Who to pick: The market is too young for us to pick winners - sometimes we shall identify several really promising tools that overlap in function. Let's aim for eventual consistency based on real community use, and not create early deadlocks over "which one tool is best".
- CNCF resources, both time and money, are extremely limited, so we do need to choose carefully, and therefore should do some reasonable due diligence, including considering alternative projects.
- Overall the TOC will try to maintain a public roadmap or “backlog” where it sees interesting projects emerging, or space for “RFPs”, and “WGs”. While not a hard and fast document, this will help make dialogue with the community more transparent & efficient.
## Not a Standards Body
_Principle: CNCF promotes interoperability via interfaces that get real-world use_
Users and vendors both want as little friction as possible when it comes to integration. Taking any two projects and putting them into a larger stack, product, platform, solution, etc., is always easier when this is the case. For example, cloud native storage vendors want as few moving parts as possible when it comes to making their products and services work with the various emerging container platforms. At the same time, the world is littered with the untouchable remains of failed standards that promised to provide interoperability but did not get traction.
In the CNCF we like projects that are getting traction and may go on to become widespread and popular with end users and the ecosystem. We apply this thinking to the area normally covered by standards including specifications for APIs, SPIs, protocols, etc. Where a good interface exists we are happy to use it if our users like it, but we are not compelled to do so.
We want markets and users to drive interop, not committees. We want to help real-world use happen faster, and foster collaboration. We do not wish to become gated on committees.
We should focus on areas of rough 'de facto' agreement, under the proviso that early markets are also diverse. Possible areas to target:
- CNI - network
- CSI - storage
- CRI - runtime
- OpenMetrics
- CLI - logging
- SPIFFE
- (there is demand for some kind of serverless-related event scheme)
### How CNCF works with interface definitions like CNI, vs. standards efforts like OCI
The world has a number of recognized international standards bodies such as IETF and W3C. CNCF is not playing the role of a standards body. By contrast, OCI is a standards body.
CNCF may develop written materials in the style of the current CNI interface document, or in the style of an IETF RFC for example. These CNCF “specification” materials are not “standards”. It is possible that in the future an independent and recognized international standards body takes a CNCF document as “upstream” and evolves it into a standard via (e.g.) the IETF process. The CNCF is morally supportive of independent parties doing this, but does not see this work as its own responsibility. For that matter, to date, no such exercise is in progress.
In general CNCF specifications will evolve as “living documents” side by side with the CNCF OSS projects that adopt them. This means they are “dynamic and fast-moving” whereas a typical IETF standard is “static” and not subject to frequent updates & new releases.
For the avoidance of doubt: Any written “specification” materials in CNCF shall have the same status as CNCF open source software and project technical docs. That is to say that specifications shall be updated and release-versioned according to the conventions of a CNCF open source project.
### Important: Principle of Interoperability
CNCF values this highly: that for any given specification, multiple implementations exist. Those implementations will use the project's specification as the source of truth.
Moreover, CNCF shall not claim interoperability if there is only one implementation.
### Example: CNI
CNI fits all of the above requirements. There is 1) a specification and it is co-developed with 2) a library (libcni) and 3) ecosystem-contributed plugins. Collectively (1-3) form “the CNI Project”. In the future, CNI might include a compliance test suite which can be run against those implementations.
CNI is a software project, but the centerpiece of that project is the set of interfaces documented in the specification. Those interfaces live in the CNI Project. By the principle of interoperability, the existence of multiple CNI implementations is encouraged. Those implementations will use the interface definition from CNI as the source of truth.
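For illustration only, here is a minimal Go sketch of the shape of the contract described above: a small JSON network configuration consumed by interchangeable plugin implementations. The `NetConf` struct and `Plugin` interface are simplified assumptions for this sketch, not the actual types defined by the CNI specification or libcni.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NetConf mirrors the core fields of a CNI network configuration
// document (simplified for illustration; the real specification
// defines additional fields and plugin-specific extensions).
type NetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"` // names the plugin that implements this network
}

// Plugin is a hypothetical stand-in for the contract that every
// implementation of the specification honors.
type Plugin interface {
	Add(conf NetConf, containerID string) error
	Del(conf NetConf, containerID string) error
}

func main() {
	// A container runtime (or a library acting on its behalf) parses the
	// configuration and dispatches to whichever implementation the
	// "type" field names; the specification, not any single binary, is
	// the source of truth.
	raw := `{"cniVersion": "0.3.1", "name": "examplenet", "type": "bridge"}`

	var conf NetConf
	if err := json.Unmarshal([]byte(raw), &conf); err != nil {
		panic(err)
	}
	fmt.Printf("network %q is handled by plugin %q\n", conf.Name, conf.Type)
}
```

Because the configuration format, not a particular binary, defines the interface, multiple plugins can satisfy the same contract, which is exactly the interoperability property described above.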
### Example: OCI
OCI is not in the CNCF. The OCI project operates according to the norms of international standards bodies and has a clear primary goal: to provide a document that describes a standard set of interfaces for a container runtime and label this as 1.0. Nothing else is required.
## We Want a Comprehensive Toolchain
_Principle: users don't need to look beyond the CNCF for cloud native app tooling_
Grand vision: CNCF should identify, facilitate and promote a *complete toolset* for cloud native applications and stacks at scales from small to large. This enables customers to adopt good tools faster, and be less at risk of confusion and doubt.
We need to flesh out the portfolio of projects needed by users to succeed with cloud-native computing. It's pretty clear where remaining gaps in the project portfolio are. We should try to fill some of those, and at least document the rest. We can make more WGs to help with a few of those. (Note that “detailed architecture and stack” can be a Project)
## Above All We Want To Help Projects
_Principle: Our top priority is helping high-quality, high-velocity cloud native open source projects be the main driver of customer adoption and success_
We want to be able to say that CNCF is a net positive for big & small projects. Doing so requires more coordination with project leads.
Project needs may include test automation and CI, cloud resources to test on, clear documentation, per-project marketing & evangelism, roadmaps for interop, and advice from experts on governance and scalability. And we need to make sure project contributors see what value they are getting & are not afraid to ask for help!


@@ -36,10 +36,10 @@ The TOC has created the following working groups to investigate and discuss the
| Working Group | Chair | Meeting Time | Minutes/Recordings |
|---------------|------------------|---------------------------------------|--------------------|
| [CI](https://github.com/cncf/wg-ci) | Camille Fournier | [2nd and 4th Tue every month at 8AM PT](https://zoom.us/j/199346891) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2P3_A3ujWHSxOu1IO_bd7Zi)
| [Networking](https://github.com/cncf/wg-networking) | Ken Owens | [1st and 3rd Tue every month at 9AM PT](https://zoom.us/j/999936723) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2M_-K5n67_zTdrPh_PtTKFC)
| [Serverless](https://github.com/cncf/wg-serverless) | Ken Owens | [Thu of every week at 9AM PT](https://zoom.us/j/893315636) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2Ph7YoBIgsZNW_RGJvNlFOt)
| [Storage](https://github.com/cncf/wg-storage) | Ben Hindman | [2nd and 4th Wed every month at 8AM PT](https://zoom.us/j/158580155) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2NoiNaLVZxr-ERc1ifKP7n6)
| [CI](https://github.com/cncf/wg-ci) | Camille Fournier | [2nd and 4th Tue every month at 8AM PT](https://zoom.us/my/cncfciwg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2P3_A3ujWHSxOu1IO_bd7Zi)
| [Networking](https://github.com/cncf/wg-networking) | Ken Owens | [1st and 3rd Tue every month at 9AM PT](https://zoom.us/my/cncfnetworkingwg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2M_-K5n67_zTdrPh_PtTKFC)
| [Serverless](https://github.com/cncf/wg-serverless) | Ken Owens | [Thu of every week at 9AM PT](https://zoom.us/my/cncfserverlesswg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2Ph7YoBIgsZNW_RGJvNlFOt)
| [Storage](https://github.com/cncf/wg-storage) | Ben Hindman | [2nd and 4th Wed every month at 8AM PT](https://zoom.us/my/cncfstoragewg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2NoiNaLVZxr-ERc1ifKP7n6)
All meetings are on the public CNCF calendar: https://goo.gl/eyutah
@@ -83,6 +83,11 @@ Here is a link to a World Time Zone Converter here http://www.thetimezoneconvert
[Jaeger](https://github.com/jaegertracing/jaeger)|Bryan Cantrill|[8/1/17](https://goo.gl/ehtgts)|[9/13/17](https://www.cncf.io/blog/2017/09/13/cncf-hosts-jaeger/)|Incubating
[Notary](https://github.com/docker/notary)|Solomon Hykes|[6/20/17](https://goo.gl/6nmyDn)|[10/24/17](https://www.cncf.io/announcement/2017/10/24/cncf-host-two-security-projects-notary-tuf-specification/)|Incubating
[TUF](https://github.com/theupdateframework)|Solomon Hykes|[6/20/17](https://goo.gl/6nmyDn)|[10/24/17](https://www.cncf.io/announcement/2017/10/24/cncf-host-two-security-projects-notary-tuf-specification/)|Incubating
[rook](https://github.com/rook)|Ben Hindman|[6/6/17](https://goo.gl/6nmyDn)|[1/29/18](https://www.cncf.io/blog/2018/01/29/cncf-host-rook-project-cloud-native-storage-capabilities)|Inception
## Website Guidelines
CNCF has the following [guidelines](https://www.cncf.io/projects/website-guidelines/) for the websites of our projects.
## Scheduled Community Presentations
@@ -116,8 +121,12 @@ If you're interested in presenting at a TOC call about your project, please open
* **October 17, 2017**: TOC Principles / GB (Todd Moore) and OpenMetrics Update
* **November 7, 2017**: Istio, SPIFFE.io and Serverless WG
* **November 14, 2017**: [OPA](http://www.openpolicyagent.org/) and Storage WG/CSI and Project Graduation/Health Reviews (Kubernetes, Prometheus)
* **December 5, 2017**: Rook
* **January 16, 2017**: (interested presenters contact cra@linuxfoundation.org or open up a github [issue](https://github.com/cncf/toc/issues)
* **December 5, 2017**: Rook, OpenOverlay
* **December 7, 2017**: KubeCon/CloudNativeCon F2F
* **January 16, 2018**: CSI/Storage WG Readout
* **Feb 6, 2018**: NATS
* **Feb 20, 2018**: CoreDNS Inception Project Review
* **Mar 6, 2018**: (interested presenters contact cra@linuxfoundation.org or open up a GitHub [issue](https://github.com/cncf/toc/issues))
## Meeting Minutes
@@ -163,3 +172,6 @@ If you're interested in presenting at a TOC call about your project, please open
* [October 17th, 2017](https://goo.gl/hH6fS4)
* [November 7th, 2017](https://goo.gl/LoKyV5)
* [November 14th, 2017](https://goo.gl/vKbawR)
* [December 5th, 2017](https://goo.gl/77pMFY)
* [December 7th, 2017](https://goo.gl/Ugo7F9)
* [January 16th, 2018](https://goo.gl/5wBe3d)


@@ -0,0 +1,147 @@
# Due Diligence Guidelines
This page provides guidelines to those leading or contributing to due
diligence exercises performed by or on behalf of the Technical
Oversight Committee of the CNCF.
## Introduction
Part of the evaluation process in deciding upon initial or continued
inclusion of projects into the CNCF is a Technical Due Diligence
('Tech DD') exercise. Ultimately the voting members of the TOC will,
on the basis of this and other information, vote for or against the
inclusion of each project at the relevant time.
## Leading a Technical Due Diligence
### Primary Goals
To enable the voting TOC members to cast an informed vote about a
project, it is crucial that each member is able to form their own
opinion as to whether and to what extent the project meets the agreed
upon [criteria](https://www.cncf.io/projects/graduation-criteria/) for
inception, incubation or graduation. As the leader of a DD, your job
is to make sure that they have whatever information they need,
succinctly and readily available, to form that opinion.
As a secondary goal, it is in the interests of the broader CNCF
ecosystem that there exists some reasonable degree of consensus across
the community regarding the inclusion or otherwise of projects at the
various maturity levels. Making sure that the relevant information is
available, and that any disagreement or misunderstanding as to its
validity is ideally resolved, helps to foster this consensus.
### Where to start
* make sure you're clear on the [TOC Principles](https://github.com/cncf/toc/blob/master/PRINCIPLES.md),
the [project proposal process](https://github.com/cncf/toc/blob/master/process/project_proposals.adoc),
the [graduation criteria](https://www.cncf.io/projects/graduation-criteria/)
and the [desired cloud native properties](https://www.cncf.io/about/charter/). The project sponsor (a member
of the TOC) should have assisted in crafting the proposal to explain why it's a good fit for the CNCF. If anything's
unclear to you, reach out to the project sponsor or, failing that, the TOC mailing list for advice.
* make sure you've read, in detail, the relevant [project proposal](https://github.com/cncf/toc/tree/master/proposals).
This will usually be in the form of an [open pull request](https://github.com/cncf/toc/pulls).
Consider holding off on commenting on the PR until you've completed the next three steps.
* take a look at some [previous submissions](https://github.com/cncf/toc/pulls?utf8=%E2%9C%93&q=is%3Apr)
(both successful and unsuccessful) to help calibrate your expectations.
* Verify that all of the basic [project proposal requirements](https://github.com/cncf/toc/blob/master/process/project_proposals.adoc) have been provided.
* do as much reading up as you need to (and consult with experts in the specific field) in order to familiarize yourself with the technology
landscape in the immediate vicinity of the project (and don't only use the proposal and that project's documentation as a guide in this regard).
* at this point you should have a very clear technical idea of what exactly the project actually does and does not do, roughly how it compares with and differs from
similar projects in its technology area, and/or a set of unanswered questions in those regards.
* go through the [graduation criteria](https://www.cncf.io/projects/graduation-criteria/) and for each item,
decide for yourself whether or not you have enough info to make a strong, informed call on that item.
* If so, write it down, with motivation.
* If not, jot down what information you feel you're missing.
* Also take note of what unanswered questions the community might have posted in the PR review that you consider
to be critically important.
### Some example questions that will ideally need clear answers
Most of these should be covered in the project proposal document. The
due diligence exercise involves validating any claims made there,
verifying adequate coverage of the topics, and possibly summarizing
the detail where necessary.
#### Technical
* An architectural, design and feature overview should be available.
([example](https://github.com/docker/notary/blob/master/docs/service_architecture.md),
[example](https://github.com/docker/notary/blob/master/docs/command_reference.md))
* What are the primary target cloud-native use cases? Which of those:
* Can be accomplished now.
* Can be accomplished with reasonable additional effort (and are ideally already on the project roadmap).
* Are in-scope but beyond the current roadmap.
* Are out of scope.
* What are the current performance, scalability and resource consumption bounds of the software? Have these been explicitly tested?
Are they appropriate given the intended usage (e.g. agent-per-node or agent-per-container need to be lightweight, etc)?
* What exactly are the failure modes? Are they well understood? Have they been tested? Do they form part of continuous integration testing?
Are they appropriate given the intended usage (e.g. cluster-wide shared services need to fail gracefully etc)?
* What trade-offs have been made regarding performance, scalability, complexity, reliability, security etc? Are these trade-offs explicit or implicit?
Why? Are they appropriate given the intended usage? Are they user-tunable?
* What are the most important holes? No HA? No flow control? Inadequate integration points?
* Code quality. Does it look good, bad, or mediocre to you (based on a spot review)? How thorough are the code reviews? Substance over form.
Are there explicit coding guidelines for the project?
* Dependencies. What external dependencies exist, and do they seem justified?
* What is the release model? Versioning scheme? Evidence of stability or otherwise of past stable released versions?
* What is the CI/CD status? Do explicit code coverage metrics exist? If not, what is the subjective adequacy of automated testing?
Do different levels of tests exist (e.g. unit, integration, interface, end-to-end), or is there only partial coverage in this regard? Why?
* What licensing restrictions apply? Again, CNCF staff will handle the full legal due diligence.
* What are the recommended operational models? Specifically, how is it operated in a cloud-native environment, such as on Kubernetes?
#### Project
The key high-level questions that the voting TOC members will be looking to have answered are (from the [graduation criteria](https://www.cncf.io/projects/graduation-criteria/)):
* Do we believe this is a growing, thriving project with committed contributors?
* Is it aligned with CNCF's values and mission?
* Do we believe it could eventually meet the graduation criteria?
* Should it start at the inception level or incubation level?
Some details that might inform the above include:
* Does the project have a sound, documented process for source control, issue tracking, release management, etc.?
* Does it have a documented process for adding committers?
* Does it have a documented governance model of any kind?
* Does it have committers from multiple organizations?
* Does it have a code of conduct?
* Does it have a license? Which one? Does it have a CLA or DCO? Are the licenses of its dependencies compatible with their usage and CNCF policies?
CNCF staff will handle the full legal due diligence.
* What is the general quality of informal communication around the project (slack, github issues, PR reviews, technical blog posts, etc)?
* How much time does the core team commit to the project?
* How big is the team? Who funds them? Why? How much? For how long?
* Who are the clear leaders? Are there any areas lacking clear leadership? Testing? Release? Documentation? These roles sometimes go unfilled.
* Besides the core team, how active is the surrounding community? Bug reports? Assistance to newcomers? Blog posts etc.
* Do they make it easy to contribute to the project? If not, what are the main obstacles?
* Are there any especially difficult personalities to deal with? How is this done? Is it a problem?
* What is the rate of ongoing contributions to the project (typically in the form of merged commits)?
#### Users
* Who uses the project? Get a few in-depth references from 2-4 of them who actually know and understand it.
* What do real users consider to be its strengths and weaknesses? Any concrete examples of these?
* Perception vs Reality: Is there lots of buzz, but the software is flaky/untested/unused? Does it have a bad reputation for some flaw that has already been addressed?
#### Context
* What is the origin and history of the project?
* Where does it fit in the market and technical ecosystem?
* Is it growing or shrinking in that space? Is that space growing or shrinking?
* How necessary is it? What do people who don't use this project do? Why exactly is that not adequate, and in what situations?
* Clearly compare and contrast with peers in this space. A summary matrix often helps.
Beware of comparisons that are too superficial to be useful, or might have been manipulated so as to favor some projects over others.
Most balanced comparisons will include both strengths and weaknesses, require significant detailed research, and usually there is no hands-down winner.
Be suspicious if there appears to be one.
#### Other advice
* Bring in other people (e.g. from your company) who might be more familiar with a
particular area than you are, to assist where needed. Even if you know the area,
additional perspectives from experts are usually valuable.
* Conduct as much of the investigation in public as is practical. For example, favor explicit comments on the
submission PR over private emails, phone calls etc. By all means conduct whatever communication might be
necessary to do a thorough job, but always try to summarize these discussions in the PR so that others can follow along.
* Explicitly disclose any vested interest or potential conflict of interest that you, the project sponsor,
the project champion, or any of the reviewers have in the project. If this creates any significant concerns regarding
impartiality, it's usually best for those parties to recuse themselves from the submission and its evaluation.
* Fact-check where necessary. If an answer you get to a question doesn't smell right, check the underlying data, or get a second/third... opinion.

proposals/rook.adoc (new file)

@@ -0,0 +1,155 @@
== Rook
*Name of project:* Rook
*Description:*
Rook is an open source orchestrator for distributed storage systems running in cloud native environments.
Distributed storage systems are inherently complex -- they define strong consistency and durability guarantees that must hold even when scaling, upgrading, and running maintenance operations. They require careful provisioning and balancing of resources to optimize access to data and maintain durability. It's common for such systems to require dedicated administrators.
Rook turns distributed storage systems into self-managing, self-scaling, and self-healing storage services. It does this by automating the tasks of a storage administrator including deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook leverages the power of the underlying cloud-native container management, scheduling, and orchestration platform to perform its duties.
Rook integrates deeply into cloud native environments leveraging extension points and providing a seamless experience for scheduling, lifecycle management, resource management, security, monitoring, and user experience.
Rook is currently in alpha state and has focused initially on orchestrating Ceph on-top of Kubernetes. Ceph is a distributed storage system that provides file, block and object storage and is deployed in large scale production clusters. Rook is planning to be production ready by Dec'17 for block storage deployments on-top of Kubernetes.
With community participation, Rook plans to add support for other storage systems beyond Ceph and other cloud native environments beyond Kubernetes. The logic for orchestrating storage systems can be reused across storage backends. Also having common abstractions, packaging, and integrations reduces the burden of introducing storage back-ends and improves the overall experience.
*Statement on alignment with CNCF mission:*
Rook is well-aligned with CNCF's goals and mission of promoting cloud-native computing. Rook adheres to the core principles of cloud-native systems: container packaged, micro-services oriented, and dynamically managed.
Rook is complementary to other CNCF projects like Kubernetes and Prometheus. It integrates with and extends Kubernetes and has a strong alignment on design and architecture. Rook is itself implemented as a controller (reconciling desired and actual state), and uses the Kubernetes API extensively to perform its functions. Rook exposes monitoring and instrumentation via Prometheus.
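As a rough illustration of the controller pattern described above (not Rook's actual code; the types and logic below are simplified assumptions), a reconcile pass compares desired state read from the Kubernetes API against observed state and acts on the difference:

[source,go]
----
package main

import "fmt"

// ClusterSpec is a hypothetical, simplified stand-in for the desired
// state a storage operator would read from the Kubernetes API.
type ClusterSpec struct {
	StorageNodes int
}

// ClusterStatus is the observed state of the running storage system.
type ClusterStatus struct {
	ReadyNodes int
}

// reconcile converges actual state toward desired state. A real
// controller would create or remove storage pods, update the storage
// topology, and requeue itself; this sketch only reports the delta.
func reconcile(spec ClusterSpec, status ClusterStatus) {
	switch {
	case status.ReadyNodes < spec.StorageNodes:
		fmt.Printf("scaling up: add %d storage node(s)\n", spec.StorageNodes-status.ReadyNodes)
	case status.ReadyNodes > spec.StorageNodes:
		fmt.Printf("scaling down: remove %d storage node(s)\n", status.ReadyNodes-spec.StorageNodes)
	default:
		fmt.Println("desired and observed state match; nothing to do")
	}
}

func main() {
	reconcile(ClusterSpec{StorageNodes: 3}, ClusterStatus{ReadyNodes: 1})
}
----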
Rook brings distributed storage services into cloud-native environments beyond what has been done to date with plugins (including CSI). We believe that by running storage systems on-top of cloud-native environments we will be a step closer to the multi-cloud vision.
*Sponsor / Advisor from TOC:* Benjamin Hindman
*Unique identifier:* rook
*Preferred maturity level:* inception
*License:* Apache License v2.0
*Source control repositories:* https://github.com/rook/rook
*External Dependencies:*
Golang package dependencies:
* https://github.com/coreos/pkg (Apache v2.0)
* https://github.com/go-ini/ini (Apache v2.0)
* https://github.com/google/uuid (BSD 3-Clause)
* https://github.com/gorilla/mux (Apache v2.0)
* https://github.com/jbw976/go-ps (Apache v2.0)
* https://github.com/kubernetes/api (Apache v2.0)
* https://github.com/kubernetes/apiextensions-apiserver (Apache v2.0)
* https://github.com/kubernetes/apimachinery (Apache v2.0)
* https://github.com/kubernetes/apiserver (Apache v2.0)
* https://github.com/kubernetes/client-go (Apache v2.0)
* https://github.com/kubernetes/code-generator (Apache v2.0)
* https://github.com/kubernetes/kubernetes (Apache v2.0)
* https://github.com/kubernetes/utils (Apache v2.0)
* https://github.com/prometheus/client_golang (Apache v2.0)
* https://github.com/rook/operator-kit (Apache v2.0)
* https://github.com/spf13/cobra (Apache v2.0)
* https://github.com/spf13/pflag (BSD 3-Clause)
Binary dependencies packaged into Rook containers:
* Ceph (mostly LGPL 2.0) - https://github.com/ceph/ceph
*Initial Committers:*
* Bassam Tabbara (Upbound)
* Jared Watts (Quantum)
* Travis Nielsen (Quantum)
The current list is at https://github.com/rook/rook/blob/master/MAINTAINERS. Maintainers are updated according to the following rules: https://github.com/rook/rook/blob/master/MAINTAINERS_RULES.md
*Infrastructure requests (CI / CNCF Cluster):*
CI currently at https://jenkins.rook.io but could move to CNCF CI.
Planning to use CNCF cluster for integration and performance testing at scale.
*Communication Channels:*
* Slack: https://rook-slackin.herokuapp.com
* Gitter: https://gitter.im/rook/rook (deprecated)
* Google Groups: https://groups.google.com/forum/#!forum/rook-dev
* Email: mailto:info@rook.io[info@rook.io]
*Issue tracker:* https://github.com/rook/rook/issues
*Website:* https://rook.io
*Release methodology and mechanics:*
Major releases roughly every two months, minor releases as needed.
*Social media accounts:*
* Twitter: @rook_io
*Existing sponsorship*: Quantum and Upbound
Statement from Quantum: In 2016 as part of ongoing product development work we identified the need for richer implementations of storage technologies in Cloud Native systems. As this work progressed we felt that it was evolving into a core component of the platform architecture and chose to open-source our work. Since then Quantum has continued to invest in both the Rook technologies and launching Rook as a vibrant open source project. Internally we are utilizing Rook as well as many other Cloud Native technologies to build systems relevant to our businesses. We firmly believe that a vibrant Rook project and ecosystem is in our and the community's best interests. As the project continues to grow our role will become less significant in terms of strategy and direction and we think this evolution and adopting well established governance principles will strengthen the project.
*Community size:*
* Rook was open sourced Nov'2016
* 1785+ stars
* 40+ contributors
* 155+ forks
* 135+ on slack
* 600K+ container pulls (quay.io), 50K+ container pulls (docker)
*Comparison with gluster-kubernetes and ceph-container*:
Existing approaches to running distributed storage systems like Ceph and Gluster focus primarily on packaging in containers, initial deployment, and bootstrapping. There is no central controller that is responsible for ongoing operations, dynamic management and maintenance of such storage systems. While some of these operations can be handled by the orchestration platform itself (for example, scaling through stateful-sets in Kubernetes) the approach only covers a small subset of the administration tasks and does not take into account the inherent constraints and guarantees of the backend storage system. For example, growing a cluster in Ceph not only requires scheduling more storage nodes but also updating the storage topology to optimize data access and improve durability all without breaking consistency guarantees. Rook's storage controller is responsible for ongoing and dynamic management of the storage system and it does so in a storage backend specific way.
Rook introduces new abstractions for storage clusters, pools, volumes, volume attachments, snapshots and others that are extension points of the cloud-native environment. This leads to a deeper integration into cloud-native environments. Other approaches like gluster-kubernetes and ceph-container rely on their own storage API for management and integrate primarily at the volume plugin level, and not the storage service level.
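To make the idea of storage abstractions as extension points concrete, a hedged sketch follows; the `PoolSpec` fields here are illustrative assumptions, not Rook's actual custom resource schema:

[source,go]
----
package main

import "fmt"

// PoolSpec is a hypothetical, simplified custom-resource-style spec
// for a storage pool; Rook's real resources define their own fields.
type PoolSpec struct {
	Name          string
	ReplicaCount  int    // number of data replicas to maintain
	FailureDomain string // spread replicas across e.g. "host" or "zone"
}

func main() {
	// A storage controller would watch objects like this through the
	// Kubernetes API and translate them into backend-specific actions
	// (for Ceph: creating a pool, adjusting placement, and so on).
	pool := PoolSpec{Name: "replicapool", ReplicaCount: 3, FailureDomain: "host"}
	fmt.Printf("ensure pool %q with %d replicas spread across each %s\n",
		pool.Name, pool.ReplicaCount, pool.FailureDomain)
}
----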
Finally, Rook is designed to run primarily as an application on cloud-native systems, minimizing (and eventually eliminating) all dependencies on the host platform. For example, Rook runs using Kubernetes networking, whereas other approaches like ceph-container require host networking.
*Comparison with minio*:
Minio is a distributed object store that is designed for cloud applications. Minio focuses on simplicity of deployment and operations. Rook could orchestrate Minio just like it does with Ceph's object store (rgw). Some of the operational tasks that Rook would perform include initial deployment, dealing with erasure-coding and multi-tenancy constraints, locking and dsync quorum, topology, and healing storage nodes on loss events. Rook also exposes object store abstractions that could be used by Minio for a deeper integration into cloud-native environments like Kubernetes.
*Production usage*:
Rook is in alpha and has little production usage. The first stable release of Rook is expected in Dec'2017. Ceph is production ready and is deployed in large-scale production environments. There are a number of companies and users that have deployed Rook in testing and staging environments (on-premise and public cloud), and a few that have deployed it in production (see quotes below). Quantum Corp. (the current sponsor of the Rook project) plans to deploy Rook within commercial enterprise storage appliances early next year.
[quote, Brandon Philips, CTO - CoreOS]
CoreOS helps companies ensure their critical application infrastructure is able to run free from cloud lock-in with CoreOS Tectonic and Kubernetes APIs. We are encouraged to see storage systems, like Rook, emerging that build directly upon those APIs to deliver a flexible cloud-agnostic storage solution.
[quote, Sasha Klizhentas, CTO - Gravitational]
Gravitational team is excited to be early adopters of Rook. Rook's solid foundation makes it the leader among emerging cloud-native storage solutions.
[quote, Hunter Nield, CTO - Acaleph]
At Acaleph, we're excited for a true cloud-native storage platform. Having experienced the complexity of running Ceph on Kubernetes, Rook provides the stability and power of an established software-defined storage solution with ease of use of native Kubernetes integration. With the latest release of Rook, we're looking to implement it as a core part of our storage platform.
[quote, Matt Baldwin, CTO - StackPointCloud]
I have been watching adoption of Rook grow within our 6,000+ base of Kubernetes users. We have worked with users to prototype Rook in their Deployments. As it approaches a production release, I have plans to include and support it as a part of the official Stackpoint.io offering.
[quote, Bryan Zubrod, Founder - Zubrod Farms]
On my farm it's important to make efficient use of resources I already have. With Rook's Kubernetes-native design I am able to use commodity hardware without sacrificing redundancy for my storage or availability of my services. That's why Rook fits perfectly in my farm's metrics and automation systems, and I follow its development closely.
[quote, Jason Vigil, Software Engineer - Dell/EMC]
Rook looks like a simple and easy solution for persistent storage in a Kubernetes environment. I plan to use it for upcoming projects.
[quote, Lucas Käldström, Founder - luxas labs]
I'm really excited to see Rook evolve to a fully production-grade system. I've used and contributed to it from an early stage and can't wait to use it in even more prod systems
[quote, Patrick Stadler, Software Engineer - Liip]
Utilizing hyper-converged systems with storage tightly coupled to computational resources reduces cost and operational complexity of infrastructure. This is especially true for small scale cluster deployments. The biggest challenge with Kubernetes on bare metal is providing distributed block storage. Although proprietary solutions exist, there's been a lack of well-backed open source solutions. Rook has the potential to fill this void.