Compare commits


2 commits

Author SHA1 Message Date
Chris Aniszczyk
3bd5e29a0d
Address Quinton's concerns 2018-08-20 15:30:28 -05:00
Chris Aniszczyk
d1c648ad6c
Add a requirement regarding security audits
A part of achieving a CII Badge involves setting up a security disclosure process, which is a great practice for all open source projects to have. However, not all security disclosure processes are tested, so the TOC is considering a requirement, going forward, that CNCF projects go through a third-party security audit, which helps test the security disclosure process.
2018-08-20 14:49:17 -05:00
14 changed files with 27 additions and 960 deletions

View file

@@ -24,7 +24,7 @@ List below is the official list of TOC contributors, in alphabetical order:
* Ara Pulido, Bitnami (ara@bitnami.com)
* Ayrat Khayretdinov (akhayertdinov@cloudops.com)
* Bassam Tabbara, Upbound (bassam@upbound.io)
* Bob Wise, Amazon Web Services (bob@bobsplanet.com)
* Bob Wise, Samsung SDS (bob@bobsplanet.com)
* Cathy Zhang, Huawei (cathy.h.zhang@huawei.com)
* Chase Pettet, Wikimedia Foundation (cpettet@wikimedia.org)
* Christopher Liljenstople, Tigera (cdl@asgaard.org)
@@ -37,7 +37,6 @@ List below is the official list of TOC contributors, in alphabetical order:
* Drew Rapenchuk, Bloomberg (drapenchuk@bloomberg.net)
* Dustin Kirkland, Canonical (kirkland@canonical.com)
* Eduardo Silva, Treasure Data (eduardo@treasure-data.com)
* Edward Lee, Intuit (edward_lee@intuit.com)
* Erin Boyd, Red Hat (eboyd@redhat.com)
* Gergely Csatari, Nokia (gergely.csatari@nokia.com)
* Ghe Rivero, Independent (ghe.rivero@gmail.com)
@@ -50,14 +49,11 @@ List below is the official list of TOC contributors, in alphabetical order:
* Joseph Jacks, Independent (jacks.joe@gmail.com)
* Josh Bernstein, Dell (Joshua.Bernstein@dell.com)
* Justin Cormack, Docker (justin.cormack@docker.com)
* Jun Du, Huawei (dujun5@huawei.com)
* Kiran Mova, MayaData (kiran.mova@mayadata.io)
* Lachlan Evenson, Microsoft (lachlan.evenson@microsoft.com)
* Lee Calcote, SolarWinds (leecalcote@gmail.com)
* Lei Zhang, HyperHQ (harryzhang@zju.edu.cn)
* Louis Fourie, Huawei (louis.fourie@huawei.com)
* Mark Peek, VMware (markpeek@vmware.com)
* Matt Farina, Samsung SDS (matt@mattfarina.com)
* Matthew Fornaciari, Gremlin (forni@gremlin.com)
* Nick Chase, Mirantis (nchase@mirantis.com)
* Pengfei Ni, Microsoft (peni@microsoft.com)
@@ -66,7 +62,6 @@ List below is the official list of TOC contributors, in alphabetical order:
* Randy Abernethy, RX-M LLC (randy.abernethy@rx-m.com)
* Rick Spencer, Bitnami (rick@bitnamni.com)
* Sarah Allen, Google (sarahallen@google.com)
* Steven Dake, Cisco (stdake@cisco.com)
* Tammy Butow, Gremlin (tammy@gremlin.com)
* Timothy Chen, Hyperpilot (tim@hyperpilot.io)
* Vasu Chandrasekhara, SAP SE (vasu.chandrasekhara@sap.com)
@@ -75,4 +70,4 @@ List below is the official list of TOC contributors, in alphabetical order:
* Yaron Haviv, iguazio (yaronh@iguaz.io)
* Yong Tang, Infoblox (ytang@infoblox.com)
* Yuri Shkuro, Uber (ys@uber.com)
* Zefeng (Kevin) Wang, Huawei (wangzefeng@huawei.com)

View file

@@ -1,6 +0,0 @@
We would like to acknowledge previous TOC members and their huge contributions to our collective success:
* Solomon Hykes (1/29/2016 - 3/17/2018)
* Elissa Murphy (1/29/2016 - 10/2/2017)
We thank these members for their service to the CNCF community.

View file

@@ -39,19 +39,15 @@ The TOC has created the following working groups to investigate and discuss the
| [CI](https://github.com/cncf/wg-ci) | Camille Fournier | [4th Tue of every month at 8AM PT](https://zoom.us/my/cncfciwg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2P3_A3ujWHSxOu1IO_bd7Zi)
| [Networking](https://github.com/cncf/wg-networking) | Ken Owens | [1st and 3rd Tue every month at 9AM PT](https://zoom.us/my/cncfnetworkingwg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2M_-K5n67_zTdrPh_PtTKFC)
| [Serverless](https://github.com/cncf/wg-serverless) | Ken Owens | [Thu of every week at 9AM PT](https://zoom.us/my/cncfserverlesswg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2Ph7YoBIgsZNW_RGJvNlFOt)
| [Storage](https://github.com/cncf/wg-storage) | Quinton Hoole | [2nd and 4th Wed every month at 8AM PT](https://zoom.us/my/cncfstoragewg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2NoiNaLVZxr-ERc1ifKP7n6)
| [Storage](https://github.com/cncf/wg-storage) | Ben Hindman | [2nd and 4th Wed every month at 8AM PT](https://zoom.us/my/cncfstoragewg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2NoiNaLVZxr-ERc1ifKP7n6)
All meetings are on the public CNCF calendar: https://goo.gl/eyutah
## Meeting Agenda and Minutes
Meeting Minutes are recorded here: https://docs.google.com/document/d/1jpoKT12jf2jTf-2EJSAl4iTdA7Aoj_uiI19qIaECNFc/edit#
## Meeting Time
The TOC meets on the 1st and 3rd Tuesday of every month at 8AM PT (USA Pacific):
https://zoom.us/j/967220397
https://zoom.us/j/263858603
Or Telephone:
@@ -87,7 +83,7 @@ Here is a link to a World Time Zone Converter here http://www.thetimezoneconvert
[Jaeger](https://github.com/jaegertracing/jaeger)|Bryan Cantrill|[8/1/17](https://goo.gl/ehtgts)|[9/13/17](https://www.cncf.io/blog/2017/09/13/cncf-hosts-jaeger/)|Incubating
[Notary](https://github.com/docker/notary)|Solomon Hykes|[6/20/17](https://goo.gl/6nmyDn)|[10/24/17](https://www.cncf.io/announcement/2017/10/24/cncf-host-two-security-projects-notary-tuf-specification/)|Incubating
[TUF](https://github.com/theupdateframework)|Solomon Hykes|[6/20/17](https://goo.gl/6nmyDn)|[10/24/17](https://www.cncf.io/announcement/2017/10/24/cncf-host-two-security-projects-notary-tuf-specification/)|Incubating
[rook](https://github.com/rook)|Ben Hindman|[6/6/17](https://goo.gl/6nmyDn)|[1/29/18](https://www.cncf.io/blog/2018/01/29/cncf-host-rook-project-cloud-native-storage-capabilities)|Incubating
[rook](https://github.com/rook)|Ben Hindman|[6/6/17](https://goo.gl/6nmyDn)|[1/29/18](https://www.cncf.io/blog/2018/01/29/cncf-host-rook-project-cloud-native-storage-capabilities)|Sandbox
[Vitess](https://github.com/vitessio/vitess)|Brian Grant|[4/19/17](https://goo.gl/6nmyDn)|[2/5/18](https://www.cncf.io/blog/2018/02/05/cncf-host-vitess/)|Incubating
[NATS](https://github.com/nats-io/gnatsd)|Alexis Richardson|[9/21/16](https://goo.gl/6nmyDn)|[3/15/18](https://www.cncf.io/blog/2018/03/15/cncf-to-host-nats/)|Incubating
[SPIFFE](https://github.com/spiffe)|Brian Grant, Sam Lambert, Ken Owens|[11/7/17](https://goo.gl/6nmyDn)|[3/29/18](https://www.cncf.io/blog/2018/03/29/cncf-to-host-the-spiffe-project/)|Sandbox
@@ -95,13 +91,8 @@ Here is a link to a World Time Zone Converter here http://www.thetimezoneconvert
[CloudEvents](https://github.com/cloudevents)|Brian Grant, Ken Owens|[11/14/17](https://goo.gl/vKbawR)|[5/22/18](https://www.cncf.io/blog/2018/05/22/cloudevents-in-the-sandbox/)|Sandbox
[Telepresence](https://github.com/telepresenceio)|Alexis Richardson, Camille Fournier|[4/17/18](https://docs.google.com/presentation/d/1VrHKGre5Y8AbmXEOXu4VPfILReoLT38Uw9TMN71u08E/edit?usp=sharing)|[5/22/18](https://www.cncf.io/blog/2018/05/22/telepresence-in-the-sandbox/)|Sandbox
[Helm](https://github.com/helm)|Brian Grant|[5/15/18](https://docs.google.com/presentation/d/1KNSv70fyTfSqUerCnccV7eEC_ynhLsm9A_kjnlmU_t0/edit#slide=id.g25ca91f87f_0_0)|[6/1/18](https://www.cncf.io/blog/2018/06/01/cncf-to-host-helm/)|Incubating
[Harbor](https://github.com/goharbor)|Quinton Hoole, Ken Owens|[6/19/18](https://docs.google.com/presentation/d/1KNSv70fyTfSqUerCnccV7eEC_ynhLsm9A_kjnlmU_t0/edit#slide=id.g25ca91f87f_0_0)|[7/31/18](https://www.cncf.io/blog/2018/07/31/cncf-to-host-harbor-in-the-sandbox/)|Incubating
[Harbor](https://github.com/goharbor)|Quinton Hoole, Ken Owens|[6/19/18](https://docs.google.com/presentation/d/1KNSv70fyTfSqUerCnccV7eEC_ynhLsm9A_kjnlmU_t0/edit#slide=id.g25ca91f87f_0_0)|[7/31/18](https://www.cncf.io/blog/2018/07/31/cncf-to-host-harbor-in-the-sandbox/)|Sandbox
[OpenMetrics](https://github.com/OpenObservability/OpenMetrics)|Alexis Richardson, Bryan Cantrill|[6/20/17](https://goo.gl/6nmyDn)|[8/10/18](https://www.cncf.io/blog/2018/08/10/cncf-to-host-openmetrics/)|Sandbox
[TiKV](https://github.com/tikv/tikv)|Ben Hindman, Bryan Cantrill|[7/3/18](https://docs.google.com/presentation/d/1864TEfbwCpbW5kPYGQNAfqAUdc3X83n-_OYigqxfohw/edit?usp=sharing)|[8/28/18](https://www.cncf.io/blog/2018/08/28/cncf-to-host-tikv/)|Sandbox
[Cortex](https://github.com/cortexproject/cortex)|Ken Owens, Bryan Cantrill|[6/5/18](https://docs.google.com/presentation/d/190oIFgujktVYxWZLhLYN4q8p9dtQYoe4sxHgn4deBSI/edit#slide=id.g25ca91f87f_0_0)|[9/20/18](https://www.cncf.io/blog/2018/09/20/cncf-to-host-in-the-sandbox/)|Sandbox
[Buildpacks](https://github.com/buildpack/spec)|Brian Grant, Alexis Richardson|[8/21/18](https://docs.google.com/presentation/d/1RkygwZw7ILVgGhBpKnFNgJ4BCc_9qMG8cIf0MRbuzB4/edit?usp=sharing)|[10/3/18](https://www.cncf.io/blog/2018/10/03/cncf-to-host-cloud-native-buildpacks-in-the-sandbox)|Sandbox
[Falco](https://github.com/falcosecurity/falco)|Brian Grant, Quinton Hoole|[7/17/18](https://docs.google.com/presentation/d/17p5QBVooGMLAtX6Mn6d3NAFhRmFHE0cH-WI_-0MbOm8/edit?usp=sharing)|[10/10/18](https://falco.org/)|Sandbox
[Dragonfly](https://github.com/dragonflyoss/dragonfly)|Jonathan Boulle, Benjamin Hindman|[9/4/18](https://docs.google.com/presentation/d/1umu-iT5ZXq5XsMFmqmVeRe-tn2y7DeSoCebhrehi7fk/edit#slide=id.g41381b8fd7_0_199)|[11/15/18](https://github.com/oss/dragonfly)|Sandbox
## Website Guidelines
@@ -109,7 +100,7 @@ CNCF has the following [guidelines](https://www.cncf.io/projects/website-guideli
## Scheduled Community Presentations
If you're interested in presenting at a TOC call about your project, please open a [github issue](https://github.com/cncf/toc/issues) with the request. We can schedule a maximum of one community presentation per TOC meeting.
If you're interested in presenting at a TOC call about your project, please open a [github issue](https://github.com/cncf/toc/issues) with the request. We can schedule a maximum of two community presentations per TOC meeting.
* **May 4th, 2016**: [Prometheus](https://prometheus.io/) ([overview](https://docs.google.com/presentation/d/1GtVX-ppI95LhrijprGENsrpq78-I1ttcSWLzMVk5d8M/edit?usp=sharing)): Fabian Reinartz, Julius Volz
* **August 3rd, 2016**: [Fluentd](http://www.fluentd.org/) ([overview](https://docs.google.com/presentation/d/1S79MNv3E2aG8nuZJFJ0XMSumf7jnKozN3vdrivCH77U/edit?usp=sharing)): Kiyoto Tamura / [Heron](https://github.com/twitter/heron) ([overview](https://docs.google.com/presentation/d/1pKwNO2V3VScjD1JxJ0gEgFTwAOccJgaJxHWgwcyczec/edit?usp=sharing)): Karthik Ramasamy / [Minio](https://minio.io/) ([overview](https://docs.google.com/presentation/d/1DGm_Zwq7qYHaXm6ZH26RAQeyBAKF1FOCLlEZQNTMJYE/edit?usp=sharing)): Anand Babu Periasamy
@@ -159,8 +150,8 @@ If you're interested in presenting at a TOC call about your project, please open
* **Sep 4, 2018**: OpenMessaging/Dragonfly
* **Sep 18, 2018**: netdata
* **Oct 2, 2018**: keycloak
* **Nov 20, 2018**: Graduation/Project Reviews
* **Oct 16, 2018**: (interested presenters contact cra@linuxfoundation.org or open up a github [issue](https://github.com/cncf/toc/issues))
* **Nov 6, 2018**: (interested presenters contact cra@linuxfoundation.org or open up a github [issue](https://github.com/cncf/toc/issues))
## Meeting Minutes
@@ -222,7 +213,3 @@ If you're interested in presenting at a TOC call about your project, please open
* [July 17th, 2018](https://docs.google.com/presentation/d/17p5QBVooGMLAtX6Mn6d3NAFhRmFHE0cH-WI_-0MbOm8/edit?usp=sharing)
* [August 7th, 2018](https://docs.google.com/presentation/d/1Eebd5ZwSYyvNRLbHDpiF_USDC4sEz7lEEpPLju_0PaU/edit)
* [August 21st, 2018](https://docs.google.com/presentation/d/1RkygwZw7ILVgGhBpKnFNgJ4BCc_9qMG8cIf0MRbuzB4/edit?usp=sharing)
* [September 4th, 2018](https://docs.google.com/presentation/d/1umu-iT5ZXq5XsMFmqmVeRe-tn2y7DeSoCebhrehi7fk/edit#slide=id.g41381b8fd7_0_199)
* [September 18th, 2018](https://docs.google.com/presentation/d/1umu-iT5ZXq5XsMFmqmVeRe-tn2y7DeSoCebhrehi7fk/edit#slide=id.g41381b8fd7_0_199)
* [October 2nd, 2018](https://docs.google.com/presentation/d/1Xt1xNSN8_pGuDLl5H8xEYToFss7VoIm7GBG0e_HrsLc/edit?usp=sharing)
* [October 16th, 2018](https://docs.google.com/presentation/d/1UtObz-sbjJqtfoVxlfsl2YlalnZnWQQyH8wloDcRyXk/edit#slide=id.g25ca91f87f_0_0)

View file

@@ -1,100 +0,0 @@
# Due Diligence Project Review Template
This page provides project review guidelines to those leading or contributing to due diligence exercises performed by or on behalf of the Technical Oversight Committee of the CNCF.
## Introduction
The decision to graduate or promote a project depends on the TOC sponsors of the project performing and documenting the evaluation process used in deciding upon initial or continued inclusion of projects through a Technical Due Diligence ('Tech DD') exercise. Ultimately the voting members of the TOC will, on the basis of this and other information, vote for or against the inclusion of each project at the relevant time.
## Technical Due Diligence
### Primary Goals
To enable the voting TOC members to cast an informed vote about a project, it is crucial that each member is able to form their own opinion as to whether and to what extent the project meets the agreed upon criteria for sandbox, incubation or graduation. As the leader of a DD, your job is to make sure that they have whatever information they need, succinctly and readily available, to form that opinion.
As a secondary goal, it is in the interests of the broader CNCF ecosystem that there exists some reasonable degree of consensus across the community regarding the inclusion or otherwise of projects at the various maturity levels. Making sure that the relevant information is available, and that any disagreements or misunderstandings about its validity are resolved, helps to foster this consensus.
## Statement of CNCF Alignment to TOC Principles
1. The project is self-governing.
2. Is there a documented Code of Conduct that adheres to the CNCF guidelines?
3. Does the project have production deployments that are high quality and high-velocity? (for incubation and graduated projects)
(Sandbox level projects are targeted at earlier-stage projects to cultivate a community/technology)
4. Is the project committed to achieving the CNCF principles, and does it have a committed roadmap to address any areas of concern raised by the community?
5. The project needs to be reviewed, and the review should document that the project has a fundamentally sound design without obvious critical compromises that will inhibit potential widespread adoption.
6. Document that the project is useful for cloud native deployments and the degree to which it is architected in a cloud native style.
7. Document that the project has an affinity for how CNCF operates and understands the expectations of being a CNCF project.
## Review of graduation criteria and desired cloud native properties
/* Use appropriate Section */
### Sandbox Graduation (Exit Requirements)
1. Document that it is being used successfully in production by at least three independent end users, with adequate quality and scope defined.
2. Have a healthy number of committers. A committer is defined as someone with the commit bit; i.e., someone who can accept contributions to some or all of the project.
3. Demonstrate a substantial ongoing flow of commits and merged contributions.
### Incubating Stage Graduation (Exit Requirements)
1. Document that it is being used successfully in production by at least three independent end users, with adequate quality and scope defined.
2. Have a healthy number of committers. A committer is defined as someone with the commit bit; i.e., someone who can accept contributions to some or all of the project.
3. Demonstrate a substantial ongoing flow of commits and merged contributions.
4. Have committers from at least two organizations.
5. Have achieved and maintained a Core Infrastructure Initiative Best Practices Badge.
6. Adopted the CNCF Code of Conduct.
7. Explicitly define a project governance and committer process. This preferably is laid out in a GOVERNANCE.md file and references an OWNERS.md file showing the current and emeritus committers.
8. Have a public list of project adopters for at least the primary repo (e.g., ADOPTERS.md or logos on the project website).
### Documentation of CNCF Alignment (if not addressed above):
* name of project (must be unique within CNCF)
* project description (what it does, why it is valuable, origin and history)
* statement on alignment with CNCF charter mission
* sponsor from TOC (sponsor helps mentor projects)
* license (charter dictates Apache 2 by default)
* source control (GitHub by default)
* external dependencies (including licenses)
* release methodology and mechanics
* community size and any existing sponsorship
## Technical
* An architectural, design and feature overview should be available. (add link)
* What are the primary target cloud-native use cases? Which of those:
* Can be accomplished now.
* Can be accomplished with reasonable additional effort (and are ideally already on the project roadmap).
* Are in-scope but beyond the current roadmap.
* Are out of scope.
* What are the current performance, scalability and resource consumption bounds of the software? Have these been explicitly tested? Are they appropriate given the intended usage (e.g. agent-per-node or agent-per-container need to be lightweight, etc)?
* What exactly are the failure modes? Are they well understood? Have they been tested? Do they form part of continuous integration testing? Are they appropriate given the intended usage (e.g. cluster-wide shared services need to fail gracefully etc)?
* What trade-offs have been made regarding performance, scalability, complexity, reliability, security etc? Are these trade-offs explicit or implicit? Why? Are they appropriate given the intended usage? Are they user-tunable?
* What are the most important holes? No HA? No flow control? Inadequate integration points?
* Code quality. Does it look good, bad, or mediocre to you (based on a spot review)? How thorough are the code reviews? Substance over form. Are there explicit coding guidelines for the project?
* Dependencies. What external dependencies exist, do they seem justified?
* What is the release model? Versioning scheme? Evidence of stability or otherwise of past stable released versions?
* What is the CI/CD status? Do explicit code coverage metrics exist? If not, what is the subjective adequacy of automated testing? Do different levels of tests exist (e.g. unit, integration, interface, end-to-end), or is there only partial coverage in this regard? Why?
* What licensing restrictions apply? Again, CNCF staff will handle the full legal due diligence.
* What are the recommended operational models? Specifically, how is it operated in a cloud-native environment, such as on Kubernetes?
## Project
* Do we believe this is a growing, thriving project with committed contributors?
* Is it aligned with CNCF's values and mission?
* Do we believe it could eventually meet the graduation criteria?
* Should it start at the sandbox level or incubation level?
* Does the project have a sound, documented process for source control, issue tracking, release management, etc.?
* Does it have a documented process for adding committers?
* Does it have a documented governance model of any kind?
* Does it have committers from multiple organizations?
* Does it have a code of conduct?
* Does it have a license? Which one? Does it have a CLA or DCO? Are the licenses of its dependencies compatible with their usage and CNCF policies? CNCF staff will handle the full legal due diligence.
* What is the general quality of informal communication around the project (slack, github issues, PR reviews, technical blog posts, etc)?
* How much time does the core team commit to the project?
* How big is the team? Who funds them? Why? How much? For how long?
* Who are the clear leaders? Are there any areas lacking clear leadership? Testing? Release? Documentation? These roles sometimes go unfilled.
* Besides the core team, how active is the surrounding community? Bug reports? Assistance to newcomers? Blog posts etc.
* Do they make it easy to contribute to the project? If not, what are the main obstacles?
* Are there any especially difficult personalities to deal with? How is this done? Is it a problem?
* What is the rate of ongoing contributions to the project (typically in the form of merged commits)?
## Users
* Who uses the project? Get a few in-depth references from 2-4 of them who actually know and understand it.
* What do real users consider to be its strengths and weaknesses? Any concrete examples of these?
* Perception vs Reality: Is there lots of buzz, but the software is flaky/untested/unused? Does it have a bad reputation for some flaw that has already been addressed?
## Context
* What is the origin and history of the project?
* Where does it fit in the market and technical ecosystem?
* Is it growing or shrinking in that space? Is that space growing or shrinking?
* How necessary is it? What do people who don't use this project do? Why exactly is that not adequate, and in what situations?
* Clearly compare and contrast with peers in this space. A summary matrix often helps. Beware of comparisons that are too superficial to be useful, or might have been manipulated so as to favor some projects over others. Most balanced comparisons will include both strengths and weaknesses, require significant detailed research, and usually there is no hands-down winner. Be suspicious if there appears to be one.

View file

@@ -8,21 +8,25 @@ The key sections of the [charter](https://www.cncf.io/about/charter/) are:
>6(c)(i) The TOC shall select a Chair of the TOC to set agendas and call meetings of the TOC.
>6(e)(ii) Nominations: Each CNCF member may nominate up to two (2) technical representatives, (from vendors, end users or any other fields), at most one of which may be from their respective company. The nominee(s) must agree to participate prior to being added to the nomination list.
>6(e)(ii) Nominations: Each individual (entity or member) eligible to nominate a TOC member may nominate up to two (2) technical representatives, (from vendors, end users or any other fields), at most one of which may be from their respective company.
>6(f)(i) TOC Members shall serve two-year, staggered terms. The initial six elected TOC members from the Governing Board election shall serve an initial term of three (3) years. The TOC members initially elected by the End User TAB and TOC shall serve an initial term of two (2) years.
Current TOC [Members](https://github.com/cncf/toc#members) and their terms are:
* Jonathan Boulle (term: 3 years - start date: 1/29/2016) [GB appointed]
* Bryan Cantrill (term: 3 years - start date: 1/29/2016) [GB appointed]
* Camille Fournier (term: 3 years - start date: 1/29/2016) [GB appointed]
* Brian Grant (term: 2 years - start date: 3/17/2018) [TOC appointed]
* Benjamin Hindman (term: 3 years - start date: 1/29/2016) [GB appointed]
* Quinton Hoole (term: 1 year - start date: 3/17/2018) [TOC appointed]
* Sam Lambert (term: 16 months - start date: 10/2/2017) [enduser appointed]
* Ken Owens (term: 3 years - start date: 1/29/2016) [GB appointed]
* Alexis Richardson (term: 3 years - start date: 1/29/2016) [GB appointed]
* Jonathan Boulle (term: 3 years - start date: 1/29/2016)
* Bryan Cantrill (term: 3 years - start date: 1/29/2016)
* Camille Fournier (term: 3 years - start date: 1/29/2016)
* Brian Grant (term: 2 years - start date: 3/17/2016)
* Benjamin Hindman (term: 3 years - start date: 1/29/2016)
* Solomon Hykes (term: 2 years - start date: 3/17/2016)
* Sam Lambert (term: 16 months - start date: 10/2/2017)
* Ken Owens (term: 3 years - start date: 1/29/2016)
* Alexis Richardson (term: 3 years - start date: 1/29/2016)
The End User Community will shortly (September 2017) be electing a new TOC member to replace Elissa. That person's term would normally last through 3/10/2018. We will ask the End User Community to instead approve a 16 month term to align with GB-appointed TOC selections going forward. This End User TOC member will be reappointed or replaced on 1/29/2019.
The terms of the two TOC appointed seats, currently held by Brian and Solomon, end on 3/16/18. At the time they are reelected or replaced, we propose that the two appointed members will draw straws to determine which of them gets a 1-year term in just that cycle so that these two positions are staggered going forward. After they are selected, we propose that the TOC vote to select its chairperson, and do so every 2 years thereafter.
On 1/29/2019, the other 6 TOC positions are up for re-election by the GB. The charter requires that the initial appointments have been for 3 years (which they were), but to use staggered, 2-year terms going forward. We propose that half of the positions get a 1-year term in just that cycle (by drawing straws), so that each year afterwards, 3 of the 6 will be reappointed or replaced.
@@ -30,6 +34,8 @@ On 1/29/2019, the other 6 TOC positions are up for re-election by the GB. The ch
*All terms are two years unless otherwise specified. Selected means reappointed or replaced.*
* 10/1/2017: New End User TOC member is selected for a 16 month term.
* 3/17/2018: Both TOC-selected members are selected, one for a 1-year term.
* 3/17/2018 (and each future even year): The TOC selects its chairperson.
* 1/29/2019: 6 GB-selected TOC members are selected, half for 1-year terms.
* 1/29/2019 (and each future odd year): End User TOC member is selected.

View file

@@ -1,4 +1,4 @@
== CNCF Graduation Criteria v1.1
== CNCF Graduation Criteria v1.2
Every CNCF project has an associated maturity level. Proposed CNCF projects should state their preferred maturity level. A two-thirds supermajority is required for a project to be accepted as incubating or graduated. If there is not a supermajority of votes to enter as a graduated project, then any graduated votes are recounted as votes to enter as an incubating project. If there is not a supermajority of votes to enter as an incubating project, then any graduated or incubating votes are recounted as sponsorship to enter as a sandbox project. If there is not enough sponsorship to enter as a sandbox stage project, the project is rejected. This voting process is called fallback voting.
@@ -23,7 +23,9 @@ To graduate from sandbox or incubating status, or for a new project to join as a
* Have committers from at least two organizations.
* Have achieved and maintained a Core Infrastructure Initiative https://bestpractices.coreinfrastructure.org/[Best Practices Badge].
* Have completed an independent, third-party security audit with results published, of similar scope and quality as the following example (including critical vulnerabilities addressed): https://github.com/envoyproxy/envoy#security-audit
* Adopt the CNCF https://github.com/cncf/foundation/blob/master/code-of-conduct.md[Code of Conduct].
* Explicitly define a project governance and committer process. This preferably is laid out in a GOVERNANCE.md file and references an OWNERS.md file showing the current and emeritus committers.
* Have a public list of project adopters for at least the primary repo (e.g., ADOPTERS.md or logos on the project website).
* Receive a supermajority vote from the TOC to move to graduation stage. Projects can attempt to move directly from sandbox to graduation, if they can demonstrate sufficient maturity. Projects can remain in an incubating state indefinitely, but they are normally expected to graduate within two years.

View file

@@ -1,116 +0,0 @@
== Cloud Native Buildpacks
*Name of project:* Cloud Native Buildpacks
*Description:*
Buildpacks are application build tools that provide a higher level of abstraction compared to Dockerfiles.
Conceived by Heroku in 2011, they establish a balance of control that reduces the operational burden on developers and supports operators who manage apps at scale.
Buildpacks ensure that apps meet security and compliance requirements without developer intervention.
They provide automated delivery of both OS-level and application-level dependency upgrades, efficiently handling day-2 app operations that are often difficult to manage with Dockerfiles.
Cloud Native Buildpacks aim to unify the buildpack ecosystems with a platform-to-buildpack contract that is well-defined and that incorporates learnings from maintaining production-grade buildpacks for years at both Pivotal and Heroku, the largest contributors to the buildpack ecosystem.
Cloud Native Buildpacks embrace modern container standards, such as the OCI image format.
They take advantage of the latest capabilities of these standards, such as remote image layer rebasing on Docker API v2 registries.
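To make the rebasing idea concrete, here is a minimal, hypothetical Go sketch built on the go-containerregistry library (listed under external dependencies below); the image references are placeholders, and this illustrates the technique rather than the project's actual tooling.
[source,go]
----
// Hypothetical sketch: swap a patched run image (base) underneath an app image
// without rebuilding it, using github.com/google/go-containerregistry.
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
)

func main() {
	// Pull the app image plus the old and new base (run) images. The
	// references are placeholders, not real repositories.
	app, err := crane.Pull("registry.example.com/myapp:latest")
	if err != nil {
		log.Fatal(err)
	}
	oldBase, err := crane.Pull("registry.example.com/run:v1")
	if err != nil {
		log.Fatal(err)
	}
	newBase, err := crane.Pull("registry.example.com/run:v1-patched")
	if err != nil {
		log.Fatal(err)
	}

	// Rebase: keep the app layers, replace the oldBase layers with newBase.
	rebased, err := mutate.Rebase(app, oldBase, newBase)
	if err != nil {
		log.Fatal(err)
	}

	// Push the result; the registry only needs layers it doesn't already have.
	if err := crane.Push(rebased, "registry.example.com/myapp:patched"); err != nil {
		log.Fatal(err)
	}
}
----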
*Statement on alignment with CNCF mission:*
The Cloud Native Buildpacks project is well-aligned with the CNCF's mission statement of supporting cloud native systems.
The next generation of buildpacks will aid developers and operators in packaging applications into containers (1a), allow operators to efficiently manage the infrastructure necessary to keep application dependencies updated (1b), and be available via well-defined interfaces (1c).
The Cloud Native Buildpacks project is complementary to other CNCF projects like Helm, Harbor, and Kubernetes.
Cloud Native Buildpacks produce OCI images that can be managed by Helm, stored in Harbor, and deployed to Kubernetes.
Additionally, the project roadmap includes creating a Kubernetes CRD controller (or alternatively, adapting Knative's https://github.com/knative/build[Build CRD]) to enable cloud builds using buildpacks.
We agree with the CNCF's “no kingmakers” principle, and propose Cloud Native Buildpacks as an alternative to Dockerfiles for certain use cases, not as a one-size-fits-all solution for building cloud apps.
*Sponsors from TOC:* Brian Grant & Alexis Richardson
*Preferred maturity level:* Sandbox
*License:* Apache License v2.0
*Source control:* Github (https://github.com/buildpack)
*External Dependencies:*
* https://github.com/BurntSushi/toml[github.com/BurntSushi/toml] (MIT)
* https://github.com/docker/docker[github.com/docker/docker] (Apache-2.0)
* https://github.com/docker/go-connections[github.com/docker/go-connections] (Apache-2.0)
* https://github.com/golang/mock[github.com/golang/mock] (Apache-2.0)
* https://github.com/google/go-cmp[github.com/google/go-cmp] (NewBSD)
* https://github.com/google/go-containerregistry[github.com/google/go-containerregistry] (Apache-2.0)
* https://github.com/google/uuid[github.com/google/uuid] (NewBSD)
* https://github.com/nu7hatch/gouuid[github.com/nu7hatch/gouuid] (MIT)
* https://github.com/onsi/ginkgo[github.com/onsi/ginkgo] (MIT)
* https://github.com/onsi/gomega[github.com/onsi/gomega] (MIT)
* https://github.com/sclevine/spec[github.com/sclevine/spec] (Apache-2.0)
* https://github.com/spf13/cobra[github.com/spf13/cobra] (Apache-2.0)
* https://gopkg.in/yaml.v2[gopkg.in/yaml.v2] (Apache-2.0)
* https://code.cloudfoundry.org/buildpackapplifecycle[code.cloudfoundry.org/buildpackapplifecycle] (Apache-2.0)
* https://code.cloudfoundry.org/cli[code.cloudfoundry.org/cli] (Apache-2.0)
*Initial Committers:*
Founding Maintainers:
* Stephen Levine (Pivotal)
* Ben Hale (Pivotal)
* Terence Lee (Heroku)
* Joe Kutner (Heroku)
Additional Maintainers:
* Emily Casey (Pivotal)
* Jacques Chester (Pivotal)
* Dave Goddard (Pivotal)
* Anthony Emengo (Pivotal)
* Stephen Hiehn (Pivotal)
* Andreas Voellmer (Pivotal)
*Infrastructure requests (CI / CNCF Cluster):*
_Development needs:_
We currently use Travis for CI, but we may want to use CNCF resources to deploy Concourse CI.
Additionally, we will need access to all common Docker registry implementations for performance and compatibility testing.
This includes deploying Harbor to CNCF infrastructure as well as access to DockerHub, GCR, ACR, ECR, etc.
_Production needs:_
Additionally, we would like to use CNCF resources to host a buildpack registry containing buildpacks and buildpack dependencies.
*Communication Channels:*
* Slack: https://buildpacks.slack.com
* Mailing List: https://lists.cncf.io/g/cncf-buildpacks (proposed)
* Issue tracker: https://github.com/orgs/buildpack/projects
*Website:* https://buildpacks.io
*Release methodology and mechanics:*
Continuous release process made possible by reliable automated tests.
We plan to cut small releases whenever possible.
*Social media accounts:*
* Twitter: @buildpacks_io
*Existing sponsorship*: Pivotal and Heroku
*Community size:*
_Existing buildpacks:_
Cloud Foundry Buildpacks:
1,000+ stars, 4,000+ forks, 8 full-time engineers
Heroku Buildpacks:
5,500+ stars, 12,000+ forks, 5 full-time engineers
_Cloud Native Buildpacks project:_
New project with 10 active contributors from Pivotal and Heroku.

View file

@@ -92,7 +92,6 @@ CoreDNS can be thought of as a DNS protocol head that can be configured to front
*Comparison with KubeDNS*:
The incumbent DNS service for Kubernetes, “kubedns”, consists of three components:
* kube-dns, which uses SkyDNS as a library, provides the DNS service based on the Kubernetes API
* dnsmasq which acts as a caching server in front of kube-dns
* sidecar provides metrics and health-check status.

View file

@@ -1,99 +0,0 @@
== Cortex
*Name of project:* Cortex
*Description:*
Cortex is a horizontally scalable, highly available, and multitenant SaaS service that is compatible with Prometheus and offers a long-term storage solution.
It is designed for teams looking for a Prometheus solution that offers the following over vanilla Prometheus:
* Long-term metrics storage in a variety of cloud based and on-prem NoSQL data stores
* Tenancy model supporting commercial SaaS offerings or large/multiple Kubernetes installations requiring data separation
* On-demand Prometheus instance provisioning
* A highly-available architecture that benefits from cloud-native architectures run with Kubernetes
* A highly scalable Prometheus experience that scales out, not up
* The ability to handle large metric topologies in a single instance without the need for federation
Cortex was presented at the https://docs.google.com/presentation/d/190oIFgujktVYxWZLhLYN4q8p9dtQYoe4sxHgn4deBSI/edit#slide=id.g25ca91f87f_0_0[CNCF TOC meeting on 6/5/2018]
*Statement on alignment with CNCF mission:*
Cortex fully supports the CNCF's goal for scalability, "Ability to support all scales of deployment, from small developer centric environments to the scale of enterprises and service providers."
There are many different ways to provide a scalable and available metrics system for Kubernetes. Cortex, with its tenancy model combined with a highly available, horizontally scalable architecture, serves this goal directly.
*Sponsor / Advisor from TOC:* Bryan Cantrill and Ken Owens
*Unique identifier:* cortex
*Preferred maturity level:* sandbox
The CNCF sandbox was designed for just this kind of project. Specifically, the Cortex community is looking for the following from being in the sandbox:
* Encourage public visibility of experiments or other early work that can add value to the CNCF mission
* Visibility for new projects designed to extend one or more CNCF projects with functionality
* The Sandbox should provide a beneficial, neutral home for such projects, in order to foster collaborative development.
*License:* Apache License 2.0
*Source control repositories:* https://github.com/weaveworks/cortex
*External Dependencies:*
Cortex depends on the following external software components:
* Prometheus (Apache Software License 2.0)
* Kubernetes (Apache Software License 2.0)
* Jaeger Tracing (Apache Software License 2.0)
* OpenTracing (Apache Software License 2.0)
* GRPC (Apache Software License 2.0)
* Weaveworks Mesh (Apache Software License 2.0)
* Golang (Apache Software License 2.0)
*Initial Committers (leads):*
Julius Volz (Independent)
Tom Wilkie (Grafana Labs)
*Infrastructure requests (CI / CNCF Cluster):*
None
*Communication Channels:*
* Slack: https://weave-community.slack.com/
* Mailing List: https://groups.google.com/forum/#!forum/cortex-monitoring
* Community Meeting Doc: https://docs.google.com/document/d/1mYvY4HMVGmetYHupi5z2BnwT1K8PiO64ZcxuX5c6ssc/edit#heading=h.ou5xp51fcp6v
*Issue tracker:* https://github.com/weaveworks/cortex/issues
*Website:* https://github.com/weaveworks/cortex
*Release methodology and mechanics:* Most folks run HEAD in production.
*Social media accounts:* None
*Existing sponsorship:* WeaveWorks
*Community size:*
* 500+ stars
* 60+ forks
*Production usage*:
Cortex is being actively used in production by the following:
* Electronic Arts https://www.ea.com/
* FreshTracks.io https://freshtracks.io/
* Grafana Labs https://grafana.com/
* OpenEBS https://www.openebs.io/
* WeaveWorks https://weave.works/

View file

@@ -1,119 +0,0 @@
=== Dragonfly CNCF Sandbox Project Proposal
*Name of Project:* Dragonfly
*Description:*
Dragonfly is an intelligent P2P based image and file distribution system. It aims to resolve three major issues: efficiency, flow control and security.
It is a general tool that can be integrated with container engines to help deploy cloud native applications at scale. In addition, users can easily deploy Dragonfly on Kubernetes via Helm as a DaemonSet.
Dragonfly ensures efficient image distribution with a P2P policy that avoids duplicated image downloads. To avoid impacting other running applications, Dragonfly implements image distribution flow control, such as download bandwidth limits and disk IO protection. Dragonfly also takes advantage of encryption algorithms for image transmission to meet enterprise security demands. Here are some key features of Dragonfly:
* P2P based file distribution
* Support a wide range of container technologies
* Host level speed limit
* Passive CDN for downloads
* Strong consistency of distributed images
* Disk protection and highly efficient IO
* High performance
* Exception auto isolation
* Effective concurrency control of Registry Auth
* Image encryption during transmission
Dragonfly consists of three major components:
1. **SuperNode**: provides image cache services from the source image registry and chooses the appropriate downloading policy for each peer.
1. **dfget**: a client that downloads files from the P2P network (peer nodes and the SuperNode), receives control orders from the SuperNode, and transfers data among the P2P network.
1. **dfdaemon**: an agent that proxies image pull requests from the local container engine, filters out layer-fetching requests, and uses dfget to download all these layers.
**Statement on alignment with CNCF mission:**
The Cloud Native Dragonfly project is well-aligned with the CNCF's mission statement of supporting cloud native systems. Once developers and operators have packaged applications into container images, Dragonfly aims to tackle the distribution of those packaged images (1a). Dragonfly's intelligent distribution ability can dynamically manage network bandwidth, disk IO, and other resources efficiently to reduce maintenance and operation costs (1b). Dragonfly is decoupled from its dependencies and designed to consist of explicit and minimal services (1c).
The Cloud Native Dragonfly project is complementary to other CNCF projects, such as Kubernetes, Helm, Harbor, and containerd. Dragonfly's SuperNode can be deployed via Helm, and the dfget and dfdaemon agents can be deployed via a Kubernetes DaemonSet. When releasing a cloud native application in Kubernetes, Harbor takes advantage of Dragonfly's open API to control the image preheater. On pod startup, containerd sends the image pull request to Dragonfly, and Dragonfly takes over image distribution automatically, efficiently, and safely.
*Roadmap:*
Dragonfly intends to deliver more essential and advanced features in ecosystem openness, scalability, and security. For more details, please refer to the https://github.com/alibaba/Dragonfly/blob/master/ROADMAP.md[ROADMAP].
*Sponsors from TOC:* Jonathan Boulle & Benjamin Hindman
*Preferred maturity level:* Sandbox
*License:* Apache License v2.0
*Source control:* GitHub (https://github.com/alibaba/dragonfly)
*External Dependencies:*
External dependencies of Dragonfly are listed below:
|===
|*Software*|*License*|*Project Page*
|go-check|BSD|https://github.com/go-check/check/[https://github.com/go-check/check/]
|compress|BSD|https://github.com/klauspost/compress[https://github.com/klauspost/compress]
|cpuid|MIT|https://github.com/klauspost/cpuid[https://github.com/klauspost/cpuid]
|uuid|BSD|https://github.com/pborman/uuid[https://github.com/pborman/uuid]
|logrus|MIT|https://github.com/sirupsen/logrus[https://github.com/sirupsen/logrus]
|pflag|BSD|https://github.com/spf13/pflag[https://github.com/spf13/pflag]
|bytebufferpool|MIT|https://github.com/valyala/bytebufferpool[https://github.com/valyala/bytebufferpool]
|fasthttp|MIT|https://github.com/valyala/fasthttp[https://github.com/valyala/fasthttp]
|terminal|BSD|https://golang.org/x/crypto/ssh/terminal[https://golang.org/x/crypto/ssh/terminal]
|unix|MIT|https://golang.org/x/sys/unix[https://golang.org/x/sys/unix]
|windows|zlib|https://golang.org/x/sys/windows[https://golang.org/x/sys/windows]
|gcfg|BSD|https://gopkg.in/gcfg.v1[https://gopkg.in/gcfg.v1]
|yaml|Apache License 2.0|https://gopkg.in/yaml.v2[https://gopkg.in/yaml.v2]
|===
*Initial Committers:*
Founding Maintainers:
* Allen Sun (Alibaba)
* Chaobing Chen (Meitu)
* Jian Wang (Alibaba)
* Jin Zhang (Alibaba)
* Zuozheng Hu (Alibaba)
Additional Maintainers:
* Haibing Zhou (Ebay China)
*Infrastructure requests (CI / CNCF Cluster):*
_Development needs:_
We currently use Travis and CircleCI for CI, but we may want to use CNCF resources to deploy Jenkins for node e2e tests.
_Production needs:_
none
*Communication Channels:*
* Gitter: https://gitter.im/alibaba/Dragonfly
* Mailing List: https://lists.cncf.io/g/cncf-dragonfly (proposed)
* Issue tracker: https://github.com/alibaba/Dragonfly/issues
*Website:* https://alibaba.github.io/Dragonfly/
*Release methodology and mechanics:*
We version Dragonfly on the basis of SemVer, with version numbers of the form MAJOR.MINOR.PATCH. Currently we do feature releases 4-5 times per year (all minor releases). Before every minor release, we plan to tag several RC releases to invite community developers to fully test them. In addition, all code commits to the Dragonfly project must add essential tests covering the feature or code change.
*Social media accounts:*
* Twitter: https://twitter.com/dragonfly_oss[@dragonfly_oss]
*Existing sponsorship*: Alibaba, AntFinancial and China Mobile
*Community size:*
2,300+ stars
3 full-time engineers
16 contributors

View file

@@ -1,219 +0,0 @@
=== Falco CNCF Sandbox Project Proposal
*Name of Project:* Falco
*Description:*
Highly distributed and dynamic architectural patterns such as microservices are proving that traditional models of application and network security alone do not meet today's needs. Additionally, the increasing level of regulation being introduced (the General Data Protection Regulation, or GDPR, for instance) to any business with a digital presence makes security more important than ever. Organizations must quickly respond to exploits and breaches to minimize financial penalties introduced by such regulation, yet the dynamic nature of modern Cloud Native architectures makes it extremely difficult for organizations to keep pace.
Falco seeks to solve this problem by shortening the security incident detection and response cycle in microservices architectures. Falco provides runtime security for systems running container workloads to detect behavior that is defined as abnormal. Falco can be broken into three areas:
*Event & Metadata Providers* - inputs of events to the rules engine.
* Sysdig Kernel Module - provides a stream of system call events for Linux based systems.
* Kubernetes API Server - provides metadata for Kubernetes resources such as Namespace, Deployment, Replication Controllers, Pods, and Services.
* Marathon - provides metadata for Marathon resources.
* Mesos - provides metadata for Mesos resources.
* Docker - provides metadata for containers running under the Docker container runtime.
*Rules Engine & Condition Syntax* - Falco implements a rules engine that supports the following rule syntax.
* https://github.com/draios/falco/wiki/Falco-Rules#conditions[Sysdig Filter Syntax] - Falco supports the Sysdig filter syntax used for filtering system call events from the Sysdig kernel module. This syntax also supports filtering on metadata from sources such as container runtimes, Kubernetes, Mesos, and Marathon.
*Notification Outputs* - Falco's rules engine will send alerts when rule conditions are met. The following output destinations are currently supported.
* Stdout, Log file, Syslog - These can be aggregated using Fluentd or similar
* Command Execution - Falco can execute a command, passing the alert in via stdin
For example, by leveraging the Sysdig kernel module's ability to tap into system calls from the Linux kernel, rules can be written to detect behavior seen as abnormal. Through the system calls, Falco can detect events such as:
* A Kubernetes Pod running in a Deployment labeled node-frontend begins running processes other than node.
* A shell is run inside a container
* A container is running in privileged mode, or is mounting a sensitive path like /proc from the host.
* A server process spawns a child process of an unexpected type
* Unexpected read of a sensitive file (like /etc/shadow)
* A non-device file is written to /dev
* A standard system binary (like ls) makes an outbound network connection
When a rule condition is met, Falco can either log an alert to a file, syslog, stdout, etc., or trigger an external program. This allows an automated system to respond to compromised containers or container hosts. This automated system could stop or kill containers identified as compromised, or mark container hosts as tainted to prevent workloads from being scheduled on the compromised host.
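As a concrete illustration of such an automated responder (not part of Falco itself), here is a minimal Go sketch. It assumes Falco's JSON alerts arrive on stdin, that each alert carries k8s.ns.name and k8s.pod.name in its output fields, and that the responder runs in-cluster with permission to delete pods:
[source,go]
----
// Hypothetical responder: read Falco JSON alerts from stdin and delete the
// offending Kubernetes pod using client-go. Illustrative only.
package main

import (
	"bufio"
	"context"
	"encoding/json"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// falcoAlert models the subset of Falco's JSON output we need.
type falcoAlert struct {
	Rule         string                 `json:"rule"`
	OutputFields map[string]interface{} `json:"output_fields"`
}

func main() {
	cfg, err := rest.InClusterConfig() // assumes the responder runs in-cluster
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	scanner := bufio.NewScanner(os.Stdin) // one JSON alert per line
	for scanner.Scan() {
		var alert falcoAlert
		if err := json.Unmarshal(scanner.Bytes(), &alert); err != nil {
			continue // skip lines that are not JSON alerts
		}
		ns, _ := alert.OutputFields["k8s.ns.name"].(string)
		pod, _ := alert.OutputFields["k8s.pod.name"].(string)
		if ns == "" || pod == "" {
			continue // alert is not attributable to a pod
		}
		log.Printf("rule %q fired; deleting pod %s/%s", alert.Rule, ns, pod)
		if err := client.CoreV1().Pods(ns).Delete(context.TODO(), pod, metav1.DeleteOptions{}); err != nil {
			log.Printf("delete failed: %v", err)
		}
	}
}
----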
*Value to the Cloud Native Operating Model*
As Cloud Native starts to become the de facto operating model for many organizations, the security of this model is often the first thing many organizations seek to address. The Cloud Native model seeks to empower developers to rapidly package applications and services in containers, then quickly deploy them to platforms such as Kubernetes. This model seeks to remove the traditional points of friction in operations by providing a consistent deployment paradigm and abstraction of the underlying infrastructure. The challenge for many organizations is that applications packaged as containers are often a black box to downstream teams in terms of 1) what is packaged inside the container, and 2) what operations any processes might perform once the application is running.
Currently there are several prescribed methods for building security into the Cloud Native workflow:
* *Image Chain of Trust*
** Scan images as part of a deployment process, such as GitOps, to verify their contents and check for known vulnerabilities (for example Anchore or Clair).
** Cryptographically sign images and restrict container runtimes to only run trusted images (e.g. Notary).
** Restrict which container registries images can be pulled from.
* *Admittance Control*
** Cryptographically verifiable identities to restrict/allow workloads to run based on a defined policy (e.g. SPIFFE).
** Leveraging Service Meshes to control what workloads can join a particular service.
* *Orchestrator/Infra Security*
** Role Based Access Control to restrict access to the orchestrator API services.
** General best practices for securing the orchestrator entry points.
** Network Policy API and CNI Plugins
** Linux Security Module support.
** PodSecurity Policies
* *Runtime Security*
** Detect abnormal behavior inside a workload and take appropriate action, such as telling the orchestrator to kill the workload, thus shortening the security “detect-response” cycle (e.g. Falco).
* *Workload Access Control Policies*
** Policies controlling the network activity of workloads and restricting inter-workload communication.
** Policies controlling the API endpoints available to workloads (e.g. Cilium)
Each prescribed method provides an additional level of protection, but one method by itself does not provide a complete security solution. Image Chain of Trust for instance is a “point in time” method of providing security. In other words, the container image is considered “secure” when the image scanning process completes successfully, but anytime after that it may become “insecure” once new exploits or vulnerabilities are discovered.
Additionally, while container images are considered immutable when built, once a container is created from the image, the process inside the container can modify the container's instantiation of the root filesystem. Some best practices suggest starting containers with a read-only root filesystem to prevent this, but this method has its own problems. For instance, the “standard” Node.js image needs to write to the root filesystem to create a number of files (lock files, for instance) when node starts. Runtime Security seeks to mitigate this problem by watching what changes may be made once a container is running, and taking action on abnormal behavior.
Currently, most of the options for runtime security are proprietary solutions, which limits the ability to take advantage of the larger open source software ecosystem. Falco is unique in that its open approach allows a broader community to define and share rule sets for common security exploits. This open approach also provides the opportunity for a faster response time to newly discovered exploits, by making it possible to share new rules for these exploits as they are discovered.
*Falco Roadmap*
Short term improvements include:
* *Rules Library* - Expand the shipped rule set to include rules for commonly deployed applications and CNCF Projects, as well as common compliance rules such as CIS.
** Container Images/Apps: Nginx, HAProxy, etcd, Java, Node
** CNCF Projects: Kubernetes, Prometheus, Fluentd, Linkerd
** CIS Runtime Compliance Rules
Longer term improvements include:
* *Prometheus Metrics Exporter* - Expose a metrics endpoint to allow collection of metrics by Prometheus. Metrics include # of overall alerts, # of alerts by rule, # of alerts by rule tag.
* *Kubernetes networking policy support* - Support detecting networking policy violations via the Sysdig kernel module
* *Alert Output* - Add support for additional output destinations to allow Falco to more easily be integrated into a Cloud Native architecture.
** *Direct webhook support* - Support posting to a generic webhook +
** *Messaging systems* - Support sending messages to a messaging server such as NATS +
** *gRPC* - Support sending alerts to external systems via gRPC
* *Event & Metadata Providers* - Support for additional backend providers for the event stream.
* *Kubernetes Audit Events* - Ingest Kubernetes Audit Events and support rules based on Kubernetes Audit Events. +
* *Container Runtimes* - Support additional container runtimes.
* *Baselining* - Automatic baselining of an application's “normal” behavior
*Planned Advocacy Work*
Beyond the engineering work planned, there is also work planned to improve the awareness of Falco in the Cloud Native ecosystem.
* *Workshops on Falco:* As the project's main sponsor, Sysdig has been investing in workshops focused on Container Troubleshooting and Container Forensics that include sections on Falco and CNCF projects such as Kubernetes. These workshops will be expanded to include more exercises on writing rules for applications, testing workflows for rule writing, and incorporation of Falco in CD workflows such as GitOps.
* *Documentation Improvements*: Improve documentation with regard to writing rules including out of the box macros, lists, and rules provided by Falco.
* *Documenting Use Cases:* Document existing use cases around using Falco with other projects to deliver a complete end to end solution.
* *Events:* Conference and Meetup presentations to help educate the community on security in the Cloud Native landscape, and to show new community members how to implement Cloud Native architectures in a secure fashion.
*Current CNCF Ecosystem Integrations:*
*Containerd and rkt*
Falco can detect containers running in both containerd and rkt container runtimes.
*Kubernetes*
Falco can communicate with the Kubernetes API to pull Namespace, Deployment, Service, ReplicaSet, Pod, and Replication Controller information such as names and labels. This data can be used in rule conditions (e.g. k8s.ns.name = mynamespace) as well as in output fields of any generated alerts.
A common deployment method for Falco in the Cloud Native landscape is to deploy it as a Daemon Set running in Kubernetes. The Falco project provides releases packaged as containers and provides a Daemon Set example for end users to deploy Falco.
Docker Hub: https://hub.docker.com/r/sysdig/falco/[https://hub.docker.com/r/sysdig/falco/]
Kubernetes Daemon Set: https://github.com/draios/falco/tree/dev/integrations/k8s-using-daemonset[https://github.com/draios/falco/tree/dev/integrations/k8s-using-daemonset]
Helm chart: https://github.com/helm/charts/tree/master/stable/falco[https://github.com/helm/charts/tree/master/stable/falco]
*Fluentd*
Falco can also leverage Fluentd from the CNCF ecosystem. Falco alerts can be collected from logs or stdout by Fluentd and the alerts can be aggregated and analyzed. An example of using Falco with Fluentd, Elasticsearch, and Kibana can be found on the Sysdig Blog.
https://sysdig.com/blog/kubernetes-security-logging-fluentd-falco/[https://sysdig.com/blog/kubernetes-security-logging-fluentd-falco/]
*NATS*
A https://github.com/sysdiglabs/falco-nats[proof of concept] was created showing publishing of Falco alerts to a NATS messaging server. These alerts can be subscribed to by various programs to process and take action on alerts. In the proof of concept, Falco alerts published to NATS triggered a Kubeless function to delete an offending Pod.
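The publishing side of that pattern can be sketched in a few lines of Go. This is a hedged illustration assuming the current nats.go client and a placeholder falco.alerts subject, not the actual code of the proof of concept:
[source,go]
----
// Hypothetical forwarder: pipe Falco alert lines from stdin to a NATS subject.
// Subscribers (e.g. a Kubeless function, as in the proof of concept) can then
// parse each alert and act on it.
package main

import (
	"bufio"
	"log"
	"os"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL) // e.g. nats://localhost:4222
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		// "falco.alerts" is a placeholder subject name.
		if err := nc.Publish("falco.alerts", scanner.Bytes()); err != nil {
			log.Printf("publish failed: %v", err)
		}
	}
	nc.Flush() // ensure buffered messages are sent before exit
}
----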
*Sponsors from TOC:* Quinton Hoole, Brian Grant
*Preferred maturity level:* Sandbox
*Unique identifier:* falco
*Current Project Sponsor:* https://sysdig.com/opensource/[Sysdig]
*License:* Apache License v2 (ALv2)
*Code Repositories:*
Code is currently hosted by Sysdig:
https://github.com/draios/falco[https://github.com/draios/falco]
The code will move to a vendor-neutral GitHub organization at:
https://github.com/falcosecurity[https://github.com/falcosecurity]
*External Code Dependencies* +
External dependencies of Falco are listed below:
|===
|*Software*|*License*|*Project Page*
|libb64|Creative Commons|http://libb64.sourceforge.net/[http://libb64.sourceforge.net/]
|curl|MIT/X|https://curl.haxx.se/[https://curl.haxx.se/]
|jq|MIT|https://stedolan.github.io/jq/[https://stedolan.github.io/jq/]
|libyaml|MIT|https://pyyaml.org/wiki/LibYAML[https://pyyaml.org/wiki/LibYAML]
|lpeg|MIT|http://www.inf.puc-rio.br/\~roberto/lpeg/[http://www.inf.puc-rio.br/~roberto/lpeg/]
|luajit|MIT|http://luajit.org/luajit.html[http://luajit.org/luajit.html]
|lyaml|MIT|https://github.com/gvvaughan/lyaml[https://github.com/gvvaughan/lyaml]
|ncurses|MIT?|https://www.gnu.org/software/ncurses/[https://www.gnu.org/software/ncurses/]
|openssl|OpenSSL & SSLeay|https://www.openssl.org/source[https://www.openssl.org/source]
|yamlcpp|MIT|https://github.com/jbeder/yaml-cpp[https://github.com/jbeder/yaml-cpp]
|zlib|zlib|https://www.zlib.net/zlib.html[https://www.zlib.net/zlib.html]
|sysdig|ALv2|https://github.com/draios/sysdig[https://github.com/draios/sysdig]
|tbb|ALv2|https://www.threadingbuildingblocks.org/[https://www.threadingbuildingblocks.org/]
|===
*Committers:* 16
*Users of Note:*
Cloud.gov:
* https://cloud.gov/docs/apps/experimental/behavior-monitoring/[Dynamic behavior monitoring in Cloud.gov]
* https://www.youtube.com/watch?v=wFQOXMcZnQg[Detecting tainted apps in Cloud Foundry]
* https://github.com/cloudfoundry-community/falco-boshrelease[falco-boshrelease]
*Community Communication:*
Slack is the preferred form of communication. Sysdig runs a Slack team for its open source projects and hosts a #falco channel under that Slack team:
Slack team: https://sysdig.slack.com[https://sysdig.slack.com] +
Falco Channel: https://sysdig.slack.com/messages/C19S3J21F/[https://sysdig.slack.com/messages/C19S3J21F/]
*Website/Blog:*
The website is currently hosted by Sysdig, under the Open Source section of the website: https://sysdig.com/opensource/falco[https://sysdig.com/opensource/falco]
Blog posts related to Falco are currently posted to the Sysdig Blog. https://sysdig.com/blog/tag/falco/[https://sysdig.com/blog/tag/falco/]
The Falco website and blog will be moved to: https://falco.org[https://falco.org]
*Release Cadence:*
Minor releases quarterly; patch releases as frequently as needed (minor and patch as defined by https://semver.org/[semantic versioning]).
*Statement on alignment with CNCF mission:*
With the number of systems under management growing at an ever-increasing rate, and regulation becoming more common, new approaches to security are required that allow organizations to automatically manage the “detection & response” security cycle. Innovations in Cloud Native technologies make this automated approach to security increasingly feasible.
Falco aligns with the CNCF mission statement by:
* Focusing on containers first: Falco was built on the assumption that containers are the method by which modern applications are run. Since its inception, Falco has been able to identify containerized processes and apply rules to them.
* Enabling the CNCF ecosystem by including Cloud Native best practices: The https://github.com/draios/falco/blob/dev/rules/falco_rules.yaml[default Falco rule set] focuses on container anti-patterns, or rather the common mistakes that new users tend to make when deploying a Cloud Native application in containers. While these rules currently focus on containers and container runtimes, additional rule sets can be written for CNCF projects and application runtimes in the CNCF Landscape. This work is on the Falco roadmap, and could easily be done by the broader CNCF community.
* Falco’s goal is to provide a modular, composable system that allows easy integration with other CNCF projects or open source projects. This composability lets operators of Cloud Native platforms easily build systems to manage platform security while preserving a high degree of flexibility and Cloud Native developer velocity.

View file

@ -1,140 +0,0 @@
== TiKV Project Proposal
*Name of Project*: TiKV
*Description*: TiKV is an open-source distributed transactional key-value database built in Rust that implements the Raft consensus algorithm. It features horizontal scalability, consistent distributed transactions, and geo-replication.
*Why is TiKV a good fit for CNCF?*
TiKV has been one of the few key-value storage solutions in the cloud-native community that can balance both performance and ease of operation with Kubernetes. Data storage is one of the most important components of any cloud-native infrastructure platform, and end users need a range of choices to meet their needs. TiKV is complementary to existing CNCF database projects like Vitess, which is currently the only database option hosted by CNCF. As a transactional key-value database, TiKV serves as another choice for cloud-native applications that need scalability, distributed transactions, high availability, and strong consistency.
With TiKV becoming a CNCF project, the open-source cloud-native ecosystem will also become more vibrant and robust in China, because our team has a strong track record of fostering the open source community in China and is dedicated to promoting the CNCF’s mission there. Open source is global, and having TiKV as a part of CNCF will further that story.
*TiKV Overview*
_Development Timeline_:
- Current release: 2.1.0 beta
- April 27, 2018: TiKV 2.0 released
- October 16, 2017: TiKV 1.0 released
- October 2016: beta version of TiKV was released and used in production
- April 1, 2016: TiKV was open-sourced
TiKV is currently adopted in production by more than 200 companies, either together with TiDB (a stateless MySQL-compatible SQL layer) or on its own. Please refer to the “Adopters” list below for the current list of publicly acknowledged adopters.
_Community Stats_:
- Stars: 3300+
- Contributors: 75+
- Commits: 2900+
- Forks: 400+
*Cloud-Native Features of TiKV*
_Horizontal scalability_: TiKV automatically handles data sharding and replication for cloud-native applications and enables elastic capacity scaling by simply adding or removing nodes with no interruption to ongoing workloads.
_Auto-failover and self-healing_: TiKV supports automatic failover with its implementation of the Raft consensus algorithm, so in the event of software or hardware failures the system recovers automatically while maintaining the application’s availability.
_Strong consistency_: TiKV delivers performant transactions and strong consistency by providing full support for ACID semantics, ensuring the accuracy and reliability of your data anytime, anywhere.
_Cloud-native deployment_: TiKV can be deployed in any cloud environment--public, private, or hybrid--using tidb-operator, a Kubernetes-based deployment tool.
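To make the transactional API concrete, here is a minimal, purely illustrative sketch using the community Python client (`pip install tikv-client`). The method names follow that client’s transactional interface and should be verified against its documentation; the PD address and account keys are invented for the example:

[source,python]
----
from tikv_client import TransactionClient  # community client: pip install tikv-client

# Connect through PD (Placement Driver), which schedules the TiKV cluster.
client = TransactionClient.connect(["127.0.0.1:2379"])

# A multi-key ACID transaction: both writes commit atomically or not at all,
# even when the keys live in different Raft groups on different nodes.
txn = client.begin()
balance_a = int(txn.get(b"account/a") or b"0")
balance_b = int(txn.get(b"account/b") or b"0")
txn.put(b"account/a", str(balance_a - 10).encode())
txn.put(b"account/b", str(balance_b + 10).encode())
txn.commit()
----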
*Comparison*
This comparison is intended simply to compare features of TiKV with two other well-known NoSQL databases, Cassandra and MongoDB. It is not intended to favor or position one project over another. Any corrections are welcome.
.Feature Comparison
|===
|Area |Cassandra |MongoDB |TiKV

|Type |Wide Column |Document |Key-Value
|Auto-scaling |Y |Optional |Y
|ACID Transactions |N |Partial (multi-document since 4.0) |Y
|Strong consistency replication |Optional |N |Y
|Geo-based replication |N |N |Y
|Self-healing |N |N |Y
|SQL Compatibility |Partial (w/ CQL) |N |MySQL (w/ TiDB)
|===
*Roadmap*:
https://github.com/pingcap/tikv/blob/master/docs/ROADMAP.md
*Additional Information*:
_TOC Presentation Date_: July 3, 2018
_Current TOC Sponsor_: Bryan Cantrill and Ben Hindman
_Preferred Maturity Level_: Sandbox
_License_: Apache 2.0
_Source control repositories_: https://github.com/pingcap/tikv
_Contributor Guideline_: https://github.com/pingcap/tikv/blob/master/CONTRIBUTING.md
_Official Documentation_: https://github.com/pingcap/tikv/wiki/TiKV-Documentation
_Blog_: https://www.pingcap.com/blog/#TiKV
_Infrastructure Required_:
TiKV uses Circle CI for unit tests and builds, and an in-house Jenkins CI cluster for some integration tests. We plan to use the CNCF test cluster to automatically run stability and performance tests in the future.
_Issue Tracker_: https://github.com/pingcap/tikv/issues
_Website_: tikv.org (under construction)
_Release Methodology and Mechanics_:
TiKV follows the Semantic Versioning 2.0.0 convention. The release cadence is:
- Major version is released every 6 months.
- Minor version is released every 3 months.
- Patch version is released every 2 weeks.
TiKV releases are announced using GitHub releases; the current release is 2.1.0 beta.
_Social Media Accounts_: TBD
_Adopters_:
https://github.com/pingcap/tikv/blob/master/docs/adopters.md
_Dependencies and License Compliance (done by FOSSA)_:
https://app.fossa.io/reports/87fe16e8-72a2-4e27-8509-a07dfa52a21a
*Statement on Alignment with CNCF Mission*
Our team believes TiKV will be a great fit for CNCF. As the CNCF’s mission is to “create and drive the adoption of a new computing paradigm that is optimized for modern distributed systems environments capable of scaling to tens of thousands of self healing multi-tenant nodes,” we believe TiKV to be a core enabling technology for this mission. This belief has been validated by our many adopters and developers working to build, deploy, and maintain large-scale applications in a cloud-native environment. Moreover, TiKV has very strong existing synergy with other CNCF projects, and is used heavily in conjunction with projects like Kubernetes, Prometheus, and gRPC.

View file

@ -1,80 +0,0 @@
# Harbor Incubating Stage Review
Harbor is currently a CNCF sandbox project. Please refer to Harbor's initial
[sandbox proposal](../proposals/harbor.adoc) for discussion on Harbor's
alignment with the CNCF and details on sandbox requirements.
In the time since being accepted as a sandbox project, Harbor has demonstrated
healthy growth and progress.
* [v1.6.0 is the latest release](https://goharbor.io/blogs/harbor-1.6.0-release/), shipped on September 7th, marking our 7th major feature release. New features include:
* [Support for hosting Helm charts](https://github.com/goharbor/harbor/issues/4922) (see the upload sketch after this list)
* [Support for RBAC via LDAP groups](https://github.com/goharbor/harbor/issues/3506)
* [Replication filtering via labels](https://github.com/goharbor/harbor/issues/4861)
* [Major refactoring to coalesce to a single PostgreSQL database](https://github.com/goharbor/harbor/issues/4855)
* A [formalized governance
policy](https://github.com/goharbor/community/blob/master/GOVERNANCE.md) has
been approved and instituted for the project, and two new maintainers from
different companies have joined the project to help Harbor continue to grow.
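As a purely illustrative sketch of the chart-hosting feature flagged above: Harbor 1.6 fronts a ChartMuseum-backed chart repository, so a packaged chart can be uploaded over plain HTTP. The host, project, chart file, and credentials below are placeholders, and the endpoint path should be checked against the API docs of the Harbor version in use:

```python
import requests  # pip install requests

HARBOR = "https://harbor.example.com"  # placeholder host
PROJECT = "library"                    # placeholder Harbor project

# Chart upload endpoint of the ChartMuseum-backed API (verify per version).
url = f"{HARBOR}/api/chartrepo/{PROJECT}/charts"

with open("mychart-0.1.0.tgz", "rb") as chart:
    resp = requests.post(
        url,
        files={"chart": chart},
        auth=("admin", "Harbor12345"),  # demo credentials only
    )
resp.raise_for_status()
print("Uploaded, HTTP", resp.status_code)
```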
## Incubating Stage Criteria
In addition to sandbox requirements, a project must meet the following
criteria to become an incubation-stage project:
* Document that it is being used successfully in production by at least three
independent end users which, in the TOCs judgement, are of adequate quality
and scope.
* Adopters: [https://github.com/goharbor/harbor/blob/master/ADOPTERS.md](https://github.com/goharbor/harbor/blob/master/ADOPTERS.md)
* Have a healthy number of committers. A committer is defined as someone with
the commit bit; i.e., someone who can accept contributions to some or all of
the project.
* Maintainers of the project are listed in
[https://github.com/goharbor/harbor/blob/master/OWNERS.md](https://github.com/goharbor/harbor/blob/master/OWNERS.md). There are 11 maintainers working on Harbor from 3 different
companies (VMware, Caicloud and Hyland Software)
* Maintainers are added and removed from the project as per the policies
outlined in the project governance:
[https://github.com/goharbor/community/blob/master/GOVERNANCE.md](https://github.com/goharbor/community/blob/master/GOVERNANCE.md).
* Demonstrate a substantial ongoing flow of commits and merged contributions.
* Releases: 7 major releases ([https://github.com/goharbor/harbor/releases](https://github.com/goharbor/harbor/releases))
* Roadmap: [https://github.com/goharbor/harbor/wiki/Harbor-Roadmap](https://github.com/goharbor/harbor/wiki/Harbor-Roadmap)
* Contributors: [https://github.com/goharbor/harbor/graphs/contributors](https://github.com/goharbor/harbor/graphs/contributors)
* Commit activity: [https://github.com/goharbor/harbor/graphs/commit-activity](https://github.com/goharbor/harbor/graphs/commit-activity)
* CNCF DevStats: [https://harbor.devstats.cncf.io/](https://harbor.devstats.cncf.io/)
* [Last 30 days activity on GitHub](https://harbor.devstats.cncf.io/d/8/dashboards?refresh=15m&orgId=1&from=now-30d&to=now-1h)
* [Community Stats](https://harbor.devstats.cncf.io/d/3/community-stats?orgId=1&var-period=d7&var-repo_name=goharbor%2Fharbor)
Further details of Harbor's growth and progress since entering the sandbox
stage as well as use case details from the Harbor community can be found in this
[slide
deck](https://docs.google.com/presentation/d/1aBQnE96kKatc1_t3E97lJBwiWvL-3GTitojuv-nWMuo/).
## Security
Harbor's codebase has been analyzed and reviewed by VMware's internal product
security team.
* Static analysis has been performed on Harbor via
[gosec](https://github.com/securego/gosec)
* Software decomposition via AppCheck, Snyk and retire.js, with the goal of
  discovering outdated or vulnerable packages
* Manual code analysis / review
* Vulnerability assessment via multiple scanners
* Completed threat model
In addition to this security work, the Harbor maintainers are partnering with
the CNCF to schedule a third-party security audit of Harbor.

View file

@ -1,43 +0,0 @@
# Rook Incubating Stage Review
Rook is currently a sandbox stage project. Please refer to Rook's [sandbox stage proposal](../proposals/rook.adoc) ("inception" at time of acceptance) for details on the sandbox requirements.
In the time since being accepted to the sandbox stage, Rook has demonstrated healthy growth and progress.
Two releases were completed, starting with v0.7 on February 21st and then v0.8 on July 18th.
With those releases, Rook extended beyond just orchestration of Ceph and has built a framework of reusable specs, logic and policies for [cloud-native storage orchestration of other providers](https://blog.rook.io/rooks-framework-for-cloud-native-storage-orchestration-c66278014df7).
Operators and CRD types were added for both CockroachDB and Minio in the v0.8 release, initial support for NFS is nearly complete, and other storage providers are also in the works.
The CRD types and support for Ceph have graduated to Beta in the v0.8 release, reflecting increased maturity made possible by impressive engagement from the community.
Other big features for the Ceph operator include automatic horizontal scaling of storage resources, an improved security model, and support for new environments such as OpenShift.
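To make the CRD-driven model concrete, here is a minimal, purely illustrative sketch of declaring a Ceph cluster through the Kubernetes API with the official Python client. The group/version and spec fields follow the v0.8-era beta examples and should be checked against the Rook documentation for the release actually deployed:

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # assumes a reachable cluster and kubeconfig

# Minimal Ceph cluster custom resource; apiVersion and spec fields follow
# the v0.8-era (beta) examples and may differ in other Rook releases.
cluster = {
    "apiVersion": "ceph.rook.io/v1beta1",
    "kind": "Cluster",
    "metadata": {"name": "rook-ceph", "namespace": "rook-ceph"},
    "spec": {
        "dataDirHostPath": "/var/lib/rook",
        "mon": {"count": 3, "allowMultiplePerNode": False},
        "storage": {"useAllNodes": True, "useAllDevices": False},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="ceph.rook.io",
    version="v1beta1",
    namespace="rook-ceph",
    plural="clusters",
    body=cluster,
)
```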
A [formalized governance policy](https://github.com/rook/rook/blob/master/GOVERNANCE.md) has been approved and instituted for the project, and a [new maintainer](https://github.com/rook/rook/blob/master/OWNERS.md) has also been added to help the project continue to grow.
## Incubating Stage Criteria
To be accepted to incubating stage, a project must meet the sandbox stage requirements plus:
* Document that it is being used successfully in production by at least three independent end users which, in the TOCs judgement, are of adequate quality and scope.
* Adopters: [https://github.com/rook/rook/blob/master/ADOPTERS.md](https://github.com/rook/rook/blob/master/ADOPTERS.md)
* Have a healthy number of committers. A committer is defined as someone with the commit bit; i.e., someone who can accept contributions to some or all of the project.
* Maintainers of the project are listed in [https://github.com/rook/rook/blob/master/OWNERS.md](https://github.com/rook/rook/blob/master/OWNERS.md).
* Maintainers are added and removed from the project as per the policies outlined in the project governance: [https://github.com/rook/rook/blob/master/GOVERNANCE.md](https://github.com/rook/rook/blob/master/GOVERNANCE.md).
* Demonstrate a substantial ongoing flow of commits and merged contributions.
* Releases: [https://github.com/rook/rook/releases](https://github.com/rook/rook/releases)
* Roadmap: [https://github.com/rook/rook/blob/master/ROADMAP.md](https://github.com/rook/rook/blob/master/ROADMAP.md)
* Contributors: [https://github.com/rook/rook/graphs/contributors](https://github.com/rook/rook/graphs/contributors)
* Commit activity: [https://github.com/rook/rook/graphs/commit-activity](https://github.com/rook/rook/graphs/commit-activity)
* CNCF DevStats: [https://rook.devstats.cncf.io/](https://rook.devstats.cncf.io/)
* [Last 30 days activity on GitHub](https://rook.devstats.cncf.io/d/8/dashboards?refresh=15m&orgId=1&from=now-30d&to=now-1h)
* [Community Stats](https://rook.devstats.cncf.io/d/3/community-stats?orgId=1)
Further details of Rook's growth and progress since entering the sandbox stage as well as use case details from the Rook community can be found in this [slide deck](https://docs.google.com/presentation/d/1DOgAlX0RyB8hzD7KbmXK4pKu9hFFPY9WiLv-LEy38jo/edit?usp=sharing).