Compare commits


1 commit

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Łukasz Gryglicki | ef9448910d | Update README.md | 2018-03-15 11:23:08 +01:00 |
33 changed files with 57 additions and 2611 deletions


@@ -18,7 +18,7 @@ Possible ways to contribute:
* Working groups (various tasks)
* Technical content for website
If you are interested in engaging in this way, we would encourage you to issue a pull request to [TOC Contributors](https://github.com/cncf/toc/blob/master/CONTRIBUTORS.md) that you desire to become a TOC Contributor. Although there is not an actual limit of having one Contributor per company, we would encourage CNCF member companies to designate an official TOC Contributor who is tasked with consulting internal experts and expressing a semi-official view on a given project. We will list current TOC Contributors on a page similar to https://www.cncf.io/people/ambassadors/.
If you are interested in engaging in this way, we would encourage you to make a public commitment on the TOC mailing list that you will become a TOC Contributor. Although there is not an actual limit of having one Contributor per company, we would encourage CNCF member companies to designate an official TOC Contributor who is tasked with consulting internal experts and expressing a semi-official view on a given project. We will list current TOC Contributors on a page similar to https://www.cncf.io/people/ambassadors/.
This is not only about individual contribution. It is also about rallying help from your employer, e.g., if you work for a CNCF Member company. Given the [breadth](https://raw.githubusercontent.com/cncf/landscape/master/landscape/CloudNativeLandscape_v0.9.5_cncf.jpg) of projects represented by cloud native, it is impossible for anyone to be an expert in all technologies that we're evaluating. We're particularly interested in Contributors that can act as a focal point for tapping relevant expertise from their organizations and colleagues in order to engage with CNCF discussions in a timely manner.


@@ -19,46 +19,39 @@ If you are interested in engaging in this way, we would encourage you to issue a
List below is the official list of TOC contributors, in alphabetical order:
* Alex Chircop, StorageOS (alex.chircop@storageos.com)
* Allen Sun, Alibaba (allensun.shl@alibaba-inc.com)
* Andy Santosa, Ebay (asantosa@ebay.com)
* Ara Pulido, Bitnami (ara@bitnami.com)
* Ayrat Khayretdinov (akhayertdinov@cloudops.com)
* Bassam Tabbara, Upbound (bassam@upbound.io)
* Bob Wise, Amazon Web Services (bob@bobsplanet.com)
* Bob Wise, Samsung SDS (bob@bobsplanet.com)
* Cathy Zhang, Huawei (cathy.h.zhang@huawei.com)
* Vasu Chandrasekhara, SAP SE (vasu.chandrasekhara@sap.com)
* Chase Pettet, Wikimedia Foundation (cpettet@wikimedia.org)
* Christopher Liljenstople, Tigera (cdl@asgaard.org)
* Clinton Kitson, Dell (Clinton.Kitson@dell.com)
* Dan Wilson, Concur (danw@concur.com)
* Darren Ratcliffe, Atos (darren.ratcliffe@atos.net)
* Dave Zolotusky, Spotify (dzolo@spotify.com)
* Deyuan Deng, Caicloud (deyuan@caicloud.io)
* Doug Davis, IBM (dug@us.ibm.com)
* Drew Rapenchuk, Bloomberg (drapenchuk@bloomberg.net)
* Dustin Kirkland, Canonical (kirkland@canonical.com)
* Eduardo Silva, Treasure Data (eduardo@treasure-data.com)
* Edward Lee, Intuit (edward_lee@intuit.com)
* Erin Boyd, Red Hat (eboyd@redhat.com)
* Gergely Csatari, Nokia (gergely.csatari@nokia.com)
* Ghe Rivero, Independent (ghe.rivero@gmail.com)
* Gou Rao, Portworx (gou@portworx.com)
* Ian Crosby, Container Solutions (ian.crosby@container-solutions.com)
* Jeyappragash JJ, Independent (pragashjj@gmail.com)
* Joe Beda, Heptio (joe@heptio.com)
* Jonghyuk Jong Choi, NCSoft (jongchoi@ncsoft.com)
* Josef Adersberger, QAware (josef.adersberger@qaware.de)
* Joe Beda, Heptio (joe@heptio.com)
* Joseph Jacks, Independent (jacks.joe@gmail.com)
* Josh Bernstein, Dell (Joshua.Bernstein@dell.com)
* Justin Cormack, Docker (justin.cormack@docker.com)
* Jun Du, Huawei (dujun5@huawei.com)
* Kiran Mova, MayaData (kiran.mova@mayadata.io)
* Lachlan Evenson, Microsoft (lachlan.evenson@microsoft.com)
* Lee Calcote, SolarWinds (leecalcote@gmail.com)
* Lei Zhang, HyperHQ (harryzhang@zju.edu.cn)
* Louis Fourie, Huawei (louis.fourie@huawei.com)
* Mark Peek, VMware (markpeek@vmware.com)
* Matt Farina, Samsung SDS (matt@mattfarina.com)
* Naadir Jeewa, The Scale Factory (naadir@scalefactory.com)
* Matthew Fornaciari, Gremlin (forni@gremlin.com)
* Nick Chase, Mirantis (nchase@mirantis.com)
* Pengfei Ni, Microsoft (peni@microsoft.com)
* Philip Lombardi, Datawire.io (plombardi@datawire.io)
@@ -66,13 +59,9 @@ List below is the official list of TOC contributors, in alphabetical order:
* Randy Abernethy, RX-M LLC (randy.abernethy@rx-m.com)
* Rick Spencer, Bitnami (rick@bitnamni.com)
* Sarah Allen, Google (sarahallen@google.com)
* Steven Dake, Cisco (stdake@cisco.com)
* Tammy Butow, Gremlin (tammy@gremlin.com)
* Timothy Chen, Hyperpilot (tim@hyperpilot.io)
* Vasu Chandrasekhara, SAP SE (vasu.chandrasekhara@sap.com)
* Xiang Li, Alibaba (x.li@alibaba.com)
* Xu Wang, Hyper (xu@hyper.sh)
* Yaron Haviv, iguazio (yaronh@iguaz.io)
* Yong Tang, Infoblox (ytang@infoblox.com)
* Yuri Shkuro, Uber (ys@uber.com)
* Zefeng (Kevin) Wang, Huawei (wangzefeng@huawei.com)


@@ -1,26 +0,0 @@
# CNCF Cloud Native Definition v1.0 #
*Approved by TOC: 6/11/2018*
中文版本在英文版本之后 (in Chinese below)
Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic
environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable
infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with
robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal
toil.
The Cloud Native Computing Foundation seeks to drive adoption of this paradigm by fostering and sustaining an
ecosystem of open source, vendor-neutral projects. We democratize state-of-the-art patterns to make these
innovations accessible for everyone.
## 中文版本:
云原生技术有利于各组织在公有云、私有云和混合云等新型动态环境中构建和运行可弹性扩展的应用。云原生的代表技术包括容器、服务网格、微服务、不可变基础设施和声明式API。
这些技术能够构建容错性好、易于管理和便于观察的松耦合系统。结合可靠的自动化手段,云原生技术使工程师能够轻松地对系统作出频繁和可预测的重大变更。
云原生计算基金会CNCF致力于培育和维护一个厂商中立的开源生态系统来推广云原生技术。我们通过将最前沿的模式民主化让这些创新为大众所用。


@@ -1,6 +0,0 @@
We would like to acknowledge previous TOC members and their huge contributions to our collective success:
* Solomon Hykes (1/29/2016 - 3/17/2018)
* Elissa Murphy (1/29/2016 - 10/2/2017)
We thank these members for their service to the CNCF community.


@@ -8,15 +8,15 @@ The CNCF TOC is the technical governing body of the CNCF Foundation. It admits a
## Members
* **Jonathan Boulle** (term: 3 years - start date: 1/29/2016 - 1/29/2019)
* **Jonathan Boulle** (term: 3 years - start date: 1/29/2016)
* **Bryan Cantrill** (term: 3 years - start date: 1/29/2016 - 1/29/2019)
* **Bryan Cantrill** (term: 3 years - start date: 1/29/2016)
* **Camille Fournier** (term: 3 years - start date: 1/29/2016 - 1/29/2019)
* **Camille Fournier** (term: 3 years - start date: 1/29/2016)
* **Brian Grant** (term: 2 years - start date: 3/17/2018 - 3/17/2020)
* **Brian Grant** (term: 2 years - start date: 3/17/2016)
* **Benjamin Hindman** (term: 3 years - start date: 1/29/2016 - 1/29/2019)
* **Benjamin Hindman** (term: 3 years - start date: 1/29/2016)
* **Quinton Hoole** (term: 1 years - start date: 3/17/2018 - 3/17/2019)
* **Solomon Hykes** (term: 2 years - start date: 3/17/2016)
* **Sam Lambert** (term: 16 months - start date: 10/2/2017 - 1/29/2019)
* **Sam Lambert** (term: 16 months - start date: 10/2/2017)
* **Ken Owens** (term: 3 years - start date: 1/29/2016 - 1/29/2019)
* **Ken Owens** (term: 3 years - start date: 1/29/2016)
* **Alexis Richardson** (term: 3 years - start date: 1/29/2016 - 1/29/2019)
* **Alexis Richardson** (term: 3 years - start date: 1/29/2016)
Election [schedule](process/election-schedule.md)
@@ -36,22 +36,18 @@ The TOC has created the following working groups to investigate and discuss the
| Working Group | Chair | Meeting Time | Minutes/Recordings |
|---------------|------------------|---------------------------------------|--------------------|
| [CI](https://github.com/cncf/wg-ci) | Camille Fournier | [4th Tue of every month at 8AM PT](https://zoom.us/my/cncfciwg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2P3_A3ujWHSxOu1IO_bd7Zi) |
| [CI](https://github.com/cncf/wg-ci) | Camille Fournier | [2nd and 4th Tue every month at 8AM PT](https://zoom.us/my/cncfciwg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2P3_A3ujWHSxOu1IO_bd7Zi) |
| [Networking](https://github.com/cncf/wg-networking) | Ken Owens | [1st and 3rd Tue every month at 9AM PT](https://zoom.us/my/cncfnetworkingwg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2M_-K5n67_zTdrPh_PtTKFC) |
| [Serverless](https://github.com/cncf/wg-serverless) | Ken Owens | [Thu of every week at 9AM PT](https://zoom.us/my/cncfserverlesswg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2Ph7YoBIgsZNW_RGJvNlFOt) |
| [Storage](https://github.com/cncf/wg-storage) | Quinton Hoole | [2nd and 4th Wed every month at 8AM PT](https://zoom.us/my/cncfstoragewg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2NoiNaLVZxr-ERc1ifKP7n6) |
| [Storage](https://github.com/cncf/wg-storage) | Ben Hindman | [2nd and 4th Wed every month at 8AM PT](https://zoom.us/my/cncfstoragewg) | [Youtube](https://www.youtube.com/playlist?list=PLj6h78yzYM2NoiNaLVZxr-ERc1ifKP7n6) |
All meetings are on the public CNCF calendar: https://goo.gl/eyutah
## Meeting Agenda and Minutes
Meeting Minutes are recorded here: https://docs.google.com/document/d/1jpoKT12jf2jTf-2EJSAl4iTdA7Aoj_uiI19qIaECNFc/edit#
## Meeting Time
The TOC meets on the 1st and 3rd Tuesday of every month at 8AM PT (USA Pacific):
https://zoom.us/j/967220397
https://zoom.us/j/263858603
Or Telephone:
@@ -74,10 +70,10 @@ Here is a link to a World Time Zone Converter here http://www.thetimezoneconvert
**Project**|**Sponsor**|**TOC Deck**|**Accepted**|**Maturity Level**
:-----:|:-----:|:-----:|:-----:|:-----:
[Kubernetes](https://kubernetes.io/)|Alexis Richardson|N/A|[3/10/16](https://cncf.io/news/news/2015/07/techcrunch-kubernetes-hits-10-google-donates-technology-newly-formed-cloud-native)|Graduated
[Prometheus](https://prometheus.io/)|Alexis Richardson|[3/4/16](https://docs.google.com/presentation/d/1GtVX-ppI95LhrijprGENsrpq78-I1ttcSWLzMVk5d8M/edit?usp=sharing)|[5/9/16](https://cncf.io/news/announcement/2016/05/cloud-native-computing-foundation-accepts-prometheus-second-hosted-project)|Graduated
[Prometheus](https://prometheus.io/)|Alexis Richardson|[3/4/16](https://docs.google.com/presentation/d/1GtVX-ppI95LhrijprGENsrpq78-I1ttcSWLzMVk5d8M/edit?usp=sharing)|[5/9/16](https://cncf.io/news/announcement/2016/05/cloud-native-computing-foundation-accepts-prometheus-second-hosted-project)|Incubating
[OpenTracing](http://opentracing.io/)|Bryan Cantrill|[8/17/16](https://docs.google.com/presentation/d/1kQkmJtT0bjSRvUTP5YFTKaXSfIM3aL7zxja_KtZtbgw/edit#slide=id.g15fc45ec1a_0_165)|[10/11/16](https://cncf.io/news/blogs/2016/10/opentracing-joins-cloud-native-computing-foundation)|Incubating
[Fluentd](http://www.fluentd.org/)|Brian Grant|[8/3/16](https://docs.google.com/presentation/d/1S79MNv3E2aG8nuZJFJ0XMSumf7jnKozN3vdrivCH77U/edit?usp=sharing)|[11/8/16](https://www.cncf.io/blog/2016/12/08/fluentd-cloud-native-logging)|Incubating
[Linkerd](https://linkerd.io/)|Jonathan Boulle|[10/5/16](https://docs.google.com/presentation/d/19aamsOR__zGFNNFCmid2TjaJwEqNOXmHRa34EQwf3sA/edit#slide=id.g181e6fdb33_0_0)|[1/23/17](https://www.cncf.io/blog/2017/01/23/linkerd-project-joins-cloud-native-computing-foundation)|Incubating
[Linkerd](https://linkerd.io/)|Jonathan Boulle|[10/5/16](https://docs.google.com/presentation/d/19aamsOR__zGFNNFCmid2TjaJwEqNOXmHRa34EQwf3sA/edit#slide=id.g181e6fdb33_0_0)|[1/23/17](https://www.cncf.io/blog/2017/01/23/linkerd-project-joins-cloud-native-computing-foundation)|Sandbox
[gRPC](http://www.grpc.io/)|Brian Grant|[10/19/16](https://docs.google.com/presentation/d/16mNYaqgd7BaV50OnbcuQ1zRHpWoUKhL3XHvCJwEm8CE/edit#slide=id.g185c09339a_23_106)|[2/16/17](https://www.cncf.io/blog/2017/03/01/cloud-native-computing-foundation-host-grpc-google)|Incubating
[CoreDNS](https://coredns.io/)|Jonathan Boulle|[8/17/16](https://docs.google.com/presentation/d/1LPvM44Pi7gletiDs40P7XmTKJLez5nz88ObYCHrHal8/edit?usp=sharing)|[2/27/17](https://www.cncf.io/blog/2017/03/02/cloud-native-computing-foundation-becomes-steward-service-naming-discovery-project-coredns)|Incubating
[containerd](https://containerd.io/)|Brian Grant|[3/15/17](https://docs.google.com/presentation/d/1qmGsmARyMhRLwbFWG7LXJSsDHm45nqZ_QtBv5SnQL54/edit?usp=sharing)|[3/29/17](https://www.cncf.io/announcement/2017/03/29/containerd-joins-cloud-native-computing-foundation/)|Incubating
@@ -87,21 +83,8 @@ Here is a link to a World Time Zone Converter here http://www.thetimezoneconvert
[Jaeger](https://github.com/jaegertracing/jaeger)|Bryan Cantrill|[8/1/17](https://goo.gl/ehtgts)|[9/13/17](https://www.cncf.io/blog/2017/09/13/cncf-hosts-jaeger/)|Incubating
[Notary](https://github.com/docker/notary)|Solomon Hykes|[6/20/17](https://goo.gl/6nmyDn)|[10/24/17](https://www.cncf.io/announcement/2017/10/24/cncf-host-two-security-projects-notary-tuf-specification/)|Incubating
[TUF](https://github.com/theupdateframework)|Solomon Hykes|[6/20/17](https://goo.gl/6nmyDn)|[10/24/17](https://www.cncf.io/announcement/2017/10/24/cncf-host-two-security-projects-notary-tuf-specification/)|Incubating
[rook](https://github.com/rook)|Ben Hindman|[6/6/17](https://goo.gl/6nmyDn)|[1/29/18](https://www.cncf.io/blog/2018/01/29/cncf-host-rook-project-cloud-native-storage-capabilities)|Incubating
[rook](https://github.com/rook)|Ben Hindman|[6/6/17](https://goo.gl/6nmyDn)|[1/29/18](https://www.cncf.io/blog/2018/01/29/cncf-host-rook-project-cloud-native-storage-capabilities)|Sandbox
[Vitess](https://github.com/vitessio/vitess)|Brian Grant|[4/19/17](https://goo.gl/6nmyDn)|[2/5/18](https://www.cncf.io/blog/2018/02/05/cncf-host-vitess/)|Incubating
[NATS](https://github.com/nats-io/gnatsd)|Alexis Richardson|[9/21/16](https://goo.gl/6nmyDn)|[3/15/18](https://www.cncf.io/blog/2018/03/15/cncf-to-host-nats/)|Incubating
[SPIFFE](https://github.com/spiffe)|Brian Grant, Sam Lambert, Ken Owens|[11/7/17](https://goo.gl/6nmyDn)|[3/29/18](https://www.cncf.io/blog/2018/03/29/cncf-to-host-the-spiffe-project/)|Sandbox
[OPA](https://github.com/open-policy-agent)|Brian Grant, Ken Owens|[11/14/17](https://goo.gl/vKbawR)|[3/29/18](https://www.cncf.io/blog/2018/03/29/cncf-to-host-open-policy-agent-opa/)|Sandbox
[CloudEvents](https://github.com/cloudevents)|Brian Grant, Ken Owens|[11/14/17](https://goo.gl/vKbawR)|[5/22/18](https://www.cncf.io/blog/2018/05/22/cloudevents-in-the-sandbox/)|Sandbox
[Telepresence](https://github.com/telepresenceio)|Alexis Richardson, Camille Fournier|[4/17/18](https://docs.google.com/presentation/d/1VrHKGre5Y8AbmXEOXu4VPfILReoLT38Uw9TMN71u08E/edit?usp=sharing)|[5/22/18](https://www.cncf.io/blog/2018/05/22/telepresence-in-the-sandbox/)|Sandbox
[Helm](https://github.com/helm)|Brian Grant|[5/15/18](https://docs.google.com/presentation/d/1KNSv70fyTfSqUerCnccV7eEC_ynhLsm9A_kjnlmU_t0/edit#slide=id.g25ca91f87f_0_0)|[6/1/18](https://www.cncf.io/blog/2018/06/01/cncf-to-host-helm/)|Incubating
[Harbor](https://github.com/goharbor)|Quinton Hoole, Ken Owens|[6/19/18](https://docs.google.com/presentation/d/1KNSv70fyTfSqUerCnccV7eEC_ynhLsm9A_kjnlmU_t0/edit#slide=id.g25ca91f87f_0_0)|[7/31/18](https://www.cncf.io/blog/2018/07/31/cncf-to-host-harbor-in-the-sandbox/)|Incubating
[OpenMetrics](https://github.com/OpenObservability/OpenMetrics)|Alexis Richardson, Bryan Cantrill|[6/20/17](https://goo.gl/6nmyDn)|[8/10/18](https://www.cncf.io/blog/2018/08/10/cncf-to-host-openmetrics/)|Sandbox
[TiKV](https://github.com/tikv/tikv)|Ben Hindman, Bryan Cantrill|[7/3/18](https://docs.google.com/presentation/d/1864TEfbwCpbW5kPYGQNAfqAUdc3X83n-_OYigqxfohw/edit?usp=sharing)|[8/28/18](https://www.cncf.io/blog/2018/08/28/cncf-to-host-tikv/)|Sandbox
[Cortex](https://github.com/cortexproject/cortex)|Ken Owens, Bryan Cantrill|[6/5/18](https://docs.google.com/presentation/d/190oIFgujktVYxWZLhLYN4q8p9dtQYoe4sxHgn4deBSI/edit#slide=id.g25ca91f87f_0_0)|[9/20/18](https://www.cncf.io/blog/2018/09/20/cncf-to-host-in-the-sandbox/)|Sandbox
[Buildpacks](https://github.com/buildpack/spec)|Brian Grant, Alexis Richardson|[8/21/18](https://docs.google.com/presentation/d/1RkygwZw7ILVgGhBpKnFNgJ4BCc_9qMG8cIf0MRbuzB4/edit?usp=sharing)|[10/3/18](https://www.cncf.io/blog/2018/10/03/cncf-to-host-cloud-native-buildpacks-in-the-sandbox)|Sandbox
[Falco](https://github.com/falcosecurity/falco)|Brian Grant, Quinton Hoole|[7/17/18](https://docs.google.com/presentation/d/17p5QBVooGMLAtX6Mn6d3NAFhRmFHE0cH-WI_-0MbOm8/edit?usp=sharing)|[10/10/18](https://falco.org/)|Sandbox
[Dragonfly](https://github.com/dragonflyoss/dragonfly)|Jonathan Boulle, Benjamin Hindman|[9/4/18](https://docs.google.com/presentation/d/1umu-iT5ZXq5XsMFmqmVeRe-tn2y7DeSoCebhrehi7fk/edit#slide=id.g41381b8fd7_0_199)|[11/15/18](https://github.com/oss/dragonfly)|Sandbox
## Website Guidelines
@@ -109,7 +92,7 @@ CNCF has the following [guidelines](https://www.cncf.io/projects/website-guideli
## Scheduled Community Presentations
If you're interested in presenting at a TOC call about your project, please open a [github issue](https://github.com/cncf/toc/issues) with the request. We can schedule a maximum of one community presentation per TOC meeting.
If you're interested in presenting at a TOC call about your project, please open a [github issue](https://github.com/cncf/toc/issues) with the request. We can schedule a maximum of two community presentations per TOC meeting.
* **May 4th, 2016**: [Prometheus](https://prometheus.io/) ([overview](https://docs.google.com/presentation/d/1GtVX-ppI95LhrijprGENsrpq78-I1ttcSWLzMVk5d8M/edit?usp=sharing)): Fabian Reinartz, Julius Volz
* **August 3rd, 2016**: [Fluentd](http://www.fluentd.org/) ([overview](https://docs.google.com/presentation/d/1S79MNv3E2aG8nuZJFJ0XMSumf7jnKozN3vdrivCH77U/edit?usp=sharing)): Kiyoto Tamura / [Heron](https://github.com/twitter/heron) ([overview](https://docs.google.com/presentation/d/1pKwNO2V3VScjD1JxJ0gEgFTwAOccJgaJxHWgwcyczec/edit?usp=sharing)): Karthik Ramasamy / [Minio](https://minio.io/) ([overview](https://docs.google.com/presentation/d/1DGm_Zwq7qYHaXm6ZH26RAQeyBAKF1FOCLlEZQNTMJYE/edit?usp=sharing)): Anand Babu Periasamy
@@ -143,24 +126,9 @@ If you're interested in presenting at a TOC call about your project, please open
* **December 7, 2017**: KubeCon/CloudNativeCon F2F
* **January 16, 2018**: CSI/Storage WG Readout
* **Feb 6, 2018**: NATS
* **Feb 20, 2018**: Sandbox + CoreDNS Project Review
* **Feb 20, 2018**: Sandbox + CoreDNS Inception Project Review
* **Mar 6, 2018**: Sandbox + Graduation Reviews + Working Group Process
* **Mar 20, 2018**: New Sandbox Projects + Working Group Process
* **Mar 20, 2018**: (interested presenters contact cra@linuxfoundation.org or open up a github [issue](https://github.com/cncf/toc/issues)
* **Apr 3, 2018**: CNCF CI WG: [Cross Cloud CI](https://github.com/crosscloudci) + Working Group Process
* **Apr 17, 2018**: [Telepresence](https://github.com/cncf/toc/issues/99) + SAFE Working Group Proposal
* **May 1, 2018**: CANCELLED: CloudNativeCon/KubeCon Copenhagen Office Hours at CNCF Booth
* **May 15, 2018**: CloudEvents/ServerlessWG Update + Helm
* **June 5, 2018**: Cortex
* **June 19, 2018**: OpenMetrics and Harbor
* **July 3, 2018**: TiKV
* **July 17, 2018**: Falco
* **Aug 7, 2018**: RSocket / etcd
* **Aug 21, 2018**: Buildpacks
* **Sep 4, 2018**: OpenMessaging/Dragonfly
* **Sep 18, 2018**: netdata
* **Oct 2, 2018**: keycloak
* **Nov 20, 2018**: Graduation/Project Reviews
* **Oct 16, 2018**: (interested presenters contact cra@linuxfoundation.org or open up a github [issue](https://github.com/cncf/toc/issues))
## Meeting Minutes
@@ -212,17 +180,3 @@ If you're interested in presenting at a TOC call about your project, please open
* [February 6th, 2018](https://goo.gl/5WWA2Q)
* [February 20th, 2018](https://goo.gl/Z5ytqu)
* [March 6th, 2018](https://goo.gl/LcE3TC)
* [March 20th, 2018](https://goo.gl/PpznT7)
* [April 3rd, 2018](https://goo.gl/FnpaEA)
* [April 17th, 2018](https://docs.google.com/presentation/d/1VrHKGre5Y8AbmXEOXu4VPfILReoLT38Uw9TMN71u08E/edit?usp=sharing)
* [May 15th, 2018](https://docs.google.com/presentation/d/1KNSv70fyTfSqUerCnccV7eEC_ynhLsm9A_kjnlmU_t0/edit#slide=id.g25ca91f87f_0_0)
* [June 5th, 2018](https://docs.google.com/presentation/d/190oIFgujktVYxWZLhLYN4q8p9dtQYoe4sxHgn4deBSI/edit#slide=id.g25ca91f87f_0_0)
* [June 19th, 2018](https://docs.google.com/presentation/d/1Ym8fLRCaX43uHPHBRyuRXM62U8m4vXaBXkuUp6tt3js/edit?usp=sharing)
* [July 3rd, 2018](https://docs.google.com/presentation/d/1864TEfbwCpbW5kPYGQNAfqAUdc3X83n-_OYigqxfohw/edit?usp=sharing)
* [July 17th, 2018](https://docs.google.com/presentation/d/17p5QBVooGMLAtX6Mn6d3NAFhRmFHE0cH-WI_-0MbOm8/edit?usp=sharing)
* [August 7th, 2018](https://docs.google.com/presentation/d/1Eebd5ZwSYyvNRLbHDpiF_USDC4sEz7lEEpPLju_0PaU/edit)
* [August 21st, 2018](https://docs.google.com/presentation/d/1RkygwZw7ILVgGhBpKnFNgJ4BCc_9qMG8cIf0MRbuzB4/edit?usp=sharing)
* [September 4th, 2018](https://docs.google.com/presentation/d/1umu-iT5ZXq5XsMFmqmVeRe-tn2y7DeSoCebhrehi7fk/edit#slide=id.g41381b8fd7_0_199)
* [September 18th, 2018](https://docs.google.com/presentation/d/1umu-iT5ZXq5XsMFmqmVeRe-tn2y7DeSoCebhrehi7fk/edit#slide=id.g41381b8fd7_0_199)
* [October 2nd, 2018](https://docs.google.com/presentation/d/1Xt1xNSN8_pGuDLl5H8xEYToFss7VoIm7GBG0e_HrsLc/edit?usp=sharing)
* [October 16th, 2018](https://docs.google.com/presentation/d/1UtObz-sbjJqtfoVxlfsl2YlalnZnWQQyH8wloDcRyXk/edit#slide=id.g25ca91f87f_0_0)


@@ -1,100 +0,0 @@
# Due Diligence Project Review Template
This page provides project review guidelines to those leading or contributing to due diligence exercises performed by or on behalf of the Technical Oversight Committee of the CNCF.
## Introduction
The decision to graduate or promote a project depends on the TOC sponsors of the project performing and documenting the evaluation process used in deciding upon initial or continued inclusion of projects through a Technical Due Diligence ('Tech DD') exercise. Ultimately the voting members of the TOC will, on the basis of this and other information, vote for or against the inclusion of each project at the relevant time.
## Technical Due Diligence
### Primary Goals
To enable the voting TOC members to cast an informed vote about a project, it is crucial that each member is able to form their own opinion as to whether and to what extent the project meets the agreed upon criteria for sandbox, incubation or graduation. As the leader of a DD, your job is to make sure that they have whatever information they need, succinctly and readily available, to form that opinion.
As a secondary goal, it is in the interests of the broader CNCF ecosystem that there exists some reasonable degree of consensus across the community regarding the inclusion or otherwise of projects at the various maturity levels. Making sure that the relevant information is available, and that any disagreement or misunderstanding as to its validity is ideally resolved, helps to foster this consensus.
## Statement of CNCF Alignment to TOC Principles
1. Project is self-governing
2. Is there a documented Code of Conduct that adheres to the CNCF guidelines?
3. Does the project have production deployments that are high quality and high-velocity? (for incubation and graduated projects).
(Sandbox level projects are targeted at earlier-stage projects to cultivate a community/technology)
4. Is the project committed to achieving the CNCF principles and do they have a committed roadmap to address any areas of concern raised by the community?
5. The project needs to be reviewed and documented as having a fundamentally sound design without obvious critical compromises that will inhibit potential widespread adoption.
6. Document that the project is useful for cloud native deployments and the degree to which it is architected in a cloud native style.
7. Document that the project has an affinity for how CNCF operates and understand the expectation of being a CNCF project.
## Review of graduation criteria and desired cloud native properties
/* Use appropriate Section */
### Sandbox Graduation (Exit Requirements)
1. Document that it is being used successfully in production by at least three independent end users, with a focus on adequate quality and scope.
2. Have a healthy number of committers. A committer is defined as someone with the commit bit; i.e., someone who can accept contributions to some or all of the project.
3. Demonstrate a substantial ongoing flow of commits and merged contributions.
### Incubating Stage Graduation (Exit Requirements)
1. Document that it is being used successfully in production by at least three independent end users, with a focus on adequate quality and scope.
2. Have a healthy number of committers. A committer is defined as someone with the commit bit; i.e., someone who can accept contributions to some or all of the project.
3. Demonstrate a substantial ongoing flow of commits and merged contributions.
4. Have committers from at least two organizations.
5. Have achieved and maintained a Core Infrastructure Initiative Best Practices Badge.
6. Adopted the CNCF Code of Conduct.
7. Explicitly define a project governance and committer process. This preferably is laid out in a GOVERNANCE.md file and references an OWNERS.md file showing the current and emeritus committers.
8. Have a public list of project adopters for at least the primary repo (e.g., ADOPTERS.md or logos on the project website).
### Documentation of CNCF Alignment (if not addressed above):
name of project (must be unique within CNCF)
project description (what it does, why it is valuable, origin and history)
statement on alignment with CNCF charter mission
sponsor from TOC (sponsor helps mentor projects)
license (charter dictates Apache 2 by default)
source control (GitHub by default)
external dependencies (including licenses)
release methodology and mechanics
community size and any existing sponsorship
## Technical
* An architectural, design and feature overview should be available. (add link)
* What are the primary target cloud-native use cases? Which of those:
* Can be accomplished now.
* Can be accomplished with reasonable additional effort (and are ideally already on the project roadmap).
* Are in-scope but beyond the current roadmap.
* Are out of scope.
* What are the current performance, scalability and resource consumption bounds of the software? Have these been explicitly tested? Are they appropriate given the intended usage (e.g. agent-per-node or agent-per-container need to be lightweight, etc)?
* What exactly are the failure modes? Are they well understood? Have they been tested? Do they form part of continuous integration testing? Are they appropriate given the intended usage (e.g. cluster-wide shared services need to fail gracefully etc)?
* What trade-offs have been made regarding performance, scalability, complexity, reliability, security etc? Are these trade-offs explicit or implicit? Why? Are they appropriate given the intended usage? Are they user-tunable?
* What are the most important holes? No HA? No flow control? Inadequate integration points?
* Code quality. Does it look good, bad or mediocre to you (based on a spot review). How thorough are the code reviews? Substance over form. Are there explicit coding guidelines for the project?
* Dependencies. What external dependencies exist, do they seem justified?
* What is the release model? Versioning scheme? Evidence of stability or otherwise of past stable released versions?
* What is the CI/CD status? Do explicit code coverage metrics exist? If not, what is the subjective adequacy of automated testing? Do different levels of tests exist (e.g. unit, integration, interface, end-to-end), or is there only partial coverage in this regard? Why?
* What licensing restrictions apply? Again, CNCF staff will handle the full legal due diligence.
* What are the recommended operational models? Specifically, how is it operated in a cloud-native environment, such as on Kubernetes?
## Project
* Do we believe this is a growing, thriving project with committed contributors?
* Is it aligned with CNCF's values and mission?
* Do we believe it could eventually meet the graduation criteria?
* Should it start at the sandbox level or incubation level?
* Does the project have a sound, documented process for source control, issue tracking, release management, etc.?
* Does it have a documented process for adding committers?
* Does it have a documented governance model of any kind?
* Does it have committers from multiple organizations?
* Does it have a code of conduct?
* Does it have a license? Which one? Does it have a CLA or DCO? Are the licenses of its dependencies compatible with their usage and CNCF policies? CNCF staff will handle the full legal due diligence.
* What is the general quality of informal communication around the project (slack, github issues, PR reviews, technical blog posts, etc)?
* How much time does the core team commit to the project?
* How big is the team? Who funds them? Why? How much? For how long?
* Who are the clear leaders? Are there any areas lacking clear leadership? Testing? Release? Documentation? These roles sometimes go unfilled.
* Besides the core team, how active is the surrounding community? Bug reports? Assistance to newcomers? Blog posts etc.
* Do they make it easy to contribute to the project? If not, what are the main obstacles?
* Are there any especially difficult personalities to deal with? How is this done? Is it a problem?
* What is the rate of ongoing contributions to the project (typically in the form of merged commits).
## Users
* Who uses the project? Get a few in-depth references from 2-4 of them who actually know and understand it.
* What do real users consider to be its strengths and weaknesses? Any concrete examples of these?
* Perception vs Reality: Is there lots of buzz, but the software is flaky/untested/unused? Does it have a bad reputation for some flaw that has already been addressed?
## Context
* What is the origin and history of the project?
* Where does it fit in the market and technical ecosystem?
* Is it growing or shrinking in that space? Is that space growing or shrinking?
* How necessary is it? What do people who don't use this project do? Why exactly is that not adequate, and in what situations?
* Clearly compare and contrast with peers in this space. A summary matrix often helps. Beware of comparisons that are too superficial to be useful, or might have been manipulated so as to favor some projects over others. Most balanced comparisons will include both strengths and weaknesses, require significant detailed research, and usually there is no hands-down winner. Be suspicious if there appears to be one.


@@ -20,7 +20,7 @@ To enable the voting TOC members to cast an informed vote about a
project, it is crucial that each member is able to form their own
opinion as to whether and to what extent the project meets the agreed
upon [criteria](https://www.cncf.io/projects/graduation-criteria/) for
sandbox, incubation or graduation. As the leader of a DD, your job
inception, incubation or graduation. As the leader of a DD, your job
is to make sure that they have whatever information they need,
succinctly and readily available, to form that opinion.
@@ -96,7 +96,7 @@ The key high-level questions that the voting TOC members will be looking to have
* Do we believe this is a growing, thriving project with committed contributors?
* Is it aligned with CNCF's values and mission?
* Do we believe it could eventually meet the graduation criteria?
* Should it start at the sandbox level or incubation level?
* Should it start at the inception level or incubation level?
Some details that might inform the above include:


@@ -8,21 +8,25 @@ The key sections of the [charter](https://www.cncf.io/about/charter/) are:
>6(c)(i) The TOC shall select a Chair of the TOC to set agendas and call meetings of the TOC.
>6(e)(ii) Nominations: Each CNCF member may nominate up to two (2) technical representatives, (from vendors, end users or any other fields), at most one of which may be from their respective company. The nominee(s) must agree to participate prior to being added to the nomination list.
>6(e)(ii) Nominations: Each individual (entity or member) eligible to nominate a TOC member may nominate up to two (2) technical representatives, (from vendors, end users or any other fields), at most one of which may be from their respective company.
>6(f)(i) TOC Members shall serve two-year, staggered terms. The initial six elected TOC members from the Governing Board election shall serve an initial term of three (3) years. The TOC members initially elected by the End User TAB and TOC shall serve an initial term of two (2) years.
Current TOC [Members](https://github.com/cncf/toc#members) and their terms are:
* Jonathan Boulle (term: 3 years - start date: 1/29/2016) [GB appointed]
* Jonathan Boulle (term: 3 years - start date: 1/29/2016)
* Bryan Cantrill (term: 3 years - start date: 1/29/2016) [GB appointed]
* Bryan Cantrill (term: 3 years - start date: 1/29/2016)
* Camille Fournier (term: 3 years - start date: 1/29/2016) [GB appointed]
* Camille Fournier (term: 3 years - start date: 1/29/2016)
* Brian Grant (term: 2 years - start date: 3/17/2018) [TOC appointed]
* Brian Grant (term: 2 years - start date: 3/17/2016)
* Benjamin Hindman (term: 3 years - start date: 1/29/2016) [GB appointed]
* Benjamin Hindman (term: 3 years - start date: 1/29/2016)
* Quinton Hoole (term: 1 year - start date: 3/17/2018) [TOC appointed]
* Solomon Hykes (term: 2 years - start date: 3/17/2016)
* Sam Lambert (term: 16 months - start date: 10/2/2017) [enduser appointed]
* Sam Lambert (term: 16 months - start date: 10/2/2017)
* Ken Owens (term: 3 years - start date: 1/29/2016) [GB appointed]
* Ken Owens (term: 3 years - start date: 1/29/2016)
* Alexis Richardson (term: 3 years - start date: 1/29/2016) [GB appointed]
* Alexis Richardson (term: 3 years - start date: 1/29/2016)
The End User Community will shortly (September 2017) be electing a new TOC member to replace Elissa. That person's term would normally last through 3/10/2018. We will ask the End User Community to instead approve a 16 month term to align with GB-appointed TOC selections going forward. This End User TOC member will be reappointed or replaced on 1/29/2019.
The terms of the two TOC appointed seats, currently held by Brian and Solomon, end on 3/16/18. At the time they are reelected or replaced, we propose that the two appointed members will draw straws to determine which of them gets a 1-year term in just that cycle so that these two positions are staggered going forward. After they are selected, we propose that the TOC vote to select its chairperson, and do so every 2 years thereafter.
On 1/29/2019, the other 6 TOC positions are up for re-election by the GB. The charter requires that the initial appointments have been for 3 years (which they were), but to use staggered, 2-year terms going forward. We propose that half of the positions get a 1-year term in just that cycle (by drawing straws), so that each year afterwards, 3 of the 6 will be reappointed or replaced.
@@ -30,6 +34,8 @@ On 1/29/2019, the other 6 TOC positions are up for re-election by the GB. The ch
*All terms are two years unless otherwise specified. Selected means reappointed or replaced.*
* 10/1/2017: New End User TOC member is selected for a 16 month term.
* 3/17/2018: Both TOC-selected members are selected, one for a 1-year term.
* 3/17/2018 (and each future even year): The TOC selects its chairperson.
* 1/29/2019: 6 GB-selected TOC members are selected, half for 1-year terms.
* 1/29/2019 (and each future odd year): End User TOC member is selected.


@@ -1,16 +1,22 @@
== CNCF Graduation Criteria v1.1
== CNCF Graduation Criteria v1.0
Every CNCF project has an associated maturity level. Proposed CNCF projects should state their preferred maturity level. A two-thirds supermajority is required for a project to be accepted as incubating or graduated. If there is not a supermajority of votes to enter as a graduated project, then any graduated votes are recounted as votes to enter as an incubating project. If there is not a supermajority of votes to enter as an incubating project, then any graduated or incubating votes are recounted as sponsorship to enter as a sandbox project. If there is not enough sponsorship to enter as a sandbox stage project, the project is rejected. This voting process is called fallback voting.
Every CNCF project has an associated maturity level. Proposed CNCF projects should state their preferred maturity level. When a TOC vote is held on a proposed project entering CNCF, votes may either be for the project to enter as an inception, incubating, or graduated project, or not to enter at this time. A two-thirds supermajority is required for a project to be accepted. If there is not a supermajority of votes to enter as a graduated project, then any graduated votes are recounted as votes to enter as an incubating project. If there is not a supermajority of votes to enter as an incubating project, then any graduated or incubating votes are recounted as votes to enter as an inception project. If there is not a supermajority to enter as an inception stage project, the project is rejected. This voting process is called fallback voting.
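The fallback-voting recount described above is, in effect, a small tallying procedure. The Go sketch below is purely illustrative: it follows the older wording in which entry at the lowest stage is also decided by a two-thirds supermajority (rather than by sandbox sponsorship), and none of the identifiers correspond to real CNCF tooling.

```go
// fallbackvote.go — illustrative sketch of "fallback voting", assuming each
// voting member casts a single vote for the highest level they support.
package main

import "fmt"

// Level is the maturity level a vote supports.
type Level int

const (
	Reject     Level = iota // "not to enter at this time"
	Sandbox                 // lowest stage ("inception" in the older wording)
	Incubating
	Graduated
)

// supermajority reports whether yes votes reach a two-thirds threshold of all
// votes cast (read here as "at least 2/3"; the charter text does not spell
// out the rounding rule).
func supermajority(yes, total int) bool {
	return total > 0 && 3*yes >= 2*total
}

// fallbackTally applies the recount: test for a graduated supermajority first,
// then recount graduated votes as incubating votes, then recount both as
// support for the lowest stage; otherwise the project is rejected.
func fallbackTally(votes []Level) Level {
	counts := map[Level]int{}
	for _, v := range votes {
		counts[v]++
	}
	total := len(votes)
	switch {
	case supermajority(counts[Graduated], total):
		return Graduated
	case supermajority(counts[Graduated]+counts[Incubating], total):
		return Incubating
	case supermajority(counts[Graduated]+counts[Incubating]+counts[Sandbox], total):
		return Sandbox
	default:
		return Reject
	}
}

func main() {
	// 9 voting members: 5 graduated votes miss the 2/3 bar (6 of 9 needed),
	// but the recount of graduated+incubating votes (7 of 9) clears it.
	votes := []Level{Graduated, Graduated, Graduated, Graduated, Graduated,
		Incubating, Incubating, Sandbox, Reject}
	fmt.Println(fallbackTally(votes) == Incubating) // true
}
```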
Projects of all maturities have access to all resources listed at https://cncf.io/projects[https://cncf.io/projects] but if there is contention, more mature projects will generally have priority.
=== Sandbox Stage
=== Inception Stage
To be accepted in the sandbox a project must have at least 2 TOC sponsors. See the https://github.com/cncf/toc/blob/master/process/sandbox.md[CNCF Sandbox Guidelines v1.0] for the detailed process.
To be accepted to the inception stage, a project must:
* Add value to cloud native computing (i.e., containerization, orchestration, microservices, or some combination) and be aligned with the CNCF https://cncf.io/about/charter[charter].
* Have all code under an ASL 2.0 license, or another license explicitly approved by the Governing Board.
* Agree to transfer any relevant trademarks to CNCF and to assist in filing for any relevant unregistered ones. This means, for example, that Example, Inc. would need to call their microservices tool OpenExample (or similar) and support CNCF receiving a trademark for OpenExample, while Example could remain a trademark of Example, Inc. This assignment will be reversed if the project does not remain in the CNCF, as described below. Note that no patent or copyright assignment is necessary because the ASL 2.0 license provides sufficient protections for other developers and users.
* Every 12 months, each inception stage project will come to a vote with the TOC. A supermajority vote is required to renew a project at inception stage for another 12 months or move it to incubating or graduated stage. If there is not a supermajority for any of these options, using the fallback voting process defined above, the project is not renewed.
* In the case of an inception stage project that is not renewed with CNCF, the trademark will be returned to the project maintainers or an organization they designate.
=== Incubating Stage
To be accepted to incubating stage, a project must meet the sandbox stage requirements plus:
To be accepted to incubating stage, a project must meet the inception stage requirements plus:
* Document that it is being used successfully in production by at least three independent end users which, in the TOC's judgement, are of adequate quality and scope.
* Have a healthy number of committers. A committer is defined as someone with the commit bit; i.e., someone who can accept contributions to some or all of the project.
@@ -19,11 +25,11 @@ To be accepted to incubating stage, a project must meet the sandbox stage requir
=== Graduation Stage
To graduate from sandbox or incubating status, or for a new project to join as a graduated project, a project must meet the incubating stage criteria plus:
To graduate from inception or incubating status, or for a new project to join as a graduated project, a project must meet the incubating stage criteria plus:
* Have committers from at least two organizations.
* Have achieved and maintained a Core Infrastructure Initiative https://bestpractices.coreinfrastructure.org/[Best Practices Badge].
* Adopt the CNCF https://github.com/cncf/foundation/blob/master/code-of-conduct.md[Code of Conduct].
* Explicitly define a project governance and committer process. This preferably is laid out in a GOVERNANCE.md file and references an OWNERS.md file showing the current and emeritus committers.
* Have a public list of project adopters for at least the primary repo (e.g., ADOPTERS.md or logos on the project website).
* Receive a supermajority vote from the TOC to move to graduation stage. Projects can attempt to move directly from sandbox to graduation, if they can demonstrate sufficient maturity. Projects can remain in an incubating state indefinitely, but they are normally expected to graduate within two years.
* Receive a supermajority vote from the TOC to move to graduation stage. Projects can attempt to move directly from inception to graduation, if they can demonstrate sufficient maturity. Projects can remain in an incubating state indefinitely, but they are normally expected to graduate within two years.


@@ -1,4 +1,4 @@
*CNCF Project Proposal Process v1.2*
*CNCF Project Proposal Process v1.1*
. *Introduction*. This governance policy sets forth the proposal process for projects to be accepted into the Cloud Native Computing Foundation (“CNCF”). The process is the same for both existing projects which seek to move into the CNCF, and new projects to be formed within the CNCF.
. *Project Proposal Requirements*. Projects must be proposed via https://github.com/cncf/toc/tree/master/proposals[GitHub]. Project proposals submitted to the CNCF (see https://github.com/cncf/toc/blob/master/proposals/kubernetes.adoc[example]) must provide the following information to the best of your ability:
@@ -23,3 +23,4 @@
. *Project Acceptance Process*.
.. Projects are required to present their proposal at a TOC meeting
.. Projects get accepted via a 2/3 supermajority vote of the TOC
.. All projects start in the incubator TLP


@@ -1,116 +0,0 @@
== Cloud Native Buildpacks
*Name of project:* Cloud Native Buildpacks
*Description:*
Buildpacks are application build tools that provide a higher level of abstraction compared to Dockerfiles.
Conceived by Heroku in 2011, they establish a balance of control that reduces the operational burden on developers and supports operators who manage apps at scale.
Buildpacks ensure that apps meet security and compliance requirements without developer intervention.
They provide automated delivery of both OS-level and application-level dependency upgrades, efficiently handling day-2 app operations that are often difficult to manage with Dockerfiles.
Cloud Native Buildpacks aim to unify the buildpack ecosystems with a platform-to-buildpack contract that is well-defined and that incorporates learnings from maintaining production-grade buildpacks for years at both Pivotal and Heroku, the largest contributors to the buildpack ecosystem.
Cloud Native Buildpacks embrace modern container standards, such as the OCI image format.
They take advantage of the latest capabilities of these standards, such as remote image layer rebasing on Docker API v2 registries.
*Statement on alignment with CNCF mission:*
The Cloud Native Buildpacks project is well-aligned with the CNCF's mission statement of supporting cloud native systems.
The next generation of buildpacks will aid developers and operators in packaging applications into containers (1a), allow operators to efficiently manage the infrastructure necessary to keep application dependencies updated (1b), and be available via well-defined interfaces (1c).
The Cloud Native Buildpacks project is complementary to other CNCF projects like Helm, Harbor, and Kubernetes.
Cloud Native Buildpacks produce OCI images that can be managed by Helm, stored in Harbor, and deployed to Kubernetes.
Additionally, the project roadmap includes creating a Kubernetes CRD controller (or alternatively, adapting Knative's https://github.com/knative/build[Build CRD]) to enable cloud builds using buildpacks.
We agree with the CNCF's “no kingmakers” principle, and propose Cloud Native Buildpacks as an alternative to Dockerfiles for certain use cases, not as a one-size-fits-all solution for building cloud apps.
*Sponsors from TOC:* Brian Grant & Alexis Richardson
*Preferred maturity level:* Sandbox
*License:* Apache License v2.0
*Source control:* Github (https://github.com/buildpack)
*External Dependencies:*
* https://github.com/BurntSushi/toml[github.com/BurntSushi/toml] (MIT)
* https://github.com/docker/docker[github.com/docker/docker] (Apache-2.0)
* https://github.com/docker/go-connections[github.com/docker/go-connections] (Apache-2.0)
* https://github.com/golang/mock[github.com/golang/mock] (Apache-2.0)
* https://github.com/google/go-cmp[github.com/google/go-cmp] (NewBSD)
* https://github.com/google/go-containerregistry[github.com/google/go-containerregistry] (Apache-2.0)
* https://github.com/google/uuid[github.com/google/uuid] (NewBSD)
* https://github.com/nu7hatch/gouuid[github.com/nu7hatch/gouuid] (MIT)
* https://github.com/onsi/ginkgo[github.com/onsi/ginkgo] (MIT)
* https://github.com/onsi/gomega[github.com/onsi/gomega] (MIT)
* https://github.com/sclevine/spec[github.com/sclevine/spec] (Apache-2.0)
* https://github.com/spf13/cobra[github.com/spf13/cobra] (Apache-2.0)
* https://gopkg.in/yaml.v2[gopkg.in/yaml.v2] (Apache-2.0)
* https://code.cloudfoundry.org/buildpackapplifecycle[code.cloudfoundry.org/buildpackapplifecycle] (Apache-2.0)
* https://code.cloudfoundry.org/cli[code.cloudfoundry.org/cli] (Apache-2.0)
*Initial Committers:*
Founding Maintainers:
* Stephen Levine (Pivotal)
* Ben Hale (Pivotal)
* Terence Lee (Heroku)
* Joe Kutner (Heroku)
Additional Maintainers:
* Emily Casey (Pivotal)
* Jacques Chester (Pivotal)
* Dave Goddard (Pivotal)
* Anthony Emengo (Pivotal)
* Stephen Hiehn (Pivotal)
* Andreas Voellmer (Pivotal)
*Infrastructure requests (CI / CNCF Cluster):*
_Development needs:_
We currently use Travis for CI, but we may want to use CNCF resources to deploy Concourse CI.
Additionally, we will need access to all common Docker registry implementations for performance and compatibility testing.
This includes deploying Harbor to CNCF infrastructure as well as access to DockerHub, GCR, ACR, ECR, etc.
_Production needs:_
Additionally, we would like to use CNCF resources to host a buildpack registry containing buildpacks and buildpack dependencies.
*Communication Channels:*
* Slack: https://buildpacks.slack.com
* Mailing List: https://lists.cncf.io/g/cncf-buildpacks (proposed)
* Issue tracker: https://github.com/orgs/buildpack/projects
*Website:* https://buildpacks.io
*Release methodology and mechanics:*
Continuous release process made possible by reliable automated tests.
We plan to cut small releases whenever possible.
*Social media accounts:*
* Twitter: @buildpacks_io
*Existing sponsorship*: Pivotal and Heroku
*Community size:*
_Existing buildpacks:_
Cloud Foundry Buildpacks:
1,000+ stars, 4,000+ forks, 8 full-time engineers
Heroku Buildpacks:
5,500+ stars, 12,000+ forks, 5 full-time engineers
_Cloud Native Buildpacks project:_
New project with 10 active contributors from Pivotal and Heroku.

View file

@ -1,156 +0,0 @@
# CloudEvents
**Name of project**: CloudEvents
**Description**:
Last year the CNCF TOC created the Serverless Working Group to investigate
the Serverless landscape. The outputs of the WG included:
- a [whitepaper](https://github.com/cncf/wg-serverless#whitepaper) that:
- defines Serverless and its terminology
- describes common use cases for the technology
- compares it with other Cloud Native technologies and \*aaS environments
- describes the common architecture of Serverless platforms
- a [landscape document](https://docs.google.com/spreadsheets/d/10rSQ8rMhYDgf_ib3n6kfzwEuoE88qr0amUPRxKbwVCk/edit#gid=0)
that lists well-known open-source and proprietary Serverless platforms
and tools
- a set of recommended next steps for the WG, as part of the whitepaper:
- encourage more Serverless technology vendors and open source developers
to join the CNCF
- foster an open ecosystem by establishing interoperable APIs, in particular
around: Events, Deployments and Workflows
- provide additional education as needed
One of the recommendations, interoperability around Events, was agreed to
by the TOC, and the WG began to develop a new specification for how
Events transferred between an event producer and an event consumer
should be formalized. The purpose of this is to better enable
interoperability between these components, such that basic processing of
the events (such as routing) can be achieved without requiring
knowledge of the event's structure in advance and without understanding
the application-specific data of the event.
The work on this specification is currently being done within the
CNCF's Serverless Working Group, but with the release of our first
milestone (v0.1), it would make sense for this work to be more
formalized as a new sandbox project under the CNCF.
The goals/roadmap of the project include:
- moving the specification to v1.0. A baseline format for an Event
to enable broad adoption within the Cloud community, and in particular
for Serverless/FaaS implementations
- define protocol mappings for popular transports, such as HTTP
- define serialization mappings for popular formats, such as JSON
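To make the envelope idea above concrete, here is a minimal Go sketch of the kind of common event structure the specification describes. The attribute names follow the v0.1 milestone mentioned above but should be read as an illustration, not the normative schema; a router only needs the envelope fields to make decisions while the `data` payload stays opaque.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Event is an illustrative mapping of a CloudEvents-style envelope.
// Attribute names are based on the v0.1 milestone and are not normative.
type Event struct {
	CloudEventsVersion string          `json:"cloudEventsVersion"`
	EventType          string          `json:"eventType"`
	EventID            string          `json:"eventID"`
	Source             string          `json:"source"`
	EventTime          time.Time       `json:"eventTime"`
	ContentType        string          `json:"contentType,omitempty"`
	Data               json.RawMessage `json:"data,omitempty"` // application-specific payload, left opaque
}

func main() {
	e := Event{
		CloudEventsVersion: "0.1",
		EventType:          "com.example.object.created",
		EventID:            "A234-1234-1234",
		Source:             "/mycontext",
		EventTime:          time.Now().UTC(),
		ContentType:        "application/json",
		Data:               json.RawMessage(`{"name":"example"}`),
	}
	out, _ := json.MarshalIndent(e, "", "  ")
	fmt.Println(string(out))
}
```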
**Statement on alignment with CNCF mission**:
Being born out of the CNCF's Serverless Working Group, the CloudEvents
project (and its members) share the CNCF's goals of promoting Cloud Native
technologies, and offering choice to our consumers through an open
interoperability specification, as shown by the significant participation
from key industry companies.
We believe that the CNCF provides the proper home for this due to its
commitment to the promotion and development of open, vendor-neutral projects.
Additionally, the wide breadth of the CNCF members will provide the feedback
necessary to ensure the CloudEvents specification isn't too limited in its
scope and appeals to as many constituents of the cloud native community
as possible.
**Sponsor / Advisor from TOC**:
- Ken Owens <ken.owens @ mastercard.com>
- Brian Grant <briangrant @ google.com>
**Preferred maturity level**: Sandbox
**License**: Apache License v2.0
**Source control repositories**:
CloudEvents org: https://github.com/cloudevents
CloudEvents repo for the specification: https://github.com/cloudevents/spec
**External dependencies**: None
**Initial Maintainers**:
The CloudEvents group does not have "maintainers" that approve
Pull Requests (PRs) like traditional GitHub projects. Rather, the group
discusses/reviews PRs in the PRs themselves and then when consensus is reached
they are approved during our weekly calls. If consensus cannot be reached
then a formal vote is taken.
Voting rights: each member company designates a "primary" and "alternate"
member whose attendance at the weekly calls is tracked. Any member company
that attends three out of the last four meetings (current meeting not included)
has voting rights.
We also have this
[GOVERNANCE](https://github.com/cloudevents/spec/blob/master/GOVERNANCE.md)
doc which explains the processes we follow.
**Infrastructure Requests**: None
**Communication Channels**:
Mailing list: CloudEvents uses the CNCF Serverless WG mailing list:
https://groups.google.com/forum/#!forum/cncf-wg-serverless but we may
move to our own dedicated mailing list when/if the Serverless WG starts
a second project.
Slack: There is a #cloudevents Slack channel under CNCF's Slack workspace.
We have weekly zoom calls (9am PT on Thursdays):
https://zoom.us/my/cncfserverlesswg
**Issue tracker**:
Issues are tracked with GitHub Issues: https://github.com/cloudevents/spec/issues
Changes are tracked with GitHub PRs: https://github.com/cloudevents/spec/pulls
**Website**:
CloudEvents has its own website at: https://cloudevents.io
**Release Methodology and Mechanics**
CloudEvents has a set of milestones defined in its
[roadmap](https://github.com/cloudevents/spec/blob/master/roadmap.md)
document. Beyond what is defined there, the group will decide when
significant progress has been made to warrant a new release.
**Social Media Accounts**:
Twitter: @CloudEventsDemo
**Contributor statistics**:
Attendance is tracked [here](https://docs.google.com/spreadsheets/d/1bw5s9sC2ggYyAiGJHEk7xm-q2KG6jyrfBy69ifkdmt0/edit?pli=1#gid=0).
As can be seen in that document, CloudEvents weekly calls have regular
attendance from most major cloud vendors, averaging nearly 30 people
each week.
Without implying endorsement, the following companies have attended at least
one meeting:
Accenture, Alibaba, Amazon, Bitnami/Kubeless, Cisco, Clay, CNCF,
Collinson Group, Cuemby, Google, Honeycomb.io, Huawei, IBM, iguazio,
infraCloud, Intel, JP Morgan, JS Foundation, Mastercard, Microsoft, NAIC,
Nordstrom, OpenFaaS, Oracle, Particular Software, Pivotal, Progress, Red Hat,
RX-M, SAP, Serverless, Singlepoint, Solar Winds, solo.io, Splunk, VMWare
And the following have voting rights (today), which means they regularly
attend the weekly calls:
Alibaba, CNCF, Google, Huawei, IBM, iguazio, Intel, JS Foundation, Microsoft,
NAIC, Nordstrom, Oracle, Red Hat, SAP, Serverless, VMWare
In terms of adoption, the following companies participated in the KubeCon
EU CloudEvents demo:
Alibaba, Google, Huawei, IBM, iguazio, Microsoft, Oracle, Red Hat, SAP,
Serverless, VMWare
Azure recently announced official support for CloudEvents in their
[Event Grid](https://docs.microsoft.com/en-us/azure/event-grid/cloudevents-schema),
and Serverless announced support for it in their
[Event Gateway](https://serverless.com/learn/event-gateway/).

View file

@ -92,7 +92,6 @@ CoreDNS can be thought of as a DNS protocol head that can be configured to front
*Comparison with KubeDNS*:
The incumbent DNS service for Kubernetes, “kubedns”, consists of three components:
* kube-dns, which uses SkyDNS as a library to provide the DNS service based on the Kubernetes API
* dnsmasq, which acts as a caching server in front of kube-dns
* sidecar, which provides metrics and health-check status.

View file

@ -1,99 +0,0 @@
== Cortex
*Name of project:* Cortex
*Description:*
Cortex is a horizontally scalable, highly available, and multitenant SaaS service that is compatible with Prometheus and offers a long-term storage solution.
It is aimed at teams looking for a Prometheus solution that offers the following over vanilla Prometheus:
* Long-term metrics storage in a variety of cloud based and on-prem NoSQL data stores
* Tenancy model supporting commercial SaaS offerings or large/multiple Kubernetes installations requiring data separation
* On-demand Prometheus instance provisioning
* A highly-available architecture that benefits from cloud-native architectures run with Kubernetes
* A highly scalable Prometheus experience that scales out, not up
* The ability to handle large metric topologies in a single instance without the need for federation
Cortex was presented at the https://docs.google.com/presentation/d/190oIFgujktVYxWZLhLYN4q8p9dtQYoe4sxHgn4deBSI/edit#slide=id.g25ca91f87f_0_0[CNCF TOC meeting on 6/5/2018]
*Statement on alignment with CNCF mission:*
Cortex fully supports the CNCF's goal for scalability, "Ability to support all scales of deployment, from small developer centric environments to the scale of enterprises and service providers."
There are many different ways to provide a scalable and available metric system for Kubernetes. Cortex, with its tenancy model combined with both its high-availability and horizontally scalable architecture, serves this goal directly.
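As a rough illustration of how the tenancy model surfaces to clients, the sketch below queries a Cortex endpoint through the standard Prometheus HTTP query API and selects a tenant with an `X-Scope-OrgID` header. The header name, base URL, and tenant ID are assumptions drawn from common Cortex deployments rather than anything specified in this proposal.

[source,go]
----
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

// query issues an instant query against a Prometheus-compatible endpoint.
// /api/v1/query is the standard Prometheus HTTP API that Cortex exposes;
// the X-Scope-OrgID header used here to pick a tenant is an assumption
// based on common Cortex deployments.
func query(base, tenant, promQL string) (string, error) {
	u := base + "/api/v1/query?query=" + url.QueryEscape(promQL)
	req, err := http.NewRequest(http.MethodGet, u, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("X-Scope-OrgID", tenant) // tenant separation
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	out, err := query("http://cortex.example.com", "team-a", "up")
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	fmt.Println(out)
}
----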
*Sponsor / Advisor from TOC:* Bryan Cantrill and Ken Owens
*Unique identifier:* cortex
*Preferred maturity level:* sandbox
The CNCF sandbox was designed for just this kind of project. Specifically, the Cortex community is looking for the following from being in the sandbox:
* Encourage public visibility of experiments or other early work that can add value to the CNCF mission
* Visibility for new projects designed to extend one or more CNCF projects with functionality
* The Sandbox should provide a beneficial, neutral home for such projects, in order to foster collaborative development.
*License:* Apache License 2.0
*Source control repositories:* https://github.com/weaveworks/cortex
*External Dependencies:*
Cortex depends on the following external software components:
* Prometheus (Apache Software License 2.0)
* Kubernetes (Apache Software License 2.0)
* Jaeger Tracing (Apache Software License 2.0)
* OpenTracing (Apache Software License 2.0)
* GRPC (Apache Software License 2.0)
* Weaveworks Mesh (Apache Software License 2.0)
* Golang (Apache Software License 2.0)
*Initial Committers (leads):*
Julius Volz (Independent)
Tom Wilkie (Grafana Labs)
*Infrastructure requests (CI / CNCF Cluster):*
None
*Communication Channels:*
* Slack: https://weave-community.slack.com/
* Mailing List: https://groups.google.com/forum/#!forum/cortex-monitoring
* Community Meeting Doc: https://docs.google.com/document/d/1mYvY4HMVGmetYHupi5z2BnwT1K8PiO64ZcxuX5c6ssc/edit#heading=h.ou5xp51fcp6v
*Issue tracker:* https://github.com/weaveworks/cortex/issues
*Website:* https://github.com/weaveworks/cortex
*Release methodology and mechanics:* Most folks run HEAD in production.
*Social media accounts:* None
*Existing sponsorship:* WeaveWorks
*Community size:*
* 500+ stars
* 60+ forks
*Production usage*:
Cortex is being actively used in production by the following:
* Electronic Arts https://www.ea.com/
* FreshTracks.io https://freshtracks.io/
* Grafana Labs https://grafana.com/
* OpenEBS https://www.openebs.io/
* WeaveWorks https://weave.works/

View file

@ -1,119 +0,0 @@
=== Dragonfly CNCF Sandbox Project Proposal
*Name of Project:* Dragonfly
*Description:*
Dragonfly is an intelligent P2P based image and file distribution system. It aims to resolve three major issues: efficiency, flow control and security.
It is a general tool that can be integrated with container engines to help deploy cloud native applications at scale. In addition, users can deploy Dragonfly easily on Kubernetes via Helm and a DaemonSet.
Dragonfly ensures efficient image distribution with a P2P policy that avoids duplicated image downloads. To avoid impacting other running applications, Dragonfly implements flow control for image distribution, such as download bandwidth limits and disk I/O protection. Dragonfly also uses encryption for image transmission in order to meet enterprise security requirements. Here are some key features of Dragonfly:
* P2P based file distribution
* Support a wide range of container technologies
* Host level speed limit
* Passive CDN for downloads
* Strong consistency of distributed image
* Disk protection and high efficient IO
* High performance
* Exception auto isolation
* Effective concurrency control of Registry Auth
* Image encryption during transmission
Dragonfly consists of three major components:
1. **SuperNode**: provides image cache services from the source image registry and chooses an appropriate downloading policy for each peer.
1. **dfget**: a client that downloads files from the P2P network (peer nodes and the SuperNode); it receives control orders from the SuperNode and transfers data within the P2P network.
1. **dfdaemon**: an agent that proxies image pull requests from the local container engine; it filters out layer fetching requests and uses dfget to download all those layers.
**Statement on alignment with CNCF mission:**
The Cloud Native Dragonfly project is well-aligned with the CNCF's mission statement of supporting cloud native systems. Once developers and operators have packaged applications in container images, Dragonfly aims to tackle the problem of distributing those packaged images (1a). Dragonfly's intelligent distribution can dynamically manage network bandwidth, disk I/O and other resources efficiently to reduce maintenance and operation costs (1b). Dragonfly is decoupled from its dependencies and designed to consist of explicit and minimal services (1c).
The Cloud Native Dragonfly project is complementary to other CNCF projects, such as Kubernetes, Helm, Harbor and containerd. Dragonfly's SuperNode can be deployed via Helm, and the dfget and dfdaemon agents can be deployed via a Kubernetes DaemonSet. When releasing a cloud native application in Kubernetes, Harbor takes advantage of Dragonfly's open API to control image preheating. At pod startup, containerd sends the image pull request to Dragonfly, which takes over image distribution automatically, efficiently and safely.
*Roadmap:*
Dragonfly intends to deliver more essential and advanced features in ecosystem openness, scalability and security. For more details, please refer to https://github.com/alibaba/Dragonfly/blob/master/ROADMAP.md[ROADMAP].
*Sponsors from TOC:* Jonathan Boulle & Benjamin Hindman
*Preferred maturity level:* Sandbox
*License:* Apache License v2.0
*Source control:* GitHub (https://github.com/alibaba/dragonfly)
*External Dependencies:*
External dependencies of Dragonfly are listed below:
|===
|*Software*|*License*|*Project Page*
|go-check|BSD|https://github.com/go-check/check/[https://github.com/go-check/check/]
|compress|BSD|https://github.com/klauspost/compress[https://github.com/klauspost/compress]
|cpuid|MIT|https://github.com/klauspost/cpuid[https://github.com/klauspost/cpuid]
|uuid|BSD|https://github.com/pborman/uuid[https://github.com/pborman/uuid]
|logrus|MIT|https://github.com/sirupsen/logrus[https://github.com/sirupsen/logrus]
|pflag|BSD|https://github.com/spf13/pflag[https://github.com/spf13/pflag]
|bytebufferpool|MIT|https://github.com/valyala/bytebufferpool[https://github.com/valyala/bytebufferpool]
|fasthttp|MIT|https://github.com/valyala/fasthttp[https://github.com/valyala/fasthttp]
|terminal|BSD|https://golang.org/x/crypto/ssh/terminal[https://golang.org/x/crypto/ssh/terminal]
|unix|MIT|https://golang.org/x/sys/unix[https://golang.org/x/sys/unix]
|windows|zlib|https://golang.org/x/sys/windows[https://golang.org/x/sys/windows]
|gcfg|BSD|https://gopkg.in/gcfg.v1[https://gopkg.in/gcfg.v1]
|yaml|Apache License 2.0|https://gopkg.in/yaml.v2[https://gopkg.in/yaml.v2]
|===
*Initial Committers:*
Founding Maintainers:
* Allen Sun (Alibaba)
* Chaobing Chen (Meitu)
* Jian Wang (Alibaba)
* Jin Zhang (Alibaba)
* Zuozheng Hu (Alibaba)
Additional Maintainers:
* Haibing Zhou (Ebay China)
*Infrastructure requests (CI / CNCF Cluster):*
_Development needs:_
We currently use Travis and CircleCI for CI, but we may want to use CNCF resources to deploy Jenkins for node e2e tests.
_Production needs:_
none
*Communication Channels:*
* Gitter: https://gitter.im/alibaba/Dragonfly
* Mailing List: https://lists.cncf.io/g/cncf-dragonfly (proposed)
* Issue tracker: https://github.com/alibaba/Dragonfly/issues
*Website:* https://alibaba.github.io/Dragonfly/
*Release methodology and mechanics:*
Dragonfly follows SemVer, with version numbers of the form MAJOR.MINOR.PATCH. Currently we do feature releases 4-5 times per year (all as minor releases). Before every minor release, we plan to tag several RC releases and invite community developers to fully test them. In addition, all code commits to the Dragonfly project must add the tests needed to cover the feature or code change.
*Social media accounts:*
* Twitter: https://twitter.com/dragonfly_oss[@dragonfly_oss]
*Existing sponsorship*: Alibaba, AntFinancial and China Mobile
*Community size:*
2300+ stars
3 full-time engineers
16 contributors

View file

@ -1,219 +0,0 @@
=== Falco CNCF Sandbox Project Proposal
*Name of Project:* Falco
*Description:*
Highly distributed and dynamic architectural patterns such as microservices are proving that traditional models of application and network security alone do not meet today's needs. Additionally, the increasing level of regulation being introduced (the General Data Protection Regulation, or GDPR, for instance) to any business with a digital presence makes security more important than ever. Organizations must quickly respond to exploits and breaches to minimize the financial penalties introduced by such regulation, yet the dynamic nature of modern Cloud Native architectures makes it extremely difficult for organizations to keep pace.
Falco seeks to solve this problem by shortening the security incident detection and response cycle in microservices architectures. Falco provides runtime security for systems running container workloads to detect behavior that is defined as abnormal. Falco can be broken into three areas:
*Event & Metadata Providers* - inputs of events to the rules engine.
* Sysdig Kernel Module - provides a stream of system call events for Linux based systems.
* Kubernetes API Server - provides metadata for Kubernetes resources such as Namespace, Deployment, Replication Controllers, Pods, and Services.
* Marathon - provides metadata for Marathon resources.
* Mesos - provides metadata for Mesos resources.
* Docker - provides metadata for containers running under the Docker container runtime.
*Rules Engine & Condition Syntax* - Falco implements a rules engine that supports the following rule syntax.
* https://github.com/draios/falco/wiki/Falco-Rules#conditions[Sysdig Filter Syntax] - Falco supports the Sysdig filter syntax used for filtering system call events from the Sysdig kernel module. This syntax also supports filtering on metadata from sources such as container runtimes, Kubernetes, Mesos, and Marathon.
*Notification Outputs* - Falco's rules engine will send alerts when rule conditions are met. The following output destinations are currently supported.
* Stdout, Log file, Syslog - These can be aggregated using Fluentd or similar
* Command Execution - Falco can execute a command, passing the alert in via stdin
For example, by leveraging the Sysdig kernel module's capability of tapping into system calls from the Linux kernel, rules can be written to detect behavior seen as abnormal. Through the system calls, Falco can detect events such as:
* A Kubernetes Pod running in a Deployment labeled node-frontend begins running processes other than node.
* A shell is run inside a container
* A container is running in privileged mode, or is mounting a sensitive path like /proc from the host.
* A server process spawns a child process of an unexpected type
* Unexpected read of a sensitive file (like /etc/shadow)
* A non-device file is written to /dev
* A standard system binary (like ls) makes an outbound network connection
When a rule condition is met, Falco can either log an alert to a file, syslog, stdout, etc, or trigger an external program. This allows an automated system to respond to compromised containers or container hosts. This automated system could stop or kill containers identified as compromised, or mark container hosts as tainted to prevent workloads from being scheduled on the compromised host.
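As a sketch of the command-execution output path described above, where Falco executes a program and passes the alert in via stdin, the Go program below could be wired up as such a responder: it reads a single alert from stdin and decides what to do with it. The JSON field names are assumptions for illustration only and are not Falco's documented alert schema.

[source,go]
----
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"os"
)

// Alert is an assumed, minimal shape for an alert passed on stdin when
// Falco is configured to execute a program as its output; the field names
// are illustrative, not Falco's documented schema.
type Alert struct {
	Rule     string `json:"rule"`
	Priority string `json:"priority"`
	Output   string `json:"output"`
}

func main() {
	raw, err := ioutil.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read alert:", err)
		os.Exit(1)
	}
	var a Alert
	if err := json.Unmarshal(raw, &a); err != nil {
		fmt.Fprintln(os.Stderr, "parse alert:", err)
		os.Exit(1)
	}
	// An automated responder could map rules to actions here, e.g. asking
	// the orchestrator to kill the offending pod or taint the host.
	fmt.Printf("would respond to rule %q (priority %s)\n", a.Rule, a.Priority)
}
----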
*Value to the Cloud Native Operating Model*
As Cloud Native starts to become the de facto operating model for many organizations, the security of this model is often the first thing many organizations seek to address. The Cloud Native model seeks to empower developers to rapidly package applications and services in containers, then quickly deploy them to platforms such as Kubernetes. This model seeks to remove the traditional points of friction in operations by providing a consistent deployment paradigm and abstraction of the underlying infrastructure. The challenge for many organizations is that applications packaged as containers are often a black box to downstream teams in terms of 1) what is packaged inside the container, and 2) what operations any processes might perform once the application is running.
Currently there are several prescribed methods for building security into the Cloud Native workflow:
* *Image Chain of Trust*
** Scan images as part of a deployment process, such as GitOps, to verify their contents and check for known vulnerabilities (for example Anchore or Clair).
** Cryptographically sign images and restrict container runtimes to only run trusted images. (eg Notary)
** Restrict which container registries images can be pulled from.
* *Admittance Control*
** Cryptographically verifiable identities to restrict/allow workloads to run based on a defined policy (eg SPIFFE).
** Leveraging Service Meshes to control what workloads can join a particular service.
* *Orchestrator/Infra Security*
** Role Based Access Control to restrict access to the orchestrator API services.
** General best practices for securing the orchestrator entry points.
** Network Policy API and CNI Plugins
** Linux Security Module support.
** PodSecurity Policies
* *Runtime Security*
** Detect abnormal behavior inside a workload and take appropriate action, such as telling the orchestrator to kill the workload, thus shortening the security “detect-response” cycle. (eg Falco)
* *Workload Access Control Policies*
** Policies controlling the network activity of workloads and restricting inter-workload communication.
** Policies controlling the API endpoints available to workloads (eg Cilium)
Each prescribed method provides an additional level of protection, but one method by itself does not provide a complete security solution. Image Chain of Trust for instance is a “point in time” method of providing security. In other words, the container image is considered “secure” when the image scanning process completes successfully, but anytime after that it may become “insecure” once new exploits or vulnerabilities are discovered.
Additionally, while container images are considered immutable when built, once a container is created from the image, the process inside the container can modify the container's instantiation of the root filesystem. Some best practices suggest starting containers with a read-only root filesystem to prevent this, but this method has its own problems. For instance, the “standard” Node.js image needs to write to the root filesystem to create a number of files (lock files, for instance) when node starts. Runtime Security seeks to mitigate this problem by watching what changes may be made once a container is running, and taking action on abnormal behavior.
Currently, most of the options for runtime security are proprietary solutions, which limits the ability to take advantage of the larger open source software ecosystem. Falco is unique in that its open approach allows a broader community to define and share rule sets for common security exploits. This open approach also provides the opportunity for a faster response time to newly discovered exploits by making it possible to share new rules for those exploits as they are discovered.
*Falco Roadmap*
Short term improvements include:
* *Rules Library* - Expand the shipped rule set to include rules for commonly deployed applications and CNCF Projects, as well as common compliance rules such as CIS.
** Container Images/Apps: Nginx, HAProxy, etcd, Java, Node
** CNCF Projects: Kubernetes, Prometheus, Fluentd, Linkerd
** CIS Runtime Compliance Rules
Longer term improvements include:
* *Prometheus Metrics Exporter* - Expose a metrics endpoint to allow collection of metrics by Prometheus. Metrics include # of overall alerts, # of alerts by rule, # of alerts by rule tag.
* *Kubernetes networking policy support* - Support detecting networking policy violations via the Sysdig kernel module
* *Alert Output* - Add support for additional output destinations to allow Falco to more easily be integrated into a Cloud Native architecture.
** *Direct webhook support* - Support posting to a generic webhook +
** *Messaging systems* - Support sending messages to a messaging server such as NATS +
** *gRPC* - Support sending alerts to external systems via gRPC
* *Event & Metadata Providers* - Support for additional backend providers for the event stream.
* *Kubernetes Audit Events* - Ingest Kubernetes Audit Events and support rules based on Kubernetes Audit Events. +
* *Container Runtimes* - Support additional container runtimes.
* *Baselining* - Automatic baselining of an application's “normal” behavior
*Planned Advocacy Work*
Beyond the engineering work planned, there is also work planned to improve the awareness of Falco in the Cloud Native ecosystem.
* *Workshops on Falco:* As the project's main sponsor, Sysdig has been investing in workshops focused on Container Troubleshooting and Container Forensics that include sections on Falco and CNCF projects such as Kubernetes. These workshops will be expanded to include more exercises on writing rules for applications, a testing workflow for rule writing, and incorporation of Falco in CD workflows such as GitOps.
* *Documentation Improvements*: Improve documentation with regard to writing rules including out of the box macros, lists, and rules provided by Falco.
* *Documenting Use Cases:* Document existing use cases around using Falco with other projects to deliver a complete end to end solution.
* *Events:* Conference and Meetup presentations to help educate the community on security in the Cloud Native landscape, and to help new community members learn how to implement Cloud Native based architectures in a secure fashion.
*Current CNCF Ecosystem Integrations:*
*Containerd and rkt*
Falco can detect containers running in both containerd and rkt container runtimes.
*Kubernetes*
Falco can communicate with the Kubernetes API to pull Namespace, Deployment, Service, ReplicaSet, Pod, and Replication Controller information such as names and labels. This data can be used to create rule conditions (e.g. k8s.ns.name = mynamespace) as well as output fields in any generated alerts.
A common deployment method for Falco in the Cloud Native landscape is to deploy it as a Daemon Set running in Kubernetes. The Falco project provides releases packaged as containers and provides a Daemon Set example for end users to deploy Falco.
Docker Hub: https://hub.docker.com/r/sysdig/falco/[https://hub.docker.com/r/sysdig/falco/]
Kubernetes Daemon Set: https://github.com/draios/falco/tree/dev/integrations/k8s-using-daemonset[https://github.com/draios/falco/tree/dev/integrations/k8s-using-daemonset]
Helm chart: https://github.com/helm/charts/tree/master/stable/falco[https://github.com/helm/charts/tree/master/stable/falco]
*Fluentd*
Falco can also leverage Fluentd from the CNCF ecosystem. Falco alerts can be collected from logs or stdout by Fluentd and the alerts can be aggregated and analyzed. An example of using Falco with Fluentd, Elasticsearch, and Kibana can be found on the Sysdig Blog.
https://sysdig.com/blog/kubernetes-security-logging-fluentd-falco/[https://sysdig.com/blog/kubernetes-security-logging-fluentd-falco/]
*NATS*
A https://github.com/sysdiglabs/falco-nats[proof of concept] was created showing publishing of Falco alerts to a NATS messaging server. These alerts can be subscribed to by various programs to process and take action on alerts. In the proof of concept, Falco alerts published to NATS triggered a Kubeless function to delete an offending Pod.
*Sponsors from TOC:* Quinton Hoole, Brian Grant
*Preferred maturity level:* Sandbox
*Unique identifier:* falco
*Current Project Sponsor:* https://sysdig.com/opensource/[Sysdig]
*License:* Apache License v2 (ALv2)
*Code Repositories:*
Code is currently hosted by Sysdig:
https://github.com/draios/falco[https://github.com/draios/falco]
The code will move to a vendor-neutral GitHub organization at:
https://github.com/falcosecurity[https://github.com/falcosecurity]
*External Code Dependencies* +
External dependencies of Falco are listed below:
|===
|*Software*|*License*|*Project Page*
|libb64|Creative Commons|http://libb64.sourceforge.net/[http://libb64.sourceforge.net/]
|curl|MIT/X|https://curl.haxx.se/[https://curl.haxx.se/]
|jq|MIT|https://stedolan.github.io/jq/[https://stedolan.github.io/jq/]
|libyaml|MIT|https://pyyaml.org/wiki/LibYAML[https://pyyaml.org/wiki/LibYAML]
|lpeg|MIT|http://www.inf.puc-rio.br/\~roberto/lpeg/[http://www.inf.puc-rio.br/~roberto/lpeg/]
|luajit|MIT|http://luajit.org/luajit.html[http://luajit.org/luajit.html]
|lyaml|MIT|https://github.com/gvvaughan/lyaml[https://github.com/gvvaughan/lyaml]
|ncurses|MIT?|https://www.gnu.org/software/ncurses/[https://www.gnu.org/software/ncurses/]
|openssl|OpenSSL & SSLeay|https://www.openssl.org/source[https://www.openssl.org/source]
|yamlcpp|MIT|https://github.com/jbeder/yaml-cpp[https://github.com/jbeder/yaml-cpp]
|zlib|zlib|https://www.zlib.net/zlib.html[https://www.zlib.net/zlib.html]
|sysdig|ALv2|https://github.com/draios/sysdig[https://github.com/draios/sysdig]
|tbb|ALv2|https://www.threadingbuildingblocks.org/[https://www.threadingbuildingblocks.org/]
|===
*Committers:* 16
*Users of Note:*
Cloud.gov:
* https://cloud.gov/docs/apps/experimental/behavior-monitoring/[Dynamic behavior monitoring in Cloud.gov]
* https://www.youtube.com/watch?v=wFQOXMcZnQg[Detecting tainted apps in Cloud Foundry]
* https://github.com/cloudfoundry-community/falco-boshrelease[falco-boshrelease]
*Community Communication:*
Slack is the preferred form of communication. Sysdig runs a Slack team for its open source projects and hosts a #falco channel under that Slack team:
Slack team: https://sysdig.slack.com[https://sysdig.slack.com] +
Falco Channel: https://sysdig.slack.com/messages/C19S3J21F/[https://sysdig.slack.com/messages/C19S3J21F/]
*Website/Blog:*
The website is currently hosted by Sysdig, under the Open Source section of the website: https://sysdig.com/opensource/falco[https://sysdig.com/opensource/falco]
Blog posts related to Falco are currently posted to the Sysdig Blog. https://sysdig.com/blog/tag/falco/[https://sysdig.com/blog/tag/falco/]
The Falco website and blog will be moved to: https://falco.org[https://falco.org]
*Release Cadence:*
Minor releases quarterly, Patch releases as frequently as needed (Minor and Patch used as defined by https://semver.org/[semantic versioning].)
*Statement on alignment with CNCF mission:*
With the number of systems under management increasing at an ever-greater rate, and regulation becoming more common, new approaches to security are required that allow organizations to automatically manage the “detection & response” security cycle. Innovations in Cloud Native technologies make this automated approach to security more and more feasible.
Falco aligns with the CNCF mission statement by:
* Focusing on containers first: Falco was built with the assumption that containers are the method in which modern applications would be run. Falco has included since its inception the ability to identify containerized processes and apply rules to these processes.
* Enabling the CNCF ecosystem by including Cloud Native best practices: The https://github.com/draios/falco/blob/dev/rules/falco_rules.yaml[default Falco rule set] focuses on container anti-patterns, or rather common mistakes that new users tend to make when deploying a Cloud Native application in containers. While these rules currently focus on containers and container runtimes, additional rule sets can be written for CNCF projects and application runtimes in the CNCF Landscape. This work is on the Falco roadmap, and could easily be done by the broader CNCF community.
* Falco's goal is to provide a modular, composable system that allows easy integration with other CNCF projects or open source projects. This composability allows operators of Cloud Native platforms to easily build systems to manage the security of the platform, while maintaining a high degree of flexibility and preserving Cloud Native developer velocity.

View file

@ -1,134 +0,0 @@
== Harbor Proposal
*Name of project:* Harbor
*Description:* Harbor is an open source cloud native registry that provides trust, compliance, performance, and interoperability. As a private on-premises registry, Harbor fills a gap for organizations that prefer not to use a public or cloud-based registry or want a consistent experience across clouds.
=== Why does CNCF need a container registry?
The CNCF has an impressive portfolio of projects that can be leveraged to build and run complex distributed systems; a gap, however, exists without a secure container registry. In particular, no other open source container registry offers the feature set present in Harbor.
Harbor's features and community are a natural fit for the CNCF. A donation would ensure a vendor-neutral home for the project, increase community involvement and feature velocity, and create tighter alignment between Harbor and other CNCF projects.
=== Harbor Overview
Harbor is an open source cloud native registry that solves common problems in organizations building cloud native applications by delivering trust, compliance, performance, and interoperability. As a private on-premises registry, Harbor fills a gap for organizations that prefer not to use a public or cloud-based registry or want a consistent experience across clouds.
==== Features
The mission of Harbor is to provide users in cloud native environments the ability to confidently manage and securely serve container images. To do so, Harbor stores, signs, and scans content. Here are some of the key features of Harbor:
* Multi-tenant content signing and validation
* Security and vulnerability analysis
* Audit logging
* Identity integration and role-based access control
* Image replication between instances
* Extensible API and graphical UI
* Internationalization (currently English and Chinese)
https://blogs.vmware.com/cloudnative/2018/06/14/harbor-delivers-a-trusted-cloud-native-registry/[Click here] to learn more about Harbor's features.
=== Project Timeline and Snapshot
* In June 2014, Harbor started as a project within VMware's China R&D organization, where it was leveraged for a handful of internal projects to manage container images. To allow more developers in the community to use and contribute to the project, VMware open sourced Harbor in March of 2016 and it has steadily gained traction since.
* Harbor has been integrated into two commercial VMware products, vSphere Integrated Containers (VIC) and Pivotal Container Services (PKS).
* Many companies include Harbor in their own cloud native solutions, including Chinese CNCF member startups Caicloud and Dataman.
* In April 2018, Harbor passed 4,000 stars on GitHub and currently has 59 community contributors worldwide, 30 of whom have made non-trivial contributions to the project.
== Production Users
Harbor currently has production https://github.com/vmware/harbor/blob/master/partners.md[users], including:
* Trend Micro
* OnStar in China
* Caicloud
* CloudChef
* Rancher
A number of CNCF member companies, such as JD.com, China Mobile, Caicloud, Dataman, and Tenxcloud are also users of Harbor.
== In-Flight Features
The Harbor team is currently working on improving Harbor, including:
* Native support of Helm
* Highly-available deployments
* Image caching and proxying
* Label-related feature improvements
* Quotas
The direction of the project has been generally guided by our open source community and users. There are a plethora of GitHub issues requesting various features that we prioritize based on popularity of user requests and engineering capacity. Our community has been involved in the addition of several new important features, including the creation of a Helm chart for Harbor.
A roadmap for future features, including those listed above, can be found GitHub: https://github.com/vmware/harbor/labels/Epic. The project welcomes contributions of any kind: code, documentation, bug reporting via issues, and project management to help track and prioritize workstreams.
== Use Cases
The following is a list of common use-cases for Harbor users:
* *On-prem container registry*: organizations with the desire to host sensitive production images on-premises can do so with Harbor
* *Vulnerability scanning*: organizations can scan images before they are used in production. Images with failed vulnerability scans can be blocked from being pulled
* *Image signing*: images can be signed via Notary to ensure provenance
* *Role-based Access Control*: integration with LDAP (and AD) to provide user- and group-level permissions
* *Image replication*: production images can be replicated to disparate Harbor nodes, providing disaster recovery, load balancing and the ability for organizations to replicate images to different geos to provide more expedient image pulls
== CNCF Donation Details
* *Preferred Maturity Level:* Sandbox or Incubation
* *Sponsors:* Quinton Hoole and Ken Owens
* *License:* Apache 2
* *Source control repositories / issue tracker:* https://github.com/vmware/harbor, with a ZenHub board tracking engineering work. _Will be moved to github.com/goharbor organization_
* *Infrastructure Required:* Infrastructure for CI / CD
* *Website:* https://vmware.github.io/harbor/. Will be moved to https://goharbor.io.
* *Release Methodology and Mechanics:* We currently do feature releases for major updates 3-4 times per year (with minor releases as needed). Before releasing we tag one or more RC releases for community testing. Commits to the project are analyzed, and we require that changes do not decrease the project's overall test coverage.
== Social Media Accounts:
* *Twitter:* https://twitter.com/project_harbor
* *Users Google Groups:* harbor-users@googlegroups.com
* *Developer Google Groups:* harbor-dev@googlegroups.com
* *Slack:* https://goharbor.slack.com
== Contributor Statistics
There have been 23 non-VMware committers with non-trivial (50+ LoC) contributions since the project's inception.
== Alignment with CNCF
Our team believes Harbor to be a great fit for the CNCF. Harbor's core mission aligns well with Kubernetes and the container ecosystem. The CNCF's mission is to “create and drive the adoption of a new computing paradigm that is optimized for modern distributed systems environments capable of scaling to tens of thousands of self-healing multi-tenant nodes.” We believe container registries are essential to achieving this mission. Harbor, as a mature open source registry, is a logical complement to the CNCF's existing portfolio of projects.
== Asks from CNCF
* Governance: general access to staff to provide advice, and help optimize and document our governance process
* Infrastructure for CI / CD
* Integration with CNCF devstat
* A vendor-neutral home for Harbor
== Appendices
=== Architecture
Harbor is cleanly architected and includes both third-party components (notably Docker registry, Clair, Notary and Nginx) and various Harbor-specific components. Harbor leverages Kubernetes to manage the runtimes of the various components.
An architectural diagram can be found on https://github.com/vmware/harbor/blob/master/docs/img/harbor-arch.png[GitHub] and shows the various components: red for third-party components which Harbor leverages for functionality (e.g., nginx, Notary, etc.); green for components that form the persistence layer; and blue for Harbor-specific components.
Succinctly, the bulk of the heavy lifting is done by the Core Service, which provides both an API and a UI for registry functionality. The job and admin services handle asynchronous jobs and management of configurations. Additional details for the various components are below.
=== Components
|===
| *Component* | *Description*
| *API Routing Layer (Nginx)* | A reverse proxy that serves as the endpoint for Harbor, Docker and Notary clients. Users leverage this endpoint to access Harbor's API or UI
| *Core Services* | Hosts Harbor's API and UI resources. Additionally, an interceptor for the registry API blocks Docker pull/push in particular use cases (e.g., an image fails its vulnerability scan)
| *Admin Service* | Serves an API for components to retrieve/manage configurations
| *Job Service* | Serves an API called by the Core service for asynchronous jobs
| *Registry v2* | Open source Docker Distribution, whose authorization is set to the token API of the Core service
| *Clair* | Open source vulnerability scanner by CoreOS, whose API is called by the job service to pull image layers from the Registry for static analysis
| *Notary* | Components of Docker's content trust open source project
| *Database* | PostgreSQL to store user data
|===
== Registry Landscape
There are numerous registries available for developers and platform architecture teams to leverage. We've analyzed the various options available and summarized them here:
https://github.com/vmware/harbor/blob/master/docs/registry_landscape.md
This table provides our best estimation of features and functionality available on other container registry platforms. Should you find mistakes please submit a PR to update the table.

View file

@ -1,124 +0,0 @@
== Helm
*Name of project*: Helm
*Description*:
link:http://helm.sh[Helm] is a package manager for Kubernetes, similar to Apt for Debian, that enables you to define, install, and upgrade container-based applications, including those with dependencies. Dependencies can be held in distributed repositories, including those in public and private locations.
Those who develop packages, known as charts, have the full power of Kubernetes objects and the ability to depend on other charts. Depending on other charts allows individual services to be defined separately while also allowing an application to launch using a microservice architecture.
Helm not only provides a simple out-of-the-box experience for those installing applications, but also simplifies deployment automation by enabling configuration reuse, enabling multiple components to be managed as a single entity, and facilitating observability of overall application health.
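The configuration reuse mentioned above comes from rendering chart templates against user-supplied values; Helm's chart templates are Go templates. The snippet below is a simplified, self-contained illustration of that idea using only the standard library: the manifest, release name, and value names are invented for the example and are not taken from any real chart.

[source,go]
----
package main

import (
	"os"
	"text/template"
)

// A chart-like template: placeholders are filled in from values, so the
// same template can be reused across releases and environments.
const manifest = `apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release }}-web
spec:
  replicas: {{ .Values.replicas }}
  template:
    spec:
      containers:
        - name: web
          image: {{ .Values.image }}
`

func main() {
	tmpl := template.Must(template.New("deployment").Parse(manifest))
	data := map[string]interface{}{
		"Release": "demo",
		"Values": map[string]interface{}{
			"replicas": 3,
			"image":    "nginx:1.15",
		},
	}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
----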
*Sponsor / Advisor from TOC*: Brian Grant <briangrant@google.com>
*Unique Identifier*: helm
*License*: ALv2
*Maturity Level:* Incubating
*Source control repositories*:
* https://github.com/kubernetes/helm
* https://github.com/kubernetes/charts
* https://github.com/kubernetes-helm/community
* https://github.com/kubernetes-helm/monocular
* https://github.com/kubernetes-helm/helm-summit-notes
* https://github.com/kubernetes-helm/chart-testing
* https://github.com/kubernetes-helm/charts-tooling
* https://github.com/kubernetes-helm/rudder-federation
* https://github.com/kubernetes-helm/chartmuseum
* https://github.com/helm/helm-www
A goal is to consolidate all repositories under the link:https://github.com/helm[helm] GitHub org.
link:https://github.com/kubernetes/community/blob/6c3b1a6f0c1152f5e35a53ea93e692ed501abf7a/governance.md#subprojects[Kubernetes, where Helm grew up, has the concept of sub-projects]. For Kubernetes these can be ways the core Kubernetes codebase is organized as well as separate codebases, some with their own release schedules, that support Kubernetes as a whole. Under Kubernetes, Helm and its supporting projects were organized as several sub-projects. This proposal groups those supporting projects of Helm, coming from Kubernetes, as sub-projects of Helm. Sub-projects may have their own maintainers and release schedules.
*Current Core Maintainers*:
* Adam Reese
* Adnan Abdulhussein
* Justin Scott
* Maciej Kwiek
* Matt Butcher
* Matt Farina
* Matt Fisher
* Michelle Noorali
* Nikhil Manchanda
* Taylor Thomas
* Vic Iglesias
_Note, the current core maintainers represent 5 different companies._
Sub-projects of Helm have their own maintainers. For example, you can read about the Charts maintainers in the link:https://github.com/kubernetes/charts/blob/master/OWNERS[OWNERS file].
*Infrastructure requirements*: CI, CNCF Cluster, Object Storage
*Issue tracker*: https://github.com/kubernetes/helm/issues
Sub-projects each have their own issue queue.
*Mailing lists*
* Slack:
** Helm Dev room https://kubernetes.slack.com/messages/helm-dev
** Helm Users room https://kubernetes.slack.com/messages/helm-users (see https://kubernetes.slackarchive.io/helm-users/page-100)
** Charts room https://kubernetes.slack.com/messages/charts
** Chartmuseum room https://kubernetes.slack.com/messages/chartmuseum
* https://lists.cncf.io/g/cncf-kubernetes-helm
*Website*: http://helm.sh
*Release methodology and mechanics*
Helm uses link:http://semver.org/[semantic versioning] for releases. Releases are announced using GitHub releases, while the release artifacts are placed into object storage for later download. The continuous integration system, currently CircleCI, automatically places releases and development builds into object storage.
Helm currently ships stable releases with a major version of 2. When a minor version containing new features comes out, a release branch is created from which release candidates, final releases, and patch releases are cut. Anything to be added to these releases is cherry-picked into the branch prior to release.
The Helm release process is documented in the link:https://github.com/kubernetes/helm/blob/master/docs/release_checklist.md[release checklist].
Sub-projects have their own release processes. For example, the Helm Community Charts repository uses continuous deployment. All changes to individual charts increment the chart versions. A sync job runs every 15 minutes to pick up changes, build the chart packages, and place them into object storage to be retrieved by Helm clients.
*Social media accounts*:
* https://twitter.com/helmpack
* link:https://www.youtube.com/channel/UC_kvCKc5EHNomq64f8C4sfA[YouTube]
*Existing sponsorship*:
* Microsoft
* Google
* Codefresh
* Bitnami
* Ticketmaster
* Codecentric
_Note, these companies and their logos are listed on the link:https://helm.sh[Helm website]._
*Adopters*:
Many Kubernetes users depend on Helm to configure and deploy their applications. The following is a partial list of those who have said they are using Helm at the Helm Summit, a conference held earlier this year that focused solely on the development of and use of Helm. The list is in alphabetical order.
* IBM
* jFrog
* Microsoft
* Nike
* Oteemo
* Reddit †
* Samsung SDS
* SUSE
* Ubisoft †
* WP Engine †
† These companies shared, at the conference, how they use Helm in production.
In addition to these we have measured downloads of Helm. A sample of that for the month of April 2018 shows 59,050 downloads from unique IPs from the Helm distribution channel along with 11,618 installations via Homebrew for MacOS.
*Statement on alignment with CNCF mission*:
Helm joined the CNCF at the same time Kubernetes did, as it was a sub-project of Kubernetes at that time. Helm is seeking to become a top-level project within the CNCF because Helm has grown up and is taking on a life of its own. This can be seen in the over 300 contributors to Helm, the over 800 contributors to the community charts, a successful conference based solely on Helm, and the unique culture forming around Helm compared to core Kubernetes.
*External Dependencies*: A full list of dependencies can be found at https://github.com/kubernetes/helm/blob/master/glide.lock.
*Other Contributors*: https://github.com/kubernetes/helm/graphs/contributors

View file

@ -1,405 +0,0 @@
== NATS Proposal
*Name of project:* NATS
*Description:* As developers and operators of modern cloud native
infrastructure have come to realize, there are limitations to using
traditional forms of systems communications (eg. REST, legacy
messaging, or traditional enterprise messaging) and applying these to
a cloud native environment.
=== Why does CNCF need messaging?
Software has matured from large monolith applications to event driven
distributed applications and microservices comprised of many
components that need to communicate. Messaging
(https://en.wikipedia.org/wiki/Message-oriented_middleware[message oriented middleware])
has evolved to meet these communication needs, and NATS was created
specifically for next generation cloud native applications.
=== NATS Overview
NATS is a mature, seven-year-old messaging technology, built from the
ground up to be cloud native, implementing the publish/subscribe,
request/reply and distributed queue patterns to help create a
performant and secure method of InterProcess Communication (IPC).
Simplicity, performance, scalability, and security constitute the core
tenets of NATS. For more detail of how these values inform the design
of NATS, including features that are intentionally absent, refer to
https://github.com/nats-io/roadmap/blob/master/architecture/DESIGN.md[“NATS Design Considerations”].
NATS is based on a client-server architecture with servers that can be
clustered to operate as a single entity. Clients connect to these
clusters to exchange data encapsulated in messages. An overview of
the NATS architecture can be found in
https://github.com/nats-io/roadmap/blob/master/architecture/ARCHITECTURE.md[“Understanding NATS Architecture”].
Core NATS was designed around fire and forget, or *at-most-once*
semantics, similar to how neurons fire in the brain. However, some
use cases may require a guarantee of delivery, and an *at-least-once*
pattern utilizing storage and replay of data. In this case the
optional streaming component of NATS can be deployed and utilized.
Most messaging systems do provide a mechanism to persist messages and
ensure message delivery. NATS does this through log based streaming;
a way to store and replay messages. Streaming subscribers can retrieve
messages published when they were offline, or replay a series of
messages. Streaming inherently provides a buffer in the distributed
application ecosystem, increasing stability and matching consumer
ability to receive messages. This allows applications to offload
local message caching and buffering logic into NATS Streaming, and
ensures a message will be delivered.
NATS supports both of these modes of delivery, *at-most-once*, and
*at-least-once*. At-most-once means that a message will be sent to a
subscriber only one time, and can be lost in flight. It is up to the
application, or the system, to ensure data has been delivered,
resending messages as necessary. This is sufficient for most modern
cloud native applications since, for example, NATS-based Request/Response
can be used to ensure that a message has been delivered and processed,
thus providing an end-to-end delivery guarantee. At-least-once
delivery, provided through NATS Streaming, means a message will always
be delivered, but may be delivered more than once. It is worth noting
that there is another delivery mode, *exactly-once*, which guarantees
a message will always be delivered once and only once. This mode is
not supported by NATS.
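As a minimal sketch of the at-least-once mode described above, the Go program
below publishes to NATS Streaming and then replays everything stored on the
channel. It assumes a locally running NATS Streaming server; the cluster ID,
client ID, and the import path (the Go streaming client as it existed around
the time of this proposal) are placeholders, not prescriptions.

[source,go]
----
package main

import (
	"fmt"
	"log"
	"time"

	stan "github.com/nats-io/go-nats-streaming" // client import path of this era; may have moved since
)

func main() {
	// Cluster and client IDs are placeholders for whatever a deployment uses.
	sc, err := stan.Connect("test-cluster", "example-client")
	if err != nil {
		log.Fatal(err)
	}
	defer sc.Close()

	// At-least-once publish: the call returns once the server has stored
	// the message in its log.
	if err := sc.Publish("orders", []byte("order-42")); err != nil {
		log.Fatal(err)
	}

	// A subscriber can replay everything already stored, which is how
	// consumers that were offline catch up on missed messages.
	sub, err := sc.Subscribe("orders", func(m *stan.Msg) {
		fmt.Printf("replayed: %s\n", m.Data)
	}, stan.DeliverAllAvailable())
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()

	time.Sleep(time.Second) // give the replay a moment to arrive
}
----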
==== Trade-offs
As stated, NATS' design goals include simplicity and performance. In
order to achieve this, there are a number of notable features NATS
does not provide. Some of these include:
* Message transactions
* Message schemas
* Last will and testament messages
* Message groups (e.g. JMSXGroupID)
* Exactly once delivery
* https://github.com/nats-io/roadmap/blob/master/architecture/DESIGN.md#minimizing-state[Cluster consistency]
While features like these are valuable to users, they add complexity,
and thus overhead. A simpler feature set ultimately translates into a
simple and direct fastpath that a message takes, allowing NATS to
optimize for raw performance, availability to all users, and to
maintain a small memory footprint.
=== Messaging Patterns
Messaging systems typically provide a number of usage patterns. The
major patterns NATS provides include publish/subscribe, queue
subscriptions, and request/reply. These basic patterns supported by
NATS provide a foundation to build a scalable and resilient
application ecosystem in a cloud environment. NATS goes further,
providing additional features facilitating cloud based deployments.
More information about this can be found in <<Appendix A>>.
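The three patterns above map directly onto a handful of client calls. The
following is a minimal sketch using the Go client (import path as it was
around the time of this proposal); the subjects and queue group name are
arbitrary examples, and a NATS server is assumed to be running locally.

[source,go]
----
package main

import (
	"fmt"
	"log"
	"time"

	nats "github.com/nats-io/go-nats" // client import path of this era; may have moved since
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Publish/subscribe: every subscriber on "updates" receives a copy.
	nc.Subscribe("updates", func(m *nats.Msg) {
		fmt.Printf("update: %s\n", m.Data)
	})

	// Queue subscription: messages on "jobs" are balanced across the
	// members of the "workers" queue group.
	nc.QueueSubscribe("jobs", "workers", func(m *nats.Msg) {
		fmt.Printf("job: %s\n", m.Data)
	})

	// Request/reply: a responder replies on the inbox carried in m.Reply.
	nc.Subscribe("time", func(m *nats.Msg) {
		nc.Publish(m.Reply, []byte(time.Now().Format(time.RFC3339)))
	})

	nc.Publish("updates", []byte("hello"))
	nc.Publish("jobs", []byte("resize-image"))

	reply, err := nc.Request("time", nil, time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("current time: %s\n", reply.Data)

	time.Sleep(100 * time.Millisecond) // let the async handlers print
}
----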
=== The NATS Protocol
Core NATS has a lightweight plain text protocol with a handful of
verbs. The protocol is easy to learn - plain text simplifies
development and debugging and facilitates contributions of new client
libraries. Being very terse, it adds only a few extra bytes of
overhead per message when compared to binary protocols.
The NATS Streaming protocol, being more complex, is a binary protocol
implemented through protobuf, layered above the NATS protocol.
NATS has a versioning plan in place for handling both breaking and
non-breaking changes in protocol, described
https://github.com/nats-io/roadmap/blob/master/VERSIONING.md[here].
=== Cloud-Native Features of NATS
Being built from the ground up to be cloud-native, NATS has a number of
cloud-friendly features.
==== High Availability and Scalability augmented with Auto-Discovery
NATS allows users to dynamically scale server cluster sizes with zero
downtime and no configuration changes. Updated cluster topology
information is propagated in real time throughout the NATS server
nodes and clients, allowing existing servers to automatically route
with new servers and clients to automatically update their list of
available NATS servers. This means you cluster a few seed servers in
your cloud, then add additional NATS servers (referencing the seed
servers) as needed - no downtime or reconfiguration of existing
servers or clients is needed.
==== Resiliency
NATS prioritizes the health and availability of the system as a whole
rather than attempting to service an individual client or server,
creating a foundation for stable and resilient systems. In
traditional messaging systems, when a consumer is slow to process
messages, resources can be consumed trying to accommodate it at the
expense of the entire system, potentially leading to instability and
errors. Core NATS identifies a slow consumer and drops messages, or
ultimately the consumer's connection, to prevent back-pressure from
affecting other users and destabilizing the system.
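From the client's perspective, the Go client surfaces this behaviour as a slow consumer error and lets an application cap its local buffering. A minimal sketch follows; the subject and limits are illustrative, and the server independently protects itself as described above.

[source,go]
----
// Assumes an established connection nc (*nats.Conn) from github.com/nats-io/nats.go
// and the standard log package.
nc.SetErrorHandler(func(_ *nats.Conn, sub *nats.Subscription, err error) {
	if err == nats.ErrSlowConsumer && sub != nil {
		pending, _, _ := sub.Pending()
		log.Printf("slow consumer on %q, %d messages still buffered", sub.Subject, pending)
	}
})

sub, _ := nc.Subscribe("metrics.>", func(m *nats.Msg) {
	// ... slow processing ...
})
// Buffer at most 10000 messages / 8 MiB for this subscription before dropping.
sub.SetPendingLimits(10000, 8*1024*1024)
----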
NATS Streaming, built upon NATS, shares this resiliency and goes a
step further, avoiding the slow-consumer problem entirely by metering
delivery to the throughput rate of each consumer.
==== No Dependencies and Low Overhead
NATS servers are extremely lightweight, with very low configuration
needs, making them ideal for use in cloud environments. The server
operates as a single binary with no prerequisites or runtime
dependencies. The NATS server docker image is less than 10MB, utilizes
little memory, and spins up very quickly, allowing NATS to work well in
container orchestration systems.
=== Messaging Alternatives
Messaging is simply a form of IPC - there are other ways to transfer
information, for example using a coordination mechanism such as a
distributed hash table or a database - these may be more appropriate
depending on the use case. Generally, though, messaging offers more
diverse messaging patterns, better scalability, and higher throughput
than other forms of IPC, and does not require as much additional
custom tooling and error handling. We address a specific question
asked of us, "Why not use etcd?", in <<Appendix B>>.
=== NATS Feature Comparison
This comparison is intended simply to compare features of NATS with
Apache Kafka and RabbitMQ, two other messaging projects. It is not
intended to favor or position one project over another. Any
corrections are welcome.
.Feature Comparison
|===
|Area |NATS |Apache Kafka |RabbitMQ
|Language & Platform Coverage
|Core NATS: 48 known client types, 11 supported by maintainers, 18 contributed by the community. NATS Streaming: 6 client types supported by maintainers, 3 contributed by the community. NATS servers can be compiled on any architecture supported by Go. NATS provides binary distributions for darwin-amd64, linux-386, linux-amd64, linux-arm6, linux-arm64, linux-arm7, windows-386, and windows-amd64, and server installations through Homebrew, Chocolatey, and go.
|18 client types supported across the community and by Confluent. Kafka servers can run on any platform supporting Java - very wide support.
|At least 10 client platforms footnote:[http://www.rabbitmq.com/devtools.html] that are maintainer supported, with over 50 community-supported client types. Servers are supported on the following platforms: Linux, Windows NT through 10, Windows Server 2003 through 201, Mac OS X, Solaris, FreeBSD, TRU64, and VxWorks. The server may run on many other platforms where Erlang can run, but may not be officially supported.
|Delivery Guarantees
|At most once, at least once
|At most once, at least once, exactly once footnote:[https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/]
|At most once, at least once
|Operational Complexity
|Little configuration for both server and clients, easy to install, auto discovery reduces configuration.
|Requires several configured components (ZooKeeper, brokers); clients must maintain some state.
|Should work out of the box.
|Security
|TLS, Authentication and Subject based Authorization in a reloadable configuration file.
|Supports Kerberos and TLS. Supports JAAS and an out-of-the-box authorizer implementation that uses ZooKeeper to store connection and subject.
|TLS, SASL, and Pluggable authentication.
|HA/FT
|Core NATS supports full mesh clustering to provide high availability to clients. NATS Streaming has warm failover backup servers. Full data replication is in progress.
|Fully replicated cluster members coordinated via ZooKeeper.
|Clustering Support with full data replication via mirrors.
|Monitoring
|The NATS server exposes an HTTP monitoring endpoint (/varz, /connz, /routez, /subsz); tools such as nats-top and the Prometheus NATS exporter build on it.
|Kafka has a number of management tools and consoles including Confluent Control Center, Kafkat, Kafka Web Console, and Kafka Offset Monitor.
|CLI tools, a plugin-based management system with dashboards and third party tools.
|Management
|Configuration is command line and configuration file, which can be reloaded with changes at runtime.
|Kafka has a number of management tools and consoles including Confluent Control Center, Kafkat, Kafka Web Console, and Kafka Offset Monitor.
|CLI tools, a plugin-based management system with dashboards and third party tools.
|Integrations
|NATS provides a Connector Framework (with a Redis connector) and integrations with Apache Spark, Apache Flink, CoreOS, Elasticsearch, Prometheus, Telegraf, Logrus, Fluent Bit, and Fluentd.
|Kafka has a large number of integrations in their ecosystem, including stream processing (Storm, Samza, Flink), Hadoop, database (JDBC, Oracle Golden Gate), Search and Query (ElasticSearch, Hive), and a variety of logging and other integrations.
|RabbitMQ has a rich set of plugins, including protocols (MQTT, STOMP), websockets, and various authorization and authentication plugins.
|===
==== Performance
We feel NATS performance is industry leading. However, to our knowledge no
third-party benchmark that includes NATS, Kafka, and RabbitMQ has been made
public. We feel strongly that third-party benchmarks are unbiased and widely
accepted.
Here are two third-party benchmarks to reference:
** http://bravenewgeek.com/dissecting-message-queues/[Dissecting Message Queues] comparing NATS and Kafka.
** https://cloudplatform.googleblog.com/2014/06/rabbitmq-on-google-compute-engine.html[RabbitMQ on Google Compute Engine].
=== Notable Use Cases
NATS, being as flexible as it is, covers a variety of use cases, from
acting as a microservices control plane to publishing events on
devices in IoT solutions.
A few use cases include:
* http://nats.io/blog/rapidloop-monitoring-with-opsdash-built-on-nats/[Rapidloop]: NATS as a microservices backplane, service discovery, and service orchestration.
* http://nats.io/blog/how-clarifai-uses-nats-and-kubernetes-for-machine-learning/[Clarifai]: NATS as a microservices control plane in Kubernetes
* http://nats.io/blog/nats-good-gotchas-awesome-features/[StorageOS]: NATS powering an event notification system.
* http://nats.io/blog/serverless-functions-and-workflows-with-kubernetes-and-nats/[Fission.io]: Event sourcing for serverless functions implemented through NATS streaming.
* http://nats.io/blog/nats-for-the-marionette-collective/[Choria/MCollective]: Server orchestration implemented over NATS.
* https://nats.io/blog/earthquakewarningnats/[A Circular World]: An early earthquake detection system utilizing NATS as the communications system with back end servers.
* http://nats.io/blog/nats-on-autopilot/[Joyent]: Sensor data aggregation implemented through NATS streaming.
* http://weave.works[Weaveworks]: General Pub/Sub and simple queue based routing within Weave Cloud SaaS, alongside K8s.
=== Roadmap
NATS intends to deliver some compelling additional functionality in the future;
refer to our https://github.com/nats-io/roadmap[roadmap].
=== Additional Resources
For additional information about NATS, please visit
http://nats.io/documentation/. A good slide deck about NATS
messaging and the problems it can solve is
https://www.slideshare.net/Apcera/simple-solutions-for-complex-problems["Simple Solutions for Complex Problems"].
*Sponsor / Advisor from the TOC:* Alexis Richardson
*Preferred Maturity Level:* Incubating
*License:* MIT (we intend to change to Apache 2.0 in the near future)
*Source control repositories:* https://github.com/nats-io
*Issue Tracker:* These are currently tracked via the various server and client
repositories for NATS Server and NATS Streaming. For example,
https://github.com/nats-io/gnatsd/issues for NATS Server. This has served us
well so far, although if there is a tracking system the CNCF prefers,
we would be interested in discussing it.
*Website:* https://NATS.io
*Release Methodology and Mechanics:* We currently do numbered releases for
major updates 3-4 times per year. We include the highest-priority items from
our roadmap as well as the user community's wishlist, and strive for code
coverage of >80% for client APIs and >90% for server code.
*Social Media Accounts:*
* Twitter: https://twitter.com/nats_io
* Google Groups: https://groups.google.com/forum/#!forum/natsio
* Slideshare: https://www.slideshare.net/nats_io/presentations
* Reddit: https://www.reddit.com/r/NATS_io/
* Slack: (currently by invite, with ~550 members: http://bit.ly/2DMdR6G)
*Existing project sponsorship:* Synadia
*Contributor Statistics:*
* NATS Server and NATS Streaming: 43 external contributors distributed across dozens of companies, spanning a variety of industry segments.
* NATS Server and NATS Streaming Clients: Over 100 contributors distributed across dozens of companies
*Sample Adopters:* Apcera, Apporeto, Clarifai, Comcast, General Electric (GE),
Greta.io, CloudFoundry, HTC, Samsung, Netlify, Pivotal, Platform9, Sensay,
Workiva, VMware.
*Sample Integrators:*
* *Functions as a Service:* OpenFaaS, Fission.io
* *Storage:* Minio, StorageOS
* *Cloud Computing, Monitoring and Tooling:* Pivotal, VMware, Hemera, RapidLoop, Spindoc
* *Event Gateways:* Apache Camel
*Statement on Alignment with CNCF mission:* Our team believes NATS to be a
great fit for the CNCF. We believe that the CNCF also recognizes this, having
been in discussions for some time for NATS to be contributed, and we are
interested in making that a reality. As the CNCF's mission is to "create and
drive the adoption of a new computing paradigm that is optimized for modern
distributed systems environments capable of scaling to tens of thousands of
self healing multi-tenant nodes," we believe NATS to be a core enabling
technology for this. This has also been validated by developers working on
cloud native systems already, as NATS has been widely chosen over traditional
communication methods and protocols for distributed systems.
Moreover, NATS has very strong existing synergy and inertia with other CNCF
projects, and is used heavily in conjunction with projects like: Kubernetes,
Prometheus, gRPC, Fluentd, Linkerd, and containerd, to name a few. The broad
client coverage and the simplicity of the protocol will also make supporting
and integrating with future cloud-native systems and paradigms
straightforward.
*Additional CNCF asks:*
. *Governance advice:* General access to staff to provide advice and help
optimize and document our governance process
. *General help managing contribution process going forward:* We do not
currently have a CLA, nor do we require developers making contributions
to sign anything. We would like to find a straightforward process that
meets the CNCF's requirements but is not overly burdensome
for developers to interact with.
=== Appendices
=== Appendix A
*Messaging Patterns in NATS*
Messaging systems typically provide a number of usage patterns. The major
patterns NATS provides include the following:
===== Publish/Subscribe
Messaging systems that support the publish/subscribe paradigm offer a
key benefit: decoupling of applications through subjects (also called
topics). Applications establish a connection to the broker, then
subscribe to various topics and begin receiving messages on that topic
regardless of the location or number of publishers producing data.
Any interested subscriber receives messages published on that topic.
This allows scalability and a loose coupling of publishers and
subscribers. With this dynamic topology, any publisher or subscriber
can move across network nodes without affecting the rest of the
system - a boon to microservices in the cloud.
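A minimal publish/subscribe sketch using the NATS Go client; the import path and the subject name are illustrative (older releases of the client lived at github.com/nats-io/go-nats):

[source,go]
----
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL) // nats://127.0.0.1:4222
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Any number of subscribers can register interest in the subject,
	// regardless of where (or whether) publishers are running.
	nc.Subscribe("orders.created", func(m *nats.Msg) {
		log.Printf("received: %s", m.Data)
	})

	// Fire-and-forget publish; Core NATS delivers at most once.
	nc.Publish("orders.created", []byte("order #42"))
	nc.Flush() // ensure the server has processed the publish

	time.Sleep(100 * time.Millisecond) // give the async handler a moment to run
}
----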
===== Queue Subscribers (Load Balancing)
NATS can be described as a layer 7 load balancer - it routes
application data based on the message subject, which is provided
by the producing application. In discussing load balancing specific
to NATS we are referring to the competing consumer pattern in the form
of queue subscribers. In this pattern, the NATS server distributes
messages randomly amongst multiple subscribers working together to
each individually process messages from a single virtual "queue". For
example, one might run several identical applications queue subscribed
on the same subject. The NATS server (or streaming server) will
distribute each message to one subscriber in the group, allowing for
distribution of workload amongst multiple instances of the
application. In some cases this can be preferable to layer 4 load
balancing because network traffic can be directed through use of the
subject namespace - applications balancing the workload can move or
scale with no additional configuration, although it may not be as
performant as layer 4 load balancing.
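Continuing the sketch above, each instance of a worker service makes the same call; the server then delivers every message on the subject to exactly one member of the named queue group (the subject and group names are made up):

[source,go]
----
// Run this in every instance of the worker; NATS balances messages
// published on "jobs.images" across the "workers" queue group.
nc.QueueSubscribe("jobs.images", "workers", func(m *nats.Msg) {
	log.Printf("processing job: %s", m.Data)
})
----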
===== Request / Reply Pattern Support
NATS supports request/reply through the use of unique subjects, while still
allowing a loose coupling between a requestor and replier(s). The request/reply
pattern involves sending a request message and expecting a reply; often the
application will block until the reply is received.
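A minimal request/reply sketch with the same Go client (the subject name is illustrative); the client library creates the unique reply subject behind the scenes:

[source,go]
----
// Assumes the connection nc and the imports from the earlier sketch,
// plus the standard time package.

// Replier: answer each request on a well-known subject.
nc.Subscribe("time.now", func(m *nats.Msg) {
	nc.Publish(m.Reply, []byte(time.Now().Format(time.RFC3339)))
})

// Requestor: publish the request and block up to two seconds for the first reply.
msg, err := nc.Request("time.now", nil, 2*time.Second)
if err != nil {
	log.Fatal(err)
}
log.Printf("reply: %s", msg.Data)
----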
=== Appendix B
==== Why not use etcd?
NATS is designed to deliver application data in a distributed system.
NATS does this by packaging application data in a message and sending
it to endpoints. Various messaging patterns (request reply,
publish/subscribe, distributed queues) are supported to communicate
with individual consumers or to fan out and send one message to many
consumers. It is up to the application to consider messages as atomic
units of data, or as elements of a stream - real-time with Core NATS,
or as a historical log of messages with NATS Streaming.
Etcd was designed to solve the problem of distributed system
coordination and metadata storage. It persists data in a key value
store, and supports many concurrency primitives including distributed
locking and leadership election. There are recipes for queueing using
unique keys, as well as a gRPC API to stream updates - this is where
we begin to see overlap.
The fundamental decision of whether to use NATS or etcd can be based
on a few factors. One factor is the structure of data - whether your
distributed application can benefit most from data structured as a
key-value store versus a stream. If your application benefits from
key/value data storage, etcd is a better choice. The second factor is
the frequency of updates. Any update to a value in etcd is more
expensive than a message sent in NATS due to the consistency
guarantees etcd provides. If you have frequently updated values, or
require an extremely high update frequency, NATS is a better
choice.
NATS and etcd can also complement each other, with etcd for
coordination and NATS for data distribution.

# Project Description
Every organization has unique policies that affect the entire stack. These policies are vital to long term success because they codify
important requirements around cost, performance, security, legal regulation, and more. At the same time, organizations often rely on
tribal knowledge and documentation to ensure that policies are enforced correctly. While these approaches are known to be error prone,
they exist because systems frequently lack the flexibility and expressiveness required to automate policy enforcement.
The Open Policy Agent (OPA) is a general-purpose policy engine that enables unified, context-aware policy enforcement across the stack.
OPA empowers administrators with greater control and flexibility so that organizations can automate policy enforcement at any layer.
At the core of OPA is a high-level declarative language (and runtime) that allows administrators to enforce policies across multiple
domains such as API authorization, admission control, workload placement, storage, and networking. OPA's language is purpose-built for
expressing policy decisions. The language has rich support for processing complex data structures as well as performing search and
aggregation across context required for policy decisions. The language also provides support for encapsulation and composition so that
complex policies can be shared and re-used. Finally, the language includes a standard library of built-in functions for performing math
operations, string manipulation, date/time parsing, and more.
With OPA, policy decisions are decoupled from applications and services so that policy logic can be modified easily and upgraded
on-the-fly without requiring expensive, time consuming development and release cycles.
OPA provides simple APIs to offload policy decisions from applications and services. Policy decisions are computed by OPA and returned
to callers as structured data. Callers integrate with OPA by executing policy queries that can include arbitrary input values. For
example, an API gateway might supply incoming API requests as input and expect boolean values (representing allow/deny decisions) as
output. On the other hand, a container orchestrator might supply workload resources as input and expect a map of clusters and weights
to drive workload placement as output. See the appendix for sample policies that cover these use cases.
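As a sketch of what such a query looks like, the snippet below POSTs an input document to a locally running OPA and reads back a boolean decision. The policy path (`example/authz/allow`), port, and input fields are illustrative; the `/v1/data/...` Data API and the `{"input": ...}` / `{"result": ...}` envelope are OPA's documented query interface.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Build the policy query input (illustrative fields).
	query := map[string]interface{}{
		"input": map[string]interface{}{
			"method": "GET",
			"path":   []string{"salary", "alice"},
			"user":   "alice",
		},
	}
	body, _ := json.Marshal(query)

	// Ask OPA for the decision at data.example.authz.allow.
	resp, err := http.Post("http://localhost:8181/v1/data/example/authz/allow",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var decision struct {
		Result bool `json:"result"`
	}
	json.NewDecoder(resp.Body).Decode(&decision)
	fmt.Println("allowed:", decision.Result)
}
```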
OPA itself is written in Go and can be integrated as a library, host-level daemon, or sidecar container. OPA provides APIs to load and
manage policies as well as external data. Finally, OPA provides rich tooling to support the development, testing, and debugging of
policies.
Since the initial release in July 2016, OPA's mission has been to provide a powerful building block that enables policy-based control
across the stack. OPA's roadmap for the next 12 months includes improvements to the language, integration with Google's CEL, expansion
of the standard policy library, as well as continued hardening and performance optimization.
**Sponsor from TOC:** Ken Owens, Brian Grant
**Preferred Maturity Level:** Sandbox
**License:** Apache License v2
# Source Control
https://github.com/open-policy-agent/opa
https://github.com/open-policy-agent/library
# External Dependencies
github.com/ghodss/yaml MIT License
github.com/gorilla/mux BSD 3-clause "New" or "Revised" License
github.com/mattn/go-runewidth MIT License
github.com/olekukonko/tablewriter MIT License
github.com/peterh/liner MIT License
github.com/pkg/errors BSD 2-clause "Simplified" License
github.com/sirupsen/logrus MIT License
github.com/spf13/cobra Apache License 2.0
github.com/spf13/pflag BSD 3-clause "New" or "Revised" License
golang.org/x/crypto/ssh/terminal BSD 3-clause "New" or "Revised" License
golang.org/x/sys/unix BSD 3-clause "New" or "Revised" License
gopkg.in/fsnotify.v1 BSD 3-clause "New" or "Revised" License
gopkg.in/yaml.v2 Apache License 2.0
**Initial Committers:** Torin Sandall and Tim Hinrichs from Styra (since creation), Tristan Swadell from Google (since May 2017)
**Infrastructure Requests:** None initially. CI is currently hosted on Travis and covered by the free tier for open source projects. In
the future, we would like to leverage CNCF test clusters for system testing integrations built as part of the OPA project.
**Communication Channels:**
Slack: http://slack.openpolicyagent.org
**Issue Tracker:** https://github.com/open-policy-agent/opa/issues
**Website:** http://www.openpolicyagent.org
# Release Methodology and Mechanics
We currently use numbered releases with the changelog and binaries published to https://github.com/open-policy-agent/opa/releases.
The release process is partially automated with manual portions assisted by scripts. The current release process is documented here:
https://github.com/open-policy-agent/opa/blob/master/docs/devel/RELEASE.md. The release schedule is somewhat ad-hoc, aligned around
large feature boundaries.
**Social Media Accounts:**
Twitter: https://twitter.com/openpolicyagent
# Community Size and any Existing Sponsorship
Adopters:
Netflix
Medallia
Schuberg Phillis
Huawei
More: At least one large financial institution and one large online retailer are testing OPA
Integrations:
Kubernetes (Use cases: federated resource placement, admission control)
Docker (Use cases: Docker engine authorization)
Istio (Use cases: microservice API authorization)
Linkerd (Use cases: microservice API authorization)
OpenSDS (Use cases: storage scheduling)
Terraform (Use cases: risk management on terraform plans)
PAM (Use cases: SSH and sudo authorization)
Cloud Foundry buildpack to enable microservice API authorization
**Sponsors**
https://www.styra.com
https://www.firebase.com (Google)
**Numbers:**
3 active contributors currently (2 from Styra, 1 from Google), with 8 other contributors over past 12 months.
80 stars
49 members on Slack
31 releases
# Statement of Alignment with CNCF Mission
As cloud native technology matures and enterprise adoption increases, the need for policy-based control has become apparent. OPA
provides a powerful building-block that enables fine-grained, expressive policy enforcement. As such, we think that OPA would be a
great fit for the CNCF.
# Benefits to the CNCF
The ecosystem must provide solutions to control who can do what across microservice deployments because legacy approaches to access
control do not satisfy the requirements of modern environments. OPA provides a purpose-built language and runtime that can be used to
author and enforce authorization policy. As such, we feel that OPA will complement the CNCF's portfolio and help accelerate adoption of
cloud native technology in enterprises. In the longer term, we think that enterprises will benefit from a unified approach to policy
enforcement that can be applied across the stack.
# What does OPA need from the CNCF
OPA needs a well respected, vendor-neutral home that can help serve as a rallying point around policy as code. In addition to increased
visibility, we hope that inclusion in the CNCF will foster communication between OPA and other projects in the ecosystem. As the project
grows, we would want to leverage the CNCF's expertise around project governance and community standards as those are fundamental to the
long term success of the project.
The project does not have any infrastructure requests at this time. CI is currently hosted on Travis and covered by the free tier for
open source projects. In the future, we would like to leverage CNCF test clusters for system testing integrations built as part of the
OPA project.
# Appendix A: REST API Authorization Example
This sample shows two simple rules that enforce an authorization policy on an API that serves salary data. In English, the policy says
that employees can see their own salary and the salary of any of their reports.
    allow {
        input.method = "GET"
        input.path = ["salary", employee_id]
        input.user = employee_id
    }

    allow {
        input.method = "GET"
        input.path = ["salary", employee_id]
        input.user = data.management_chain[employee_id][_]
    }
The first rule allows employees to GET their own salary. The rule shows how you can use variables in rules. In that rule, employee_id is
a variable that will be bound to the same value across the last two expressions.
The second rule allows employees to GET the salary of their reports. The rule shows how you can access arbitrary context (e.g., JSON data)
inside the policy. The data may be loaded into the policy engine (and cached) or it may be external and fetched dynamically.
# Appendix B: Cluster Placement Example
This sample shows a simple rule that generates a set of clusters that a workload may be deployed to. The workload is provided as input
to policy. In English, the policy says that workloads must be placed on clusters that satisfy the workload's jurisdiction requirements.
    desired_clusters = {name |
        cluster = data.clusters[name]
        satisfies_jurisdiction(input.deployment, cluster)
    }

    satisfies_jurisdiction(deployment, cluster) {
        deployment.jurisdiction = "europe"
        startswith(cluster.region, "eu")
    } else {
        not deployment.jurisdiction
    }
This example shows how logic can be composed across rules and functions.

== OpenMetrics
*Name of project:* OpenMetrics
*Description:*
OpenMetrics refines the Prometheus exposition format into an independent standard.
Prometheus has become the de facto standard in cloud-native metric monitoring, and sees active upstream work from competing vendors.
The ease of implementing this exposition format has led to an explosion in compatible metrics endpoints, with 300+ exporters registered, dozens of native integrations, and unknown numbers of internal adoptions.
To allow for even more adoption, OpenMetrics received a lot of additional scrutiny and engineering time from several large players in the cloud-native space.
It also puts the format under a neutral name, allowing more monitoring vendors to adopt it without potential political considerations.
With substantial commitments for adoption, OpenMetrics will enjoy solid support from day 1.
Amongst others, these are:
* Prometheus
* Cloudflare
* GitLab
* Google
* Grafana
* InfluxData
* Oath.com
* RobustPerception
* SpaceNet
* Uber
OpenMetrics was presented at the [CNCF TOC meeting on 2018-06-19](https://docs.google.com/presentation/d/1Ym8fLRCaX43uHPHBRyuRXM62U8m4vXaBXkuUp6tt3js/edit#slide=id.g25ca91f87f_0_0).
*Statement on alignment with CNCF mission:*
Given the CNCF's stated role in "fostering the growth and evolution of the ecosystem" and "making the technology accessible and reliable", we believe OpenMetrics helps with both of these goals.
*Sponsor / Advisor from TOC:* Alexis Richardson, Bryan Cantrill
*Unique identifier:* openmetrics
*Preferred maturity level:* sandbox
*License:* Apache License v2.0
*Source control repositories:* https://github.com/RichiH/OpenMetrics/
*External Dependencies:*
OpenMetrics currently depends on no external software components.
Once the test suite is released, it will depend on Go and Python and some libraries. Proper licence hygiene will be ensured.
*Lead:* Richard Hartmann (SpaceNet)
*Infrastructure requests (CI / CNCF Cluster):* None
*Communication Channels:*
*Issue tracker:* https://github.com/RichiH/OpenMetrics/issues
*Website:* https://www.openmetrics.io
*Release methodology and mechanics:*
Given that this is a format, releases will be slow, deliberate, and forward- and backwards-compatible.
*Social media accounts:* None
*Existing sponsorship*: None
*Community size:*
* 128 stars
* 15 forks
* Commitments by companies with billions of combined yearly turnover
* 6 people on bi-weekly call
*Production usage*: None yet

== SPIFFE
*Name of project*: SPIFFE
*Description*:
With microservices, container orchestrators, and cloud computing leading to the deployment of increasingly dynamic and heterogeneous production environments, conventional network and application security practices struggle to scale under such distributed design patterns.
Further, engineers must be involved in how applications are deployed and managed in such environments; and operations teams require deeper visibility into managed applications.
As we move to a more evolved security stance, we must create technology frameworks that enable these engineering and operations teams to play active roles in easily building secure, distributed applications. **SPIFFE (aka the "Secure Production Identity Framework for Everyone")** is one such framework.
SPIFFE comprises three (3) components:
1. **SPIFFE ID**: A specification defining how workloads identify themselves to each other; such an ID is implemented as a Uniform Resource Identifier (URI).
1. **SPIFFE Verifiable Identity Document (SVID)**: A specification for encoding SPIFFE IDs in a cryptographically-verifiable document.
1. **SPIFFE Workload API**: An API specification to issue and/or retrieve SVIDs.
The SPIFFE Workload API does not require a calling workload to 1) have a priori knowledge of its identity; or 2) possess authentication token(s) when calling the API.
Implementations of the SPIFFE Workload API can 1) run on and across multiple platforms; and 2) identify running workloads at a process "and" kernel level, making it suitable for use with container schedulers like Kubernetes.
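As a purely illustrative example of the SPIFFE ID component above, a SPIFFE ID is just a URI in the spiffe scheme, with the trust domain as the host and a workload path; any URI parser can pick it apart (the ID below is made up):

[source,go]
----
package main

import (
	"fmt"
	"net/url"
)

func main() {
	id := "spiffe://example.org/ns/payments/sa/web" // hypothetical SPIFFE ID
	u, err := url.Parse(id)
	if err != nil {
		panic(err)
	}
	fmt.Println("trust domain:", u.Host)  // example.org
	fmt.Println("workload path:", u.Path) // /ns/payments/sa/web
}
----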
Building upon work done at Bell Labs (Plan 9), Google (LOAS), and others, **SPIRE (aka the "SPIFFE Runtime Environment")** is an open-source software implementation of SPIFFE that can bootstrap and issue cryptographically verifiable identity to workloads running in heterogeneous environments and across organizational boundaries. SPIRE consists of two (2) components:
1. **SPIRE Server**: provides a central registry of SPIFFE IDs, and the attestation policies describing which workloads can assume said identities. Attestation policies describe the properties a workload must exhibit to be assigned a SPIFFE ID, and are described as a mix of process attributes (such as a Linux UID, or Kubernetes service account) and infrastructure attributes (such as running in an Amazon EC2 instance with a particular tag).
1. **SPIRE Agent**: runs on any kernel and exposes the local workload API to any process that needs a SPIFFE ID, key, and/or trust bundle. On *nix systems, this API is exposed locally through a Unix domain socket. By verifying the attributes of a calling workload, the workload API avoids requiring the workload to supply a secret in order to authenticate.
SPIRE's 12-month roadmap is exciting and will deliver multiple features:
* Production readiness, including HA mode, versioned APIs, documented SLOs, >80% test coverage, and functional testing in release train.
* Support for automatic bootstrapping and node attestation on public cloud platforms (Amazon Web Services, Microsoft Azure, and Google Cloud Platform).
* Support for automatic bootstrapping and node attestation on virtualization platforms (VMWare and OpenStack).
* Support for Microsoft Windows-based workloads.
* SPIFFE Workload API client libraries in Go, C, Java, and Javascript, with support for TLS negotiation and JWT signing.
* gRPC support for the SPIFFE Workload API.
* SPIFFE Workload API certificate helpers for Linux and Windows.
* A standards conformance test suite.
* Secure introduction to popular products, including Lyft Envoy and Hashicorp Vault.
*Sponsor / Advisor from TOC*: Brian Grant <briangrant@google.com>, Sam Lambert <samlambert@github.com>, Ken Owens <ken.owens@mastercard.com>
*Preferred maturity level*: Sandbox
*Unique Identifier*: spiffe
*License*: ALv2
*Source control repositories*:
SPIFFE has its own "top-level" link:https://github.com/spiffe[GitHub organization], within which reside the link:https://github.com/spiffe/spiffe[SPIFFE] and link:https://github.com/spiffe/spire[SPIRE] repositories.
*Initial Committers*:
This link:https://github.com/spiffe/spiffe/blob/master/CODEOWNERS[document] captures SPIFFE's current committers, while this link:https://github.com/spiffe/spire/blob/master/CODEOWNERS[document] captures SPIRE's current committers.
*Infrastructure requirements*:
SPIFFE's test suite and SPIRE's continuous integration (CI) tests are currently executed on Travis-CI.org. Longer term, we seek access to the CNCF test cluster to automatically run functional, integration, and performance tests.
*Issue tracker*:
Issues are tracked with GitHub Issues feature link:https://github.com/spiffe/spiffe/issues[here].
*Mailing lists*
SPIFFE has the following primary mailing lists, nearly all of which are used primarily for ACLing meeting documents and calendar invites. The lists do have some activity, but the overwhelming majority of activity occurs in SPIFFE's link:https://spiffe.slack.com/[Slack] channel. More details can be found link:https://github.com/spiffe/spiffe#communications[here].
* [Discussions] Developers & Contributors (link:https://groups.google.com/a/spiffe.io/forum/#!forum/dev-discussion[website]): The purpose of this Google Group is for SPIFFE developers and contributors to discuss design and implementation issues.
* [Discussions] Users (link:https://groups.google.com/a/spiffe.io/forum/#!forum/user-discussion[website]): The purpose of this Google Group is to give feedback, ask questions, and interact with the SPIFFE community. You can also check out SPIFFE on GitHub.
* [SIG] Components (link:https://groups.google.com/a/spiffe.io/forum/#!forum/sig-components[website]): The purpose of this Google Group is to discuss items related to the components and APIs tied to SPIFFE's reference implementation (SPIRE) and its architecture. Topics such as role of Node Agent vs. Cluster CA, API semantics, and others serve as good examples of what's to be discussed.
* [SIG] Specification (link:https://groups.google.com/a/spiffe.io/forum/#!forum/sig-specification[website]): The purpose of this Google Group is to discuss items related to the SPIFFE specifications.
* SPIFFE Announce (link:https://groups.google.com/a/spiffe.io/forum/#!forum/announce[website]): The purpose of this Google Group is to share community-wide announcements about SPIFFE and SPIRE.
* Technical Steering Committee (link:https://groups.google.com/a/spiffe.io/forum/#!forum/tsc[website]): This is an ACL'd distribution group for communications amongst members of SPIFFE's Technical Steering Committee.
*Website*:
SPIFFE's link:https://www.spiffe.io/[website] is based on GitHub Pages. It primarily serves as a landing page for the project's primary documents, and mostly redirects to the GitHub repositories.
*Release methodology and mechanics*
SPIRE operates on a 30 to 60-day release cadence, with releases marked with versioned git tags. RC-quality code is periodically tagged off of the master branch before the final release. RC and final releases include binaries for glibc-based Linux platforms. The SPIFFE standards themselves are currently unversioned.
*Social media accounts*:
SPIFFE's only social media account is on link:https://twitter.com/spiffeio[Twitter].
*Existing sponsorship*:
link:https://www.scytale.io[Scytale, Inc.] and link:https://www.google.com[Google] currently serve as SPIFFE's primary sponsors.
*Contributor statistics*:
The various SPIFFE projects currently have 16 active contributors from 8 organizations, including Scytale, Twilio, Square, Buoyant.io, and OvrClk. 11 contributors are granted the ability to commit changes across some or all of the codebase.
*External Dependencies*:
SPIRE has the following build-time dependencies:
* golang (BSD 3-clause)
* go.uuid (MIT)
* golang/protobuf (BSD 3-clause)
* logrus (MIT)
* go-grpc (Apache 2.0)
* go-plugin (MPL-2.0)
* hcl (MPL-2.0)
* gorm (MIT)
* gopsutil (BSD 3-clause)
* go-hclog (MIT)
* grpc-gateway (BSD 3-clause, Apache 2.0)
* inflection (MIT)
* go-bindata (CC0 1.0)
* go-sqlite3 (MIT)
* sqlite (public domain)
As a golang project, SPIRE has no special runtime dependencies.
*Statement on alignment with CNCF mission*:
We believe aligning on a common representation of workload identity, and prescribing best practices for identity issuance and delivery, are critical for widespread adoption of cloud-native architectures. SPIFFE provides exactly this capability.
We see organizations adopting SPIFFE in conjunction with other CNCF-sponsored projects to deliver robust and secure production systems. Concrete examples include:
* Providing the basis for authentication between Kubernetes-hosted workloads, between workloads hosted across multiple Kubernetes clusters, and workloads hosted outside of Kubernetes.
* Providing the basis of identity and establishing TLS between endpoints of a service mesh implemented with Envoy and/or Linkerd.
* Authentication and TLS between gRPC servers and clients.
* Identifying workloads when exporting telemetry to systems such as Prometheus, Jaeger, and fluentd, and establishing mTLS to the same.
* Enforcing that only Notary-signed images be issued valid identities in production environments.
*Additional CNCF asks*:
* Public relations (including analyst relations and social media management)
* Marketing (case studies, store)
* Certification (expert certification, software conformance, training)
* Legal (trademark, copyright, patents, licenses)

== Telepresence
*Name of project:* Telepresence
*Description:*
Telepresence enables software engineers using Kubernetes to develop services locally, while proxying their local services to a remote Kubernetes cluster.
As cloud-native applications grow in complexity, running the entire application locally is no longer practical. The entire application frequently consumes more memory and CPU than is available locally. Moreover, many applications rely on cloud-native services such as cloud databases (e.g., Amazon RDS) or cloud messaging (e.g., Google Pub/Sub). Thus, developers need to develop using a remote Kubernetes cluster.
However, moving development to a remote Kubernetes cluster has tradeoffs. Remote development requires containers to be pushed to a remote registry, does not permit auto-reloading of code, and generally increases the overall latency in the code/test/debug cycle. In addition, developers are unable to use their complete suite of development tools, e.g., IDE, debugger, profiler, and so forth.
Telepresence enables a hybrid model for development. Services are developed locally, while the rest of the application runs in the cloud. Telepresence deploys a bi-directional proxy to the remote Kubernetes cluster, connecting the local development machine to the cloud.
Telepresence is currently used by dozens of organizations in their daily development process. These organizations range in size from Fortune 50 companies to small startups.
Telepresence was presented at the https://docs.google.com/presentation/d/1VrHKGre5Y8AbmXEOXu4VPfILReoLT38Uw9TMN71u08E/edit#slide=id.g380c8a0114_0_178[CNCF TOC meeting on 4/17/2018].
*Statement on alignment with CNCF mission:*
Given the CNCF's stated role in "fostering the growth and evolution of the ecosystem" and "making the technology accessible and reliable", we believe Telepresence helps with both of these goals. In particular, we have heard repeatedly from Kubernetes users that one of the major barriers to adoption is the developer experience. Telepresence's goal is to reduce the friction of developing cloud-native applications for developers. We think that expanding the portfolio of CNCF projects beyond operational infrastructure (e.g., Kubernetes, Prometheus, Envoy) to software for developers will help further the ubiquity of cloud-native technologies.
*Sponsor / Advisor from TOC:* Alexis Richardson, Camille Fournier
*Unique identifier:* telepresence
*Preferred maturity level:* sandbox
*License:* Apache License v2.0
*Source control repositories:* https://github.com/datawire/telepresence
*External Dependencies:*
Telepresence depends on the following external software components:
* `kubectl` (Apache Software License 2.0)
* OpenSSH (BSD 2 clause)
* `sshfs` (GPL 2.0)
* `conntrack` (GPL 2.0)
* `torsocks` (GPL 2.0)
* `socat` (GPL 2.0)
* Docker (Apache Software License 2.0)
*Initial Committers (leads):*
* Rafael Schloming (Datawire)
* Abhay Saxena (Datawire)
*Infrastructure requests (CI / CNCF Cluster):*
CI (currently using the CircleCI free plan), and possibly the CNCF Community cluster for regression testing.
*Communication Channels:*
* Gitter: https://gitter.im/datawire/telepresence
*Issue tracker:* https://github.com/datawire/telepresence/issues
*Website:* https://www.telepresence.io
*Release methodology and mechanics:*
We release rapidly and frequently. Generally this varies from weekly to monthly.
*Social media accounts:*
None
*Existing sponsorship*: https://www.datawire.io[Datawire]
*Community size:*
* 700+ stars
* 50+ forks
* 100K+ container pulls
* 90+ people on Gitter
*Production usage*:
Telepresence is being actively used by a number of organizations for active development. Telepresence is not designed for use in production. Some of these users include:
* Bitnami https://youtu.be/8Dl8U-AbJN0[KubeCon EU talk]
* Namely https://www.youtube.com/watch?v=xIOkbu0sUi4[Kubernetes NYC meetup talk]
* Sight Machine
* Shopify
* Verloop

== TiKV Project Proposal
*Name of Project*: TiKV
*Description*: TiKV is an open-source distributed transactional key-value database built in Rust that implements the Raft consensus algorithm. It features horizontal scalability, consistent distributed transactions, and geo-replication.
*Why is TiKV a good fit for CNCF?*
TiKV has been one of the few key-value storage solutions in the cloud-native community that can balance both performance and ease of operation with Kubernetes. Data storage is one of the most important components of any cloud-native infrastructure platform, and end users need a range of choices to meet their needs. TiKV is complementary to existing CNCF database projects like Vitess, which is currently the only database option hosted by CNCF. As a transactional key-value database, TiKV serves as another choice for cloud-native applications that need scalability, distributed transactions, high availability, and strong consistency.
With TiKV becoming a CNCF project, the open-source cloud-native ecosystem will also become more vibrant and robust in China, because our team has a strong track record of fostering the open source community in China and is dedicated to building and promoting CNCFs mission there. Open source is global, and having TiKV as a part of CNCF will further make that story so.
*TiKV Overview*
_Development Timeline_:
- Current release: 2.1.0 beta
- April 27, 2018: TiKV 2.0 released
- October 16, 2017: TiKV 1.0 released
- October 2016: beta version of TiKV was released and used in production
- April 1, 2016: TiKV was open-sourced
TiKV is currently adopted in-production in more than 200 companies, either together with TiDB (a stateless MySQL compatible SQL layer) or on its own. Please refer to the “Adopters” list below for the current list of publicly acknowledged adopters.
_Community Stats_:
- Stars: 3300+
- Contributors: 75+
- Commits: 2900+
- Forks: 400+
*Cloud-Native Features of TiKV*
_Horizontal scalability_: TiKV automatically handles data sharding and replication for cloud-native applications and enables elastic capacity scaling by simply adding or removing nodes with no interruption to ongoing workloads.
_Auto-failover and self-healing_: TiKV supports automatic failover with its implementation of the Raft consensus algorithm, so in situations of software or hardware failures, the system will automatically recover while maintaining the application's availability.
_Strong consistency_: TiKV delivers performant transactions and strong consistency by providing full support for ACID semantics, ensuring the accuracy and reliability of your data anytime, anywhere.
_Cloud-native deployment_: TiKV can be deployed in any cloud environment--public, private, or hybrid--using tidb-operator, a Kubernetes-based deployment tool.
*Comparison*
This comparison is intended simply to compare features of TiKV with two other well-known NoSQL databases, Cassandra and MongoDB. It is not intended to favor or position one project over another. Any corrections are welcome.
.Feature Comparison
|===
|Area |Cassandra |MongoDB |TiKV
|Type
|Wide Column
|Document
|Key-Value
|Auto-scaling
|Y
|Optional
|Y
|ACID Transaction
|N
|Maybe?
|Y
|Strong consistency replication
|Optional
|N
|Y
|Geo-based replication
|N
|N
|Y
|Self-healing
|N
|N
|Y
|SQL Compatibility
|Partial (w/ CQL)
|N
|MySQL (w/ TiDB)
|===
*Roadmap*:
https://github.com/pingcap/tikv/blob/master/docs/ROADMAP.md
*Additional Information*:
_TOC Presentation Date_: July 3, 2018
_Current TOC Sponsor_: Bryan Cantrill and Ben Hindman
_Preferred Maturity Level_: Sandbox
_License_: Apache 2.0
_Source control repositories_: https://github.com/pingcap/tikv
_Contributor Guideline_: https://github.com/pingcap/tikv/blob/master/CONTRIBUTING.md
_Official Documentation_: https://github.com/pingcap/tikv/wiki/TiKV-Documentation
_Blog_: https://www.pingcap.com/blog/#TiKV
_Infrastructure Required_:
TiKV uses Circle CI for unit tests and builds and in-house Jenkins CI cluster for some integration tests. We plan to use CNCF test cluster to automatically run stability tests and performance tests in the future.
_Issue Tracker_: https://github.com/pingcap/tikv/issues
_Website_: tikv.org (under construction)
_Release Methodology and Mechanics_:
TiKV follows the Semantic Versioning 2.0.0 convention. The release cadence is:
- Major version is released every 6 months
- Minor version is released every 3 months
- Patch version is released every 2 weeks
TiKV releases are announced using GitHub releases, and the current release is 2.1.0 beta.
_Social Media Accounts_: TBD
_Adopters_:
https://github.com/pingcap/tikv/blob/master/docs/adopters.md
_Dependencies and License Compliance (done by FOSSA)_:
https://app.fossa.io/reports/87fe16e8-72a2-4e27-8509-a07dfa52a21a
*Statement on Alignment with CNCF Mission*
Our team believes TiKV will be a great fit for CNCF. As the CNCF's mission is to "create and drive the adoption of a new computing paradigm that is optimized for modern distributed systems environments capable of scaling to tens of thousands of self healing multi-tenant nodes," we believe TiKV to be a core enabling technology for this mission. This belief has been validated by our many adopters and developers working to build, deploy, and maintain large-scale applications in a cloud-native environment. Moreover, TiKV has very strong existing synergy with other CNCF projects, and is used heavily in conjunction with projects like: Kubernetes, Prometheus, and gRPC.

# Prometheus Graduation Application
Prometheus was the second project accepted into the CNCF (joined in May 2016) and has grown significantly over time. In August 2017 we successfully hosted a community conference (PromCon) in collaboration with the CNCF that attracted 200+ attendees from the developer and user community.
The following application links to the required information to become a graduated project.
## Prometheus fulfills all the incubating and graduation criteria:
### Document that it is being used successfully in production by at least three independent end users which, in the TOC's judgement, are of adequate quality and scope.
* "Users" section of https://prometheus.io/
* In-progress PR to add an `ADOPTERS.md` file: https://github.com/prometheus/prometheus/pull/3833/files
### Have a healthy number of committers. A committer is defined as someone with the commit bit; i.e., someone who can accept contributions to some or all of the project.
See the current list of [Prometheus team members](https://github.com/prometheus/docs/blob/master/content/governance.md#team-members), who are also committers.
### Demonstrate a substantial ongoing flow of commits and merged contributions.
* https://github.com/prometheus/prometheus/graphs/contributors
In all official Prometheus repositories, we have had 850+ unique contributors with a total of 12k+ commits so far.
### Have committers from at least two organizations.
We have [17 committers](https://github.com/prometheus/docs/blob/master/content/governance.md#team-members) from ~10 organizations:
* [Ben Kochie](https://github.com/SuperQ) ([GitLab](https://about.gitlab.com/))
* [Björn Rabenstein](https://github.com/beorn7) ([SoundCloud](https://soundcloud.com/))
* [Brian Brazil](https://github.com/brian-brazil) ([Robust Perception](https://www.robustperception.io/))
* [Conor Broderick](https://github.com/Conorbro) ([Robust Perception](https://www.robustperception.io/))
* [Fabian Reinartz](https://github.com/fabxc) ([CoreOS](https://coreos.com/) / [Red Hat](https://www.redhat.com/))
* [Frederic Branczyk](https://github.com/brancz) ([CoreOS](https://coreos.com/) / [Red Hat](https://www.redhat.com/))
* [Goutham Veeramachaneni](https://github.com/Gouthamve) (Independent)
* [Johannes Ziemke](https://github.com/discordianfish) ([Latency.at](https://latency.at/) / Independent)
* [Julius Volz](https://github.com/juliusv) (Independent)
* [Matt Layher](https://github.com/mdlayher) ([DigitalOcean](https://www.digitalocean.com/))
* [Matthias Rampke](https://github.com/matthiasr) ([SoundCloud](https://soundcloud.com/))
* [Max Inden](https://github.com/mxinden) ([CoreOS](https://coreos.com/) / [Red Hat](https://www.redhat.com/))
* [Richard Hartmann](https://github.com/RichiH) ([SpaceNet](https://www.space.net/))
* [Steve Durrheimer](https://github.com/sdurrheimer) ([Netapsys](https://www.netapsys.fr/))
* [Stuart Nelson](https://github.com/stuartnelson3) ([DigitalOcean](https://www.digitalocean.com/))
* [Tobias Schmidt](https://github.com/grobie) ([SoundCloud](https://soundcloud.com/))
* [Tom Wilkie](https://github.com/tomwilkie) ([Kausal](https://kausal.co/))
### Have achieved and maintained a Core Infrastructure Initiative Best Practices Badge.
https://bestpractices.coreinfrastructure.org/projects/486
### Adopt the CNCF Code of Conduct.
https://github.com/prometheus/prometheus/blob/master/code-of-conduct.md
### Explicitly define a project governance and committer process. This preferably is laid out in a GOVERNANCE.md file and references an OWNERS.md file showing the current and emeritus committers.
* https://prometheus.io/governance/
### Have a public list of project adopters for at least the primary repo (e.g., ADOPTERS.md or logos on the project website).
See the bottom of https://prometheus.io/. We aim to additionally curate a more extensive list in an `ADOPTERS.md` file in the future. See https://github.com/prometheus/prometheus/pull/3833/files.

# Harbor Incubating Stage Review
Harbor is currently a CNCF sandbox project. Please refer to Harbor's initial
[sandbox proposal](../proposals/harbor.adoc) for discussion on Harbor's
alignment with the CNCF and details on sandbox requirements.
In the time since being accepted as a sandbox project, Harbor has demonstrated
healthy growth and progress.
* [v1.6.0 is the latest
release](https://goharbor.io/blogs/harbor-1.6.0-release/), shipped on
September 7th, marking our 7th major feature release. New features include:
* [Support for hosting Helm charts](https://github.com/goharbor/harbor/issues/4922)
* [Support for RBAC via LDAP groups](https://github.com/goharbor/harbor/issues/3506)
* [Replication filtering via labels](https://github.com/goharbor/harbor/issues/4861)
* [Major refactoring to coalesce to a single PostgreSQL database](https://github.com/goharbor/harbor/issues/4855)
* A [formalized governance
policy](https://github.com/goharbor/community/blob/master/GOVERNANCE.md) has
been approved and instituted for the project, and two new maintainers from
different companies have joined the project to help Harbor continue to grow.
## Incubating Stage Criteria
In addition to sandbox requirements, a project must meet the following
criteria to become an incubation-stage project:
* Document that it is being used successfully in production by at least three
independent end users which, in the TOC's judgement, are of adequate quality
and scope.
* Adopters: [https://github.com/goharbor/harbor/blob/master/ADOPTERS.md](https://github.com/goharbor/harbor/blob/master/ADOPTERS.md)
* Have a healthy number of committers. A committer is defined as someone with
the commit bit; i.e., someone who can accept contributions to some or all of
the project.
* Maintainers of the project are listed in
[https://github.com/goharbor/harbor/blob/master/OWNERS.md](https://github.com/goharbor/harbor/blob/master/OWNERS.md). There are 11 maintainers working on Harbor from 3 different
companies (VMware, Caicloud and Hyland Software)
* Maintainers are added and removed from the project as per the policies
outlined in the project governance:
[https://github.com/goharbor/community/blob/master/GOVERNANCE.md](https://github.com/goharbor/community/blob/master/GOVERNANCE.md).
* Demonstrate a substantial ongoing flow of commits and merged contributions.
* Releases: 7 major releases ([https://github.com/goharbor/harbor/releases](https://github.com/goharbor/harbor/releases))
* Roadmap: [https://github.com/goharbor/harbor/wiki/Harbor-Roadmap](https://github.com/goharbor/harbor/wiki/Harbor-Roadmap)
* Contributors: [https://github.com/goharbor/harbor/graphs/contributors](https://github.com/goharbor/harbor/graphs/contributors)
* Commit activity: [https://github.com/goharbor/harbor/graphs/commit-activity](https://github.com/goharbor/harbor/graphs/commit-activity)
* CNCF DevStats: [https://harbor.devstats.cncf.io/](https://harbor.devstats.cncf.io/)
* [Last 30 days activity on GitHub](https://harbor.devstats.cncf.io/d/8/dashboards?refresh=15m&orgId=1&from=now-30d&to=now-1h)
* [Community Stats](https://harbor.devstats.cncf.io/d/3/community-stats?orgId=1&var-period=d7&var-repo_name=goharbor%2Fharbor)
Further details of Harbor's growth and progress since entering the sandbox
stage as well as use case details from the Harbor community can be found in this
[slide
deck](https://docs.google.com/presentation/d/1aBQnE96kKatc1_t3E97lJBwiWvL-3GTitojuv-nWMuo/).
## Security
Harbor's codebase has been analyzed and reviewed by VMware's internal product
security team.
* Static analysis has been performed on Harbor via
[gosec](https://github.com/securego/gosec)
* Software decomposition via AppCheck, Snyk and retire.js with goal of
discovering outdated or vulnerable packages
* Manual code analysis / review
* Vulnerability assessment via multiple scanners
* Completed threat model
In addition to this security work the Harbor maintainers are partnering with
the CNCF to schedule a third-party security audit of Harbor.

_Linkerd is currently an inception stage CNCF project._
To be accepted to incubating stage, a project must meet the inception stage requirements plus:
* Document that it is being used successfully in production by at least three independent end users which, in the TOC's judgement, are of adequate quality and scope.
* [https://github.com/linkerd/linkerd/blob/master/ADOPTERS.md](https://github.com/linkerd/linkerd/blob/master/ADOPTERS.md)
* (Several non-public adopters that we know of that we can share privately if you desire.)
* Have a healthy number of committers. A committer is defined as someone with the commit bit; i.e., someone who can accept contributions to some or all of the project.
* [https://github.com/linkerd/linkerd/blob/master/MAINTAINERS.md](https://github.com/linkerd/linkerd/blob/master/MAINTAINERS.md)
* Demonstrate a substantial ongoing flow of commits and merged contributions
* [https://github.com/linkerd/linkerd/releases](https://github.com/linkerd/linkerd/releases)
* [https://github.com/linkerd/linkerd/graphs/contributors](https://github.com/linkerd/linkerd/graphs/contributors)

# Rook Incubating Stage Review
Rook is currently a sandbox stage project. Please refer to Rook's [sandbox stage proposal](../proposals/rook.adoc) ("inception" at time of acceptance) for details on the sandbox requirements.
In the time since being accepted to the sandbox stage, Rook has demonstrated healthy growth and progress.
Two releases were completed, starting with v0.7 on February 21st and then v0.8 on July 18th.
With those releases, Rook extended beyond just orchestration of Ceph and has built a framework of reusable specs, logic and policies for [cloud-native storage orchestration of other providers](https://blog.rook.io/rooks-framework-for-cloud-native-storage-orchestration-c66278014df7).
Operators and CRD types were added for both CockroachDB and Minio in the v0.8 release, initial support for NFS is nearly complete, and other storage providers are also in the works.
The CRD types and support for Ceph has graduated to Beta in the v0.8 release, reflecting the increased maturity that has only been possible from impressive engagement from the community.
Other big features for the Ceph operator include automatic horizontal scaling of storage resources, an improved security model, and support for new environments such as OpenShift.
A [formalized governance policy](https://github.com/rook/rook/blob/master/GOVERNANCE.md) has been approved and instituted for the project, and a [new maintainer](https://github.com/rook/rook/blob/master/OWNERS.md) has also been added to help the project continue to grow.
## Incubating Stage Criteria
To be accepted to incubating stage, a project must meet the sandbox stage requirements plus:
* Document that it is being used successfully in production by at least three independent end users which, in the TOC's judgement, are of adequate quality and scope.
* Adopters: [https://github.com/rook/rook/blob/master/ADOPTERS.md](https://github.com/rook/rook/blob/master/ADOPTERS.md)
* Have a healthy number of committers. A committer is defined as someone with the commit bit; i.e., someone who can accept contributions to some or all of the project.
* Maintainers of the project are listed in [https://github.com/rook/rook/blob/master/OWNERS.md](https://github.com/rook/rook/blob/master/OWNERS.md).
* Maintainers are added and removed from the project as per the policies outlined in the project governance: [https://github.com/rook/rook/blob/master/GOVERNANCE.md](https://github.com/rook/rook/blob/master/GOVERNANCE.md).
* Demonstrate a substantial ongoing flow of commits and merged contributions.
* Releases: [https://github.com/rook/rook/releases](https://github.com/rook/rook/releases)
* Roadmap: [https://github.com/rook/rook/blob/master/ROADMAP.md](https://github.com/rook/rook/blob/master/ROADMAP.md)
* Contributors: [https://github.com/rook/rook/graphs/contributors](https://github.com/rook/rook/graphs/contributors)
* Commit activity: [https://github.com/rook/rook/graphs/commit-activity](https://github.com/rook/rook/graphs/commit-activity)
* CNCF DevStats: [https://rook.devstats.cncf.io/](https://rook.devstats.cncf.io/)
* [Last 30 days activity on Github](https://rook.devstats.cncf.io/d/8/dashboards?refresh=15m&orgId=1&from=now-30d&to=now-1h)
* [Community Stats](https://rook.devstats.cncf.io/d/3/community-stats?orgId=1)
Further details of Rook's growth and progress since entering the sandbox stage as well as use case details from the Rook community can be found in this [slide deck](https://docs.google.com/presentation/d/1DOgAlX0RyB8hzD7KbmXK4pKu9hFFPY9WiLv-LEy38jo/edit?usp=sharing).

View file

@ -1,20 +0,0 @@
# CNCF Working Groups
## Introduction
The purpose of a working group is to study and report on a particular question and to make recommendations based on its findings. The end result of a working group may be a new project proposal, a landscape, a whitepaper, or a report detailing its findings. Working groups are not intended to host a full project or specification. Working groups can be formed at any time, but must be sponsored by a TOC member and approved by a super majority vote of the CNCF TOC. The TOC can also shut down a working group with a super majority vote.
## Process
If you would like to submit a working group proposal, please submit a pull request to the working groups folder. As an example, you can see the other working group proposals here: https://github.com/cncf/toc/tree/master/workinggroups
You will also have to present to the CNCF TOC and the wider community before your WG proposal is voted upon by the TOC and community. You can request a presentation by filing an issue here: https://github.com/cncf/toc/issues
At a minimum, please include this information:
* Goals
* Non-goals
* Mailing list information
* The location of meetings / agenda / notes
* Initial interested parties to show that there are multiple people across multiple orgs interested
* The chair(s) and TOC sponsor being explicitly listed so they are discoverable

View file

@ -1,34 +0,0 @@
# CNCF CI WG Proposal
## TOC Sponsor
Camille Fournier
## Objective
Explore the intersection of cloud native and CI technology. Discuss options for taking some of the cluster resources and dedicating them to supporting an open source CI system that can be used by CNCF projects for their CI needs.
## Goals and Expected Outcomes
* We believe that it would be good for us to provide CI services to projects that need or want to use them
* We need to understand what, if any, SLA we can promise projects for this system
* We need to scope what features this system will provide; there is some concern around trying to promise testing the full cross-product of integration with all of the different CNCF projects
* We want to come away with a recommendation for staffing to support building out this initiative given project needs and desired SLA
## Non Goals
* Run CI for CNCF projects
* Recommend CI systems for CNCF projects
## Initial Interested Parties
* Camille Fournier (@skamille) [LEAD]
* Chris McClimans (@hh) [Hippie Hacker]
* Denver Williams (@dlx)
* Taylor Carpenter (@taylor)
* Lucina Stricko (@lixuna)
* Jonathan Boulle (@jonboulle)
* Clint Byrum (@spamaps)
* Quinton Hoole (@quintonhoole)
* Quanyi Ma (@genedna)
* Gianluca Arbezzano (@gianarb)

View file

@ -1,53 +0,0 @@
# CNCF Networking WG Proposal
## TOC Sponsor
Ken Owens
## Objective
Explore cloud native networking technology and concepts around the container networking interface (CNI).
## Goals and Expected Outcomes
* Recommend that CNI be adopted as an official CNCF project and as the initial network interface specification, focused on connectivity and portability.
* Adopt implementations of CNI that have traction in the cloud native ecosystem
* Define cloud native networking patterns
* Define the Policy framework and network services model
* A network plugin author should be able to write one “plugin” (a container) that “just works” across all container orchestration (CO) systems (a rough invocation sketch follows this list).
* Enable container orchestrators to present network interfaces to users in a portable manner, focused initially on connectivity.
* Support dynamic provisioning and deprovisioning network primitives through this interface.
* Support groups of uniquely addressable entities that can communicate amongst each other. An entity could be an individual container, a machine, or some other network service (e.g. load balancing, firewall, VPN, QoS, service discovery). Containers can be conceptually added to or removed from one or more networks.
* Focused on cloud native application patterns. This includes VM-based, bare-metal-based, and FaaS-based (TBD) patterns.
* Define policy framework for network isolation
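The goals above center on the CNI contract between a container runtime and a network plugin. As a rough, non-authoritative sketch (not part of this proposal), the Go program below shows how a runtime might invoke a plugin per the CNI spec: the network configuration is supplied on stdin, the operation is selected through CNI_* environment variables, and the plugin replies on stdout with a JSON result describing interfaces and IP allocations. The plugin path, container ID, netns path, and configuration values are illustrative assumptions.

```go
package main

// Minimal sketch of a CNI plugin invocation, assuming the "bridge" and
// "host-local" reference plugins are installed under /opt/cni/bin.
// All names and values below are illustrative, not taken from this proposal.

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Illustrative network configuration (passed to the plugin on stdin).
	netConf := []byte(`{
	  "cniVersion": "0.3.1",
	  "name": "example-net",
	  "type": "bridge",
	  "bridge": "cni0",
	  "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
	}`)

	cmd := exec.Command("/opt/cni/bin/bridge") // assumed plugin location
	cmd.Stdin = bytes.NewReader(netConf)
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",                   // operation, e.g. ADD or DEL
		"CNI_CONTAINERID=example-container", // illustrative container ID
		"CNI_NETNS=/var/run/netns/example",  // network namespace to attach
		"CNI_IFNAME=eth0",                   // interface name inside the netns
		"CNI_PATH=/opt/cni/bin",             // search path for chained (e.g. IPAM) plugins
	)

	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "plugin failed: %v\n%s", err, out)
		os.Exit(1)
	}
	// On success the plugin prints a JSON result describing interfaces and IPs.
	fmt.Printf("plugin result: %s\n", out)
}
```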
## Non Goals
* Provide or dictate an implementation.
* This includes dictating plugin lifecycle management
* Plugin distribution
* Protocol-level authn/authz
* Plugin discovery
* Not going to define a single network standard for all
* Not going to focus on individual projects per service, but rather on projects that model network services and patterns; not going to be prescriptive, but rather to offer reference guidelines and patterns
## Interested Parties
* Ken Owens (@kenowens12) [lead]
* Ben Hindman (@benh)
* Alexis Richardson (@monadic)
* Jonathan Boulle (@jonboulle)
* Lee Calcote (@lcalcote)
* Madhu Venugopal
* Jie Yu
* Deepak Bansal
* John Gossman
* Christopher Liljenstolpe (@liljenstolpe)
* Bryan Boreham (@bboreham)
* Minhan Xia (@freehan)
* Daniel Nardo (@dnardo)
* Pengfei Ni (@feiskyer)
* John Belamaric (@johnbelamaric)
* Thomas Graf (@tgraf__)
* Jason Venner (@jvmirdel)
* Doug Davis (@duglin)

View file

@ -1,39 +0,0 @@
# CNCF Serverless WG Proposal
## TOC Sponsor
Ken Owens
## Objective
Explore the intersection of cloud native and serverless technology.
## Goals and Expected Outcomes
* Produce a whitepaper
* Produce a serverless landscape
* Explore specifications for serverless to propose to the CNCF
* Bring recommendations to the TOC on serverless projects in CNCF
## Non Goals
* Define one serverless project to rule them all
## Initial Interested Parties
* Sarah Allen (Google)
* Chris Aniszczyk (CNCF)
* Chad Arimura (Oracle)
* Ben Browning (Red Hat)
* Lee Calcote (SolarWinds)
* Amir Chaudhry (Docker)
* Doug Davis (IBM)
* Louis Fourie (Huawei)
* Antonio Gulli (Google)
* Yaron Haviv (iguazio)
* Daniel Krook (IBM)
* Orit Nissan-Messing (iguazio)
* Chris Munns (AWS)
* Ken Owens (Mastercard)
* Mark Peek (VMWare)
* Cathy Zhang (Huawei)

View file

@ -1,32 +0,0 @@
# CNCF Storage WG Proposal
## TOC Sponsor
Ben Hindman
## Objective
Explore cloud native storage technology and concepts.
## Goals and Expected Outcomes
* Produce a landscape
* Explore specifications for storage to propose to the CNCF
* Bring recommendations to the TOC on storage projects in CNCF
## Non Goals
* N/A
## Initial Interested Parties
* Ben Hindman (@benh) [lead]
* Steven Tan (@stevenphtan)
* Clinton Kitson (@clintonskitson)
* Alex Chircop (@chira001)
* Steve Wong (@cantbewong)
* Venkat Ramakrishnan (@katkrish)
* Gou Rao (@gourao)
* Vinod Jayaraman (@jvinod)
* Allen Samuels (@allensamuels)
* Yaron Haviv (@yaronhaviv)