On-Premises VMs
- 1: Cloudstack
- 2: Kubernetes on DC/OS
- 3: oVirt
1 - Cloudstack
CloudStack is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the cloud being used and what images are made available. CloudStack also has a Vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt based recipes.
CoreOS templates for CloudStack are built nightly. CloudStack operators need to register this template in their cloud before proceeding with these Kubernetes deployment instructions.
This guide uses a single Ansible playbook, which is completely automated and can deploy Kubernetes on a CloudStack based cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.
Prerequisites
sudo apt-get install -y python-pip libssl-dev
sudo pip install cs
sudo pip install sshpubkeys
sudo apt-get install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
On the CloudStack server you also have to install libselinux-python:
yum install libselinux-python
cs is a Python module for the CloudStack API.
Set your CloudStack endpoint, API keys, and the HTTP method to use.
You can define them as the environment variables CLOUDSTACK_ENDPOINT, CLOUDSTACK_KEY, CLOUDSTACK_SECRET, and CLOUDSTACK_METHOD.
Or create a ~/.cloudstack.ini file:
[cloudstack]
endpoint = <your cloudstack api endpoint>
key = <your api access key>
secret = <your api secret key>
method = post
We need to use the HTTP POST method to pass the large user data to the CoreOS instances.
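If you prefer the environment variable approach mentioned above over the ini file, a minimal sketch could look like the following (the values are placeholders for your own endpoint and keys):
# Export CloudStack API credentials for the cs module
export CLOUDSTACK_ENDPOINT=<your cloudstack api endpoint>
export CLOUDSTACK_KEY=<your api access key>
export CLOUDSTACK_SECRET=<your api secret key>
export CLOUDSTACK_METHOD=post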
Clone the playbook
git clone https://github.com/apachecloudstack/k8s
cd k8s
Create a Kubernetes cluster
You simply need to run the playbook.
ansible-playbook k8s.yml
Some variables can be edited in the k8s.yml file.
vars:
ssh_key: k8s
k8s_num_nodes: 2
k8s_security_group_name: k8s
k8s_node_prefix: k8s2
k8s_template: <templatename>
k8s_instance_type: <serviceofferingname>
This will start a Kubernetes master node and a number of compute nodes (by default 2).
The instance_type and template are cloud specific; edit them to specify your CloudStack template and instance type (i.e. service offering).
Check the tasks and templates in roles/k8s if you want to modify anything.
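If you would rather not edit the file, ansible-playbook also accepts variable overrides on the command line via --extra-vars (-e); for example (the values here are purely illustrative):
ansible-playbook k8s.yml -e "k8s_num_nodes=3 k8s_node_prefix=demo"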
Once the playbook has finished, it will print out the IP of the Kubernetes master:
TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ********
SSH to it using the key that was created and using the core user.
ssh -i ~/.ssh/id_rsa_k8s core@<master IP>
And you can list the machines in your cluster:
fleetctl list-machines
MACHINE IP METADATA
a017c422... <node #1 IP> role=node
ad13bf84... <master IP> role=master
e9af8293... <node #2 IP> role=node
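You could also check the cluster from your workstation with kubectl, assuming the API server in this deployment is reachable on the insecure port 8080 (an assumption; adjust the address and port to your setup):
kubectl -s http://<master IP>:8080 get nodes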
Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level |
---|---|---|---|---|---|---|
CloudStack | Ansible | CoreOS | flannel | docs | | Community (@Guiques) |
2 - Kubernetes on DC/OS
Mesosphere provides an easy option to provision Kubernetes onto DC/OS, offering:
- Pure upstream Kubernetes
- Single-click cluster provisioning
- Highly available and secure by default
- Kubernetes running alongside fast-data platforms (e.g. Akka, Cassandra, Kafka, Spark)
Official Mesosphere Guide
The canonical source for getting started on DC/OS is located in the quickstart repo.
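As a rough sketch of what that quickstart boils down to, the Kubernetes package is typically installed from the DC/OS catalog with the DC/OS CLI; the package name below is an assumption, so follow the quickstart repo for the authoritative steps:
# Install the Kubernetes package from the DC/OS catalog (package name assumed)
dcos package install kubernetes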
3 - oVirt
oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center.
oVirt Cloud Provider Deployment
The oVirt cloud provider allows you to easily discover and automatically add new VM instances as nodes to your Kubernetes cluster. At the moment there are no community-supported or pre-loaded VM images including Kubernetes, but it is possible to import or install Project Atomic (or Fedora) in a VM to generate a template. Any other distribution that includes Kubernetes may work as well.
It is mandatory to install the ovirt-guest-agent in the guests for the VM IP address and hostname to be reported to ovirt-engine and ultimately to Kubernetes.
Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.
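For illustration, a VM could be instantiated from such a template through the oVirt REST API; this is only a sketch, reusing the endpoint and credentials shown in the next section, and the VM, cluster, and template names are placeholders:
# Create a VM named k8s-node-1 from the Kubernetes template (placeholder names)
curl -k -u 'admin@internal:admin' -X POST -H 'Content-Type: application/xml' \
  -d '<vm><name>k8s-node-1</name><cluster><name>Default</name></cluster><template><name>kubernetes-template</name></template></vm>' \
  https://localhost:8443/ovirt-engine/api/vms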
Using the oVirt Cloud Provider
The oVirt Cloud Provider requires access to the oVirt REST API to gather the proper information; the required credentials should be specified in the ovirt-cloud.conf file:
[connection]
uri = https://localhost:8443/ovirt-engine/api
username = admin@internal
password = admin
In the same file it is possible to specify (using the filters section) what search query to use to identify the VMs to be reported to Kubernetes:
[filters]
# Search query used to find nodes
vms = tag=kubernetes
In the above example all the VMs tagged with the kubernetes label will be reported as nodes to Kubernetes.
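Such a tag can be assigned from the oVirt administration UI or, as a rough sketch, through the REST API; here the kubernetes tag is assumed to already exist in the engine and <vm-id> is a placeholder:
# Attach the existing "kubernetes" tag to a VM (placeholder VM id)
curl -k -u 'admin@internal:admin' -X POST -H 'Content-Type: application/xml' \
  -d '<tag><name>kubernetes</name></tag>' \
  https://localhost:8443/ovirt-engine/api/vms/<vm-id>/tags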
The ovirt-cloud.conf file then must be specified in kube-controller-manager:
kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ...
oVirt Cloud Provider Screencast
This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster.
Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level |
---|---|---|---|---|---|---|
oVirt | | | | docs | | Community (@simon3z) |