yet another oh-my-zsh theme with vault and hostname display

Customized oh-my-zsh theme with ruby and vault address support

Create the following theme as .oh-my-zsh/custom/themes/ya.zsh-theme, then update the ~/.zshrc file with:

ZSH_THEME="ya"

#ya.zsh-theme
# vim:ft=zsh ts=2 sw=2 sts=2

rvm_current() {
  rvm current 2>/dev/null
}

rbenv_version() {
  rbenv version 2>/dev/null | awk '/[0-9]/{print $1}'
}

PROMPT='
%{$fg_bold[green]%}${PWD/#$HOME/~} $(hostname)%{$reset_color%}$(git_prompt_info) ⌚ %{$fg_bold[red]%}%*%{$reset_color%}
$ '

# Must use Powerline font, for \uE0A0 to render.
ZSH_THEME_GIT_PROMPT_PREFIX=" on %{$fg[magenta]%}\uE0A0 "
ZSH_THEME_GIT_PROMPT_SUFFIX="%{$reset_color%}"
ZSH_THEME_GIT_PROMPT_DIRTY="%{$fg[red]%}!"
ZSH_THEME_GIT_PROMPT_UNTRACKED="%{$fg[green]%}?"
ZSH_THEME_GIT_PROMPT_CLEAN=""

if [ -e ~/.rvm/bin/rvm-prompt ]; then
  RPROMPT='%{$fg_bold[red]%}‹$(rvm_current)›%{$reset_color%} $VAULT_ADDR %{$fg_bold[blue]%}$HOME%{$reset_color%}'
else
  if which rbenv &> /dev/null; then
    RPROMPT='%{$fg_bold[red]%}$(rbenv_version)%{$reset_color%} $VAULT_ADDR %{$fg_bold[blue]%} $HOME %{$reset_color%}'
  fi
fi
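
After saving the theme and setting ZSH_THEME in ~/.zshrc, the configuration has to be reloaded into the current shell; a minimal sketch (nothing beyond the files mentioned above is assumed):

# reload the zsh configuration so the "ya" theme takes effect
source ~/.zshrc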

A method to manage Postgres databases created in Cloud Foundry

Most Cloud Foundry based platforms have a Postgres service in their catalog, and applications can use the credentials bound to their running environment to access it. Sometimes, however, an administrator needs to access the database instance directly, whether to debug an issue or to check a specific record.

TeamPostgreSQL is a project from Webworks (webworks.dk) that provides excellent web-UI based database management: using nothing but a browser, a user can manage a PostgreSQL database.

  1. Download the release
/usr/bin/curl -# http://cdn.webworks.dk/download/teampostgresql_multiplatform.zip -o teampostgresql_multiplatform.zip && \
unzip teampostgresql_multiplatform.zip && rm -f teampostgresql_multiplatform.zip
  2. Generate the deployment manifest file for TeamPostgreSQL

The application uses the Java buildpack for compilation and hosting, and JBP_CONFIG_JAVA_MAIN needs to be specified, otherwise the HTTP service will not start correctly.

---
applications:
- name: psqlgui
  memory: 1G
  buildpack: https://github.com/cloudfoundry/java-buildpack.git
  host: psqlgui
  env:
    JBP_CONFIG_JAVA_MAIN: '{arguments: "-cp WEB-INF/lib/log4j-1.2.17.jar-1.0.jar:WEB-INF/classes:WEB-INF/lib/* dbexplorer.TeamPostgreSQL $PORT"}'

  3. Deploy

Use the following script to deploy the application. Before executing it, you need to obtain a Cloud Foundry account.

#!/bin/bash

# Directory containing this script and manifest.yml
WORK_DIR=$(dirname "${0}")
# Directory where the TeamPostgreSQL release was unpacked
RUN_DIR=$TMPDIR

# Make sure the Cloud Foundry CLI is available
if ! cf --version > /dev/null 2>&1; then
    echo "CloudFoundry cli not installed"
    exit 1
fi

cf push -f "$WORK_DIR/manifest.yml" -p "$RUN_DIR/teampostgresql/webapp"

After the deployment, the application can be accessed through its web UI, where you configure a database connection using the credentials of the PostgreSQL database instance.
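
One way to obtain those credentials is to create a service key on the Postgres service instance; a minimal sketch, assuming the instance is named my-postgres (both the instance name and the key name are placeholders):

# create a key and print the credentials (host, port, username, password, database name)
cf create-service-key my-postgres teampostgresql-key
cf service-key my-postgres teampostgresql-key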

How to host your own Silex on CloudFoundry based PaaS

Why

Silex is a free and open source website builder in the cloud: create websites directly in the browser without writing code. It is suitable for professional designers who want to produce great websites without constraints, and it is also known as the HTML5 editor. We want to leverage Cloud Foundry to host our own Silex site, and that is why you are here! 😉

Prerequisite

You need a Cloud Foundry account, whether Bluemix or PCF; both are instances of the Cloud Foundry stack. For details about them, please refer to the links below. The commands you need to know are the following:

  • cf login
  • cf push
  • cf set-env
  • cf start
  • (Optional) To activate the GitHub service you need to define the env vars GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET (create a GitHub app for this)

Host an instance of Silex in Cloud Foundry

  1. Clone this repository, and do not forget the submodules (cloud-explorer and unifile)
$ git clone --recursive -b cf-integration https://github.com/silexlabs/Silex.git
  2. Go to Silex's directory.
$ cd Silex
  3. Log in to Bluemix
$ cf login -a api.ng.bluemix.net -u <your username here>
and enter the password in the interactive shell.

$ cf push -m 2G <your app name here> -b https://github.com/dgodd/jdk-buildpack -b nodejs_buildpack --no-start
... (wait for the push to finish) ...
$ cf set-env <your app name here> GITHUB_CLIENT_ID <GITHUB_CLIENT_ID value here>
$ cf set-env <your app name here> GITHUB_CLIENT_SECRET <GITHUB_CLIENT_SECRET value here>

$ cf start <your app name here>

After the deployment you can visit your own Silex site.

Recap

The reason for using multiple buildpacks for this application is that Silex uses the Google Closure Compiler to build its JavaScript, which requires a JDK during the application staging phase. The Java buildpack above is not officially supported by the community, because the community has not yet decided on the best general way to support use cases like this.

Links

Kubernetes Easy Way -- consul and vault

This article describes how to easily build a Kubernetes cluster environment with Terraform and Vault.

OS      Ubuntu 18.04.1 LTS
CPU     Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
MEM     32G
DISK    2T Extra Disk

Installing containerized Vault and Consul

$ mkdir -p /hdd/k8s-easy-way/{consul_data,vault_data,vault_config}

# Replace 192.168.1.10 with the address of the machine you are installing on

docker run -d --name consul-server -v "/hdd/k8s-easy-way/consul_data":/consul/data --net=host -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' consul agent -server -bind=192.168.1.10 -retry-join=192.168.1.10 -bootstrap-expect=1 -ui -client=192.168.1.10
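
Before starting Vault it is worth checking that the Consul container is healthy; a minimal sketch using Consul's HTTP status API (the address follows the example above):

# should return the address of the current Consul leader
$ curl http://192.168.1.10:8500/v1/status/leader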

# Create the Vault configuration file

$ cat vault_config/config.hcl

storage "consul" {
  address = "192.168.1.10:8500"
  path    = "vault"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}

$ docker run -d --name vault-server -p 8200:8200 --cap-add=IPC_LOCK -v $(pwd)/vault_data:/vault/logs -v $(pwd)/vault_config:/vault/config -e 'VAULT_LOCAL_CONFIG={"default_lease_ttl": "168h", "max_lease_ttl": "720h"}' vault server

$ export VAULT_ADDR=http://192.168.1.10:8200

# Initialize the Vault environment
$ vault operator init

# Check the Vault server logs
$ docker logs vault-server -f
...
...
authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variable:

$ export VAULT_ADDR='http://127.0.0.1:8200'

The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.

Unseal Key: yzDDBRsxjw2E4yjT2hCJVoLom2hSMiCSp1wxWBv4pso=
Unseal Key: yzDDBRsxjw2E4yjdaCJVoLom2hSMiCSp1wxWBv4pso=
Unseal Key: ysfw2E4yjT2hCJVoLom2hSMiCSp1wxWBv4pso=
Unseal Key: yzDDBRsxjw2E4yjd2VoLom2hSMiCSp1wxWBv4pso=
Unseal Key: yzDs3T2hCJVoLom2hSMiCSp1wxWBv4pso=
Root Token: a95bb2c6-2641-d0be-820e-3ad27c06f800

==> Vault server started! Log data will stream in below:

...
...

Initially Vault is in the sealed state and has to be unsealed with the unseal keys obtained above (five keys are generated, three of which are needed to meet the threshold). Run the following command three times, entering a different unseal key at each prompt:
$ vault operator unseal
...
$ vault operator unseal
Key                    Value
---                    -----
Seal Type              shamir
Sealed                 false
Total Shares           5
Threshold              3
Version                0.11.1
Cluster Name           vault-cluster-eced0aa0
Cluster ID             a5fe7988-04e2-d339-c072-653e4d962e4e
HA Enabled             true
HA Cluster             n/a
HA Mode                standby
Active Node Address    <none>

After the last unseal operation you can see that Sealed is now false. At this point log in to Vault and enter the root token when prompted:

$ vault login
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                c82805e8-580d-9652-6683-83dd900cdcd7
token_accessor       4ef08e65-8e22-8598-b756-971dd7404034
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

Enable the kv version 2 secrets engine

$ vault secrets enable -version=2 kv
Success! Enabled the kv secrets engine at: kv/

What makes the version 2 kv secrets engine special is that every change to a secret path gets a corresponding version record in Vault, much like a git commit, so earlier values can be recovered. The platform administrator can configure how many of these copies are retained.

$ vault kv put /kv/foo val=bar
Key              Value
---              -----
created_time     2018-09-13T13:59:23.162374714Z
deletion_time    n/a
destroyed        false
version          1

$ vault kv put /kv/foo val=bar-v2
Key              Value
---              -----
created_time     2018-09-13T13:59:31.499522355Z
deletion_time    n/a
destroyed        false
version          2

$ vault kv get /kv/foo
====== Metadata ======
Key              Value
---              -----
created_time     2018-09-13T13:59:31.499522355Z
deletion_time    n/a
destroyed        false
version          2

=== Data ===
Key    Value
---    -----
val    bar-v2
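
Because kv v2 keeps every version, an earlier value can still be read back, and the number of retained versions can be capped; a minimal sketch (the path and version number follow the example above, and the -max-versions value is an arbitrary assumption):

# read the first version written above
$ vault kv get -version=1 /kv/foo

# optionally limit how many versions Vault keeps for this path
$ vault kv metadata put -max-versions=5 /kv/foo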

So far we have set up a persistent Vault service backed by Consul as its storage node. The next chapter will describe how to generate certificates for each service node of the Kubernetes control plane.

Fixing VirtualBox VM network connectivity issues on macOS

Recently I have been using VirtualBox VMs on a Mac for Kubernetes-related development. Sometimes, after the physical machine reboots, it can no longer reach the VMs (ICMP is refused), even though networking inside each VM and between VMs works fine. Rebooting the Mac fixes it, but that is hardly a solution. After some investigation it turned out that the route on the Mac to the VMs' host-only network had been lost, which is what caused the connection failures.

Check the host-only network interfaces on the physical machine:

$ VBoxManage list hostonlyifs
Name: vboxnet0
GUID: 786f6276-656e-4074-8000-0a0027000000
DHCP: Disabled
IPAddress: 192.168.50.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:00
MediumType: Ethernet
Wireless: No
Status: Up
VBoxNetworkName: HostInterfaceNetworking-vboxnet0

Name: vboxnet1
GUID: 786f6276-656e-4174-8000-0a0027000001
DHCP: Disabled
IPAddress: 192.168.59.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:01
MediumType: Ethernet
Wireless: No
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet1

Name: vboxnet2
GUID: 786f6276-656e-4274-8000-0a0027000002
DHCP: Disabled
IPAddress: 192.168.99.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:02
MediumType: Ethernet
Wireless: No
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet2

Add a DHCP server:

VBoxManage dhcpserver modify --ifname vboxnet0 --ip 192.168.50.2 --netmask 255.255.255.0 --lowerip 192.168.50.100 --upperip 192.168.50.199 --enable

Check the routing information for the VM network's gateway:

$ route get 192.168.50.1
route to: 192.168.50.1
destination: default
mask: default
gateway: 192.168.1.1
interface: en0
flags: <UP,GATEWAY,DONE,STATIC,PRCLONING>
recvpipe sendpipe ssthresh rtt,msec rttvar hopcount mtu expire
0 0 0 0 0 0 1500 0
You can see that the gateway for this network is 192.168.1.1, which is the Mac's own gateway, so all traffic to the virtual network leaves through that gateway and never reaches the VMs.

Add a route to the virtual network's address range:

$ sudo route -nv add -net 192.168.50 -interface vboxnet0

u: inet 192.168.50.0; u: link vboxnet0:a.0.27.0.0.0; u: inet 255.255.255.0; RTM_ADD: Add Route: len 140, pid: 0, seq 1, errno 0, flags:<UP,STATIC>
locks: inits:
sockaddrs: <DST,GATEWAY,NETMASK>
192.168.50.0 vboxnet0:a.0.27.0.0.0 255.255.255.0
add net 192.168.50: gateway vboxnet0

$ route get 192.168.50.114
route to: 192.168.50.114
destination: 192.168.50.0
mask: 255.255.255.0
interface: vboxnet0
flags: <UP,DONE,CLONING,STATIC,PRCLONING>
recvpipe sendpipe ssthresh rtt,msec rttvar hopcount mtu expire
0 0 0 0 0 0 1500 -438
Now you can see that addresses in this virtual network are all reached through vboxnet0.

$ ping 192.168.50.115
PING 192.168.50.115 (192.168.50.115): 56 data bytes
64 bytes from 192.168.50.115: icmp_seq=0 ttl=64 time=0.287 ms
^C
--- 192.168.50.115 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.287/0.287/0.287/0.000 ms
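
Since the route tends to disappear after a host reboot, the check and the fix above can be combined into a small helper; a rough sketch (the interface name and subnet follow the example above, so adjust them to your setup):

#!/bin/bash
# re-add the host-only route if it is missing after a reboot
HOSTONLY_IF=vboxnet0
HOSTONLY_NET=192.168.50

if ! route get ${HOSTONLY_NET}.1 | grep -q "interface: ${HOSTONLY_IF}"; then
  sudo route -nv add -net ${HOSTONLY_NET} -interface ${HOSTONLY_IF}
fi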

CloudStack: enabling debug mode on a live system and building individual sub-projects

This article was originally written on 12 February 2014, not long after I had started CloudStack-related development, when the project required debugging a live environment. CloudStack itself is written in Java and runs inside a Tomcat container, so debugging can be enabled through Tomcat's configuration.

Edit the Tomcat startup script
Adjust this according to how Tomcat is installed; the default script is /usr/sbin/tomcat6.
Add the parameters -server -Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8787 to this startup file; after the change it looks roughly like this:

-Djava.io.tmpdir="$CATALINA_TMPDIR"
-Djava.util.logging.config.file="${CATALINA_BASE}/conf/logging.properties"
-Djava.util.logging.manager="org.apache.juli.ClassLoaderLogManager"
-server -Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8787 \

Restart the Tomcat container, then attach with a normal Eclipse remote debug configuration to start debugging.

Building an individual sub-project

During project development, modifying a single component is perfectly normal. CloudStack is assembled, packaged and released with Maven. Taking cloud-server as an example, to make a modified project take effect you can build just that project, replace the original jar with the newly built one, and restart the management node.
cd ~/cloudstack4.1.0
mvn clean
mvn install -pl :cloud-server
The build progress is shown in the terminal, and at the end you should see output like this:

Total time: 3:01.915s
Finished at: Wed Feb 12 14:56:24 CST 2014
Final Memory: 26M/233M

Replace the original cloud-server-4.1.0.jar with the newly built jar and restart the management node.
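
A rough sketch of that replace-and-restart step, assuming a package-based install and that it is run from the CloudStack source root (the deployed jar location, the built jar path under server/target, and the cloudstack-management service name are all assumptions; adjust them to your deployment):

# back up the deployed jar, drop in the rebuilt one, then restart the management server
cp /usr/share/cloudstack-management/webapps/client/WEB-INF/lib/cloud-server-4.1.0.jar /tmp/cloud-server-4.1.0.jar.bak
cp server/target/cloud-server-4.1.0.jar /usr/share/cloudstack-management/webapps/client/WEB-INF/lib/
service cloudstack-management restart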


Version information

Software      Version
CloudStack    4.1.0
Tomcat        6
CloudStack: VM fails to start with error: VDI not available

While using CloudStack you may run into the following problem: after logging into a VM over SSH and issuing a shutdown command (shutdown -h now), a VM backed by NFS can no longer be started through CloudStack.

Cause
XenServer has lost its connection to the storage device or LUN.

Solution
On the CloudStack management node, find the UUID of the VM's VDI device.
The log file is located by default at /var/log/cloud/management/management-server.log; a possible VDI UUID looks like 6f97582c-xxxx-xxxx-xxxx-9aa5686bcbd36, and it is accompanied by log entries such as VM are failing to start with “errorInfo: [SR_BACKEND_FAILURE_46, The VDI is not available”.
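
One quick way to pull the offending entries out of that log is a simple grep; a minimal sketch (the log path and error string are the ones quoted above):

# show the most recent occurrences of the VDI error, along with the nearby UUIDs
grep -n "SR_BACKEND_FAILURE_46" /var/log/cloud/management/management-server.log | tail -n 5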

Log in to the XenServer host and run the following commands.

# Find the details of the VDI with that UUID, including its sr-uuid and name-label
xe vdi-list uuid=<VDI UUID found in the log file> params=sr-uuid,name-label
xe vdi-forget uuid=<VDI UUID found in the log file>
xe sr-scan uuid=<SR UUID>
xe vdi-param-set uuid=<VDI UUID found in the log file> name-label=<name-label found earlier>

Restart the CloudStack management node to resynchronize the data.


Version information

Software      Version
CloudStack    4.2.1
XenServer     6.2.0
Configuring LDAPS on a standalone Windows Server 2016 and SSL client connections from Ubuntu

Configure Active Directory, a Domain Controller and a CA on Windows Server 2016, and enable LDAPS authentication.

  1. Install Windows Server 2016
    Download the ISO image from Microsoft's official site and install it; the download comes with a 180-day trial period. During installation, make sure to choose the Datacenter edition with the Desktop Experience.
    The detailed steps are omitted; they are easy to find online.

  2. Install and configure AD
    After the system boots, install and configure Active Directory; there is nothing special about this process.

  3. Install and configure the CA
    The detailed steps are omitted.

  4. Export the certificate
    Note that what gets exported is the AD server's certificate.

  5. Configure the Ubuntu client

    1. Install OpenSSL and the CA management tools
      apt update && apt install openssl ca-certificates ldap-utils

    2. Update the CA database
      mkdir /usr/share/ca-certificates/devbox/
      Upload the exported server certificate into the directory created above.
      Edit the CA configuration to load the new certificate:

      root@a9ad14a3288f:/# tail -n 2 /etc/ca-certificates.conf
      mozilla/thawte_Primary_Root_CA_-_G3.crt
      devbox/server_certificate.cer

      root@a9ad14a3288f:/# update-ca-certificates
      Updating certificates in /etc/ssl/certs...
      1 added, 0 removed; done.
      Running hooks in /etc/ca-certificates/update.d...
      done.

      Verify the certificate configuration:

      openssl s_client -connect WIN-2PHSJD5NH12.ad.devbox.int:636 -showcerts

      Connect to the AD server over LDAPS:

      root@a9ad14a3288f:/# ldapsearch -x -H ldaps://WIN-2PHSJD5NH12.ad.devbox.int -D 'administrator@ad.devbox.int' -w 'Passw0rd' -b "DC=ad,dc=devbox,dc=int" "(objectclass=user)" dn

K8S Ingress Controller: Traefik

A Kubernetes Ingress Controller is, as the name suggests, the module responsible for handling Ingress requests. If you are familiar with Cloud Foundry, you can think of it as playing the role of the Gorouter: in a PaaS/CaaS every backend is actually served by multiple containers, and distributing user requests across those containers according to some algorithm, round-robin or least-connection, is exactly the Ingress Controller's job.
For the definition of Ingress in Kubernetes, see the official documentation.

Traefik is an HTTP reverse proxy and load balancer that can front microservices deployed on container orchestration platforms such as Kubernetes, Mesos and Docker. This article describes how to use Traefik on Minikube, together with nip.io, to expose microservices.


Prerequisites

[1] A Kubernetes environment; this article uses Minikube as the example
[2] The kubectl client installed
[3] Internet access

Installing Traefik

The official documentation covers the installation steps; only the key ones are extracted here. If you want the full details, head over to the official docs.

RBAC configuration

If RBAC is enabled on the Kubernetes cluster, Traefik must be granted a ClusterRole and a ClusterRoleBinding to be able to use the Kubernetes API:

kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml

Deploying Traefik

DaemonSet and Deployment differ in the following ways:

  1. Compared with a DaemonSet, a Deployment scales better; with a DS there is exactly one pod per node.

  2. Through taints and tolerations, a DS can run the service on dedicated machines.

  3. Beyond that, a DS can bind directly to ports 80 and 443 on any node, whereas a Deployment needs a Service object in front of it.

Install via DaemonSet:

kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml
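
To confirm that the DaemonSet pods came up, they can be listed by label; a minimal sketch (the kube-system namespace and the k8s-app=traefik-ingress-lb label follow the upstream example manifest, so adjust them if you changed the YAML):

kubectl get pods -n kube-system -l k8s-app=traefik-ingress-lb -o wide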

Configure the Ingress and enable the UI

Traefik provides a web UI for inspecting its configuration:

kubectl apply -f https://raw.githubusercontent.com/yacloud-io/k8s-hands-on/master/ingress-controller/traefik-ui.yml

Open http://traefik-ui.192.168.99.100.nip.io/dashboard/#/ in a browser.

Traefik UI with nip.io

Install terrific and configure an Ingress for it

terrific is a web server that, when accessed, displays the IP address of the container/host it runs on.

Install the terrific Deployment:

kubectl apply -f https://raw.githubusercontent.com/yacloud-io/k8s-hands-on/master/ingress-controller/terrific-deployment.yml

Configure a Service for it:
kubectl apply -f https://raw.githubusercontent.com/yacloud-io/k8s-hands-on/master/ingress-controller/terrific-service.yml

Configure the Ingress rule:
kubectl apply -f https://raw.githubusercontent.com/yacloud-io/k8s-hands-on/master/ingress-controller/nip.io-ingress.yml

Once the Deployment's pods have started, visit http://my-terrific.192.168.99.100.nip.io/ to see the result.

terrific ingress
Refresh the browser and you can see that requests land on different containers.
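
The same round-robin behaviour can be observed from the command line; a minimal sketch (the hostname is the nip.io one used above):

# each request should report a different pod IP
for i in 1 2 3 4; do curl -s http://my-terrific.192.168.99.100.nip.io/; echo; done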

Summary

Traefik acts as a load balancer for Kubernetes, giving the applications deployed on it external access. By default Kubernetes offers NodePort and LoadBalancer access, but when IP resources are limited, and especially when exposing traffic through NodePort on worker nodes that do not have public IPs, Traefik solves the problem neatly while also providing load balancing.
