OpenStack really is a painful thing to work with.

2. Most OpenStack problems are caused by mistakes in the configuration files.

4. Nova installation and configuration

Controller node

1. Create the databases and the user/password used to connect

mysql -uroot -pzx123456

CREATE DATABASE nova_api;

CREATE DATABASE nova;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'zx123456';

flush privileges;
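The CREATE/GRANT statements above differ only in database name and host, and quoting mistakes here are a common source of trouble. A small helper can emit the full SQL for any service (a sketch; the `sql_for_service` name and its argument layout are mine, not part of the guide):

```shell
# Emit the CREATE DATABASE / GRANT statements for an OpenStack service.
# Usage: sql_for_service <user> <password> <db>...   (pipe the output into mysql)
sql_for_service() {
  local user=$1 pass=$2 db host
  shift 2
  for db in "$@"; do
    echo "CREATE DATABASE IF NOT EXISTS ${db};"
    for host in localhost '%'; do
      echo "GRANT ALL PRIVILEGES ON ${db}.* TO '${user}'@'${host}' IDENTIFIED BY '${pass}';"
    done
  done
  echo "FLUSH PRIVILEGES;"
}

# Example for nova; on the controller you would pipe this into
# `mysql -uroot -pzx123456` instead of just printing it.
sql_for_service nova zx123456 nova_api nova
```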

2. Check the result

select user,host from mysql.user where user='nova';

3. Create the service entity, keystone user, and role assignment

Create the nova service entity

openstack service create --name nova --description "OpenStack Compute" compute

Create the user

openstack user create --domain default --password-prompt nova

## prompts for the nova password ##

Bind the user, role, and project

openstack role add --project service --user nova admin

Create the compute API endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
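The three endpoint-create calls differ only in the interface name, so a loop avoids copy/paste slips. A sketch (the `create_endpoints` helper and its `DRY_RUN` switch are mine; with `DRY_RUN` unset to 0 on a real controller, admin credentials must already be sourced):

```shell
# Create public/internal/admin endpoints for one service in RegionOne.
# DRY_RUN=1 (the default) only prints the commands instead of running them.
create_endpoints() {
  local service=$1 url=$2 iface
  for iface in public internal admin; do
    if [ "${DRY_RUN:-1}" = 1 ]; then
      echo openstack endpoint create --region RegionOne "$service" "$iface" "$url"
    else
      openstack endpoint create --region RegionOne "$service" "$iface" "$url"
    fi
  done
}

# Dry-run example for the compute endpoints above:
create_endpoints compute 'http://controller:8774/v2.1/%\(tenant_id\)s'
```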

4. View the result

openstack catalog list

5. Install the nova packages

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler -y

6. Edit the nova configuration file

vim /etc/nova/nova.conf

[DEFAULT]

enabled_apis = osapi_compute,metadata

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 192.168.22.202

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]

connection = mysql+pymysql://nova:zx123456@controller/nova_api

[database]

# nova database connection

connection = mysql+pymysql://nova:zx123456@controller/nova

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = zx123456

[keystone_authtoken]

# keystone authentication settings

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = zx123456

[glance]

api_servers = http://controller:9292

[vnc]

vncserver_listen = 192.168.22.202

vncserver_proxyclient_address = 192.168.22.202

[oslo_concurrency]

# lock file location

lock_path = /var/lib/nova/tmp
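Since most failures in this guide trace back to configuration typos, a quick check that the expected keys exist in nova.conf can save a restart cycle. A sketch (the `check_conf` helper is mine, and the sample file under /tmp stands in for the real /etc/nova/nova.conf):

```shell
# Verify that every expected key appears in an ini-style config file.
# Prints each missing key and returns non-zero if any are absent.
check_conf() {
  local file=$1 missing=0 key
  shift
  for key in "$@"; do
    if ! grep -Eq "^[[:space:]]*${key}[[:space:]]*=" "$file"; then
      echo "missing: $key"
      missing=1
    fi
  done
  return $missing
}

# Example against a sample fragment of the controller nova.conf:
cat > /tmp/nova.conf.sample <<'EOF'
[DEFAULT]
enabled_apis = osapi_compute,metadata
auth_strategy = keystone
my_ip = 192.168.22.202
EOF
check_conf /tmp/nova.conf.sample enabled_apis auth_strategy my_ip && echo "all keys present"
```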

7. Sync the databases

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

## warning messages can be ignored ##

8. Verify

mysql -uroot -pzx123456

use nova;

show tables;

9. Start the services and enable them at boot

# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node

1. Install the nova-compute service

yum install openstack-nova-compute -y

2. Edit the configuration file

vim /etc/nova/nova.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

# compute node IP

my_ip = 192.168.22.203

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = zx123456

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

# compute node management network IP

vncserver_proxyclient_address = 192.168.22.203

novncproxy_base_url = http://192.168.22.202:6080/vnc_auto.html

[glance]

api_servers = http://controller:9292

[oslo_concurrency]

# lock file

lock_path = /var/lib/nova/tmp

egrep -c '(vmx|svm)' /proc/cpuinfo

## check whether your compute node supports hardware acceleration for virtual machines ##

If it returns 0, you need the following setting:

[libvirt]

virt_type = qemu
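The decision above can be wrapped in a helper that picks nova's virt_type from the cpuinfo flags. A sketch (the `pick_virt_type` name and the idea of passing a file path, so it can be tested against sample input, are mine):

```shell
# Pick nova's virt_type from hardware virtualization support.
# Reads CPU flags from the given cpuinfo file (default /proc/cpuinfo):
# vmx (Intel) or svm (AMD) present -> kvm, otherwise fall back to qemu.
pick_virt_type() {
  local cpuinfo=${1:-/proc/cpuinfo}
  if grep -Eq '(vmx|svm)' "$cpuinfo"; then
    echo kvm
  else
    echo qemu
  fi
}

# Example with sample inputs:
printf 'flags\t: fpu vmx sse2\n' > /tmp/cpuinfo.hw
printf 'flags\t: fpu sse2\n' > /tmp/cpuinfo.nohw
pick_virt_type /tmp/cpuinfo.hw     # hardware acceleration available
pick_virt_type /tmp/cpuinfo.nohw   # no acceleration, use qemu
```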

3. Start the services

# systemctl enable libvirtd.service openstack-nova-compute.service

# systemctl start libvirtd.service openstack-nova-compute.service

Verification

Run the following commands on the controller:

# source /root/admin-openrc

# openstack compute service list



2. Summary

OpenStack Mitaka installation and configuration tutorial


1. Use neutron agent-list to check whether each component is working properly.

2. Environment setup:

1. On all nodes, disable the firewall, NetworkManager, and SELinux, and set each hostname to the node's name

2. Install the chrony time synchronization server

# yum install chrony -y

3. Configure on the controller node: allow 192.168.21.0/22

4. On the compute nodes, sync to the controller node's time: server controller iburst

5. Start the service and enable it at boot:

#systemctl enable chronyd.service

#systemctl start chronyd.service

6. Prepare the Aliyun and EPEL repositories

#yum install -y centos-release-openstack-mitaka

#yum install https://repos.fedorapeople.org/repos/openstack/openstack-mitaka/rdo-release-mitaka-6.noarch.rpm -y

#yum install python-openstackclient -y          ####client required by OpenStack####

#yum install openstack-selinux -y

#yum upgrade

#reboot

7. Install the database (mariadb)       ####controller###

#yum install mariadb mariadb-server python2-PyMySQL -y

######database configuration######

###create and edit: /etc/my.cnf.d/openstack.cnf

[mysqld]

default-storage-engine = innodb

innodb_file_per_table

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8

######start the service######

# systemctl enable mariadb.service

# systemctl start mariadb.service

######initialize the database######

#mysql_secure_installation

####check that the port is listening: netstat -lnp | grep 3306###

8. Install rabbitmq (rabbitmq uses port 5672) ##controller##

# yum install rabbitmq-server -y                    ###install###

# systemctl enable rabbitmq-server.service                  ###enable at boot###

# systemctl start rabbitmq-server.service                        ###start the service###

# rabbitmqctl add_user openstack zx123456                ###add the openstack user with password zx123456###

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"          ###set permissions for the new user###

9. Install memcached (uses port 11211)   ##controller##

# yum install memcached python-memcached -y                        ###install###

# systemctl enable memcached.service                  ###enable at boot###

# systemctl start memcached.service                      ###start the service###

10. Install keystone ##controller##

######log in to the database and create the keystone database:

#mysql -uroot -pzx123456

CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'zx123456';

       ###set the authorized user and password###

**Generate a random value for admin_token: openssl rand -hex 10**
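Token generation can be scripted with a fallback for hosts without openssl (a sketch; `gen_token` is my name, and the /dev/urandom fallback is an assumption about the host, not part of the guide):

```shell
# Generate a 10-byte (20 hex characters) random token for keystone's
# admin_token. Uses openssl when available, falling back to /dev/urandom.
gen_token() {
  if command -v openssl >/dev/null 2>&1; then
    openssl rand -hex 10
  else
    head -c 10 /dev/urandom | od -An -tx1 | tr -d ' \n'
  fi
}

TOKEN=$(gen_token)
echo "admin_token = $TOKEN"
```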

# yum install openstack-keystone httpd mod_wsgi -y          ##controller##

Configure: vi /etc/keystone/keystone.conf

admin_token = <random value> (mainly for security; you can also skip replacing it)

connection = mysql+pymysql://keystone:zx123456@192.168.22.202/keystone

provider = fernet

#Initialize the identity service database:

#su -s /bin/sh -c "keystone-manage db_sync" keystone

#Initialize the Fernet keys:

#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

#Configure the Apache HTTP service

Configure /etc/httpd/conf/httpd.conf:

ServerName controller

Create /etc/httpd/conf.d/wsgi-keystone.conf with the following content

Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

Start the Apache HTTP service:

# systemctl enable httpd.service

# systemctl start httpd.service

#Create the service entity and API endpoints

Configure the authentication token:

#export OS_TOKEN=2e8cd090b7b50499d5f9

Configure the endpoint URL:

#export OS_URL=http://controller:35357/v3

Configure the identity API version:

#export OS_IDENTITY_API_VERSION=3

#Create the service entity for the identity service:

#openstack service create --name keystone --description "OpenStack Identity" identity

#Create the identity service API endpoints:

#openstack endpoint create --region RegionOne identity public http://controller:5000/v3

#openstack endpoint create --region RegionOne identity internal http://controller:5000/v3

#openstack endpoint create --region RegionOne identity admin http://controller:35357/v3

#Create the domain, projects, users, and roles

Create the "default" domain

#openstack domain create --description "Default Domain" default

Create the admin project

#openstack project create --domain default --description "Admin Project" admin

Create the admin user

#openstack user create --domain default --password-prompt admin

 ## prompts for the admin user password ##

Create the admin role

openstack role create admin

Add the admin role to the admin project and user

openstack role add --project admin --user admin admin

Create the "service" project

openstack project create --domain default --description "Service Project" service

Create the "demo" project

openstack project create --domain default --description "Demo Project" demo

Create the "demo" user

openstack user create --domain default --password-prompt demo

## prompts for the demo user password ##

Create the user role

openstack role create user

Add the "user" role to the "demo" project and user

openstack role add --project demo --user demo user

Verification:

Disable the temporary authentication token mechanism:

Edit /etc/keystone/keystone-paste.ini and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections

Unset the OS_TOKEN and OS_URL environment variables

unset OS_TOKEN OS_URL

Use the admin user to check whether a token can be obtained:

#openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue


Create environment variable files for the admin and demo projects

admin project: add the following

vim admin-openrc

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=zx123456

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

demo project:

vim demo-openrc

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=demo

export OS_USERNAME=demo

export OS_PASSWORD=zx123456

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2
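Since admin-openrc and demo-openrc differ only in the project, user, and password, a helper keeps the two files consistent. A sketch (`write_openrc` is my name, and the /tmp paths are for illustration; in practice you would write them to /root as the guide does):

```shell
# Write an openrc credentials file for one project/user pair.
write_openrc() {
  local file=$1 project=$2 user=$3 password=$4
  cat > "$file" <<EOF
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=$project
export OS_USERNAME=$user
export OS_PASSWORD=$password
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
}

# Generate both credential files shown above:
write_openrc /tmp/admin-openrc admin admin zx123456
write_openrc /tmp/demo-openrc demo demo zx123456
```

After generating, `source` the file and run `openstack token issue` as in the verification step below.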

Load the environment variables and obtain a token:

#source admin-openrc

#openstack token issue



     
OpenStack really is a painful thing. Fortunately there are automated deployment tools that make deployment convenient, but for learning, deploy by hand the first time: manual deployment gives a much better sense of OpenStack's workflow and of how the components relate to one another.

6. Dashboard installation and configuration

Controller node

1. Install the dashboard

yum install openstack-dashboard -y

2. Adjust the corresponding settings

vim /etc/openstack-dashboard/local_settings

Modify the following settings:

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*', ]

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '192.168.22.202:11211',
    },
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

TIME_ZONE = "UTC"

3. Restart the apache and memcached services

systemctl enable httpd.service memcached.service

systemctl restart httpd.service memcached.service

systemctl status httpd.service memcached.service

Verification

http://192.168.22.202/dashboard

 

3. Glance installation and configuration

Install glance on the controller node

1. Log in to MySQL and create the database and user

mysql -uroot -pzx123456

CREATE DATABASE glance;        ##create the glance database##

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'zx123456';

2. Create the keystone authentication objects: user, password, and role assignment

source admin-openrc

Create the glance user

openstack user create --domain default --password-prompt glance

## prompts for the glance password ##

Add the admin role to the glance user and service project

openstack role add --project service --user glance admin

3. Create the "glance" service entity

openstack service create --name glance --description "OpenStack Image" image

4. Create the image service API endpoints

openstack endpoint create --region RegionOne image public http://controller:9292

openstack endpoint create --region RegionOne image internal http://controller:9292

openstack endpoint create --region RegionOne image admin http://controller:9292

5. Install the glance packages   #controller#

yum install openstack-glance -y

6. Configure glance-api

vim /etc/glance/glance-api.conf

[database]

connection =
mysql+pymysql://glance:zx123456@controller/glance

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = zx123456

[paste_deploy]

flavor = keystone   # specifies the authentication mechanism

[glance_store]

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance

7. Configure /etc/glance/glance-registry.conf

vim /etc/glance/glance-registry.conf

[database]

connection =
mysql+pymysql://glance:zx123456@controller/glance

[keystone_authtoken]

auth_uri =
http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = zx123456

[paste_deploy]

flavor = keystone

8. Create the image storage directory and change its owner

mkdir /var/lib/glance

chown glance. /var/lib/glance

9. Generate the database schema

su -s /bin/sh -c "glance-manage db_sync" glance

10. Enable at boot and start

#systemctl enable openstack-glance-api.service openstack-glance-registry.service

#systemctl start openstack-glance-api.service openstack-glance-registry.service

View the service endpoint information

#openstack catalog list

Verification

#source admin-openrc

#wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

##download the image##

openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public

##upload the image##

openstack image list     ##check the result##


On fixing the NovaException: Unexpected vif_type=binding_failed error seen in the instance boot logs:


四服务运维后决然要看日志(grep -i ‘error’)

1. Lab environment:

OS: centos7.2-minimal

Network: management network on eth0, VM instance network on eth1

controller: 192.168.22.202 eth0
            192.168.30.202 eth1

compute01: 192.168.22.203 eth0
           192.168.30.203 eth1


If the logs show no errors but the service still misbehaves, for example instances cannot obtain an IP.

5. Neutron installation and configuration

Controller node

1. Create the neutron database and grant privileges

mysql -uroot -pzx123456

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'zx123456';

2. Source the admin credentials and create the neutron user

#source admin-openrc

#openstack user create --domain default --password-prompt neutron

## prompts for the neutron password ##

3. Add the admin role to the neutron user

openstack role add --project service --user neutron admin

4. Create the "neutron" service entity

openstack service create --name neutron --description "OpenStack Networking" network

5. Create the network service API endpoints

openstack endpoint create --region RegionOne network public http://controller:9696

openstack endpoint create --region RegionOne network internal http://controller:9696

openstack endpoint create --region RegionOne network admin http://controller:9696

6. Network option: Self-service networks

Install the neutron packages:

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

7. Neutron service configuration file

mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

vim /etc/neutron/neutron.conf

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

rpc_backend = rabbit

auth_strategy = keystone

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

[database]

connection = mysql+pymysql://neutron:zx123456@controller/neutron  # change to your own database password

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = zx123456   # change to your rabbitmq password

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = zx123456   # change to your own neutron service password

[nova]

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = zx123456  # change to your own nova service password

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

ML2 plugin configuration:

mv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak

vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = *

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = True

linuxbridge agent configuration file

mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings = provider:eth1   # set to the provider network's NIC name; here it is eth1

[vxlan]

enable_vxlan = True

local_ip = 192.168.22.202  # the management network IP of this node (192.168.22.202)

l2_population = True

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

L3 agent configuration file:

mv /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak

vim /etc/neutron/l3_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

external_network_bridge =    # leave empty

dhcp agent configuration

mv /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak

vim /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = True

Configure the metadata agent

mv /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak

vim /etc/neutron/metadata_agent.ini

[DEFAULT]

nova_metadata_ip = controller

metadata_proxy_shared_secret = zx123456   # change to your own METADATA_SECRET (or leave it); must match the nova configuration

Configure the nova service to use the network service

vim /etc/nova/nova.conf   # add the following

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = zx123456   # change to your own neutron service password

service_metadata_proxy = True

metadata_proxy_shared_secret = zx123456   # must match the METADATA secret above

8. Create a symlink for the ML2 plugin

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

9. Sync the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

10. Restart nova-api

systemctl restart openstack-nova-api.service

11. Start the neutron services and enable them at boot

# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Compute node configuration

1. Install the neutron services

yum install openstack-neutron-linuxbridge ebtables ipset

2. Configure

Neutron service configuration

mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

vim /etc/neutron/neutron.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = zx123456   # change to your rabbit password

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = zx123456        # change to your own neutron service password

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

linuxbridge agent configuration

mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings = provider:eth1  # change to the provider network NIC; here it is eth1

[vxlan]

enable_vxlan = True

local_ip = 192.168.22.203   # change to this host's management network IP, 192.168.22.203

l2_population = True

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the nova service to use the network service

vim /etc/nova/nova.conf  # add the following

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = zx123456    # change to your own neutron service password

3. Restart the nova service

systemctl restart openstack-nova-compute.service

4. Start neutron

systemctl enable neutron-linuxbridge-agent.service

systemctl start neutron-linuxbridge-agent.service

Verification

Run on the controller node:

source /root/admin-openrc

neutron ext-list



neutron agent-list



The Neutron service installation is complete.

1. In Neutron's configuration file, auth_uri must be replaced with identity_uri; (other services can use auth_url, but the neutron service must use identity_uri, or it will not run properly)

Problems encountered with the neutron service:

 

1. When you run into a problem, stay calm; don't give up, and think it through.

A screenshot after success:

1. Notes

 

If the state is abnormal, check whether the nodes' clocks are in sync. (When the logs show no errors but the state is abnormal, it is almost always caused by clocks being out of sync.)

Steps to remove a neutron network:
1.router-gateway-clear
2.router-interface-delete
3.subnet-delete
4.router-delete

3. Try starting each service several times to see whether it reports errors; some services show OK at startup but are not actually running.

1. When deploying cinder, change volumes_dir=$state_path/volumes in the configuration file on the cinder volumes node to volumes_dir=/etc/cinder/volumes.
2. In /etc/rc.d/init.d/openstack-cinder-volume, keep only --config-file $config and delete --config-file $distconfig to avoid mistakes.
eg: daemon --user cinder --pidfile $pidfile "$exec --config-file $config --logfile $logfile &>/dev/null & echo \$! > $pidfile"
3. In the cinder volume node configuration, volume_group = stack-volumes-lvmdriver-1 means the default VG is stack-volumes-lvmdriver; before starting cinder volume you must first create a volume group named stack-volumes-lvmdriver.

 

5. The clocks on all hosts must be synchronized.

2. Each configuration file's owner and group must be the corresponding service's runtime user; otherwise the service cannot read it and will fail to start;

     
I followed the official OpenStack manual. Installing keystone, glance, and compute went smoothly, but neutron was a misery. Googling articles about neutron, they all just say how complicated it is, which is quite a blow to a newcomer. (No way around it; you still have to go step by step.) I failed many times along the way, and after two weeks finally got it working.

       openstack icehouse


System platform: centos6.7 x86

1. If the above error appears, first check whether the ml2 configuration file is correct.
2. Check whether the network node's metadata_agent.ini is wrong; metadata is responsible for saving neutron operations to the database (mistakes in the metadata_agent configuration do not show up in the logs, e.g. writing admin_tenant_name = service as dmin_tenant_name = service).
3. Disable the VM's networking and see whether it runs normally; if it does, the problem is in neutron; if it still does not, you need to look elsewhere.

Problems encountered installing glance:
