I have this exact same problem. I installed a brand-new copy of Ubuntu 16.04 on my server, updated all the packages, and then the only thing I installed was conjure-up, which I used to deploy OpenStack on LXD. lxc list showed all of the instances up and running, I could log in to the OpenStack dashboard, and it was great. Then I rebooted... now lxc list shows all but one instance as not running. @gangstaluv, to answer your questions in my environment:
Does juju status return anything?
$ juju status
Model                             Controller                Cloud/Region         Version
conjure-up-openstack-novalxd-561  conjure-up-localhost-1e7  localhost/localhost  2.1.0.1

App                    Version      Status   Scale  Charm                  Store       Rev  OS      Notes
ceph-mon               10.2.5       active   0/3    ceph-mon               jujucharms  7    ubuntu
ceph-osd               10.2.5       active   0/3    ceph-osd               jujucharms  239  ubuntu
ceph-radosgw           10.2.5       active   0/1    ceph-radosgw           jujucharms  245  ubuntu
glance                 12.0.0       active   0/1    glance                 jujucharms  254  ubuntu
keystone               9.2.0        active   0/1    keystone               jujucharms  262  ubuntu
lxd                    2.0.9        active   0/1    lxd                    jujucharms  7    ubuntu
mysql                  5.6.21-25.8  active   0/1    percona-cluster        jujucharms  247  ubuntu
neutron-api            8.3.0        active   0/1    neutron-api            jujucharms  247  ubuntu
neutron-gateway        8.3.0        active   0/1    neutron-gateway        jujucharms  232  ubuntu
neutron-openvswitch    8.3.0        active   0/1    neutron-openvswitch    jujucharms  238  ubuntu
nova-cloud-controller  13.1.2       active   0/1    nova-cloud-controller  jujucharms  292  ubuntu
nova-compute           13.1.2       active   0/1    nova-compute           jujucharms  262  ubuntu
ntp                                 waiting  0      ntp                    jujucharms  17   ubuntu
openstack-dashboard    9.1.0        active   0/1    openstack-dashboard    jujucharms  243  ubuntu  exposed
rabbitmq-server        3.5.7        active   0/1    rabbitmq-server        jujucharms  59   ubuntu

Unit                     Workload  Agent  Machine  Public address  Ports           Message
ceph-mon/0               unknown   lost   0        10.0.8.183                      agent lost, see 'juju show-status-log ceph-mon/0'
ceph-mon/1               unknown   lost   1        10.0.8.209                      agent lost, see 'juju show-status-log ceph-mon/1'
ceph-mon/2               unknown   lost   2        10.0.8.141                      agent lost, see 'juju show-status-log ceph-mon/2'
ceph-osd/0               unknown   lost   3        10.0.8.159                      agent lost, see 'juju show-status-log ceph-osd/0'
ceph-osd/1               unknown   lost   4        10.0.8.115                      agent lost, see 'juju show-status-log ceph-osd/1'
ceph-osd/2               unknown   lost   5        10.0.8.216                      agent lost, see 'juju show-status-log ceph-osd/2'
ceph-radosgw/0           unknown   lost   6        10.0.8.48       80/tcp          agent lost, see 'juju show-status-log ceph-radosgw/0'
glance/0                 unknown   lost   7        10.0.8.61       9292/tcp        agent lost, see 'juju show-status-log glance/0'
keystone/0               unknown   lost   8        10.0.8.117      5000/tcp        agent lost, see 'juju show-status-log keystone/0'
mysql/0                  unknown   lost   9        10.0.8.123                      agent lost, see 'juju show-status-log mysql/0'
neutron-api/0            unknown   lost   10       10.0.8.96       9696/tcp        agent lost, see 'juju show-status-log neutron-api/0'
neutron-gateway/0        unknown   lost   11       10.0.8.140                      agent lost, see 'juju show-status-log neutron-gateway/0'
nova-cloud-controller/0  unknown   lost   12       10.0.8.238      8774/tcp        agent lost, see 'juju show-status-log nova-cloud-controller/0'
nova-compute/0           unknown   lost   13       10.0.8.190                      agent lost, see 'juju show-status-log nova-compute/0'
  lxd/0                  unknown   lost            10.0.8.190                      agent lost, see 'juju show-status-log lxd/0'
  neutron-openvswitch/0  unknown   lost            10.0.8.190                      agent lost, see 'juju show-status-log neutron-openvswitch/0'
openstack-dashboard/0    unknown   lost   14       10.0.8.111      80/tcp,443/tcp  agent lost, see 'juju show-status-log openstack-dashboard/0'
rabbitmq-server/0        unknown   lost   15       10.0.8.110      5672/tcp        agent lost, see 'juju show-status-log rabbitmq-server/0'

Machine  State  DNS         Inst id         Series  AZ
0        down   10.0.8.183  juju-ec5bf1-0   xenial
1        down   10.0.8.209  juju-ec5bf1-1   xenial
2        down   10.0.8.141  juju-ec5bf1-2   xenial
3        down   10.0.8.159  juju-ec5bf1-3   xenial
4        down   10.0.8.115  juju-ec5bf1-4   xenial
5        down   10.0.8.216  juju-ec5bf1-5   xenial
6        down   10.0.8.48   juju-ec5bf1-6   xenial
7        down   10.0.8.61   juju-ec5bf1-7   xenial
8        down   10.0.8.117  juju-ec5bf1-8   xenial
9        down   10.0.8.123  juju-ec5bf1-9   xenial
10       down   10.0.8.96   juju-ec5bf1-10  xenial
11       down   10.0.8.140  juju-ec5bf1-11  xenial
12       down   10.0.8.238  juju-ec5bf1-12  xenial
13       down   10.0.8.190  juju-ec5bf1-13  xenial
14       down   10.0.8.111  juju-ec5bf1-14  xenial
15       down   10.0.8.110  juju-ec5bf1-15  xenial

Relation                 Provides               Consumes               Type
mon                      ceph-mon               ceph-mon               peer
mon                      ceph-mon               ceph-osd               regular
mon                      ceph-mon               ceph-radosgw           regular
ceph                     ceph-mon               glance                 regular
ceph                     ceph-mon               nova-compute           regular
cluster                  ceph-radosgw           ceph-radosgw           peer
identity-service         ceph-radosgw           keystone               regular
cluster                  glance                 glance                 peer
identity-service         glance                 keystone               regular
shared-db                glance                 mysql                  regular
image-service            glance                 nova-cloud-controller  regular
image-service            glance                 nova-compute           regular
amqp                     glance                 rabbitmq-server        regular
cluster                  keystone               keystone               peer
shared-db                keystone               mysql                  regular
identity-service         keystone               neutron-api            regular
identity-service         keystone               nova-cloud-controller  regular
identity-service         keystone               openstack-dashboard    regular
lxd-migration            lxd                    lxd                    peer
lxd                      lxd                    nova-compute           regular
cluster                  mysql                  mysql                  peer
shared-db                mysql                  neutron-api            regular
shared-db                mysql                  nova-cloud-controller  regular
cluster                  neutron-api            neutron-api            peer
neutron-plugin-api       neutron-api            neutron-gateway        regular
neutron-plugin-api       neutron-api            neutron-openvswitch    regular
neutron-api              neutron-api            nova-cloud-controller  regular
amqp                     neutron-api            rabbitmq-server        regular
cluster                  neutron-gateway        neutron-gateway        peer
quantum-network-service  neutron-gateway        nova-cloud-controller  regular
amqp                     neutron-gateway        rabbitmq-server        regular
neutron-plugin           neutron-openvswitch    nova-compute           regular
amqp                     neutron-openvswitch    rabbitmq-server        regular
cluster                  nova-cloud-controller  nova-cloud-controller  peer
cloud-compute            nova-cloud-controller  nova-compute           regular
amqp                     nova-cloud-controller  rabbitmq-server        regular
lxd                      nova-compute           lxd                    subordinate
neutron-plugin           nova-compute           neutron-openvswitch    subordinate
compute-peer             nova-compute           nova-compute           peer
amqp                     nova-compute           rabbitmq-server        regular
ntp-peers                ntp                    ntp                    peer
cluster                  openstack-dashboard    openstack-dashboard    peer
cluster                  rabbitmq-server        rabbitmq-server        peer
You can run lxc start <instance-id> to bring them back up.
When I try that, I get an error that probably explains why things didn't just come back up on their own:
$ lxc start juju-ec5bf1-0
error: Missing parent 'conjureup0' for nic 'eth1'
Try `lxc info --show-log juju-ec5bf1-0` for more info
I'm not sure how to proceed. Is there something else I could check? I've re-installed Ubuntu and conjure-up in case I had done something wrong, but every time it works perfectly until a reboot and then ends up in this state again.
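For anyone else hitting the same error, a quick way to confirm on the host that the parent bridge named in the error really is missing (this assumes the standard iproute2 and bridge-utils tools, which may not be present on a minimal install):

# does the bridge the profile expects actually exist on the host?
$ ip link show conjureup0
# list all bridges the kernel knows about (brctl comes from bridge-utils)
$ brctl show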
EDIT-1: I didn't think to include the lxc info output that the error told me to look at; adding it now.
$ lxc info --show-log juju-ec5bf1-0
Name: juju-ec5bf1-0
Remote: unix:/var/lib/lxd/unix.socket
Architecture: x86_64
Created: 2017/02/20 04:12 UTC
Status: Stopped
Type: persistent
Profiles: default, juju-conjure-up-openstack-novalxd-561
Log:
lxc 20160220041252.329 WARN lxc_start - start.c:signal_handler:322 - Invalid pid for SIGCHLD. Received pid 437, expected pid 452.
EDIT-2: I just fixed mine!
After much research I discovered the lxc profile show command:
$ lxc profile show juju-conjure-up-openstack-novalxd-561
config:
  boot.autostart: "true"
  linux.kernel_modules: openvswitch,nbd,ip_tables,ip6_tables,netlink_diag
  raw.lxc: |
    lxc.aa_profile=unconfined
    lxc.mount.auto=sys:rw
  security.nesting: "true"
  security.privileged: "true"
description: ""
devices:
  eth0:
    mtu: "9000"
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  eth1:
    mtu: "9000"
    name: eth1
    nictype: bridged
    parent: conjureup0
    type: nic
  root:
    path: /
    type: disk
name: juju-conjure-up-openstack-novalxd-561
From the output of lxc info --show-log juju-ec5bf1-0 I surmised that juju (or some other component) had noticed my second NIC (I'm running this on real hardware, whereas Mirto Busico is on a VM, if I read that correctly) and wired the profile to a bridge called conjureup0 that no longer exists. I suspect there is a bug somewhere, which is why the bridge was not created. As I saw it, there were two ways to fix this: 1) create the missing bridge, or 2) remove the eth1 device from the profile. I chose the latter.
$ lxc profile device remove juju-conjure-up-openstack-novalxd-561 eth1
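(As an aside, it should also be possible to just start the stopped containers by hand at this point instead of rebooting; a rough, untested sketch that loops over whatever lxc list reports as STOPPED:)

# untested sketch: start every container LXD currently reports as STOPPED
$ for c in $(lxc list -c ns | awk '/STOPPED/ {print $2}'); do lxc start "$c"; done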
I rebooted, and now lxc list shows all my instances up and running as expected, and my dashboard works again.
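For completeness, option 1 would presumably have looked something like this. I haven't tested it, and the lxc network subcommand only exists in LXD 2.3 and newer; on older releases the bridge would have to be created via lxd init or by hand:

# untested: recreate the bridge that the profile's eth1 device expects (LXD 2.3+)
$ lxc network create conjureup0
# confirm LXD can now see the bridge
$ lxc network show conjureup0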