I. What Is S3DIS
S3DIS, the Stanford Large-Scale 3D Indoor Spaces Dataset, is a large-scale indoor 3D dataset published by Stanford University. It covers six large indoor areas collected from three different buildings and provides densely annotated, colored 3D point clouds, together with registered imagery from the broader Stanford 2D-3D-S release it belongs to. S3DIS is a rich data resource and is widely used in research on indoor scene segmentation, multi-view image generation, and indoor navigation.
II. What the S3DIS Dataset Contains
S3DIS consists of six large indoor areas (Area 1 through Area 6) from three buildings, spanning common indoor scenes such as offices, classrooms, conference rooms, hallways, and restrooms. In total the dataset contains roughly 270 rooms and several hundred million annotated points. The data is organized room by room, and every point carries one of 13 semantic labels:
ceiling, floor, wall, beam, column, window, door, table, chair, sofa, bookcase, board, clutter
Beyond the raw point clouds, each annotated object instance is stored as its own point-cloud file, and an axis-aligned version of the data is also provided, which supports research on indoor scene reconstruction and object recognition.
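Each room in the public S3DIS release is distributed as plain-text files in which every line holds one point's coordinates and color (`x y z r g b`). A minimal loader for one such file might look like this (`load_room_points` is an illustrative helper, not part of any official API):

```python
import numpy as np

def load_room_points(txt_path):
    """Load one S3DIS point-cloud file where each line is: x y z r g b."""
    pts = np.loadtxt(txt_path)            # shape (N, 6)
    xyz = pts[:, :3].astype(np.float32)   # 3D coordinates (meters)
    rgb = pts[:, 3:6].astype(np.uint8)    # per-point color, 0-255
    return xyz, rgb
```

The same function works for whole-room files and for the per-object files under each room's `Annotations` folder, since both use the same six-column layout.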
III. Application Areas
Because S3DIS offers realistic, diverse scenes with clear annotations, it is widely used in indoor scene segmentation, multi-view image generation, indoor navigation, and related research areas.
IV. Usage Examples
1. Indoor scene segmentation
Indoor scene segmentation is the most common use of S3DIS. The example below sketches training a segmentation model on S3DIS with TensorFlow 1.x and a PointNet++ (single-scale grouping) backbone; the network and data-loading functions are left as stubs to fill in.
import os

import numpy as np
import tensorflow as tf

## PointNet++ (SSG) segmentation network
def pointnet2_ssg(inputs, is_training, bn_decay=None):
    # TODO: add the PointNet++ SSG layers
    return seg_pred

## Data loading
def load_data(data_dir):
    # TODO: load the S3DIS point clouds and labels
    return data, label

if __name__ == '__main__':
    data_dir = 'data/s3dis'
    model_dir = 'model/s3dis'
    if not os.path.exists(model_dir):
        os.makedirs(model_dir)

    batch_size = 32
    num_point = 4096
    num_classes = 13
    learning_rate = 0.001
    max_epoch = 250

    tf.reset_default_graph()
    pointclouds_pl = tf.placeholder(tf.float32, shape=(batch_size, num_point, 6))
    labels_pl = tf.placeholder(tf.int32, shape=(batch_size, num_point))
    is_training_pl = tf.placeholder(tf.bool, shape=())

    with tf.device('/gpu:0'):
        logits = pointnet2_ssg(pointclouds_pl, is_training=is_training_pl, bn_decay=0.7)
        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels_pl)
        loss = tf.reduce_mean(loss)
        tf.summary.scalar('loss', loss)
        # run the batch-norm moving-average updates together with the train op
        update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
        with tf.control_dependencies(update_ops):
            optimizer = tf.train.AdamOptimizer(learning_rate)
            train_op = optimizer.minimize(loss)

    merged_summary_op = tf.summary.merge_all()
    saver = tf.train.Saver()

    ## Load the data
    data, label = load_data(data_dir)
    num_data = data.shape[0]

    ## Training
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        file_writer = tf.summary.FileWriter('logs', sess.graph)
        for epoch in range(max_epoch):
            idx = np.arange(num_data)
            np.random.shuffle(idx)
            total_loss = 0
            ## Train batch by batch
            for from_idx in range(0, num_data, batch_size):
                to_idx = min(from_idx + batch_size, num_data)
                batch_data = data[idx[from_idx:to_idx], :, :]
                batch_label = label[idx[from_idx:to_idx], :]
                ## Run one training step
                _, batch_loss, summary = sess.run(
                    [train_op, loss, merged_summary_op],
                    feed_dict={
                        pointclouds_pl: batch_data,
                        labels_pl: batch_label,
                        is_training_pl: True,
                    })
                total_loss += batch_loss
            print('Epoch %d, loss %.4f' % (epoch, total_loss))
            ## Save a checkpoint every ten epochs
            if epoch % 10 == 0:
                saver.save(sess, os.path.join(model_dir, 'model.ckpt'),
                           global_step=epoch)
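The training script above assumes inputs of shape (32, 4096, 6), i.e. fixed-size blocks of 4096 points with XYZ plus RGB per point. A common way to produce such blocks (used by PointNet-style pipelines, though not the only option) is to cut each room into 1 m x 1 m columns on the ground plane and sample a fixed number of points per column; `sample_blocks` below is an illustrative sketch of that preprocessing step:

```python
import numpy as np

def sample_blocks(points, num_point=4096, block_size=1.0):
    """Split a room point cloud (N, 6) into block_size x block_size columns
    on the XY plane and sample num_point points from each occupied column."""
    blocks = []
    xy_min = points[:, :2].min(axis=0)
    # integer grid cell for every point
    cells = ((points[:, :2] - xy_min) // block_size).astype(int)
    for cell in np.unique(cells, axis=0):
        idx = np.where(np.all(cells == cell, axis=1))[0]
        # sample with replacement if the column holds fewer than num_point points
        choice = np.random.choice(idx, num_point, replace=idx.size < num_point)
        blocks.append(points[choice])
    return np.stack(blocks)  # (num_blocks, num_point, 6)
```

Normalizing the coordinates inside each block (e.g. centering XY on the block and scaling RGB to [0, 1]) is usually done as a further step before feeding the network.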
2. Multi-view image generation
The Stanford 2D-3D-S release that S3DIS is part of also contains a large number of registered RGB images, which makes it a convenient image source. The example below sketches training a simple GAN on indoor images to generate novel indoor views; the network definitions are again left as stubs.
import os

import numpy as np
import tensorflow as tf

## GAN generator
def generator(inputs, is_training):
    with tf.variable_scope('Generator'):
        # TODO: add the generator network
        return gen_output

## GAN discriminator
def discriminator(inputs, is_training, reuse=False):
    with tf.variable_scope('Discriminator', reuse=reuse):
        # TODO: add the discriminator network
        return dis_output

## Data loading
def load_data(data_dir):
    # TODO: load the S3DIS point clouds, labels, and images
    return data, label, imgs

if __name__ == '__main__':
    data_dir = 'data/s3dis'
    model_dir = 'model/s3dis'
    if not os.path.exists(model_dir):
        os.makedirs(model_dir)

    batch_size = 32
    z_dim = 100
    max_epoch = 250

    tf.reset_default_graph()
    z_ph = tf.placeholder(tf.float32, shape=(batch_size, z_dim))
    img_ph = tf.placeholder(tf.float32, shape=(batch_size, 224, 224, 3))
    is_training = tf.placeholder(tf.bool, shape=())

    ## Build the GAN
    gen_output = generator(z_ph, is_training=is_training)
    dis_real = discriminator(img_ph, is_training=is_training)
    dis_fake = discriminator(gen_output, is_training=is_training, reuse=True)

    ## Losses
    d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=dis_real, labels=tf.ones_like(dis_real)))
    d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=dis_fake, labels=tf.zeros_like(dis_fake)))
    d_loss = d_loss_real + d_loss_fake
    g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=dis_fake, labels=tf.ones_like(dis_fake)))
    tf.summary.scalar('d_loss', d_loss)
    tf.summary.scalar('g_loss', g_loss)

    ## Optimizers: each network only updates its own variables
    gen_vars = [var for var in tf.trainable_variables() if 'Generator' in var.name]
    dis_vars = [var for var in tf.trainable_variables() if 'Discriminator' in var.name]
    dis_train = tf.train.AdamOptimizer(learning_rate=2e-4).minimize(d_loss, var_list=dis_vars)
    gen_train = tf.train.AdamOptimizer(learning_rate=2e-4).minimize(g_loss, var_list=gen_vars)

    saver = tf.train.Saver()

    ## Load the data
    data, label, imgs = load_data(data_dir)
    num_data = data.shape[0]

    ## Training
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        file_writer = tf.summary.FileWriter('logs', sess.graph)
        merged_summary_op = tf.summary.merge_all()
        for epoch in range(max_epoch):
            idx = np.arange(num_data)
            np.random.shuffle(idx)
            total_d_loss, total_g_loss = 0, 0
            ## Train batch by batch
            for from_idx in range(0, num_data, batch_size):
                to_idx = min(from_idx + batch_size, num_data)
                batch_z = np.random.normal(size=[batch_size, z_dim])
                ## Discriminator step
                _, batch_d_loss, summary = sess.run(
                    [dis_train, d_loss, merged_summary_op],
                    feed_dict={
                        z_ph: batch_z,
                        img_ph: imgs[idx[from_idx:to_idx]],
                        is_training: True,
                    })
                total_d_loss += batch_d_loss
                ## Generator step (the d_loss summaries need img_ph, so we
                ## only fetch the generator loss here)
                _, batch_g_loss = sess.run(
                    [gen_train, g_loss],
                    feed_dict={
                        z_ph: batch_z,
                        is_training: True,
                    })
                total_g_loss += batch_g_loss
            print('Epoch %d, d_loss %.4f, g_loss %.4f' % (epoch, total_d_loss, total_g_loss))
            ## Save a checkpoint every ten epochs
            if epoch % 10 == 0:
                saver.save(sess, os.path.join(model_dir, 'model.ckpt'),
                           global_step=epoch)
3. Indoor navigation
S3DIS scenes can also serve as environments for indoor navigation. The example below sketches training a DQN agent with reinforcement learning; the network, the data loading, and the environment interaction are left as stubs.
import os

import numpy as np
import tensorflow as tf

## DQN network: maps a state to one Q-value per action
def DQN(state_ph, is_training):
    # TODO: add the DQN network
    return Q

## Data loading
def load_data(data_dir):
    # TODO: load the S3DIS data and navigation paths
    return data, label, nav_path

if __name__ == '__main__':
    data_dir = 'data/s3dis'
    model_dir = 'model/s3dis'
    if not os.path.exists(model_dir):
        os.makedirs(model_dir)

    batch_size = 32
    num_action = 4   # e.g. move forward / move back / turn left / turn right
    max_epoch = 250
    eps = 0.1        # exploration rate for epsilon-greedy

    tf.reset_default_graph()
    state_ph = tf.placeholder(tf.float32, shape=(None, 4096, 6))
    action_ph = tf.placeholder(tf.int32, shape=(None,))
    is_training = tf.placeholder(tf.bool, shape=())

    ## Build the DQN
    Q = DQN(state_ph, is_training=is_training)

    ## Loss and optimizer
    target_ph = tf.placeholder(tf.float32, shape=(None,))
    action_one_hot = tf.one_hot(action_ph, num_action)
    Q_pred = tf.reduce_sum(tf.multiply(Q, action_one_hot), axis=1)
    loss = tf.reduce_mean(tf.square(Q_pred - target_ph))
    optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
    train_op = optimizer.minimize(loss)
    saver = tf.train.Saver()

    ## Load the data
    data, label, nav_path = load_data(data_dir)
    num_data = data.shape[0]

    ## Training
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        file_writer = tf.summary.FileWriter('logs', sess.graph)
        for epoch in range(max_epoch):
            idx = np.arange(num_data)
            np.random.shuffle(idx)
            total_loss = 0
            ## Train batch by batch
            for from_idx in range(0, num_data, batch_size):
                to_idx = min(from_idx + batch_size, num_data)
                batch_data = data[idx[from_idx:to_idx], :, :]
                batch_nav_path = nav_path[idx[from_idx:to_idx], :, :]
                ## Current Q-values for the batch states
                Q_ = sess.run(Q, feed_dict={
                    state_ph: batch_data,
                    is_training: False,
                })
                ## Epsilon-greedy: take a random action with probability eps
                if np.random.rand() < eps:
                    batch_action = np.random.randint(num_action, size=to_idx - from_idx)
                else:
                    batch_action = np.argmax(Q_, axis=1)
                # TODO: step along batch_nav_path, compute the Q-learning
                # targets, and run train_op on (state, action, target) batches
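The training loop above stops at the epsilon-greedy step. As a standalone sketch, the action selection and the one-step Q-learning target it would feed into can be written in plain NumPy (`epsilon_greedy`, `q_target`, and `gamma` are illustrative names, not part of the original script):

```python
import numpy as np

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, else the greedy one."""
    if np.random.rand() < epsilon:
        return int(np.random.randint(q_values.shape[-1]))
    return int(np.argmax(q_values))

def q_target(reward, next_q, done, gamma=0.99):
    """One-step Q-learning target: r + gamma * max_a' Q(s', a'),
    dropping the bootstrap term for terminal states."""
    return reward + (0.0 if done else gamma * float(np.max(next_q)))
```

The resulting targets would be fed through `target_ph`, with the chosen actions through `action_ph`, to drive the `train_op` defined above.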