
30_visdom Visualization, TensorboardX and Its Examples, Installing visdom, and a visdom Usage Example


1.25. visdom Visualization

1.25.1. TensorboardX

TensorBoard can be used with PyTorch as well (via tensorboardX); in addition, PyTorch can use visdom for visualization.

Project repository: https://github.com/lanpa/tensorboardX
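
Before the installation steps and the full demo below, here is a minimal sketch of the typical usage pattern; the run directory 'runs/minimal' and the tag 'demo/step_squared' are only illustrative names:

from tensorboardX import SummaryWriter

# event files are written under ./runs/minimal
writer = SummaryWriter('runs/minimal')
for step in range(100):
    # add_scalar(tag, scalar_value, global_step)
    writer.add_scalar('demo/step_squared', step ** 2, step)
writer.close()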

1.25.2. Installation

(base) C:\Users\toto>pip install tensorboardX
Collecting tensorboardX
  Downloading tensorboardX-2.1-py2.py3-none-any.whl (308 kB)
     |████████████████████████████████| 308 kB 726 kB/s
Requirement already satisfied: six in d:\installed\anaconda3\lib\site-packages (from tensorboardX) (1.12.0)
Requirement already satisfied: protobuf>=3.8.0 in d:\installed\anaconda3\lib\site-packages (from tensorboardX) (3.12.1)
Requirement already satisfied: numpy in d:\installed\anaconda3\lib\site-packages (from tensorboardX) (1.20.1)
Requirement already satisfied: setuptools in d:\installed\anaconda3\lib\site-packages (from protobuf>=3.8.0->tensorboardX) (41.4.0)
Installing collected packages: tensorboardX
Successfully installed tensorboardX-2.1

(base) C:\Users\toto>

Or build from source:

pip install 'git+https://github.com/lanpa/tensorboardX'

You can optionally install crc32c for a speed-up.

(base) C:\Users\toto>pip install crc32c
Collecting crc32c
  Downloading crc32c-2.2-cp37-cp37m-win_amd64.whl (31 kB)
Installing collected packages: crc32c
Successfully installed crc32c-2.2

(base) C:\Users\toto>

Starting with tensorboardX 2.1, you need to install soundfile for the add_audio() function (roughly a 200x speed-up).

(base) C:\Users\toto>pip install soundfile
Collecting soundfile
  Downloading SoundFile-0.10.3.post1-py2.py3.cp26.cp27.cp32.cp33.cp34.cp35.cp36.pp27.pp32.pp33-none-win_amd64.whl (689 kB)
     |████████████████████████████████| 689 kB 117 kB/s
Requirement already satisfied: cffi>=1.0 in d:\installed\anaconda3\lib\site-packages (from soundfile) (1.12.3)
Requirement already satisfied: pycparser in d:\installed\anaconda3\lib\site-packages (from cffi>=1.0->soundfile) (2.19)
Installing collected packages: soundfile
Successfully installed soundfile-0.10.3.post1

(base) C:\Users\toto>

1.25.3. Example

Clone the examples: https://github.com/lanpa/tensorboardX/tree/master/examples
Run the demo script: python examples/demo.py
Start TensorBoard with: tensorboard --logdir runs

Running it may fail with an error like the one below. This usually means two copies of TensorBoard (for example tensorboard and tb-nightly) are installed in the same environment; uninstalling the duplicate resolves it:

(base) C:\Users\toto>tensorboard --logdir runs
2021-02-12 13:35:13.345643: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
Traceback (most recent call last):
ValueError: Duplicate plugins for name projector

Example:

# -*- coding: UTF-8 -*-
# tensorboardX demo: logs scalars, images, audio, text, histograms, a PR curve and an embedding

import torch
import torchvision.utils as vutils
import numpy as np
import torchvision.models as models
from torchvision import datasets
from tensorboardX import SummaryWriter

resnet18 = models.resnet18(False)
writer = SummaryWriter()
sample_rate = 44100
freqs = [262, 294, 330, 349, 392, 440, 440, 440, 440, 440, 440]

for n_iter in range(100):

    dummy_s1 = torch.rand(1)
    dummy_s2 = torch.rand(1)
    # data grouping by `slash`
    writer.add_scalar('data/scalar1', dummy_s1[0], n_iter)
    writer.add_scalar('data/scalar2', dummy_s2[0], n_iter)

    writer.add_scalars('data/scalar_group', {'xsinx': n_iter * np.sin(n_iter),
                                             'xcosx': n_iter * np.cos(n_iter),
                                             'arctanx': np.arctan(n_iter)}, n_iter)

    dummy_img = torch.rand(32, 3, 64, 64)  # output from network
    if n_iter % 10 == 0:
        x = vutils.make_grid(dummy_img, normalize=True, scale_each=True)
        writer.add_image('Image', x, n_iter)

        dummy_audio = torch.zeros(sample_rate * 2)
        # fill the whole 2-second buffer; amplitude of sound should be in [-1, 1]
        for i in range(dummy_audio.size(0)):
            dummy_audio[i] = np.cos(freqs[n_iter // 10] * np.pi * float(i) / float(sample_rate))
        writer.add_audio('myAudio', dummy_audio, n_iter, sample_rate=sample_rate)

        writer.add_text('Text', 'text logged at step:' + str(n_iter), n_iter)

        for name, param in resnet18.named_parameters():
            writer.add_histogram(name, param.clone().cpu().data.numpy(), n_iter)

        # needs tensorboard 0.4RC or later
        writer.add_pr_curve('xoxo', np.random.randint(2, size=100), np.random.rand(100), n_iter)

dataset = datasets.MNIST('mnist', train=False, download=True)
images = dataset.data[:100].float()    # `test_data` / `test_labels` are deprecated aliases
label = dataset.targets[:100]

features = images.view(100, 784)
writer.add_embedding(features, metadata=label, label_img=images.unsqueeze(1))

# export scalar data to JSON for external processing
writer.export_scalars_to_json("./all_scalars.json")
writer.close()
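
After the demo finishes, the event files are under ./runs; start TensorBoard with tensorboard --logdir runs and open the address it prints (http://localhost:6006 by default) to browse the scalars, images, audio, text, histograms, the PR curve and the embedding projector.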

1.25.4. Installing visdom

(base) C:\Users\toto>pip install visdom
Collecting visdom
  Downloading visdom-0.1.8.9.tar.gz (676 kB)
     |████████████████████████████████| 676 kB 27 kB/s
Requirement already satisfied: numpy>=1.8 in d:\installed\anaconda3\lib\site-packages (from visdom) (1.18.5)
Requirement already satisfied: scipy in d:\installed\anaconda3\lib\site-packages (from visdom) (1.4.1)
Requirement already satisfied: requests in d:\installed\anaconda3\lib\site-packages (from visdom) (2.22.0)
Requirement already satisfied: tornado in d:\installed\anaconda3\lib\site-packages (from visdom) (6.0.3)
Requirement already satisfied: pyzmq in d:\installed\anaconda3\lib\site-packages (from visdom) (18.1.0)
Requirement already satisfied: six in d:\installed\anaconda3\lib\site-packages (from visdom) (1.15.0)
Collecting jsonpatch
  Downloading jsonpatch-1.28-py2.py3-none-any.whl (12 kB)
Collecting torchfile
  Downloading torchfile-0.1.0.tar.gz (5.2 kB)
Collecting websocket-client
  Downloading websocket_client-0.57.0-py2.py3-none-any.whl (200 kB)
     |████████████████████████████████| 200 kB 20 kB/s
Requirement already satisfied: pillow in d:\installed\anaconda3\lib\site-packages (from visdom) (6.2.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in d:\installed\anaconda3\lib\site-packages (from requests->visdom) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in d:\installed\anaconda3\lib\site-packages (from requests->visdom) (1.24.2)
Requirement already satisfied: idna<2.9,>=2.5 in d:\installed\anaconda3\lib\site-packages (from requests->visdom) (2.8)
Requirement already satisfied: certifi>=2017.4.17 in d:\installed\anaconda3\lib\site-packages (from requests->visdom) (2019.9.11)
Collecting jsonpointer>=1.9
  Downloading jsonpointer-2.0-py2.py3-none-any.whl (7.6 kB)
Building wheels for collected packages: visdom, torchfile
  Building wheel for visdom (setup.py) ... done
  Created wheel for visdom: filename=visdom-0.1.8.9-py3-none-any.whl size=655249 sha256=aa110012ea279783c86dcbc7778c90e8d4c021e5d9ac1a98f68c3134b7c52627
  Stored in directory: c:\users\toto\appdata\local\pip\cache\wheels\2d\d1\9b\cde923274eac9cbb6ff0d8c7c72fe30a3da9095a38fd50bbf1
  Building wheel for torchfile (setup.py) ... done
  Created wheel for torchfile: filename=torchfile-0.1.0-py3-none-any.whl size=5711 sha256=ac722c7bbb17ae1c36bef8db4b20cd6978462f240d76c5b6b0a4dfc6719db2b4
  Stored in directory: c:\users\toto\appdata\local\pip\cache\wheels\ac\5c\3a\a80e1c65880945c71fd833408cd1e9a8cb7e2f8f37620bb75b
Successfully built visdom torchfile
Installing collected packages: jsonpointer, jsonpatch, torchfile, websocket-client, visdom
Successfully installed jsonpatch-1.28 jsonpointer-2.0 torchfile-0.1.0 visdom-0.1.8.9 websocket-client-0.57.0

(base) C:\Users\toto>

1.25.5. Running visdom.server in the background

(base) C:\Users\toto>python -m visdom.server
D:\installed\Anaconda3\lib\site-packages\visdom\server.py:39: DeprecationWarning: zmq.eventloop.ioloop is deprecated in pyzmq 17. pyzmq now works with default tornado and asyncio eventloops.
  ioloop.install()  # Needs to happen before any tornado imports!
Checking for scripts.
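
By default the visdom server listens on http://localhost:8097; open that address in a browser to see the plots. A minimal sketch to verify, from a separate Python session, that a client can reach the running server:

from visdom import Visdom

# connects to http://localhost:8097 by default
viz = Visdom()
# True if the handshake with visdom.server succeeded
print(viz.check_connection())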

lines: single trace

from visdom import Visdom

viz = Visdom()

# create the window with an initial point (y=[0.], x=[0.])
viz.line([0.], [0.], win='train_loss', opts=dict(title='train loss'))

# inside the training loop: append the current loss at the current global_step
viz.line([loss.item()], [global_step], win='train_loss', update='append')


lines: multi-traces

from visdom import Visdom

viz = Visdom()

# one window with two traces (loss and accuracy), named via `legend`
viz.line([[0.0, 0.0]], [0.], win='test', opts=dict(title='test loss&acc.',
                                                   legend=['loss', 'acc.']))

# after each evaluation pass: append one point per trace at the current global_step
viz.line([[test_loss, correct / len(test_loader.dataset)]],
         [global_step], win='test', update='append')
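
Both snippets assume a surrounding training/evaluation loop that defines loss, test_loss, correct, test_loader and global_step (see the full example in 1.25.6). A self-contained sketch that drives both windows with dummy values; the window names ending in _demo and the fake curves are purely illustrative:

import numpy as np
from visdom import Visdom

viz = Visdom()

# single trace: training loss
viz.line([0.], [0.], win='train_loss_demo', opts=dict(title='train loss (demo)'))
# two traces in one window: test loss and accuracy
viz.line([[0.0, 0.0]], [0.], win='test_demo',
         opts=dict(title='test loss&acc. (demo)', legend=['loss', 'acc.']))

for global_step in range(1, 101):
    fake_loss = float(np.exp(-global_step / 30) + 0.05 * np.random.rand())  # stand-in for loss.item()
    viz.line([fake_loss], [global_step], win='train_loss_demo', update='append')

    if global_step % 10 == 0:
        fake_test_loss = float(np.exp(-global_step / 30))  # stand-in for test_loss
        fake_acc = 1.0 - fake_test_loss                    # stand-in for correct / len(dataset)
        viz.line([[fake_test_loss, fake_acc]], [global_step], win='test_demo', update='append')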


visualizing X (input images and predictions)

from visdom import Visdom

viz = Visdom()

# show a batch of input images; `data` is a flattened MNIST batch from the test loop
viz.images(data.view(-1, 1, 28, 28), win='x')

# show the predicted labels as text; `pred` comes from logits.argmax(dim=1)
viz.text(str(pred.detach().cpu().numpy()), win='pred', opts=dict(title='pred'))
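
Here `data` and `pred` come from the test loop of the full example in 1.25.6. A self-contained sketch with random stand-ins:

import torch
from visdom import Visdom

viz = Visdom()

fake_batch = torch.rand(16, 1, 28, 28)    # stand-in for a batch of MNIST images
fake_pred = torch.randint(0, 10, (16,))   # stand-in for predicted class labels

viz.images(fake_batch, win='x_demo', opts=dict(title='input images (demo)'))
viz.text(str(fake_pred.numpy()), win='pred_demo', opts=dict(title='pred (demo)'))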


1.25.6. Example of using visdom

Start the visdom server first (python -m visdom.server) in a separate terminal; otherwise the Visdom client below cannot connect and no plots will appear.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

from visdom import Visdom

batch_size=200
learning_rate=0.01
epochs=10

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       # transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        # transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=batch_size, shuffle=True)


class MLP(nn.Module):

    def __init__(self):
        super(MLP, self).__init__()

        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        x = self.model(x)

        return x

# fall back to CPU if CUDA is not available
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
net = MLP().to(device)
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
criteon = nn.CrossEntropyLoss().to(device)

viz = Visdom()

viz.line([0.], [0.], win='train_loss', opts=dict(title='train loss'))
viz.line([[0.0, 0.0]], [0.], win='test', opts=dict(title='test loss&acc.',
                                                   legend=['loss', 'acc.']))
global_step = 0

for epoch in range(epochs):

    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)
        data, target = data.to(device), target.to(device)

        logits = net(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        # print(w1.grad.norm(), w2.grad.norm())
        optimizer.step()

        global_step += 1
        viz.line([loss.item()], [global_step], win='train_loss', update='append')

        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                       100. * batch_idx / len(train_loader), loss.item()))


    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        data, target = data.to(device), target.to(device)
        logits = net(data)
        test_loss += criteon(logits, target).item()

        pred = logits.argmax(dim=1)
        correct += pred.eq(target).float().sum().item()

    viz.line([[test_loss, correct / len(test_loader.dataset)]],
             [global_step], win='test', update='append')
    viz.images(data.view(-1, 1, 28, 28), win='x')
    viz.text(str(pred.detach().cpu().numpy()), win='pred',
             opts=dict(title='pred'))

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

Source: https://blog.csdn.net/toto1297488504/article/details/113795817