Tags: graph neural networks, deep learning, DGL, PyTorch, link prediction
The concept of link prediction and how to optimize for it were already covered in "Learning DGL from the Official Docs, Day 10". Our goal is still to obtain node representations, so stochastic training looks much like stochastic training for node classification and edge classification, with one extra step: negative sampling. Conveniently, for negative sampling during stochastic training, DGL only asks you to set the negative_sampler argument of dgl.dataloading.EdgeDataLoader() to whatever negative-sampling function you need.
We again use the DGLDataset-style dataset defined in "Learning DGL from the Official Docs, Day 8". Each edge gets a random label, and 20 edges are randomly selected as the training set.
def build_karate_club_graph():
    # All 78 edges are stored in two numpy arrays. One for source endpoints
    # while the other for destination endpoints.
    src = np.array([1, 2, 2, 3, 3, 3, 4, 5, 6, 6, 6, 7, 7, 7, 7, 8, 8, 9, 10, 10,
                    10, 11, 12, 12, 13, 13, 13, 13, 16, 16, 17, 17, 19, 19, 21, 21,
                    25, 25, 27, 27, 27, 28, 29, 29, 30, 30, 31, 31, 31, 31, 32, 32,
                    32, 32, 32, 32, 32, 32, 32, 32, 32, 33, 33, 33, 33, 33, 33, 33,
                    33, 33, 33, 33, 33, 33, 33, 33, 33, 33])
    dst = np.array([0, 0, 1, 0, 1, 2, 0, 0, 0, 4, 5, 0, 1, 2, 3, 0, 2, 2, 0, 4,
                    5, 0, 0, 3, 0, 1, 2, 3, 5, 6, 0, 1, 0, 1, 0, 1, 23, 24, 2, 23,
                    24, 2, 23, 26, 1, 8, 0, 24, 25, 28, 2, 8, 14, 15, 18, 20, 22, 23,
                    29, 30, 31, 8, 9, 13, 14, 15, 18, 19, 20, 22, 23, 26, 27, 28, 29, 30,
                    31, 32])
    # Edges are directional in DGL; make them bi-directional.
    u = np.concatenate([src, dst])
    v = np.concatenate([dst, src])
    # Construct a DGLGraph
    return dgl.graph((u, v))
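To see what the concatenation trick does, here is a tiny NumPy check (toy arrays, not the karate-club data): each undirected edge {a, b} becomes the two directed edges a→b and b→a.

```python
import numpy as np

# Toy version of the bi-directional trick in build_karate_club_graph():
# the undirected edges {0,1}, {0,2}, {1,2} become six directed edges.
src = np.array([1, 2, 2])
dst = np.array([0, 0, 1])
u = np.concatenate([src, dst])
v = np.concatenate([dst, src])
edges = list(zip(u.tolist(), v.tolist()))
print(edges)  # [(1, 0), (2, 0), (2, 1), (0, 1), (0, 2), (1, 2)]
```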
class MyDataset(DGLDataset):
    def __init__(self,
                 url=None,
                 raw_dir=None,
                 save_dir=None,
                 force_reload=False,
                 verbose=False):
        super(MyDataset, self).__init__(name='dataset_name',
                                        url=url,
                                        raw_dir=raw_dir,
                                        save_dir=save_dir,
                                        force_reload=force_reload,
                                        verbose=verbose)

    def process(self):
        # (some preprocessing code skipped)
        # === data processing omitted ===
        # Build the graph
        # g = dgl.graph(G)
        g = build_karate_club_graph()
        # train_mask = _sample_mask(idx_train, g.number_of_nodes())
        # val_mask = _sample_mask(idx_val, g.number_of_nodes())
        # test_mask = _sample_mask(idx_test, g.number_of_nodes())
        # # Split masks
        # g.ndata['train_mask'] = generate_mask_tensor(train_mask)
        # g.ndata['val_mask'] = generate_mask_tensor(val_mask)
        # g.ndata['test_mask'] = generate_mask_tensor(test_mask)
        # Random edge labels (labels is already a tensor; no need to re-wrap it)
        labels = torch.randint(0, 2, (g.number_of_edges(),))
        g.edata['labels'] = labels
        # Random node features
        g.ndata['features'] = torch.randn(g.number_of_nodes(), 10)
        self._num_labels = int(torch.max(labels).item() + 1)
        self._labels = labels
        self._g = g

    def __getitem__(self, idx):
        assert idx == 0, "This dataset contains only one graph"
        return self._g

    def __len__(self):
        return 1
dataset = MyDataset()
g = dataset[0]
n_edges = g.number_of_edges()
train_seeds = np.random.choice(np.arange(n_edges), (20,), replace=False)
Again we choose the simplest sampler:
sampler = dgl.dataloading.MultiLayerFullNeighborSampler(2)
Negative sampling can be done in two ways: DGL's built-in uniform sampler, or a custom negative-sampling function. For uniform sampling, simply pass negative_sampler=dgl.dataloading.negative_sampler.Uniform(5), where 5 is the number of negative samples drawn per positive edge.
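As a rough sketch of what the uniform sampler does (plain NumPy, not DGL's actual implementation; the function name here is made up for illustration): for every positive edge, the source is kept and k destination nodes are drawn uniformly at random.

```python
import numpy as np

def uniform_negative_sample(src, num_nodes, k, seed=0):
    """Sketch of uniform negative sampling: for each positive edge's source,
    draw k destination nodes uniformly at random.  This only illustrates the
    idea behind dgl.dataloading.negative_sampler.Uniform(k); the real sampler
    operates on a DGLGraph and returns torch tensors."""
    rng = np.random.default_rng(seed)
    neg_src = np.repeat(src, k)                         # each source k times
    neg_dst = rng.integers(0, num_nodes, size=len(neg_src))
    return neg_src, neg_dst

src = np.array([0, 3, 7])            # sources of three positive edges
neg_src, neg_dst = uniform_negative_sample(src, num_nodes=34, k=5)
print(neg_src.shape, neg_dst.shape)  # (15,) (15,)
```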
The drop_last and pin_memory arguments come from torch.utils.data.DataLoader: drop_last controls whether the final incomplete batch is dropped, and pin_memory controls whether page-locked memory is used (recommended: True when training on a GPU). The dataloader yields four things per iteration: the input nodes, the positive graph, the negative graph, and the list of blocks.
dataloader = dgl.dataloading.EdgeDataLoader(
    g, train_seeds, sampler,
    negative_sampler=dgl.dataloading.negative_sampler.Uniform(5),
    batch_size=4,
    shuffle=True,
    drop_last=False,
    pin_memory=True,
    num_workers=0)
A custom negative sampler is constructed from: 1. the original graph g; 2. the number of negative samples per positive edge, k. When called, it receives the original graph g and the minibatch edge IDs eids, and it returns the source-node array and destination-node array of the negative samples. Below is an example that samples destinations with probability proportional to node degree raised to the power 0.75.
class NegativeSampler(object):
    def __init__(self, g, k):
        # caches the probability distribution
        self.weights = g.in_degrees().float() ** 0.75
        self.k = k

    def __call__(self, g, eids):
        src, _ = g.find_edges(eids)
        src = src.repeat_interleave(self.k)
        dst = self.weights.multinomial(len(src), replacement=True)
        return src, dst
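To make the degree**0.75 proposal distribution concrete, here is a NumPy-only sketch (toy degrees, hypothetical values; the class above uses torch's Tensor.multinomial on the cached weights instead of np.random.Generator.choice): high-degree nodes are over-sampled, but only sub-linearly.

```python
import numpy as np

# NumPy sketch of the degree ** 0.75 proposal distribution.
in_degrees = np.array([1.0, 2.0, 4.0, 16.0])
weights = in_degrees ** 0.75
probs = weights / weights.sum()   # multinomial normalizes weights the same way
k = 5
src = np.array([0, 2])            # sources of two positive edges
neg_src = np.repeat(src, k)       # NumPy analogue of src.repeat_interleave(k)
rng = np.random.default_rng(0)
neg_dst = rng.choice(len(probs), size=len(neg_src), p=probs)
# Node 3 has 16x the degree of node 0 but only 16 ** 0.75 = 8x its weight.
print(probs.argmax(), len(neg_dst))
```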
dataloader = dgl.dataloading.EdgeDataLoader(
    g, train_seeds, sampler,
    negative_sampler=NegativeSampler(g, 5),
    batch_size=4,
    shuffle=True,
    drop_last=False,
    pin_memory=True,
    num_workers=0)
We use the same model as in the stochastic training for node classification:
class StochasticTwoLayerGCN(nn.Module):
    def __init__(self, in_features, hidden_features, out_features):
        super().__init__()
        self.conv1 = dglnn.GraphConv(in_features, hidden_features)
        self.conv2 = dglnn.GraphConv(hidden_features, out_features)

    def forward(self, blocks, x):
        x = F.relu(self.conv1(blocks[0], x))
        x = F.relu(self.conv2(blocks[1], x))
        return x
Here the inner product of an edge's two endpoint representations is used as its score:
class ScorePredictor(nn.Module):
    def forward(self, edge_subgraph, x):
        with edge_subgraph.local_scope():
            edge_subgraph.ndata['x'] = x
            edge_subgraph.apply_edges(dgl.function.u_dot_v('x', 'x', 'score'))
            return edge_subgraph.edata['score']
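dgl.function.u_dot_v('x', 'x', 'score') computes, for every edge, the dot product of its two endpoints' feature vectors. A small NumPy check of the same computation (toy features and edges):

```python
import numpy as np

# For each edge (src[i], dst[i]) the score is <x[src[i]], x[dst[i]]> --
# the same quantity dgl.function.u_dot_v('x', 'x', 'score') computes.
x = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
src = np.array([0, 1])   # edge 0->2 and edge 1->2
dst = np.array([2, 2])
scores = (x[src] * x[dst]).sum(axis=1)
print(scores)  # [1. 2.]
```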
First compute the node representations, then score the edges of the positive graph and of the negative graph separately.
class Model(nn.Module):
    def __init__(self, in_features, hidden_features, out_features):
        super().__init__()
        self.gcn = StochasticTwoLayerGCN(
            in_features, hidden_features, out_features)
        self.predictor = ScorePredictor()

    def forward(self, positive_graph, negative_graph, blocks, x):
        x = self.gcn(blocks, x)
        pos_score = self.predictor(positive_graph, x)
        neg_score = self.predictor(negative_graph, x)
        return pos_score, neg_score
We use a hinge loss as the loss function.
def compute_loss(pos_score, neg_score):
    # an example hinge loss
    n = pos_score.shape[0]
    return (neg_score.view(n, -1) - pos_score.view(n, -1) + 1).clamp(min=0).mean()
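A quick numeric check of this margin loss using NumPy stand-ins for the torch ops: each negative score is compared against its positive edge's score, and pairs already separated by the margin of 1 contribute zero.

```python
import numpy as np

# Hinge loss max(0, neg - pos + 1), averaged -- mirrors compute_loss above.
pos = np.array([[2.0], [0.5]])        # one score per positive edge
neg = np.array([[0.0, 3.0],           # two negative scores per positive edge
                [0.5, -1.0]])
loss = np.clip(neg - pos + 1.0, 0.0, None).mean()
print(loss)  # (0 + 2 + 1 + 0) / 4 = 0.75
```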
model = Model(in_features=10, hidden_features=100, out_features=10)
opt = torch.optim.Adam(model.parameters())
for input_nodes, positive_graph, negative_graph, blocks in dataloader:
    input_features = blocks[0].srcdata['features']
    pos_score, neg_score = model(positive_graph, negative_graph, blocks, input_features)
    loss = compute_loss(pos_score, neg_score)
    opt.zero_grad()
    loss.backward()
    print('loss: ', loss.item())
    opt.step()
We again use the hand-built heterogeneous-graph dataset from "Learning DGL from the Official Docs, Day 8". The training set includes all edges of every edge type, given as a dictionary keyed by edge type.
n_users = 1000
n_items = 500
n_follows = 3000
n_clicks = 5000
n_dislikes = 500
n_hetero_features = 10
n_user_classes = 5
n_max_clicks = 10
follow_src = np.random.randint(0, n_users, n_follows)
follow_dst = np.random.randint(0, n_users, n_follows)
click_src = np.random.randint(0, n_users, n_clicks)
click_dst = np.random.randint(0, n_items, n_clicks)
dislike_src = np.random.randint(0, n_users, n_dislikes)
dislike_dst = np.random.randint(0, n_items, n_dislikes)
hetero_graph = dgl.heterograph({
    ('user', 'follow', 'user'): (follow_src, follow_dst),
    ('user', 'followed-by', 'user'): (follow_dst, follow_src),
    ('user', 'click', 'item'): (click_src, click_dst),
    ('item', 'clicked-by', 'user'): (click_dst, click_src),
    ('user', 'dislike', 'item'): (dislike_src, dislike_dst),
    ('item', 'disliked-by', 'user'): (dislike_dst, dislike_src)})
hetero_graph.nodes['user'].data['feat'] = torch.randn(n_users, n_hetero_features)
hetero_graph.nodes['item'].data['feat'] = torch.randn(n_items, n_hetero_features)
g = hetero_graph
train_eid_dict = {
    etype: g.edges(etype=etype, form='eid')
    for etype in g.etypes}
Again we choose the simplest sampler:
sampler = dgl.dataloading.MultiLayerFullNeighborSampler(2)
As before, there are two options for negative sampling: DGL's built-in uniform sampler or a custom function. Exactly as in the homogeneous case, dgl.dataloading.negative_sampler.Uniform() also supports heterogeneous graphs.
dataloader = dgl.dataloading.EdgeDataLoader(
    g, train_eid_dict, sampler,
    negative_sampler=dgl.dataloading.negative_sampler.Uniform(5),
    batch_size=10,
    shuffle=True,
    drop_last=False,
    num_workers=0)
The custom sampler below is not fully debugged yet; I will come back to it later. [flag]
class NegativeSampler(object):
    def __init__(self, g, k):
        # Cache the per-edge-type sampling distribution: in-degree ** 0.75.
        # Keys are canonical edge types, to match what find_edges expects.
        self.weights = {
            etype: g.in_degrees(etype=etype).float() ** 0.75
            for etype in g.canonical_etypes
        }
        self.k = k

    def __call__(self, g, eids_dict):
        result_dict = {}
        for etype, eids in eids_dict.items():
            # Normalize the key to a canonical edge type.
            etype = g.to_canonical_etype(etype)
            src, _ = g.find_edges(eids, etype=etype)
            src = src.repeat_interleave(self.k)
            dst = self.weights[etype].multinomial(len(src), replacement=True)
            result_dict[etype] = (src, dst)
        return result_dict
We use the same model as in the stochastic training for node classification:
class StochasticTwoLayerRGCN(nn.Module):
    def __init__(self, in_feat, hidden_feat, out_feat, rel_names):
        super().__init__()
        self.conv1 = dglnn.HeteroGraphConv({
            rel: dglnn.GraphConv(in_feat, hidden_feat, norm='right')
            for rel in rel_names
        })
        self.conv2 = dglnn.HeteroGraphConv({
            rel: dglnn.GraphConv(hidden_feat, out_feat, norm='right')
            for rel in rel_names
        })

    def forward(self, blocks, x):
        x = self.conv1(blocks[0], x)
        x = self.conv2(blocks[1], x)
        return x
Again the inner product of the two endpoints serves as the edge score; the difference from the homogeneous case is that apply_edges() must be run per edge type:
class ScorePredictor(nn.Module):
    def forward(self, edge_subgraph, x):
        with edge_subgraph.local_scope():
            edge_subgraph.ndata['x'] = x
            for etype in edge_subgraph.canonical_etypes:
                edge_subgraph.apply_edges(
                    dgl.function.u_dot_v('x', 'x', 'score'), etype=etype)
            return edge_subgraph.edata['score']
First compute the node representations, then score the edges of the positive and negative graphs; note that the result is a dictionary whose keys are edge types and whose values are scores.
class Model(nn.Module):
    def __init__(self, in_features, hidden_features, out_features, etypes):
        super().__init__()
        self.rgcn = StochasticTwoLayerRGCN(
            in_features, hidden_features, out_features, etypes)
        self.pred = ScorePredictor()

    def forward(self, positive_graph, negative_graph, blocks, x):
        x = self.rgcn(blocks, x)
        pos_score = self.pred(positive_graph, x)
        neg_score = self.pred(negative_graph, x)
        return pos_score, neg_score
Because the scores come back as dictionaries, we need a custom loss function: apply a hinge loss per edge type, then sum over edge types for the final loss.
def compute_loss(pos_score, neg_score):
    loss = 0
    # an example hinge loss
    for etype, p_score in pos_score.items():
        if len(p_score) != 0:
            n = p_score.shape[0]
            loss += (neg_score[etype].view(n, -1) - p_score.view(n, -1) + 1).clamp(min=0).mean()
    return loss
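The same numeric check for the heterogeneous version (toy scores, NumPy stand-ins for the torch ops): the per-edge-type hinge losses are computed independently and then summed.

```python
import numpy as np

# Per-edge-type hinge losses summed, mirroring the heterogeneous compute_loss
# (toy scores; real scores are torch tensors keyed by edge type).
pos_score = {'click': np.array([[2.0]]), 'follow': np.array([[0.0]])}
neg_score = {'click': np.array([[0.0, 3.0]]), 'follow': np.array([[1.0, -2.0]])}
loss = 0.0
for etype, p in pos_score.items():
    if len(p) != 0:
        loss += np.clip(neg_score[etype] - p + 1.0, 0.0, None).mean()
print(loss)  # 1.0 (click) + 1.0 (follow) = 2.0
```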
in_features = n_hetero_features
hidden_features = 100
out_features = 10
etypes = g.etypes
model = Model(in_features, hidden_features, out_features, etypes)
opt = torch.optim.Adam(model.parameters())
for input_nodes, positive_graph, negative_graph, blocks in dataloader:
    print('negative graph: ', negative_graph)
    input_features = blocks[0].srcdata['feat']
    pos_score, neg_score = model(positive_graph, negative_graph, blocks, input_features)
    loss = compute_loss(pos_score, neg_score)
    opt.zero_grad()
    loss.backward()
    print('loss: ', loss.item())
    opt.step()
Complete code for the homogeneous-graph example:
import dgl
import dgl.nn as dglnn
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from dgl.data.utils import generate_mask_tensor
from dgl.data import DGLDataset
import torch
def build_karate_club_graph():
    # All 78 edges are stored in two numpy arrays. One for source endpoints
    # while the other for destination endpoints.
    src = np.array([1, 2, 2, 3, 3, 3, 4, 5, 6, 6, 6, 7, 7, 7, 7, 8, 8, 9, 10, 10,
                    10, 11, 12, 12, 13, 13, 13, 13, 16, 16, 17, 17, 19, 19, 21, 21,
                    25, 25, 27, 27, 27, 28, 29, 29, 30, 30, 31, 31, 31, 31, 32, 32,
                    32, 32, 32, 32, 32, 32, 32, 32, 32, 33, 33, 33, 33, 33, 33, 33,
                    33, 33, 33, 33, 33, 33, 33, 33, 33, 33])
    dst = np.array([0, 0, 1, 0, 1, 2, 0, 0, 0, 4, 5, 0, 1, 2, 3, 0, 2, 2, 0, 4,
                    5, 0, 0, 3, 0, 1, 2, 3, 5, 6, 0, 1, 0, 1, 0, 1, 23, 24, 2, 23,
                    24, 2, 23, 26, 1, 8, 0, 24, 25, 28, 2, 8, 14, 15, 18, 20, 22, 23,
                    29, 30, 31, 8, 9, 13, 14, 15, 18, 19, 20, 22, 23, 26, 27, 28, 29, 30,
                    31, 32])
    # Edges are directional in DGL; make them bi-directional.
    u = np.concatenate([src, dst])
    v = np.concatenate([dst, src])
    # Construct a DGLGraph
    return dgl.graph((u, v))
# def _sample_mask(idx, l):
#     """Create mask."""
#     mask = np.zeros(l)
#     mask[idx] = 1
#     return mask
class MyDataset(DGLDataset):
    def __init__(self,
                 url=None,
                 raw_dir=None,
                 save_dir=None,
                 force_reload=False,
                 verbose=False):
        super(MyDataset, self).__init__(name='dataset_name',
                                        url=url,
                                        raw_dir=raw_dir,
                                        save_dir=save_dir,
                                        force_reload=force_reload,
                                        verbose=verbose)

    def process(self):
        # (some preprocessing code skipped)
        # === data processing omitted ===
        # Build the graph
        # g = dgl.graph(G)
        g = build_karate_club_graph()
        # train_mask = _sample_mask(idx_train, g.number_of_nodes())
        # val_mask = _sample_mask(idx_val, g.number_of_nodes())
        # test_mask = _sample_mask(idx_test, g.number_of_nodes())
        # # Split masks
        # g.ndata['train_mask'] = generate_mask_tensor(train_mask)
        # g.ndata['val_mask'] = generate_mask_tensor(val_mask)
        # g.ndata['test_mask'] = generate_mask_tensor(test_mask)
        # Random edge labels (labels is already a tensor; no need to re-wrap it)
        labels = torch.randint(0, 2, (g.number_of_edges(),))
        g.edata['labels'] = labels
        # Random node features
        g.ndata['features'] = torch.randn(g.number_of_nodes(), 10)
        self._num_labels = int(torch.max(labels).item() + 1)
        self._labels = labels
        self._g = g

    def __getitem__(self, idx):
        assert idx == 0, "This dataset contains only one graph"
        return self._g

    def __len__(self):
        return 1
dataset = MyDataset()
g = dataset[0]
n_edges = g.number_of_edges()
train_seeds = np.random.choice(np.arange(n_edges), (20,), replace=False)
sampler = dgl.dataloading.MultiLayerFullNeighborSampler(2)
dataloader = dgl.dataloading.EdgeDataLoader(
    g, train_seeds, sampler,
    negative_sampler=dgl.dataloading.negative_sampler.Uniform(5),
    batch_size=4,
    shuffle=True,
    drop_last=False,
    pin_memory=True,
    num_workers=0)
# class NegativeSampler(object):
#     def __init__(self, g, k):
#         # caches the probability distribution
#         self.weights = g.in_degrees().float() ** 0.75
#         self.k = k
#     def __call__(self, g, eids):
#         src, _ = g.find_edges(eids)
#         src = src.repeat_interleave(self.k)
#         dst = self.weights.multinomial(len(src), replacement=True)
#         return src, dst
# dataloader = dgl.dataloading.EdgeDataLoader(
#     g, train_seeds, sampler,
#     negative_sampler=NegativeSampler(g, 5),
#     batch_size=4,
#     shuffle=True,
#     drop_last=False,
#     pin_memory=True,
#     num_workers=0)
class StochasticTwoLayerGCN(nn.Module):
    def __init__(self, in_features, hidden_features, out_features):
        super().__init__()
        self.conv1 = dgl.nn.GraphConv(in_features, hidden_features)
        self.conv2 = dgl.nn.GraphConv(hidden_features, out_features)

    def forward(self, blocks, x):
        x = F.relu(self.conv1(blocks[0], x))
        x = F.relu(self.conv2(blocks[1], x))
        return x

class ScorePredictor(nn.Module):
    def forward(self, edge_subgraph, x):
        with edge_subgraph.local_scope():
            edge_subgraph.ndata['x'] = x
            edge_subgraph.apply_edges(dgl.function.u_dot_v('x', 'x', 'score'))
            return edge_subgraph.edata['score']

class Model(nn.Module):
    def __init__(self, in_features, hidden_features, out_features):
        super().__init__()
        self.gcn = StochasticTwoLayerGCN(
            in_features, hidden_features, out_features)
        self.predictor = ScorePredictor()

    def forward(self, positive_graph, negative_graph, blocks, x):
        x = self.gcn(blocks, x)
        pos_score = self.predictor(positive_graph, x)
        neg_score = self.predictor(negative_graph, x)
        return pos_score, neg_score

def compute_loss(pos_score, neg_score):
    # an example hinge loss
    n = pos_score.shape[0]
    return (neg_score.view(n, -1) - pos_score.view(n, -1) + 1).clamp(min=0).mean()
in_features = 10
hidden_features = 100
out_features = 10
model = Model(in_features, hidden_features, out_features)
# model = model.cuda()
# opt = torch.optim.Adam(model.parameters())
# for input_nodes, positive_graph, negative_graph, blocks in dataloader:
#     blocks = [b.to(torch.device('cuda')) for b in blocks]
#     positive_graph = positive_graph.to(torch.device('cuda'))
#     negative_graph = negative_graph.to(torch.device('cuda'))
#     input_features = blocks[0].srcdata['features']
#     pos_score, neg_score = model(positive_graph, negative_graph, blocks, input_features)
#     loss = compute_loss(pos_score, neg_score)
#     opt.zero_grad()
#     loss.backward()
#     opt.step()
opt = torch.optim.Adam(model.parameters())
for input_nodes, positive_graph, negative_graph, blocks in dataloader:
    input_features = blocks[0].srcdata['features']
    pos_score, neg_score = model(positive_graph, negative_graph, blocks, input_features)
    loss = compute_loss(pos_score, neg_score)
    opt.zero_grad()
    loss.backward()
    print('loss: ', loss.item())
    opt.step()
Complete code for the heterogeneous-graph example:
import dgl
import dgl.nn as dglnn
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from dgl.data.utils import generate_mask_tensor
from dgl.data import DGLDataset
import torch
n_users = 1000
n_items = 500
n_follows = 3000
n_clicks = 5000
n_dislikes = 500
n_hetero_features = 10
n_user_classes = 5
n_max_clicks = 10
follow_src = np.random.randint(0, n_users, n_follows)
follow_dst = np.random.randint(0, n_users, n_follows)
click_src = np.random.randint(0, n_users, n_clicks)
click_dst = np.random.randint(0, n_items, n_clicks)
dislike_src = np.random.randint(0, n_users, n_dislikes)
dislike_dst = np.random.randint(0, n_items, n_dislikes)
hetero_graph = dgl.heterograph({
    ('user', 'follow', 'user'): (follow_src, follow_dst),
    ('user', 'followed-by', 'user'): (follow_dst, follow_src),
    ('user', 'click', 'item'): (click_src, click_dst),
    ('item', 'clicked-by', 'user'): (click_dst, click_src),
    ('user', 'dislike', 'item'): (dislike_src, dislike_dst),
    ('item', 'disliked-by', 'user'): (dislike_dst, dislike_src)})
hetero_graph.nodes['user'].data['feat'] = torch.randn(n_users, n_hetero_features)
hetero_graph.nodes['item'].data['feat'] = torch.randn(n_items, n_hetero_features)
g = hetero_graph
train_eid_dict = {
    etype: g.edges(etype=etype, form='eid')
    for etype in g.etypes}
class StochasticTwoLayerRGCN(nn.Module):
    def __init__(self, in_feat, hidden_feat, out_feat, rel_names):
        super().__init__()
        self.conv1 = dglnn.HeteroGraphConv({
            rel: dglnn.GraphConv(in_feat, hidden_feat, norm='right')
            for rel in rel_names
        })
        self.conv2 = dglnn.HeteroGraphConv({
            rel: dglnn.GraphConv(hidden_feat, out_feat, norm='right')
            for rel in rel_names
        })

    def forward(self, blocks, x):
        x = self.conv1(blocks[0], x)
        x = self.conv2(blocks[1], x)
        return x

class ScorePredictor(nn.Module):
    def forward(self, edge_subgraph, x):
        with edge_subgraph.local_scope():
            edge_subgraph.ndata['x'] = x
            for etype in edge_subgraph.canonical_etypes:
                edge_subgraph.apply_edges(
                    dgl.function.u_dot_v('x', 'x', 'score'), etype=etype)
            return edge_subgraph.edata['score']

class Model(nn.Module):
    def __init__(self, in_features, hidden_features, out_features, etypes):
        super().__init__()
        self.rgcn = StochasticTwoLayerRGCN(
            in_features, hidden_features, out_features, etypes)
        self.pred = ScorePredictor()

    def forward(self, positive_graph, negative_graph, blocks, x):
        x = self.rgcn(blocks, x)
        pos_score = self.pred(positive_graph, x)
        neg_score = self.pred(negative_graph, x)
        return pos_score, neg_score
sampler = dgl.dataloading.MultiLayerFullNeighborSampler(2)
dataloader = dgl.dataloading.EdgeDataLoader(
    g, train_eid_dict, sampler,
    negative_sampler=dgl.dataloading.negative_sampler.Uniform(5),
    batch_size=10,
    shuffle=True,
    drop_last=False,
    num_workers=0)
# class NegativeSampler(object):
#     def __init__(self, g, k):
#         # cache the probability distribution
#         self.weights = {
#             etype: g.in_degrees(etype=etype).float() ** 0.75
#             for _, etype, _ in g.canonical_etypes
#         }
#         self.k = k
#     def __call__(self, g, eids_dict):
#         result_dict = {}
#         for etype, eids in eids_dict.items():
#             print(etype)
#             src, _ = g.find_edges(eids, etype=etype)
#             src = src.repeat_interleave(self.k)
#             dst = self.weights[etype].multinomial(len(src), replacement=True)
#             result_dict[etype] = (src, dst)
#             print('len_dict: ', result_dict[etype])
#         return result_dict
# dataloader = dgl.dataloading.EdgeDataLoader(
#     g, train_eid_dict, sampler,
#     negative_sampler=NegativeSampler(g, 5),
#     batch_size=1000,
#     shuffle=True,
#     drop_last=False,
#     num_workers=0)
def compute_loss(pos_score, neg_score):
    loss = 0
    # an example hinge loss
    for etype, p_score in pos_score.items():
        if len(p_score) != 0:
            n = p_score.shape[0]
            loss += (neg_score[etype].view(n, -1) - p_score.view(n, -1) + 1).clamp(min=0).mean()
    return loss
in_features = n_hetero_features
hidden_features = 100
out_features = 10
etypes = g.etypes
model = Model(in_features, hidden_features, out_features, etypes)
# model = model.cuda()
# opt = torch.optim.Adam(model.parameters())
# for input_nodes, positive_graph, negative_graph, blocks in dataloader:
#     blocks = [b.to(torch.device('cuda')) for b in blocks]
#     positive_graph = positive_graph.to(torch.device('cuda'))
#     negative_graph = negative_graph.to(torch.device('cuda'))
#     input_features = blocks[0].srcdata['feat']
#     pos_score, neg_score = model(positive_graph, negative_graph, blocks, input_features)
#     loss = compute_loss(pos_score, neg_score)
#     opt.zero_grad()
#     loss.backward()
#     print('loss: ', loss.item())
#     opt.step()
opt = torch.optim.Adam(model.parameters())
for input_nodes, positive_graph, negative_graph, blocks in dataloader:
    print('negative graph: ', negative_graph)
    input_features = blocks[0].srcdata['feat']
    pos_score, neg_score = model(positive_graph, negative_graph, blocks, input_features)
    loss = compute_loss(pos_score, neg_score)
    opt.zero_grad()
    loss.backward()
    print('loss: ', loss.item())
    opt.step()