
How to Save 2/3 of Your GPUs? iQIYI's Exploration and Practice with vGPU

2021-06-01 11:39:25 InfoQ

As artificial intelligence technology advances, more and more services inside iQIYI are driven by deep learning models, giving our users a smarter and more convenient online video experience. For online services, a single container instance typically needs a dedicated GPU in order to serve deep learning inference requests on video, images, speech, or text within millisecond-to-second latency budgets. To keep response latency low, requests are usually handled one at a time and cannot be batched for better compute efficiency, and the intervals between requests are random, so GPU utilization for these services is generally low (see Figure 1). In addition, online request volume has peaks and troughs over a day or other time period, which lowers GPU utilization further. Given how expensive GPUs are, low utilization wastes a great deal of compute and drives up the cost of AI services.

(Image: https://static001.geekbang.org/infoq/5c/5ca627f83858d1ecd506595ed23ec1ae.png)
Figure 1: GPU utilization statistics for online inference services

Against this backdrop, the most direct solution is to deploy multiple services on the same GPU card, sharing the GPU to raise utilization while preserving quality of service. NVIDIA currently offers two main GPU-sharing technologies: (1) **vGPU**; (2) **MPS**.
We briefly compare the two approaches below.

**NVIDIA vGPU**

NVIDIA's vGPU solution uses virtualization: GPU devices are virtualized and managed via SR-IOV, the driver layer provides time-sliced execution, and a degree of display-memory isolation is enforced, so a card can be partitioned according to demand when it is initialized. The time-slice scheduling can split the card evenly across instances or by custom ratios, and the card's memory must be divided according to preset proportions. In practice, NVIDIA's vGPU has two limitations:

(1) Once vGPU partitioning is done, changing the predefined split requires restarting the GPU to take effect; the configuration cannot be changed without a restart.

(2) The solution is VM-based: the GPU host must be virtualized first and containers then deployed inside the VMs, so containers cannot be scheduled directly on the physical machine. vGPU also requires a license fee, which adds cost.

**NVIDIA MPS**
NVIDIA's MPS is a software virtualization solution that partitions compute. Compared with vGPU it is flexible to configure and works well with Docker. MPS uses a client/server architecture: every process running on a GPU configured in MPS mode dynamically sends its kernel launches to the MPS server, which uses CUDA streams to run multiple kernels concurrently. MPS can also cap each process's share of the GPU.

One problem with this approach is that every service process depends on MPS: if the MPS process fails, all processes on that GPU are directly affected, and recovery requires resetting the GPU with nvidia-smi.

01 iQIYI's vGPU Solution

After investigating the options above, to better fit iQIYI's internal AI containerization scenarios we developed our own GPU virtualization and sharing solution for containers. It isolates and allocates display memory and compute by intercepting CUDA API calls, and builds on the open-source aliyun-gpushare scheduler [1] to schedule and allocate virtual GPUs on Kubernetes, achieving the goal of deploying multiple application containers on a single GPU card.
The main strengths of our solution are flexible configuration and tight integration with Kubernetes: vGPU instances are allocated on demand in real time, while each physical GPU is shared as fully as possible to maximize resource utilization.

After designing the solution we evaluated its overall effect, testing how this isolation and sharing affect application performance. For a single process we need to guarantee three things: first, it must not use more than its allocated share of compute; second, the isolation itself should not cost much GPU performance; third, when multiple processes share a GPU, each should perform close to how it performs alone, i.e. sharing effectively prevents interference between processes.

Against these criteria we ran performance tests on the GPU sharing solution; the results are shown in Figure 2.

**The first test evaluates single-process performance under compute isolation.** A single process runs alone on the physical GPU but is configured three ways, with 100%, 50%, and 10% of the compute, and its performance is compared with the same program running without virtualization. The vertical axis is the percentage of un-virtualized performance achieved; the horizontal axis lists the unit-test cases, where regions of the same color are the same CUDA kernel with different launch parameters.
The green, blue, and red points are the performance of 500-odd test cases under their respective compute allocations, relative to running with no compute split and exclusive use of the GPU. The curves smooth these individual points across the whole set for easier visual comparison.

**The second and third tests measure mutual interference between two GPU processes under different compute ratios.** In the second test both processes are configured at 50% compute; the green points are the average performance of the two processes and the red curve is their smoothed trend. This curve is nearly identical to the 50% curve from the first test, which shows that with a 50/50 split the interference between concurrently running processes is almost negligible. In the third test one process gets 70% and the other 30%, and likewise each can be compared against the standalone 70% and 30% curves from the first test.

The results show the solution keeps cross-process interference within a reasonable range. Internal statistics after launch show that on average 100+ deep learning container services share 35 physical GPUs without affecting one another; the average number of services carried per physical GPU rose from 1 to 3, and average GPU utilization more than doubled.

(Image: https://static001.geekbang.org/infoq/37/37516e31ede15a4f36d9d7233597cef5.png)
Figure 2: Isolation test results

02 How iQIYI's GPU Sharing Works Underneath

Let us first look at the underlying mechanics of GPU sharing. As a powerful compute peripheral, a GPU provides two main resources: display memory and compute. To share a single GPU we must isolate both resources and validate the efficiency and performance of the isolation scheme.
2.1 Display memory isolation

A deep learning application's demand for display memory comes from three sources.

1) The first is the CUDA kernel context, comparable to the text segment of a CPU program: it provides the environment the kernel executes in. This is a hard requirement — without enough memory the kernel cannot launch — and the context grows with kernel complexity, but it is the smallest part of a model's overall memory footprint.

2) The second comes from the parameters learned during training, such as the weights and biases of convolutions.

3) The third is temporary storage during inference, used to hold intermediate computation results.

A typical model does not need to occupy the whole GPU's memory. There is one exception, however: by default TensorFlow allocates all GPU memory to do its own memory management. TensorFlow does have options to disable this behavior, but for a platform it is impractical to ask every user to change their TF configuration to disable it.
To address this, there is a neat way to make a deployed TensorFlow application allocate only the memory it needs, without involving the application developer: dynamic API interception. TensorFlow learns the GPU's remaining memory through two CUDA library APIs, cuDeviceTotalMem and cuMemGetInfo. Using LD_PRELOAD, we implement these two APIs in our hook .so; when TensorFlow runs, the linker resolves to our implementations rather than CUDA's, so we can dynamically rewrite the return values — as intended here, clamping a given TensorFlow application's memory quota to the amount it requested.

In the implementation we also intercept cuMemAlloc/cuMemFree the same way, in order to manage the multiple GPU processes within one container collectively. When the processes' combined allocations would exceed their quota, cuMemAlloc returns an out-of-memory error. The per-container quota is tracked in shared memory. Figure 3 shows the full isolation and allocation flow.

(Image: https://static001.geekbang.org/infoq/60/60487ebd2fd6131db5356ba01ae63943.png)
Figure 3: Isolation and allocation flow in display memory partitioning
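A minimal sketch of the quota logic behind these hooks, in plain C. The function names and the in-process counters here are illustrative stand-ins — the real hooks wrap the CUDA driver API via LD_PRELOAD and keep the usage counter in shared memory across processes:

```c
#include <stddef.h>

/* Illustrative stand-ins for per-card and per-container state.
 * In the real system g_used lives in shared memory, visible to
 * every GPU process in the container. */
static size_t g_total = 32UL << 30;   /* 32 GiB physical card  */
static size_t g_quota =  4UL << 30;   /* this container's quota */
static size_t g_used  = 0;            /* bytes allocated so far  */

/* Hooked cuMemGetInfo: report free/total relative to the quota,
 * not the physical card, so frameworks like TF size themselves down. */
void hooked_mem_get_info(size_t *free_mem, size_t *total_mem) {
    *total_mem = (g_quota < g_total) ? g_quota : g_total;
    *free_mem  = *total_mem - g_used;
}

/* Hooked cuMemAlloc: refuse allocations that would exceed the quota.
 * -1 plays the role of CUDA_ERROR_OUT_OF_MEMORY, 0 of CUDA_SUCCESS. */
int hooked_mem_alloc(size_t bytes) {
    if (g_used + bytes > g_quota)
        return -1;
    g_used += bytes;
    return 0;
}
```

Because the hook only changes what the two query APIs report and gates allocations, the framework above it needs no modification at all — which is the point of doing this below the application.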
2.2 Compute isolation

Besides memory, the other important GPU resource is compute. For NVIDIA's Volta architecture, compute comes from three kinds of units: floating-point units, integer units, and Tensor Core acceleration units. The floating-point and integer units are internal structures of the streaming processor (SP), and each SM contains multiple SPs. A V100 has 80 SMs with 64 SPs each, 5120 streaming processors in total; the Tensor Cores sit inside the SMs and share registers, shared memory, and L1 cache with the SPs. Figure 4 shows how NVIDIA GPU hardware is organized.

(Image: https://static001.geekbang.org/infoq/85/855073f42101fa024019d78ba4aba589.png)
Figure 4: Organization of NVIDIA GPU hardware

CUDA, the programming language for NVIDIA GPUs, is logically designed to mirror this hardware hierarchy. CUDA has three logical levels: grid, block, and thread. A grid can be seen as the logical abstraction of the whole card, a block as the abstraction of an SM, and a thread as the abstraction of an SP. To maximize concurrency, SMs can be treated as not interacting with one another. This is not absolute — a program with special logic can create dependencies across SMs — but the price is an enormous waste of performance.

Knowing the GPU's underlying structure and CUDA's design, we can sketch an initial idea for compute virtualization: since some models cannot use all of the GPU's compute, why not cut down the number of SMs they occupy, so that the idle SMs can serve other GPU programs?
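The grid/block/thread hierarchy is also how each thread finds its slice of the data: inside a 1-D kernel, CUDA code derives a flat index from blockIdx, blockDim, and threadIdx. A CPU-side sketch of that mapping (illustrative, not production code):

```c
/* Global thread index, as computed inside a 1-D CUDA kernel:
 *   global_id = blockIdx.x * blockDim.x + threadIdx.x
 * Each thread then processes the data element at global_id. */
int global_thread_id(int block_idx, int block_dim, int thread_idx) {
    return block_idx * block_dim + thread_idx;
}
```

For instance, with 64 threads per block (one per SP in a V100 SM), thread 5 of block 2 handles element 2*64+5 = 133. This index computation matters later: the compute-splitting trick works precisely by taking control of the block index a kernel copy sees.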
The idea is good, but some constraints stand in the way of this optimization. A GPU program executes as kernels: after the CPU side launches a kernel, the kernel and its launch parameters are handed to the GPU's hardware scheduler, which actually runs them at some future moment. By default, a kernel is dispatched across all of the GPU's SMs and cannot be interrupted while executing. As Figure 5 shows, once the software issues the launch command, the command and its parameters cross PCIe to the GPU hardware and are inserted into its queue; logic baked into the GPU hardware decides when the kernel really starts.

(Image: https://static001.geekbang.org/infoq/9a/9aafb70429702eb73f265e8a92ecdc09.png)
Figure 5: Interaction between GPU software and the hardware scheduling system

But nothing is absolute: that the default does not allow it does not mean there is no other way. Let us revisit CUDA's design. As a language for driving efficient parallel computation on the GPU, CUDA is written with the thread as its basic unit. Every SP in an SM runs a copy of the kernel's code, and to a certain extent even in lockstep. CUDA distinguishes threads — and thus the data offset each should process — through the built-in variables blockIdx/threadIdx. At the machine-code level these variables are read-only, assigned when the hardware scheduler dispatches the thread; it is the hardware scheduler that binds the abstract blockIdx/threadIdx to a concrete SM/SP. Figure 6 sketches this mapping.
(Image: https://static001.geekbang.org/infoq/6b/6b7c68a3ea11676a0f7c70aecefde355.png)
Figure 6: Mapping from CUDA logical blocks to hardware compute units

To control compute precisely, we can no longer rely on the hardware scheduler to govern kernel launch. We use a trick here: after a kernel launches, it is "trapped" on a fixed number of SMs, and the ratio of that number to the GPU's total SM count is the compute share granted to the kernel.

To illustrate the idea, consider an abstracted GPU whose SM count is defined as 10, and a kernel launched with parameters <<<15,1>>>, i.e. 15 CUDA blocks with 1 thread each. When it launches normally, the hardware scheduler assigns a copy of the kernel to each SM, immediately consuming 10 of the block copies; as each SM finishes its copy and retires it, the scheduler dispatches the remaining 5 block copies, and once those finish the whole kernel is done.

With compute splitting, we dynamically modify the launch parameters at kernel launch, changing the CUDA block count from 15 to 5. The hardware scheduler then assigns kernel copies to only half the SMs on the GPU, and the idle half is available to other kernels, as shown in Figure 7.
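The dispatch arithmetic in this walkthrough can be written down directly. This is a toy model of the scheduler's behavior under the stated assumptions (one block per SM, 1 thread per block), not an NVIDIA API:

```c
/* Number of dispatch "waves" the hardware scheduler needs when every
 * SM takes one block copy at a time: ceil(num_blocks / num_sms). */
int dispatch_waves(int num_blocks, int num_sms) {
    return (num_blocks + num_sms - 1) / num_sms;
}

/* SMs left free for other kernels once the launch is rescaled so that
 * only `resident_blocks` copies occupy the card. */
int idle_sms(int num_sms, int resident_blocks) {
    return num_sms - resident_blocks;
}
```

With 10 SMs, the original <<<15,1>>> launch takes two waves (10 blocks, then 5); after rescaling to 5 blocks, a single wave occupies half the card and leaves 5 SMs idle for other kernels.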
(Image: https://static001.geekbang.org/infoq/ec/eccd18298274965139edfa4f0f7f1f8d.png)
Figure 7: Splitting compute by dynamically modifying launch parameters

Dynamically modifying the launch parameters keeps the kernel from filling all the SM resources, but at this point the "trapping" is not yet done. As it stands, the kernel would simply exit after completing its predefined logic, and so would no longer cover the data space of the original block count of 15. To trap it, we replace the EXIT in the kernel's assembly with a BRANCH. After the kernel finishes its own logic, it jumps to a preset code stub that increments a virtual blockIdx/threadIdx and then jumps back to the kernel's entry point to run a new round of computation with the updated indices.

It should be pointed out that blockIdx/threadIdx are read-only registers, so their values cannot be changed directly. As a workaround, we replace all uses of blockIdx/threadIdx in the kernel with writable registers, which we can then update in the preset jump stub, as shown in Figure 8.

(Image: https://static001.geekbang.org/infoq/e8/e8d510961b1193f1778e87bb0365bd24.png)
Figure 8: Changing kernel behavior by patching its assembly
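The net effect of the BRANCH patch is that each resident kernel copy loops over several virtual block indices instead of handling exactly one. A CPU-side sketch of the resulting control flow, using the 10-SM example above — the kernel body here just records which virtual blocks ran, purely for illustration:

```c
#define ORIG_BLOCKS 15   /* logical grid size the application launched */
#define RESIDENT    5    /* copies actually launched after rescaling   */

/* Hypothetical kernel body: mark one virtual block as processed. */
static void kernel_body(int virtual_block_idx, int *processed) {
    processed[virtual_block_idx] = 1;
}

/* What a patched, "trapped" kernel copy does: instead of exiting,
 * it branches back with an incremented virtual blockIdx until the
 * original grid is fully covered. */
void trapped_kernel(int resident_idx, int *processed) {
    for (int vb = resident_idx; vb < ORIG_BLOCKS; vb += RESIDENT)
        kernel_body(vb, processed);
}
```

Running the 5 resident copies covers all 15 original block indices (copy 0 handles 0, 5, 10; copy 1 handles 1, 6, 11; and so on), so the computation's result space is unchanged while only half the SMs are occupied.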
03 Scheduling Design for iQIYI's GPU Sharing

With the low-level GPU resource isolation done, we still need to allocate and schedule the isolated GPU resources on our Kubernetes platform, so that business deep learning services can be deployed quickly onto shared GPUs.

Kubernetes containers generally use GPUs through the NVIDIA device plugin (the official NVIDIA plugin), which can assign a Pod one or more cards with a whole card as the minimum allocation unit; it cannot schedule our underlying isolated GPU resources. After investigation we chose aliyun-gpushare, open-sourced by Alibaba Cloud Container Service, as the scheduling solution for isolated GPU resources.

Take display memory as an example: with aliyun-gpushare, a Pod is allocated part of one card's memory, so logically a single card's resources can be split further. Given a 32 GB V100 card, Pod1 can be allocated 4 GB while Pod2 is allocated another 8 GB, and so on until all 32 GB are assigned. The whole scheduling flow is shown in Figure 9.

(Image: https://static001.geekbang.org/infoq/09/09b25056a6fa0423815af310e0d79a80.png)
Figure 9: Overall scheduling flow as published by Alibaba

The main new components are the Share GPU Device Plugin and the Share GPU Schd Extender, abbreviated below as SGDP and SGSE; the remaining components are standard Kubernetes ones.
The main flow in the figure is as follows:

1. When creating a Share GPU Pod, the user must request the Kubernetes custom resource aliyun.com/gpu-mem, stating how much display memory it needs.
2. SGSE assigns the Pod a node based on its Share GPU memory request and the cluster's overall resources, and patches the Pod annotation to designate a specific card.
3. Kubelet calls SGDP's Allocate method to assign the designated GPU card to the Pod, and sets the environment variables ALIYUN_COM_GPU_MEM_CONTAINER (the memory available to the container) and LD_PRELOAD (the path of the memory-limiting dynamic library).
4. When the Pod starts, because LD_PRELOAD is set, every GPU memory allocation made by the AI framework is intercepted by the dynamic library's hooks; when total usage would exceed ALIYUN_COM_GPU_MEM_CONTAINER, the request is rejected. This enforces the user's memory limit.
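Step 2's placement decision boils down to finding a card with enough unallocated gpu-mem and reserving the request on it. A toy first-fit version of that selection — purely illustrative; the real SGSE policy lives in the aliyun-gpushare scheduler extender:

```c
/* Pick the first GPU whose remaining gpu-mem (GiB) covers the request,
 * reserve the memory on it, and return the card index; -1 if none fits. */
int pick_gpu(int *free_mem, int num_gpus, int request) {
    for (int i = 0; i < num_gpus; i++) {
        if (free_mem[i] >= request) {
            free_mem[i] -= request;
            return i;
        }
    }
    return -1;
}
```

For example, with two cards holding 4 GiB and 32 GiB of free gpu-mem, an 8 GiB request lands on the second card, after which a 4 GiB request still fits on the first — exactly the further sub-card splitting described above.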
Compute resources are scheduled with a policy similar to the memory scheduling above.

In practice, we split each physical GPU card's memory and compute into 1/4 and 1/2 shares, and a service chooses the ratio it actually needs. A single GPU can host up to 4 different applications, effectively isolated from and unaffected by one another.

04 Conclusion and Outlook

Through low-level LD_PRELOAD dynamic interception, we achieved lightweight isolation of GPU display memory and compute in containers, allowing multiple containerized applications to be deployed on the same GPU. The solution partitions a single GPU's resources along a dynamic dimension and, for online inference workloads, substantially improves GPU hardware utilization.

In future work we plan to develop and implement remote GPU invocation across hosts, to address the CPU/GPU ratio imbalance that GPU sharing creates on multi-card machines, where some virtual GPU resources are left with no CPU to allocate.

**References**:
1. aliyun-gpushare: https://github.com/AliyunContainerService/GPUshare-scheduler-extender
2. NVIDIA vGPU: https://docs.Nvidia.com/grid/latest/grid-vGPU-user-guide/index.html
3. NVIDIA MPS: https://docs.Nvidia.com/deploy/mps/index.html

Copyright notice
This article was created by [InfoQ]; please include the original link when reposting. Thanks.
https://xie.infoq.cn/article/eb24edc21bfda6527df8aaa31?utm_source=rss&utm_medium=article
