00:21.12 greybeards: And now it is my pleasure to introduce Matt Demas, CTO of GigaIO. So Matt, why don't you tell us a little bit about yourself and what's going on at GigaIO?
00:30.90 Matt Demas: Sure, thanks Ray. My name is Matt Demas and, as you said, I'm the CTO, running the technical strategy for how the company is going to move forward. We've been doing a lot of really cool, interesting things in the realm of composability. You may have heard of composability in the past from companies like HP and Dell composing drives; I'm sure you guys have talked about that before. But we're not only doing storage. We're taking the whole realm of what makes up a system, disaggregating it, and recomposing it back together. Now we're even implementing things like composable memory, and as CXL comes out, it's a pretty exciting field for us to be in.
01:12.70 greybeards: Composable memory? Where does that come from? This is a whole different world for me. It's not a DIMM anymore. Or it is a DIMM, but it's in a box someplace out in the world.
01:20.20 Matt Demas: Yeah, it's actually a DIMM in another box. One area where we're seeing a lot of interest is people saying, "I've got these old servers that I could throw away, and those old servers have a lot of memory in them. Can I go give that memory to a new server?" And we're saying absolutely, let's go do it.
01:36.30 greybeards: That's where CXL fits in, huh?
01:39.90 Matt Demas: Actually, we're doing it before CXL. We're partnering with our friends over at MemVerge, and we're implementing a capability that lets you compose DRAM directly into your system, and then lets your system see that DRAM as if it were natively installed, prior to CXL.
01:58.49 greybeards: All right.
01:59.46 Jason Collier: Hey Matt, this is Jason Collier, I'm now co-hosting, and I'd like to know more about that MemVerge relationship. What are you guys doing with MemVerge?
02:13.91 Matt Demas: Yeah, so that's fairly new for us, to be clear. A lot of what we've been talking about from a memory perspective is aimed at customers trying to do this kind of work today. Having ten or twenty terabytes of memory inside of a server isn't for everybody, but those are certainly the customers we're looking for right now. So we've been working with them for a little while and are getting into the beta stage now. When we compose memory into systems, they run their MemVerge software, and from there they're able to address that memory space and use all the memory management capabilities they have. So I can do things from just doing load/store of memory into remote servers, all the way up to the checkpointing and snapshotting capabilities that MemVerge offers, where I can create snapshots and move those snapshots from one server to another. And soon enough we'll drive to the point where we're allowing multiple servers to share the same memory, so they can all access the same big pool simultaneously.
03:14.76 greybeards: Are there issues there, like needing a PMem-style interface to talk to storage or to talk to DIMMs out on this? It's got to be a PCIe kind of extension or something like that, right?
03:19.47 Jason Collier: Go ahead, Ray.
03:34.30 Matt Demas: Yeah, that's exactly right. GigaIO has natively been a memory fabric from the beginning, meaning that we have a PCIe switch and PCIe interconnects. But the way it communicates...
03:36.17 greybeards: What is this box you guys have got?
03:50.34 Matt Demas: ...is that I have memory in every device out there. I have memory in storage, I have memory in GPUs, and obviously I have memory in servers. So what we do is allow connections to talk directly from one memory space to another memory space. If I want to compose GPUs, it's talking to the GPUs' memory; it's not just creating a logical path, it's talking directly to their memory. We're able to utilize that across the PCIe fabric, or what we call our memory fabric, and it allows us to pull remote pools of memory from a distant server and then capture and utilize that as memory living on the initial host. Once we create that connection, that's what GigaIO does: we create the connection and allow the system to see the remote memory. From there MemVerge takes over and says, "Hey, I see the memory, and I'm going to treat it just like I do my normal PMem." So MemVerge does what it normally does; it's just utilizing our remote memory access.
04:48.30 greybeards: In the server, but on the back end of that is some real memory someplace on this memory fabric. I see.
04:55.24 Matt Demas: Exactly. And because we're running across PCIe, our latencies are so low that you don't really see a performance hit. So we're seeing customers that will get to twenty or thirty terabytes of DRAM on a box without having to go...
05:14.50 greybeards: Twenty or thirty terabytes of DRAM on a box? What are we doing with this thing? Is this like Redis gone mad, or, I don't know, SAP HANA? Thirty terabytes seems...
05:15.16 Matt Demas: ...buy one of these crazy supercomputer-class systems. Seriously.
05:29.40 Jason Collier: I also want to know, when you're talking about latency, what kind of latencies are you talking about?
05:34.13 Matt Demas: Yeah, we're talking latencies similar to traditional high-bandwidth memory, so roughly three hundred nanoseconds. The traditional latency you see with HBM is about three hundred nanoseconds...
05:42.40 Jason Collier: They're talking HBM-class stuff. Wow.
05:52.15 Matt Demas: ...and we're talking right in that same realm.
05:53.84 greybeards: Normally, DRAM on a server is probably, what, an order of magnitude faster than that, right?
06:01.17 Matt Demas: It's generally about 40 to 50 nanoseconds, yes, so it is faster. But that's with PCIe Gen 4. As Gen 5 comes out, we hit less than one hundred nanoseconds, and then you really don't see a difference between composed and non-composed. So we'll be able to offer full scale-out composed memory where it's almost imperceptible, in the very near future with PCIe Gen 5.
06:28.87 greybeards: And that's before CXL hits.
06:38.10 Matt Demas: Exactly. We will have a CXL memory appliance, but even without implementing the CXL functionality, we will have this capability.
06:40.70 Jason Collier: So what are the customers and workloads you're deploying today, and what are you looking at deploying when CXL hits with the 2.x spec, and then the 3.x spec? I can only imagine the customer workloads grow significantly, right?
07:09.96 Matt Demas: That's exactly right. Today I really focus more on the AI/HPC type of workloads, mainly because those are the workloads where single systems need lots of memory.
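The latency figures in this exchange can be lined up in a quick back-of-the-envelope comparison. The numbers below are the rough figures quoted in the conversation (local DRAM around 40-50 ns, composed memory over PCIe Gen 4 around 300 ns, under 100 ns projected for Gen 5), not measured benchmarks:

```python
# Rough latency comparison using the ballpark figures from the conversation.
LOCAL_DRAM_NS = 50        # local DIMM access, roughly 40-50 ns
COMPOSED_GEN4_NS = 300    # composed memory over PCIe Gen 4, HBM-like
COMPOSED_GEN5_NS = 100    # projected upper bound for PCIe Gen 5

def slowdown(remote_ns, local_ns=LOCAL_DRAM_NS):
    """How many times slower a composed access is than a local one."""
    return remote_ns / local_ns

print(f"Gen 4 composed vs local DRAM: {slowdown(COMPOSED_GEN4_NS):.0f}x")
print(f"Gen 5 composed vs local DRAM: {slowdown(COMPOSED_GEN5_NS):.0f}x")
```

So the composed path is roughly a 6x penalty today, shrinking to about 2x with Gen 5, which is why the difference becomes "almost imperceptible" for many workloads.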
You're also going to see areas like Spark and things like that, which we'll likely start working with soon. So, large single systems that need lots of memory. Then in the near future, you take that same concept, where I can dynamically apply memory and add or remove it from a host on demand, and you get into virtualization environments. You start saying, well, what if my VMware cluster has 32 nodes in it? Instead of having to put a terabyte of memory in every box when I'm only actually utilizing about 40% of it, I could choose to put 500 gigs in the box and be using 80 or 90 percent of it. I'm in that kind of weird spot now with how large the memory has to be in each box to be ideal or optimal. So I'm able to set each node to a much lower memory amount and then have a memory pool available to any node in the cluster. If a VM starts to run and a single node gets higher than I want it to be, instead of having to move VMs around to optimize, I can simply compose memory to it, and it'll automatically grab memory from the pool.
08:37.90 greybeards: This whole memory stuff is brand new to me. I mean, composability of GPUs, storage, networking cards, things of that nature, okay. But I saw you do support...
08:48.00 Jason Collier: Come on, you're a Greybeard on Storage, of course.
08:55.99 greybeards: ...composability of storage and GPUs and things of that nature, right? I mean, it's not just a memory solution.
08:59.16 Matt Demas: That is absolutely right, because as I said before, every device has memory, so we talk to them all.
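The cluster-sizing arithmetic in that VMware example can be made concrete. The numbers below are the hypothetical ones from the conversation (a 32-node cluster, 1 TB per node, roughly 40% average utilization), just worked through:

```python
# Worked example of the cluster sizing arithmetic from the conversation.
# Hypothetical numbers: 32-node VMware cluster, 1 TB/node, ~40% utilized.
NODES = 32
PER_NODE_TB = 1.0        # conventionally provisioned DRAM per node
UTILIZATION = 0.40       # fraction of that DRAM actually in use

provisioned_tb = NODES * PER_NODE_TB          # 32 TB installed
in_use_tb = provisioned_tb * UTILIZATION      # ~12.8 TB actually used
stranded_tb = provisioned_tb - in_use_tb      # DRAM idle but locked per node

# Composed alternative: 500 GB per node, with a shared fabric pool for bursts.
SMALL_NODE_TB = 0.5
per_node_used_tb = PER_NODE_TB * UTILIZATION  # ~0.4 TB average per node
local_utilization = per_node_used_tb / SMALL_NODE_TB

print(f"Stranded DRAM at 1 TB/node: {stranded_tb:.1f} TB")
print(f"Per-node utilization at 500 GB/node: {local_utilization:.0%}")
```

In other words, conventional sizing strands roughly 19 TB of DRAM across the cluster, while the smaller local footprint runs at around 80% utilization, with the composed pool absorbing the per-node bursts.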
If I'm going to compose GPUs, I'm going to write directly to the HBM inside that GPU. I can even let GPUs talk to each other across the fabric, so I can do things with DGXs, for example.
09:19.98 greybeards: So GDS, GPUDirect Storage, is that it?
09:35.61 Matt Demas: So you have GPUDirect RDMA, which is what's out there and what everybody kind of knows about today. That's done with InfiniBand or high-speed Ethernet. But when that happens, there are bounce buffers on every single host it has to hit in order for those GPUs to talk to each other, because the RDMA protocol doesn't allow it to talk directly memory-to-memory between these GPUs. It's got to translate from the memory layer out to the IB layer, communicate over IB, then back down again to the memory stack and over to the GPU. It tries to optimize some of that workload, and it does a decent job, but it's like what you saw with GDS, GPUDirect Storage: when they were able to take the R out of RDMA, the claims were 5x faster storage access. We take the R out of RDMA when we do GPUDirect RDMA; we allow it to be plain DMA. The same types of things apply: because I no longer have to hit all these bounce buffers and no longer have to translate all these protocols, I'm talking directly from the memory of host one to the memory of the GPU in host two. It doesn't have to go through the host at all; it bypasses the kernel altogether and goes directly to the remote GPU.
10:48.85 greybeards: This would be really advantageous for an AI kind of environment, where you need a gaggle of GPUs just to keep the training and inferencing activities going, but you never know where you really need them, right?
11:01.39 Matt Demas: Right?
Exactly. And then you take the value of composability, where I can say I have training going on... or rather, if I'm not doing training, if I'm doing inferencing or preparing my data, those GPUs don't have to be in that box yet. Those GPUs could be composed to a different box while this box is preparing data. So I compose disk to it, let it ingest all the data, tag it, label it, and there are no GPUs being used there; they're being used somewhere else. Then when it's time to actually train, I bring the GPUs to it and they're able to be utilized. So I get maximum efficiency out of all those GPUs, and at the same time I can let those GPUs all talk to each other using full DMA capabilities.
11:44.26 greybeards: I think I need this for my crypto mine.
11:45.46 Jason Collier: Yeah, I was going to say, having seen this technology, it is so cool, especially that whole composability piece. And Matt, can you describe a little bit of the hardware, because there's a significant piece of hardware that you sell. What do you plug into the server, what kind of switches do you connect to, and what kind of devices can you connect in to reach those GPUs we're talking about?
12:21.92 Matt Demas: Yeah, absolutely. We start off with, as I said, the memory fabric, and its core basis is that PCIe switch. Inside the PCIe switch I have a bunch of PCIe Gen 4 x4 ports. I have a COM Express module in the back that actually runs all of our software. And from there I plug in any other types of devices I want. I have an HBA that plugs directly into servers; those don't really care what the OEM is, you just plug the HBA in and it connects into that fabric.
Then those servers can all talk to each other, and anything that lives inside those servers can talk to each other. I can talk to the drives that live in the server next to it; I can talk to the GPUs that may live in that server. But if I keep them in the server, they're still locked to that sheet metal, meaning that if I want to build a new server for some unique workload, I still have to communicate across nodes. That's fine, that works great, but I also have another option, and that other option is what we call our pooling appliances. Some people would call them JBoGs, "just a bunch of GPUs"; we call them accelerator pooling appliances and storage pooling appliances. Those are just chassis built for power, cooling, and uplink of PCIe devices, so I can put GPUs, FPGAs, vector engines, even NICs, whatever other types of devices you may want to have, in there and uplink them.
13:47.57 Jason Collier: What is the size of the power supply on that thing?
13:50.96 Matt Demas: It has more than one, let's just say that.
13:55.95 Jason Collier: I can imagine. Also, I do want to rewind just a little bit. The card that you're sticking in the server that connects into your switch: is it basically a PCIe card that is literally just carrying PCIe into your PCIe switch?
14:20.22 Matt Demas: That is exactly right. Great piece to point out. We don't have offloads; we don't have to translate anything. It is native PCIe. What that means is that if I'm going to compose a device across that card, it doesn't have to translate to anything. If I plug a GPU into our accelerator pooling appliance, and that goes through our fabric to the HBA...
...it's communicating PCIe the entire way, so it's literally talking as if it's plugged directly into the server.
14:51.65 Jason Collier: Another silly IT-nerd question: what does that cable look like?
14:56.95 Matt Demas: The cable comes in two different form factors. One is going to be your copper cable, and that looks just like a SAS cable; if you're used to connecting storage arrays and filers, it looks just like a SAS cable. If you're going longer distances, you're going to use our fiber option, and that's going to look very similar to an AOC from Mellanox.
15:25.78 greybeards: So humor me for a second on the storage stuff. How does this play out? Effectively there's a gaggle of NVMe SSDs sitting in your pooling appliance, and they can be connected to any of the servers that have your HBA card in them. Is that how it plays out?
15:44.56 Matt Demas: Yeah, we actually have two ways to do it. That is one way, using our storage pooling appliance. I basically put a bunch of drives into a 1U pooling appliance, I have uplinks from there, and I assign how many drives I want to go to which server. Basically, in a matter of five seconds, those physical drives are electrically connected to that remote server, and that server has full DMA capability; it owns those drives as if they were plugged directly into the box. That is by far the most performant way to connect a drive. In fact, we ran some tests with Optane, because of the latency characteristics of Optane, and found that...
16:38.97 Matt Demas: ...when doing the full composition, I was able to do full reads and writes to that composed Optane while adding only one more microsecond of latency over having it locally installed in the box.
16:45.44 greybeards: Whoa, that is nice. You know, the thing is, it's like a ten-microsecond-latency device to begin with, so okay, now it's eleven. It's still pretty damn nice.
16:54.49 Matt Demas: Right. But we've been talking about nanoseconds this whole time, so, you know, it seems slow.
16:59.45 Jason Collier: Yeah, eleven microseconds in storage is awesome.
17:05.44 greybeards: It is, no doubt. So stay with me for a second now. Let's say I have a server that had five of these NVMe SSDs and I want to move two of those to another server. What has to happen here? Do the two servers have to be rebooted, or can it be done nondisruptively? And you must have some sort of software orchestrating all this stuff, right?
17:36.79 Matt Demas: Yeah, exactly. That's one of the great things about what NVMe-oF really did back in the day: give you that hot-add, hot-remove capability. And it really was more NVMe than NVMe-oF. With the implementation of NVMe into a server, it forced all these kernels to handle hot-adds and hot-removes in a much different way than they used to. If I plugged a PCIe device into a server, what, eight years ago, that server was gone, right? But because NVMe drives had to be able to be pulled out and plugged in at the front of a server without the kernel crashing...
18:02.70 greybeards: Right, yeah.
18:13.30 Matt Demas: ...all those pieces of code have been put in place, so now I can hot-add and hot-remove devices on the fly.
18:15.81 greybeards: So the hot-swap support made this all possible for NVMe SSDs. How does this work for GPUs? Are GPUs hot-swappable?
18:24.94 Matt Demas: It really depends on the exact OS. Some OSes support it; some are a little quirky, meaning that if I have a PCIe hub composed to a system, I can add more GPUs and remove them without an issue, but if I'm adding a whole new set of GPUs, I'll have to restart the system. To be honest, though, when you're talking about GPUs you're talking about drivers, and all those drivers have to be restarted anyway when you add or remove a GPU. So the idea of having to restart the server versus restart the service is not really that...
19:03.68 Jason Collier: When it comes down to it, those are the components that don't fail that often, and when they do, it's because you've overstressed them heavily.
19:03.74 Matt Demas: ...big of an issue. Exactly.
19:10.11 greybeards: And that's why they hot-swap. Yeah, you need to look at my crypto mine, with its boneyard of GPUs.
19:20.30 Jason Collier: Right, right. You are excluded from this because you do all kinds of silly things that you shouldn't do.
19:24.13 Matt Demas: Well, actually, on that note, think about it: if you're putting those GPUs inside your server, they're fighting for cooling. They're fighting the CPU, they're fighting the memory, they're fighting the disk for cold air.
19:28.77 greybeards: No doubt, I know, right?
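The hot-add/hot-remove flow described above rides on the standard Linux PCI hotplug machinery. GigaIO's own tooling automates this through its fabric software, so the exact mechanism is an assumption here, but the stock kernel interface is the sysfs `remove` and `rescan` attributes. The sketch below only builds the (path, value) writes an admin would issue, without executing them, so it runs without root or real hardware; the device address is a hypothetical example:

```python
# Sketch of the generic Linux sysfs interface behind PCIe hot-remove/rescan.
# GigaIO's actual tooling drives this via its own software; the paths below
# are the stock kernel sysfs attributes. The function only constructs the
# (path, value) writes rather than performing them.
def pci_hotplug_writes(bdf, action):
    """Return the sysfs write(s) for hot-removing a device or rescanning.

    bdf: PCI address like '0000:3b:00.0' (hypothetical example).
    action: 'remove' or 'rescan'.
    """
    if action == "remove":
        return [(f"/sys/bus/pci/devices/{bdf}/remove", "1")]
    if action == "rescan":
        return [("/sys/bus/pci/rescan", "1")]
    raise ValueError(f"unknown action: {action}")

for path, value in pci_hotplug_writes("0000:3b:00.0", "remove"):
    print(f"echo {value} > {path}")
```

A remove followed by a rescan is the manual equivalent of the "five seconds and the drive is electrically connected" behavior Matt describes, with the fabric handling the electrical rerouting in between.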
19:42.32 Matt Demas: And so by putting those hot devices inside our accelerator pooling appliances, I'm able to really increase the life of both my GPUs and my servers, because they're not fighting and battling for that cold air the entire time.
19:58.85 Jason Collier: Matt, if you're arguing the point that Ray needs your appliances, I don't think you need to argue that. Ray is going to totally agree with that one.
20:07.23 greybeards: Yeah, that's a different question. Wait, let's go back to customers. How does this play out in an HPC environment and things of that nature? You would think these supercomputer environments could really benefit from a gaggle of GPUs sitting in a rack or two that could be allocated wherever they need to be allocated.
20:32.43 Jason Collier: Oh come on, tell us about TACC. Really, that's what Ray's asking.
20:35.23 Matt Demas: Yeah, I mean, it's definitely a huge advantage for a lot of HPC customers: the idea of being able to be dynamic. You'll talk to some people in the HPC space and they'll fight against it, because it's not what they're used to, and a lot of people are set in the way they do things. But when you look at what HPC is today, and how AI has merged with HPC, and the fact that most of these systems, especially the larger ones, are not built to do one problem...
21:04.72 greybeards: Right.
21:10.29 Matt Demas: ...they're built to do hundreds of problems, and so they expect to have different challenges all the time. The idea of having a homogeneous compute environment makes no sense, because if everything has to be the same, that means instead of trying to solve a problem the right way, I have to change my code and make it adjust to the hardware. I'm not writing the code I want to write; I'm writing the code I have to write. What we really enable is the ability to software-define your hardware. A lot of these universities are really starting to see this capability, where I can now say yes to my customers, instead of saying, "Well, we could, but you've got to change this, this, and this, and we've got to buy something that looks like this to make that happen. Give me your wallet, and I'll talk to you in nine months." Instead of having to do that, they're able to say yes, or at most, "Hey, buy that new card you want, I'll add it to the fabric, and then I'll say yes." So we're talking about a couple of weeks instead of nine months to a year.
22:14.50 Jason Collier: Matt, I've got a lot of friends and coworkers I need to introduce you to. Half of them I think you already know.
22:14.64 greybeards: Right, right. Talk to me a little bit about the software. Is there an operations console or something like that that you talk to your composability solution through, or is it API-driven?
22:38.53 Matt Demas: Yes. We made a conscious effort early on to say you don't need another GUI; we want to be transparent. So what we've done is make everything Redfish-based, and I can do all of my composition through the same API that you're already using to manage your hardware.
Since we're moving hardware around and creating hardware connections between devices, it makes sense that Redfish is the API that was chosen. We actually don't have a GUI in our environment; today everything is Redfish-API-driven. And we've integrated with a bunch of partners, and when I say integrated, we didn't build a plugin: they actually came to us and asked to integrate our capabilities into their software, because they saw the value it would bring to their end customers.
23:24.00 greybeards: So, like Slurm or something like that?
23:28.10 Matt Demas: Yeah. So Bright, for example: Bright Cluster Manager has integrated GigaIO, and obviously you saw some big news about those guys this week. Then you have some Slurm integrations through a couple of different partners; that being an open-source product, it's something people can do, and a few of our partners have actually integrated us into Slurm. We also have a company called CtrlIQ, you may have heard of them, Greg Kurtzer's new company. They're building a product called Fuzzball, and Fuzzball is already implementing us in their 1.0 release, set to come out shortly. That's going to be a cloud-native HPC toolkit.
24:11.52 Jason Collier: Yeah, it's good tech. It's fun to see innovators spurring other innovators to innovate. A lot of usage of the word "innovate" there.
24:22.58 greybeards: Yeah, so how would something like... go ahead, Jason. How would something like Red Hat or V...
24:31.51 Jason Collier: But startups, awesome. No, I'm done, I'm done.
I get so excited about this stuff when I see startups fueling startups.
24:50.28 Jason Collier: That is the number one thing a founder can be proud of.
24:55.86 Matt Demas: Yeah, and I mean, shoot, that's even how this started.
24:57.24 greybeards: And Jason, you've got a lot to be proud of. Right, let me get back to the script here. So how does something like this work with Red Hat or VMware or Nutanix, those kinds of guys? How does this play out in that space?
25:11.29 Matt Demas: Yeah, so some of that is in the works right now. We've done some testing with VMware, for example, and we have beta code with ESXi that allows us to compose. In ESXi we can actually do it without any reboot at all when adding GPUs and devices, so that's super exciting. We're waiting to see what comes further from that relationship. Then you have things like Red Hat, where we're in talks today. The easiest way to implement that right now is actually through a Supermicro product called SuperCloud Composer. They're getting into the software business now, which is nice to see, and their first release of it is their platform management software, and they've integrated GigaIO into that as well. So you can manage your whole rack, or a rack to a data center's worth of systems, and that's Supermicro systems, Dell systems, HP systems, it kind of manages them all, and you can actually compose your devices across those systems using that toolset as well.
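The Redfish-based composition described a few exchanges back follows the DMTF composability model: resource blocks (a GPU, a drive, a memory module) are bound into a computer system by POSTing to the systems collection with links to those blocks. The sketch below only assembles such a request body; the endpoint paths and block names are illustrative examples, and GigaIO's exact schema may differ:

```python
import json

def compose_request(system_name, resource_block_paths):
    """Build a Redfish-style composition request body.

    Follows the DMTF composability pattern of linking ResourceBlocks into a
    new ComputerSystem; the paths passed in are illustrative examples only.
    """
    return {
        "Name": system_name,
        "Links": {
            "ResourceBlocks": [
                {"@odata.id": path} for path in resource_block_paths
            ]
        },
    }

# Hypothetical composition: one GPU block and one drive block into a node.
body = compose_request(
    "train-node-01",
    [
        "/redfish/v1/CompositionService/ResourceBlocks/GPU1",
        "/redfish/v1/CompositionService/ResourceBlocks/Drive7",
    ],
)
print(json.dumps(body, indent=2))
```

The appeal of this approach, as Matt notes, is that cluster managers like Bright or Slurm-based schedulers can drive composition through the same Redfish endpoints they already use for hardware management, with no vendor GUI in the loop.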
26:26.91 greybeards: So from an ESXi perspective, you're actually messing with ESXi's hardware in real time, which is not something you typically see. So you and VMware are providing a capability to make this sort of thing happen with GPUs and NVMe SSDs, I guess?
26:47.20 Matt Demas: Yeah. You've seen that VMware has been pretty excited about trying to get this composability aspect to work; they've made acquisitions to do that. Obviously we're still in the early stages with those guys, and we have it working; we can actually work with some customers and show them how to use it. We're still waiting to see what those next steps look like with VMware, but we're pretty excited about what that'll do for the enterprise market.
27:14.63 greybeards: Yeah, but none of these guys really deal with the memory side of things. So when you start talking about being able to expand an ESXi solution from 512 gig to a terabyte or two in real time, it's a different world, I would think.
27:33.90 Matt Demas: It is, and these are conversations that are likely to be had soon, so I can't really talk too much about what that looks like today, because, I'll be honest, it doesn't look like anything yet today. But I have a feeling it will be soon.
27:47.63 greybeards: Yeah, I'm thinking SAP HANA and Redis and all these guys are driving bigger and bigger servers, and having the ability to do something like this would be something VMware and those guys would want, truthfully.
27:56.55 Matt Demas: They are.
28:04.38 Matt Demas: Right, no, absolutely. And not only that, I think it's more about traditional data center flexibility. We've been told that composability is the VMware of today, right?
That ability to be as flexible as you need to be, to meet your customers' demands: that's what VMware was founded to do, right? Take that server and let you say yes all the time, because I was able to take something big and make it into all these small things and be very flexible. That's the purpose of virtualization, and we're hopefully going to help take that to the next level.
28:40.25 greybeards: Yeah, I can imagine.
28:41.27 Jason Collier: And I think you're well positioned to take it to the next level. One of the things I've always wondered is: where do virtual machines go to get to the next level? I think it's when you can create a virtual machine that's bigger than the physical constructs of the machine it runs on. When you've got a machine with a terabyte of RAM but you can create a virtual machine with two terabytes of memory, that's something special. That's where CXL is going to come in, that's where everything you guys are doing at GigaIO is going, and that's going to push computing forward.
29:45.12 Matt Demas: I totally agree. And the way I see it right now with CXL: with the first generation of CXL, VMware won't be able to do anything with it from that perspective. They won't be able to share anything across servers. But as I said, GigaIO can.
30:03.63 Matt Demas And we actually have designs to go do that. So we will be having CXL-enabled sharing even on PCIe Gen 5, with CXL 1.1 support inside the servers. We've figured out how to do that, so CXL will be coming in a shared arena... 30:15.10 greybeards That's great. 30:20.71 Matt Demas ...here, right along with the PCIe Gen 5 servers as they come out. 30:23.17 greybeards Yeah. Hey, Matt, besides the CXL standards and stuff like that, there are other standards organizations in the composability space. Do you guys play in that environment as well? 30:33.99 Matt Demas Yeah, so obviously we've been in the CXL Consortium since the beginning. And it's really the OCP piece that more of the composability work is driving into. 30:43.30 greybeards Right. 30:52.36 Matt Demas So Redfish has been a big part of it; that's why everything's also been really focused on Redfish. But you're going to see a lot more from us here, working with OCP and the composable aspects of it. Yeah, right. 31:02.15 greybeards Yeah, this metaverse thing seems to be a pretty serious application waiting for composability, as far as I can tell. I was thinking, there's this OpenFlex thing. Are you guys involved in OpenFlex at all? 31:10.82 Matt Demas Yeah. Um, so not really. I'll say at this point, no, we are not. 31:21.82 greybeards Okay, well, that's fine, that's fine. So let's talk big things: what's the biggest memory pooling appliance that you guys support at this point, and how many servers is it potentially distributed over? 31:41.67 Matt Demas Well, that's the thing, it's really kind of whatever your imagination can come up with. I mean, there are limits, but basically I can create as many memory... I can...
31:47.58 greybeards I can imagine a pretty big world here now. 31:59.38 Matt Demas ...create a certain number of memory windows that I can go mount memory to for that server. It gets kind of technical. I can create so many of them based on the BIOS of that server, but how much memory I put into each of those windows is configurable. So if I have servers that each have a terabyte of memory in my memory pooling appliances... 32:17.77 greybeards It's almost like a virtual page space. So you've got a physical page space that you're managing on the server itself, and the virtual page space behind it used to be on storage; now it's sitting on a memory device off a PCIe fabric. Is that what you're telling me? 32:27.40 Matt Demas Right, that's exactly right. And then you use MemVerge and their technology to literally keep it hot and cold dynamically. 32:39.69 greybeards Hot and cold memory. 32:41.58 Matt Demas Yeah, oh yeah. Well, it's more the memory pages, right? So I'm bringing the warmer pages up whenever they're needed, and I'm dynamically trying to keep everything in the fastest memory. But when it's not in the fastest memory, it's still in really fast memory; it didn't have to go pull down to microseconds. It's still well within the nanosecond range. 33:00.33 greybeards And... 33:08.21 greybeards God, this is mind-blowing. 33:10.20 Jason Collier So Ray, how gray is that beard feeling in storage now? I'm just saying. Yeah. 33:14.31 Matt Demas Ha. 33:15.50 greybeards Yeah, tell me about it. You know, I've been doing virtual memory for about four decades here, but I was talking like 16 gig, right? Or something. Yeah, tell me about it, it's a different world. 33:25.66 Jason Collier That was awesome. Yeah. 33:32.82 greybeards Ah, so the MemVerge thing, so it actually plugs in...
...it's sort of like it has PMem sitting on the server, and then how is that connected to the fabric? I guess I'm trying to understand. So PMem seemed like, in the past, it was just PMem. 33:45.65 Matt Demas Um, yeah. 33:52.83 greybeards MemVerge was just a couple of PMems and DRAM, and it would carve it up for you internally in the server, but there was no external version of that in the old days. 34:03.65 Matt Demas That's exactly right. And so today they still offer that capability, right? PMem is just a tier of storage, and remote memory is going to be another tier of that storage. So basically, if I have PMem on the system — and to be honest, the latency between PMem and remote memory is pretty similar; we're just faster on the backside of it — it gives you the option. I could even compose PMem remotely across that PCIe fabric if I wanted to. So you can choose DRAM or PMem... 34:34.77 Jason Collier Hey, so Matt, with that, what is that latency? What's the latency differential? 34:35.46 Matt Demas ...as that remote memory. 34:41.67 Matt Demas From PMem versus composed DRAM? It's actually pretty similar. They both run right around three hundred nanoseconds, yeah. 34:46.62 Jason Collier Yep. 34:51.40 greybeards Yeah, I keep thinking there should be a plugin to the DIMM with a PCIe bus floating back behind it or something like that. Is that how this works? I'm just trying to understand how the fabric... so it's all logical, it's all PCIe. There's no real plugin other than... 34:53.73 Jason Collier Very cool. 35:03.89 Matt Demas Um, yes. 35:11.00 Matt Demas That's right, yeah. The DIMMs live in the DIMM slots on the server, and they all talk over PCIe. 35:11.17 Jason Collier It's all PCIe. That's the beauty of the architecture. It's all PCIe. Yeah.
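The hot/cold page placement Matt describes — warm pages promoted into the fastest local tier, cold ones demoted out to the fabric-attached pool — can be modeled as a simple LRU policy. The sketch below is a toy illustration of that idea only; it is not MemVerge's actual algorithm, and the class and tier names are invented for the example:

```python
from collections import OrderedDict

class TieredMemory:
    """Toy model of hot/cold page placement across a small fast
    local tier and a larger remote (fabric-attached) tier. Each
    access promotes the page to the local tier; when the local
    tier overflows, its least-recently-used page is demoted."""

    def __init__(self, local_pages):
        self.local = OrderedDict()   # page -> None, kept in LRU order
        self.remote = set()
        self.capacity = local_pages

    def touch(self, page):
        if page in self.local:
            self.local.move_to_end(page)   # already hot: refresh recency
            return "hit"
        self.remote.discard(page)
        self.local[page] = None            # promote into the fast tier
        if len(self.local) > self.capacity:
            cold, _ = self.local.popitem(last=False)
            self.remote.add(cold)          # demote the coldest page
        return "promoted"

mem = TieredMemory(local_pages=2)
for p in ["a", "b", "c", "a"]:
    mem.touch(p)
print(sorted(mem.local), sorted(mem.remote))  # ['a', 'c'] ['b']
```

The point of the model is that, as in Matt's description, a demoted page never drops to microsecond storage latency: it just moves to the slower-but-still-nanosecond remote pool, from which a later touch promotes it back.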
35:11.27 greybeards There's an HBA sitting on that server, right? Is that what's happening here? Tell me about it. And it's the MemVerge software that makes that happen, as well as your composability software someplace on the back end of that. 35:26.30 Matt Demas Yeah, I mean, we provide the transport; we actually create the connectivity. And from MemVerge's perspective, it sees the memory we connect just the same as it sees the PMem living on its own server. So it just accesses it and says, hey, I'm going to make you a different tier than the PMem living on me. And then if I were to compose more memory, from either a farther-away server or from PMem on another server, it would make that a different tier with its own characteristics, and then it'll page according to the performance characteristics of the memory... 35:59.65 greybeards Right, right. So you guys have tightly integrated this solution with MemVerge, it appears. 36:00.66 Matt Demas ...that's on that system. 36:04.90 Matt Demas Um, it is getting tighter by the day. Um... 36:05.30 Jason Collier Yeah, I was getting ready to go there, Ray. I'm just like, you guys keep saying MemVerge a lot. Um. 36:13.47 greybeards Ah, yeah, yeah, but it's a total solution here. All right, so... 36:19.78 Jason Collier Um, yeah, it is a total solution. I mean, it's a fantastic solution. So what is to come of your organization and MemVerge? 36:32.95 Matt Demas I would not comment on that one. Yeah, not going there. 36:35.42 greybeards That's a good way. So tell me, all right, back to the talk here. How does this thing get sold? Do you sell through partners only, or are you direct sales, that sort of... 36:38.30 Jason Collier Ah, like... 36:46.80 Matt Demas Yeah, so we are a partner-only organization. We do have a direct sales team, but that direct sales team still will only work with partners. Um, we...
36:56.88 greybeards So who are some of your bigger partners, then, I guess? 37:01.51 Matt Demas Yeah, so from a channel perspective, from a federal perspective, we have federal integrators, from CTG Federal to Cambridge Computer to ID Technologies, and then we have ICC from more of the commercial side. We have Advanced Data Systems, that we just did some stuff with, with SDSC, the San Diego Supercomputer Center. So it's an ever-growing list. Our distribution right now, we're going through Arrow. But we are trying to keep it fairly small. 37:38.11 greybeards And... 37:40.27 Matt Demas And our partners are always going to be those partners that value technology first and want to drive the latest and greatest and the new cool stuff. I'm not looking for a partner that's looking to just make a phone call and say, hey, you need a server? Here you go, I can get you a server. 37:55.88 Jason Collier Arrow is a great disti, by the way. Those guys are awesome. Yeah. 37:57.92 Matt Demas I... 37:58.67 greybeards Yeah, yeah. So I mean, it seems like this is almost targeted primarily at HPC, but there's a commercial side of this as well, right? I mean... 38:00.95 Matt Demas Um, yeah, yeah, I love them. So, um, yeah. 38:11.79 Matt Demas Oh, absolutely. So you have HPC, and you also have the AI side of that, so it's definitely merging together. And as the memory piece comes out farther, you're going to see a lot more things like... 38:15.92 greybeards Yeah. 38:28.42 Matt Demas ...traditional deep databases, in-memory databases, that are going to be more in focus. And then you're also going to see some more of this DevOps stuff that I'm really excited about, right? That ability to go compose devices to a container as they spawn is really cool.
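The "compose devices to a container as it spawns" flow Matt is excited about can be sketched as a simple attach-run-release script. GigaIO's actual API is not public, so the client, endpoint names, and image below are all hypothetical stand-ins — this only illustrates the shape of an API-driven composition workflow:

```python
def compose_for_job(client, node, devices):
    """Sketch of a 'compose, run, release' DevOps flow: attach
    fabric devices (GPUs, NVMe drives, memory) to a node, run the
    container, then return the devices to the pool. `client` is a
    hypothetical composition-API wrapper, not a real GigaIO SDK."""
    for dev in devices:
        client.attach(node, dev)          # bind device over the fabric
    try:
        return client.run_container(node, image="train:latest")
    finally:
        for dev in devices:
            client.detach(node, dev)      # release even if the job fails

class FakeClient:
    """In-memory stand-in so the flow can be exercised offline."""
    def __init__(self):
        self.log = []
    def attach(self, node, dev):
        self.log.append(("attach", node, dev))
    def detach(self, node, dev):
        self.log.append(("detach", node, dev))
    def run_container(self, node, image):
        self.log.append(("run", node, image))
        return "ok"

c = FakeClient()
result = compose_for_job(c, "node1", ["gpu0", "gpu1"])
print(result, len(c.log))  # ok 5
```

The try/finally is the important design point: devices go back to the shared pool whether the job succeeds or dies, which is what keeps utilization high across many short-lived containers.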
38:46.20 greybeards Well, you didn't say anything about using Kubernetes and all this stuff. So Kubernetes — wait a minute — so I can change the pod configuration on the fly to run the containers? Yeah. 38:48.22 Matt Demas So... 38:56.60 Jason Collier I... 38:56.27 Matt Demas Well, today we do that through Bright, right? So Bright controls all of that for us, and all of that works well there. But honestly, it's all API-driven, so it can be scripted also. So yeah, as you create a new container, I can go compose memory or compose devices for that container, and go... 39:06.81 greybeards Yeah, so... 39:16.24 Matt Demas ...say those devices are only for that specific container, and literally let you change your code sets immediately. It is the right way to go if you're trying to be a true DevOps, very flexible environment. Ultimately, we will be your cloud. 39:18.80 greybeards And... 39:31.10 greybeards Yeah, yeah. 39:35.97 Matt Demas Right? That is our goal: to give you all the cloud flexibility without the price that comes with it. 39:41.99 Jason Collier Okay, so, great segue. So from a cloud perspective, when you're talking about the mega data centers out there in the world, if I wanted to basically take a look at your technology, what clouds could I go to? 40:00.72 Matt Demas Um, I would say I can't... Yes, so... they have very strong NDAs and lots of lawyers. 40:04.55 Jason Collier Not disclosed. All right, fair enough, fair enough. Um, yeah, no, I know, trust me, I know. 40:06.90 greybeards At this point, that's fine. But yeah, you mentioned the money. Yeah, okay, Matt, you mentioned a money word. How much does something like this cost, and how is it...
...charged for? I mean, obviously there's storage, there's GPUs, and there's memory, and all that's charged however it's charged. But then there's this rack device that you actually are supporting, and then it's obviously your own... 40:28.87 Matt Demas Um, yeah. 40:39.84 greybeards ...PCIe switch and the controller, right? Or something, right? 40:40.30 Matt Demas Yeah, yeah, no, absolutely. So it's all relatively inexpensive — obviously, of course, I'm going to say that. But basically, what we found is, because you gain so much utilization of all your devices... 40:48.31 greybeards Ah. 40:59.20 Matt Demas ...you go from having 30% GPU utilization to 70%, and we generally end up actually selling less hardware overall. A lot of times it's actually servers that we sell less of, because you're able to go reconfigure your hardware to match your unique needs, and you end up spending a lot less in a composed solution. And the number of jobs you can run actually significantly increases. So it's hard to say what it costs, because generally, like I said, you'll end up reconfiguring your design to have less of this, less of that, and still meet the same job requirements. 41:36.80 Jason Collier Do you have any examples of that that you can provide — any kind of total-cost-of-ownership kind of documents that you've got? 41:48.77 greybeards So, like, my crypto mine has got like six GPUs per server, and I've got like one or two with one GPU, one or two with four, things like that. I'd kind of like to spread them all across all the servers. 41:58.34 Jason Collier Yep. 41:59.73 Matt Demas Right, right, exactly. So all those things are possible. From a TCO perspective, we do have a TCO calculator that we could show, and what I love about it is we actually...
...show a plot map of a whole bunch of jobs being done, with certain sizes for each of those jobs, and show you what it would look like with a certain static architecture versus what it looks like composed, as far as those jobs completing, using a couple of definable characteristics for those jobs. And you'll literally see... 42:33.53 greybeards And... 42:37.12 Matt Demas ...you can then pull back certain sets of hardware and go: I'm still doing more jobs, still doing more jobs, all right, now we're finally breaking even. And you're seeing how much less hardware you can do that with, which is of course significant power savings to boot, not to mention hardware cost. 42:47.11 greybeards And something like this now... 42:51.40 Jason Collier I've got a lot of friends in HPC that will love that, right? Only because they have been through that. They've gone through the procurement cycle of: oh... 42:54.86 Matt Demas Um, yeah. 43:08.26 Jason Collier ...we have to put a GPU in every node that we're deploying in this supercompute platform that we're putting out there. However, the people that are developing the algorithms are not developing the algorithms for GPUs. 43:14.30 Matt Demas Um, yep. 43:24.36 Matt Demas Um, right. 43:24.90 greybeards Yeah. 43:27.20 Jason Collier So there is this giant lag in where that stuff is actually used, and being able to compose that infrastructure — which is exactly what you guys are doing... 43:40.10 greybeards On the fly, I might add. 43:40.17 Matt Demas Um, yeah. 43:43.24 Jason Collier ...being able to compose that infrastructure, determining what assets you have available and how you allocate those — that is the gold mine of this. 43:57.61 greybeards Pretty damn impressive. 44:01.42 Jason Collier Now, this has been tried many times, composable infrastructure.
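The break-even exercise Matt describes with the TCO calculator reduces to simple utilization arithmetic: the same GPU-hour demand needs fewer devices when pooled utilization rises from 30% to 70%. The numbers below are hypothetical, chosen only to make the calculation concrete; they are not GigaIO figures:

```python
import math

def gpus_needed(total_gpu_hours, utilization, hours_per_year=24 * 365):
    """GPUs required to deliver a yearly GPU-hour demand at a given
    average utilization (fraction of wall-clock time doing work)."""
    return math.ceil(total_gpu_hours / (hours_per_year * utilization))

demand = 500_000  # GPU-hours/year, a hypothetical cluster workload
static = gpus_needed(demand, 0.30)    # GPUs stranded in fixed servers
composed = gpus_needed(demand, 0.70)  # GPUs pooled across the fabric
print(static, composed)  # 191 82
```

The same arithmetic drives the power-savings claim: each GPU not purchased is also a GPU not drawing idle power in a rack, which is why pulling hardware back while "still doing more jobs" compounds the savings.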
You know, my first toe-dip into the water of composable infrastructure was with the SGI Origin 3000. 44:03.10 Matt Demas Um, yep. 44:17.14 greybeards Um. 44:17.43 Matt Demas Um, okay, yeah, yeah. 44:20.96 Jason Collier That was a great system. Yeah, that's why I have a gray beard, that's why I'm on Greybeards on Storage. I remember that thing; it was a great system. But honestly, what it did — I mean, guess what it was? 44:24.77 greybeards That's why you have a gray beard. 44:29.56 greybeards Exactly, exactly, exactly. 44:29.66 Matt Demas Um, and... 44:39.75 Matt Demas Um, right. 44:39.77 greybeards And... 44:40.52 Jason Collier A PCI switch — well, it wasn't PCI at the time, it was SGI's proprietary stuff — but it's exactly the same thing that you're offering now. How is what you're offering now different? 44:45.35 Matt Demas Um, here. Yeah, well, I mean, even Intel tried it, right? So Intel had tried doing the same thing, but what they were doing it on was PCIe Gen 2, and... 44:45.64 greybeards Then. And. 45:03.28 Matt Demas ...the challenge was the latencies just could not keep up with what was actually required, right? So you're talking about almost microsecond latencies at the time, and composing resources over that type of distance, with that type of latency, just caused too many errors in the hardware. Not to mention... 45:06.75 greybeards All right. 45:23.24 Matt Demas ...we've now implemented non-transparent bridging, so that NTB is how we are able to go talk memory to memory. And a lot of the kernels for operating systems haven't really enabled that until fairly recently, so a lot of that communication path using NTB is fairly new. I mean... 45:40.28 greybeards Yeah. 45:43.10 Matt Demas ...so a lot of this stuff really wasn't an option — wasn't truly an option — to do it the way we did it.
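The non-transparent bridging (NTB) Matt credits for memory-to-memory talk is, at its core, address translation: a window of local bus addresses that the bridge maps onto a range of a remote host's memory. Real NTB setup lives in the kernel (Linux has `ntb_transport`, for example); this toy model, with invented base addresses, just shows the address arithmetic of one window:

```python
class NTBWindow:
    """Toy model of a non-transparent bridge window: accesses to a
    range of local bus addresses are translated by the bridge into
    a corresponding range on the remote host. Base addresses and
    sizes here are made up for illustration."""

    def __init__(self, local_base, remote_base, size):
        self.local_base = local_base
        self.remote_base = remote_base
        self.size = size

    def translate(self, local_addr):
        off = local_addr - self.local_base
        if not 0 <= off < self.size:
            raise ValueError("address outside NTB window")
        return self.remote_base + off  # same offset, remote range

# A 256 MiB window of local address space aimed at remote memory.
win = NTBWindow(local_base=0x4000_0000,
                remote_base=0x1_0000_0000,
                size=0x1000_0000)
print(hex(win.translate(0x4000_1000)))  # 0x100001000
```

Because each side sees only its own window rather than the other machine's whole address space, loads and stores cross the fabric without either host's memory map colliding with the other's — which is what makes memory-to-memory communication over a shared PCIe fabric safe.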
They tried, but they ended up finding scenarios where, when they composed, they were not able to have GPUs talk to each other, for example. Like, Dell had this C410x GPU chassis, and everybody loved it. I was actually working at Dell at the time, and it was a really cool-looking box; I thought it was going to do really well. But what they found was, because they couldn't have those GPUs talk to each other, it just fell flat and died on the vine. 46:14.36 Jason Collier Mean. 46:19.25 Matt Demas So, a lot of hype, a lot of people really excited about it, but some of those core technology features just weren't there yet. And we're finding a spot where we can get there. And then with CXL really down the pipe, people are already starting to have a vision in their heads of what this could look like. 46:23.93 greybeards Yeah, yeah. And... 46:34.37 greybeards Starting to open their eyes to this stuff. 46:36.20 Jason Collier Is. 46:38.78 Matt Demas And so we're just kind of making that vision come to reality. 46:40.68 greybeards Yeah, yeah, yeah. 46:42.50 Jason Collier I completely agree with everything you're saying, and I really would love to see this push forward. I cannot wait to see what the next generation of... 46:46.16 Matt Demas Are. 46:49.70 greybeards I, I... 47:00.17 greybeards All right, all right. So, Jason, any last questions for Matt before we leave? Matt, anything you'd like to say to our listening audience before we close? I know... 47:00.52 Jason Collier ...this technology is going to look like. Now. 47:08.84 Matt Demas You know, I said a lot — I feel like I'm going to get in trouble after I get off of this thing — but it was worth it. I really enjoyed the time, and I look forward to doing this again sometime. 47:21.41 greybeards All right, well, Matt, this has been great.
Thanks for being on our show today. And that's it for now. Bye, Jason, and bye, Matt. All right, good day. 47:26.64 Matt Demas Right, thank you. Bye. Yeah, thanks, you guys. It was a lot of fun. Bye. 47:33.19 Jason Collier Bye now. Hey, Matt, thanks — awesome, awesome conversation. 47:40.49 greybeards Bye.