2021-11-07
Listen in your podcast player by searching for Future of Coding, or via Apple Podcasts | Overcast | RSS
Scott Anderson has spent the better part of a decade working on end-user programming features for VR and the metaverse. He’s worked on playful creation tools in the indie game Luna, scripting for Oculus Home and Horizon Worlds at Facebook, and a bunch of concepts for novel programming interfaces in virtual reality. Talking to Scott felt a little bit like peeking into what’s coming around the bend for programming. For now, most of us do our programming work on a little 2D rectangle, with a clear split between the world around the computer and the one inside it. That might change — we may find ourselves inside a virtual environment where the code we write and the effects of running it happen in the space around us. We may find ourselves in that space with other people, also doing their own programming. This space might be designed, operated, and owned by a single megacorp with a specific profit motive. Scott takes us through these possibilities — how things are actually shaping up right now and how he feels about where they’re going, having spent so much time exploring this space.
Some of the stuff we reference throughout the interview:
Scott also wanted me to plug the Experimental Gameplay Workshop — a lot of games that end up influencing FoC are shown here first.
Glide are building a batteries-included app development platform where the database is Google Sheets. If you’re excited about making end-user software development a reality, go to glideapps.com/jobs and apply to join their team.
Replit are building an online REPL with all the languages. It’s a collaborative space for deploying web servers, developing games, training ML models — all driven by the REPL. They’re hiring, looking for people to come enrich the core and ecosystem of Replit.
Please know that the episode transcript is challenging to produce. It is unrefined at best, inaccurate or even unreadable at worst. But despite the rough shape, it’s valuable to many people in our community. If you’ve got the time and energy, we would love help cleaning it up.
Welcome to The Future of Coding. My guest today is Scott Anderson. Scott is a game developer and a game designer with a special focus on graphics, and that has taken him to some very interesting places in his career. He’s done indie games like Luna and his own project Shadow Physics, but he’s also worked on big AAA blockbusters, like Call of Duty. And he’s currently at Unity as a senior graphics engineer. But before that, he worked at Facebook on Horizon Worlds and an unreleased VR scripting system in Oculus Home. This interview was conducted several months ago, but in light of last week’s announcements from Meta (formerly Facebook), the things that we talked about are suddenly a little bit contentious. I think some people are probably sick of all this talk about the Metaverse, but there’s a really interesting angle to it: to do a proper Metaverse, you need a huge focus on user-created content, and on really good tools to enable people to make that content.
And if that content is going to be at all interesting, it needs to have very rich behaviour. And so a large part of what we talk about today is what alternative interfaces for programming one could create if they wanted to, for instance, bring a whole bunch of people into a brand new environment that is computational, and how we might enable those people to create whatever it is that they imagine, while at the same time accounting for things like new possibilities for abuse, and the questions of power and responsibility that come from the imbalance between a large corporation and the individuals who are going to come and play in their playground. There are also questions about the technology that goes into building these tools and building this new so-called Metaverse. How would we actually go about building this stuff on a technical level, especially with respect to graphics? What things are out there right now, and how do they work, that we might bring to bear in this new area?
This is a fairly long interview, and we go some places and we go quite deep, so I hope you’ll enjoy it. One thing that I will say right off the bat, just because this is something that weighs on my conscience a little bit: Facebook, especially in light of the recent revelations that have come out (the Facebook Papers, as it were), but of course over the much longer arc of their relationship with society, over the past decade, say, or maybe their entire existence, depending on how you want to frame it, are a company with a deserved reputation for being a little bit unscrupulous, to put it mildly. And I just wanted to say that the material that we discuss on this show, and the thoughts that we have about what they’re doing, are relevant regardless of who among the large corporate overlords that rule our lives ends up pushing to create this new Metaverse that we find ourselves confronted with.
The fact that it’s Facebook is unfortunate, but it is incidental. If Apple come out with their thing in the next couple of years, I think this conversation would apply just as well to what they’re doing. If HoloLens from Microsoft ends up taking off in a big way, these ideas are going to manifest there as well. So as much as you can, try to set aside the fact that this is Facebook; don’t get too hung up on the fact that they’re a company with questionable morals. Instead, look at the fact that we are on the cusp of what might be another major technological revolution on the scale of the phone. At least, that’s what these large companies like Facebook are hoping it will be, because it will give them an opportunity to wrest power.
And so if they’re going to do that, and if there’s going to be a role for creating new kinds of culture within this new environment, on this new platform, I think we need to be having these conversations about it, regardless of which of the corporate overlords ends up pushing their particular vision for it, and regardless of whether this ends up entirely in VR or is just another manifestation of the internet powered by regular mice and keyboards and screens. In any case, all of that stuff being said, there was one other little caveat I wanted to throw in. I normally try to push for the absolute highest possible sound quality on this show. Scott’s recording has a little bit of background noise in it. I’ve done my best to clean it up. It’s not pristine, but it should be more than good enough for everyone to enjoy what Scott has to say, and to benefit from the absolute trove of wisdom and experience that he brings to our show.
So with that, let me kick it over to past Scott and past Ivan to give you a whirlwind tour of what it might be like to do end-user programming in the Metaverse.
I’ve seen a lot of what I would call programming-in-VR projects that are just a rectangular screen with a virtual keyboard. And whenever I see those, I just think to myself, “We have to be able to do better than that.” That seems like the thing you see whenever some new medium emerges: the first thing that everybody tries is to recreate the old medium inside the new medium. Early films being like, “Let’s just record theatre,” early podcasting being very informed by radio before it found its own voice. I’m wondering, just broadly to set this up: what are your thoughts on programming in VR? What kind of projects have you seen where people are doing interesting things in that space? And do you have any of your own VR-native programming ideas that you’ve been thinking about?
Yeah. I’ll start with the first part of your question. Just opening up a rectangle in space and having a virtual keyboard in VR is a pretty terrible experience right now. It’s something that people get excited about, I believe, because it’s something that they’re familiar with. And there is a cool factor to some of the environments that do that, where you type code in a rectangle but there are things happening around you that are physical: you spawn a cube, or some rigid bodies fall from the sky or whatever. It gives you this god-like effect on the universe that you wouldn’t get in a flat-screen game programming environment, even if it was live coded. But at least with current, and especially last-gen, VR hardware, screen resolutions aren’t really good enough for that to be a good experience.
Virtual keyboards suck on most platforms, but especially in VR. But we’ve seen big advances in VR HMD resolution, so you can imagine a future where all VR HMDs are 8K-plus, where you have features like eye tracking and varifocal lenses, so you can actually look at something and it’ll change the lens focus. A standard feature of current VR hardware is that the lenses don’t change focus; they’re optimised for, I think, roughly two metres in front of you, which is great for a lot of games, but it’s not really good for reading text. So you actually want to change the focal length for reading text. You can imagine, with all these hardware advances, and if you have a tracked keyboard, you can use a real keyboard and see it in VR, which is a feature that’s available on Oculus Quest 2 right now, if you buy the right keyboard.
Where the experience, even if it’s a traditional programming experience, gets good enough that… it’s still the way people code in VR, but as you hinted at, it’s still not really taking advantage of the medium. You’re bringing what people are familiar with, the present of coding, into VR and making it work in VR, rather than looking at VR as maybe a future-of-coding device and asking, “Oh, well, how does physically embodied coding work?” That brings me back to when I first got interested in coding in VR, which was, I believe, 2015 or early 2016, sometime around then, around when the Oculus Rift and the original Vive were launching, and I was playing around with some VR prototypes.
I had just, I believe, started to work at Funomena on a VR game called Luna, and I was doing a lot of prototypes for that game. Luna as a game has a creative mode where it’s almost like a gardening system, or like playing with a toy building set. A lot of my inspiration for some of the design of the UI was actually just playing with toys, because the game itself is very playful; it’s got a children’s storybook illustration art style. I did a bunch of VR tracked-controller interaction features: you can plant trees, and to plant the trees, you drop them on the terrain. But then you can edit the trees, and to edit the trees, you grab different points along the tree.
If you grab the top and pull up, you can scale the tree up and down. If you grab the top and pull to the side, or grab the middle and pull to the side (the tree itself is an IK chain), you can actually bend the tree and warp it. And if you grab at the bottom of the tree, there’s a colour picker, and you basically rotate your hand around the tree to colour pick. Luna had a lot of fun little interactions, but I felt like I was just getting into the really basic prototyping phase, even though all this stuff shipped in the game. You can buy the game now; it’s on multiple VR platforms, pretty much all of them. All of this stuff shipped and it was good, but it made me think, “Oh, well, what are deeper ways these systems could work? What does a volumetric colour picker look like? What do more complex IK chains look like, et cetera?”
Or add new nodes or… What’s the term for that in IK chains? It’s been so long since I’ve done… The bones. Can you add new bones to an existing tree with an IK chain in it? Or is that like…
Yeah, you cannot do that in Luna. There’s no mesh editing or anything like that, and there’s no terrain editing. So it’s not actually an art tool or anything. It’s a game, and the gameplay is decorative gameplay, like you’d have in Animal Crossing, but with fewer items. The modifications are pretty limited. But those things that you brought up would be interesting things to add to Luna, or to some other similar application that is maybe more advanced. Luna and some other VR prototypes I did got me thinking of a lot of different things, but one of them specifically that I wrote down really early on was tangible coding in VR.
This idea that you could bring physical interactions, or things that felt like real physical interactions, into a digital world and still have all the affordances of being in a game engine: being able to be fast and loose with physics, and fast and loose with matter and mass. Obviously you have constraints in terms of performance, like CPU and GPU performance, and how many things you can render and what complexity they can be, et cetera. But a lot of the constraints that would exist in the real world are entirely lifted, or things work without the complex systems they’d require in the real world. One thing with tangible coding: I’ve always been interested in it, but you need to have a lot of physical blocks. There aren’t really too many real commercial tangible coding systems, and they tend to be expensive, and they’re all universally targeted at kids.
There’s no professional tangible coding, really. You could argue that electronics, when you start to move into electronics, that’s technically tangible coding, but not really.
You thinking like Arduinos and that kind of thing?
Yeah, yeah, yeah, yeah. But when I say tangible coding, I mean things like Google’s Project Bloks: let’s write code with a bunch of Lego blocks, or let’s write code with magnetic connecting pieces or something like that. That was my one-line blurb. I didn’t really have a lot of ideas about the system, or what it would do, or how it would actually work at that point. But for various reasons, I did not have time to prototype that.
But then I interviewed with the Oculus Home team, right before they were launching the new (at the time) Oculus Home, which is still the Oculus Home that you launch into when you go into the PC version of the Oculus software.
Can you give us a one sentence, what is Oculus Home?
Oculus Home is… effectively you have a house or an apartment in VR. It’s like your VR desktop; that’s the idea. And Quest has a similar thing, though it doesn’t have the modification aspects. But similar to Luna, or, as I mentioned, Animal Crossing-style decoration software, you’d have an inventory of items and you could decorate with those items. You could put out TVs or computer monitors or desks, and there are some interactive objects, like a bow and arrows, an NES-style Zapper gun that shoots lasers and makes sounds, basketballs, and stuff like that.
There are a lot of interactive elements, and pure interactivity is cool, but if you want to actually start building even small games out of that stuff, you need some programming language, some scripting.
Yeah. Some way to inject some dynamism.
Yeah. Yeah, exactly. I was hired to work on visual scripting in that environment. For various reasons I never actually shipped any of my work in there, and we’ll get into that later. But I spent about six months doing a feature that allowed for screen sharing, because it also had pretty good PC desktop integration. You could actually see all of your desktop windows in VR on various virtual screens that you placed in the world; we had TVs and computer monitors and things of that nature. And then we added multiplayer, and for multiplayer you could screen share with everyone in your space. You could choose to broadcast, and you could use it to watch videos together. There were even prototypes, and none of this shipped, that would allow for co-located game playing as well.
Like couch co-op, but two people a thousand miles apart kind of thing?
Yeah. Yeah. And latency was an issue. For video watching, the latency wasn’t that big of a deal, but the latency wasn’t ideal for gameplay, because it’s obviously all hosted on one person’s PC. So if you’re just forwarding inputs, you can’t do prediction in that case. You have to solve game streaming, but…
Without the benefit of a huge cloud, yeah.
Exactly. It’s peer to peer and it could be over really long distances. It’s not really a solvable problem in terms of making it good.
Thanks, physics.
Right. But that’s why it was a prototype. It’s fun to just see it.
Yeah. And to me, that kind of thing feels so inevitable that I can see there being games created specifically to accommodate that latency as a baseline, where it’s not going to be a twitch shooter or a platformer or something; it’s going to be something more strategic, maybe. But I can’t imagine a future of VR without two people being in a room together, wanting to play a game on the same virtual TV in front of them.
Yeah. And you could absolutely do something like Jackbox, or Codenames, or whatever: a party-game-style game where latency just doesn’t matter at all, or matters very little. And it would be a lot of fun in VR. Definitely more fun than playing it on Zoom.
Yeah. And I know there are going to be people out there in the audience who hear me say something like that, who hear us talking about “Let’s put TVs in VR”, and it’s like, “No, that’s the problem.” You want to go live on Pandora or whatever; that’s what VR lets you do. Why do we keep porting the screens and the rectangles and that hunched-over narrow view into what should be the ultimate dynamic world? Short of putting some fancy contact lenses on and getting perfect AR overlaid on the real world; let’s set that aside for a minute. But VR, you can define a space to be whatever you want. That should unlock a tonne of new creative potential and let us revisit a lot of our assumptions.
But I think one of the points that you made about, “Hey, Oculus Home is like Animal Crossing”… I’m reminded, when I hear you talk about that, of the late eighties and early nineties, when I was a kid. A lot of these early GUI programs that I was playing with were about decorating your computer and making it feel more like the desktop metaphor: this idea of files on your desktop and a little trash bin that you can put stuff in. I think the urge to recreate the familiar surroundings from before in the new medium is a good urge.
And I think it’s worth being conscious of how much of that is about giving people an easy way to get themselves familiarised with the new environment and make themselves a little home, so that they’re comfortable in it. And then making sure that that’s not the stopping point, that we keep going: “Okay, now that everybody’s got a nice cosy little space that they’ve made for themselves, can we go and let them swim around inside the Metaverse?” or something like that. I want to wave my hands around in the air and have that conjure up computations, like they’re magic spells. I’m curious, where do we keep pushing to get closer and closer to that bigger reinvention?
Yeah, definitely. I definitely agree with that. I think there are two sides to that. One side is making sure users are familiar, making sure that if you’re selling people a product and you show them something that they’re unfamiliar with (this isn’t a hundred percent true by any means), then they don’t see the value in it; they may not be interested in it. Whereas “Oh, I can watch videos on a big screen with my friends without owning a big screen” is a fantasy that a lot of people have, even though it’s mundane. And Facebook is especially bad at this, I feel, as an organisation in general, and it’s unfortunate in some ways that they’re shepherding VR at this point. And no one’s close to Facebook, in my opinion, right now. It’s a big company, it’s relatively corporate, and they’re not necessarily going to want to push the boundaries in that sense.
It’s probably not a priority for them that VR has a really strong demo scene right out of the gate.
Yeah. Yeah. It’s a little bit unfortunate because I feel like VR in general has been that way, with maybe a few exceptions. I remember really early on, and even later, Isaac Cohen, who’s a creative coder/demoscene person, made a game called BLARP!, and he still does a lot of more out-there VR and AR experiments. And there was some really weird stuff. But I think VR especially has always been very much about how quickly we can get a new smartphone, effectively. How can we get a new mobile gold rush, where there’s this new platform that everyone has to move onto. And you see it now with Metaverse conversations.
Where it’s like, “Oh, well, everything needs to be 3D now, and we’re going to have a new web and it’s going to be great. And we can make the same billions of dollars we made on the web doing the same stuff over again, but in 3D.” A lot of this conversation has been like, “What gets lost in that kind of thinking?” There are a lot of just weird experiments, ones that would fail or will fail, or aren’t for everyone, that could push the medium, that just don’t happen because of the way these things are funded.
Like Apple being so restrictive about what runtime environments you’re allowed to ship inside your app: are you allowed to have a scripting environment? What are the limits on that scripting environment? That sets a real creativity limit, I think. And I do fear the same thing happening when it comes to VR and AR more broadly, because there’s such a strong incentive, like you said, for this to be the beginning of a new market and not the beginning of a new culture, just because of who’s stewarding it. I don’t know what the right recipe to get around that is, because there was just as much financial motivation going on in the eighties and nineties and at other times when new platforms were emerging. But those platforms… yes, the personal computer isn’t all that it was supposed to be.
But I think it’s arguably close, whereas mobile is definitely not close to what it could have been if it had been a little more of a wild west, maybe. And I know the arguments in favour of having mobile be locked down as it is. But VR is so much more intimate and so much more personal, and as we get computers closer and closer to the cores of our being, it heightens the amount of damage that could hypothetically be done by that technology, that computer, being used maliciously. And I just hope that the safety harnesses put around computation by these corporations, to ensure that it works well as the kind of thing that makes Jim Cramer salivate, even if he can’t really articulate what the Metaverse is… I don’t know if you saw that clip, it was hilariously distressing.
Yeah.
But I hope that those restraints that are put on them aren’t the restraints that keep us from having a really strong culture, and keep us from creating new kinds of art, and especially from coming up with new philosophies about how to program computers, how to have a dynamic medium, and how to actually use these platforms not just as another platform for consumption, which is the often-used refrain when talking about mobile devices, but as actual tools for thinking with, and for augmenting human ability. Yeah, all of that stuff is up in the air right now, but it is… I don’t know, I’m nervous about it.
But that’s a whole thing separate from: what can we do, and what are we doing, to reinvent programming, current finance constraints be damned? What possibilities have opened up? Or, what are the things where it would be like, “Hey, VR might someday allow us to do this thing, were it not for resolution and latency and focal distance and other factors that contribute to nausea or quality of experience”? If it weren’t for the limitations on that kind of stuff, where can we imagine this going, hopefully in the near future? What do you think?
Yeah. You bring up that maybe safety, or whoever’s driving the Metaverse, will force… force is maybe the wrong word. Will push VR in a direction that encourages it to be a… you mostly live in walled gardens; it’s more of a consumption device, like a mobile device, whose form factor is optimised for consumption, as you mentioned, but also communication. That makes sense, because that’s where mobile came from. It’s a communication device, and that’s really still the best strength of a smartphone: even though you can use it to watch videos or read, it’s not the best reading or video-watching device, it just tends to be the most convenient. That’s one challenge with VR: in general, it’s not particularly convenient.
But one thing about VR, as I’ve mentioned (and I haven’t talked about Horizon at all yet, but as I mentioned before): both Oculus Home and Luna are creative apps. There are a lot of art tools that are popular in VR, like Tilt Brush and Medium and Quill. None of those go as far as a professional or hobbyist art tool, but they’re still micro creative tools. And because I have a game development background, a lot of it indie game development, I thought about a lot of game ideas. But as I started thinking about game ideas in VR, I realised, well, the strength of VR is in the physical-digital mix that I was talking about: that you have tracked controllers, that you can move your body through space. And that is ideal for creation, not consumption.
VR experiences are cool because they’re immersive; they’re more immersive than film. It feels like you’re in the place. You could do interesting things, and people are doing interesting things, with immersive theatre. A former coworker of mine has a startup called Lila where… It is funny, because I talked about tangible coding in VR, but actually, even back in those days, I was like, “Oh, it would be cool…” Are you familiar with the book The Diamond Age at all?
No. Never heard of it.
I feel like future-of-coding people should definitely read The Diamond Age in general. It’s a Neal Stephenson book. Snow Crash is more popular, and Snow Crash comes up a lot: it coined the term Metaverse, et cetera. But The Diamond Age is similar to Snow Crash, just further in the future. And the story is mostly about this girl who’s growing up in poverty, and one of the richer… I believe they call them claves. They’re groups of folks that are political entities or groups or whatever.
And one of them (a Victorian clave) invents this thing called a primer for young ladies. The enclaves have a fancy name for it, but it’s basically a digital, educational book that can do probably a lot of the things you think of a tablet doing today, but more advanced. One interesting thing about it is that instead of having an AI entirely drive the experience, there’s actually an actor that effectively serves as a teacher and a parent to this girl. She’s interacting through the book, but with this actor. And they call them ractors, because they’re interactive actors. And the conceit is that this ractor is very special, and the book, in combination with the ractor, helps this girl who grew up in poverty have a much better outcome in society.
And they actually play around with… there are other copies of the book, and some of the copies have no ractor. I’m dropping spoilers in now, but that’s the basic conceit of the book. The idea was basically: do a VR app where… And I describe it this way because, back in those days especially, it was fun to call everything an Uber for something. An Uber for actors, where you could have sets set up, and an experience set up, and it’d be immersive theatre, effectively a digital immersive theatre experience, where actors could actually get in a headset and get paid to run sessions for people. And you could also consider this for role playing as well, where you hire a DM (there are some startups that do that now), but you do it in a VR environment instead. And a former coworker of mine actually is doing this as a real startup, not just as a random idea. But I think there is a lot of power in that human element in VR, right, that you can’t necessarily get in other places.
Because VR is that much more personal than mobile, which is that much more personal than the PC, it makes me wonder if the visions of what it would be to program within this environment aren’t just going to be about the fact that you are now inside an immersive 3D virtual space, where you can bring all of that great game engine technology to make things look and animate and have physics, and maybe have a bit of tangibility to them, however you want. It’s like what we’re seeing with VTubers now (for the benefit of the audience, if they haven’t been following this): people who will buy full-body mocap suits and set up digitizers so that they can go on YouTube or on Twitch and live stream themselves as some kind of virtual character.
So they’ll make some 3D model of a character, and that character will be who you tune in to watch. And the person behind it is playing the role of this character, but they’re doing it with technology in a way that feels novel and interesting, because they can change things about their appearance on the fly, in ways that in the past, in theatre, we’d accomplish with somebody running off stage for a quick costume change and then running back on. And fast costume changes are a big part of theatre: how do you iron out the production so that everybody can transition scenes very quickly, and the set spins around, and new backdrops fly in, and that sort of thing. And now we’re getting more and more technology that allows us to have these human relationships in more seamless ways, or wilder, different ways.
You can actually make yourself look like a photorealistic alien or something like that. It gets better and better at crossing the uncanny valley, in some sense. And it makes me think that programming has gone through a similar sort of evolution: from punch cards, or even before punch cards, from doing computation on paper, or in your head, or with something like tablet weaving, where it’s not aided by the machine at all and is entirely driven by the person. Then we got large physical computers that you could program with switches, or whatever the big physical mechanisms were, into terminal-based programming and teletype and that sort of thing. And now we have programming editors that have syntax highlighting, and you can have thousands of files, and you can have type hinting.
And we have all these richer tools that are gradually getting programming closer and closer to people, toward a better fit for the human being: using colour and space better; letting you organise things in a way that suits how you think; having your choice of programming language so that the kind of thinking you feel most comfortable doing can be reflected by the kind of tool you have available. It’s all turning into machine code in the end, but there are so many different ways you can approach it to suit who you are. And so it makes me think that maybe, as VR expands in popularity and becomes available to more people, and the technology gets better, we’ll get over some of these early little hiccups in getting it figured out and out there.
Maybe one of the areas where we’ll see VR programming really improve on what came before isn’t just the game engine and 3D stuff and “whoa, cool shaders”, but using it to be a better reflection of our humanity. And maybe that involves letting us get more people together working on problems in a more intimate way than before. Right now we’re using GitHub, and that was a big change from the processes that came before it; it’s the social-networkification of programming. And so maybe VR gives us… I don’t want to say the gamification of programming, that’s a loaded term, and that’s not what I mean. But maybe there’s some aspect of it where it changes what the incentives are, or it changes what the experience of getting an error is like. Maybe it lets you explore errors in a way that is less infuriating and frustrating. Because there’s…
I think an easy way to look at this would be to look at the parts of programming that are miserable right now, and ask what we can do with more intimate technology to take some of that misery and reduce it. Or take that misery and forge it into something that is more purely about what we’re wanting to do with the tool, rather than just the affordances we put up with because the machinery isn’t very robust or very rich right now. I love thinking about this idea of: here’s how we’re seeing it applied to theatre, right?
People are… maybe you could have an Uber-for-theatre-in-VR kind of thing, like you suggested. And I’m thinking, yeah, maybe the factors to pay the most attention to in thinking about VR programming are the human factors, not the technology factors, which are easier to point to because they’re what exists already. The culture’s not there yet, so we don’t know what the human factors are all going to be, but we know what game engines let us do, and so it’s easy to pay a lot of attention to that stuff. Anyway, those are my two cents…
Yeah, definitely. I think that theatre thing was one idea I was messing with that was somewhat along those lines; it’s a mix of consumption and creation, right? But a lot of the prototypes that I was interested in, outside of the tangible coding one, were… are you familiar with Verlet physics at all, in games?
I am, but our audience might not be.
Basically it’s a… I’m trying to think of a reference point. World of Goo might be a popular example, or certain bridge-builder games use this tech pretty often. Sodaconstructor. Fantastic Contraption is an example in VR that does a similar thing, right? So basically it’s a physics system: you have a bunch of particles that move through the world and collide, but the particles all have constraints between them, and usually those are rendered as lines or…
And they behave like springs where they don’t let the particles get too close or too far.
Yeah. And they can be stiff springs. The name Verlet comes from the fact that it uses Verlet integration, because that integrator is more stable than an Euler integrator, for example. So you can actually apply these constraints and kind of do…
You get a reasonable amount of stability.
It tends to not explode, right?
Yeah.
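To make the Verlet idea concrete, here is a minimal sketch of the kind of system being described: particles stepped with Verlet integration plus distance constraints, in the spirit of Sodaconstructor-style toys. This is a hypothetical Python illustration (all names invented), not code from any of the projects discussed:

```python
# Minimal Verlet physics sketch: particles + distance constraints.

GRAVITY = (0.0, -9.8)
DT = 1.0 / 60.0

class Particle:
    def __init__(self, x, y):
        self.pos = [x, y]
        self.prev = [x, y]  # previous position stands in for velocity

def integrate(particles):
    """Verlet step: new_pos = pos + (pos - prev) + accel * dt^2."""
    for p in particles:
        for i in (0, 1):
            temp = p.pos[i]
            p.pos[i] += (p.pos[i] - p.prev[i]) + GRAVITY[i] * DT * DT
            p.prev[i] = temp

def satisfy_constraints(constraints, iterations=4):
    """Each constraint is (a, b, rest_length): nudge the pair toward its
    rest length. Iterated relaxation is why these systems tend not to
    explode the way naive spring forces with Euler integration can."""
    for _ in range(iterations):
        for a, b, rest in constraints:
            dx = b.pos[0] - a.pos[0]
            dy = b.pos[1] - a.pos[1]
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            correction = (dist - rest) / dist * 0.5
            a.pos[0] += dx * correction
            a.pos[1] += dy * correction
            b.pos[0] -= dx * correction
            b.pos[1] -= dy * correction

def step(particles, constraints):
    integrate(particles)
    satisfy_constraints(constraints)

# two particles joined by a 1-metre rod, falling under gravity
a, b = Particle(0.0, 0.0), Particle(1.0, 0.0)
for _ in range(3):
    step([a, b], [(a, b, 1.0)])
print(a.pos, b.pos)
```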
So it tends to stay stable, and it’s fun to play around with. So I was thinking about doing a Verlet builder in VR, or a signed distance field editor kind of thing, right? With a signed distance field, you define an object by the distance to its surface. It’s signed because it’s both positive and negative distance: positive outside, negative inside, and zero at the surface. So for example, a sphere would be defined by the distance to its centre point, and you know whether you’re inside or outside based on that distance. And there are various techniques to actually render that, right? You could render it as polygons using marching cubes or some other meshing system, but one common way is to render it directly, using sphere tracing or raymarching.
So you shoot a ray out from the camera position through the pixel you’re trying to draw, and you evaluate the distance field, which you can do: you can either store it explicitly in a 3D texture, or, the common way to do it, calculate it implicitly. Then you move along the ray by whatever shortest distance you got from that evaluation. If you’re close enough to a surface, by some margin, you decide you’ve hit the surface and you draw that as a solid hit. If you march far enough and you haven’t hit anything, then it’s probably a miss, and you just draw the background colour or something like that. The technical details of rendering signed distance fields aren’t super important for this.
But they’re cool. People should go learn how to do this stuff. It’s really fun.
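For the curious, here is roughly what that loop looks like in practice: a minimal sphere-tracing sketch over an implicit sphere SDF. This is a hypothetical Python illustration of the general technique, not code from Medium, Dreams, or MagicaCSG:

```python
import math

def sdf_sphere(p, centre=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance to a sphere: distance to the centre minus the
    radius. Negative inside, positive outside, zero on the surface."""
    dx, dy, dz = (p[i] - centre[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def sphere_trace(origin, direction, max_steps=64, epsilon=1e-3, max_dist=100.0):
    """March along the ray by the shortest distance the field reports.
    Close enough to the surface -> hit; travelled too far -> miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + direction[i] * t for i in range(3))
        d = sdf_sphere(p)
        if d < epsilon:
            return t        # hit: shade a solid surface at depth t
        t += d              # safe to step this far without overshooting
        if t > max_dist:
            break
    return None             # miss: draw the background colour

# e.g. a ray straight down the z axis hits the sphere at t = 4.0
print(sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```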
Yeah, you should. It’s more of that, and there’s some… there aren’t really too many VR ones, but there are tools like Clayxels within Unity. And Dreams on PS4 is another one that uses signed distance fields as its base, even though it does not do this simple raymarch, because the scene complexity is way too high; it’s a bit more complex than that, but it does do a raymarch at the micro level to actually draw the surface. And MagicaCSG is another one. That’s a free PC tool, which would probably be closer to this thing. And it’s nice for modelling because you’re dealing with volumes and shapes instead of edges and points and polygons.
Yeah. Like it handles intersections really nicely.
Yeah. So intersections aren’t as weird. You don’t have to worry about managing mesh topology or T-junctions or… there are a lot of ugly things that happen with meshes that require a lot of handholding and babysitting. You don’t really have those problems with signed distance fields. There is actually one relatively popular VR signed distance field modeller called Medium. It was Oculus Medium, it became Adobe Medium, and now I believe it’s called Substance Modeler.
Substance picked it up.
Yeah. Well, Adobe picked it up, and Adobe picked up Substance, and they rebranded a lot of their 3D tools as Substance because it’s a better brand.
It’s a better brand. Yeah.
It’s that kind of thing, right? So…
Interesting.
It’s all under Adobe’s umbrella, but yeah. But this one would have been a lot simpler; it probably would’ve been a simple version of MagicaCSG. So yeah, I had all these ideas for creative tools in VR, and still have more. And I think that’s really one of the major strengths of VR that gets overlooked sometimes. The problem is, and we hinted at this earlier, that a lot of the time people aren’t building… some of these apps are built from first principles, but a lot of the time these apps aren’t really built from first principles. It’s like, “Let’s add a VR mode on top of the game engine editor,” right?
Right.
And you quickly run into: well, okay, you have to translate all of the UI from the whole game engine to VR, right? So you end up with a lot of floating panels that have hard-to-read UI; you end up with navigation methods for navigating a large level that aren’t ideal. So a lot of this stuff didn’t really take off. And performance: a lot of the time, game engine editors aren’t really performance tuned, right? Because, well, if you’re editing a level on a desktop PC and it runs at 10 FPS or 15 FPS, that’s interactive rates, right?
Right.
Even if the game itself needs to run at 60 or 120 or whatever. Right. But in VR, you want to keep higher frame rates consistently. You want to keep 90 or 120.
Yeah. Like 10 or 15 is unlivable.
10 or 15 is vomit inducing. So there were a bunch of challenges there that made it not really usable. But you can imagine a game development tool that exists entirely in VR, that was designed initially for VR, that just works. So, to bring it back to stuff that I’ve actually worked on, and some ideas that I’ve had as well: in Oculus Home, eventually I started working on an actual scripting language, and the initial implementation of that was just a list of commands that you could apply to objects. So it was a very turtle-style programming language. In that version, there’s actually none of what we might traditionally consider programming: there were no variables, no operators, you weren’t doing any data transforms really. It was just a list of commands.
And the reason for that was, one, it was an easy thing to bootstrap, but also I looked at three parts of an end-user game creation system. And breaking it into three parts is entirely arbitrary (there’s more going on than just these three), but there were three things that I could focus on that would take most of my attention. The first one is behaviours, which I decided was most important, because you can write interesting programs, or you can have a cool UI to build those programs, but it matters whether they actually do anything. And doing something generally means moving objects or changing their material properties, assuming that objects are mostly 3D meshes, 3D models, right? In these environments, they pretty much are.
Some of them are particle systems and stuff like that. But also playing animations, if they’re animated, and playing audio. There are only a handful of things that are really outputs, especially when you’re talking about a higher-level game creation environment, and most of those are moving things or checking collisions. So I added collision events, added some input events, and I had behaviours like “rotate this thing by 90 degrees over one second”. And you could edit these things, and their parameters were all kind of hard coded. And that wasn’t everything, obviously, but it was a good start.
Yeah. Those are just assumptions made so that you can focus on the part of the problem you want to focus on.
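To make that concrete, here is a toy sketch of the kind of “list of commands” behaviour system being described: commands with hard-coded parameters, fired by events, with no variables or operators yet. This is a hypothetical Python illustration (all names invented), not the actual Oculus Home implementation:

```python
# Hypothetical sketch of a turtle-style behaviour list: an object carries
# a list of commands, triggered by events; no variables, no operators.

from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    rotation: float = 0.0
    # event name -> list of (command, hard-coded parameters)
    behaviours: dict = field(default_factory=dict)

def run_event(obj, event):
    """Run every command attached to an event, in order."""
    for command, params in obj.behaviours.get(event, []):
        if command == "rotate":
            obj.rotation += params["degrees"]
            print(f"{obj.name}: rotate {params['degrees']} deg "
                  f"over {params['seconds']}s")
        elif command == "play_sound":
            print(f"{obj.name}: play {params['clip']}")

cube = SceneObject("cube")
cube.behaviours["on_grab"] = [
    ("rotate", {"degrees": 90, "seconds": 1.0}),
    ("play_sound", {"clip": "click.wav"}),
]
run_event(cube, "on_grab")   # a grab input event fires the command list
```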
Yeah. Yeah. And then the other portions: there was the actual programming portion, which you could call logic, or scripting, or the VM, or whatever you want to call it in this case. And then the last part would be the UI. So that was my staged plan. I did behaviours first (okay, these are an interesting set of behaviours where you can do a lot of things, and you can build on them), then logic second, so I wrote a VM, and then UI last. And in VR, UI is actually the hardest, or the most interesting, part.
I mean, it’s the hardest everywhere.
Yeah. Yeah, definitely. But the trick is, in Oculus Home I didn’t really get to UI; I just had this list-based UI. I wrote the VM, so I could actually apply some logic, but without any UI to actually write any code, it wasn’t easily testable. It was hard-coding some tests, and that’s it. And then, just as I was feeling like I was close to getting this scripting system working in Oculus Home (it got pretty far along), I was moved to a different organisation than the team I was in, and told I needed to work on scripting for an entirely different program, which was Facebook Horizon, which had no interactive behaviour when I started. So, effectively, I rewrote a lot of the work I had done, but not entirely, in this new environment.
Was that a useful exercise, in helping you distil your thinking on this a little bit, or get closer to the heart of the issues? Or was it just like, “All right, fine, got to rewrite it”, mechanical work?
A little bit of both. I could simplify some of my ideas a little bit. Initially, I actually planned for the long-term goal to be a multi-application thing. Maybe not the language itself, because that becomes tricky when you start talking about UI in different environments and across different game engines, where, assuming you still want to use the game object model (which in this case both Oculus Home and Facebook Horizon did), there’s not a lot of room to port that aspect of it. But the VM could theoretically be entirely portable; it was pretty self-contained. That was not the case in reality, because it was just, “You have three months to get scripting working in this new app.” But yeah, it did help me define, or refine, some of the ideas. And I pushed things a little bit further, clearly, because I worked on it longer.
Am I remembering that it didn’t end up shipping? I think there was a Twitter thread that I read of you talking about that at one point.
Oculus Home did not ship any scripting stuff. There was some working scripting stuff in Oculus Home; it didn’t launch. Oculus Home is interesting because it is a live product, so it shipped in the sense that if you had the right internal Facebook employee ID and you were whitelisted for it, you could use it, right?
Right.
And it was in the program, and theoretically, if I had turned that on for everyone, everyone could have got access to it. But it wasn’t far along. I described how far the polished work I did got, and it wasn’t far enough to actually be considered a usable product. But Facebook Horizon shipped. It’s out in beta. It has not launched; there’s no open beta of Facebook Horizon, it’s invite only.
And what is Facebook Horizon?
So Facebook Horizon is a social VR application. There are other ones that are as popular or maybe more popular (definitely more popular in terms of users), like Rec Room or VRChat, which are probably the two most popular. But it’s basically a multiplayer game, slash in-game creation environment, slash online hangout space. So when you talk about the metaverse, a lot of the metaverse apps right now are actually really just these things. They’re not really… I don’t know, metaverse is a super loaded term, so I’m not going to say they’re not the real metaverse, or they’re…
I think it’s fair to say that. Because…
It’s fair enough.
Yeah. There’s definitely, as we talked about earlier, a business reason to make the new buzzword the new hot thing, to get people excited about it. And it is not the vision of the future that we technologists have been chasing.
Yeah, that’s fair. So for the metaverse, or for these apps: they’re kind of like mini metaverses, or whatever, or they could just be considered online games with relatively robust end-user creation tools. So usually there’s a world builder, and there are multiple players: four to 100, depending on the maximum player count, can join together in a session and play games together or just hang out and talk to each other. You can build spaces and share them, and you build spaces usually with various shape primitives, sometimes with meshes that you can import from a third-party modelling tool. You can change material settings on them, et cetera. So it’s a creative tool, but they usually have some sort of scripting or interactive behaviour as well.
So you can make your own games, even though the scripting is usually, but not always, visual scripting; some integrate Lua or other text-based languages. Roblox is not a VR app, but Roblox is probably by far the most popular version of this. It’s a little bit different, not just in not being VR, but in that the creation tools are very much outside of the game that most users are experiencing. And that’s mostly true for VRChat as well. But in Rec Room and Facebook Horizon, the expectation is that the world-building tools are part of the game. And not everyone’s going to world build; some people will, many people will consume. But you can easily just go into the world-building tools and decide to make your own space, or dive into scripting if you want to do something interactive.
And that’ll be big. You referenced Roblox, but I think the one that arguably popularised this in the first place was Second Life. And there again, if my memory serves, the editing tools were outside of the actual playable environment.
Yeah. They were.
And it’s this dream of something like HyperCard, or Dreams on PS4, but in a collaborative space where lots of people can be together editing something. And you see that a little bit with Minecraft communities, where a bunch of people get together on a server and build, say, the giant statues that were carved into the cliff face in Lord of the Rings, or make a scale model of the USS Enterprise, or something like that.
But putting it into VR, with some of these other technologies you’ve talked about (maybe with SDFs for modelling, so that the modelling is that much richer and feels more like working with actual clay in the real world, rather than with triangles that can do all sorts of degenerate shit), it just feels like we’re on the cusp of pulling together a lot of these threads in a really satisfying, empowering way, which is exciting. And so it’s cool to hear that Facebook, of all companies… I’d heard of Horizon, but I hadn’t heard exactly what it is and what it’s about.
So it’s cool to hear that the plan is for the editing tools to be in the same environment that even consumers will be in, so that, if they so choose, they can go from being a consumer to a creator without having to do what we do now. Because how do you go from being a consumer of software to a creator of software today? Well, you’ve got to go get a toolchain and an editor, and learn how to compile code, and jump through all of those hoops. I think shortening that gap between consumer of a thing and creator of a thing is unambiguously good to do.
Yeah. Yeah. I had a lot of influences when working on this stuff. I think a lot of people, when they see the tools, are just like, “It’s cool. It’s VR Scratch.” That’s the reductive version of it, and it is pretty much VR Scratch, in the sense that it’s a block-based editor for editing the logic, the code. I hinted at this, but I didn’t get to it: I actually didn’t implement the UI for Facebook Horizon. I did mostly behaviours (even in Horizon, I did mostly behaviours), and I did almost all the VM.
In your translation of the work that you started on Home.
Yeah, yeah. But they had other folks work on UI. Maybe a little bit later I’ll talk about some UX ideas that I had that I didn’t get a chance to try out. But yeah, it shipped, it’s out there, people are building stuff with it. As for why Horizon has that kind of Scratch-type editor… at one point there was a node-based editor in Horizon. This never shipped, it was never public, but we actually did implement a node-based editor. And there was this idea of what people were calling black box scripts. And I think part of this was, like I said, I wrote an entire VM in about three months, and it’s a very simple VM.
It’s not anything to brag about, but it works. I think there was a little bit of hesitancy as to whether anyone could get a full visual scripting language working in the timeframe they wanted, to at least get to alpha, because everything came in kind of hard. So there were things that felt like hedged bets, and not necessarily ideal for the product. One of them was to do these black box scripts, which were mostly higher-level logic written in C#, and then you could wire them together. And that was actually pretty promising. It’s just that the nodes themselves in VR, when you’re talking about fully 3D node-and-wire placement… it’s already hard to manage nodes in 2D, on a surface, on a 2D plane.
In 3D space, it’s like a nightmare: you have 3D spaghetti. You can have things behind you; occlusion is a big problem. You can have nodes occluding each other. And this is one of the challenges once you talk about programming in 3D for real, where you’re actually placing objects or placing text or whatever you use to represent your programming elements in full 3D: occlusion becomes a problem, and things being behind you suddenly become a problem. So that was definitely a problem, and performance is also a problem: when you have some amount of nodes rendering, it slows things down. So that got removed entirely, even though there were some promising things there. And there are other environments that have gone with that approach: I think there’s an app called Neos VR that had scripting like that around the same time, and Rec Room has similar node-based scripting.
And a lot of the ways people solve it… Dreams actually does scripting like this, both in VR and in 2D, with kind of a microchip circuit thing, but they’re really nodes, and they do data flow programming with these nodes, which they call microchips. Usually the way to solve it is to still constrain most of the nodes to a plane, so you kind of have a 2D window in the 3D world, and also to make it a somewhat zoomable UI where you can collapse and expand node graphs. So I think Horizon could have worked with a node-based UI if we’d gone with that approach, but I think blocks work well. The one regret with the UI that they ended up with, and I wasn’t really involved in this…
But I was hoping that the language would maybe influence some of this. The regret was that it still required users to spend a lot of time on a virtual keyboard, and the idea was to not do that. Going back to the Luna editor influence, I had a bunch of ideas for number pickers and vector pickers and rotation gizmos and stuff, where, generally speaking, you would never touch the keyboard to input constants. And for variables, you’d name them once and then just copy and paste them. As for the block-based editor: the final block-based editor ended up being very much driven by… it’s still kind of a flat 2D UI. A lot of people are familiar with Scratch.
So you don’t have free canvas placement like you do in Scratch; a lot of the time, it’s still kind of a list of things. But it’s kind of an AST editor, right? Where you get empty slots in the blocks, and you can put things in the slots. Originally I actually wanted to use the exact same 3D world-building tools to build the AST for the scripting language, which meant it would be kind of two-and-a-half-D, and I was inspired by Scheme Bricks, which has kind of a two-and-a-half-D look to its rendering. At various points the Horizon scripting language was kind of two-and-a-half-D. But apparently they had a design meeting internally. I don’t know, I wasn’t there…
It feels like they had a design meeting or something, and across the app they were like, “We need to unify the design of this thing, and we’re going to go with flat design.” So everything’s flattened now.
Right.
Which is… I hinted at big companies doing stuff like that. But for a block-based scripting language, it’s actually really cool, or potentially really cool, to be able to reach into your blocks instead of just hovering over them, right? Occlusion is not so bad in that case, and you can highlight blocks and stuff.
And for folks who haven’t seen Scheme Bricks: it looks like a Scratch-like programming environment, but the blocks are stacked on top of one another in a way that makes little triangular shapes come out towards you a little bit, which looks really neat. It has almost a fractal look to it. But if you took away the graphic aspect, it is still lines of text code, one after another, with indentation. So it’s perfectly readable as code; it just uses the blockiness of the UI to imbue a sense of depth, which really helps you get a sense of what stuff is nested inside of other stuff, because that stuff comes closer and closer to you in how it appears.
Yeah. And you can think of it as stacking blocks in a tower, right? Where you have a base that’s your entry point, or your top-level event, or whatever your…
Your main function? Or your function definition, whatever. Yeah.
You’d stack statements, and then you would stack operators and expressions inside of there.
It’s like literally turning the indentation into depth information, where the more indented something is, the closer it is to you.
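As a toy model of what they’re describing, an AST editor whose blocks have empty slots, with nesting depth mapped to a z-offset toward the viewer, might look like this. It’s a hypothetical Python sketch, purely illustrative; all the names are invented:

```python
# Toy AST-with-holes model: blocks have slots; a slot is either empty or
# holds another block. Nesting depth maps to a z-offset toward the user.

class Block:
    def __init__(self, label, slot_count=0):
        self.label = label
        self.slots = [None] * slot_count  # None = an empty slot to fill

    def fill(self, index, child):
        self.slots[index] = child
        return self

def layout(block, depth=0, z_per_level=0.02):
    """Print each block with its z-offset: more nested = closer to you."""
    print(f"{'  ' * depth}{block.label}  (z = {depth * z_per_level:.2f}m)")
    for child in block.slots:
        if child is not None:
            layout(child, depth + 1)

# on_grab -> rotate(add(45, 45), 1 second)
program = Block("on_grab", 1).fill(
    0, Block("rotate", 2)
         .fill(0, Block("add", 2).fill(0, Block("45")).fill(1, Block("45")))
         .fill(1, Block("1 second")))
layout(program)
```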
Exactly. It actually started out a little bit more like that, and then they moved away from it. I also had an idea that got vetoed quickly by a UX designer. In order to reduce the number of options (you were selecting from a list or a category of blocks, or whatever you want to call them), the idea was, once again, to take advantage of VR. It was probably inspired by weird role-playing game dice or something, but you’d pull out a cube that would have operators on each of its faces, and you could rotate the block and then place it.
So, I don’t know, there are weird things to make that Scratchy line more 3D. And a lot of them didn’t get in. Or some of them got in and got reduced, but it is still a little bit compromised, in the sense of that, it’s safe. It’s not really, it’s like, “Okay, how can we use VR for this thing that mostly exists, but it’s not really an entirely brand new paradigm?” And that gets into some of the stuff that I wanted to prototype. I would have, if I was maybe on the Pocket Zone team, and had a slightly longer prototyping phase, but maybe I wouldn’t have. I mentioned cellular automata before, and I was pretty heavily inspired by this environment called Moveable Feast Machine. It showed up on the Future Of Coding Slack. I believe it’s Dave Ackley is the creator of it.
It’s this thing called Robust-first Computing, right? Where it’s basically you have a grid of small, independent-ish, I’m not sure if they’re actually independent, in the implementation, if they’re actually independent threads or processes, or running out different machines, but they’re modelled as independent processes, and they could theoretically be independent processes. The Horizon scripting language actually does this too, where each individual script instance on an object treats itself as a distributed independent process effectively. They communicate via message passing, and message passing could happen locally or through the network. Theoretically it could have been threaded and stuff like that, even though I wasn’t-
It’s like it’s inherently meant to be async.
Yeah, exactly. I was really inspired by that. Once again, I was already on the path of doing the message passing stuff, even with Oculus Home in the early days. I started developing a message passing system that ends up looking a lot like broadcast in Scratch to some people, or just a delegate in C# to others; different people have different ways of thinking about it. But it was really inspired by Smalltalk, and specifically an environment called Croquet. We talked about Second Life, but even before Second Life there was this early distributed, 3D web type environment called Croquet that had ways to sync-
This wasn’t the networking model because it is, for both Oculus Home and for Facebook Horizon, they’re peer to peer networks, but you have a fixed number of clients. You don’t have people moving through space and connecting to a different variable rate of clients.
You don’t have to sync across a large distributed peer to peer world, as cool as that would be. So the problem to solve, the same synchronisation problems just don’t exist. You send a reliable RPC or something to a client, they’ll get it eventually. There’s latency issues involved and stuff like that. But for scripting it’s not the end of the world in most cases. Yeah. So, for the Moveable Feast Machine inspiration, the idea is I wanted to play around with using cellular automata. So you just basically have a Voxel editor in VR that you’re using to write your code. And whether it looks like a wire world thing, or whether it looks more like Movable Feast Machine, which has scripting language called Ulam. Where you place nodes and each object would be a node.
But you’d still write code of some sort to decide how those nodes work, but you do it at a very, at a more granular level than maybe an object. And I guess I haven’t really described what an object is in these kind of social VR apps. It’s really like a collection of shapes or a 3D model that form and a literal object that can transform by itself in the space. There’s a single script, a single script could be associated with that object. So a cellular automata-inspired thing would probably be more granular than that, where you’d have multiple cells. It’s tricky to think about how it would interact with … It’s cool if you’re just making Minecraft, or you’re just making a Voxel world.
Easy, right? Because everything's voxels already, everything's on a grid, cellular automata make a lot of sense. If you have more free-form placement of meshes in the world, it's trickier to see how well the cellular automata approach works. There's still input and output: a cell gets triggered, its state changes based on a specific input like a collision event, then a bunch of intermediate logic cells do stuff and route a signal based on that input, and finally it routes or converts to a specific output. It's tricky because it didn't really map onto these environments, so I'm not sure how worthwhile prototyping that one would have been, but it's definitely something I considered, especially when you think about having a volumetric space to deal with.
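For the curious, here's a tiny sketch of the “Wireworld thing” mentioned above: a classic cellular automaton where laying wires out on a grid is the program. This is the standard 2D rule; a VR version would presumably extend the same idea to voxels:

```typescript
// Wireworld: cells are empty, wire, electron head, or electron tail.
// Signals travel along wires, so routing logic is laid out spatially
// rather than written as text.
type Cell = "empty" | "wire" | "head" | "tail";

function step(grid: Cell[][]): Cell[][] {
  const h = grid.length, w = grid[0].length;
  return grid.map((row, y) =>
    row.map((cell, x) => {
      if (cell === "head") return "tail"; // the pulse moves on
      if (cell === "tail") return "wire"; // the wire recovers
      if (cell !== "wire") return cell;
      // A wire cell fires when exactly 1 or 2 of its 8 neighbours are heads.
      let heads = 0;
      for (let dy = -1; dy <= 1; dy++)
        for (let dx = -1; dx <= 1; dx++) {
          if (dx === 0 && dy === 0) continue;
          const ny = y + dy, nx = x + dx;
          if (ny >= 0 && ny < h && nx >= 0 && nx < w && grid[ny][nx] === "head")
            heads++;
        }
      return heads === 1 || heads === 2 ? "head" : "wire";
    })
  );
}

// A pulse travelling down a straight wire:
let grid: Cell[][] = [["head", "wire", "wire", "wire", "wire"]];
grid = step(grid); // -> [["tail", "head", "wire", "wire", "wire"]]
```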
And there are some 2D programming languages that do this that aren't necessarily completely cellular-automata-based. There's one called AsciiDots. It's still text-based, but you basically draw ASCII art to move a ball through a pipe system, or I don't know what the best metaphor is. There are different transforms and stuff: you can split the ball, combine balls, destroy balls, things like that. Or they can change the state of various cells in the world.
I think of it almost like trains on a little weird railway system.
Yeah. That’s probably a good metaphor for it. Let’s see. What else did I consider? There’s an environment that Ken Perlin made, called ChalkTalk, which is gesture-based. So you draw and I think he demonstrated in VR actually at one point.
Well, I haven’t seen that. I’ve only seen him demo it on a projector’s screen.
Yeah. I’ve I tried it, the source was released and I tried it myself and I saw demos on the projector only. In his videos, I think he’s only done projector demos, but I think he said it worked in VR or something, or maybe he was just planning on forming it to VR and to map. I’m not sure, but yeah, I did think a lot about gesture-based, and ChalkTalk’s gesture-matching was lack lustre for me, and doing good gesture-matching is hard, but they’re, it would be easy to beat them. So that was one thing I considered might be fun. Another variant of gesture-based would be just to do, and this is not future, it’s pretty present of kind, especially in light of Facebook has released another social VR app.
That’s focused on enterprise, recently, called Workrooms and Workroom actually uses the same tech, I mean, engine of course, because they both use Unity under the hood, but there’s a lot of extra tech for networking and avatars and world management and just random features and gameplay code and stuff required in Horizon. They use the same code base as Workrooms and Horizon, even though there doesn’t appear to be any scripting or world building or anything like that in Workrooms, theoretically, there could be right. But Workrooms also has an infinite whiteboard. I think one thing that came out of the Workrooms and we talked about this earlier, when we had the screen discussion in Oculus Home was that there’s a little bit of a lack of imagination.
It’s like, “Okay, we can be cartoon avatars in an office conference room that looks exactly like a Facebook office conference room. But now we have an infinite whiteboard instead of a finite whiteboard.” And it’s like, “Well, maybe I don’t want write on a whiteboard? Why can’t we be on a beach or something, or on some fantasy world or whatever?” Regardless of that aspect of it, it might be fun to, as people probably know, big tech companies have a really big whiteboard interview culture and just whiteboard culture in general. And So I thought it’d be fun … And one of the big complaints about whiteboard interviews, is you can’t actually execute the code. So, this is half serious, half trolling. But I thought it might be fun to do a thing where you actually can write. And I actually haven’t seen this demoed, especially not in VR or even on a smart whiteboard, but maybe it exists somewhere. Where you can write code on a whiteboard and it actually executes.
Right, like it does OCR or whatever?
Yeah, exactly, you do gesture-recognition stuff. Once again, that's mostly interesting from an interaction standpoint: there are fun things you could potentially add to a whiteboard coding environment that you couldn't do in a standard IDE.
To me, this feels almost the same as some of the tools that let you put executable examples inside your documentation, just to make sure that if you change something about the thing being documented and it breaks the example, you get a compile-time error or whatever. The whiteboard is a space that has previously been used for talking about computation, but it hasn't been a computational space. As we increment from where we are now towards having smart dust, any space where we're talking about computation that is not itself a computational space seems like low-hanging fruit for somebody to figure out. Let's get the actual dynamic medium to be everywhere we're talking about the dynamic medium.
Yeah, definitely. And even though you don't really see it in some of the things that shipped, definitely in Horizon, especially at the time, I was pretty inspired by Dynamicland when it comes to bringing computation into a space. Dynamicland explicitly says it's not an AR or VR project, so it's not in the spirit of Dynamicland to say, “Let's make Dynamicland in VR”; that doesn't really make a lot of sense. But one thing about Dynamicland that I liked, and I think this did actually come through in Horizon a little bit via the message passing system and some other systems, was that objects, or in the case of Dynamicland, pages…
It’s whatever your atomic programme thing, both has a physical location, and might have other physical things associated with it that aren’t code. Sometimes they have behaviours, sometimes they’re just visual, but it doesn’t really matter. It’s the case definitely in the Horizon, but that they’re all self-contained things and they can work together as a whole, but they don’t necessarily require the whole. So it’s not like a whole world programme, which came up as a possibility. But it couples things in a way that’s not as, one as shareable, or as remixable, but also that decoupling. And it’s interesting, because I feel like a lot of programmers, professional programmers get really scared when you talk about decoupling things to the point where, oh, any object can send a message or make a claim or whatever, and it’ll just do something because it’s like, “Well, you don’t know what else, you bring something in the environment that could completely wreck the rest of the environment.” And it’s like-
Yeah, it introduces fragility? Yeah.
Yeah. But maybe that’s not a bad thing? Especially when users have control of what’s in the environment. And can decide what … One thing I really wanted to do in Horizon and they didn’t implement it, but they talk about safety a lot in social VR, especially because they’re Facebook. But really only the safety features are there’s a recording feature, which has questionable privacy implications already for some people. Because it’s like, “Why is this app always recording my gameplay and sending it to Facebook in some cases?” But theoretically that’s actually a privacy feature because it will, if you get reported, it sends the gameplay footage so they can review it. But once again, it’s all surveillance tech, ultimately. Then there’s a safety button and a panic button.
But the problem with a safety button and a panic button, or even with reporting or banning someone, is that they all require user input in reaction to something they don't like that has already happened in the programme. It's actually easier to just take off the headset and never log into the thing than it is to deal with panic buttons and emergency modes and all this stuff. It's easier for the user to just say, “This isn't for me,” take off the headset, and never log in again. That's the easiest thing.
At least right now. I mean, you could say the same thing about Twitter or any other place where you get cyberbullying, but the social momentum to actually participate in something, once it becomes part of the culture, is a fundamentally strong force. So I think it is important to be thinking about this kind of stuff.
Yeah. I think it’s important, definitely. But I’m more hinting at that there can be more, even in Twitter, once again, you do proactively ban a lot, but who you follow and who follows you makes a big difference in your experience. And that happens before you … The ban is reactive, and you still need those reactive elements. But if you only have reactive elements, or you mostly have reactive elements, and obviously who your friends are in Horizon and stuff matter, and what spaces you enter matter. But harassment is a big problem, especially for women in VR spaces. And it’s physical harassment, even though it’s not physical-physical, as in real life physical, it’s physical and people are entering your personal space.
It might as well be the same thing. The whole point of making this technology more personal means you can't have it both ways. It's like, “Yeah, it's so intimate and personal, it's like you're really there, inside the world.” But when somebody does something inappropriate, “Oh, it's just virtual reality.” My phone is an extension of my mind in the same way, where I don't feel comfortable giving my phone to anyone. Not because I don't trust them, but in the same way I wouldn't feel comfortable detaching my arm and giving it to somebody. This is now a part of my being. I'm completely on the page of: we need to respect people's physical autonomy within virtual spaces to the full extent that we socially respect one another in non-virtual spaces. Or at least the way we should respect one another in non-virtual spaces.
Yeah. The way we should, definitely the way we should.
Yeah.
But because it is a virtual space and it's immediate, theoretically you can do more. So one thing I really wanted to do, at least in the scripting space, and a lot of this was inspired by Second Life again and the various scripting-enabled attacks you saw there, like flying objects invading people's spaces and stuff like that. Or you could-
Or other inappropriate things on live TV.
Yeah. Or you could teleport someone to an inappropriate place, or move them around super fast, things that in VR can actually make people physically sick. So my thought was to have a permission system. And permission systems aren't really “future of coding”, even; you have permissions in Android, and there are known issues with them too, where people just say yes to everything because they don't really understand. But imagine permissions in a VR scripting language, or for VR worlds, at whatever level of granularity, where you could say, “Oh, I don't want anyone in this world to be able to access my name, so I'm going to hide it, and scripts can't get my player name either. I don't want them to be able to read my position in a script.”
That way you can't have objects that chase you or enter your personal space. I don't want anyone to be able to change my movement parameters or teleport me. It could go further, where you could say, “Oh, I want this world to be entirely static, I don't want to run scripts at all.” Effectively a no-script mode, like in the browser. And Horizon doesn't implement any of that stuff, as far as I know.
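A hypothetical sketch of what such a permission system might look like at the scripting-API level. Every name here is made up for illustration, since, as Scott says, Horizon doesn't implement this:

```typescript
// A hypothetical per-player permission layer for a VR scripting runtime.
type Capability =
  | "read:name"        // scripts may read the player's display name
  | "read:position"    // scripts may query where the player is
  | "move:teleport"    // scripts may teleport the player
  | "move:parameters"; // scripts may change speed, gravity, etc.

class PlayerPermissions {
  private granted = new Set<Capability>();
  grant(cap: Capability) { this.granted.add(cap); }
  revoke(cap: Capability) { this.granted.delete(cap); }
  allows(cap: Capability) { return this.granted.has(cap); }
}

// Stand-in for the engine's internal query.
const queryEngineForPosition = (): [number, number, number] => [0, 1.6, 0];

// The runtime gates every script-visible API on the player's grants, so a
// denied read returns nothing and scripted objects can't chase the player.
function getPlayerPosition(perms: PlayerPermissions): [number, number, number] | null {
  return perms.allows("read:position") ? queryEngineForPosition() : null;
}

// Usage: a player who opted out of everything except name visibility.
const perms = new PlayerPermissions();
perms.grant("read:name");
console.log(getPlayerPosition(perms)); // null: this player can't be tracked
```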
Those are thoughts on a technical level. There's also the whole suite of things where you can build these systems to embody principles. One of my favourite examples of this is SimCity. The original SimCity was presented as, “This is an objective depiction of the systems, in a systems-thinking framework, that are taking place within a city.” Of course, as some really great recent reporting has shown, no, it's actually a wildly unrealistic model of a disproven libertarian utopian idea of how cities should be run. It embeds that within the model of the simulation in the game, so you get all this almost cartoonish deviation from reality, because it's backdooring in that world view.
I love that as a counter example. Whenever you’re building a virtual space that’s meant to embody parts of the real world, you’re going to be doing that through a framework. What framework you choose will establish the culture, and the norms, and the relationships between real people, as well as the virtual relationships between systems. Even things like the way you consider how to model communication between objects and your idea of no script, or that kind of thing. Those things have cultural ramifications. So I think if you are deliberate and conscious about how you establish those kind of systems, and what things you’re simulating from the real world and how you’re simulating them, you can have a tonne of leverage over what culture emerges and what is acceptable, and how people will treat each other.
Just as one more example of this, one of my favourite ever video games is a game called Journey. This game came out, I think, in 2011, maybe a little earlier, at a time when online multiplayer games were just this absolute toxic cesspool of harassment, people swearing and using racial slurs constantly, and the big players, Microsoft and Sony and whatever, trying to clamp down with moderation, and it being this cat-and-mouse game. Journey's premise, spoilers for a ten-year-old game, is that it's a multiplayer game, but you're not necessarily supposed to realise that as you're playing it. They do that by very carefully allowing the sense of another person being in the world with you to seep out, in a way that makes you feel, when you're playing it for the first time, like, “There are these other characters here, and I can't tell if they're real people or AIs.”
As you go through the game, this very powerful, emotional story told without dialogue or language, just through imagery and the experience of play, you go through this very transcendent, emotional experience, and you realise that you are going through it together with other people. And it's the same exact gamers: they're playing that game one moment and Halo the next, just swearing at each other miserably. But in this game, you get to the end of it with another person.
There’s this little patch of sand on the ground. This culture emerges where you draw like a little heart, or write this little expression of your fondness for the other person on the sand on the ground at the end of the game. Just because of how they constructed the way that you relate to other people. They did this very deliberately and it worked. So I think there are absolutely ways that you can structure the way that people are allowed to relate to one another, so that it encourages us to bring out the best parts of ourselves and make really genuine connections. Rather than just indulging in all of our worst impulses.
Yeah, definitely. I guess I'm a little bit cynical, because I don't have a lot of hope that that type of thinking spreads. I didn't work on Journey at all, but I've worked with, and know, a lot of people who did. Funomena, for example, was founded by two of the developers of Journey. So I've had a lot of discussions about that type of thinking, but almost all with people who have some connection to that team. Then you go into a larger company like Facebook that does similar work, and it feels almost like ignorance, right? You see it with newer social apps often too; they follow a lot of the same patterns, because the focus is really on user growth and virality and engagement, even if they do say, “Oh, we want to make a safer or more inclusive place.”
I think some of it’s also just systems literacy isn’t really great. So, and this goes beyond just software or social media or online games. But just how society and politics function in general, where a lot of systems are taken for granted. They were made as arbitrary or ad hoc choices, or they were made a long time ago. And then they become dogma. Obviously this is Future Of Coding podcast, so …
Yeah, that’s the thesis of this show, exactly.
Yeah, a lot of people listening to this will feel the same way. But it extends beyond coding as well, and I entirely agree with that. I would actually love to see social networks and social VR programmes and online games and everything… That's one thing the whole Metaverse push has shown, I think, at least a little bit: software isn't so siloed. What's the difference between a browser and a game engine? Or between a game and a social network? Really, there's not that much.
It’s culture mostly. It’s just the culture.
Yeah. It’s the culture, and it’s just how serious we consider it. If something’s for business, it’s clearly more important, arguably more important.
Or it’s going to get more funding, let’s put it that way?
Yeah, or it's going to get more funding. And if it's not… games often have marketing problems, or they only appeal to a niche audience.
That’s why I’m turning this into a gaming podcast.
Nice.
The people here who are going to be building the future of coding had better love games while they're doing it. Because there's just so much value in that, and it's ignored.
Yeah. But it’s all the same thing in a lot of ways, but yeah, I think, I would love to see these apps … And Journey is a great example and maybe Sky and some other things are great examples. I think mobile games take a little bit more of those lessons to heart, because they are quick play a lot of the time. They are, you’re going to get lots of churn, and play against lots of randoms and stuff. They’re more casual player based, so a lot of the time, even if people may or may not agree with monetization strategies in some of these games, the interaction that you experience is a little bit less toxic, I should say. Not that toxicity doesn’t exist, but yeah.
I’d love to see a major social network or something that’s designed with kindness or community or whatever first. And obviously that’s hard. And whether that happens in VR or on mobile or wherever, it doesn’t matter so much. But yeah, I’d like to see that being the first principle rather than, “How many users can we get?” Or, “How much funding can we get?” Or, “How much money can we make?” Or, “How many ads can we sell?” Which is maybe unrealistic in a lot of ways, but yeah.
Yeah, I mean, maybe it is unrealistic, or maybe it's just the moment we're in. I remember Myspace very fondly, because Myspace was, on the one hand, a social network in the way that Twitter and Facebook and Instagram are social networks: people getting together and talking online and presenting themselves, and maybe peacocking a little bit. But much in the same way that Instagram is a social network about photos, Myspace was really a social network about music. It was for musicians at the time, and it was just this absolutely phenomenal thing that happened. Now, with the exception of maybe TikTok and Instagram and Snapchat and a couple of others, we're in an era where the dominant social networks are just based on conversation and link-sharing, and on creating the opportunity to inject ad units that are aligned with the way the network is being used for communication, so that they slip in there a little more easily.
I think maybe what it would take for VR is if the dominant social relationship on VR is about a thing, about a creative pursuit, the way Instagram or Myspace are. Instagram's not a great example, because it has plenty of problems that Myspace maybe avoided just by being so early, or by being relatively small in mainstream culture compared to the social networks of today. But I just feel like every new technological paradigm is a chance for a do-over. Maybe the corrupting forces of major corporations will ruin it once again, and this generation will have to try again in a decade with whatever the next thing is. But it seems to me that your cynicism is earned, and I'm still going to take the side of hope, and at the very least use whatever I can to encourage people to just get weird with it, and try to find ways to make it interesting and inspiring and about something more than just infinite whiteboards.
Yeah, definitely. I mean, the current zeitgeist with all the Metaverse stuff, and even apps that aren't calling themselves a Metaverse, is that there are quite a few apps getting funding or coming out that are all nascent, but they're really about user creation. A lot of this existed for years on console, too. Some of my inspiration was WarioWare D.I.Y. I'm not sure if you've ever played it, but it was a DS WarioWare game where you could make your own mini games. It used a really simple scripting language, and it had sound editors and a pixel art editor and stuff. They're 30-second games, by the nature of the WarioWare format, so it was really easy to make something, and it would give you procedural suggestions and things like that. But yeah, a lot of people are coming out with new platforms designed for end users to make micro games or shareable games.
Nothing has really exploded yet in that space. Roblox, once again, is big, but it's not quite that particular space. But as you brought up with MySpace: you had music, and that came out of MP3s being widely available and people sharing them around. With Instagram you had photos, and that came out of mobile being a thing, everyone having a camera on their phone, and being able to put filters on photos and stuff like that. And later you saw Snap and TikTok, and I guess Vine and some other networks, take on video, because everyone has a video camera on their phone as well.
But we’ve done social video, we’ve done social photos, we’ve done social music, all those things exist out there. Games are big, but you don’t really… and obviously writing happened before all those things. You had blogging platforms and status updates and stuff like that. But games are big, but the difficulty in making a game is significantly higher than producing those things right now. So I kind of believe that we’re getting to a point where people are starting to think about a wide popularity kind of end-user game creation platform but it’s still not quite there. We’re like pre-MySpace if we’re even there, I think.
Social micro game creation, where I could make a little WarioWare-style micro game, where I blow all of the nose hairs out of this giant person's face, as easily as making a tweet. The relationship between a tweet and a newspaper of old is like the relationship between some hypothetical future, very-easy-to-make-a-game tool that doesn't exist yet, but that hopefully someone will build, and what it's like to make a game today.
Yeah, exactly. It’s a good question of what the audience is for that and how many people would engage with that. It might be huge. You could look at the PICO-8 community, there’s a thing called Tweetcarts. But I think other communities have similar things like JavaScript and various 8-bit computers that people do this for and stuff like that where you do a programme which is usually a mini demo, but could be a game also that fits in a tweet or two tweets or something like that.
Yeah, like the little procedural animations that beesandbombs and all those folks are doing, where they make some Processing sketch, turn it into a little three-second looping gif, and put it in a tweet. With all of those things, the value isn't so much in the individual tweets or individual little sketches. For one, the accumulation of this whole culture of having made those things, and the way that lets humans relate to one another, is much more valuable than the artefacts themselves.
But the biggest value, in my opinion, is that it gives people really fun ways to grow the skills that come from doing that work. Game design is one of those things that teaches you systems thinking, and teaches you how to relate to different parts of reality and the human experience. So a social network where you're building micro games would be valuable, not because of the silly micro games people make, but because of the rising-tide effect it would have on people's ability to think and reason, and all of that skill development that would happen as a second-order consequence of people making all these silly little games and sharing them with each other.
Yeah. I mean, even now I think the vanguard is gameplay creation, because you have both in any environment: you have consumers and producers in all these networks. So when we talk about future stuff, way in the future, or maybe not that far, five, eight, ten years from now, I think it'll be more “experience” rather than “game”, which is a subtle differentiation.
Yeah. What is a game, Scott?
Yeah. Because “experience” still implies interactivity, if it's in software, right?
Does a game have to be interactive?
I guess it doesn’t have to be interactive.
Does a game have to have goals? Does it have to have a win condition?
It doesn’t have to have goals or a win condition. I would go as far as saying yes, it needs to be interactive depending on your definition of interactivity.
Yeah. Where interactivity can mean explicitly withholding interaction.
Yeah.
There are games where it’s like, don’t push the button, that kind of game?
Yeah. But you could choose. You have a choice there. You could choose to push the button.
Yeah.
In a sense, almost all VR content, even if they call it a film or whatever, is a game, because you have a choice of where you're looking. And you could extend that and make it super meta, like, “Well, film is a game because you could decide to pause it or rewind it.” That's a little bit too meta for me. The “what is a game?” discussion is a rabbit hole I don't want to get into right now, or ever.
It’s very much like the, what is programming? discussion. Now, the real discussion is, is programming a game?
Yes. Definitely. Programming is definitely a game but… or are games programming, some games definitely are programming so…
I mean, Photoshop is programming as far as I’m concerned.
Yeah. That’s true. We went off on some tangents there, but kind of with the experience thing, sharing thing, you think about Metaverse and you think about large scale reality capture, being able to bring 3D spaces or 3D objects captured directly. And there aren’t really decent consumer tools for this. There’s not really good rendering for this even at this point so it’s not ready yet even though there’s research in NeRFs which is neural radiance fields, which is a way to take a few photos, basically do some machine learning on it and output a volumetric representation of that photographed object.
And it’s a little bit beyond photogrammetry or LIDAR scanning or whatever because it actually captures a full volumetric radiance feel so you can… The trick with photogrammetry is that it doesn’t really capture, let’s say flat surfaces because it’s doing essentially edge detection and it generates a mesh so you don’t get transparency. You definitely don’t get volumetric type effects so you end up with weird looking 3D meshes a lot of the time. And then they have like textures splashed on but they just look like photo textures. They don’t necessarily capture the lighting.
The lighting is baked into the texture a bit.
Yeah, exactly.
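For listeners who want the “radiance field” part pinned down: a NeRF learns a density σ(x) and a view-dependent colour c(x, d) at every 3D point, and renders a pixel by volume rendering along the camera ray r(t) = o + td:

\[ C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt, \qquad T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right) \]

The transmittance term T(t) is what lets it represent transparency and volumetric effects that a photogrammetry mesh with baked photo textures can't.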
And for anybody who has no idea what the hell we're talking about now: these are techniques for taking an object that exists in the physical world and getting a 3D virtual version of it. Photogrammetry is one of the popular techniques for doing that. It's something people want to do a lot when they start trying to make a virtual world: I'm holding this thing in my hand, how do I get it into VR? And so there are emerging techniques for doing that.
Yeah. When I first started working in VR, I had a lot of people come up to me and say, “Well, how do I just get an actor into VR?” And I'm like, “You need a hundred thousand dollars and a performance capture stage or whatever, and you need to hire artists to work for three or four months to get one character.” And there are easier ways to do it; even then there were. You could buy something off the asset store. Now you can use Character Creator, or Unreal's MetaHumans. There are various options to get decent, modifiable characters into a game faster.
Basically like a character creator from a video game.
Yeah. it’s like an advanced character creator from a video game, but it’s still like, not. If you just want to capture yourself and get a 3D version of yourself that can be animated. That’s not trivial. So you can imagine a future where that does become trivial all of a sudden and you have a pretty easy way to capture volumetric things, 3D things in the world and display them and share them with people and remix them. I think that’ll be pretty massive.
Especially if you can do it on a relatively large scale, that might be the point where you have something you could call the Metaverse. Because as much as a lot of people want fantasy, or prefer stylised art or graphics, I think a lot of people, even if they don't explicitly say so, would prefer to experience things that remind them of the real world, or are in the real world. If you ask people who aren't really gamers, who aren't super heavy tech people, VR tourism is one of the number one asks. There are VR tourism apps, and you can do 360 video, or you can hand-rebuild an area and sort of do it, but-
I mean, I was on stage with Paul McCartney.
Yeah. It’s not quite that compelling and you can do it at scale. You could do it, you do one experience. People are like, “Oh, that’s a pretty cool experience.” You may not want to return to it, but you can imagine something where there’s effectively infinite content, like a YouTube brain or something. Obvious experiences. And maybe that’s more interest thing for people going into the future.
To circle this back around a little to programming: sure, object capture is a big part of it, and you can get all of these objects looking photorealistic in the virtual world, but unless we somehow crack programmability, they're not going to behave in a way that is an analogue of the real thing. That's why “I was on stage with Paul McCartney” is no big deal: if I were actually, physically on stage with Paul McCartney in the real world, I would want to go over and play one of the instruments, or interact with it in a way where it gives back to me as much as I'm able to give to it, and that would be a very high amount. Whereas with virtual tourism, what you're able to give back is almost nothing, and what you get from it is very, very narrow compared to actually being there.
Yeah, exactly. I definitely agree that you need interactivity; I probably should have said that explicitly. This is on top of having your virtual game creation environment, so you do have programmability. In some cases it might be fine just to set a set of standard properties, which is something you could do in Rec Room or Horizon today, where you say this thing is a rigid body, so it's physicalised. With today's tech that might be kind of janky in a lot of ways, like, “Oh, well, is it really rigid? What does the collision mesh on it look like?” But you can imagine a future where some of that's solved.
We’ve Nanite for physics or what have you.
Yeah. That’d be a thing. That’d be pretty amazing actually like a super scale position based dynamics engine or something like that, lots of level collisions or something like that.
Yeah. Actually wasn’t that PhysX, was kind of like that in a sense?
No, PhysX is just the standard physics engine in Unity and Unreal. It's a rigid body dynamics engine. It doesn't do anything super fancy.
I thought it was doing something like very small per-triangle penalty springs, or that kind of stuff, as opposed to your Havok-derived rigid bodies plus soft bodies with some amount of deformation. I thought it was more “everything is a soft body”, with some fanciness going on there.
No, not in PhysX. I mean, PhysX has a weird history: they briefly built physics hardware, and I'm not actually sure what it did necessarily, but they got bought by Nvidia, and there are a bunch of products named PhysX, so they do have some PhysX that runs on the GPU to do fluid sim and stuff like that. But yeah, they haven't really had that at scale, or at least if they do or did then-
It definitely didn’t survive. It didn’t become part of the-
Well, Nvidia has a thing, separate from PhysX, that is a GPU position-based dynamics system that kind of works that way, but not many apps use it. There was a VR tech demo app that Nvidia made, Funhouse, that used it. That stuff isn't widely used in games; that's basically the answer. In most popular gameplay types, rigid body physics isn't used very heavily.
You mean soft body physics aren’t used very heavily.
Rigid bodies are not used very heavily.
I thought everything’s… your character’s a capsule until it rag dolls.
For collision, yes, your character's a capsule. But almost all character controllers in shipping games, especially popular games, are fully kinematic. They don't use dynamics.
Yeah. For like locomotion and that kind of thing. Yeah.
Yeah. They use like shape cast and stuff. And most gameplay is ray cast against static geometry and-
But then it’ll use dynamics for effects like I shoot the concrete and it’s going to crumble and little chunks are going to roll along the ground. That’s dynamics.
Yeah. So it’s actually used a lot less. Vehicle Sims are a little bit different and they tend to use it a little bit more. So there are certain genres that use it more heavily. It’s a little bit more common in VR because you can pick up and throw objects and objects need to feel a little bit more physical so you see more of it. And there are games that obviously use it for… A lot of puzzle games use it like Portal or like puzzles in Half-Life 2 or whatever use it because you’re picking up cubes and stuff. I worked on a game that use it pretty heavily. My opinions about rigid body dynamics that were partially influenced by a game, a puzzle game that I worked on called Shadow Physics that used rigid body dynamics pretty heavily for some objects in the world and it was challenging to tune them and stuff like that.
So, I mean, they’re used pretty widely like most games, I would say, rag dolls is probably the most common usage in your average video game. That wild tangent-
Glide’s mission is to create a billion software developers by 2030 by making software dramatically easier to build. We all marvel at how successful spreadsheets have been at letting non-programmers build complex software. But spreadsheets are a terrible way to distribute software. They are an IDE and the software built in it, rolled into one, and you can’t separate the two. One way to think of Glide is as a spreadsheety programming model, but with a separable frontend and distribution mechanism.
The way it works right now is that you pick a Google Sheet and Glide builds a basic mobile app from the data in the spreadsheet. You can then go and reconfigure it in many different ways, including adding computations and building some pretty complex interactions. Then you click a button and you get a link or a QR code to distribute the app. The data in the app and in the spreadsheet will automatically keep in sync.
For the Glide team that’s just the beginning. Glide needs to become much more powerful. Its declarative computation system has to support many more use cases without becoming yet another formula language. Its imperative actions don’t even have a concept of loops yet, or of transactions. Glide needs to integrate with tons of data sources and scale up to handle much more data. To do all that Glide needs your help. If you’re excited about making end-user software development a reality, go to glideapps.com/jobs and apply to join the team! My thanks to Glide for helping bring us the future of coding.
I think the folks in this community already know Replit. They know Replit because of this show, and because they have been a benefactor of ours for quite some time now, helping to bring us the transcript. But also because they keep cropping up on adjacent shows, like the Metamuse podcast. In the most recent episode, which I just listened to, they talk about how Replit fits into the space of tools that really minimize the number of moving parts you need to concern yourself with if you're just trying to make a little personal piece of software, something that's meant to go and be situated somewhere and just live forever, without needing to be tended and maintained and be a constant suck on your attention. Replit abstracts away so much of the pointless complexity that we need to concern ourselves with in more conventional styles of programming: what operating system your software runs on, what versions of dependencies you need to bring into your project, having a build tool or some kind of compiler, or concerning yourself with the physical hardware your software runs on. If you set up some little tool that you want to run continuously and you put it on some home server, now you have to maintain that home server.
So Replit, if you are looking for a way to just make a little piece of software and set it up and make it available on the internet somehow and let it just run in perpetuity, it’s a wonderful place to do that because they have a very nice contract between your software and the environment that it runs within.
And it’s a really interesting successor to what Heroku gives you, for example, where Heroku kind of has a very clearly defined contract between your software and the system, so that Heroku can concern itself with changing the underlying things and you don’t have to. And so Replit is like the next generation of that which is a really interesting way of framing what they offer. It’s not just a tool for giving you a nice developer experience or a nice learning environment if you’re new to programming, or a collection of nice integrations if you want to work with GitHub, or a nice multiplayer programming environment if you want to get a bunch of people together in a single screen and have them all typing away (the way that we are increasingly used to with things like Google Docs and Notion and what have you). But it’s also this environment for thinking about making something that just needs to be low maintenance for the long term.
So I really enjoyed hearing that framing come up on Muse. And so I guess this is a double plug: Replit rules, and The Muse podcast is also really good. So go check both of those out. But in particular, because we’re here to thank Replit, go to replit.com. You can sign up for a free account there and get started and play with a REPL in — I don’t know how many languages it is now. Let’s just say all of them. I have a two year old daughter, and when I ask her, “Oh, how many raisins would you like?” she says, “All of them!” So yeah. Replit has all of them languages. Go to replit.com. Thanks to Replit for sponsoring that transcript and helping bring us the future of coding.
Yeah. I want to go back to the experience thing, and what programming looks like in a future Metaverse where you have perfect reality capture. It's very nascent, extremely nascent tech, but I've played with OpenAI Codex a little bit. It's a GPT-3-based transformer model that's trained and fine-tuned specifically for code, and it clearly has a lot of extra software around it to actually function properly. It's basically a coding environment where you write a prompt, a natural-language prompt that's kind of like documentation, or a comment really. One example I did: they have an environment that will generate front-end web apps for you, generating JavaScript using standard browser stuff.
It might be able to generate React code or some other really popular stuff that you see on the web as well. My comment was, “Draw spinning triangle using WebGL.” And it generated the code for it, and it works: you can run it, it compiles, and so on. They have some cool demos. It's currently in closed beta, or alpha, or whatever you call it. If you've messed with transformers at all, with any of these large language models, GPT-2, GPT-3, they do a pretty good job of generating readable text. This particular model generates runnable code, which is cool. And GPT-3 can actually generate other things too: well-structured JSON, SVGs, so you can kind of generate images with it, but that's an abuse of the system; it's not really designed for that. These large language models are still kind of novelties. They can be used for writing assistance, but they have a bunch of flaws, like only generating 1,024 tokens at once, which is very limiting for coding. You have to keep re-prompting it and try to keep a thread together, so it's hard to write larger programmes. But you can imagine a future where someone is coding in one of these environments, or in general, partially or fully assisted by an AI, and you're mostly writing very detailed documentation.
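For reference, here's roughly the kind of program that prompt implies. This is a hand-written sketch, not Codex's actual output:

```typescript
// "Draw spinning triangle using WebGL": one triangle, rotated per frame.
const canvas = document.querySelector("canvas")!;
const gl = canvas.getContext("webgl")!;

const vsSource = `
  attribute vec2 position;
  uniform float angle;
  void main() {
    float c = cos(angle), s = sin(angle);
    gl_Position = vec4(mat2(c, s, -s, c) * position, 0.0, 1.0);
  }`;
const fsSource = `
  precision mediump float;
  void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }`;

function compile(type: number, source: string): WebGLShader {
  const shader = gl.createShader(type)!;
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  return shader;
}

const program = gl.createProgram()!;
gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(program);
gl.useProgram(program);

// One triangle, centred on the origin.
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER,
  new Float32Array([0, 0.6, -0.5, -0.4, 0.5, -0.4]), gl.STATIC_DRAW);
const loc = gl.getAttribLocation(program, "position");
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

const angleLoc = gl.getUniformLocation(program, "angle");
function frame(t: number) {
  gl.clear(gl.COLOR_BUFFER_BIT);
  gl.uniform1f(angleLoc, t / 1000); // rotate with time
  gl.drawArrays(gl.TRIANGLES, 0, 3);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```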
You don’t care about implementation which is even higher level than like, oh, we have compiler. We talked about this earlier, I think, where you start with assembly language or machine code or punch cards or whatever and then, you kind of have assemblers and then you have some higher level languages and then you have better editors and debugging tools, visual programming languages and other advanced tools to help you but maybe the further assistance of programming looks like something AI driven.
And even then, imagine designing a programming language so that it doesn't require this hack of using an AI model trained to reproduce text from examples. Instead: what does an AST designed for AI generation look like? What does it look like to just cut out that middle layer of textual syntax? That might be an interesting frontier in which to explore this.
Yeah. I mean, GPT-3 ended up being this tool they had, and it's super expensive; it costs millions of dollars or whatever to train these models. But you could imagine an approach that's more like: let's take a bunch of running programmes as they exist. This is kind of super-fuzzing, I guess. Using various fuzzing tools, people have demonstrated the ability to create programmes that can parse a certain input, or do something for you when you want a specific output. That's really basic, and you drive it with, effectively, a genetic algorithm or something like that.
But you can imagine, instead of taking a huge corpus of code, you take a huge corpus of running programmes and train an AI on that; I imagine a transformer model would not be the relevant architecture in this case. And that's even lower level than the AST level, or any language level. It's just like: okay, we either label the programme with what it's doing, or allow people to input screenshots or mockups or something that's different from a programme, and it generates a working programme of the thing you mocked up.
And once again, for any of this to be useful you need a lot of curation and very careful prompting, and even then it still falls down a lot of the time. One issue, with Codex at least, is that it's still probabilistic, so you don't get deterministic output from your prompts. A given prompt will probably do a similar thing each time you put it in, but not the exact same thing. You can write a prompt once, get a perfect result, then write it again and it's not quite what you wanted. And maybe that's not a bad thing, I don't know.
Yeah. I don’t mind that so much. Because then, that’s how people programme too.
Yeah. Yeah. So I don’t know. It’s an interesting thought and that’s not necessarily related to VR particularly, but-
It’s just another nascent technology that will have an influence on how we programme.
Yeah.
Well, let’s just wrap it up here. We’ve got two hours and 20 minutes worth of wandering in the wilderness which is exactly what this show is about. Even though we hit basically one of the five different areas we could have gone into, I think that we hit it really hard and that’s super cool.
All right. Well it was good talking to you. Hopefully, that’s a good set of material. We did go off in the weeds sometimes, but you know.
That’s what this show is all about. That’s why… Muse can do their tight 45 minute or the Notion podcast, they can do their tight 45 minute interview with Alan K. Over here, we’re going to get fringy. We’re going to get weird. We’re going to go very deep into hypothetical and that kind of wilder side of futurism so thank you Scott very much for going on that wander in the VR woods with me.
Okay. Cool. It’s my pleasure.
That’s the end of the interview. Thanks Scott for coming on the show. Thanks to Replit and Glide for sponsoring, and thank you to you for listening. You can rate the show or leave a review — but don’t, because that’s not how this show spreads. It spreads by word of mouth, so don’t bother. It’s fine :)
We have another interview coming shortly — I’ve already got it recorded — with Ella Hoeppner, about the Vlojure programming environment. So stay tuned for that.
That’s all for today. I will see you in the future.
[weird outro music, as usual]