Welcome to my development journal. This is an experiment in radical transparency. You can read my unfiltered daily thoughts below. Pardon my typo-laden stream of consciousness. If you don’t want to miss a musing, here’s the log RSS feed.
It was inspired by Jamie Brandon’s development journals for Eve and Imp.
The data for this page is pulled from the commit message history for this repository. It’s similar to what you’d see if you ran `git log`. (I used to keep these kinds of notes in the now-deprecated journal. Those journal entries are below in the log, imported via some skullduggery.)
I have decided to pause my research and podcast journey. I posted a letter to my Patreon explaining the details, which I will paste below for posterity:
Hello Dear Friends,
I have bittersweet news to share: I have decided to step down from my leadership role in the Future of Coding community. I still plan to be involved, lurking in the Slack, responding to emails, helping where I can, but I will no longer be organizing events, doing research, or releasing podcasts (beyond the final few I still have in the queue).
I asked Ivan Reese, one of the first and most dedicated community members, to take my place. Not only is he an incredibly smart researcher and talented interviewer, he has excellent judgment and is kind and thoughtful. His one guest interview (with Jack Rusher) was the best-produced and most-listened-to episode on the feed!
You’ll hear more from Ivan soon about his plans for the future of the podcast, community, and website.
Tidying up a few loose ends:
Why am I stepping down?
In the short term: to focus on family, plan my wedding, and make a feature film with my fiancée. (Out of left field, I know!)
In the longer term, I plan to come back to this work fully as an “organizer” of it, in the shape of a company, non-profit, or governmental organization. Will keep you posted!
What about my research?
I still think it’s a terribly interesting thread to pull on, and am very excited to continue to follow Conal Elliott, Adriaan Leijnse, and the denotative community. Even in my less-active state, I will be really quick to reply to all emails about this! All of my thoughts and notes will remain in futureofcoding.org/log.
What about Patreon?
Thank you so much to everyone who has supported me and our community so far! I am shutting this Patreon down. I expect Ivan to start his own in the coming months.
Thank you so much, everyone! Two and a half years ago I started talking to an empty room, and all you beautiful people showed up, listened, and had so many wonderful things to add to my life, and to each other’s. The world became a bigger place and a smaller place for me. For all its faults, the internet has allowed us all to find each other in this lonely world. It’s a magical thing having close friends who can speak to your soul whom you’ve never met in person.
I better end this before I get too teary-eyed. But really, to those of you who have been my cheerleaders and biggest fans, you know who you are, please know that none of this would’ve happened without you. I am so grateful. I can’t wait to see what new heights you reach in this next chapter.
Love always,
Steve Krouse
On Wednesday I spent two hours preparing for the call with JE, thinking through my plan for a “vision statement” project. The notes from the call are here. He approved of the plan and suggested this year’s Salon de Refuge as a place to publish these notes. We agreed on next Friday, Oct 4 as the time I will send him a couple paragraphs of a plan for this project:
Here are the prep notes:
Are there similar efforts like this? Clearly the Mother of All Demos comes to mind. But the Internet, Wikipedia, and open-source were all existence proofs. Yes, some people hyped them up, but it just took time for them to prove themselves. BV’s talks come to mind, but they also have a mixed legacy. They got me (and Webflow and Notion and Explorables), so I think it’s net super positive.
Let’s get visual! This system exists. What do I build?
The system would let me interact with my Inbox as a stream of emails of some sort. I think the idea of an “inbox” that you try to keep at zero, by either doing the thing or putting it in a bucket of when to do it, is a good strategy. All emails (except spam or others filtered out) will go to this inbox. I will also put new todos in here that are generated in real life, such as meetings. This is like emailing yourself todos. –> One way to think of this strategy is emailing yourself todos and snoozing things. That works, but it doesn’t allow reordering of todos or nesting of todos or a lot of other expression. Something closer to Coda is what I want, but with email within it. But you can do that with Coda! You can even view the emails one at a time in something that looks like Gmail for iPad. But it’s not nearly as nice… This is where it gets subtle. I want to be able to have everything how I want it, but I don’t want to have to make it all myself. I’d like to customize building on what others have done, occasionally adding my own little customizations, mostly through composability and occasionally through work from scratch. I think I have to bite the bullet and actually design it… Here’s ~20 minutes of work on some sketches:
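A toy sketch of the data model this implies: nested, reorderable todos rather than a flat snooze queue. All the names here (`Todo`, `Inbox`, `move`) are hypothetical, just to make the reordering/nesting point concrete:

```python
from dataclasses import dataclass, field

@dataclass
class Todo:
    """A hypothetical inbox item: an email or a self-generated task."""
    title: str
    children: list["Todo"] = field(default_factory=list)  # nesting

@dataclass
class Inbox:
    items: list[Todo] = field(default_factory=list)

    def add(self, todo: Todo) -> None:
        self.items.append(todo)

    def move(self, src: int, dst: int) -> None:
        """Reordering: the operation plain email-snoozing can't express."""
        self.items.insert(dst, self.items.pop(src))

inbox = Inbox()
inbox.add(Todo("Reply to JE"))
inbox.add(Todo("Plan vision statement", children=[Todo("Draft two paragraphs")]))
inbox.move(1, 0)  # bump the plan above the reply
```

The point of the sketch is that once todos are values rather than emails, reordering and nesting are trivial list operations, which is exactly what the snooze-an-email strategy can’t give you.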
Places I drew from and shouldn’t forget when doing this work:
It’s been a record 5 weeks since I’ve done a proper log entry here! Apologies to my wonderful readers. However, you haven’t missed terribly much, as I’ve mostly been putting the finishing touches on the Whole Code Catalog, going to a few conferences, and doing some other traveling. Here is the quick recap:
I decided last minute to attend BobKonf, colocated with ICFP in Berlin this year, to see Adriaan Leijnse of “Relativistic FRP” and Conal Elliott give talks. (These are both online and well worth watching!) I was fortunate enough to spend a couple hours with them both, talking about the future of extending FRP, particularly into the domain of distributed computing. While we didn’t speak of him directly, around this time I started watching Paul Borrill’s videos about the intersection of physics, time, and distributed computing, after Adriaan sent them to us; they were instrumental in his thinking through relativistic FRP. More on this later.
I attended a User Liberation Front meeting with Jonathan Edwards, Antranig, Luke and Marianna, Stephen Kell, and others in Cambridge before we all went to PPIG. It was wonderful seeing them all! I wasn’t as much of a fan of PPIG and ended up leaving a day early to get back to work at home.
Last Monday I finally launched the Whole Code Catalog. It was definitely successful enough for me to be happy (front page of HN for a couple hours, 172 retweets, a couple hundred new Twitter followers, ~50 new Slack members, no uptick in podcast downloads, still about 1.1k listeners/subscribers, 500 people on futureofcoding.org/index.html but unclear how many on the Catalog because I sadly forgot analytics on it), but not much more than that. It definitely didn’t go crazy viral, but it still feels like it justified the time I put into it. It was nice emailing with Stewart Brand and getting a nice Twitter comment from Gary Bernhardt. However, I feel very much “done” researching others’ projects, especially modern ones, for the next couple months at least, so Edition 2 of the Catalog is at least a year away, if not longer.
I had more fun than expected at Strange Loop! The talks were way better than I was expecting. I didn’t get to see it live, but Paul Chiusano gave a talk. John Austin (from the FoC Slack) gave a wonderful talk on the history of RGB color. I saw a fun talk using the Spotify API and I am now working on rebuilding the Spotify Running feature they discontinued to allow me to again run to the beat of the music. I started it on Dark but am now on Glitch because there was another user’s project that I could fork easily. I am looking forward to watching this talk on Mesh networks. As a fellow RSI sufferer, this talk was very inspiring.
I enjoyed spending time with Paul Chiusano, the Dark team, @pvh, and John Austin. I bumped into Hillel Wayne a couple times but sadly did not get to speak to him much. I’ve been thinking a lot about his What we know we don’t know talk about empirical software engineering. Here’s the summary I put in the FoC Slack this morning:
Is global state really bad? I sure think so, but where’s the empirical evidence?! If it were so clearly bad, then that badness we detect must somehow show up in reality and be measurable empirically. Is the issue that it’s too costly to perform the test? Or that it’s actually not as bad as we think… (Reminds me of the debate I have with my mom about alternative medicine.)
The Dark Launch Party went really well! It seemed like there were at least ~200 people. My talk seemed pretty well-received. I got a nice laugh at a comment about subtracting minus 1 in JavaScript. It was so fun to meet people who listen to the podcast in person!
On a personal note, my engagement party in NY happened on Saturday, my two-year anniversary with my fiancée. It was very fun. Glen came in from Boston to be there - thanks Glen! - as he’s the only person there who reads this log.
In addition to spending 2.5 hours per week learning French for my fiancée’s birthday this year, I also started taking dance lessons to be able to keep up with her on the dance floor. For her part, she decided to enroll in the online computer science class I took as a kid, IMACS, where she learned the basics of Scheme. She flew through a semester-long college course in a month and is now embarking on the second course, in Python and Haskell. Very fun!
I completed the first part of Quantum Computing for the Very Curious by Michael and Andy. SO GOOD! I’m now supporting them on Patreon and looking forward to more goodies. Love this innovation on mediums.
Speaking of new mediums, I am excited to be a part of the beta for Ink & Switch’s Muse app for iPad. Maybe I can get some of that content embedded here.
While I was planning to attend SPLASH and LIVE 2019 in Athens this year, all the recent conferences have made me a bit conferenced-out. I really want to have more to show of my own work before going to another one, so I’m not sure I’ll make it out to Athens this year. Hopefully I can deputize someone to film LIVE for us!
I want to go deep back into research very soon, but first I have a few loose ends to tie up. I have 4 podcasts waiting for editing (Jonathan Aldrich, Jenn Jacobs, Amjad, Dark) that I’d like to move forward, if not publish/schedule them all.
I also need to add Dark to the Catalog, now that they are public. I might have to wait on this till they squash a handful more bugs from version two (“fluid”) of their editor.
Finally, I have been sending hundreds of todos (some of which are important and others which can be ignored for likely forever) to Workflowy, so that deserves at least a couple hours of love. I must also remember to revisit my GitHub Projects here and here, which have a number of research ideas and tasks that could use some organizing.
After a vacation- and consulting- heavy summer, I am eager to dive into deep work over the coming months. Here’s the meta-plan:
I don’t have the time or energy now to flesh out these sections, so I will simply sorta-organize my bullets below to be fleshed out in the coming weeks. The first section below is where my heart and mind mostly are these days.
- Logic vs FP and what is a specification and executable math and black box optimizers vs bit manipulation. Unleaky abstraction. Real math that’s fast. Really really abstract code so I don’t have to worry about being stuck or have to rewrite…
- Something about an open standard à la ECMAScript that’s super denotational and mathematical and allows for many implementations. And it’s a super broad open framework that allows for all sorts of ad hoc extensibility. One key question still is what is computation, what is computable, what is doing vs being, and most centrally there are the questions of time and space.
This is not shaping up to be my most productive month ever. I got almost no work done the week of July 1, but instead learned how to wake surf and hung out with my family in Connecticut. The following week of July 8 I kept up with my emails and French, worked a handful of hours on Dark, and spent most of Friday (~4 hours) subreviewing a paper for Jonathan Edwards about multi-tier FRP, which was actually really rewarding. The week of July 15 I traveled to France, spent 14 hours on the Whole Code Catalog, and kept up with email and learning French. Yesterday I mostly spent in Monaco – my first time in the Mediterranean and a casino – and today I spent on email, investing, and traveling to Cannes and back. Hopefully I can get three full days of Whole Code Catalog work in this week. On Saturday I leave for a week-long cruise in Turkey where I expect to get almost no work done. Maybe I’ll finish Where Mathematics Comes From and review JE’s new version of Subtext.
I will be in London for all of August, and except for PPIG at the end of the month, hope to be extra productive after my very relaxing July. If I can put in two 30-hour weeks for the Whole Code Catalog, I’ll be in great shape. Then I’ll have some time in August to start thinking again about my own research…. as well as planning my engagement party and wedding!
Speaking of my own research, I have begun to notice a steadily increasing level of excitement in myself towards getting back to research. One factor is the interesting work I reviewed recently on distributed FRP, including the paper I subreviewed on multi-tier FRP that I can’t talk about yet, and another paper that I can’t stop talking about. It was written by Adriaan Leijnse and titled “Relativistic FRP” because he takes a spacetime approach to extending FRP. I love it! It builds upon Adriaan’s earlier work on building an algebra for specifying CRDTs, which was also cool.
Here are some quick notes I took to help spur on my thinking the next time I can return to research, hopefully in late Aug:
Apologies to those following this log hoping to get interesting insights about the future of programming. The last few entries, and this one, are very much about how I’m spending my time. For the next month or two, I will be mostly working on The Whole Code Catalog with Dark, plus various other reviews of other tools, languages, and research. In other words, my own research is on pause and I will take this time to learn more about what others have done. I also plan to continue diving into Category Theory during this time.
How I actually spent the time I planned here:
Total: 35 hours. Pretty solid, because this doesn’t include lunches or a nap on Tuesday. Moving c9 to AWS took much less time than expected (45 min), so I included it in Inbox time. I spent about the right amount of time on Dark work, did less running than expected, and filled my extra time with Category Theory and two small random freelance jobs that popped up.
Last week I got a very minimal amount of work done. Monday was eating shellfish in Maine and then traveling to NY. On Tuesday I organized things and did a lot of work for Dark. Wednesday and Thursday were mostly spent preparing to propose to my girlfriend (she said yes!), and Friday was recovering from that and celebrating with family. I spent no time on French last week, so I am now 3 hours in debt.
Monday was a vacation day. This week I’m with family but still hoping to get ~3 hours of work in per day. Yesterday I spent two hours organizing my inbox, workflowy, writing this, and then spoke with Alan from the Slack #1-on-1s channel for an hour. The plan for this week:
Last week I published the rntz episode with 7/6 hours of work, built and shared on #meta a prototype for searching past the last 10k Slack messages for 4.5/2 hours (but it was OK because it was really fun and people seemed to be thankful for it), did the 1/1 hour of French I had left, spent an extra 1 hour on curriculum for TCS, and casually (while walking) edited the audio for the podcasts with Lane from Coda and Jenn Jacobs and sent them off for transcription. Ivan Reese did me a huge favor by cleaning up the audio from the Amjad interview, so I’ll hopefully edit that soon.
I also spent ~4 hours watching/reading Bartosz Milewski’s Category Theory for Programmers. It’s been really fun following along the #category-theory group in the Slack. Just took 20 min to blast the internet about the group…
Last week felt really productive, partly because I’m still jetlagged in a positive way, so I’m waking up before 6am most mornings. I’ll try to keep it up by going to bed early this week.
I also am really liking the idea of pausing my own research for the next couple months while traveling, working on the public release of Dark notes, and improving the podcast, community, etc. I feel much less overwhelmed with this framing.
For this week:
Total: 34. Any extra time can be spent on finding a new home for the Slack, the Slack searching, category theory, #priority1 #FoC #research in Workflowy, or thinking through Alan Kay’s recommendations on how to balance learning, making money, and creating something of value.
I spent 4.5/3 hours on curriculum which was good, 2/2 hours on WoofJS. I prepped for the Intel meeting for 2/5, because I ended up needing a nap the day before. I caught up on my French “debt” (and at this moment actually have a 30 min French surplus). On Friday, I finally published the podcast episode with Cyrus Omar.
The meeting at Intel was interesting. I got a lot of time with some Urbit folks, which was useful. And it was really interesting to hear about the sorts of things CPU designers talk about, what acronyms they use, etc. Here are some of my takeaways:
Before my flight from London to NY yesterday, I put the audio of all 4 podcast episodes I’ve recorded on my iPad for editing. I’m over halfway done with the rntz episode. I’d like to make some podcast progress this week – particularly because I’m not feeling like research and am not currently on other deadlines.
Today I have 2 more hours, tomorrow I have ~4 hours, Thursday I have ~4 hours, and Friday I may only have ~2 hours, so just 12 more hours this week. Short week because travel yesterday, and I have my mom’s birthday in DC Thursday, and my cousin’s bar mitzvah in Maryland Friday/Saturday/Sunday.
Here’s what I’d like to get done:
That’s 3 extra hours, so we’ll see how much I have time for at the end of this week and over the weekend… Next week is super open, so maybe I’ll do some research! Maybe publish another podcast. Maybe research alternative platforms to Slack.
Went to bed too late last night, so I am only getting to research at 11:30am this morning. I cleaned up my inbox and planned the day from 11 to 11:30am. It’s sunny for the third day in a row here in London, so I may go for a run today. I also have to do some curriculum research for The Coding Space (my old company, which I’m now consulting for). But most of today (5 hours) should be on research!
Finishing up previous tasks… I looked up https://github.com/futureofcoding/futureofcoding.org/projects/3 and it seems like I mostly forgot about it when I switched over to Workflowy but that’s OK. Those tasks are hanging around in the backlog which is fine.
It seems like Unison has a pretty straightforward story for immutable code editing. Similar to IPFS in spirit. One difference is that it doesn’t seem as focused on the FRP side of things; no time-varying values as far as I can see. It seems like Paul has also had thoughts about IoT/heterogeneous computing networks which are somewhat similar to my recent OS thoughts about expressivity over hardware. Again, without FRP. It’s more related to algebraic effect handlers, which I think Conal would say are like monads in that they import the imperative way of thinking into FP instead of building better abstractions on top. I shot Paul a text to catch up because I think his thinking here would be really helpful at this stage.
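To make the “immutable code editing” idea concrete, here’s a toy content-addressed store in Python. This is my own sketch, not Unison’s actual scheme (Unison hashes the AST after name resolution, not raw source text): a definition is named by the hash of its content, so an “edit” never mutates anything, it just creates a new hash while the old version stays resolvable.

```python
import hashlib

# Toy content-addressed code store in the spirit of Unison/IPFS.
store: dict[str, str] = {}

def publish(source: str) -> str:
    """Name a definition by the hash of its content and store it."""
    h = hashlib.sha256(source.encode()).hexdigest()[:12]
    store[h] = source  # same content always maps to the same hash
    return h

v1 = publish("double x = x + x")
v2 = publish("double x = 2 * x")  # an "edit" is just a new definition

assert v1 != v2                          # distinct content, distinct names
assert store[v1] == "double x = x + x"   # the old version is still there
```

Anything that referenced `v1` keeps working forever, which is the sense in which the code is immutable even while “editing” goes on.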
consider helping REBLS with publicity. I’m leaning no here but find myself reluctant to fully say no given how little time it seems. I’m going to tell JE I want to say no, and double check with him one last time. –> put this in next JE agenda
Think about research: abstract vs concrete. I don’t want to solve a small problem, but I do need a more specific thesis or angle than I have now. I also want big thoughts and lots of reading. And broad reading, like McLuhan and Alan Kay / curious reader / RISD class stuff. Type theory (books I have in the kitchen) too, but not just papers from recent conferences.
In summary, I want to continue with my OS thinking (let’s see what JE has to say about it tomorrow), and fit everything else in with a holistic plan for balancing my podcast, community projects, and broader reading. I want to prioritize things, allocate a certain amount of time to the various projects, and go for it. I’m going to let this meta-planning project remain undone for now and continue muddling along, organizing what to work on on a weekly or bi-weekly basis, mostly pulling on memory and emotion to allocate time to various projects. It’s a reasonable heuristic for now.
Reading about Unison is actually a great contrast and counterpoint to spur on my current “always running” and OS-focused thinking. Let’s specify something simple that’s inexpressible today: this portion of my computer screen should be the live value of my phone’s front facing camera. Here are some questions:
- The video feed is denotationally a `Behavior (x, y) -> Color`, which we mostly talk about. For example, size transformations would happen on the intermediate representation to get it into the right shape for my computer screen.
- To refer to my phone’s camera: `search Id -> Maybe Camera`, where `Id` can be a name or path or other identifying information. A `Camera` would in theory expose a `Behavior Image`, `Behavior Zoom`, possibly a `Behavior Focus`, and an `Event Snapshot`.
- The `Behavior Image` is always the last thing it got from the camera. If there’s a lag in the connection, the image stays frozen, and then when the connection goes through again the image skips to the newest value it gets (skipping the intermediate frames). Another way to model it would be a `Behavior (Maybe Image)` where we can react to when we’re getting `Nothing`s from the camera for some reason. Ultimately, we would probably denotationally model the receiving of this video data as an `Event (t, Image)` where the `Event`’s time is when the `Image` was received and the `t` is when it was recorded. (We can also model this as an `Event (Event Image)`.) It’s then up to us to decide how to apply flow combinators to reduce this `Event` to a `Behavior` of various types. Ultimately we need a `Behavior Image` for the screen, but there are many possible algorithms to get there. For example, we could filter out any images with `t`s smaller than that which we’ve already displayed so as to never go back in time. We could also encode logic to “speed through” lagged video to “catch us up” to the current feed.
- The screen requires an `Image`, denotationally `(x, y) -> Color`, to display. We can give it a single `Behavior Image` as well, and it can sample it at ~30fps and get its required `Image`. We can construct our `Behavior Image` from as many sources as we want, splitting the screen up into sections and composing them together.
- The disk could be a `Behavior [Bytes]` (maybe with dependent types to encode the number of bytes into the type). We could imagine sending our `Behavior Image` to disk in the same way we send it to the screen; the main difference would be accounting for what happens when we run out of disk space.
- The camera’s `Behavior Image` changes internally. That is, the camera is quite immutable, while the `Behavior` type allows for the image it exposes to change over time. The screen’s `Behavior Image` also allows for changes internally (different images over time), but it must also allow for external changes to how it’s computed. The problem is greatly simplified because the scope of these external changes is limited by the fact that the screen requires a `Behavior Image`. Thus they won’t actually be changing the type of the definition. The simplest way to model this is to have an `Event (Behavior Image)` where each event occurrence signifies the new definition for the screen’s output. We’d simply apply the `switcher` combinator to obtain a `Behavior` from this `Event Behavior`. But how do we produce the `Event (Behavior Image)`? For simplicity, let’s give my screen a public/private key pair like in IPFS. Then we could define the event as all occurrences of signed `Behavior Image`s on various channels, such as over wifi, bluetooth, ethernet, a blockchain, an HTTP server path, etc. However, this answer feels a bit like a cop-out: instead of modeling a changing definition for the screen, we merely model a static definition that accepts new definitions on a specified channel. In this way we decouple the output of the screen in a non-definitional way. For example, nothing would prevent or coordinate multiple entities from writing to the screen-channel at the same time, producing a jarring, glitchy image output. If we want a truly orderly, definitional approach, the screen’s definition must fully point towards all dependencies, while also being able to change over time to point to different dependencies, yet be somehow immutable. On second thought, we probably do want to decouple these things, and it should be up to the programmer to only give out the appropriate private key to the “last step” in the computation so as to avoid multiple sources trying to overwrite each other.

I’ve been noticing over the past couple of months that my stated priorities don’t match how I’m spending my time: I say I prioritize my research, but I end up doing it last, if at all. The obvious reason may be that I’m not getting paid for it, so I don’t take the deadlines as seriously. But another reason is that I was doing the work after my paid work, so it sometimes never happened. Inspired by my friend Dan Shipper, who’s been successfully writing a book for ~4 hours every morning and doing paid consulting work in the afternoons, I am going to go (back) to a schedule of research in the mornings. I am also going to go back to trying to plan out my week on the calendar to ensure I am spending the right amount of time on various projects. This week:
As you can see, I’m trying to list out all of my projects every week even if I choose not to spend time on them. It’s harder but I think possible to run 11 small projects. I want to invest some time this week in considering a better platform for managing that than Workflowy, but I don’t want to get dragged into making one myself…
Waking up at 10am this morning was really hard with jet lag but I did it! Tomorrow will try for maybe 9am but that sounds hard… Maybe I will get to bed before midnight…
JE and I met last weekend (not this past one) to discuss my research statement and next steps. The summary:
However, I don’t want to forget the other 7 interesting questions I came up with in writing this statement! Maybe it would make more sense to start with one of those.
Lucky for me, Cyrus and David came to NYC last Tuesday and let me explain my immutable FRP semantics of editing code shtick and gave me feedback on it for an hour!
The meeting’s notes included:
The main takeaways were:
I used to have bad back pain on airplanes (until I mostly mastered it with the Alexander Technique), so I would treat myself to junk food and movies. However, on this flight from NYC to London I ate no junk food and watched no movies. I did work the entire time! I spent a couple hours taking notes on my research. It was fun and productive.
I meant to answer JE’s and Cyrus/David’s questions about my immutable FRP semantics of editing code but it instead morphed into a session where I daydreamed about what a truly expressive operating system could look like. Some highlights:
Some questions:
I just got off the phone with Tudor, who was very kind to take over an hour to hear my very messy thoughts about what I’m working on and give me advice. Some highlights:
I’ve been thinking a bit about the financial sustainability of my work. Currently I achieve this by living super cheaply but what if I want to support a family one day? The obvious first step I could do is stop the podcast / community organizing and spend that time on more consulting work. (Or stop my personal research for consulting time and keep doing the podcast and community organizing.) That, mixed with charging more for higher-paying clients, would allow me to earn a lot more. Another idea is that I could change the financial structure of my work, such as start recruiting, working with venture investors to start companies in this space, or start a startup of my own.
However after speaking with my girlfriend, we decided to hold off on all these discussions for the time being. The plan is to continue “living cheaply” until the end of 2020 at least, which gives me another year and a half to focus on my research, podcast, community organizing, strange self-directed PhD thing. This is exciting! (Speaking of the podcast, I really want to release more episodes! I will hopefully release Cyrus this week, and make time for editing a bunch more this month.)
Probably 30 people (including myself) have complained about Slack as the platform for the Future of Coding community. So the question is: what do we do about it?
It’s important to first acknowledge that there is no perfect platform. There are always trade-offs. There is no way to make everyone 100% happy, particularly for a large group now approaching 500 total users (closer to 114 monthly active members, which I found by going to upgrade my Slack because they only charge you for active members). In other words, there’s significant risk that after moving to another platform, people want to move back or want to move again to yet another platform.
One thought I had was to make it a bit of a “competition”: I’d define how to “pitch” an alternative platform, people could argue for where we should go, and we’d set a date on which a vote would take place. And you can only vote if you participate in the showdown to a certain level.
The purposes of the Slack I have on the readme are:
- Sharing of ideas, debate, feedback, constructive criticism, and collaboration
- Encouragement, high fives
- Organizing IRL meetings and meetups
Questions:
I have a lot more that I want to think and write about, so I will put those things here for tomorrow morning:
I sometimes go through phases of creativity where I start a bunch of essays or talks that I never get around to finishing. This happened early this past August. But I had so many ideas that I didn’t even have time to finish outlining them all. I took an hour today to get those outlines in here without cleaning them up much so they will mostly be nonsense to anyone but me. I created separate files for these outlines:
The rest of them were just small stub outlines so I’ll put them right here in the log:
Programming is setting up computation over time. Unfortunately this means that many important insights about your program aren't apparent until future times. Live programming is about bringing insights from future times to the time of programming
— Steve Krouse (@stevekrouse) April 4, 2019
And this thread: https://twitter.com/stevekrouse/status/1084424882765520897
For what it's worth, I really like the "soft" part. My vision for software is that it "feels like clay", is infinitely customizable, the sky is the limit. It is the stuff of thoughts, not of atoms. Fiction, not nonfiction
— Steve Krouse (@stevekrouse) April 7, 2019
Programming is the study of precision, like math is the study of long chains of close reasoning without paradoxes, and history is the study of past humans through text.
When helping someone learn to code, I ask them what they want to be able to code. Then I help them find an environment (coding notebook, online IDE, desktop IDE, Zapier, spreadsheet, etc) with the best feedback loops, easiest setup. The language is not part of the equation https://t.co/aNwGdnIkmd
— Steve Krouse (@stevekrouse) April 6, 2019
editor = f(node, editState) so f can be a recursive polymorphic func… cyrus and hazel… cursor is key
Recursiveness of the state = lift validatorFunction allModifierFunctionStreams initialState
But then of course each piece of this definition is a behavior with its own validator function that’s a behavior with its own validator function…
Ultimately this Programming Language is one single fucking immutable expression! It is Behavior [Expressions] (aside: I want Behavior length list where each item is Behavior) defined as (fold processEdit (fold concat newExpression []) Behavior [self])
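The stub above reads as a fold of edit events into an editor state: `state = fold processEdit initialState edits`. Here is a minimal TypeScript sketch of that reading (the `Edit` and `EditorState` shapes are illustrative assumptions, not from any real framework):

```typescript
// An "edit" event and an editor state, kept deliberately tiny.
type Edit = { kind: "insert"; char: string } | { kind: "delete" };
type EditorState = { text: string; cursor: number };

// processEdit folds one edit into the current state, as in
// `state = fold processEdit initialState edits`.
function processEdit(state: EditorState, edit: Edit): EditorState {
  if (edit.kind === "insert") {
    return {
      text: state.text.slice(0, state.cursor) + edit.char + state.text.slice(state.cursor),
      cursor: state.cursor + 1,
    };
  }
  // delete: remove the character before the cursor, if any
  if (state.cursor === 0) return state;
  return {
    text: state.text.slice(0, state.cursor - 1) + state.text.slice(state.cursor),
    cursor: state.cursor - 1,
  };
}

// A "behavior" here is just the running fold over all edits so far.
function editorBehavior(edits: Edit[], initial: EditorState): EditorState {
  return edits.reduce(processEdit, initial);
}
```

A recursive editor in this spirit would then make `processEdit` itself polymorphic over node types, which is the Hazel/Cyrus-style direction the note gestures at.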
This SVG is hard to read but shows a bit of the thinking here.
Yesterday I cleaned up my email, todo lists, random errands, and went on a run. I also had dinner with Sam John of Hopscotch, which was fun.
Today I did more errands, a bit of consulting work for TCS, and added some things to the website that I’ve been meaning to add for a while. I also cleaned up my workflowy, which feels great.
My meeting with Dark was pushed to next week, which frees me up to work on my research proposal draft 2 this week, which is due on Friday and then I’ll speak with JE afterwards about next steps.
Next week the focus will be Zapier for Dark as well as the draft of the piece framing the research we will be publishing on my site. So it’ll be Dark-heavy next week. I’ll also be traveling back to London Weds, so won’t get much work done then or later in the week as I readjust, so let’s just say Dark and random errands is all next week.
The following week I might try and (finally) publish the Cyrus Omar episode and edit a few others. I’ll also want to get back to research, as well as doing the work for TCS on curriculum research I promised.
So many balls in the air! It’s key to keep my organizational systems clean to manage it all. I really feel like I need the malleable software medium to really build what I need here to make my brain powerful enough to handle it.
This log entry is way overdue. My head feels like it’s going to explode given all the mushy thoughts trying to not be forgotten. I gotta do a better job of logging more often and in smaller increments instead of saving them up for these massive dumps.
To recap, my lunch with Alan two weeks ago was ridiculous. I was also surprised to see my casual notes go slightly viral. I got way more people reaching out to me to thank me for the notes than in the past. Maybe it’s because the notes had a more personal feel? Or maybe it’s because of Alan’s reputation?
As I should’ve expected, Alan saw the notes and wasn’t happy with how I published them so roughly, and without giving him a chance to clarify things. In particular, he didn’t like how I published the photo of the notes and its unedited transcription. I removed it, which was sad because it was many people’s favorite part. However, he was very nice and worked with me to get the other parts of the notes more in line with what he meant instead of my (mis)impressions.
I got to spend ~4 hours with Jonathan Edwards on 4/20 to discuss the future of both of our research, but we mostly focused on my stuff. (In short, he’s considering pivoting from a more general purpose language to one with a more limited focus, such as data science.)
For my own research direction, we zoomed out even further from the aprt.us+Conal research proposal I was working on. For starters, I put out there the idea to put all of my research threads on hold for the next couple years and instead focus on being more of a “community manager / podcaster” full time. JE did not like this idea at all. (Very few people do. It makes my girlfriend and family bug their eyes out.) JE asked me if I’d be ok just supporting others who get all the glory and don’t even give me a head nod. That doesn’t sound great, but it really does feel like our community could use a full time (or maybe multiple) people organizing conversation… Maybe over time this person will manifest themselves and I won’t have to self-sacrifice to make it happen.
So putting that aside, JE asked me about what my research goals are. I am happy to report that my mission and thesis from the /about page have stayed remarkably stable over the last year or so. Either this means I am burying my head in the sand (which seems unlikely given all the research, conversations, podcasts, etc I do) or it is holding up over time.
The mission of this project is to enable all people [1] to modify the software they use in the course of using it.
JE pointed out that this focus on modification brings up thoughts of simple plugin systems or simplistic customization. Of course, that’s the opposite of what I’m going for, so it’d be nice to better convey that distinction. I am committing a draft of this research statement in this commit. Maybe that content will flow into a new version of /about at some point soon… I owe JE a version of this by tomorrow, but Friday is the final deadline for him to review the first draft.
Last week I mostly did personal errands, travel, personal finances, family and NY-friends time, and a podcast interview with Jennifer Jacobs. I got a few hours of research in, but not as much as hoped.
One recent highlight was that on Saturday I spoke to Juan Benet for 90 minutes! If you’ve been following, you’ll know this is a big deal. I see him as a modern day Alan Kay, or a younger Elon in some ways.
He tentatively agreed to come on the podcast to talk about meta Alan-Kay-style things, such as what makes R&D labs work and why they’re so important, why the internet is great but needs work, his plans for Protocol Labs and what they’re working on now, and any thoughts he has on programming languages and interfaces, such as Dynamicland, VR/AR, AI, and brain-computer interfaces.
He also mentioned that he’d be interested in collaborating on an in-person, recorded conference on some of these topics. I’d come up with the agenda and people and his team would find the space and fly people out. Sounds too good to be true!
Google Inbox wasn’t perfect but the simplicity of a single list for email and tasks, and then being able to push things out of the list with a snooze was great. I guess I can replicate that by just emailing tasks to myself given that Gmail has snooze…
I spent 45 min this morning migrating from Todoist to Workflowy. I’m using #tags for categories, organizing little tasks by when I plan to do them hierarchically, and larger tasks get their own tree but are also placed in a date/time. We’ll see how this works…
I’ve been saying for ~6 months now that I have too many projects going on. It’s hard to keep track. Maybe workflowy will help. I think a tool is the right answer but I don’t have the bandwidth now to build something…
Aidan Cunniffe and Dev suggested that I leverage others. For example, I could have someone do a guest interview on the podcast. I asked Ivan Reese and he seems to be interested… I could also probably nominate people to help “run” the Slack, but it’s a bit strange because it’s mostly self-sufficient. Aidan mentioned again that he’d be interested in running a sort of online meetup for people to demo their projects, so maybe I could have a “Future of Coding Community Board” of people interested in planning these sorts of things, and people could divvy up tasks, etc…?
Of course the easiest way to feel less overwhelmed is to quit or stop doing things. For example, it may not make as much sense for me to take on more projects at FRC given that I may also be taking on coding work for Private Prep (the people who bought The Coding Space) for WoofJS… Maybe eventually I could get more sponsorship to stop fundraising entirely, but that feels like a risky move to me… Maybe one day when I feel more secure in my “research reputation.”
This past Monday I had a five hour lunch with Alan Kay here in London. It was amazing. I wrote a blog post about it here: /notes/alan-kay-lunch.
As I went on and on about in this log yesterday, I think it’s time for a zoom out and pivot. The main reason this makes sense is that gametrail, which masqueraded as simple, was actually solving too many problems all at once. Another very key reason is that gametrail was just too out-of-nowhere to communicate what I was trying to accomplish here. Even if I did climb the gametrail mountain and make something OK, nobody would understand why that is a big deal.
In my convo with JE yesterday, we agreed that a better “next project” for me to work on now would be to “take aprt.us and add Conal Elliott to it.” Part of why JE liked this idea so much is that JE’s Subtext directly and explicitly inspired aprt.us – he even spoke with Toby about it. This means that this is likely the first real time JE and I will co-author a paper/talk!
Because the aprt.us/Conal mashup would be usable by virtually anyone, another way to put it is to add Conal Elliott to spreadsheets. From this perspective, this project builds directly on the talk I just gave at the Salon, which I hope to put online real soon (tomorrow).
Next steps on my aprt.us/Conal pivot… JE suggested that it would be a good idea to reach out to Toby with what I’m trying to accomplish and get his perspective and, if possible, buy-in. In fact, we think that’d be a great target for my next next step. Basically, my next step is to put together a “research proposal” of what I’m trying to accomplish by adding Conal to aprt.us, my game plan to do it, what technologies I am considering, and what challenges I expect. As far as research I need to do to make the gameplan, I want to do some mockups in Figma, as well as poke around the aprt.us code for an hour or so.
As far as gameplan goes:
One thing that’s also important to add is the strange ordered-tree-ness of HTML. I don’t know where this fits in the gameplan, but it’s a real challenge (maybe the biggest challenge), so I should decide if I am including it in this experiment towards the beginning (as its main risk), or leaving it out of this iteration entirely to simplify things.
HTML is masquerading as a boring old tree when it's in fact a much more complicated species of "ordered tree" https://t.co/cC3T6ELr6M
— Steve Krouse (@stevekrouse) April 10, 2019
Next week my priorities will be:
And then I fly to NY for almost a month, where my priorities will be:
Oh boy, I have a feeling this is going to be one of the biggest log entries I ever write. I may end up taking pieces and splitting them off into their own files, but I also like just plopping thoughts in here too, so we’ll see… I also have a feeling this is going to take more than one sitting to write, so I may have to come back to it this afternoon after lunch with my girlfriend and mom…
[Note from the future - the next day, April 9th. This entry did end up needing to be split into two days of effort, which is why the title of this is 8-9. For the record, the first half of this entry was written mostly on the 8th and the second half mostly on the 9th.]
First I have to say that I was really not looking forward to this trip. I was already flagellating myself for the mistake of wasting a whole week before I even got there. So I was quite surprised to be entirely delighted by the week, and feel that it was incredibly well spent!
Firstly, I felt very lucky to get to spend so much time with the wonderful crew at Salon de Refuge 2019, including friends from before:
and absolutely lovely new friends:
I also felt like the luckiest obscure researcher in the world as I got to spend time with some of the only “distributed FRP” researchers in the world:
I felt incredibly lucky to get to sit at the banquet at a table of all of the above FRP people. In fact, I was seated in between Pascal and Florian, which was hilariously awesome.
It may be a good idea to email Pascal or Guido to collaborate somehow as we have similar-ish visions. Ultimately, I am focused on denotational semantics, which according to Guido, might make me more similar to ICFP-types. He suggested I google for “denotational semantics for distributed systems”, “denotational consistency” or the “denotational semantics for pi calculi” (which led me to this Wikipedia article on the denotational semantics on the Actor model). He also suggested this ICFP paper on Fault Tolerant FRP.
I also very much enjoyed meeting Patrick Rein and Jens Lincke. I was very sad to miss Patrick’s literature review of LIVE programming, but I do plan to read it.
Once I met with JE, who helped me scrap my version of the talk that was just an outline of my paper and instead focus on just one particular idea (visualizations), it was a blast to prepare, practice, and deliver my Salon talk. I think it went over well, too. As far as I could tell, besides the large-room talks, most of the other speakers at <Programming 2019> made their slides the night before or day of, and didn’t do more than one run-through, if that. So I think my ~5 run-throughs served to make my talk one of the more polished ones. It seemed like more people kept their eyes on me than on their computers, compared with other talks. I think there were 15 people in the room for my talk.
The critique of my talk from Philip (and from Tomas Petricek, who sent in his notes but couldn’t attend) was very good. I found that it was excellent in its portrayal of common criticisms of denotational programming, which embed misunderstandings of the model. In particular, I think a common confusion is separating the denotational model from its UI/PX/notation.
After Philip’s talk, the real fun began. We then had a full 30 minutes of the audience asking Philip and me questions about visual denotative programming. I didn’t want it to end! Such an indulgence for me: a whole room of brilliant people asking me questions about my passion! One thing I hadn’t heard of is GeoGebra, which Clayton mentioned. Luke Church also mentioned Dynamo, which he offered to show me some time later.
When I started thinking about attending conferences, I was in a minimization mindset: I hate traveling and distractions from my deep research work, so I wanted to go to the fewest that JE would “permit”. However, having so thoroughly enjoyed SPLASH 2018 and <Programming 2019>, particularly LIVE 2018 and Salon 2019, maybe I should reconsider this stance and now think about which other potential conferences I may want to attend…
(To give you an idea of how much I enjoyed <Programming 2019>… I was so stimulated by the talks, but mostly the conversations I was having, that I wasn’t able to fall asleep. My head was revving at many thoughts a second, so I ended up just staying up late, tweeting my thoughts off in big stormy bursts.)
Here’s a few conference ideas that immediately come to mind:
With these being already in my cal:
Mostly sparked by bouncing ideas off all the wonderful people at <Programming 2019>, my brain became very active in developing old “research threads”. A research thread is a line of inquiry that my brain has encountered and walked at least a few steps down. As I’m just one person, I really am only able to walk down one thread at a time, but I often wander down other threads, mostly for the pure fun of it. The grass is always greener, after all.
The first subsection (of this subsection) is a “thread” about the importance of improving (ideally, democratizing) the construction of user interfaces. This is the foundation for my current research thread, gametrail. So while I’m not focused on developing the importance argument now, it’s in the background of my current research, and worthwhile to iterate on casually. The next subsection is my current thinking around what has been my main research thread for the ~6 months since REBLS: gametrail (aka visualizing cycles, or a hareactive/turbine devtool). The following four subsections contain insights that push forward other fascinating research threads that will likely be necessary for my BIG VISION of a mashup of Wikipedia, Smalltalk, Haskell, Facebook Origami, and Hazel (leaving out a few sources of inspiration, of course). The final section is related to SciHub, which is only tangentially related to the FUTURE OF CODING.
To be fair, I walk around wearing UI-colored glasses. It’s like walking around with Design of Everyday Things glasses – which happen to be another pair of glasses I am adding to my closet: you see the ways better interface design could improve things everywhere you look. Here’s the shortest way to put it: every time a human communicates with a human who communicates with a computer is an opportunity for user interface design. My tweets (of 1 second ago) say it best:
You people think AI is going to eliminate jobs, just wait for the democratization of user-interface construction: anywhere a human communicates with a human who communicates with a machine is an opportunity for a better UI that the first human could use directly
— Steve Krouse (@stevekrouse) April 8, 2019
Such as:
— Steve Krouse (@stevekrouse) April 8, 2019
* ordering food from waiters
* many functions of customer support
* many functions of retail workers
* helping you pay at a register
ML will help get rid of these jobs as well, but oftentimes a better screen-based UI is the right answer.
After all, I want to input my address in the Uber app, not say it aloud to a self-driving Uber-version of Siri.
I’m beginning to see how difficult this project is going to be. That doesn’t mean I think it’s not important, but I want to ensure it’s worth my research-time budget for the next 6 months.
For starters, I think my new goal (if I still want to pursue it after this analysis…) should be a Bret-Victor-style demo of a gametrail devtool experience, not an actually-working devtool. Let’s start there and see if I want to realize it more. One of the main reasons for this downgrade is seeing hareactive/turbine’s API change so quickly; it’s just not worth the investment to make something that will be instantly outdated.
However, before resuming the design work for flows, I want to take this opportunity of realizing how difficult this is (even with the BV-style simplification) to pause and consider: what problem am I trying to solve here, and do these visualizations solve it?
Ultimately, I am trying to democratize UI construction, with something as usable as Squarespace or Facebook Origami, but with full expressivity, like ReactJS. (Ultimately, I am trying to do something even bigger than this, but UI construction feels basically as hard/easy as any other subtask of my eventual goal of democratizing human creative expression over computation, and I might as well continue here because I’ve already made decent progress.)
As I argued in my Salon talk (which does not yet have a home online… I hope to add it here soon), what enables the power and simplicity of spreadsheets’ visual, dynamic interface is the right computational model: denotative programming. In the same way, denotative programming powers the power and simplicity of Facebook Origami’s beautiful visual environment (and where Origami diverges from the denotational model with pulses, its power and simplicity suffer, particularly in larger programs). Thus, before I build this dream UI UI, I need to find the right computational model, which I think I found in Conal’s DCTP (the artist formerly known as FRP).
However, Conal’s flavor of DCTP (from his papers) hasn’t caught on at all. Particularly next to the success of their bastard cousins, ReactJS and its ilk, the unpopularity of Conal’s DCTPs is glaringly conspicuous. Why is this?
One likely reason is that he hasn’t focused on HTML-like applications in his work; he’s more focused on abstract, geometric shape animations and interactions. And HTML-like applications add a whole slew of complications of their own on top of time-varying values: the ordering of HTML elements and the modeling of AJAX requests.
Ok, so which different but least bastardized flavor of FRP should I adopt as my model? I mostly liked Haskell’s Reflex’s semantics (which I think were the first to model HTML-element order monadically), so probably something a lot like that…
Ok… but what language/framework will I build my tool in? JE suggested that I really need to build it in a language/framework with the semantics of the UI UI tool itself. I thought this was an excellent idea, particularly because it would allow me to get more familiar with DCTP and continue to pinpoint the precise flavor I want the tool to embody.
Haskell’s Reflex wasn’t really in the running as the language/framework to build the tool in. When using it, I was so miserable waiting on the Haskell compiler: 7 seconds for a syntax or type error is 6.5 seconds too many, no matter how beautiful the type system or semantics. JE suggested that I create (or find) a traditional text-based JavaScript (or compile-to-JavaScript via Elm, PureScript, or TypeScript) DCTP framework.
I was very excited to re-discover the Hareactive/Turbine JavaScript (/TypeScript) framework. It’s semantically very similar to Reflex, apart from its Now-monad (which I have mixed feelings about), and is exceptionally well written, even for a Haskell codebase, let alone a TypeScript one. I began playing with it. It was an order of magnitude better than Reflex simply due to the tighter feedback loop enabled by instant JS “compile time” (which you can auto-trigger every couple of keystrokes). It also didn’t hurt that I was able to get it working live at a URL on the codesandbox.io platform.
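For readers unfamiliar with these frameworks, the core idea they all share is a scan-style fold of event occurrences into a stepped behavior. A dependency-free TypeScript toy of that shape (deliberately not the actual Hareactive/Turbine API, just an illustration of the idea):

```typescript
// A toy push-based stream: subscribers are called on each occurrence.
class Stream<A> {
  private subs: Array<(a: A) => void> = [];
  subscribe(f: (a: A) => void): void { this.subs.push(f); }
  push(a: A): void { for (const f of this.subs) f(a); }

  // scan folds occurrences into a stepped "behavior" (a current value).
  scan<B>(f: (acc: B, a: A) => B, initial: B): Behavior<B> {
    const b = new Behavior(initial);
    this.subscribe(a => { b.value = f(b.value, a); });
    return b;
  }
}

// A toy behavior: a value at every point in time, here just "current value".
class Behavior<A> {
  constructor(public value: A) {}
}

// Example: a click counter, the "hello world" of FRP.
const clicks = new Stream<null>();
const count = clicks.scan(n => n + 1, 0);
clicks.push(null);
clicks.push(null);
// count.value is now 2
```

The real libraries add the hard parts this toy omits: glitch-free simultaneous occurrences, sampling semantics, and (in Hareactive) the Now-monad for effects.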
Yet despite the massive improvement over Reflex, Hareactive/Turbine still wasn’t entirely pleasant. (To be honest, the main cause for my dissatisfaction was probably the quickly-changing and meager documentation; in other words, it is a new framework and I was personally smoothing down the rough edges.) But the other main issue was that I didn’t have good insight into the structure of the streams I was working with, nor the structure of the data inside those streams.
A related issue, but one I now see as fully distinct, was having trouble reading and comprehending existing Hareactive/Turbine code, even when it was written by me the week prior. So I thought I could combine these two main problems (not including the documentation issue) and build a way to both see the outer-stream stream-structure and inner-stream-data-structure of a single stream as well as the global structure of dataflow between streams. And thus gametrail was conceived.
It’s important to mention prior work here. I wasn’t the first to think to try to solve both problems at once. RxFiddle does just that. However, I found both their local stream and global dataflow diagrams lacking. On the other hand, rxviz was the ideal tool to visualize a single stream, animated, totally ignoring the global structure that produced it. I initially disqualified this single-stream approach because it lacked the global structure bit. And of course, RxMarbles, the original flow viz to entice me many moons ago, was the ideal viz to solve the issue of understanding single stream operators, a real problem I used to feel before I internalized them, but is no longer the most glaringly apparent problem I now face – and to be fair RxFiddle (and my gametrail idea) does somewhat attack this problem as well. In summary, I was trying to take the best of all these tools, and combine them with some of my own design chops into a single devtool experience for gametrail, that would hopefully point towards (or eventually morph into) the dream UI UI tool.
However, I am now rethinking the scope of this vision. What problems am I really trying to solve here and why? I am solving a problem of a problem of a problem of a sideproblem here. And why am I solving three problems all at the same time, in a single prototype? There’s gotta be a way to simplify…
Here’s a metaphor for one way to simplify. MS Excel is pretty darn good. Yet it hides the formulas, only lets you look at one formula at a time, and has very limited capabilities for inspecting the global structure of a spreadsheet, despite being on such firm denotational ground for it. And still considering all that, Excel is still better than any other programming environment in a power + simplicity showdown. Thus, if I get the model right (which let’s assume I do, with some flavor of the denotational model), and get just one† or two†† things right with the visual environment, I will be “in the money” on power + simplicity. In other words, I won’t write the last word on UI UIs, but it’ll be an order of magnitude better than what came before, even if I can just get one† or two†† things right.
Show the (inner) data (and its structure/type). BV says it best: “Some people believe that spreadsheets are popular because of their two-dimensional grid, but that’s a minor factor. Spreadsheets rule because they show the data…. If you are serious about creating a programming environment for learning, the number one thing you can do – more important than live coding or [ … or … or … ] or anything else – is to show the data.”
Applying this insight to my problem domain (which I tried to kinda do in the parentheses at the beginning of this paragraph), the number one thing I can do is show the inner-stream state (data) (which would imply its structure/type, in the same way a dollar sign in Excel implies its type is “USD”). Likely the close second thing I can do is either 1) show the outer-stream structure or, even better, 2) automatically do the type/stream plumbing to help you get your inner-stream state/data out so you can compute on it.
In other words, STOP trying to visualize the global stream dataflow and STOP trying to visualize how stream combinators work. Those are super useful things to visualize but they are not the MOST IMPORTANT THING, so punt on them, because, to be honest, they are likely “research problems” in their own right.
However, I don’t want you to get the wrong idea. Going back to BV’s Learnable Programming (where I got the above quote), other key environmental factors of a programming environment are:
a) follow the flow (which is what visualizing stream dataflow is about)
b) read the vocabulary (which is what visualizing stream combinators is about)
Yet I would argue that neither of these is a worthy candidate for the second-most important design value to optimize for in a programming environment, because, applying similar logic from above:
a) only some spreadsheets, and only minimally, allow you to “follow the flow”
b) much of spreadsheet vocabulary isn’t common concepts (for example, HYPGEOMDIST may require as much if not more learning than filterApply, to compare two random ones)
But despite these shortcomings, spreadsheets succeed because of the above-discussed adherence to “show the data,” as well as embodying what I would argue is the second most important programming environment characteristic: create by reacting, which is distinct from but is enabled by live coding. (Quoting Learnable Programming again: “it’s [live coding] not a particularly interesting idea in itself. Immediate-update is merely a prerequisite for doing anything interesting – it enables other features which require a tight feedback loop.”) It’s so damn hard to construct all these streams in one’s head. The essential complexity of UI problems is surprisingly high. We can try to escape it by coding imperatively, but that’s just wrapping it in bacon and adding more complexity. The only solution to lots of essential complexity is to augment humans, and one way of doing that is letting them add small incremental thoughts to existing structures instead of forcing them to construct intricate castles in their heads. If we first get a tight feedback loop (live coding), and then allow the programmer to start with something concrete on the screen and gradually sculpt it, we’re in amazing shape.
But again, to be clear, I want to focus on show the data as my only focus first, and maybe after that’s mostly done, I’ll see if I can slip in create by reacting. (The jury is still out on whether live coding is absolutely required. My gut says it is, but I don’t want to make the mistake again of thinking more things than are necessary are necessary.)
Ok, so let’s summarize all that nonsense into a thing I can communicate to JE in our meeting in two hours:
Even shorter: have I sufficiently lost the forest for the trees that I should bounce from this PX-of-my-text-based-DCTP-framework problem back to the original problem of a GUI (think FB Origami or Josh Horowitz’s Pane or Webflow) for DCTP? Because that’s the really interesting problem, and I could begin working on it without solving the gametrail problem by starting my design work with mockups in Figma.
Ok so now let’s jump up from the Y of gametrail to the X of “the ui ui” (which is itself the Y to “democratizing computation’s” X).
I don’t have the time or energy to recap and then add my new thoughts to this project, but suffice it to say that it’d mainly be similar to Aprt.us, except instead of everything being absolutely positioned, everything would be ordinally ordered as children of some parent HTML element. In other words, it’d be tree-like in that you can only add items within items or change the order of items – all ideally via drag-and-drop in some eventual version.
(As far as drag-and-drop/direct manipulation goes, I think it would easily express all reorderings and the swapping out of components in the “HTML monad.” But the rewiring of the bindings of the recursive HTML monad… that’s no walk in the park… But as with the last section, that may just be a rough edge to leave in there for another day… and there’s also a chance that we can come up with (or steal from some obscure corner of math) a better, foundational UI paradigm than HTML’s wacky notion of a tree structure where the children are also ordered, with its awful hidden dependencies between children as well as parents…)
Continuing to extend the Aprt.us metaphor: just like aprt.us allows you to customize all the properties of shapes without overwhelming you with options, the UI UI would similarly give you access to all the event streams/behaviors of the “selected element”, with the more obvious ones prioritized and the rest hidden. In order to compute with those flows, one would type text-based combinator formulas – the equivalent of aprt.us’s spreadsheet-like JavaScript formula interface. (Of course, we eventually want something better than that, but one step at a time…)
To put this another way, I am leveraging the insights I learned by mentally simplifying gametrail (discussed above) to simplify the first version of the UI UI. Just improve slightly on aprt.us: add events, cycles, and higher-ordered-ness, and don’t you dare dream of adding anything else. (Note: I purposefully did not include even single-stream visualizations in that list. That’s too much innovation in one step. Instead, we will visualize events as the data of their last occurrence and its timestamp, and behaviors as they currently are: their current values.)
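That visualization decision (events shown as their last occurrence plus its timestamp; behaviors shown as their current values) is simple enough to sketch. A hypothetical TypeScript shape for it (all names invented for illustration, not from aprt.us or Hareactive):

```typescript
// Minimal "show the data" view of an event stream: remember only the
// most recent occurrence and when it happened.
interface LastOccurrence<A> {
  value: A;
  timestamp: number;
}

class InspectableEvent<A> {
  last: LastOccurrence<A> | null = null;
  private subs: Array<(a: A) => void> = [];

  subscribe(f: (a: A) => void): void { this.subs.push(f); }

  occur(value: A, timestamp: number = Date.now()): void {
    this.last = { value, timestamp }; // what the devtool would display
    for (const f of this.subs) f(value);
  }
}

// A behavior is shown as its current value, unchanged from today.
class InspectableBehavior<A> {
  constructor(public current: A) {}
}
```

The devtool would then just render `last.value` and `last.timestamp` beside each event, and `current` beside each behavior – no marble diagrams, no global dataflow graph.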
I don’t want to go into it now, but one other interesting source of related ideas for the UI UI can be found from a few months ago in the log, or here on Twitter:
Love it! The text right below what he quoted:
— Steve Krouse (@stevekrouse) December 16, 2018
We can continue this idea for all kinds of expressions: if-expressions, pattern-matching, lambdas, function application… Can we build nestable custom GUIs for every part of System F/Haskell?! I think so!
Let’s go up one more level. This is the X that the last section began with an allusion to. In other words, this is the dream that the UI UI (which is really a holy grail in its own right) is just an implementation detail of.
I’ve spoken about this vision before elsewhere, so the only thing to add is that maybe instead of slowly building up to it, I should consider skipping to it now, mucking about in it to find the key problems, and then retreating with my tail between my legs to the lower levels that will actually make it possible – but then I’d be armed at those lower levels with insights!
Ok, that likely didn't make sense to anyone but me. It likely won't even make sense to a future me! Let me be concrete: instead of working on some version of gametrail or the ui ui as discussed above, maybe I should recruit a couple of fellow revolutionaries to begin collaborating on a piece of end-user software we all use every day, such as email, Slack-like applications, calendars, todo lists, social networks, etc. This idea has some spiritual similarity with JE's Chorus, but it's not entirely the same, because I have a bias towards customizability of the UI instead of the opposite.
I’m sure there are a million details about this that make it not make sense that I haven’t even considered, but I think it’s a fun project to daydream about working on with Geoffrey Litt and others. (Whenever I am tempted by this, I force myself to remember how frustrating it was to work on BlinkNote. Yes it was a blast at times but it also wasn’t practical to be using a bleeding edge notes app with critical data of mine, while developing it on the side.)
Sorry to tease you with this title, but I don't currently have the time to expand on it. The quick version: what if we stop programming by mutating a bag of text, and instead treat expressions as immutable, just like the values we want them to express?
CS problems get complicated in distributed settings (and others) because they break the core assumptions of Computer Science:
What would a framework look like that abstracts over all settings (distributed, time-dilated both in space and computational speed) by first making all such assumptions explicit (like I tried to do above) and precise (like I failed to do above), and then abstracting over them all to spawn a more general programming language?
Here's another way to say almost the same idea: what if denotational isn't abstract enough? Could it be that what I've been yearning for all this time wasn't a better PX for Haskell, nor even a better PX for Agda or Coq, but actually something more abstract/math-y than any of those (also with a good PX)? Maybe what I want is TLA+ plus program synthesis, but that doesn't feel quite right either…
One thing I want to call out now, though: many people seem to subconsciously acknowledge that the future of programming will one day be programming in Coq/Agda/TLA+, once we can finally make the PX as decent as other languages'. But it seems like we've all given up on this problem as "way too hard for our lifetimes" without acknowledging it out loud. I understand people might not want to waste their lives researching a moonshot, but at least be honest that you're working on a short-to-medium-term improvement to an inferior version of what we eventually want the solution to be. In other words, acknowledge that we're in a "program synthesis winter" and talk about what to do about it. (Maybe people in that community already acknowledge this. I need to learn more about them.)
Another related idea: Call-by-bullshit idiosyncrasies of various languages gotta go. This kind of order of operations is paper math bullshit. We got magic ink now - no more implicit, global evaluation-ordering conventions.
As you can tell, I’m getting tired/delirious so I’m just going to link to the remaining ideas for posterity to deal with and decode:
Quick recent insight about LogicHub: one way to make it more in line with David Deutch’s school of thought is to insist that it’s truly infinite in the downward direction. In other words:
It's nuances, not 🐢, all the way down
— Steve Krouse (@stevekrouse) April 3, 2019
A related insight: theories/facts/things are containers of metaphors that explain the idea itself but do not defend it, as well as some structure that contains arguments, where each argument is a thing that attempts to defend it in terms of other theories/facts/things, which are similarly recursively defended, all the way down to infinity… An interesting additional point: the structures that contain single arguments are themselves also theories/facts/things which need to be defended and can be overturned later, impacting all the theories/facts/things they helped prop up before their dethroning. For example, when (not "if," because we are always at the beginning of infinity, everywhere) we later learn of some contextual caveat in which the double-blind, controlled study is not as effective as previously thought, all arguments that relied on it outside the contexts in which it reigns supreme will be appropriately penalized.
I have 5 essay/talk ideas, partially outlined to various degrees, that I was planning to embed right in this journal entry, but I got too tired. So I'll just tease you with their titles; I'll eventually clean up those outlines and add them to the internet soon:
Given how hectic this week has gotten with my parents in town, JE coming to town, and possibly a meeting with one of my all-time-heroes next week (more details soon), I am feeling like keeping my priorities small for this week:
Given my great conf experiences, I think I want to re-adopt JE’s suggestion to aim my research at specific conference deadlines… and those will be… to be determined when I talk to JE in 10 min!
Sometimes I wonder when I'll ever get to all the resources I collect (currently organized in GitHub Issues). My new plan is to go to primary sources of inspiration like that (which also includes programming random shit to get annoyed) whenever I am feeling down about the importance or likelihood of success of this work. As long as I'm excited, let's move ahead with what's already in my brain!
This isn’t really related to the “rest of 2019” section but I had a funny idea to redo my logo with the background image of “we interrupt your regularly scheduled programming”… and I couldn’t think of a better place to put this sentence.
Got the flu in SF so it’s been a rough week! Getting back into it now. Apologies if the following thoughts are even more scatterbrained than usual… I find myself wanting to re-evaluate all my projects:
Dark - I have done ~30 reviews so far. I think I’m running out of interesting ones, but I’ve thought that before… I just shot Ellen an email with a few other ideas and floating the idea of publishing the past reviews out there.
First Round - I need to follow up with them here about maybe taking on a new project because the current project is mostly done.
As far as freelancing goes, I may have space to take on new projects, but we’ll see what happens with Dark and First Round. Given the sale of TCS, I’m not so worried about the extra time. I’ll take it for research and let opportunities come to me slowly as they have been.
I'm glad I stuck it out with the DCTP essay for the Salon. Next week I'll work on the presentation, and the following week I'll head out there. I think it'll be a worthwhile exercise. But then I'm going to focus on my research. No more distractions! The Salon definitely was a distraction from research for the past two months, but ultimately I learned a lot about the criticisms of the DCTP paradigm, which is a great thing.
As far as the podcast goes, I’m happy with the status quo. Still have a large backlog of interviews, but I’ll get through them at some point.
The Slack group continues to chug along. I keep teasing moving to another platform, such as Zulip, but nothing ever feels worth it on a closer look.
As far as my research goes… I feel a bit tired of the reactive/stream thread I’ve been pulling on for years now, but at the same time I do think it’s important to get to the end of it. I think there’s something important about the “view update problem” synergy that I struck on recently, but still need to look into…
Part of me wants to zoom out and go back into a "reading mode". I almost want to pick another problem, instead of finding the right UI paradigm and solving the GUI GUI creator problem. I also want to finish TaPL and the Bob Harper textbook, and all the links I put into my Trello board of things to research.
But let’s dig into my current status of gametrail, the reactive streams visualization/devtool project:
Things that are bumming me out:
As far as research next steps goes:
Talk with JE about the view update / bidirectional transformation problem, because I think that's key. Check out the Twitter links, and jump into the bidirectional literature. In particular, I am wondering if there's a way to specify write-scope granularity while still feeling elegant: http://www.cs.pomona.edu/~michael/papers/ugrad_thesis.pdf
I also want to work on more mockups. While it feels like I've spent a while on them already, I need to get into the design mindset of cranking them out. I think 100 hours is a good minimum to set before giving up.
One other thing on my mind: while I do believe in the power of visualizations, I wonder if they're ultimately necessary here. One of the main problems with streams is laying out a list and then doing something with that complex stream, and I can think of tools other than visualizations to help with that, such as an editor that can suggest actions on streams, or just a better API for lists of HTML elements.
Also, especially after seeing how quickly the Turbine/Hareactive API is changing, I am much less excited about actually building a devtool for it. It seems much more reasonable to do a bunch of mockups and maybe a BV-like demo that explores the UI of it, and then do a write-up or something. I'm not sure what the next step would be after that, but I don't think it's super important to figure that out at the moment. I'm content to spend the next month or two actually focused on getting some FRP streams visualized.
Before I started writing today, I felt like quitting this research thread as being too difficult but now I find myself as excited as ever to get back to it. Where I used to feel weighed down by the difficulty, I now see it as an opportunity to do truly original and important work. Where other opportunities are easier and lend themselves to quicker turnaround and payoffs, this work is key. However in the future, I may try to be a bit more strategic about which problem I focus on. But I worry that it’s impossible to really guess these things from the outside. Most problems get harder the closer you get to them.
Despite being sick, I spent a couple hours on new diagrams on Tuesday, March 5, which felt great. I was hoping to get 30 hours of work on these this week, but four is better than zero. Maybe I'll have time tomorrow to work on them as well. Yesterday was just being sick, and today is catching up on emails and researching Coda.
Ultimately this was the culmination of Tuesday’s work, which built on JE’s suggestions to have vertical flows to help the text flow horizontally and Josh Horowitz’s suggestion to have “wormholes” to keep things from getting too graphy.
I also re-did the button click:
And messed around with collapsing the space between flows, but found it difficult to lay out the mapping text above:
In response to my tweet about these new diagrams, Antranig pointed me to the bidirectional lens work by Pierce, which is funny because I have his book right next to me. I am wondering how related the database view-update problem is to the child-to-parent update problem in HTML UI design. I'm a bit worried they are the same problem and I've wasted all this time not realizing it. It would explain why nobody has been able to figure out end-user-created UIs: the view-update problem is critical and unsolved. I'll need to look into this more…
I haven’t been very productive the last few days. I came down with a cold towards the end of Saturday. I did just an hour of researching Coda for Dark, organized the meetups, started a test Zulip, and published the JE podcast. Ok, so not nothing, but I really need to get back to my main research!
Oh, I forgot! I also read 14 chapters of TaPL on Friday. Not my focus but still worthwhile!
Sometimes I find sad mantras rattling around my brain. At one point it was “grass is always greener” when I was considering doing something else, but everything I tried was worse than what I was doing. Ultimately, I got over it and now feel that research is a great choice for me. I don’t salivate over other careers anymore or imagine them as rosier.
I do have a new mantra I’ve been trying to kick: “what do you have to show for your efforts?” I do have almost 40 podcasts. The Dynamicland essay and the LIVE bootleg. The Slack and a few meetups. A Twitter history. A very long research journal. A mediocre paper and presentation at REBLS. Another soon-to-be mediocre paper and presentation at the Salon de Refuge.
I want something polished and shipped that I can point to. I think my Hareactive/Turbine devtool could be it, but maybe I should consider hacking it together in the Bret Victor style of a cool demo, but not an actual usable tool. After all, I’m not sure Hareactive/Turbine is the end-all-be-all that I want to support. It’s more just a decent library I can use as a proof-of-concept.
I'm going to henceforth refer to the Hareactive/Turbine devtool as gametrail, because of "hare" and because it's about following the paths of events and stuff. And "Hareactive/Turbine devtool" is too damn long.
One clear next step is from my last meeting with JE, where he suggested flipping the diagrams 90 degrees so that the flows flow downwards, which will allow the text more space. I think it’d definitely be worthwhile to re-do some of the mockups in this direction – though maybe I should hold off until I do one or two more of these in live HTML so I get a better sense for how they feel. I could do the next live HTML one vertically, too.
I also had a really wonderful call with Josh Horowitz about layout algorithms on the 15th of last month. I've been meaning to add it here. One key point he made was that if a layout specification starts to "feel like sudoku" then it's probably undecidable, and you need an algorithm for it that will likely have unpredictable outputs. What if the user deletes one node and the whole thing rerenders? Clearly that's not what we want. He made a really solid pitch for simple, mostly linear graphs, possibly in HTML with embedded shapes instead of in SVG with foreignObject text, with maybe a few wormholes here or there for the far-away branches.
In short, I need to go back to mocking up!
I finally bought Pierce's Types and Programming Languages (TaPL) on Stephen Diehl's recommendation. I also bought PFPL on Cyrus Omar's recommendation, but haven't gotten it yet.
After reading it all Friday afternoon, I thought about it a lot, particularly on my run yesterday (which likely helped get me sick). It got me wondering about how function substitution is so strange, including all the variable re-naming tricks we need to do to get closures to work. I also thought about how function scopes are super similar to objects/dictionaries. In other words, I was questioning the primacy of the functional approach.
Sometimes I think about how functions are really just constant expressions that you can reach into and tweak various parts of simultaneously. So they’re “linked constants” + “tweakable constants”. But I guess that doesn’t cover first-class functions which are a strange closure thing.
All these thoughts eventually brought me back to an idea I’ve had for the last few months: taking denotational semantics seriously for user interfaces. Define everything as a function from x, y, t, keyboard events, and mouse events.
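To make that idea slightly more concrete, here's a minimal sketch with made-up types (nothing from any existing library): a UI is a pure function from time and the input history to an image, and an image is a pure function from a point to a color. Even a blinking text cursor falls out as a pure function:

```typescript
// Toy denotational UI sketch (hypothetical types, not an existing library):
// an Image is a pure function from a point to a color, and a UI is a pure
// function from time and the input history so far to an Image.
type Color = string;
type Image = (x: number, y: number) => Color;
type Input = { t: number; kind: "key" | "mouse"; data: string };
type UI = (t: number, inputs: Input[]) => Image;

// A text cursor modeled denotationally: a 2x16 rectangle that blinks on a
// 0.5-second cycle, drawn at an x-offset tracking the keys typed so far.
const cursorUI: UI = (t, inputs) => {
  const typed = inputs.filter(i => i.kind === "key" && i.t <= t).length;
  const visible = Math.floor(t / 0.5) % 2 === 0;
  return (x, y) =>
    visible && x >= typed * 8 && x < typed * 8 + 2 && y >= 0 && y < 16
      ? "black"
      : "white";
};
```

Everything here is just function composition – no DOM, no mutation – which is exactly the "denotational basics" framing I want to push on.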
Part of how I arrived at this idea: I was thinking about how complicated HTML child-to-parent communication is, which is what I spent last Wednesday on, and which is related to my convo with Josh about layouts. But I am wondering if the HTML / CSS box model is clouding my thinking, and whether, if I went back to the denotational basics, I could get to the bottom of the complexity here.
I thought about how to implement two input boxes, down to modeling the cursor as a rectangle. I didn't get very far with this. I bet, just like with multi-tier languages, there is a whole research topic that pulls on this thread of "layout languages." Of course there are the SAT-solver / Cassowary approaches, but those run into the "sudoku solver" issues.
At the very least, I'd like to get to the bottom of the essential complexity of common interface problems, like the 7GUIs tasks, and see if I can express their inherent recursiveness elegantly.
I’m beginning to get tired of these terms, because nobody ever defines complexity or points it out precisely. It’s always hand-wavy. I have a sinking feeling this is an issue.
Today was mostly a bust on account of the cold, but I did get the Zulip started, a bit of Coda research done, and this log written. Tomorrow I will likely still feel sick, but hopefully I can get a few hours of work done, maybe Coda, email, or gametrail mockups. I'd also like to finish organizing my meetups – they are both very close to "done".
I’ve spent a ton of time on this the past couple of weeks. It’s been a bummer because it’s time away from my Turbine devtools visualizations. But it’s not wasted time, for sure. I’ve learned a ton! Done a lot of research and firmed up my thoughts about things.
Yesterday I had a really wonderful conversation with JE about how to move forward, considering that my paper needs drastic improvements and reduction in scope. We considered a lot of things, but ultimately decided it could be good for me to explain how I became convinced of the original FRP paradigm, instead of trying to argue for it in the abstract. I spent all day today on it and put it on the dctp draft file, where I’ve been keeping all of this work. It may be silly but I didn’t feel like polluting this log with all those random notes.
The past week I did a bit of traveling, and lost some work time there. I’ve also been feeling a bit bummed about working on this Salon essay instead of my Turbine visualizations. Part of me thinks I should just scrap the Salon essay, pull out, not go to >Programming< at all, and just focus on my devtools.
Otherwise, I only have 2 weeks before going to SF where I may not get much research done at all, then a week at home, and then >Programming<. Basically, I am worried that come April 8th, I will be right where I am now on my main research thread.
Instead of that, I could use those three weeks to make solid progress on my devtools visualizations.
Or I could zoom out and see this as a longer-term investment in my career as a researcher. I am getting myself out there, explaining what I am doing and why I think it important, and setting the stage for interesting research in the future.
Also, I wonder how little work I can get away with. Maybe the presentation I came up with today is decent, only requires another few hours, and I can ask not to be published (or just allow my original piece to be published because who cares?). Jonathan will help me navigate this…
The rest of today will be random inbox tasks, such as organizing the London and SF meetups. Tomorrow is travel so I do not expect to get much work done.
Friday and all of next week is open. I want to work on:
After struggling to parse Turbine/Hareactive data structures, I sent Simon Friis Vindum an email to schedule a meeting for him to help walk me through it. He shot back a great question, asking for a “motivating example” of what I want to accomplish. I thought it’d take me just a couple minutes to sketch out the “target data structure” I’d like to parse out of Turbine, but I ended up spending all of yesterday, and likely more time this week on it.
I started with the simple counter app, and tried to manually instrument the required streams to visualize it. I ended up doing some random tricks to get it to work, but it was a super valuable exercise.
I’m excited to continue with this for the other 7GUIs projects, like the temp converter, timer, etc. Even my buttons that make buttons! I think it’d be a mistake to try to abstract too early here, but instead focus on the specific goal visualizations and eventually map backwards to a more general tool.
Some little things that I'll need to work on at some point: the `marker` svg tag.

Note: I just had a great meeting with JE that cleared up some of these questions.
I have a lot of traveling the next few months. It’d be great to take the FoC Meetups on the road with me to these cities somewhere in these dates:
I think it's worth setting aside time to plan this out, so I will maybe try to do this over the coming weeks – sooner rather than later. I struggle so much with the format… dinner, drinks, at an office, or a bar/restaurant, presenting, etc., etc.… I'll ping around and think about it more…
Tomorrow I'd like to finally write the blog post for, and then release, the Tudor Girba episode. I think it'd also be good to finish up JE's episode and send it over for transcription. Then maybe I should work on the DCTP todos.
Thursday I’m visiting the Vatican, and Saturday I’m doing more touristy stuff, so not sure how much work will get done then. Friday and Sunday are free. I have plenty to do (FRC, visualizer, meetups, DCTP essay), so I’ll just have to figure it out.
Woke up late, and only got a few hours of work in today. My girlfriend and I have been very active - running all four days we’ve been in Rome so far!
My DCTP paper was accepted into the Salon de Refuge, which is a mixed blessing. It's great that I now have to take the time to make the piece decent, but it's also not what I was hoping to spend my time on right now… I'll talk it over with JE and get his advice on decreasing the scope of the piece so I can get something good by the workshop on April 2nd. I'll also have to prepare a talk too!
My favorite cyclic and higher-order FRP example now has a very fun visual to go along with it:
Glen was nice enough to read my last entry and make a very kind and helpful comment:
"These diagrams were initially disappointing." There is some visual design work you can do to make them clearer. First by organizing things spatially along a grid, and then by using colors and line/font weights to create layers a la Tufte: pic.twitter.com/ShIvO55I3s
— Glen Chiacchieri (@Glench) February 7, 2019
This is a really great point. My diagrams thus far have been created about as fast as possible while trying to get 80% of the details right. They are basically pencil and paper sketches, but done on the computer.
While part of me thinks it'd be fun to try to get these sketches pixel perfect, I worry that is a longer task, and I'd be better served right now inspecting the Hareactive/Turbine stream data structures to get a better sense of what I'll be able to glean from them to then visualize. Then once I have a better idea of what's possible, I may come back and try to make these pictures clearer and more accurate.
I spent a bit more time Thursday last week visualizing single Hareactive events and behaviors. I spoke with JE and he thought I was roughly on the right track. I came up with a simple pan-and-zooming interface and had this wacky idea that we could simply shrink a large object so you’d have to zoom way in to see the values. But then I got bogged down in SVG text layout algorithms and gave up. I felt like I was losing the forest for the trees.
Friday was devoted to researching mbeddr for Dark, which was fun. I watched a bunch of their videos, got it installed, messed around, and even read a few papers they published. I then finally printed out and read most of Cognitive Dimensions of Notations, which I realized is an instant classic, and then spent an hour trying to compile a list of "classics."
On Saturday, I recorded a very long in-person conversation with rntz, which was fun but also exhausting due to the length. It was a fun, math-heavy conversation. I recorded it on my new H4N mic where each of us was speaking into one of the stereo mics, which I think came out OK.
On Monday, my girlfriend and I flew to Rome for more sunshine in our lives, so I didn’t get any work done.
Tuesday morning was working on mbeddr for Dark. (I have been scheduling these Dark presentations every 3 weeks or so to give me more time for other work. The next two will be Coda and Retool. Maybe I’ll be in SF in March to give one of these in person!)
Yesterday and today were mostly research and I got a decent bit done, mostly in pencil and paper and Figma…
After getting lost in the weeds with text layout last week, I realized that I’d be better served prototyping in diagrams first rather than code, so I tried my hand at visualizing the first few 7GUIs tasks.
I should probably put `{ ... }` or `DomEvent { ... }` as the values in the first line of the counter, but I guess I forgot. Besides that, it seems to make OK sense. It's a bit weird that the `7` is at the bottom but appears next to the count in the diagram; that could be solved in a few different ways, for example putting the `7` where it belongs and sending an arrow up to it.
The temp converter has a nice parallel structure, but then there's that awkward 3rd-and-4th switch over on each side. One possible way to simplify this is by only computing C (by converting F to C, combining, and stepping) and then mapping over that Behavior to get back to F. But it's really six of one, half a dozen of the other…
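Here's a toy sketch of that C-only simplification (my own minimal stream type, not the real Hareactive API): both inputs merge into a single Celsius flow, and Fahrenheit is just a map over it.

```typescript
// Toy stream (hypothetical, NOT the real Hareactive API), just enough to
// sketch the "compute C only, then map back to F" temp-converter shape.
class Stream<A> {
  private listeners: ((a: A) => void)[] = [];
  push(a: A): void {
    this.listeners.forEach(l => l(a));
  }
  subscribe(l: (a: A) => void): void {
    this.listeners.push(l);
  }
  map<B>(f: (a: A) => B): Stream<B> {
    const out = new Stream<B>();
    this.subscribe(a => out.push(f(a)));
    return out;
  }
  static merge<T>(...ss: Stream<T>[]): Stream<T> {
    const out = new Stream<T>();
    ss.forEach(s => s.subscribe(a => out.push(a)));
    return out;
  }
}

// Edits to either text box, both normalized to Celsius...
const celsiusInput = new Stream<number>();
const fahrenheitInput = new Stream<number>();
const celsius = Stream.merge(
  celsiusInput,
  fahrenheitInput.map(f => ((f - 32) * 5) / 9)
);
// ...and Fahrenheit is just a map over the single Celsius flow.
const fahrenheit = celsius.map(c => (c * 9) / 5 + 32);
```

This collapses the symmetric "switch on each side" structure into one canonical flow plus a derived one, at the cost of the nice visual parallelism.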
I experimented at the bottoms with heights as a function of magnitude, which I think came out nicely. The loops at the bottom make reasonable sense. The main thing I dislike about this example is that `newValue` at the top and `setValue` at the bottom feel very imperative, breaking the nice denotations, but I guess that's kind of unavoidable given the way the HTML DOM works…
For the temp converter, I also tried an alternative with timelines that all line up, but it didn't quite come together.
The flight booker was fun. Here I experimented with a few different ways to display Behaviors. All the way to the right, I have a Behavior that's always true; it's simply a line with a T over it. Then down in the bottom-left corner, I have a Behavior that switches between T/F a few times, with hard lines between switches, and soft dotted lines where computations occurred but returned the same value as before. In the actual prototype you could imagine hovering over that place to see the computation that occurred there. (However, as I reflect on it now, this diagram might reflect too strongly the implementation detail that these behaviors are push-based…) One interesting note: I realized while making this diagram that the `keepWhen` function is a better fit than what I actually had in the code (`filterApply`).
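For my own future reference, here's a toy model of the two combinators' semantics as I understand them (not the real Hareactive implementation – occurrences are just arrays here): `keepWhen` guards a stream with a `Behavior<boolean>`, while `filterApply` needs a whole Behavior of predicate functions.

```typescript
// Toy semantics sketch (NOT the real Hareactive implementation): a Behavior
// is modeled as a thunk sampled at each occurrence, and a stream's
// occurrences are modeled as a plain array.
type Behavior<A> = () => A;

// keepWhen: keep an occurrence iff the boolean Behavior is true at that time.
function keepWhen<A>(occurrences: A[], b: Behavior<boolean>): A[] {
  return occurrences.filter(() => b());
}

// filterApply: sample a Behavior of predicates and apply it per occurrence.
function filterApply<A>(
  predB: Behavior<(a: A) => boolean>,
  occurrences: A[]
): A[] {
  return occurrences.filter(a => predB()(a));
}
```

The flight-booker case only needs a "is this form valid right now?" boolean, which is why `keepWhen` reads better than lifting a full predicate Behavior through `filterApply`.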
I don’t mind that the GUI elements are spread out haphazardly on the canvas here, as long as elsewhere I can see them in the order that they appear on the page. Another possible interface is to not have them rendered in the SVG at all, but when you hover over them, it’ll light up in the respective GUI, much like the HTML devtools inspector.
This is also the first time I experimented with putting a lot of code (for the lift in the bottom-left) on the screen. It doesn’t quite make sense in context (I just copied and pasted), but it’s a reasonable placeholder. In the first version of the prototype, this will not be possible, because we won’t have access to the source code.
The loopy bits for the background colors of the textboxes are pretty straightforward. I like how that came out much, much more than the temp converter, because setting the background color to a Behavior feels more semantic.
It was fun to work with time and higher-order flows. I simplified this diagram by assuming access to a "seconds since the app started" stream, which you see in the top-right and bottom-left. Like the last example, I put in some "filler code" for the `snapshotWith` that won't be possible in the prototype.
The higher-order stream came out well. I think it's important to keep them lined up the way I did for clarity. I don't know if it's obvious that the third stream is always `5.7`. Maybe I should put the dotted line markers at `5.7` throughout and put the `5.7` at the end as well? The dotted markers are an interesting trick, but there's also the step-function approach. And here's another idea: number lines with firm line markers usually represent continuous numbers, but that's how I've been representing Events. Maybe I should represent events as circles with no line (or a really stylized / faded line) and leave the timeline to Behaviors…
These diagrams were initially disappointing. Even when I look at them, they are immediately intimidating, even off-putting. They don’t yield to insights in 3 seconds, like many other visual aids do.
However, upon reflection, I thought about how many of Tufte’s diagrams take many minutes of study to fully grasp, and then even more time to mine for interesting insights. So while these diagrams might take 30 seconds or 5 minutes to comprehend, that’s an order of magnitude better than an hour or two to understand the code. And they will likely be helped along by liveness when they are connected to GUIs that can be interacted with, turning the crank on these diagrams.
I’m going to finish preparing for my mbeddr presentation for Dark in 3 hours tonight. Tomorrow and Monday I will try to continue pushing this work forward for my meeting with JE on Monday, maybe switching to coding some of this up… I’d like to do a visualization of my buttons that create buttons example before that.
As far as next steps on coding, I’d like to go high level and see what I can inspect/crawl out of Turbine stream and component data structures to turn into the diagrams above. Then I’ll combine the low-level work from last week with this high-level work to have beautiful diagrams! (I also need to do a bunch of refactoring with last week’s code. It was getting a bit annoying working with SVG in the way I was. I am wondering if I should build an abstraction over SVG that converts it to a normal X-Y plane…)
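A sketch of the kind of crawl I have in mind (hypothetical node shapes – I don't yet know what Turbine's internals actually expose): walk a node's parents to produce the node and edge lists a diagram layout would consume.

```typescript
// Hypothetical crawl sketch (NOT Turbine's actual data structures): assume
// each stream/behavior node knows its parents, and walk that graph into
// the node/edge lists a diagram layout algorithm would take as input.
interface StreamNode {
  id: string;
  kind: "stream" | "behavior";
  parents: StreamNode[];
}

function crawl(root: StreamNode): { nodes: string[]; edges: [string, string][] } {
  const nodes: string[] = [];
  const edges: [string, string][] = [];
  const seen = new Set<string>();
  const visit = (n: StreamNode) => {
    if (seen.has(n.id)) return; // tolerate shared parents / cycles
    seen.add(n.id);
    nodes.push(n.id);
    n.parents.forEach(p => {
      edges.push([p.id, n.id]); // edge points in the direction of data flow
      visit(p);
    });
  };
  visit(root);
  return { nodes, edges };
}
```

Whatever the real internals look like, something of this shape is the bridge between last week's low-level SVG work and the diagrams: crawl first, then lay out, then draw.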
I think Tuesday next week would be a good time to finally publish the Tudor Girba podcast, and finish editing the JE podcast and send it off for transcription.
Wednesday could be a good time for some FRC work. Thurs and Friday I’ll plan on getting back to working on the research.
Aaaaaaaand, I’m back! It’s been too long since I’ve done research and a proper log entry. Instead of struggling through getting set up with working on Hareactive/Turbine, I thought it better to wait until I spoke with Simon, and use the waiting time for other work, such as freelancing and the podcast. I’ve also been obsessed with the Sword of Truth series on audio-book, so that’s to blame as well.
As far as the podcast goes, it’s been productive:
The Tudor Girba episode is ready to go, including the transcript, but I just need to write a summary blurb at the top and then publish and submit to HN.
The JE episode is edited. It just needs an intro and outro, a transcript, and a blurb.
I recorded a conversation with Cyrus Omar.
I am speaking with Michael Arntzenius in person on Saturday!
This will leave me with plenty of podcast editing and publishing to do in February in Rome. (My girlfriend and I are headed to Rome for ~3 weeks for more sunshine in our lives. And Italian food, of course.)
I’ve been doing a better job of balance than in the past. My only remaining problem is that it’s a bit difficult to decide when to do what, and when to switch. I’ve noticed that I get most of my research work done in the days before meeting with JE, and most of my work for Dark done in the days before meeting with Dark. I don’t often meet with FRC, so I sometimes forget about them, but I have two active projects for them that are waiting for whenever I have the time…
I'm glad I did the research on Modern Smalltalks for Dark. It was frustrating at the start, but well worth it in the end. I'm dreading doing it for mbeddr and TLA+, but I know I'll be thankful afterwards. The plan is to spend all of Friday on Dark stuff.
Yesterday, today, and tomorrow are research so as to have impressive things to show and tell to JE tomorrow.
Next week will be Dark, FRC, and some research, some podcast.
I finally got Simon on the phone! It’s been a number of weeks of waiting for him to finish up with his exams. He was wonderful. We chatted for ~2 hours, first about a few issues I opened on his projects, then about the devtool, and finally I shared my screen and he helped get me set up in the right code environment, and I got started adding SVG elements to the Turbine library.
One issue he mentioned with the devtools is that intermediate stream variable names don’t get captured in any data structure, so people will have to figure those out from context.
We also talked a bit about how push-based FRP might have a memory leak problem, because parents hold references to children that would otherwise be garbage collected.
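A minimal sketch of the leak Simon described (this is my own illustration, not Hareactive’s actual implementation): in a push-based design, the parent stream keeps a strong reference to every child derived from it, so a child that nothing else references can never be garbage collected unless someone explicitly unsubscribes it.

```javascript
// Sketch of a push-based stream. The parent -> child references in
// `children` are what cause the leak: a derived stream stays reachable
// (and thus un-collectable) for as long as its parent lives.
class Stream {
  constructor() {
    this.children = [];
  }
  push(value) {
    this.children.forEach(child => child.receive(value));
  }
  map(f) {
    const child = new Stream();
    child.receive = value => child.push(f(value));
    this.children.push(child); // parent retains child forever
    return child;
  }
  // Without an explicit step like this, dropping your reference to a
  // derived stream is not enough to free it.
  unsubscribe(child) {
    this.children = this.children.filter(c => c !== child);
  }
}
```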
Finally the rubber is hitting the road! It’s awesome to see this coming together in a real live project at long last. Simon and I came up with a plan for moving forward:
I realized this morning about all the important little things I need to figure out now, such as dynamically displaying all the stream data. I settled on the fastest solution I could think of: simple arrow based pan and zoom. I wonder if I should go extreme with it and just make the font of large objects really tiny and you can zoom in to see their properties. That’d work as a placeholder.
I feel myself already wishing to “eject” from a devtool, so I can do cool things, such as when you hit space, it pops open a box where you can search for operations like in Facebook origami. But instead you can type, with autocomplete, various flow names and it’ll suggest flow combinators based on their types, and supply default arguments, and show the output flow as you scroll through the options. Or maybe the options are the output flow!
Last week I met with the Ink and Switch team. It seems like it’s former Heroku people doing research towards a company a la Dark. It was really fun! I just got off a call with pvh from their team. He showed me their Farm software. So cool! He was repeating my whole mission back at me! He calls it “Personal Software”. They are using it at their company - right now it’s a dashboard of a todolist, personal updates, a chat thing. It’s living! Amazing! It makes me think that the day will come soon when personal software is upon us. I can already imagine moving the Future of Coding Slack to a platform like this… Very cool!
Their underlying technology is similar to dat/beaker/scuttlebutt/ipfs. pvh had to explain how the internet works to me, NAT, UDP, WebRTC, etc, for me to follow. It’s built on top of a JSON CRDT so that it causes the least conflicts. It’s a peer-to-peer data model.
Some quick updates:
Haven’t journaled in a while. Moved a lot of this thought to Github Issues which I don’t love to be honest but it works ok.
Feeling like progress is slow. Resisting working on podcasts. Feels like busy work. But I should probably knock it out.
Also resisting working on Dark but it’d probably make me feel great to install one of those and spend 4 hours on it. Maybe tomorrow.
Ok let’s say today is publish Vlad.
Tomorrow is 4 hours Dark.
Maybe Tuesday I can do Turbine 7GUIs. Maybe just keep with Dark if I have momentum.
Wednesday another 4 hours Dark or 7GUIs or prep JE.
Thursday is prep for JE. 4 hours should do it. Maybe find time to start earlier in the week too.
Let’s say no Tudor podcast work this week. No rush there… maybe let’s hold off scheduling next podcasts (Cyrus and Hillel) until Feb, when I’m through publishing Tudor and JE. I’ve got enough on my plate with Dark back in the mix and Tudor’s 3 hour podcast. And all the P4 work! (And the community! And FRC!)
Ok so this week, the priorities are:
Let’s say no FRC, Turbine/P4, community, Tudor (but that just means only if I really want to). Ok now my week feels reasonable. For Dark, I’ll try with whatever feels easiest and keep switching until I’m mostly done with one. Maybe Pharo and GT because Tudor may help me?
Feeling a bit overwhelmed, rushed, frenetic today for some reason. Deep breaths, Steve, deep breaths.
Now that I’m settled on Turbine (modulo the 15 issues I created, and many more to come), I’m beginning to think about what I hope to build with Turbine: p4.
I was getting a bit overwhelmed with all my ideas for p4. I made a big list and asked JE for help figuring out what’s essential and what I can do without for this prototype:
He helped me realize that most of these ideas represent entirely different prototypes! I can’t just combine them all like this. Prototypes explore one new idea at a time. So what’s the idea for p4?
JE hit the nail on the head: p4 is an IDE for Turbine. This is particularly funny because p1 was an IDE for JQuery and p2 was an IDE for VueJS! Those both used blockly but that wouldn’t help with p4 because blocks solve a different problem than what p4 is trying to solve: improve the PX of DCTP, most directly by showing live streams as the user interacts with the output.
On my run a few hours ago, I was thinking about what my data model would be for p4. Then it hit me: Turbine is the data model. (Along with hareactive, io, and jabz.) Those stream data structures are the thing the p4 user is creating.
I could start p4 by building a projectional editor for Turbine/hareactive/io/jabz and all of JS. But can I get away with less?
One way to get away with less is to build a UI for the turbine/hareactive/io/jabz functions/combinators and then have users type regular text js where we need functional expressions (similar to aprt.us and pane).
The other way to do even less is to allow text to be the interaction model for Turbine and simply start with a devtools that shows the streams. If my main conviction is that seeing the streams will help, let’s just do that. What’s great about this idea is that a stream visualizer is a pre-req for any other p4 end-product anyway, so I might as well just publish it as its own thing, and then move on afterwards.
In summary, p4 is now the Turbine devtools project, in the spirit of the CycleJS devtools.
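To make the devtools idea concrete, here’s a hypothetical sketch of its core: a global registry the stream library could report into, so a devtools panel can enumerate every live stream and its value history. The names (`register`, `record`, `snapshot`) are my own invention, not Turbine’s API.

```javascript
// A minimal stream registry a devtools panel could render from.
const registry = new Map();

// Called once when a stream is created.
function register(name) {
  registry.set(name, { values: [] });
}

// Called every time a registered stream emits a value.
function record(name, value) {
  const entry = registry.get(name);
  if (entry) entry.values.push({ value, at: Date.now() });
}

// What the devtools panel would render: each stream's value history.
function snapshot() {
  const out = {};
  for (const [name, entry] of registry) {
    out[name] = entry.values.map(v => v.value);
  }
  return out;
}
```

The interesting design question is whether the library can call `register` automatically with a useful name, given that (as Simon noted) intermediate stream variable names aren’t captured anywhere.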
Wait… because this feels reachable (I can imagine myself achieving it in early 2019 and it existing), it makes me wonder more about next steps…
All valid answers. The thing is, whatever I do, I need a UI library I love, and that library needs a stream visualizer, so build that first. And then we’ll talk.
Before starting the Patreon, I wanted to look around for alternatives. I found that Stripe, Paypal, and Donationbox allow for subscriptions with much lower fees. However, I asked a few people with Patreons and a few of the people who said they’d be my supporters, and both groups said that I should stick with Patreon, mostly for the branding. It explains itself and gives off the right vibes, which is hard to do with a do-it-yourself platform. Patreon it is!
Maybe something like:
Last week was productive in terms of publishing two podcasts and recording a third. I also did a bit of freelance work. I only need to do a dozen hours of freelance at this point, in conjunction with the money from the podcast sponsorship, to make ends meet. Eventually, with more sponsorship and Patreon, maybe I can slowly lower the number of freelance hours towards zero.
I also started my Dec Regroup Projects, including moving my todos to Github Issues / Github Projects. I now have three Github projects:
I’m a bit worried about how this new system will interact with this log. While it felt silly to copy and paste todo lists over and over in this log, it’s also a bummer doing things in my new system doesn’t show up in here. This log is supposed to be a log of all my work on this project, so I’d love to pull in Github issues activity somehow… Now that I say it “out loud” I wonder if I can do that automatically… For now though, I may just copy and paste some things from Github issues here when relevant.
I get really frustrated with Jekyll sometimes. My main issues were:
After many annoying hours I was able to make both of these things work. My main next big issue is a navbar on all my pages of:
Of course new styles and logo would be nice as well, and maybe a commenting system, but it’s not top priority.
And to be honest, starting the Patreon is bigger priority than all of these but I’m a bit scared so I keep procrastinating…
After doing 3 hours of emails yesterday (I was sick towards the end of last week, so I took Thursday off and didn’t do my emails Friday, Sat, or Sun), I spent a few hours messing with Turbine, and then ~7 hours today in it. So fun! Despite being rough around the edges, it feels like there’s really something wonderful here - I’m very impressed with the design. Polishing this is a million times better than having to start from scratch.
To take it for a spin, I’ve been doing the 7GUIs tasks in Turbine, and whenever I get stuck or find a bug, I write it down, and sometimes open an issue. I’m tracking all the progress in this Github issue, which I will copy and paste here:
- `output` and `lift`: jabz? And do I `lift(f, b1, b2)`? Or do I use it as `b1.lift(f, b2)`?
- `Attempt to sample non-replaced placeholder` error.
- I’d like `IO.of` examples to be easier to get at than going to the tests to find them.
- `filterApply(Behavior.of(undefined).map(() => () => true), stream).log()` upon an event from `stream` errors: `predicate.at(...) is not a function`. If you replace `undefined` with `1`, the error goes away.
- `time.log()` does nothing. It should at least error if it’s not going to show you anything, but I might prefer it to show me all the milliseconds.
- `when(Behavior.of(true)).log()` errors `Cannot read property 'Symbol(Symbol.iterator)' of undefined`, but `yield sample(when(Behavior.of(true)).map(x => console.log(x)))` outputs an `OfFuture` value.
- What is the `moment` function (described here)?
- Use `loop` successfully without the `this.source.addListener is not a function` error.
- What’s the difference between `go` and `fgo`? Also, the placement of arguments here is confusing:

```js
const tempView = ({ c, f }) =>
  go(function*() {
```

- When you can do infix dot vs prefix.
- I want a dummy model so I can see the view while I mess with stuff and not get `The generator function never yielded a monad and no monad was specified.` without:

```js
const tempModel = fgo(function* ({ }) {
  yield Now.of(10)
});
```

- Need to call the function created with `performStream` or `performStreamLatest` (I don’t get the difference): `performStreamLatest(book_.map(() => withEffects(() => alert('hi'))()))`
- It’s also quite annoying to always keep the outputs and inputs in sync.
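The higher-order streams problem from that list can be shown in miniature. If we model a stream as a plain array of events (my own simplification, not Hareactive’s API, and it ignores the timing dimension that makes the real problem hard), then a stream-of-streams is a nested array, and the two common ways of collapsing it differ in which inner events survive.

```javascript
// flatten: keep every event from every inner stream, in arrival order.
function flatten(streamOfStreams) {
  return streamOfStreams.flat();
}

// switchLatest: keep only the events of the most recent inner stream.
function switchLatest(streamOfStreams) {
  return streamOfStreams.length === 0
    ? []
    : streamOfStreams[streamOfStreams.length - 1];
}
```

The “buttons that add buttons” problem is exactly a choice between these semantics, applied recursively.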
This week, the focus is on the podcast and my freelance work. I would like to publish my reflection 14 episode as well as edit Katherine’s episode and prep for and record Vlad’s episode. Let’s say the podcast will be Mon, Tues and Weds this week and I’ll do freelance Thursday and Friday.
Next week I want to focus on research, playing with Turbine and other p4 thoughts…
I asked Paul Chiusano about hashing code that’s equivalent but syntactically different, such as 1+x and x+1, and apparently Unison “doesn’t normalize commutative operations.” Some relevant links he sent me are Normalizing and Rice’s Theorem.
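To make the normalization idea concrete, here’s a sketch: before hashing an expression, sort the operands of commutative operators so that `1+x` and `x+1` produce the same canonical form (and hence the same hash). Per Paul, Unison deliberately does not do this; and Rice’s theorem means full semantic equivalence is undecidable, so any normalizer only catches some equivalences. The AST shape here is invented for illustration.

```javascript
// Operators whose argument order doesn't affect the result.
const COMMUTATIVE = new Set(['+', '*']);

// Recursively canonicalize an expression tree: sort the arguments of
// commutative operators into a deterministic order.
function normalize(node) {
  if (typeof node !== 'object') return node; // literal or variable name
  const args = node.args.map(normalize);
  if (COMMUTATIVE.has(node.op)) {
    args.sort((a, b) => JSON.stringify(a).localeCompare(JSON.stringify(b)));
  }
  return { op: node.op, args };
}

const a = { op: '+', args: [1, 'x'] }; // 1 + x
const b = { op: '+', args: ['x', 1] }; // x + 1
// normalize(a) and normalize(b) are identical, so they'd hash the same;
// subtraction, being non-commutative, is left alone.
```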
A way to simplify this problem is to build an FP playground to solve normal FP problems with cyclical streams first and work our way up to UI:
```js
// Shrink each markdown-style header's font size based on its depth
// (the number of leading #s), as one FP-style pipeline over the DOM.
[].slice.call(editorElement.querySelectorAll('*'))
  .map(e => [e, e.innerText])
  .map(([e, text]) => [e, text.match(/^(#+)\s.*$/)])
  .filter(([e, m]) => m)
  .map(([e, m]) => [e, m[1]])
  .forEach(([e, hashes]) => {
    // px units are required or the browser ignores the assignment
    e.style.fontSize = ((50 - 5 * hashes.length) || 3) + "px";
    e.style.marginBottom = 2 + "px";
    e.style.marginTop = 2 + "px";
  });
```
The issue with this approach is that we may create a FP playground that won’t scale up to cyclical UI problems…
I realize that part of why structured editors haven’t been able to compete with text-based coding is:
It’s simply not a fair comparison to expect a new interface to be as fluid as text from Day 1. Of course fluidity is important and of course making it work with people’s existing hardware and skills will help with adoption, but they aren’t the first things to worry about. Maybe the ultimate interface will require new input hardware and/or a lot of practice to get the hang of. Hopefully we can get away with the keyboard and mouse and make the onboarding simple, but we don’t want to pigeonhole ourselves over it.
The initial focus should be on the comprehensibility of the code and the liveness of the experience. Liveness means that an incremental action should result in an incremental result. This is possible without fluidity, which means that “taking the incremental action” may not be ergonomic for some reason.
(However, I will note that fluidity is SUPER important to me. I would love to build an interface that’s only optionally dependent on the mouse.)
If we throw out text-based coding and agree to a structured editor of some kind, you may realize that we can automatically represent colors as a color picker instead of (or in addition to) a hex value or rgb value. Cyrus Omar has work where he embeds a regex playground right into the IDE. (As Tudor Girba says, whenever you leave the IDE, the “I” has failed.)
If you follow this line of thinking, you realize that ALL literals are GUIs! Numbers can be scrubbable or many other interactive representations, booleans are a checkbox thing… We can even nest GUIs! You could have a string as a widget which is basically a text box but you can add other arbitrary expressions inside - no more escaping characters necessary! Lists can be a specialized GUI where you can add and remove expressions or do comprehensions.
We can continue this idea for all kinds of expressions: if-expressions, pattern-matching, lambdas, function application… Can we build nestable custom GUIs for every part of System F/Haskell?! I think so!
The key here is the nestability. Normally a color picker or other GUI is a top-level thing. But why? I don’t see any reason why it can’t be as expression-like as coding.
When you define a function it will default to a basic representation but it should allow you to add a more specific GUI to represent your function!
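Here’s a sketch of the “all literals are GUIs” idea: map each literal in an expression tree to a widget descriptor, recursing so that widgets nest (a list widget contains the widgets of its elements). The widget vocabulary here is invented for illustration.

```javascript
// Map a value/AST node to a (nestable) widget description.
function widgetFor(node) {
  if (typeof node === 'number') return { widget: 'scrubber', value: node };
  if (typeof node === 'boolean') return { widget: 'checkbox', value: node };
  if (typeof node === 'string') return { widget: 'textbox', value: node };
  if (Array.isArray(node)) {
    // Nesting: each element gets its own widget inside the list widget.
    return { widget: 'list', items: node.map(widgetFor) };
  }
  if (node && node.color) return { widget: 'colorPicker', value: node.color };
  return { widget: 'code', value: node }; // fall back to plain code
}
```

A user-defined function could hook into the fallback case to supply its own GUI, which is exactly the “add a more specific GUI to represent your function” idea.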
These thoughts were inspired mostly by Tudor Girba, but also are related to Niko Autio’s Microeditor ideas.
I am curious how to combine this idea with the ideas of Hazel and Josh’s principle of radical visibility of preview evaluation with test data. Same with the hashing stuff. There are so many cool PX ideas to combine together!
Drawing this out will be interesting. And if I can show how we can do any pattern from Haskell, completeness is guaranteed!
In terms of implementation, not sure if HTML or canvas is the way to go. Would be fun to play with Turbine on this project…
Went really well. Notes here. The end-result was that I should either find or build my own Reflex alternative in JS. It’s easier to start from a compile-target, a DSL, than from a visual editor. Once I have this built, I can use it to think about visual abstractions to compile to it. And I can use the framework to build those abstractions in a bootstrappy way.
PhD student in Paris also interested in the malleability of software, but coming from the HCI and political perspective. His work sounds a lot like Dynamicland.
Another person interested in the same goal / themes of software malleability. It’s really fun to chat with people like Kartik and Philip where we agree so much on goals but are coming from different places and see entirely different implementations of these ideas. I got Kartik to agree to record the conversation as an experiment. I think it went well. Apparently 18 people watched it. Maybe if we continue chatting from where we left off, I can grab the audio from the conversations and turn it into a podcast. Or maybe we can do a podcast from scratch at some point.
💭Yesterday I wished for a dream note-taking chrome extension
— Steve Krouse 🇬🇧 (@stevekrouse) November 23, 2018
🎁@dankantor supplied with me the initial code
👨💻I hacked for ~2 hours last night
✨I have a working Chrome extension! https://t.co/51Res4TnlI
This was a really fun project, and it really energized me. In my head, hacking this together was like running up a hill and the prideful energy I felt afterwards was like running down the hill and I still feel like I’m running downhill three days later! I am actually writing this journal entry in Blinknote. It’s so amazing to use tools that you make yourself. So much pride. It definitely encourages working and working with more of a smile. It also allows you to feel less boxed in - you can change things if they don’t work for you.
It makes me even more excited for my vision of programming as allowing others to make their own tools, collaboratively. Working on one’s own tools makes me think of whittling a spatula from a branch with a knife. However I don’t think that’s the right metaphor for my system because I want to highlight the collaborative nature more.
This was a lot more work than I expected, finding a restaurant, getting people to RSVP, setting up payments, emailing people, dealing with dietary restrictions, confirming with restaurant, people’s plans change, etc. One way to keep myself sane during this process is not to judge myself on the outcome but on the process, and even by just the fact that I’m doing it, trying things. My refrain is “full credit for showing up”. I hope it goes well Wednesday!
This is a common theme I’ve seen a lot ever since my visit to Dynamicland. I saw it again in Pharo and wrote up a draft / outline of an essay about it.
I’ve never liked the kit idea. In another incarnation, it’s “everything is a document”. I think the STEPS project had this. It’s always felt messy, ugly, and lame. On one level this is just a surface thing: overlapping windows, lots of gray top navs of windows, nested menus and right-click menus.
However on another level it feels like the messiness is a bit real. For one, it’s usually a dynamically typed system. For another, it always feels less polished and usable than apps. As in, I can’t imagine my mom using it.
This is a big deal. For a while I’ve thought that HTML, CSS, and JS would be ok, as a compile target, but I no longer think so, particularly after Tudor showed me Pharo. He made the point that having everything in “a single render tree” is really key, and I don’t feel I quite understand why but this does feel critical to me intuitively.
Differences include:
Thus my goal-system-I-have-no-name-for (wow, I really should come up with a working title for this… potluck? d1 for dream 1? I guess logichub could be d2?) will have to be built on a new system. Initially this system can be embedded within the web as a web app, but eventually I can imagine it having its own standalone “browser” that would work on various operating systems. Or I guess it could turn into an operating system itself!
Why start as a web app? I have to pick some platform and web is what I know best and I hate installing things and I love my chromebook so web seems like a solid pick.
The key question for all smalltalk-like systems: how do you prevent it from becoming its own universe? For example, Pharo or Lively Web.
My answer is that you start with single-user apps, starting with daily productivity tools, like notes, email, calendar, task management, and then once you have a critical mass, people will slowly spawn more ambitious projects that the community will use, and it will grow from there.
I guess if it works within a webapp, people can use it on the web, so that’s a pretty good story, as compared to Pharo which requires a download. So maybe the answer is that if we build it on the web to start with, it could seem like a website to most people, kinda like Lively Web or Tiddlywiki.
I need to spend some time defining fluidity and structure, but this graph feels provocative:
I think fluidity has to do with liveness, feedback loop speed, incremental actions causing incremental results, etc. And structure has to do with possible / impossible states, types, schemas. I’m stuck on where AST editors fit in this layout: they speed up the feedback loop on errors and prevent invalid states, yet they are not as fluid as text - but then, text doesn’t give you as much info as fast.
One interesting note is that Airtable stands out as having both high structure and high fluidity.
Just as I decided (in the last meeting with JE) to build or find a JS lib for DCTP, I popped onto Twitter to find this:
Stumbled across another *spot-on* post about FRP by @paldepind: "Let's reinvent FRP" https://t.co/Ga7ywHnySI .
— Conal Elliott (@conal) November 23, 2018
Which led me back to Turbine, which I had found a few months back and disregarded as the “wrong kind of FRP” because I mistakenly interpreted their Now monad. I’m now quite excited about this library! I already sent two long emails to the creator, Simon, and hope he responds soon. Here’s what I wrote I’d like to collaborate on:
- Documentation. For example, I had to figure out how `list` worked from reading the source and puzzling together a few examples without explanations. I’d love to help document every function in the API. (Additionally, I believe the code itself could use some comment documentation, but that’s more up to you.)
- Understanding the types of the streams I’m working with would help a lot. Maybe getting TypeScript set up (as I failed to do in the issue above) would help here.
- Being able to “inspect” streams better as a debugging and understanding tool. CycleJS has this wonderful devtool, and there are a number of other really cool stream visualization tools we can draw on for inspiration. At the very least, a better `console.log` story would go a long way. It was really tough to figure out what was going on with my streams.
- Collapsing higher-order streams is really hard, but luckily this picture makes it a LOT easier. It saved my life last night as I was working on my favorite FRP problem of buttons that add buttons that add buttons… but only the odd buttons. Maybe we can build on this picture somehow, or at least incorporate it into the documentation.
- As stated in the issue above, I don’t like the way the model and view are separated. I wonder if it’s possible to combine them like in Reflex and other “original FRP” frameworks.
And after we/I work on these more pressing issues, the next step will be building a layer on top of that Turbine that would make the development experience much better. That is, building a GUI that “compiles” to Turbine, for example, kind of in the spirit of Conal’s Tangible Functional Programming. Here I am referring to more radical ideas than improving the documentation or a simple devtool, such a projectional editor in the spirit of Lamdu, Luna, Isomorf, Dark, Unison, and Hazel.
A fun question on Twitter yesterday encouraged me to think a bit broader as to an underlying theme to my work that would encompass this project, as well as other outside of “improving programming” (namely, LogicHub). I’m proud of what I came up with:
Offloading mental tasks better done by computers to computers, so humans are freed up to think creative thoughts
— Steven Krouse (@stevekrouse) November 21, 2018
Sean McDirmid suggested Haskell for Mac, which is very cool and close to what I want so I installed it, but I wasn’t able to see an easy way to get Reflex to work with it…
Luke Iannini apparently worked on a live recompiler for Haskell.
The monadfix people shocked me when they said:
None of us have been able to achieve the “fluid, live Haskell programming experience” that you’re yearning for, even without “has to work GHCJS” as an additional constraint.
My new thesis is that this fluid Haskell doesn’t really exist - at least for most Haskell developers.
Artyom from monadfix was also very kind and commented on my issue about installing intero. Too bad they can’t help here!
I made a full day’s effort of trying to steelman Haskell/Reflex and make the experience as good as possible before trying to improve it - yet I just have such distaste for installing and debugging shit in the terminal (as well as using the existing Reflex setup I have) that I wonder if I can simply pull on my memory for the key issues…
The main thing was that speed of feedback on all fronts (syntax, types, output) was so slow and required so many keystrokes. In particular there were times that I could point to places in my code where I just wanted to know the type of something but did not know how to ask Haskell for that information.
In other words, “I have some things. I want some other things. What blocks can I use to go from what I have to what I want?” In Scratch, all the “legos are on the floor” to help with this. I find that the lodash JS library also does a superb job of this. I found the Reflex documentation and the Haskell autocomplete tooling to feel like I’m basically guess and checking.
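The “legos on the floor” experience can be sketched as a tiny type-directed search: given a catalog of functions with their input/output types, suggest the ones applicable to the types currently in hand. The type names and catalog entries below are made up for illustration.

```javascript
// A toy function catalog: each entry says what types it consumes
// ("from") and what type it produces ("to").
const catalog = [
  { name: 'length', from: ['String'], to: 'Int' },
  { name: 'show', from: ['Int'], to: 'String' },
  { name: 'add', from: ['Int', 'Int'], to: 'Int' },
];

// "I have some things, I want some other things": return the names of
// functions whose inputs are all available and (optionally) whose
// output matches the wanted type.
function suggest(have, want) {
  return catalog
    .filter(f => f.from.every(t => have.includes(t)))
    .filter(f => want === undefined || f.to === want)
    .map(f => f.name);
}
```

This is roughly what Hoogle does for Haskell by signature; the complaint above is that the editor tooling doesn’t surface it at the point of need.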
Such as converting `Int` or `String` to `Text` and back, or worst of all, collapsing higher-order streams. I was very excited last night to find this extremely helpful video, Real World Reflex, which has a really wonderful slide to help with this:
This includes seeing the “shape” of streams and how streams make up other streams as in rxmarbles.com, as well as watching the live data flow through streams.
This whole thing won’t feel natural without direct or bi-directional manipulation of the output because why not?
Continue working towards a fluid experience in Haskell/Reflex.
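The stream-shape idea above can be sketched as a tiny ascii “marble diagram” renderer in the spirit of rxmarbles.com: given a stream’s events as `{time, value}` pairs, draw them on a timeline so you can see the stream’s shape at a glance. This is my own toy, not any library’s API.

```javascript
// Render a stream's events on a fixed-width ascii timeline:
// '-' is an empty tick, anything else is an event's value.
function marbles(events, width = 20) {
  const line = Array(width).fill('-');
  for (const { time, value } of events) {
    if (time < width) line[time] = String(value);
  }
  return line.join('');
}
```

For example, `marbles([{time: 2, value: 'a'}, {time: 7, value: 'b'}])` renders as `--a----b------------`; printing one such line per stream gives a crude live marble diagram in the console.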
I’m going to spend the next ~30 min doing the sketching discussed above in preparation for my meeting in a few hours with JE.
Today I tried (and failed) to do the todo-item from yesterday of:
watch videos of expert Haskellers on Twitch and try to copy their setup
I've heard rumors of a fluid, live Haskell programming experience. Has anyone captured such a chimera on video?
— Steven Krouse (@stevekrouse) November 20, 2018
First I googled around, trying to find videos of this. Nothing came up that easily after 30ish minutes of looking. Haven’t gotten anything on Twitter either.
I then re-discovered intero which seems to promise 90% of what I was looking for, such as typeahead suggestions, jump to definition, type of selection, and type errors as you type in the editor with underlines.
I spent 1.5 hours trying to get this to work, and failed. I had lunch and came back with the idea to record it. It took me another hour to fail and produced this issue.
I emailed Chris Done (creator of intero) and monadfix, asking if I could pay them to help me set this up.
(I put an asterisk (*) next to the items that I can do tomorrow.)
I woke early for an Alexander lesson this morning, which was a big mistake as I am not nearly over my jetlag. I went back to bed after getting back and thus didn’t get much work done today. Maybe 3 hours of reading, as discussed below, and random Inbox tasks I get done after writing this for an hour or two.
(I wrote this section yesterday and the shower note before that.)
Reflex is the closest thing to the way I think UI should be written and yet it’s so crappy to use. I think that fixing the worst parts of this is a great first step. Of course there are many other things to improve such as direct-manipulation, visual metaphors, etc, but let’s start with the most egregious places first.
Can I turn off optimizing flags in compiler, use a linter or code augmenter, haskell-id (or whatever thing that auto-checks code), sourcegraph? Maybe watching Haskell developers on Twitch would give me a better sense of the state-of-the-art workflow.
CycleJS or rolling my own thing. Maybe with JS linter to only allow consts and no object modifications (will be tricky to get cycles this way…).
Build an interpreter with ability to swap nodes as running (like Scratch, Smalltalk)
Today after my jetlag sleepiness I read another few chapters of Stephen Diehl’s book Write You a Haskell, including typing, evaluation, type inference, and the higher-level design of a “ProtoHaskell”. It seems like a tractable problem, mostly an engineering problem given that it’s mostly “solved”. I even found two already on Github here and here, the first of which comes with an online REPL! It would be interesting to test the speeds of these REPLs against each other and GHC.
Now that I know a bit more about how Haskell works under the hood, I wonder how I would implement the features that I think would make a better experience:
Talking with Cyrus Omar, the Lamdu team, Paul Chuisano, Stephen Diehl will be key in this process… I feel very lucky to be able to confer with the world experts on these topics!
I can feel myself being pulled into “engineering mode” already and it’s too early. I don’t have my eye fixed on an experience target well enough to get dragged into trying to achieve it. For my work tomorrow, I shall draw out, in tedious frame-by-frame detail, what a simple experience with this “ideal Haskell experience” would look like. I wonder if I could make a simple stop-motion video out of it… The features I think are most important:
For contrast, I think maybe I should start by trying to accomplish a few UI tasks in Reflex and record them and then make a list of all the pieces of data I wanted to know or actions I wanted to be able to take faster.
I normally don’t like to travel but this trip was possibly the best thing I’ve had to travel for in my life. I’m very excited to do more things like this, maybe a couple times per year.
I’ve been referring to this period of my life as my “Twitter Friends Phase” because I am making so many friends and then I also get to see them in person. Just in last week in Boston alone, here are all the amazing people I got to spend time with:
Will Chriton, Joel, Jonathan Edwards, Ravi, Brian, Cyrus, Glen, Sean McDirmid, Paul Chuisano, Geoffrey Litt, John Maloney, Charles Roberts, Chris Granger, Eyal Lotem and Yair Chuchem from Lamdu, Roben Kleene, Daniel Moon, the ReScala team from Technische Universität Darmstadt (Ragnar, Pascal), Josh Horowitz, Roly Perera, Caleb Helbling, Evan Czaplicki
Sadly I only have one photo from the event. I should do better next time!
I have trouble with lectures. I’d much prefer sitting at home in my sweatpants, listening at 2x speed and bouncing if it’s not for me right now. However, LIVE 2018 was one of the best days ever, not in spite of but because of the fact that it was a day full of mind-blowing demos, one after another.
I recorded them on my phone and uploaded them online to a lot of thank-yous. The Bootleg page had a fun run, including Jeremy Ashkenas’s transcript, which landed on the front page of HN for the day (not when I posted it, but when he posted it a few hours later).
I really have two very interesting directions to go in: making FRP experience better, or expanding the FRP universe to multi-node. We choose FRP experience for now, but I’m thinking about the other thing on the side.
(Tracked at https://github.com/futureofcoding/futureofcoding.org/issues/86)
The goal is to submit this work to <Programming> in Feb and a good title for the PX workshop (says JE) is “FRP eXperience”.
There are a few different levels here:
But then I thought: let’s focus on the really key issue here. What’s the highest-leverage improvement to be made? What’s the worst part of using Reflex/Haskell now? In the shower note on the right below, it says:
JE send me an email:
Can you implement your FRP eXperience using FRP, and apply it to itself? That might be challenging, but would also be very impressive. It will be seen as a limitation if it can’t apply to itself, although that might be unavoidable in the first phase. But I think that at the very least you need to implement it in some FRP framework to avoid the charge of hypocrisy.
My response: The only implementation of the FRP I argue for in my paper is Reflex/ghcjs. It doesn’t exist in PureScript, Elm, F#, etc. Turns out laziness makes it a lot easier to implement. My next experiment is to try to do it in JS with CycleJS. I’ve tried this in the past. It’s not easy – it’s not built for this use case – but it seems possible, even cycles. More to follow…
Some other ideas for building this:
(Tracked at https://github.com/futureofcoding/futureofcoding.org/issues/85)
Behavior so that we can change this if we want to revoke access.

Now that I’m clear that I am building a system, and not just producing research, I am thinking about a name. Here are some themes it could embody:
And here are my favorite names of the moment:
And here’s what Twitter thinks:
Preferences on potential new programming language name..?
— Steven Krouse (@stevekrouse) November 11, 2018
I got this project started a year ago as fast as possible, and it seems like now-ish is a good time to regroup a bit and work on a few upgrades… Maybe I’ll set aside December to do some of these things…
It seems like there’s interest in hosting/attending these in lots of cities, and with a bit of effort (a website with guidelines) we can have them in a bunch of cities. I can do London and maybe New York, Caleb Helbing has been talking about starting for Boston, Amjad Masad (of Repl.it) says he can take SF…
Yesterday’s practice was pretty solid! I’m not sure if I’ll have time to review it and do a better one today…
This morning I poked around on the internet for tutorials related to creating “toy Haskells”, particularly interpreted and not compiled. This is related to wanting to create an FP language with a much livelier feel, aka prototype 4, sometimes referred to as viv, potluck, stride, etc. I came across Stephen Diehl’s many writings on these topics and saw on his Twitter that he went to the London Haskell meetup last night, so I messaged him and we had lunch a few hours later! I love the internet.
He claimed that in order to design a functional language you need to settle on three things:
The code is parsed to the AST, the AST is reduced to something that resembles the core calculus (you can see this in GHC with the -ddump-simpl flag), and then the evaluation semantics determine how things are evaluated.
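That pipeline can be sketched in a few lines. This is my own toy version, not Stephen’s code, and it skips the parser entirely: a tiny core calculus (untyped lambda calculus plus integer literals) and call-by-value evaluation semantics with an environment.

```haskell
-- Core calculus: what the surface syntax desugars to.
data Core
  = Var String
  | Lam String Core
  | App Core Core
  | Lit Int
  deriving (Show, Eq)

-- Values: integers and closures (a lambda packaged with its environment).
data Value
  = VInt Int
  | VClosure String Core [(String, Value)]
  deriving (Show, Eq)

-- Evaluation semantics: call-by-value, environment-passing.
eval :: [(String, Value)] -> Core -> Value
eval _   (Lit n)   = VInt n
eval env (Var x)   = maybe (error ("unbound variable: " ++ x)) id (lookup x env)
eval env (Lam x b) = VClosure x b env
eval env (App f a) =
  case eval env f of
    VClosure x body cenv -> eval ((x, eval env a) : cenv) body
    VInt _               -> error "applied a non-function"
```

For example, eval [] (App (Lam "x" (Var "x")) (Lit 7)) reduces to VInt 7.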
Dinner Thursday night with some folks. Message me if you want to join!
I found edsu.org via this HN post. Seems cool. Similar to a lot of other projects floating around the ether these days. The comments are full of them. Just wanted to jot it down here.
The plan is to practice the talk once a day until the conference and review it on video and take notes on how to improve each time. Here are the videos I’ve done so far. They get better so you’d probably want to just go to the last one:
https://www.useloom.com/share/340140803ec24931b0bfb0e0f4d260e3
https://www.useloom.com/share/d50ae47bb6f44056b3a5231406598643
https://www.useloom.com/share/c0db356f8193400c92441410e4005686
https://www.useloom.com/share/bc107f1d6e7249208e5152b909e9a659
https://www.useloom.com/share/b636f469dcea42ce946b25dd5c0a75d3
I spent ~2 hours yesterday morning cleaning up my .git repo for this website. It’s only fair considering how I abuse this repo so, but it was still quite frustrating. The .git repo was over 200MB! Eventually I learned that the issue was all the large .mp4 and .pdf files I added a while ago and have since removed, but which still lurked in the git history. So I removed them, and then accidentally pushed those changes to GitHub. Now I’ll have to eventually fix those broken links.
Conal’s talk from two days ago pointed me to The Next 700 Programming Languages by Peter Landin. It felt like a very modern essay, not one written in 1965! The beginning didn’t speak to me, but sections 8 and beyond really did. He coins the term “denotative language” to replace the “functional” vs “imperative” debate: a language with nested subexpressions, where each expression denotes something, and its value depends only on the values of its subexpressions. This is a useful definition!
At the end of this article there’s a discussion section where the author and other famous computer scientists discuss these issues, debating what is a declarative language. It’s really amazing to see!
I want a place to consolidate all my work on this, so I’ve started a github issue for it
user :: (public key, private key)
sign :: User -> Stream a -> UserStream a
the stream. We lift a Stream to a Stream (Stream) of some sort (to account for when we get the stream as well as when it happens), and in this lifting we can optionally supply a User parameter. Pieces of state themselves will decide if and how they will allow themselves to be lifted, and to which users.

Yesterday I resolved to do more research (roughly two days per week, leaving three days for emails, podcast, and freelancing), and I had a good start yesterday with a half-day of solid research!
A few scattered notes from earlier this month that are semi-interesting:
Conal Elliott is the man. My god, I’m so excited to be able to grok a part of what he has to say! Yesterday I re-watched Denotational Design: from meanings to programs and was re-blown away. It’s funny: whenever I recommend Conal to friends I have to caveat it with, “I need to watch his stuff multiple times before I get it.” This was true of this talk as well.
So here’s the denotational methodology as far as I can tell:
So I tried to follow it in the shower yesterday and I think it went pretty well!
I had a few key insights:
Event a on one computer to something like Event (Event User a), where the outer event represents this computer’s perception of the event happening and the inner event represents when the event actually happened.

I had another insight, but more about prototype 4:
Getting set up in London has taken a lot more effort than anticipated (wifi, broadband, furniture, food, new friends, transit, etc, etc), but I’m settled enough this week to have time for focused work.
JE suggested that I record my practice talk (slides here), which I did here two weeks ago, and send it around to a few friends (Glen Chiacchieri, Geoffrey Litt, Ivan Reese, and Joshua Horowitz) and ask that they all meet online to give me feedback, writers’ workshop style. We are meeting today at 6pm London time. I’m excited to see how it goes! I imagine I’ll have another few hours to hone the talk this week afterwards. I’d like to have a clean recorded version I can share around online.
Last week I attended a two-day conference in Bertinoro, Italy, after learning of it from Tomas Petricek on Twitter. I traveled on Monday and Thursday, and the conference was on Tuesday and Wednesday. Almost every time I travel, I consider the trip not to have been worth it. This trip, however, was almost worth it. I definitely did learn some things and gained some new perspectives on the history and philosophical origins of my field, but I regret that I could’ve learned and done more from home on the internet, in a book, or working, rather than listening to random talks at 1x speed. I fear I will have a similar reaction to SPLASH, except that it’s 3x the cost of a cheap jaunt to Italy from London.
Ivan Reese brought it to my attention that my life setup wasn’t entirely clear to him from my podcast and notes, which I’d like to correct on the next reflection update episode. The way I’ve been explaining it to people IRL is that I have three things going on: freelance software work, my podcast, and my research. My research is the aim, yet it’s also what I have recently found the least time for. My freelance work is for a venture capital firm, connecting their SaaS services together; it’s how I make money. My podcast is something I do for fun, but it recently got sponsorship - it’s not enough for me to stop freelancing, but Amjad (my sponsor) seems to think we may be able to get it there eventually with some more growth.
Here’s the time I have this week:
Today - 4ish hours
Tues - 7ish hours
Wednesday - 2ish hours (busy with Alexander Technique)
Thursday - 7ish hours
Friday - 6ish hours (meeting Nadia Eghbal)
In terms of freelancing, I can get away with not working much this week.
In terms of the podcast, I merely must edit and release Quinn Slack’s episode in the next day or two. (3 hours)
This leaves me a lot of time for my research, which has been much neglected the past few weeks! Very exciting.
While my intention was to continue my research in the direction of prototype 4, I have been tantalized by the problem of how to extend the FRP abstraction to “the backend”. In other words, apps with data stored somewhere “in the cloud”.
To extend the simple counter application, simply make it a multi-computer counter that aggregates all the counts to a button across all computers. As is often the case, once put in those words, the problem doesn’t seem so hard. One approach is to have a “lift” operator that would allow us to transform a button’s Event () into a MultiWindow [Event ()], upon which we could do various operations, such as merging all the event streams and counting the occurrences.
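As a toy model of that lift-then-merge-then-count idea (MultiWindow, mergeAll, and globalCount are hypothetical names of mine, not a real FRP library): suppose an Event is just a time-ordered list of occurrences, and a MultiWindow is one such stream per connected computer.

```haskell
import Data.List (sortOn)

-- Illustrative stand-ins, not a real FRP implementation:
-- an Event is a time-ordered list of occurrences, and a MultiWindow is
-- one Event per computer (the result of the hypothetical "lift").
type Time = Double
type Event a = [(Time, a)]
type MultiWindow a = [Event a]

-- Merge every computer's stream into one global, time-ordered stream.
mergeAll :: MultiWindow a -> Event a
mergeAll = sortOn fst . concat

-- The multi-computer counter: total button clicks across all computers.
globalCount :: MultiWindow () -> Int
globalCount = length . mergeAll
```

So two computers reporting clicks at times 0.1 and 0.5, and a third at 0.3, merge into one stream whose count is 3.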
Or consider the much more complicated problem of a realtime multiplayer game like agar.io or slither.io. As I’ve learned the hard way, you cannot simply send the x- and y- positions of each player to each other player and update position accordingly. You must instead anticipate where each player is going to be based on their current position and velocity and project them there until you receive the next word on their position and velocity and then you can subtly nudge that player to where they actually are, and are headed.
How could I model such an arrangement without mentioning low-level details such as sockets? I think the concept of perspective is relevant here. From a single player’s perspective, the x- and y-position and velocity are all FRP Behaviors, defined at all points in time because their computation happens too quickly to notice. However, those values for other players must be modeled as FRP Events, because we only get glimpses of them at discrete points in time. Yet we must display other players’ positions on screen as continuous behaviors, so we must use the complicated prediction-and-nudging logic to construct a Behavior for them out of our Event of them.
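A naive sketch of that Event-to-Behavior construction, with my own toy representations (and skipping the “subtle nudge” smoothing a real game needs): dead-reckon the other player’s position at any time from the most recent sample we’ve received.

```haskell
-- Toy stand-ins for FRP types: a Behavior is a function of time, and our
-- knowledge of another player is a time-ordered list of discrete samples.
type Time = Double
type Behavior a = Time -> a

data Sample = Sample
  { sampleTime :: Time
  , samplePos  :: (Double, Double)  -- x, y
  , sampleVel  :: (Double, Double)  -- vx, vy
  }

-- Dead reckoning: extrapolate the player's position at any time t from
-- the latest sample at or before t. (A real game would also nudge
-- smoothly toward each new sample rather than jumping.)
otherPlayerPos :: [Sample] -> Behavior (Double, Double)
otherPlayerPos samples t =
  case takeWhile ((<= t) . sampleTime) samples of
    [] -> (0, 0)  -- no information received yet
    ss -> let Sample t0 (x, y) (vx, vy) = last ss
              dt = t - t0
          in (x + vx * dt, y + vy * dt)
```

For example, a sample at time 0 at the origin with velocity (1, 2) extrapolates to position (2, 4) at time 2.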
My thinking above is led by my intuition of FRP, types, and what the abstractions allow. However, I feel a desperate need for a proper medium to help shape my thoughts to only go in proper ways. Potentially Haskell or PureScript could be such a medium. I am considering watching/reading Conal’s advice on denotational design again. Maybe I need to come up with mathematical objects to model what I mean by a “multi-window” or “multi-user” application or event stream. These are the questions on my mind when I pick up this stream of thoughts…
My girlfriend and I moved to London this past Monday. I haven’t done any proper work since then; I’ve been so busy getting our home set up. The trickiest bit was the internet, both wifi and mobile, but I solved it for now with a combination of a mobile wifi hotspot and Google Project Fi on a Google Pixel 2 XL. The jetlag is also annoying. I am hopeful that I’ll get back to a real work cadence the week after next, with some work done next week.
I’ve been (very much in the back of my mind) developing a new framing for my work, which I (somewhat successfully, for the first time last night) explained at parties to lay people. (Halfway down this partly-true account, I delve into more technical details than I would with a lay person.)
Me: I am working towards a world where people can modify the apps they use while they are using them. The dream is to abolish the “settings menu” of each app, because you can literally modify anything about it. For example, what’s an app you use every day - email? Could you imagine any ways you’d want to change it to suit your workflow better?
Party Person: I can’t think of anything… I’ve never thought of it before.
Me: Exactly! When you can’t change anything, it doesn’t occur to you how it could be changed for the better. Do you ever question changing the speed of gravity? That’s the first step: enfranchisement, empowerment. Once you realize that you can change things, you won’t be able to stop yourself from coming up with ideas for improvements. For example, I would like my email to remind me in 3 days when someone I emailed doesn’t email me back, but only when my phone’s location is at the office, because I don’t want to be bothered at home. That’s custom logic that’s impossible today for no good reason.
PP: That’s neat… but many people can barely even use technology as complicated as it already is. Who would really want to change things?
Me: That’s a great point. I don’t think my grandparents would ever customize anything. Probably not my parents either. However, if customizing software became as easy as I hope it can be, I can imagine a world where my father, a businessman, would hire a tech firm to customize much of his firm’s software to their needs, like they now hire a consulting firm to customize their Salesforce for them. But that’s all down the line…
Realistically, my main users will be other programmers like myself. You’d think that programmers today could customize the apps they use, but not at all. Even if Gmail open-sourced itself today, it would still be impossible for me to change any part of it. The codebase is simply too large and the coding style too unwieldy for a single person to comprehend and change. However, with the new programming language I envision, it should take me a reasonable amount of time (a couple of hours, depending on the task) to customize the software I use in the course of using it – similar to using a settings menu, but on a bigger scale.
This new language would unleash the creativity of millions of programmers to improve the apps they use all the time! In the past, open-source software only worked for developer-facing projects like operating systems and programming languages, but with this new language, I think we could build open-source versions of BETTER quality than company-created apps. We have more people and time at our disposal than any company! If this sounds crazy, people thought that a regular-person-created encyclopedia was crazy, yet all the world needed to make an encyclopedia better than any private company’s was the platform of wiki software. I believe a similarly democratizing platform could exist for software itself. Do I sound crazy?
PP: Um, a bit < … nervous laughter … > But don’t all those apps have proprietary licenses and content deals in place?
Me: Yeah, I don’t know how we’d get around licensing deals, such as the ones Netflix has. But as far as other proprietary stuff goes, we could rebuild it in this new language, such as rebuilding social networks into open, federated protocols like Mastodon. (I do worry that Facebook is so big and high-quality already that it’ll be difficult to compete with it, but I hold out hope!) It was the same when open source started. At first, all the offerings were made by large companies, and it was assumed that was how it had to be. Then Linux and git (distributed version control) changed everything. I think making software customization 100x easier would cause an even more disruptive shift in the way software is consumed and produced. If software is eating the world, I want individuals, not massive organizations, to be doing the eating.
PP: Wow. I guess that makes sense… But it’s also nice to have things uniform, such as a back button in the same place in all my apps, and all my connected Google services.
Me: That’s a good point. There’s a lot to be had from centralization as well (until Google shuts down or ruins your favorite service). There are always trade-offs in these things. It’s hard to imagine the solutions to those problems now – like it would’ve been impossible to imagine Two-Factor Authentication before the web was created – but I believe we can make a decentralized world as convenient, or even more so, than a centralized one because everyone will be empowered to improve things that they think need improving. Maybe paid services would pop up that would help transport your data between apps.
PP: When you say “decentralized”, do you mean “on the blockchain”?
Me: <… laughs and shakes my head …> Honestly I have no idea. It’s a complex question: who has control of what. Another essential problem: how to manage millions of slightly different versions of the same piece of highly customized software. That alone is an incredibly difficult research question! I’m currently just working on a piece of a piece of a piece of this puzzle.
PP: What piece are you working on now?
Me: I’m working on the visual side of things, building an interface that’s similar to a WYSIWYG editor like Squarespace or Weebly or Microsoft Word, but that can create any arbitrarily complex interface that code could create.
PP: Oh, that would be useful.
Me: Yep. It’s the holy grail. People have been working on this for decades. My old boss Lloyd Tabb worked on it at Netscape in the 90s. Back then he was as optimistic as I am, but after beating his head against this problem for years, he believes it’s impossible. I’m still optimistic. There has been a ton of amazing research in the decades since, particularly in the fields of functional and functional reactive programming.
PP: Do you really think someone who doesn’t know how to code would ever be able to make any significant change to the apps they use?
Me: I really do. For three years I taught coding to people of all ages and skill levels, but mostly beginner children aged 8-13 years old. I learned that the right programming environment makes all the difference. We used MIT Scratch which allowed many students who had never coded before to finish a full whack-a-mole style game in a single class without much of our help! I believe that with the right programming environment and proper motivation, anyone that can read can learn to code. I am hoping to solve both problems with a single solution: if we build a programming environment that allows people to customize the apps they use, we will have also provided them with the motivation to learn how to do it.
PP: So is this even possible?
Me: I intend to spend my career finding out.
PP: Well, good luck with all that…. I see my friend. Great meeting you.
I’ve had a lot of trouble getting this log entry out. I just got backed up with stuff and then it got harder and harder. I will try in the future to spurt out shorter updates because this was overwhelming and no fun.
Names are hard but names matter:
What field are these people of?
— Steven Krouse (@stevekrouse) September 15, 2018
Vannevar Bush (Memex), J. C. R. Licklider (ARPANET), Ivan Sutherland (GUI), Douglas Engelbart (computer mouse, “the mother of all demos”), Seymour Papert (LOGO), Alan Kay (OOP, desktop metaphor), Ted Nelson (hypertext), Mitch Resnick (Scratch)
Agreed that a name doesn't *really* matter, but as @geoffreylitt says, it's nice to be able to have a phrase instead of handwaving with "you know, the topics that we're interested, related to Alan Kay, Bret Victor, Engelbart..."
— Steven Krouse (@stevekrouse) September 17, 2018
Best book I read this year. So amazing. What an optimistic vision for the future. The opening of the book has a quote from one of my dad’s favorite philosophers, Spinoza, as well as one of mine, David Deutsch. I highly recommend this book to anyone anywhere, but especially people who are anxious about the future. In short, I would vote for Steven Pinker for president (of anything) after reading this.
Reading this also gave me hope for a LogicHub platform for ideas. In the past, I figured we’d “teach” people to be logical and then let them loose to do their crowd-source-y thing, but after reading this book I wonder if we would want to make it more like a betting market in some ways. What’s neat about a betting market is that:
What’s not great about a betting market is its “resulting” (a term from Thinking in Bets): the decision-making process is opaque, so we can’t share or improve on each other’s thoughts. It’d be great if we could somehow create a place where you could show your decision-making process as well as take a stand for your conviction, and then eventually get the benefit (or not) when time shows “who wins.”
Turns out many of our heroes in this “field with no name” did LSD and it got me pumped:
🤯 Engelbart did LSD! To aid in problem solving!! (from “How to Change Your Mind”)
— Steven Krouse (@stevekrouse) September 18, 2018
Me finding Kartik’s work is a success story for our wonderful internet research community. In the Future of Programming Slack, Stefan Lesser tried to galvanize us around RSS feeds, then Nikolas Martens posted his list, in which Daniel Garcia then found Kartik’s work! Really a team effort.
I am in LOVE with his /about page and want to use it as a jumping off point for a new draft of my own /about page:
I’m working on ways to better convey the global structure of programs. The goal: use an open-source tool, get an idea for a simple tweak, fork the repo, orient yourself, and make the change you visualized – all in a single afternoon. Understanding a strange codebase is hard; I can’t change that, but I think I can make it easier for people to persevere. I want to carve steps into the wall of the learning process. I want to replace quantum leaps of understanding after weeks of effort with an hour of achievement for an honest hour (or three) of effort.
This focus on helping outsiders comprehend a project is unconventional. I’m less concerned about the readability of a codebase. I find the usual rhetoric around ‘readability’ tends to focus on helping authors merge contributions rather than helping others understand and appreciate their efforts. If you’ve ever seen an open source project whose CONTRIBUTING document consists of a nit-picky list of formatting rules and procedures for submitting patches, you know what I mean. There’s a paucity of guidance earlier in the pipeline, when newcomers aren’t thinking about sending a patch, just trying to understand the sea of abstractions, to keep their heads above water. I think improving this guidance might greatly increase the amount of citizen involvement in open source, the number of eyeballs reviewing code, rather than simply using projects and treating their internals as externalities until the next serious security vulnerability. Our society is more anti-fragile when there’s greater grassroots oversight of the software that is eating our world.
Everyone doesn’t have to understand every line of code that helps manage their lives, but all software should reward curiosity.
Surprisingly and also excitingly, I think Kartik and I disagree on a number of points on how to reach this shared goal. I’m excited to read some of his posts soon and figure out why. Here are some I want to start with (some were recommended by him and not written by him) but I imagine I’ll read ‘em all shortly:
For Dark, I did a deep dive on the spreadsheet paradigm. I want to write more about this but don’t have the time at the moment. The thoughts are in this private google doc I can hopefully make public soon
One fun picture (will find this later) is that Forms/3 beat APL at its own game (matrices) - I think they did a user study to “prove” this.
Turns out my seemingly-novel ideas for improving spreadsheets were in Lotus Improv in 1990 https://t.co/GF8B9l1sy0
— Steven Krouse (@stevekrouse) September 11, 2018
The NYC Future of Programming meetup went out with a bang. (It’s likely not the last one forever, but it’s the last before I move to London so will have to wait till I visit or someone takes it upon themselves to continue.)
Jason Brennan talked about Beach which was amazing. I find myself inspired by his ideas, including the infinite canvas for a programming language.
Then, after some technical (and climate) difficulties, Corey spoke about his experiences from Eve. He mentioned how raising VC money probably wasn’t the right move. Now he’s finishing his degree, teaching coding with robots, and building his language/environment (called Mech) to support that. Mech is quite similar to Eve in many respects, which is quite exciting.
Josh Horowitz (from Dynamicland) talked about Pane, his functional programming node-and-wire prototype that he’s presenting at SPLASH. One of the key ideas is that he’s reversed it: the nodes are data and the wires are functions. This makes a lot of sense when you are trying to show the data. We met up afterwards to chat for ~2 hours about these ideas - hopefully we can meet up again this week to continue. My current thoughts are very much in this direction.
I got an email during the Future of Programming meetup from Amjad from Replit, offering to sponsor the podcast! This is a huge deal to me, even from just a “stamp of approval” perspective. But also the money will help pay for transcripts for episodes which people have been asking about, and other various upgrades. Big win!
(I also want to be careful about this, so if anyone isn’t down with this or how I mention them in the beginning of episodes, I am all ears.)
I am sorry that this will likely make Bret uncomfortable, but I hope he’ll realize that it’s in the best possible spirit of love, respect, and admiration for his ideas.
I think I'm gonna do it... pic.twitter.com/9wKgw8b7g8
— Steven Krouse (@stevekrouse) September 19, 2018
I have been doing a poor job of moving my shower notes into a digital form of any sort over the past few months. I figured I’d put em all here in case anyone’s curious. You never know - sometimes I get those “in your shower notes from last year you said…” emails.
The future work section of my paper talks about visual metaphors for FRP. While I do think this is quite important in order to “democratize visual intuition” (Penrose), I wonder if it’s necessary. What if we take normal FRP Haskell-ish notation as a starting place and simply augment it with LIVE-ness, such as an interpreted environment, showing data, and evaluating as far as possible even when there are holes (Cyrus Omar’s work)? Geoff Litt and Paul Chiusano suggested that expressions with holes are just functions with those holes as parameters, and we could put a slider or example values in there. Always concretions, never pure abstractions.
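Their suggestion can be sketched in a few lines (the names here are mine, purely illustrative): an expression containing a hole is just a function abstracted over that hole, so the environment can keep evaluating by plugging in slider or example values.

```haskell
-- "map (+ ?amount) [1, 2, 3]" has a hole, ?amount. Treating the hole as
-- a parameter turns the incomplete expression into a runnable function:
withHole :: Int -> [Int]
withHole amount = map (+ amount) [1, 2, 3]

-- The editor can then show concrete outputs for a slider of example
-- values instead of refusing to evaluate: always concretions.
examples :: [(Int, [Int])]
examples = [ (a, withHole a) | a <- [0, 1, 10] ]
```

So instead of a stuck expression, the live environment could render withHole 0, withHole 1, and withHole 10 side by side.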
Here’s another idea I’ve been toying with: Jason Brennan’s notion of a programming environment on an endless canvas, like in Sketch or Photoshop. One thing this could enable is a structure-less structured editor, in that you could put together pieces of expressions in various places and combine them later. I guess this would be similar to Scratch or Blockly… which I don’t love… One possibility is to use the layer metaphor from Photoshop as a programmatic abstraction, but I don’t know what that would mean exactly yet.
My first thought was to build this as an FP thing first and build my way up to FRP. It could be the data slice-and-dice ninja thing, starting with JSON/CSV slicing, dicing, and joining – this is related to datafun.
But then I went ahead and did some drawings for it as an FRP platform:
Very exciting, getting my first paper accepted. Got a bunch of feedback to read and then incorporate. Will read it this afternoon, take notes, put them here, and then meet with Jonathan Edwards about it as well.
(This is related to the new /about page Kartik has inspired.) Software is never really started, only incremented, added to, improved upon. Thus the key is to shoot for customize-ability, modify-ability, change-ability, mold-ability - I want to evoke clay, play-doh, etc. This would truly catalyze the computer revolution because regular people would be able to modify the software they use. (The killer app for this language/platform could be a build-your-own-email-app thingy. I can imagine a world where top-level execs hire college grads to help them customize their email app workflow.)
In order to achieve this, we need the two pillars of comprehensibility (understand what’s up quickly) and composability (plug and play with existing pieces).
Functional programming is great, but we need solid abstractions to fully liberate it from the von Neumann architecture (Out of the Tarpit, Conal). For example, Reflex > Redux (my paper), but we need to make FP and FRP usable with LIVE and other visualizations.
Currently, I’m working on drawing out various prototypes. I want to stay in this phase for a while, making sure I know what I want to build, getting feedback on it from Jonathan and other smarties. Next step is to scope it down to a reasonable prototype and to go code.
I was sent this paper by Jonathan Edwards for feedback to the authors. It was recently accepted into LIVE. It was right up my alley. It’s the notion of taking the DAG visualization of a FRP app, but making it editable at the visualization level, similar to a node-and-wire environment, such as Facebook Origami. It was a fun read, and I gave a few notes to the authors.
There are a few things Conal said that have really stuck with me, so I took the time to transcribe them and put them here. But I’ll paste them below as well because they are just so wonderful:
It’s important to me that you cannot look in a behavior and say, “when does it change (change in the sense of events)?” Because in order to answer that question, one would need a more complex semantic model. Now, many or most so-called FRP systems I see out there do have some notion of “when does something change.” And every one of those breaks the semantic model.
It’s like you have arithmetic, right? So FRP is arithmetic for time-varying quantities of all sorts of types. By arithmetic, I mean some algebra, some set of operators that have nice laws, nice properties. And imagine if you thought of arithmetic as being about compositional structure, or about the expression that you evaluated.
If you added 3 and 4, can you tell the difference between that and what you get by adding 2 and 5? It’s very important to the laws of arithmetic that you cannot tell the difference. If you could tell the difference, then what you would have would not be a type of numbers with a nice set of operations. It would be something more like a type of trees or something like that. And there would be no interesting equational properties. And you’d have something very complicated. And you’d have to talk about your API instead of talking about… You wouldn’t be doing math, you’d be doing data structure manipulation.
For instance, every time you hear somebody talk about FRP in terms of graphs or propagation or something like that, they are missing the point entirely. Because they are talking about some sort of operational, mechanistic model, not about what it means.
And what I see happen over and over is not only do people generate a complex model, but they don’t know it’s complex because they haven’t looked at it precisely. And they thwart most attempts to do nontrivial optimizations because they’ve exposed too much information. So I want to make it as abstract and useful as possible, so it’s simple for somebody to think about, and I can do radically different sorts of optimization experiments.
Of course, this doesn’t mean that a “debugging engine” can’t break these abstractions to keep track of these things so as to aid you in understanding. The key is that the programmer cannot break this abstraction at the semantic level, so we can do creative optimizations, etc. In other words, I can write 3+4 = 2+5 and we can talk about each of those four numbers independently. They don’t immediately collapse to 7=7 the instant I write them down.
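Conal’s arithmetic analogy is easy to make concrete (my own toy example, not his): in a denotative setting, expressions with the same meaning are interchangeable, and no law-abiding operation in the language can distinguish them.

```haskell
-- Two syntactically different expressions with the same denotation.
-- Nothing in the language can tell them apart, which is exactly what
-- licenses equational reasoning and aggressive optimization.
lhs, rhs :: Int
lhs = 3 + 4
rhs = 2 + 5

-- The same idea for time-varying values: if a Behavior just *means* a
-- function of time, two behaviors built differently but denoting the
-- same function are equal, and "when did it change?" is unaskable.
type Time = Double
type Behavior a = Time -> a

b1, b2 :: Behavior Int
b1 t = round t + 7
b2 t = round t + (3 + 4)
```

Any FRP system that lets you observe how a behavior was built (as a graph, a propagation network, etc.) breaks exactly this interchangeability.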
I also love this beautiful quote about functions as boxes and values as flowing across arrows:
It’s an inherently first-order visualization. In other words, the visualization itself makes a hard visually syntactic distinction between functions and values. What’s great about functional programming is that functions are values… [with] visual arrows, with every composition you get something more complex than what you had before. Complexity grows and grows and grows. Visual languages are very sparse, rather than a dense notation. So pretty quickly you get swamped… You use up a lot of space not saying very much.
The method is:
Here’s what that looked like:
What I decided was (for now) to give up on wires and arrows to visualize flow and dependencies. Instead I want to be more creative about showing those things, such as spatial arrangement, labelling (including colors, words, and on-hover interactions), etc.
I then pseudo-coded a few classic GUI problems. (I may steal other problems from 7GUIS.)
What I discovered here is that the classic notation of FP isn’t all that bad. (That’s not quite true. For example, Conal’s original bouncing ball is tough to read. More on this in the todos below…)
The problems are:
You want to be able to construct these expressions in any order, such as:
A) Start with a widget, like a button, and then derive some state from it, and then plop it back in itself.
B) Start with a notion of some state (that you haven’t yet defined), and build up a widget from that state, which then can be defined in terms of the widget.
Self-referencing makes this more difficult, but nothing a structure-y editor couldn’t solve.
I don’t see any reason why this shouldn’t be possible as an “overlay” of some sort on top of the classic FP notation.
I can also imagine an on-hover interaction on functions that allows you to visualize how they transform their inputs into their output, such as this one.
It’d be neat to have an over-arching shape of an application so you can visually get a sense for how it fits together, but I don’t know if anyone’s done that yet. Node-and-wires don’t tell you much: they tell you less than regular FP syntax, and often in more confusing ways.
I’m not clear on what the “type” of widgets should be – that is, what they “return”.
Are they a “structure” of some sort that you can then get various event streams (such as click) “from”?
Or should they simply return the (or a list of the) event stream(s) that they “contain”?
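To make the two options concrete, here’s a toy JavaScript sketch (entirely hypothetical names, not any real framework’s API):

```javascript
// Minimal event-stream helper for the sketch.
const makeStream = () => {
  const listeners = [];
  return {
    subscribe: (fn) => listeners.push(fn),
    emit: (v) => listeners.forEach((fn) => fn(v)),
  };
};

// (a) The widget is a structure you pull event streams *from*.
const buttonAsStructure = (label) => ({
  label,
  clicks: makeStream(),
});

// (b) The widget simply *is* the event stream(s) it "contains".
const buttonAsStream = () => makeStream();

// Either way, a consumer subscribes to clicks:
const btn = buttonAsStructure("Submit");
let count = 0;
btn.clicks.subscribe(() => count++);
btn.clicks.emit();
console.log(count); // 1
```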
In the past I called this idea “higher level HTTP requests” or “HTTP = JQuery”. Here are some thoughts I put on Twitter along these lines yesterday:
As expected, I haven’t gotten much done here the past few weeks. I was on vacation and wanted to do more freelance stuff, which I did. I’m also reading a great series of novels by Robin Hobb, which is distracting. However, I’m excited about all the chatter around my podcast, from tweeters and on the Slack group. Thanks to all that listen, and especially to those that write in - even just to say hi!
I’ve been thinking a lot about what we take for granted, and questioning everything. So why not the foundations? There’s the Turing Machine (which is the epitome of inelegant) and Lambda Calculus (which is a million times better by comparison). But can we do better? Are there other alternatives? I did some Wikipedia-ing, but didn’t come up with much of interest. Here are some links:
* https://www.reddit.com/r/AskComputerScience/comments/78wnv4/is_there_an_alternative_to_lambda_calculus_as_a/
I’m going to this whacky workshop in Italy next month that will hopefully encourage me to think outside the box on these topics. What even is a computer program?
I’ve been having fun Twitter conversations with Antranig (who I think lives in London!). He has a really cool paper on this, OAP, which is very similar in spirit to r0ml’s notion of “liberal software.” Many of the examples were about object inheritance in Java, which I am allergic to, but I love the idea of formalizing this ideal.
What are the key actions that need to be enabled for this principle:
I also had two other interesting, random thoughts:
Last week I got the Exceptional Tech Talent Visa, which means I’m moving to the UK on Oct 1. Just bought my plane ticket. Hope to be there for a year or two.
Last night I submitted the PDF version of the paper to REBLS! It took ~10 hours to translate the blog version to the PDF version! I had a lot of trouble with LaTeX, so I figured I should put up the full LaTeX code and associated files in case it’d be helpful for future independent researchers. Turns out they extended the deadline for another two weeks just now (probably because they didn’t get enough submissions), but it’s nice to have it behind me.
Looking at the cost of flights to Boston from London, it seems pretty stable, so I don’t need to book it yet. I can also register for SPLASH up until Oct 1 for $100 off. I’ll hear back about whether I got in on Sept 25th.
Next week I agreed to write an essay about Dynamicland. Also, I’m visiting Boston for a few days to visit my programming research friends! I’d also like to do more work for First Round Capital next week. So I don’t think I’ll have much time here.
The following week will be more vacation focused with my parents, brothers, and grandparents. I may do a bit of work for First Round and Dark, but not likely work here.
Here are a few ideas. I’m going to chat with Jonathan Edwards in Boston next week so maybe we can review this together.
I’m actually very excited about this section. Here’s basically the plan. Take the Reflex TodoMVC diagram from this paper and tweak it in Figma a bunch. At the same time, work on getting this stuff to compile in JavaScript somehow. The goal here is very explicitly a language / visual environment. If it’s good, I win. If it’s bad, maybe I turn it into a paper.
This is a neat line of inquiry but I don’t have a very concrete next step here. It’s very related to the ideas in this tweet storm:
And the lovely replies:
Interesting.
— Ivan Reese (@spiralganglion) August 10, 2018
2¢ — I think when you push for extreme reductions, throwing away ever more context.. you eventually run into the Berry Paradox (https://t.co/4ULmDa7DPt)
sounds like church encoding https://t.co/4Md0GECpXo
— Mariano Guerra (@warianoguerra) August 10, 2018
This is very much jumping to an entirely different problem. It’s intriguing, but not as likely.
From a motivation/vision perspective, this is important. I’d love to collaborate with r0ml on this.
As usual I received some prompt and insightful feedback from Jonathan Edwards yesterday:
Abstract: scalable → modular; larger → entire. “this paper shows how higher-order and cyclic streams can improve comprehensibility.”
✔
Introduction: they depend → they depend upon
✔
Elm architecture: Elm is more like ML than Haskell. Defer mentioning higher-order/cyclic streams till later when you define them. Instead: “user interfaces are inherently a cycle of alternating input and output, which is reflected in the Elm architecture”
✔
Reflex: Show the types of button and display
✔
Note how the do notation is implicitly assembling the view as if by accumulating side-effects from button and display, unlike the explicit construction in Elm.
The view is very explicitly assembled in Reflex - the vertical order of statements is the explicit order of the view. The “side-effects” we get from button
and display
are their event streams, which are also explicit. It is unfamiliar to use do
-notation to construct an event propagation network, instead of how it’s normally used for imperative code, but that doesn’t make this code imperative.
Is it easy to hierarchically compose views in Reflex?
✔
delete “we here”
✔
The price we pay for this explicitness is that events are abstracted from the single values in Elm into a time-stamped stream of values. In Elm we write a global state reducer function that pattern matches on these events. In Reflex we use stream combinators to define the model and view as streams of values of each other. The single global I/O cycle of Elm becomes a number of cyclic definitions between model and view streams in Reflex.
✔
Diagrams: Finding them a little hard to understand. The top level is the big yellow boxes, but what exactly are they? Categories of some sort? I’d expect the top level of the Elm diagram to be Model, View, and Messages.
This is a great question. I think this makes it clearer:
I think it’s shaping up!
Also added some fun new images:
I added a lot to the essay today. It really feels close, and like it’s shaping up. I am aiming to get a completed Draft 2 to Jonathan Edwards by this upcoming Monday.
I added counter button examples to explain Elm and Reflex, and then some ToDoMVC code from each to get into the comparison. I could definitely come up with better example apps (larger would be better), and another way to improve the comparison could be to articulate the Elm Architecture in Reflex (which is supported) in order to keep the syntax more uniform, but I don’t have the time for that, especially considering how impossible it is to code in Reflex - at least with the setup I have.
I had a VERY FRUSTRATING hour or two today when I was working with ghcjs and Reflex and emacs in my remote shell. My god I hated it. I had to take a break because I felt upset afterwards for an hour or so. I wasn’t prepared for how annoying it all was. Such a slooow feedback loop. And so frustrating using emacs! For example, copying and pasting code doesn’t work because the formatting gets messed up. Just one example of how everything takes longer.
In terms of what’s left, I have to finish explaining the meat of Reflex TodoMVC, and then maybe explain one other part of Reflex ToDoMVC (should only take another 3 hours), and then write out the “is the cure worse than the disease” section (~2 hours), and tie it all together with the related work, conclusion and a final read through and edit (~2 hours). I should be able to spend some time tomorrow and Sunday on this, so we’re in good shape for getting it done by Monday.
(In other news, last night at 11:30pm I had some exciting ideas for a future research topic.)
The changes I’m committing now were actually done this past Friday. It’s not a ton of work, but it wasn’t pulling teeth - it was fun. And I’m excited enough about it that I actually made my girlfriend listen to me read it to her - she was a great sport. The language is a bit technical / high level. There’s a lot I need to expound on / elaborate in there, and I imagine that some ambiguities in the essay are weak spots I need to explore more.
Yesterday I had a lot of fun comparing and contrasting TodoMVC in Elm and Redux. I printed out the code for both on paper and wrote all over them. I have a lot of questions about the types of things in Redux that I’ll have to investigate next.
One idea I had is that to make the comparison more apples-to-apples, I could write TodoMVC in Redux but with the Elm Architecture - this is possible but not the reverse. However, that is a bunch of work considering how slow my Reflex/ghcjs feedback loop is. Another possible task is implementing focus on input elements, localstorage, and url hash get/set in Reflex to make them truly equal - but that’s not really the point so it may be more work than it’s worth.
I only have two weeks before I have to submit to REBLS. I think I have enough for the paper in progress section, but I want to make it the best I can before that date. However, I also have work for Dark and First Round I should do this week - as well as taking some time off to spend with family that’s around this month. That’s a lot going on!
I spent 3 hours this morning compiling lists of various interesting programming environments. What I’m most excited about is my list of classics that I’ve never played with:
As well as a few I keep hearing about:
Hard to figure out why this happened, but I’m less pumped about this these days. I think I’ll give myself a break and do other work and hopefully I’ll naturally get re-excited about coming back to this. Yesterday I spent 2 hours reading the Reflex documentation. It felt like a useful two hours. The key thing I figured out is that it does cyclic updates via an “event propagation graph”, but I wasn’t able to figure out much more details than that.
Some interesting links:
I didn’t feel like working on my computer, so I printed out my essay, Jonathan Edwards’s feedback, and grabbed a few papers to re-read that I felt would be relevant. Turns out they really were!
From What’s Functional Programming All About?
The core of Functional Programming is thinking about data-flow rather than control-flow.
it’s about managing the same complexity in a way that makes the dependencies between each piece of code obvious, by following the graph of where function arguments come from and where return values end up.
From Out of the Tarpit:
There is in principle nothing to stop functional programs from passing a single extra parameter into and out of every single function in the entire system. If this extra parameter were a collection (compound value) of some kind then it could be used to simulate an arbitrarily large set of mutable variables. In effect this approach recreates a single pool of global variables — hence, even though referential transparency is maintained, ease of reasoning is lost (we still know that each function is dependent only upon its arguments, but one of them has become so large and contains irrelevant values that the benefit of this knowledge as an aid to understanding is almost nothing).
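A tiny JavaScript illustration of the Tarpit point (my own toy example): each function threads the whole “world” value, so it’s referentially transparent, yet the signature tells you nothing about what each function actually touches.

```javascript
// Every function takes and returns the whole world. To know what logClick
// touches you must read its body; its type (world -> world) says nothing --
// exactly the "single pool of global variables" the quote warns about.
const incCounter = (world) => ({ ...world, counter: world.counter + 1 });
const logClick = (world) => ({ ...world, clicks: world.clicks + 1 });

let world = { counter: 0, clicks: 0 };
world = logClick(incCounter(world));
console.log(world); // { counter: 1, clicks: 1 }
```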
Out of the Tarpit turned me on to Dijkstra’s The Humble Programmer:
A study of program structure had revealed that programs —even alternative programs for the same task and with the same mathematical content— can differ tremendously in their intellectual manageability. A number of rules have been discovered, violation of which will either seriously impair or totally destroy the intellectual manageability of the program. These rules are of two kinds. Those of the first kind are easily imposed mechanically, viz. by a suitably chosen programming language. Examples are the exclusion of goto-statements and of procedures with more than one output parameter.
Argument four has to do with the way in which the amount of intellectual effort needed to design a program depends on the program length. It has been suggested that there is some kind of law of nature telling us that the amount of intellectual effort needed grows with the square of program length. But, thank goodness, no one has been able to prove this law. And this is because it need not be true. We all know that the only mental tool by means of which a very finite piece of reasoning can cover a myriad cases is called “abstraction”; as a result the effective exploitation of his powers of abstraction must be regarded as one of the most vital activities of a competent programmer. In this connection it might be worth-while to point out that the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise. Of course I have tried to find a fundamental cause that would prevent our abstraction mechanisms from being sufficiently effective. But no matter how hard I tried, I did not find such a cause. As a result I tend to the assumption —up till now not disproved by experience— that by suitable application of our powers of abstraction, the intellectual effort needed to conceive or to understand a program need not grow more than proportional to program length.
(Bold is mine.)
Reading Dijkstra was a revelation. This quote almost makes part of my first FRP draft look like plagiarism for not attributing this idea to him!
the intellectual effort needed to conceive or to understand a program need not grow more than proportional to program length
Honestly, I didn’t know! But now I do.
I’m realizing that much of my essay needs to be deleted, which is sad, but it’s important to be ruthless about “killing one’s darlings” when it’s time to. I went ahead and deleted it all and started afresh in the same file, but I expect that I will be coming back to the first draft a lot to copy and paste various elements.
I see now that my essay is less about the importance of explicit dependencies and more about how Elm doesn’t have them, but Reflex does, and the reason is higher-order and cyclic streams. I worked on this essay for ~90 minutes today and it was a bit like pulling teeth. I’m not sure if it’s because I’m in a new setting – the NY Public Library instead of home – or if I’m sad about the mass deletion, or distracted by my other part time work, or whatever, but I’m going to stop working on this essay today in favor of other part time work, and start again tomorrow morning.
I’ve been busy with my part-time jobs! I spent the first two days this week working for Dark, and the rest of the week working for my new gig at First Round Capital. I found two other gigs (another one at Dark and one at the Jain Family Institute), but pushed off their start dates for a month to give me some time to get this FRP essay done. However, I do expect to be splitting my time 50/50 for $ and research going forward, and if-anything leaning more heavily on the $-side of things because I want to make sure things are moving forward there. It’s exciting to not have to worry about money as much! I wonder if I’ll get sick of this context-switching at some point like Paul Chiusano did and want to go full time on the research.
After completing the first draft last week, I sent it over to Jonathan Edwards for feedback. It was truly excellent feedback. I kept reading it over and over again. He was able to highlight the parts of my essay that I was most concerned about, and give very on-point suggestions. Here is his feedback and my thoughts underneath each point:
“modular comprehensibility” is a heavyweight phrase, and modularity has many meanings. Maybe “locally comprehensible” is lighter and more precise?
Agreed. “module” is a loaded word in tech already. Another phrase could be “piecemeal comprehensible”. Or how about “self-contained comprehensibility”?
A metaphor I find helpful is the difference between a novel where anything can happen to anything at any time, and you have to read it sequentially and completely, vs an encyclopedia, which is referential in that you can just look up what you care about (and the entries it specifically references) and be done. So another possible phrase is “referential” or “definitional” comprehensibility.
“implicit dependencies and side effects decrease modular comprehensibility” Is the problem being implicit or non-local? Would non-local explicit dependencies be OK?
Really great question. I totally agree that I don’t make this point well, because I’m partially confused by it myself.
For one, it’s important to not only list what the dependencies are, but specifically how the dependency affects its dependent. It’s not “this term depends on x”, but “this term is precisely 2x + 17”.
At the end of the day, I want “data = …”, which is an explicit definition. If the definition of “…” lives in a different file, I don’t care because we can build an automated tool to bring them together for visualization purposes. Explicit-ness is the key, and side effects lead to implicit-ness.
In the essay I complain about “action at a distance”, but I think you are correct in pointing out that that doesn’t really matter. It’s more the implicit-ness that’s the problem.
The graphs are larger than needed to make your point. It took me a minute to figure out what you were doing with the edge adjacencies. Probably should explain in figure captions. It might be clearer to make this point with different syntaxes for representing the graph.
Ok, good to know.
“count is constant in the sense…” is asking for an argument. Is a lazy stream really a constant? You don’t need to pick this fight.
Ok, good to know! (Didn’t realize it was a fight.)
The reflex example needs more detailed explanation. What does foldDyn do and what is its type? Show how the stream is higher order. Also note the dependence on lazy evaluation (which is also the reason for space leakiness)
Agreed, will do.
You need to show todomvc in reflex to compare with the Elm example and show how locality is improved. This is the crux of your argument, and you stopped before you got there!
Ah, ok, will do! I think this will help me articulate some of the questions in your comments.
Don’t credit Glen with inventing live programming. Best citations are Steve Tanimoto and Chris Hancock.
Ok good point, will look them up.
You should address the counterargument that your cure is worse than the disease :). Isn’t the bottom line really the cognitive effort to understand the code? Evan says the Elm architecture is no longer FRP and is easier to understand http://elm-lang.org/blog/farewell-to-frp. FRP can get hairy - do you end up just trading non-local dependencies for convoluted semantics?
Totally, totally agreed. I used to have a section in this essay that referred to Elm’s Farewell to FRP, but removed it. I’ll bring it back!
Some might argue that the data flows are just as complex topologically, you’ve just encoded them in a very terse and abstract way. You propose building code viz tools to help. But perhaps we could also add assistance for understanding the non-local dependencies in Elm also.
Really on-point question. I think about this myself too. One way to phrase this question is: can we build an automated tool to parse an Elm project into a Reflex project?
I fear that the Elm Architecture is functional in the same way that passing around global state as an explicit argument is functional as they articulate in Out of the Tarpit:
There is in principle nothing to stop functional programs from passing a single extra parameter into and out of every single function in the entire system. If this extra parameter were a collection (compound value) of some kind then it could be used to simulate an arbitrarily large set of mutable variables. In effect this approach recreates a single pool of global variables — hence, even though referential transparency is maintained, ease of reasoning is lost (we still know that each function is dependent only upon its arguments, but one of them has become so large and contains irrelevant values that the benefit of this knowledge as an aid to understanding is almost nothing).
We had a fun meetup on Monday of this week. It was great. There were 8 of us at the Work-Bench offices, which was a great location. The way we ran this meeting was that different people showed what they were working on and got some feedback. Nick from Windmill showed a terminal app that runs compile processes in a more visual way. Sam and Rodrigo from Hopscotch showed interface ideas for linking between projects. Steve and Joe from Universe showed a new on-boarding flow that teaches the interface, which reminded me of this article that I needed some help from Twitter to find.
I found this link on lamdu.org. So great! The motivations are very similar to my own FRP essay but they take a very different approach. Their goal is:
allow the user to selectively focus on parts of the execution to help their understanding
Which sounds a lot like my goal to make reading proportional to comprehension.
Glen sent me this link after an email exchange we had. It’s a very thought provoking short demo essay thing. It provides evidence for the “we need lots of little visual domain-specific environments” argument.
Crazy I haven’t read this yet. Really wonderfully written. Of course I am a bit tired of trying to visualize imperative programs, but this is an interesting take on it. This is a bit more advanced than PythonTutor, which they reference in the essay.
Found this article via a Michael Nielsen essay. Turns out my experiment in publishing all my work in progress isn’t so new. This guy has been doing it since 2008, and says that it’s called Open-notebook science.
My new motto (from I don’t know where):
Imperative Programming is doing
Functional Programming is being
— Steven Krouse (@stevekrouse) July 27, 2018
My favorite picture (from What’s Functional Programming All About?):
And my new obsession (from Can functional programming be liberated from the von Neumann paradigm?):
move I/O entirely out of our programming model into the implementation of a denotationally simple model for whole systems
Here’s an example of moving I/O out of the programming model. You have a beautiful FRP app. No IO required to render your app. That’s what libraries like React are for. They are like JQuery as a service. You tell it what things should look like it and it does the imperative I/O (virtual-dom diffing and patching) below your level of abstraction.
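Here’s a toy sketch of that step in JavaScript (hypothetical shapes, not React’s actual implementation): the program only produces descriptions, and the library turns differences between descriptions into imperative work.

```javascript
// The "pure" part: a description of the view, no I/O anywhere.
const render = (count) => ({ tag: "div", text: `Count: ${count}` });

// The library's job, below the abstraction: diff two descriptions
// into the imperative operations that would actually touch the DOM.
function diff(oldView, newView) {
  const ops = [];
  if (!oldView) ops.push({ op: "create", node: newView });
  else if (oldView.text !== newView.text) ops.push({ op: "setText", text: newView.text });
  return ops;
}

console.log(diff(render(0), render(1))); // [ { op: 'setText', text: 'Count: 1' } ]
console.log(diff(render(1), render(1))); // [] -- nothing to do
```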
But what about some arbitrary HTTP requests that your app needs to make to some backend service? How do we get this IO out of our code?
My first thought was that we should use a stream of some sort to abstract over the request. But that feels a whole lot like what we already have with JavaScript Promises, and is still very imperative.
Then it hit me (in the shower, as it always does). HTTP requests are like the JQuery DOM queries and mutations. We have to disallow arbitrary HTTP requests in favor of a more holistic abstraction (that will do HTTP under the hood).
I’m not firm on the specifics yet… but the idea is that you would declaratively say which data are needed on the frontend, and which should be persisted on the backend. Potentially, the way to do this right is to blur the front-end and back-end into a single codebase that speaks in a higher-level abstraction, which is then broken up when the app is compiled – maybe like Meteor. This could help solve many of the issues with trying to coordinate the front and back ends.
Let’s take another HTTP example. Let’s say you have two services that you want to keep in sync with a third server. Each one sends you webhooks when their data updates. Then you go to the other and update the data there.
Let’s say service A notifies you of a change via a webhook. You go notify service B with a POST.
All is good until service B hits you with a webhook (because you just changed its data with a POST and it’s letting you know that its data was changed). You could go update A with the changes, but that’s a bit crazy because A already has the changes - it was the one who first propagated them.
In order to prevent this craziness, you’d need some way at your server to remember what each service knows so you don’t send them redundant updates. However, that feels kinda dirty. Shouldn’t your service be stateless…?
Here’s my take on the no-IO solution to this problem:
--------------------------------
-- Functions we need to define
--------------------------------
stateView    :: State ServiceA -> State ServiceB -> State Internal
serviceAView :: State Internal -> State ServiceA
serviceBView :: State Internal -> State ServiceB
------------------------------------------
-- Virtual States for diff and patching
------------------------------------------
initialState :: Service s => State s
stateReducer :: Service s => Webhook s -> State s -> State s
diffState    :: Service s => State s -> State s -> Diff s
patch        :: Service s => Diff s -> [HTTP s]
Our job is to explain how our internal model of the states join together, and then explain how that model should be persisted in each of the views.
Under the hood, our library will pull the initial states from both services (hopefully not the entire states but just the relevant parts), and maintain its internal model of each service’s data (the virtual-state), and when there’s a diff between what is and what should be as specified in our model, it’ll do the diff and patch with the appropriate HTTP requests.
One thing to note here is that in the virtual state diffing and patching, I’m using the “Elm Architecture” of a reducer thing, which I normally don’t like. In an ideal world, it wouldn’t be a global reducer like that but more like how I argue in my FRP draft. However, it hurts my brain to think about how to do it that way… Anyways, it’s an implementation detail so it doesn’t matter too much either way.
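To make the webhook echo problem concrete, here’s a toy JavaScript version of the virtual-state loop (all names hypothetical, HTTP replaced by a plain array): caching each service’s last-known state means an echoed webhook produces an empty diff and no redundant request.

```javascript
// Last-known state per service: our "virtual state" cache.
const lastKnown = { A: {}, B: {} };
const requests = []; // stands in for real HTTP calls

// Diff two flat state objects into just the keys that changed.
function diffState(oldState, newState) {
  const diff = {};
  for (const k of Object.keys(newState)) {
    if (oldState[k] !== newState[k]) diff[k] = newState[k];
  }
  return diff;
}

// "Patch" a service: only emit a request if the diff is non-empty.
function patch(service, desired) {
  const diff = diffState(lastKnown[service], desired);
  if (Object.keys(diff).length === 0) return; // nothing to send
  requests.push({ service, diff });
  lastKnown[service] = { ...lastKnown[service], ...diff };
}

// A webhook updates our model, then we patch the *other* service.
function onWebhook(service, state) {
  lastKnown[service] = { ...lastKnown[service], ...state };
  patch(service === "A" ? "B" : "A", state);
}

// A tells us a todo changed; we "POST" it to B...
onWebhook("A", { todo1: "done" });
// ...B echoes the change back; the diff is empty, so no request to A.
onWebhook("B", { todo1: "done" });
console.log(requests.length); // 1
```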
I am angling to submit my FRP draft essay to REBLS on 8/17/18, so I’ll be mostly working on that, along with my Dark and First Round work, over the next three weeks.
As far as the essay goes, I have plenty of work to do as specified in Jonathan Edwards’s comments above. In particular I need to get clearer on what the problem is (implicit dependencies or non-local dependencies) and also make the case that the cure isn’t worse than the disease.
I also made more changes this morning. I think it’s really coming together!
I found some solid advice on Twitter for writing a research paper and it got me excited about turning my article into a real paper. I gave it a shot and like it so far.
I got started this morning at 7am and worked until now (4pm) with only two breaks, a 90-minute and a 30-minute one, which means I worked on this for 7 hours today!! I’m very proud of myself. I’m also excited about how the piece is coming together. I feel my understanding of these topics deepening as I research them more fully in order to write the piece.
One thing in particular that I came across is the imitate method in CycleJS’s xstream, which makes me wonder how far I can get with an FRP framework in vanilla JavaScript… It’d be neat if I didn’t have to build a whole runtime!
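Here’s a hand-rolled sketch of the idea behind imitate in plain JavaScript (my own toy version, not xstream’s actual implementation): a proxy stream is created and used before its real source exists, then wired to it afterwards, which is what lets you close a cyclic definition.

```javascript
// A minimal push stream.
function makeStream() {
  const listeners = [];
  return {
    subscribe: (fn) => listeners.push(fn),
    emit: (v) => listeners.forEach((fn) => fn(v)),
  };
}

// A proxy stream: usable immediately, wired to its source later.
function makeProxy() {
  const listeners = [];
  return {
    subscribe: (fn) => listeners.push(fn),
    imitate: (source) =>
      source.subscribe((v) => listeners.forEach((fn) => fn(v))),
  };
}

// `seen` is defined in terms of the proxy *before* `clicks` exists...
const proxy = makeProxy();
const seen = [];
proxy.subscribe((v) => seen.push(v * 2));

// ...then the proxy imitates the real stream, closing the loop.
const clicks = makeStream();
proxy.imitate(clicks);
clicks.emit(1);
clicks.emit(2);
console.log(seen); // [2, 4]
```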
I was only planning to work on this until noon-ish, then run, and then work on my UK visa application, but I didn’t do it that way. I’m going to go run now. If I have energy after that, maybe I’ll do my visa app, but if not I’ll do that tomorrow. I also have to do my Dark work for next week tomorrow, because I’m going to be out of town this weekend, so I likely won’t work on this again until Monday, or more likely, Tuesday of next week.
Yesterday I had a wonderful conversation with Jonathan Edwards, which helped me figure out my focus for the next month, and possibly beyond. I feel very lucky to get his advice.
I updated him on my progress, that I was able to solve DOM Recursion Problem with Reflex, but haven’t made much progress on the UI part of visualizing the stream operators yet.
He was able to give me really great advice here, because he’s possibly the original guy who’s been insisting that we need to work both on the model and the UI at the same time - as opposed to PL people who only work on the model and Bret Victor people who only work on the UI. It’s called co-design: designing two different pieces towards each other.
He suggested that before I work on the UI, I should write a blog post comparing Reflex to Elm: why I like the Reflex semantics as a model better, why the UI of Haskell is hard to use and why we need a better UI, and only vaguely hand-wave about how I think the UI will look.
This essay will be a great foundation to build off of. I’ll share it on social media, HN, etc.
I started the essay today here.
Then, I should pick a few very specific and small examples, and draw out by hand and Figma what they should look like from a storyboarded perspective, not worrying too much about the general case. I can then publish this as a follow up essay. However, Jonathan warned me (and I agree) that this will likely be difficult, possibly where this research thread comes to an end, so we’ll see…
However, if I am able to get something that looks promising, I should, in effect, concatenate the two essays into a single narrative and submit to either LIVE or REBLS at SPLASH on Aug 17th. In particular, Jonathan noted that I should explain in my submission the “technical challenges for building this for real.”
If I don’t make it in time for SPLASH, the next relevant workshop would be PX at
It’s been a while since I reflected on my work here. Part of me thinks that reflection text should really show up in my log… Maybe I’ll add that feature one day, but until then, you have to click on the link above.
Also, maybe I’ll turn this reflection into a short podcast update episode. I could tack it onto the beginning of a podcast interview, like Sam Harris does, but I feel like that’s a bit disrespectful to my guest, who doesn’t really want that nonsense before their interview, especially if they’re going to share it with their audiences.
While watching fireworks the night before last, I couldn’t stop thinking about Flowsheets, my fuzzyset refactor, my ideas to improve ObservableHQ, and for my own reactive playground.
Here’s how I put it to my girlfriend: “Glen and I are working on the same problem - code comprehensibility - with similar approaches - visualizing the live data - but for different types of code. He’s working with normal batch code (mostly static input, mostly static output), like data processing, while I’m working with reactive code (which responds to inputs over time), like user interfaces. So my problem is strictly more difficult - possibly even a superset - and even harder because I want to allow higher-order streams (streams of streams).”
So it occurs to me: why start with the harder problem? We still, as an industry, haven’t solved the easier problem. If I want to do simple data processing, I don’t have any satisfactory options. The easiest to use is probably Excel / Google Sheets, but that’s a pain and limited. And then there’s Jupyter or ObservableHQ, but those are a nightmare of all sorts of errors and slow feedback loops.
My first thought was to create metaphoric and live visualizations of all the FP list operators in Figma, such as map, filter, fold, find, some, etc. My thesis here is:
All list (or stream) operators have highly visible structures that programmers have to simulate in their heads. Why don’t we bring those mental visualizations out onto the computer screen as the actual interface? WYSIWYG for list manipulation!
For example, here’s map:
This is literally how functions are taught in school to children. They are a mapping from a domain to a range. Arrows from one list to another. And you could imagine actually writing the map function on any of the arrows and seeing the data update through the arrows.
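To make this concrete, the arrow data a UI would need can be computed alongside the map itself. Here’s a rough JavaScript sketch (`mapWithArrows` is a made-up name, just to illustrate the idea):

```javascript
// Hypothetical sketch: represent a map step as explicit arrow data that a
// UI could render as lines from the input list to the output list.
const mapWithArrows = (f, xs) => {
  const ys = xs.map(f);
  // one arrow per element: from input slot i to output slot i
  const arrows = xs.map((x, i) => ({ from: x, to: ys[i], index: i }));
  return { output: ys, arrows };
};

const { output, arrows } = mapWithArrows(x => x * x, [1, 2, 3]);
// output is [1, 4, 9]; arrows[1] is { from: 2, to: 4, index: 1 }
```

You could imagine editing the function on any arrow and watching every arrow’s `to` value update live.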
For the fuzzyset problem I’m working on, the visual representation I have in my head is something like:
(It was prettier in my head.)
However, I was discouraged when I went to the lodash documentation to see how many of these I’d have to make. Part of my thesis is that there are only a few key primitives you need to do anything you want with lists. (While this is technically true, because all you need is fold, it’s likely not true when you consider our mental visualizations of these operators: they are more specific. For example, it’d be difficult to detect that I’m doing a map with foldr (\x xs -> x+1:xs) [] [1,2,3] and to get it to expand to the arrows viz above. Ditto with filter, etc.)
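Spelled out in JavaScript, that fold looks like this - and nothing in its shape announces “map” to a tool:

```javascript
// A generic right fold: nothing in its call signature says "map".
const foldr = (f, init, xs) => xs.reduceRight((acc, x) => f(x, acc), init);

// This call happens to implement map (+1), but a tool inspecting the code
// only sees a fold; recovering the "arrows" picture would require
// recognizing this particular lambda shape.
const incremented = foldr((x, xs) => [x + 1, ...xs], [], [1, 2, 3]);
// incremented is [2, 3, 4]
```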
However, this isn’t a showstopper. Just because there are a couple dozen (maybe even ~100) operators doesn’t mean I should go home. It’s actually a simple-ish project if I want to commit to it. Here are a few questions I have:
How do you visualize the current values of computations within the body of lambda functions? Maybe this isn’t a real problem (as long as you have concrete data to flow through), or maybe it’s the same problem as (1) above.
What about user-defined functions? Do they write their own visualizer?
Are the FP combinators the essence of list (or data) combinatorics? What about APL? Sometimes nested lists and maps within lists seem cumbersome. What about SQL?
I did not have fun with Luna, despite it being squarely centered in all the things I’m into: Haskell, visual coding, live data. For one, Luna only visualizes dependencies between values, which is much less rich than the specific metaphoric pictures I have in mind for each combinator.
Secondly, it’s not nearly as live as I’d want.
It was easy to normalize the string (except the regex wasn’t compatible, so that part needs work).
There’s no `range(10)` function, so I had to create the series by dragging down `+1` of the previous number manually, which won’t scale. I’m not sure how to get around this.
I was surprised to see that there are named ranges now, which are similar to variables. The names don’t show up on the sheet (I had to add them manually above as a label) but you can use them in formulas. I did this with `value`, `normalized`, and `gramSize`, but got bored and stopped after a certain point.
Splitting into grams as a mapping was very straightforward (except that there’s no substring, so I had to use `LEFT` and `RIGHT` - it’s a bummer there’s no function creation either).
Grouping and counting was funny - I actually used the `QUERY` function, which is very SQL-like. It worked like a charm, except it didn’t keep the original ordering, which was a bummer. (You can see the query at the bottom of the sheet.)
In order to try to keep the order, I pulled the counts back up into the original grams list. This works, but then there are duplicates of course. I guess I could first remove the duplicates and then join with the counts, but I didn’t do that.
Overall, it’s possible to do stuff in a spreadsheet, but:
I haven’t had the chance to play with it myself, so I don’t really know, but it looks great. I’d prefer a FP language instead of Python, but doesn’t really matter. I’d be curious how it handles nested data structures.
I spent ~3 hours yesterday replicating this code in APL. God, it was hard!
Here’s a link to it on tryapl.
It took a very long time, it was very frustrating, I barely understand how the code works, and I am still not quite done. I haven’t figured out how to abstract over arbitrary word inputs which would allow me to remove the ‘-mississippi-‘ string and hardcoded 11’s and 3’s.
One of my biggest complaints about APL is the documentation. It’s like it’s written in another language! And I wasn’t able to find great Stack Overflow support.
I’ve printed out Iverson’s Turing lecture on APL so maybe reading that will help. I’m dying out here!
I tried to jump ship (again) to a different problem. I thought maybe it would be easier and would help with my intended problem, but, surprise, it’s difficult in its own right.
I’m not sure where to go from here… I’m feeling as lost as I did in my last todos.
I guess learning Reflex and building visualizations for it (in Figma to start) is a good next step.
(As an aside, I spent about an hour yesterday learning about PureScript, which was fascinating. It compiles to readable JS! It does this by not being lazy. The author has an FRP library called Behaviors that’s just ~100 lines of code. It’s neat, but not as firm an abstraction as Reflex’s, which I prefer. And so I’m stuck with Reflex. But now I know I can maybe use ghci to get better type instrumentation this time, so maybe it’ll go better.)
It may be a couple days (or even a week) until I can work here again because I have visa paperwork to do (I may be moving to London) and work for Dark to do. (On the other hand, maybe I’ll procrastinate on those things and be back here sooner.)
I started the day off with a rant about abstraction haters, which was fun. I’m waiting on some notes from a friend before posting it around.
I needed a break, so I read through Glen’s new essay, in part because its topic is excitingly similar to my own. There’s a lot he says that I now no longer have to say myself - I can simply quote him. In fact, I think this essay could provide me with the “intellectual cover” from which to begin my own essay - which would allow me to do away with contextualizing why code comprehensibility is important.
I also had fun converting some of his code to an FP style, which allowed me to make some points about why the model is important, along with showing the data. I’ve included the email I sent to Glen:
Hey Glen,
I took some time today to dig into your Fuzzyset explanation. It’s wonderful! In particular, I love the little details, such as making the very numbers into buttons that show the calculations of the dot product and magnitude in a popup.
The second interactive section of your essay, “Turning a string into a vector”, features some code that is incredibly difficult to comprehend, even with the explanation and GUI beside it. As an exercise I removed the incidental complexity in the code (in other words, converted it to a functional style) to clear things up:
Now you can better follow the logical flow of the code - as opposed to the control flow of the computer. This includes eliminating iteration variables - in fact, all mutable variables - which eliminates bugs, such as a 5-line section that doesn’t appear to do anything. (Please correct me if I’m wrong.) It’s also shorter now: the core of the code is 13 very clear lines (excluding comments).
Additionally, in this form, you don’t need your comments on the side (about normalizing the string and adding dashes) because I’ve added them in context, which makes it even easier for a user to tweak them.
And perhaps the best part of the functional style isn’t captured by the code: it’s quite visualizable in a way that doesn’t require console.log. Each line of code is a constant value, mapped from a previous value. In its current form I can imagine it being directly converted to something like Flowsheets, and you could marry the abstract symbols and concrete values even further. Dream, here we come!
Best, Steve
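For reference, here’s my own sketch (not Glen’s actual code) of the kind of constant-value pipeline that email describes - normalize, pad with dashes, split into grams, count - where every binding is derived from the previous one and could be displayed Flowsheets-style:

```javascript
// Illustrative reconstruction of the gram-counting pipeline; the exact
// normalization fuzzyset uses may differ.
const gramSize = 3;

const normalize = s => s.toLowerCase().replace(/[^a-z0-9 ]/g, '');
const pad = s => `-${s}-`;
const grams = s =>
  Array.from({ length: s.length - gramSize + 1 }, (_, i) =>
    s.slice(i, i + gramSize));
const countBy = xs =>
  xs.reduce((counts, x) => ({ ...counts, [x]: (counts[x] || 0) + 1 }), {});

const value = 'Mississippi';
const normalized = pad(normalize(value)); // '-mississippi-'
const gramCounts = countBy(grams(normalized));
// gramCounts['iss'] is 2; gramCounts['-mi'] is 1
```

Each intermediate value (`normalized`, the grams list, `gramCounts`) is a constant you could render beside the code.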
Here’s the current status of the outline. It’s tough. It’s pretty messy. As I said above, I may use Glen’s essay as a starting place and just jump into comprehensibility and reactive programs, and not contextualize why those things are important. At the end of the day, my vision of everyone constantly improving and customizing the software they use, as they use it, for free - while very compelling to me - is less so for others who have less of the picture in their heads.
It took me ~1 hour to edit the podcast and ~20 minutes (I timed it) to put it online, including uploading it, writing the description, links, etc. This is one of my favorite conversations so far!
Unfortunately, Thursday was a bit of a bust. I went to the Games for Change conference but didn’t get inspired. I imagine that’ll be the case for most conferences - unless they’re nonferences, which are basically conferences without talks that focus on peer-to-peer interactions.
Speaking of nonferences, the tickets for CycleJSConf 2018 go on sale today and they only have 40 spots, so I should decide about that ASAP. It looks like a long and expensive flight, so I’ll have to think on it… As an aside, my girlfriend and I are thinking about moving to London in the next few months, in which case it’s cheap and a 2-hour flight. But then it’ll be harder to get back to the States for SPLASH in Boston in early Nov… These conferences are complicated and expensive!
I spent 2 hours on Friday (before I had to go to Philly for my grandfather’s birthday) on yet another vision-y essay. This one is about how the smallest unit of improvement a person can make in technology is an app, and how that’s too big, especially when people mostly just want to add on to existing apps with a feature or customization.
At this point I have 4 such essays:
And starting on Wednesday, I’ve been maintaining a Workflowy nested list of the “outline” of this massive essay / thought dump. Workflowy felt like the right tool for this because I’m making a linear argument with a lot of tangents and nested points, and I want to be able to zoom in and out on the sub-points as well as collapse arguments to see the broader shape. The Workflowy is not on source control, so I put the current version of it in this gist, and here’s a link to the live one.
For better or for worse, these ideas keep metastasizing and gobbling up other projects into their purview. For example, on Friday I re-read the final STEPS report as well as Guido’s Computer Programming for Everybody, the second of which I found from watching @roml’s Liberal Software. All three projects were highly related and I copied and pasted lots of their text into the Workflowy for reference.
I also sent an email to @roml, asking for help articulating the vision for “liberal software.” It’s a fascinating vision. In particular, I want to clarify how it’s different from the Smalltalk, Lively Kernel, Morphic, STEPS, Boxer vision of a unified computing environment. Maybe it’s not different from those, but it feels different. For one, I’m more concerned with web-based tools, while those are more OS-based. For another, I care a lot about the visual polish of the end software, while those almost deliberately seem not to.
I’d love to zoom in on a sub-topic of my outline and write / polish that concept well so I can publish it, get it out there, but I am finding that easier said than done. I’m also becoming cognizant of how long this writing is taking me, and am worried that it’s not the best use of my time… Maybe I should go back to the problem as opposed to motivating and contextualizing the problem.
On the plus side, I’ve become skilled enough in communicating what I’m working on and why that yesterday I was able to explain it at a party (in conjunction with Nadia’s Independent Researcher article). Additionally, both Jonathan Edwards and Paul Chiusano sent me a shockingly relevant paper that came out a few days ago, discussed below, which is a great sign that people understand where my head’s at.
I worked from 9:20-2:00 today, had a 45 min break for lunch, and have been working on this write up for the last hour or two. Productive day! (Although it doesn’t really feel it because I don’t have much tangible to show for it.)
This is the paper both Paul and Jonathan sent me, which came out a few days ago. I haven’t felt this sinking feeling of “oh shit someone already beat me to it” in a long time, which is a great sign.
One note I’ll make up front: the reasoning in this paper feels backwards to me. It’s saying: people use FRP, it’s hard to debug FRP with imperative debuggers, so let’s make visual debuggers. However, I feel that the visualizations are the key to why FRP should be used in the first place, and the fact that we don’t already have good visualizations for it is crazy.
They focus on the rxjs library in this example. To be honest, I had considered doing something similar myself, so it’s great to see this before I did that. This means that they do visualize streams of streams, but not cyclic streams.
They also focus a lot on the user interviews, which I find silly, like they’re pretending to be a different kind of science than they are.
I speak more about their visualizer, rxfiddle, below in the “FRP Visualizations” section.
I find it hilarious that they include a reference to StoryFlow, a tool to visually track the evolution of a character in a fictional story. It’s funny to me because I’ve been using the metaphor of fiction as an example of how it’s difficult to follow state in imperative languages.
They also cited a bunch of papers at the end of their paper that extolled the importance of seeing runtime values in order to comprehend programs, which fits into my thesis nicely. Currently you have to instrument your code manually or set debug breakpoints to inspect. I want you to be able to look with your eyes, mostly, and interact only a bit to see what’s up in a program.
After reading the above paper which focuses on rxjs, I felt my understanding of “true FRP” shaken a bit. I often find it hard to keep them all straight in my head, so I read this article in part to see if I could figure it out.
This was an arrowized paper. It helped me get back into the mindset of FRP, but no great insights here.
I was in the groove, so I continued on. This article was fascinating, but a bit hard to read because I’m slightly allergic to monads. It was interesting to see that he explicitly says he wasn’t able to do “mutually dependent signals.”
Forget the past, change the future, FRPNow!
This paper is apparently the basis for Hareactive/Turbine (discussed more below) and builds upon Monadic FRP. It seems to solve space/time leaks with some clever restrictions about what you can and cannot depend upon. However, as many monadic interfaces do, this feels very imperative to me. They have a `Now` monad, for goodness’ sake!
At the end, they make a tantalizing point that “synchronous dataflow languages, such as Lustre, Esterel and Lucid Synchrone … never have an issue with leaky abstractions, even for higher-order dataflow, because they are more restrictive than traditional FRP” - but they don’t say in which ways they’re more restrictive.
Fascinating talk! In particular, I loved the way he explained why FRP is the best:
Another key point I got from the video is that if you want to use ghci with reflex-dom, you can, but then you need to use the ghc bindings, which require a desktop environment to see stuff with. Or something like that. Maybe it means I can use ghci to do more than I thought I could…
In the talk, he shows this tutorial on building a reflex library through building a terminal library. Looks fascinating!
I also skimmed another video, which had a great example and quote about declarativity. He shows a dynamic list whose dependencies are all explicitly listed inside of it. Only one place to (recursively) look. This epitomizes why Reflex is the only library I am still excited about.
I reached out to Obsidian, the Haskell dev shop he runs, to see if they’re still hiring. I could definitely see spending some time there, which I can’t say about almost any other company.
Before I found the FRP Debugging articles, I was saying something along the lines of: we need visuals that support casual understanding of large, complex software programs. FRP supports this because the metaphors are very visual. Just look at rxmarbles.com and the cyclejs devtools. If you wanted to visualize imperative programs, the best you could do is Python Tutor, which merely helps you “play machine in your head.”
It turns out, I’m not the only one to want to provide more live visualizations for FRP:
My initial reaction to this tool was “neat!” But upon closer inspection, I’m not impressed:
They try to vertically align data points. This is a reasonable strategy but doesn’t scale super well for complex flows (maybe better layout could solve this). One idea I had would be to use color to show the lineage/causality of events.
One point they make is that more work needs to be done for combinators that modify flows in a less merge-y way, such as filters, folds, etc. Another point is what to do with a high-frequency event. One idea is sparklines, but there are many ways to compress data.
Additionally, I find it useful to have both representations of streams: a lazy infinite list, and also a single value that changes over time.
One final note is that they only show acyclic graphs, which is insufficient for my purposes as discussed elsewhere.
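The color/lineage idea could be prototyped by tagging every derived event with the id of the input event that caused it, so a debugger could color all descendants of one input the same. A hypothetical sketch (all names are my own):

```javascript
// Each event records the root id of the source event it descends from.
let nextId = 0;
const source = value => ({ id: nextId++, root: null, value });
const derive = (event, value) =>
  ({ id: nextId++, root: event.root ?? event.id, value });

const click = source({ x: 10 });          // a root input event
const mapped = derive(click, { x: 20 });  // same lineage as click
const merged = derive(mapped, 'done');    // still traceable back to click
// merged.root === click.id
```

A renderer could then assign one color per root id, making causality visible even after merges.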
These are fun. I like them a lot more than rxfiddle. They do a great job of showing higher-order streams. They don’t, however, show how streams combine to make other streams. They just show one output stream or stream of streams.
For completeness, here are some other links on visualizing FRP programs:
This is a problem I’ve struggled with for a year or so now. There are so many flavors of FRP out there, with different points on the design space, it’s hard to keep them straight.
Andre Staltz has some interesting things to say on this topic here and here.
Jonathan Edwards suggested I make a survey type doc for them. Neat idea. FRPZoo already exists but doesn’t quite capture what I want. Also, I don’t care so much to compare them all. I want to simply find the one model I want and go with it.
Seems like the most Conal-based FRP in JavaScript.
However, you add stream on-click handlers in UI elements, which isn’t for me.
All have the same singleton state + reducer architecture I don’t like.
However, it is possible to do CycleJS without the singleton state, as I did in this Flappy Bird about a year ago. The question is: 1) can it do cycles? and 2) how do we like the devtools?
I haven’t done much more with this, despite watching the videos above, but it remains the only library I still like, despite the annoying type quantifiers and ghcjs nonsense.
Paul Chiusano suggested I see if Purescript has anything I like in FRP land. Maybe it’d be easier than Reflex.
There are a few social things to figure out:
A few other next steps:
update my Workflowy with notes from today’s reading & watching
cyclejs with non-singleton state - can we do cycles? what do the devtools do (particularly as compared to singleton state)?
look for PureScript FRP library
continue learning Reflex. I found this tutorial
I wish that it was more obvious what I should work on next or focus on for this week. I am feeling a bit anxious about getting to a polished published piece of work that I can point to and be proud of, similar to my Visual History of Eve in that it is polished and got attention, but more impressive and meaningful, like a Staltz or BV article.
However, while today wasn’t particularly directed, it was definitely quite productive. It took me ~2 hours just to write up all that I did today. I just have to trust in the messiness and hope that a smaller piece of a project will appear and provide itself for polish and publish at some point.
As far as tomorrow goes, let’s start by re-reading this entry and going from there…
I started another essay today. This isn’t a great sign that I keep creating new essays, instead of editing the old ones…
I’d like to find a way to release something, which means I’ll have to cut down the scope as much as possible. The essay I worked on today does a good job of that. It’s just trying to make the point that live modify-ability is a great thing. It’s trying to sell the vision.
The other two essays - visual and casual - are more about why FP and FRP are necessary, and how we’ll make them work. I’m still not sure how to combine or separate those essays.
Even worse, I’ve been collecting a large number of notes to add to these essays, and I don’t know where they go…
I spent yesterday researching GraphQL for Dark. I combined a few pictures on their website to explain Prisma. It’s a little complicated, because it’s solving a GraphQL problem with more GraphQL, but I think this photo explains it pretty well:
One graphic to explain @prisma, the #GraphQL ORM-like layer pic.twitter.com/8jaq2ayUGK
— Steven Krouse (@stevekrouse) June 25, 2018
I was inspired by a wonderful conversation this morning with James Somers (of The Coming Software Apocalypse and many others) to write up my thoughts on how visualizations will save programming. I somewhat-recently read Fred Brooks’s No Silver Bullet, where he very eloquently argues that software is “inherently unvisualizable.” What makes Brooks so great is that he gives you such a wonderful structure for your arguments, including great metaphors. I actually can’t believe my metaphorical luck: in a relevant quote he refers to software as an “elephant,” which is perfect because it allows me to reference the story of the blind men and the elephant. I’m pretty pumped about the rough draft / outline I’ve come up with in the last 2 hours.
I’m not sure how this essay relates to Casual Programming. Maybe it’s a replacement? Maybe they’re somehow complementary?
I’m pumped to get this essay polished and published before continuing to try to understand Reflex, etc. I think it’s useful to take a break to explain the why of what I’m doing before spending more time doing. This post might be like my first (and favorite) Andre Staltz post, in that it’s a mile-marker on his way to creating CycleJS.
As I wrote in The Model-View Recursion Problem in FRP:
In “Experience Report: Functional Reactive Programming and the DOM”, authors describe a “recursion between FRP and the DOM” problem: “…the DOM tree in an FRP application can change in response to changes in FRP behaviors and events. However such new elements in the DOM tree may also produce new primitive event streams… which the FRP program needs to be able to react to. In other words, there is a cycle of dependencies…”
You start with two buttons. The first button doesn’t respond to clicks. But the second button, when clicked, adds two more buttons to the page, where again the first button doesn’t respond to clicks, but the second button can add more pairs of buttons on click.
… I believe … the solution is mutually recursive like the code in AFRP, and looks something like:
clicksCount = count $ merge $ map (\[b1, b2] -> b1.clicks) nestedButtons
nestedButtons = repeat clicksCount [button, button]
buttons = flatten nestedButtons
Turns out the Reflex library allows me to neatly solve this problem. The core of the code looks like:
rec
  countedClicks <- foldDyn (\a b -> if evenButton a then b + 2 else b) 2 clicks
  clicks <- listListView (ffor countedClicks (\n -> [1..n])) (\ _ x -> dynTextButton x)
And it only took me ~5 hours to write those two lines. No joke. I use a few helper functions of my own creation. You can see the full code here. And the live version here.
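For readers who don’t speak Reflex, the foldDyn line is essentially a reducer over click events. In plain JavaScript terms (what `evenButton` checks is my guess - I’m assuming even-numbered button labels are the pair-adding ones):

```javascript
// A plain-JS reading of the foldDyn line: the button count starts at 2,
// and each click on an even-numbered button adds 2 more buttons.
// Click events here are just the clicked button's label.
const evenButton = label => label % 2 === 0;
const step = (buttonCount, label) =>
  evenButton(label) ? buttonCount + 2 : buttonCount;

const clicks = [2, 1, 4]; // clicks on buttons 2, 1, then 4
const buttonCount = clicks.reduce(step, 2);
// 2 -> 4 (button 2 is even) -> 4 (button 1 isn't) -> 6 (button 4 is)
```

The hard part in Reflex isn’t the fold - it’s that `clicks` comes from the very buttons the fold creates, hence the `rec`.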
And that ~5 hours is not including the ~3 hours it took to get reflex set up on my chromebook. Cloud9, AWS, Digital Ocean, and Codeanywhere all didn’t work for different reasons. Eventually I got Google App Engine to work but I didn’t realize the version I used would delete everything I install over night. So I created a more permanent instance on Google Compute which works as well and lets me SSH in using the browser. I just have to remember to turn it off when I’m not using it (which I am pretty sure stops the billing), otherwise it costs $100 per month because I’m using a big box.
I’m getting better at emacs, which is fun. I installed Haskell syntax highlighting and am using the shell from within emacs to compile the code in a different window. And I have `sudo python -m SimpleHTTPServer 80` running so I can see the code after I compile it with `ghcjs`. I can’t use a lot of the really cool emacs Haskell extensions because I’m using `ghcjs` instead of `ghc` and `./try-reflex` instead of `cabal` or `stack`, but I’m doing OK now. It took me a while to get used to all the fucking types in reflex, but I’m getting the hang of it.
Next steps… I’m a little lost in terms of what’s next, because this problem, along with Reflex ToDoMVC minus a few features, seems to already be solved. One idea is that I can go through Reflex ToDoMVC, line by line, expression by expression, and type-annotate everything and add a bunch more comments. There are definitely parts of it I don’t understand, such as the `def` keyword, `&`, and `.~`.
In a vague sense, I’m working towards building a “CycleJS devtools dataflow diagram” experience for Reflex, so another step would be to draw pictures for each type, or a way that I’d display it on the screen and show the values flowing through. And another good step that comes to mind would be to figure out how to get ahold of Reflex internals so I can get a sense of the network and hook into the different event firings.
Jonathan Edwards very generously offered to give me some comments on what I’m currently working on if I would take the time to write it up. I spent most of Monday and Tuesday (like 10 hours) on it. My first attempt spent a lot of time explaining the context for why I found this particular sub-problem (the DOM recursion problem) of the sub-field (FRP) interesting.
But I did away with all that in the version I sent over, because that’s what researchers do: they explain their problem in its context and attack it. They don’t need to situate the problem in the broader context so much, because readers of technical papers often already know that context.
This exercise was challenging, fun, and very worthwhile, even before Jonathan gave me any comments. It’s great to be getting some clarity on what I’m doing. I am proud of the document I sent to Jonathan.
Jonathan Edwards made some great comments. He suggested that I polish it up and submit to the REBLS track of SPLASH 2018. I have until Aug 17th to submit. The workshop is Nov 4th. Dominique Devriese, one of the authors of the Experience Report: FRP and the DOM, is on the organizing committee. Maybe emailing him would be a good next step!
Armed with the newfound clarity around my research, I tried to explain what I’m working on to two other programmers yesterday. It didn’t go well. First of all, I wasn’t able to get across the why of my research at all. I didn’t motivate the problem well - neither of them agreed with me that a problem existed there in the first place. Secondly, I didn’t do a good job of defending the uninteresting parts of my solution. My solution is to use a form of functional reactive programming, so I need to explain why FRP is the right tool for the job.
So I spent ~4 hours today on a draft of [Casual Programming](./essays/casual) (a phrase Nikolas Martens used before me) to contextualize the problem I’m working on within the vision of the world I’m working towards. It’s a rough draft and needs a lot more fleshing out and examples. Also, I’m not entirely sure who the audience is, and what it’s explaining to them. It’s definitely not to be submitted to REBLS, because they already get FRP and the problem. So I guess the audience is programmers who are interested in improving programming but don’t know all the academic stuff. The goal, I guess, would be to produce a blog post of similar caliber to Andre Staltz or David Nolen.
Continue working on [Casual Programming](./essays/casual)
Continue working on the academic version of Casual Programming, which I’m calling The Model-View Recursion Problem in FRP. In particular, formalize things, probably by using an existing FRP library, maybe arrowized, to test things in, maybe in the browser.
Email Dominique Devriese
That’s it! My last list of todos has either been completed or no longer feels relevant.
Apologies for going dark here the last two weeks. On the bright side, I have been quite productive, but in an analog sort of way…
Two weeks ago, I spent 3ish hours reading Functional Reactive Animation on my phone. I was finally able to make it all the way through, reading every word, and understanding ~80%. I was very proud. But, I didn’t love reading on my phone - I wanted to print some papers out. I didn’t have a printer at home – I hate printers, they always break, and cost a bajillion dollars to buy ink for – so I went to kinkos to print out most of Conal’s papers:
But here’s where I felt like a stupid idiot: it cost $75 to print all that! So I learned from my mistake and went on Amazon to buy the printer with the cheapest cost per page, and found a Brother printer that fit the bill. It cost $70 itself, and will print ~500 pages before needing more ink. It’s a 10x price reduction!
Finally able to make it through – this was probably the 3rd or 4th time I attempted to read this paper.
As it turns out, Conal solved my bouncing ball problem in this paper. And I don’t mean that he gave me hints towards the solution. I mean that he actually gave the code that solves the exact same problem. And it’s beautiful:
It’s very similar in spirit to what I created. Of course, there are no graphs or visualizations in the code, which is where I want to take this. FRP lends itself really well to live values and graphs.
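For intuition, the continuous FRP semantics of a bouncing ball can be approximated with discrete Euler integration. This sketch is my own, not Conal’s code, and the constants are arbitrary:

```javascript
// Velocity is the integral of gravity plus a sign-flipping impulse at each
// floor collision; position is the integral of velocity.
const gravity = -9.8;
const dt = 0.01;

let y = 1.0;   // height above the floor
let yvel = 0;  // vertical velocity

for (let t = 0; t < 2; t += dt) {
  yvel += gravity * dt;   // integrate acceleration into velocity
  y += yvel * dt;         // integrate velocity into position
  if (y < 0) {            // collision: reflect position, damp velocity
    y = -y;
    yvel = -yvel * 0.8;
  }
}
// y stays non-negative: the ball bounces instead of falling through the floor
```

In the FRP version these are declarative integrals over behaviors rather than a mutable loop, which is exactly what makes them so visualizable.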
Also great. Also was able to read every word and understand closer to 90% this time.
Arrowized FRP is when you use signal combinators rather than first-class signals. This is supposed to help with spacetime leaks, as well as modularity. But of course, they are two sides of the same coin, and it’s easy enough to translate from one to the other.
The hardest parts of this definition are `xvel` and `yvel`, but it all makes sense when you read the paper and look at it for long enough:
Harder and slower. I probably got closer to 60% of this one, but I do feel like I have a much, much better grasp of what programmers mean when they model programming concepts with mathematical objects, and what semantics means.
Read like a tutorial, explaining how to code an example project. Interesting and helped cement the ideas of FRP. Fascinating how he explained the structure of signal combinations in pictures:
Another tutorial. A bit harder to follow.
I’ve been having trouble with this one, so I’ve been putting it off for now, but I hope to get to it soon. It’s a prerequisite for a number of other papers I want to read.
I skimmed through this one. I wasn’t terribly impressed with Elm. I can see now how it’s Haskell-lite and FRP-lite. It’s definitely more synchronous in spirit, taking some interesting ideas without super-solid foundations and running with them.
Really wonderful read - very easy for me to assimilate because I’m so familiar with the DOM. It also helped me see how much of the trouble of FRP comes from not properly and completely abstracting over the imperativity of the DOM.
They do a good job of specifying this problem:
“…the DOM tree in an FRP application can change in response to changes in FRP behaviors and events. However such new elements in the DOM tree may also produce new primitive event streams… which the FRP program needs to be able to react to. In other words, there is a cycle of dependencies…”
The easy solution to this is to give elements names, which we can then use to filter the event stream, but that doesn’t feel quite right.
Ultimately, the problem is referential transparency: I want to be able to create two buttons that look identical but when I put them next to each other, they emit two different event streams. How is this possible if they are identical? They are not identical! When they were put next to each other, the position of one of them was changed in the layout function, and thus they emit different event streams! Of course in order to preserve modularity (so we don’t only get event streams at the top level, when everything is being laid out), we need to use a trick I learned from Conal where you translate the event stream going into a “component” with the inverse of the translation you used on its position. (I haven’t entirely worked out this scheme, but I feel it’s promising.)
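Here’s a toy sketch of that trick (all names are hypothetical - this is just how I currently picture it, not Conal’s actual code):

```javascript
// A "component" only knows its own local coordinates.
const button = {
  hit: (x, y) => x >= 0 && x < 10 && y >= 0 && y < 10,
};

// Placing a component translates its picture by (dx, dy), and translates
// incoming events by the *inverse* offset, so the component itself stays
// referentially transparent: both placements below wrap the same value.
const at = (dx, dy) => (comp) => ({
  hit: (x, y) => comp.hit(x - dx, y - dy),
});

const left = at(0, 0)(button);
const right = at(20, 0)(button);

left.hit(5, 5);   // → true  (a global click lands on the left button)
right.hit(5, 5);  // → false
right.hit(25, 5); // → true  (25 - 20 = 5 is inside the button locally)
```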
From Jonathan Edwards. Great recommendation!
Be careful about the “romance of mathematics”. Its perfection is illusory. Yet despite this, I still feel like math is our best way to reduce incidental complexity.
This definitely made me even more skeptical of monads! He showed cases where people put the word “monad” in the title of their papers when it had no business being there - just to show how cool they were.
Fascinating to learn about. Moggi introduced monads to structure the denotational semantics of state and exceptions, but “a denotational semantics can be viewed as an interpreter written in a functional language” and “Wadler used monads to express the same programming language features that Moggi used monads to describe.”
I printed this one out and re-read it for the 7th time. It’s so fucking brilliant.
“Programming is more about the middle than the end, i.e., more about composition than output.” And quoting Roly, “What looks like imperative output can be just what you observe at the boundary between two subsystems.”
We need imperativity in our functional code (usually structured with monads) when we haven’t properly abstracted that particular API into a functional interface yet. For example, think about putChar
and getChar
from the terminal. How fucking imperative is that?! But FRP (even React) shows you that you don’t need to put and get characters from the screen imperatively. Instead you can describe the UI input and output declaratively - no monads required! The way to get rid of monads is by abstracting over imperativity (databases, file systems, operating systems, etc), one by one.
I enjoyed the beginning, but had trouble sticking with this one; I hope to get back to it soon. It’s so up my alley!
This was a fun one. Live programming provides a rubric through which to rate a programming system, but not to come up with a programming system.
This is one of my favorites! These recipe diagrams are just divine! He really makes the case that functional programming is about explicit data dependencies, and it’s gorgeous.
Really great. He goes even further than I do when thinking about this topic: instead of inputting booleans (or union types) into functions, we should add lambdas, which are semantic explanations of what we want done.
I’m continuing from the last todos in this journal (without the completed or no-longer-relevant ones):
Follow up with Jonathan Edwards with my research direction, current problem(s), next steps, ask for relevant papers, schedule next chat, and send him Woof code
Update the plan, possibly stealing stuff from the never published plan v6
I was a bit exhausted from coding last week, so I let myself relax with The Mythical Man Month. I had started it a few weeks ago, but I finished it on Friday.
First of all, it’s shocking to see how many key phrases in this field come from this book:
The Tar Pit (which is used by the famous “out of the tar pit” essay)
No Silver Bullet
The Mythical Man Month
Essential vs Accidental complexity
Much of the book is still relevant today, even though programming has changed a ton since the 60s, and of course much of it is just a hilarious reminder of how much has changed.
As you may expect, the “No Silver Bullet” essay was the most relevant to me, and I LOVED it. Very, very thought provoking.
The premise of the essay is this: there will be no single magical solution that will drastically improve the beast that is programming. Instead there will be a myriad of small incremental improvements that will over time tame the beast.
A wonderful analogy he uses to illustrate the premise: before germ theory, we held out hope for magical solutions, such as the fountain of youth, to solve all of our problems. Germ theory, however, told us that no magical solution was coming. Instead we face a long-term, tedious war that we will have to struggle through for decades and decades, making slow, slow progress. Yet once we accepted that, and stopped wasting all our time looking or waiting for magic, we were able to get to work on making the necessary incremental progress - and look where we are now.
Thus it’s time to change our focus from the accidental complexity (of hardware limitations, etc) to the essential complexity of ameliorating the inherent difficulties of software development, which are its complexity (so much more intricate than other engineering disciplines), changeability (software is changed rapidly as compared with cars, for example), and invisibility (not naturally mapped onto a 2d visual plane).
OK, I’ll agree with him that programming is indeed harder than other kinds of engineering because of its greater complexity and changeability. While fast changes are hard to deal with, version control in git and GitHub is pretty good, and I and others have a few ideas on how to improve this. Yet if software is indeed also inherently un-visualizable, then we are going to be hard-pressed to address all the complexity in code.
But of course I strongly disagree that software is inherently un-visualizable. I think if Brooks looked at the work of Bret Victor and Edward Tufte, he’d see that through innovative design work, we can increase the visualizability of just about anything. While we might not have great visualizations today, have hope! Many visualizations that seem obvious now were not obvious before they were invented – and it was in no small part data graphics that led the scientific journal revolution.
Additionally, I deem much of what Brooks refers to as essential difficulties in programming as accidental to how programming is done today, and not at all essential to solving the problems at hand.
Thought experiment: how long would it take for you to describe Twitter, the entire application, from the consumer’s perspective, down to every interaction and pixel? Maybe a couple dozen hours, maybe a hundred at the outside. Yet thousands of engineers are working on it every day. Why? 99% incidental complexity.
Of course, you may argue that to simply describe Twitter in English is not a fair test because English is not precise enough. So what language do we have that’s 100% essential complexity, 0% incidental? My thought is that math is just such a language. It talks about conceptual objects and relations, not at all about how such things are computed (unless that’s what you are talking about in math). And of course this is why I’m currently working on (as you can see in the last few entries below) describing UIs with only pure maths / piecewise functions with dynamic piece breaks.
100% agreed. So much of what stinks about programming today is the folder-and-file interface to code. It’s impossible to navigate, or to communicate how things are built and run. I think a key way to overcome this is to build a sounder model for computation that lends itself better to communication and understanding (such as one where definitions depend only on their explicit dependencies, and nothing else).
Composability is also key, particularly for reusing code and collaboration.
I haven’t thought a lot about OOP in a while. It’s always felt lame to me, particularly because I learned it in Java, where it was mostly about inheritance.
Yet there’s one idea about it, encapsulation, which is kinda neat. The idea is restricting the mutation of state to a higher level operation, hiding the implementation details, and also limiting the number of pathways that can change state. Of course, in practice, objects often become dumb data structures and people just mutate their fields willy nilly with no encapsulation.
I think what OOP is missing here is being explicit about who can access those pathways, and under which conditions. In other words, being clear about what can affect certain pieces of state and what it depends upon, as opposed to just the protocol by which it can be ordered about.
I can’t believe it’s taken me so long to read this. It’s just 1.5 pages. It’s so great. Makes me think that even more of what we do to program is also harmful in a similar way. As I’ve been saying recently, everything but mathematical expressions is harmful. A new thing that I’ve been against recently is dynamic and lexical scope. Names are super dumb. Content hashing is really the only way to go. I saw a fun Twitter argument on this this morning with Joe Armstrong.
I had a lot of fun tweeting with Michael Nielsen on Twitter via his tools_4_thought account: https://twitter.com/stevekrouse/status/1002604633209131009
I also had success tweeting a Fred Brooks quote: https://twitter.com/stevekrouse/status/1002614810780094469
I find it hilarious how great Twitter is for this community!
After banging my head against the wall for an hour or so trying to get Newton’s method to work in JavaScript, I bailed for the Wolfram Language, which, as you’d expect, has excellent primitives for solving equations, both analytically and numerically.
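For the record, a minimal Newton’s method in JavaScript (a clean-room sketch, not the attempt that failed on me) is only a few lines:

```javascript
// Newton's method: repeatedly improve a guess t with t - f(t)/f'(t).
function newton(f, df, t0, iters = 50) {
  let t = t0;
  for (let i = 0; i < iters; i++) t = t - f(t) / df(t);
  return t;
}

// e.g. solve t^2 - 2 = 0 starting from t = 1
const root = newton((t) => t * t - 2, (t) => 2 * t, 1);
// root ≈ 1.41421356 (i.e. sqrt(2))
```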
However, I did get really fed up with constantly re-evaluating old cells to make their definitions update. This was glitchy some of the time too, making me think there were bugs when really I was just looking at older versions of expressions.
Anyways, it took me 3-5 hours, but I finally got a piecewise, pure, mostly non-recursive, infinitely expandable function for a bouncing ball. Let’s look at the code:
sim[0] = <|start -> 0, end -> 0, vel -> Function[t, 0], pos -> Function[t, 0] |>
sim[n_] := sim[n] =
With[{start1 := sim[n - 1][end]},
With[{previousV := sim[n - 1][vel][start1]},
With[{vel1 := Function[t, -previousV*0.9 + -10*(t-start1)]},
With[{startPos := sim[n - 1][pos][start1]},
With[{pos1 := Function[t, startPos + ((vel1[t] + vel1[start1])/2)*(t -start1)]},
With[{end1 := First[t/.Solve[{pos1[t] == -473 && t > start1}], t]},
<|start -> start1, end -> end1, vel -> vel1, pos -> pos1 |>
]
]
]
]
]
]
piece := Function[n, {sim[n][pos][t], t < sim[n][end]}]
Plot[
{Piecewise[Map[piece, {1,2,3,4,5, 6, 7, 8}]]},
{t, 0, 100}
]
Which gives us the glorious result:
This code isn’t at all readable, even by me, a few hours after I wrote it.
First, a top-level explanation: the code is creating a piecewise function. Every time the ball hits the bottom (-473), we need to create a new function to represent its motion post-bounce.
What’s tricky is that each piece of the function depends on the previous piece’s end values (when the ball bounces, we need its old position and velocity). So we construct this recursive function sim
which will generate the pieces of each bounce.
sim[0] = <|start -> 0, end -> 0, vel -> Function[t, 0], pos -> Function[t, 0] |>
sim[0]
represents the initial conditions, lasting 0 seconds (as the start and end times are the same).
sim[n_] := sim[n] =
We define sim[n]
here, and also let Wolfram know to [memoize the results of prior runs of sim so it doesn’t need to recompute them](http://reference.wolfram.com/language/tutorial/FunctionsThatRememberValuesTheyHaveFound.html).
With[{start1 := sim[n - 1][end]},
Here I begin a nested series of With
statements. They are nested because the computations depend on each other and Wolfram With
statements aren’t smart enough to figure out how to handle that.
Some of the variables defined in the With
statements, such as start1
above, are eventually placed into the association (it’s like a dictionary or object) below. I didn’t name this variable start
(as you would in another language), because Wolfram would then think I wanted the key to be the value of the start variable, such as <| 4.34 -> 4.34, ... |>
– so that’s why there’s a 1
after certain variable names.
And now to the meat of this line: it says that the start of the current piece is the end of the prior piece. Pretty straightforward.
With[{previousV := sim[n - 1][vel][start1]},
Here we calculate the final velocity of the previous piece, which we do by evaluating its velocity function on the starting time of this piece (which as we calculated above is the ending time of the previous piece).
With[{vel1 := Function[t, -previousV*0.9 + -10*(t-start1)]},
Here we calculate the velocity of this piece:
With[{startPos := sim[n - 1][pos][start1]},
The starting position of this piece is the position of the previous piece at the end time of that piece (which is the starting time of this piece). This is the same way we found previousV
.
With[{pos1 := Function[t, startPos + ((vel1[t] + vel1[start1])/2)*(t -start1)]},
Then we calculate position:
With[{end1 := First[t/.Solve[{pos1[t] == -473 && t > start1}], t]},
And here we calculate the time at which this piece hits the floor, which I set here to be -473. The meat of this line Solve[{pos1[t] == -473 && t > start1}]
simply asks to solve for t
where the position is at the floor and the time is after the starting time of this piece. First[t/.
is just extracting the computed solution out of the result.
<|start -> start1, end -> end1, vel -> vel1, pos -> pos1 |>
And here we boringly compile the results of our hard work into a single data structure.
piece := Function[n, {sim[n][pos][t], t < sim[n][end]}]
This function turns the nth simulation into an array suitable for the Piecewise
function. Its first argument is the function to be plotted, and its second argument is the domain over which it should be plotted. In this case, the first argument is simply the position, and the second argument is that the time shall be less than the ending time of the piece.
Plot[
{Piecewise[Map[piece, {1,2,3,4,5, 6, 7, 8}]]},
{t, 0, 100}
]
And here we plot 8 pieces of the simulation, resulting in the beautiful graph you saw above.
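For readers without Wolfram handy, here’s a rough JavaScript port of sim (my own translation - a closed-form quadratic solve stands in for Wolfram’s Solve):

```javascript
const FLOOR = -473;
const memo = [{ start: 0, end: 0, vel: (t) => 0, pos: (t) => 0 }];

function sim(n) {
  if (memo[n]) return memo[n]; // memoize, like sim[n_] := sim[n] = ...
  const prev = sim(n - 1);
  const start = prev.end;
  const v0 = prev.vel(start); // final velocity of the previous piece
  const p0 = prev.pos(start); // final position of the previous piece
  const vel = (t) => -v0 * 0.9 + -10 * (t - start);
  const pos = (t) => p0 + ((vel(t) + vel(start)) / 2) * (t - start);
  // pos(t) expands to p0 - 0.9*v0*dt - 5*dt^2 with dt = t - start,
  // so pos(t) == FLOOR is a quadratic we can solve directly:
  const b = 0.9 * v0;
  const dt = (-b + Math.sqrt(b * b - 20 * (FLOOR - p0))) / 10;
  return (memo[n] = { start, end: start + dt, vel, pos });
}

sim(1).end;             // ≈ 9.726 (time of the first bounce)
sim(1).pos(sim(1).end); // ≈ -473  (the ball is at the floor)
```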
I’ve been talking a lot about piecewise functions these last few days, to my mom and my girlfriend, and they both think the idea is kinda neat, but don’t understand why I’m doing this. I don’t have a great defense. Instead of spending 4ish hours making this graph, I could’ve done the same thing in WoofJS in 9 lines of very readable code in under 60 seconds (I timed it):
var circle = new Circle({})
circle.v = 0
forever(() => {
circle.v--
circle.y += circle.v
if (circle.y < minY) {
circle.v = circle.v *-0.9
}
})
I want that when you define something, it stays defined. I’ve had enough of these mutable reference cells that you can read from and write to at any time. I’ve had enough of playing Turing Machine!
Redux isn’t all that different from the global state we had before. Sure, it’s easily serializable, which gives you undo and redo, and time-travel debugging, but that’s not all I want.
I want to be able to look at a piece of code and know that the only things that can affect that code are written in it. A piece of code should depend on its dependencies. Nothing else should be able to change it. In this way you could understand a small section of your code without having to understand the whole thing. Without this we are doomed to read all our code if we want to understand what’s going on.
I don’t know much maths beyond high school, so I wonder if my salvation lies in a more powerful structure than a piecewise function. If any math geeks out there are reading this, please let me know if you have any ideas!
Possibly these explicit formulas and the Logo/Woof implicit, recursive formulas are just two sides of the same coin, and we could allow programmers to write in the more intuitive style (change the position by X every Y) and then integrate that to get the explicit form. (It seems like there are ways to get an explicit formula for any implicit or recursive formula, so this seems likely…)
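A quick numerical sanity check of that hunch (a toy example of my own): the tick-by-tick rule and its integrated closed form agree.

```javascript
// Implicit/recursive (Logo/Woof) style: nudge the state every tick.
const g = -10, dt = 0.001, steps = 2000; // simulate 2 seconds
let v = 0, p = 0;
for (let i = 0; i < steps; i++) {
  v += g * dt;
  p += v * dt;
}

// Explicit style: integrate "v changes by g each second" twice
// to get p(t) = (g / 2) * t^2.
const explicit = (g / 2) * 2 * 2; // = -20

// p ≈ explicit, up to the integration step error of the tick version
```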
In the same way, I wonder if we can take a codebase where any mutable cell can be changed by anyone, statically find all the places in the code that could possibly have an effect on a given value, and bring them all together so you can see how things are tangled. (I am doubtful of this because 1. which values are being modified can change dynamically, so we can’t really assess this well statically, and 2. I bet the result would be such a gnarled mess that it’s not really useful except to show us the folly of our entangled ways.)
1 - Control flow of any kind seems like a terrible idea all around. You should just describe the system and the system shall determine which lines of code to run when. Non-monadic Haskell does a great job of doing away with this.
2 - Mutable variables seem like a terrible idea, as discussed above.
3 - The global state reducer over actions also feels like a terrible idea. I don’t want anything to be able to emit an action that can change any part of the system. If you look at a piece of state, how are you to know which actions could affect it? I also feel like it’s too much power, allowing the system to change itself based on its own past value.
But how are these piecewise functions any better, you ask? Great question! As you noticed, when a new piece of the piecewise function is being created, it depends on the values of the last piece. It has the entire state of the app to do with as it wills. Yeah, I didn’t like this either, but I wasn’t able to think up a way out of it. We really need to know the final velocity / position of the prior piece to start the next bounce. But I’m not terribly beat up about it: maybe allowing pieces to view prior pieces on piece transitions isn’t so bad…?
As far as the first criticism of Redux from above - that the dependencies of a piece of state aren’t listed with the state - I haven’t yet dealt with that, because I haven’t built the prototype to also work with unpredictable input, such as mouse clicks. But the idea is to allow mouse clicks to trigger new pieces to be generated at the time of the mouse click. Only prototyping will tell if this new framework has any merits at all.
Part of me wonders if I need to build an interface for this prototype, because existing ones (such as Wolfram and Woof) just don’t cut it. And part of the whole point is that I’m just constructing transitions between pure mathematical functions, so shouldn’t it be super easy to visualize the state of all my functions as graphs?!
At the very least, I’m glad I was finally able to articulate above my dislike of the current frameworks: you can’t tell what’s going on at X by just looking at X and asking for its dependencies.
This closes #66, get /journal entries into /log.
I did it in ObservableHQ here, which was really fun and frustrating, both!
I had to call someImmutableValue.toJS()
just to see what was happening. Also, I’d sometimes forget about it and use the old syntax, and things would silently break: very frustrating! Also a bit annoying when you’re doing a.something(b)
and you want to switch the arguments - lots of typing! Probably lost an hour on ImmutableJS alone, but it was definitely worth it for the API, speed, and not having to worry about mutability ever. someFunction(someLIst.get(5))
What’s the point of putting my code in cells when a whole function is just one cell?! You lose all the benefits of the cells and intermediate results, because functions are just a textbox! Not entirely sure how I’d fix this, but I have an intuition that it should be possible within this paradigm.
It’s interesting to note how few operations I used:
map
, filter
, some
, max
, find
and other Immutable list and map operators.
For regex, I used regexr.com to live-show me the output right on the data I was using.
For date parsing and diffing, I was constantly screwing this up and had to instrument my own code by hand to figure out in which ways. It definitely feels like there should be a regexr-like environment for this too.
Ditto for the list operations: no reason that any of these shouldn’t show you the results in a more immediate way, and where you can see the intermediate values too of course. As said above, not 100% sure how to do this but I feel like it’s not too tough.
As for importing and exporting data, not difficult once you find the example somewhere. I think those sorts of primitives should be easy at hand.
Speaking of easy at hand, I spent a TON of time switching tabs (ObservableHQ doesn’t work great in half-screen) to look at the Immutable, Moment, simple-markdown, and various other documentation. It was annoying. They really need autocomplete on variable properties – but also with documentation, preferably with examples!
And of course, I think everything would’ve been better with stronger types. I wasted so much time thinking that everything was OK when it wasn’t, banging my head against the wall. I want to know when things fail!
And if you can do it with no syntax errors, that’s pretty amazing too!
This all makes me think that there’s a neat prototype opportunity here. All my prior prototype ideas in this space have been very visually focused, almost like spreadsheets - very much like Glen’s FlowSheets. But what if we don’t need the interface to be a spreadsheet (because variables (as opposed to column-and-row references) and nested data structures are nice), but just a vertical series of cells (and cells within cells within cells for nested levels of abstraction)? Syntax errors aren’t so bad (considering I haven’t yet met a projectional editor I prefer, except maybe Scratch), but let’s definitely get some strong types, solid primitives (data structures, strings, dates, HTTP, importing), and inline documentation.
I’ve had this intuition for a while now: wouldn’t it be neat if you could somehow “copy-and-paste” your physics equations from your textbook and drop them into your computer and that be the code?
Let’s describe a bouncing ball.
Works great to start:
const a = (t) => -9.8 /* meters per second sq */ * 10 /* px per meter */
const v0 = 0
const v1 = (t) => v0 + a(t)*t
const p0 = 0
const p1 = (t) => p0 + (((v1(t)+v0)/2)*t)
But what about the bounce? There are two problems:
1) We need to reverse the velocity (multiply it by -1)
2) We need to know at what time the bounce occurs
How about a piece-wise defined function? For starters, let’s just hardcode the bounce intervals at 3, and 9:
const t_bottom1 = 3 // p1(t) === 0, solve for t
const v2 = (t) => -v1(t_bottom1) + a(t)*(t - t_bottom1)
const p2 = (t) => p1(t_bottom1) + ((v2(t) + v2(t_bottom1))/2)*(t - t_bottom1)
const t_bottom2 = 9 // p2(t) === 0, solve for t
const v3 = (t) => -v2(t_bottom2) + a(t)*(t - t_bottom2)
const p3 = (t) => p2(t_bottom2) + ((v3(t) + v3(t_bottom2))/2)*(t - t_bottom2)
const p = (t) => {
if (t < t_bottom1) {
return p1(t)
} else if (t < t_bottom2) {
return p2(t)
} else {
return p3(t)
}
}
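Instead of hardcoding the bounce times, a simple numeric root-finder could compute them; here’s a sketch using bisection (the helper names are mine, not part of the original code):

```javascript
// Find t with f(t) == 0, assuming f changes sign between lo and hi.
function bisect(f, lo, hi, eps = 1e-9) {
  while (hi - lo > eps) {
    const mid = (lo + hi) / 2;
    if (f(lo) * f(mid) <= 0) hi = mid; // root is in the left half
    else lo = mid;                     // root is in the right half
  }
  return (lo + hi) / 2;
}

// e.g. a ball dropped from height 100: p(t) = 100 - 4.9 * t^2
const tHit = bisect((t) => 100 - 4.9 * t * t, 0, 10);
// tHit ≈ 4.5175 (= sqrt(100 / 4.9))
```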
Ok, let’s abstract this so the ball can bounce forever-ish:
const eq0 = {
t0: -1,
v: (t) => 0,
p: (t) => 0
}
var equations = [eq0]
const intervals = [0].concat(range(3,1000, 6)) // [0, 3, 9, 15, ...] (range(start, end, step) as in lodash)
intervals.forEach((t0, i) => {
var eq = {}
const eq_last = equations[i]
eq.v_last = eq_last.v(t0)
eq.p_last = eq_last.p(t0)
eq.t0 = t0
eq.a = (t) => -9.8 /* meters per second sq */ * 10 /* px per meter */
eq.v = (t) => -eq.v_last + eq.a(t)*(t - t0) // NEXT: can't decrease by 0.9 because then 6 second hardcoded intervals are fucked up
// THUS: how do make dynamic
// MAYBE use something like http://algebra.js.org/ to solve the equations
// WHICH would eventually lead to allowing mouse click generated invervals
eq.p = (t) => eq.p_last + (((eq.v(t)+eq.v(t0))/2)*(t - t0))
equations.push(eq)
})
function eqI(t) {
const eqNext = equations.find(eq => t < eq.t0)
if (!eqNext) { return equations.length-1 }
const eqNextIndex = equations.indexOf(eqNext)
return eqNextIndex - 1
}
const p = (t) => {
const eq = equations[eqI(t)]
return eq.p(t)
}
Full code to be found here.
As said in the code, next steps are:
1) make the intervals dynamic (so we can decrease the speed by 10% each bounce), maybe using something like http://algebra.js.org/ to solve the equations
2) see if we can additionally generate intervals by non-predictable user input, such as mouse clicks
3) …turn into flappy bird?
Some notes:
Friday night, Twitter sent me a notification to look at this tweet:
Recording of our presentation on Artist-Centric Programming Tools at #chi2018 https://t.co/nqIwhoOc39
— Jennifer Jacobs (@jsquare) May 18, 2018
First of all, I’m amazed and super grateful that Twitter was smart enough to let me know about this.
I only got part-way into the thesis, but I’m pumped to finish it. I got distracted by one of her references, InterState…
In an earlier entry, I wrote:
how do Facebook’s Origami, CycleJS devtools, and statecharts somehow fit together?
This prototype seems to answer that question! I was so excited that I sent it over to the StateCharts community!
Really wonderful! I missed the beauty here in the first watching. Notes here.
As I was falling asleep last night, I thought about the core problems with existing languages:
Redux, Elm, and CycleJS are a pain to code in. Small changes require big refactors. More importantly, emitting actions that can edit state in any way feels similar to the global mutable state we started with, that Andre Staltz argues against in his arrow post.
Nothing handles IO (and async) in an elegant way. Monads and streams kinda stink and feel imperative the same way Redux does. (Conal agrees.)
In order to build a solution to these problems, it also needs to be somehow interoperable with existing web frameworks. Relevant questions include whether it’s interpreted or compiled, etc, etc.
Another open question is what the backend should look like in order to handle authentication, permissioning, persistence, etc.
Another open question: if you want to add physics to an object in your code, why not add the equations from your physics class to your program? In other words, how mathematical and high-level can we get? What’s the highest level we can get to and still have something computable (with good performance) at the end? Additionally, can we make a declarative language that’s readable and debuggable? (Seymour Papert’s distinction between LOGO’s first-person FORWARD 1, RIGHT 1
and math’s third-person x^2 + y^2 = z^2
, and how they are really just two sides of the same coin via the derivative, feels relevant here.)
Soundness of the type system from a Scott, Strachey, and Elliott perspective also feels relevant here. I don’t want leaky or infinitely complex abstractions.
I’m not even sure if these problems are truly problems, or just things I don’t yet understand. Yet it feels very relevant to continue down the Conal Elliott FRP rabbit hole. I hope that the more I understand Elliott’s (and also Hudak’s arrowized) FRP, the more I’ll be able to explain why I dislike Redux, Elm, and CycleJS. (Yet I am a bit nervous that I won’t like any implementation of FRP out there. What then?!)
I still haven’t found a better way to manage these than listing them every log entry. Here are the remaining ones:
Besides the readings below, I cleaned up my inbox, which I haven’t done in about a week, so that feels good to be back in integrity.
I meant to reach out to Jonathan Edwards today (but didn’t), and found myself on his twitter, where I found some amazing gems:
I spent Monday - Wednesday last week working mostly on Repl.it. It was quite frustrating, but in a motivating way: there’s so much about programming that needs fixing!
To give you a taste: I’ve worked 23 hours at Repl.it. The first 4.5 hours were spent setting up my dev environment, then another 2.25 hours debugging my dev environment, and ~2 hours fixing linting or Flow type errors. And I believe most of the productive hours were wasted waiting for webpack to recompile all my code - sometimes I’d have to wait 10-20 seconds to see even a small change. Very frustrating and very slow! And, of course, so much of my time was spent understanding complex code, React/Redux abstractions, and funneling scope and state, etc, etc. So much garbage!
For the 10ish hours I wanted to work there, I clearly was not being productive enough to justify them getting me up to speed on their stack. As of yesterday, we parted ways, so now I’m down to the one part-time gig with Dark (and another one in the works, but it’s touch-and-go if it’ll close). As explained above, this work frustrated me, so I’m happy to have it off my plate - but I also got enough frustration to re-spark some motivation in a positive way.
I read from 10am to 3pm. It was interesting but also difficult to stay focused.
I wasn’t so motivated on Friday, so I mostly read Understanding Comics (recommended by Conal and BV), and did some part-time work for Dark. I also had a group call with Dennis Heihoff, Shaun Williams, and Ivan Reese, which was fun.
I’ve realized that “devtools” is a better way to describe the field I’m in than “programming languages”. Pretty obvious in retrospect, but it took me a while to realize this.
I haven’t been super productive or passionate about stuff in a little while. It’s been a bit of a slog getting myself to do research.
Part of the reason is that I’ve become obsessed with The Wheel of Time series on audiobook and am making my way through it. Almost to the end!
I find myself increasingly curious about foundations, mathematics, precision, abstraction, etc. I’ve always been on the fringes of the strong-types community and proving program correctness. These things were what the professors were up to at Penn while I was there. But I’ve never seen the need to dive in before. I think it was Conal’s comments in the Haskellcast video that got me:
1) Those that don’t understand FRP are doomed to re-create inferior versions of it
2) All the FRP implementations out there are more complex than their creators realize, because they broke abstractions
3) You don’t truly understand how good or complex a design is until you describe it mathematically, precisely
Music to Paul Chiusano’s ears, I’m sure.
Why am I so insistent that programming can be better? It’s hard to put into words. As I was falling asleep a few nights ago, I was struck by how two words kept coming to mind: “semantic” and “canonical”. But what does that even mean? I have a fuzzy image of mathematical expressions in the sky, with a white background behind them, and you can select different parts of the expression and get interesting information about each node, because it’s a living object that can be manipulated…
I started the morning by reading the paper on the Flapjax language/library. It seems to do FRP in much the same way that CycleJS does: stream combinators. It’s kinda neat in how it makes things higher level, but just as with CycleJS, I found it quite difficult to parse the code. However, it is crazy to think that this pre-dated CycleJS by a decade… I guess this goes to Bret’s point about aiming higher than “improving programming.”
I’ve gotten tired of Haskell people talking about “real FRP”, so I spent a few hours re-reading the Fran tutorial and re-watching two of Conal’s talks on it (1 and 2). Finally, I feel like I have a clear understanding of what people mean by “real FRP.” I don’t think I had seen enough of the “fake FRP” when I first watched the video to understand the distinction:
Real FRP is (mostly) about continuous time, in the same way that vector graphics are about continuous space. In other words, “resolution independent time”, or “time as a function from the real numbers”.
Real FRP is also about denotational design, which I feel slightly closer to grokking, but am likely still a hundred hours of Haskell away from. One day I’ll memorize the Typeclassopedia and everything will make sense.
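To make “resolution independent time” concrete for myself, here’s a tiny sketch of the Fran-style denotation in TypeScript. All the names (`Behavior`, `time`, `mapB`, `wave`) are my own illustrations, not any real library’s API — the point is just that a Behavior is a value at *every* point in continuous time, the way a vector image is a color at every point in continuous space.

```typescript
// A Behavior is denotationally just a function from continuous time to a value.
type Time = number;
type Behavior<A> = (t: Time) => A;

// Time itself is a behavior.
const time: Behavior<Time> = (t) => t;

// Behaviors combine pointwise, so sampling resolution never enters the semantics.
const mapB = <A, B>(f: (a: A) => B, b: Behavior<A>): Behavior<B> =>
  (t) => f(b(t));

// Example: a value oscillating with continuous time; sample it wherever you like.
const wave: Behavior<number> = mapB(Math.sin, time);

console.log(wave(0));          // sample at t = 0
console.log(wave(Math.PI / 2)); // sample at t = π/2
```

Any implementation then becomes an approximation of this denotation, rather than the other way around.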
This is a JS library for “real FRP”, and I’m excited to play with it a bit.
I’ve been seeing his name pop up a lot recently, so I’m excited to check out this Live Programming Experience video. Maybe I’ll reach out too…
I’m excited to check out Staltz’s new article on Callbacks.
Another thing I want to do is continue to investigate how far I can push Facebook’s Origami. There are a few interesting threads in the FB Group I want to either look into or reply to, along with posting a question myself about updating complex centralized state.
He’s also been saying interesting things on Twitter recently, so I’ve been meaning to reach out…
Apparently, this is an interesting family of languages, including Lucid and Esterel, etc. I wonder where to start… This article on Lucid Synchrone looks good.
As in re-write the logic for a game I made yesterday for fun in WoofJS. Maybe I could re-write it in Hareactive or just prototype it on paper (without a library to back it up).
It’s a bit crazy to put them in a log like this… Maybe /now or /todo would make sense…
Inspired by my conversation with Brent Yorgey yesterday, I spent ~2 hours reading a 30-page survey paper on data flow programming languages, with an emphasis on visual data flow languages, such as LabVIEW. It was fascinating!
A good next step would be to find a similar kind of survey paper but for functional reactive programming. While I could just start with Conal’s Fran paper, I expect it’d be more fun and a better use of my time to get a more modern and broad perspective – if it could include ReactJS and other JS libraries too, even better!
Brent recommended a great article in a similar spirit to my “booleans are too generic” idea. The main idea here is that truth shouldn’t be a data type, but should be a proof, as it is in math.
That’s a bit different from my idea which proposes to create a lot of new, more specific data types to replace True and False, such as Equal and NotEqual, which would be pattern matched against, not branched at like an if-statement.
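Here’s a rough sketch of what I mean, in TypeScript. `Equality`, `Equal`, `NotEqual`, `check`, and `describe` are all illustrative names I’m making up for this note — the idea is that a comparison yields a specific data type you pattern match on, carrying relevant evidence, rather than a bare true/false you branch at.

```typescript
// A specific result type instead of a boolean: each case carries the
// evidence relevant to it, which a plain true/false throws away.
type Equality<A> =
  | { kind: "Equal"; value: A }
  | { kind: "NotEqual"; left: A; right: A };

const check = <A>(x: A, y: A): Equality<A> =>
  x === y
    ? { kind: "Equal", value: x }
    : { kind: "NotEqual", left: x, right: y };

// "Pattern matching" via a discriminated union, rather than an if-statement.
const describe = (e: Equality<number>): string => {
  switch (e.kind) {
    case "Equal":
      return `matched: ${e.value}`;
    case "NotEqual":
      return `differed: ${e.left} vs ${e.right}`;
  }
};

console.log(describe(check(2 + 2, 4))); // matched: 4
```

It’s not a proof the way the article means, but it moves in the same direction: the result of a comparison says *what* it knows, not just yes or no.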
I didn’t have time on Friday last week to write up my log entry, so here it is now…
On Friday I ended up opening more links than I had time to actually read.
Fascinating conversation on Friday of last week. Notes in this google doc. Maybe I’ll turn it into a proper article. It’s fascinating that there’s so much progress in this space in both the past and in industries that we ignore.
He sent me two amazing videos of the NeXTSTEP development environments that Steve Jobs made. Unbelievable. It’s like Smalltalk. Really makes you scratch your head as to why we don’t have development environments like this today.
I had a great call with Amjad of Repl.it last Friday as well, in part thanks to my Eve article. I’m going to follow up with a few ideas for improving their IDE interface, and then possibly begin developing for them part-time. I’m very excited about this! It makes me feel much more secure to have a second part-time gig running, and it’s starting to feel like these part-time gigs at developer-tools companies are repeatably findable. Content marketing - who knew?
Today I wasn’t really in the mood to do much hard thinking, so I spent a few hours in my inbox catching up on things, doing random chores around the house, and finally bringing myself to do some reading relevant to the research topics I’ve been thinking about lately.
I started with reading Ian Horrock’s UI with Statecharts, but took a break to go directly to the source (Harel’s Statecharts), and then went back to Ian’s book. I read parts and skimmed parts of both. They seemed semi-relevant to what I’m trying to do. As far as I could see, neither referenced the management of complex, nested state. They seemed to assume that non-state data (such as the currently displayed value on the calculator screen) was stored somewhere magically.
I then read a series of Andre Staltz posts. It turns out my link to his post on Friday was actually the incorrect post. I meant this one. He had a lot of other interesting posts, but I found myself not excited about the conclusions he drew. In particular, I didn’t love his conclusions about hot vs. cold observables, although I couldn’t find any fault with his logic. It’d be a bummer if there wasn’t a more elegant way to model this distinction. It really seems like an incidental complexity type of thing. I also came to realize that I find stream-based programming a bit annoying. While it’s kinda cool, I wonder if it’s overly abstract, the way Haskell can get sometimes.
I then re-read (for the 10th time) Elm’s Farewell to FRP. Again, an article that feels profound but that I don’t like for some reason: there are a lot of concepts in it that I don’t quite understand, yet I don’t like the conclusions it reaches. It feels similar to when I had to learn AngularJS. I don’t really know what the concepts mean, but I can tell somehow that they are less elegant than they could be.
From there I found my way to Lucid Synchrone, but stopped after a few paragraphs, and just like with statecharts decided I wanted to go to the source, so I started reading Lucid. I’m actually a little shocked that I haven’t read this before. There’s so much here that I was slowly coming to on my own, but it feels like they’ve brought it all together for me. And, of course, I’ve heard Bret Victor is a big fan of it, although I wasn’t able to find his thoughts on it anywhere on the internet. My next steps are definitely to read through this, and maybe the Synchrone Experiment.
And as much as I want to avoid it, it seems like I may have to get my hands dirty with all this theoretical stuff. Read the Fran paper, other Conal Elliott stuff, maybe read Evan’s Elm thesis, as well as other Elm blogs, etc. I imagine all of this stuff could provide interesting conversation topics with Brent next week…
Over the last week, I’ve become smitten with Facebook’s Origami prototyping tool, which was originally built on top of Apple’s Quartz Composer, but is now a standalone Mac app. It surprised me because I don’t usually expect much from node-and-wire coding environments. I think the main feature that got me is that you can see the live-updating intermediate values of all computations. It’s amazing. It makes looking at static code unbearable. Additionally: there’s no run button - it’s hot-reloading. And they have a great way to add nodes (they call them “patches”) via the keyboard.
This morning, I had the idea of prototyping the Origami UI in Woof. I had a fun few hours doing it: http://woofjs.com/create#origami
I had to come up with techniques for managing lots of things I’d never had to do before in Woof, mostly around nested components and layout. Stuff that HTML and CSS does for you in annoying ways. Turns out it’s pretty annoying to calculate widths and paddings manually too…
The idea in my head is that you have a dependency graph of all the values. The trick, of course, is sequential evaluation and events. Mostly events.
These are simple. They should always have live-updated values.
They are normally things you’d want nested internally (which is a common complaint of node-and-wire tools), but when “processing” a thing, you want vertical or horizontal sequencing, such as in Elm.
I think we can do away with it entirely. If you actually need it, you can model it explicitly as I show in this demo.
Can we do without events and just have reactive values? Clicks are a reactive list of all clicks, and you can get the last one and check its timestamp a la http://futureofcoding.org/prototypes/streamsheets/.
Potentially, this will be too conceptually difficult, for the same reasons it was to do CycleJS Flappy Bird without centralized state, and we’ll need a reducer. But maybe nested state via statecharts could save the day.
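The “clicks as a reactive list” idea can be sketched in a few lines of TypeScript. This is my own toy model, not any library’s API (`Stamped`, `Events`, `lastBefore`, and the sample `clicks` data are all assumptions for illustration): an event stream is just a time-ordered array of stamped occurrences, and “the last click” becomes an ordinary query over that array.

```typescript
// An event occurrence is a value stamped with the time it happened.
type Time = number;
interface Stamped<A> { time: Time; value: A; }
type Events<A> = Stamped<A>[]; // assumed sorted ascending by time

// The most recent occurrence at or before time t, if any.
const lastBefore = <A>(t: Time, evs: Events<A>): Stamped<A> | undefined => {
  const prior = evs.filter((e) => e.time <= t);
  return prior[prior.length - 1];
};

// Hypothetical click history for illustration.
const clicks: Events<{ x: number; y: number }> = [
  { time: 0.5, value: { x: 10, y: 20 } },
  { time: 1.2, value: { x: 30, y: 40 } },
  { time: 3.0, value: { x: 5, y: 5 } },
];

console.log(lastBefore(2.0, clicks)); // the click at t = 1.2
```

Whether this stays tractable once many such queries depend on each other is exactly the question in my list below.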
1 - What do I not like about cyclejs and elm (and what’s the difference again)?
1a - Is the main reason that it’s terrible to code in streams and elm, and thus streamsheets?
1a1 - Idea to make cycle better: swap it: make value be values and have a history function that gets the stream on it?
2 - Is the main difference explicit modeling of time and explicitly listing all state?
2a - We don’t really need explicitly listing state, just reactively defined
2a1 - What is reactively defined state? Won’t it eventually bleed into everything depending on everything else, and we’re back where we started? In other words, the reducer thingy seems stinky for some reason … it’s annoying to use, but making it easier to use just gets rid of the benefits of using it
2a2 - Can we make a calculus for how we want this to work to simplify things? Such as what I was trying to do with Reactive Woof … Or is this a cyclejs data structure?
3 - How do statecharts keep track of state? … go to Ian’s book
3a - Can you do statecharts without events?
4 - Difference between hot and cold streams?
5 - At the end of the day, can we really realize my perception of Andre’s vision in this article that all things contain all the ways to modify that thing? Or is it impossible because we want to branch in too many ways in our code, not sequentially, but just with nested cases? Would it at all be possible to define things in one place only? Or would that just hurt our brains even if we could do it?
I was a little surprised to see my Eve project stay on the front page for basically a whole day! It got 150 points on HN and up to #6. I got over 10k people on the page, and around 500 on my website homepage. However, I didn’t really get anything material out of this experiment.
It does pass my “do they spend more time on it than you do” test, because I only spent 10-ish hours here, and even if the average time on the page was 1 minute, that’s over 100 hours of time they spent on it.
This morning I decided to work with my notebook for 2 hours offline, thinking about other projects to work on. I got a bit excited about a WoofJS refactor plan, but ultimately was most excited about an FRP Scratch tool that is similar to Facebook’s Origami (which is based on Apple’s Quartz Composer) in that you can see all the intermediate values flowing through, immediate updates, etc.
After lunch, I was lazy and watched a few hours of TV. Then I got myself on my computer to look at my Cycle and Elm Flappy Birds for inspiration. I was quite frustrated with how awful coding in Cycle and Elm is compared to Facebook Origami, and also a bit inspired by that, too. However, I feel lazy today, so I’ll have to pick up on this thread later.