That will make any function that uses floating-point numbers mind-blowingly complex. But there's probably an easier way, by creating some transformation from (Integer -> a) to (F64 -> a) so that only the transformation gets complex.
Anyway, there are many reasons people don't write actual programs this way.
"Lambdas" and functions are not different things, in a functional-programming perspective (i.e. where you're operating with referential transparency and immutable objects anyway). The lambda syntax is just function-definition syntax that doesn't include an implicit name binding.
I'd iterate on that and say: everything is just languages and dialogues, with functions being one component of them. Over time, we’ve evolved from machine languages to higher-level ones, but most popular languages today still focus on the "how" rather than the "what".
Programming paradigms, even those like functional and logic programming, require the "how". My rant is this: the next major iteration(s) in programming languages should shift focus to the "what". By abstracting away the "how", we can reach a higher-order approach that emphasizes intent and outcomes over implementation details.
I don't want to constrain this idea to Z3, LLMs, or low/no-code platforms, but rather to emphasize the spirit of the "what". It’s about enabling a mindset and tools that prioritize defining the goal, not the mechanics.
I know this contradicts our work as software engineers where we thrive on the "how", but maybe that’s the point. By letting go of some of the control and complexity, we might unlock entirely new ways to build systems and solve problems.
If I should be plain realistic, I'd say that in the middle, we need to evolve by mixing both worlds while keeping our eyes on a new horizon.
Incorrect: you need to know the "how" to create more complex and optimal queries. Your example is like saying, in Python, you just need to write print("Hello World!") to print something.
I wouldn't say that, since SQL was an improvement over previous ways to query data which were more concrete, like writing C code to get what you need. As such we are one level of abstraction higher. Thus SQL specifies the "what", not the "how", with respect to those previous methods. However, in complex queries, since we are constrained by the relational model (PK/FK), we may have a feeling of having to specify too many details.
You aren't telling the database how to get those results from the files on the disk. You are telling it what values you want, matching what conditions, and (in the case of joins) what related data you want. If you want an aggregation grouped by some criteria you say what values you want summed (or averaged, etc.) and what the grouping criteria are, but not how to do it.
Not a perfect example and it breaks entirely if you get into stuff like looping over a cursor but it is why SQL is usually called a declarative language.
Imagine this concrete example: you are the best developer in the world in some specific area(s), except for UX/UI. If you wanted to create a relatively simple yet secure site with user authentication, even if described declaratively as “create a secure site with user authentication,” it would still take a significant amount of time to learn technologies like React and put everything in place. There are zillions of development teams doing the same work around the world.
They don't do it "already" but are one of the approaches taken. If you build state of the art web UI/UX you know that it is not just dragging and dropping objects on the screen while it is perfectly possible to build a tool like this.
Yeah, but a new generation is coming of age, whose teachers only learned these ideas through books, not experience. They are rediscovering computer science one blog post or tweet at a time, because books and classes are obsolete.
It's also useful to be able to understand how the idioms map into the syntax of programming languages that one is actually going to use going forward. The point of SICP isn't what language you use, but how you use it, and how you think about the process of using it. Lisp itself exists because someone had the idea of taking the theoretical abstraction and actually realizing it, in notation similar to what the theorists were already using. But that similarity isn't actually relevant to core concepts like "functions as first-class objects", or referential transparency, or the substitution model of computation, or the complexity introduced by mutable state, etc. (Or, dare I say it: to the mind-expanding effects of contemplating the Y combinator.) These ideas can make you a better programmer in any programming language.
Nor is there any good reason to filter people out preemptively. If seeing `foo(x)` instead of `(foo x)` makes the student more receptive to a proper understanding of recursion, that's just fine.
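For anyone who wants to contemplate the Y combinator hands-on, here is a sketch of its applicative-order cousin (the Z combinator) in Python; `fact` is just an illustrative name, not anything from the book or the course:

  # anonymous recursion: no function here refers to itself by name
  Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
  fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
  print(fact(5))  # 120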
I agree. Being self-contained helps make it timeless. In contrast are books with a CD in the back containing an outdated Java compiler you will never be able to set up. And then you have to migrate the snippets yourself.
If you study any other related field like math or physics you become accustomed to learning a formal system for the context of a particular problem.
CS students tend to have this weird careerist view where every page must directly help them get a job.
Most undergrad CS students want a practical/engineering curriculum. They are not really there for theory, but for a long time that's how CS departments operated, unless maybe you were at an engineering school.
Schools are so desperate to keep up enrollment numbers today that many have capitulated and are giving students what they want instead of what the faculty thinks they need.
> Most undergrad CS students want a practical/engineering curriculum.
If all someone wants is the practical benefits of programming and has no interest in the underlying theory, they shouldn't waste their time and money on a CS degree. All the practical information is available for free or at very low cost.
Maybe so, but we shouldn't be doubling down on expensive and time consuming degrees in the name of ill-conceived credentialism. That hurts everyone except the universities profiting off of it.
At least in the U.S., many students are paying upwards of $100k for a four-year degree. That better be one hell of a "campus experience" and some next-level "skilled tutors".
Call me a hopeless optimist, but I think there's a better way out there.
How about an AI tutor? Actual professors don't have time to adapt their teaching to every individual student's knowledge background. But AI might.
Universities should start their own AI-tutor development programs, in co-operation with others, because the only way AI tutors can become better is by practice, practice, practice.
So I'm not sure if this is a new viewpoint or not, but it is not only students who need training; it is also teachers who need to be trained more in teaching. AI is all about "training", understanding is about training. Training is the new paradigm for me.
There is a big difference between being practically minded and an allergy to learning anything that doesn't translate to resume keywords. SICP will teach you more about JavaScript, Python, etc. than almost anything else.
> Most undergrad CS students want a practical/engineering curriculum.
Somewhat understandable considering that student loans put you into indentured servitude unless you have rich parents. Although I still think they're shortsighted. A good CS graduate should understand that programming languages are just syntactic sugar over the underlying concepts and have little trouble translating/picking up the basics of new languages.
Then it does not matter what language SICP chooses to illustrate timeless concepts? Even if some JS stuff changes down the line, people should be able to adapt what's in the book on the fly?
Because knowing scheme isn't going to get you a job at most places. Employers overwhelmingly want JavaScript or Python these days. Trailing that would probably be Java, C++ and C#, and regular old C.
When I did my undergrad CS degree, the fact that scheme was so heavily used was a common complaint they received from students. It just wasn't a marketable skill.
Four year CS degrees usually require something around 20 (maybe even more) CS courses. Are you saying that all of those courses at your school were taught in Scheme? You never had a chance (in the classes, ignoring hobby or internships) to use other languages? That'd be a pretty unique school.
But even if that were true and you did take 20+ classes in Scheme, you're still a college educated computer scientist. You can't pick up JavaScript or Python in time for a job interview for an entry level job? They're easy languages to learn. If you survived four years of exclusively being taught with Scheme, they'd be a breeze to pick up.
No, not all Scheme. That's an example. The intro course and the programming languages course were in Scheme. There were a number of other languages used. I guess I should have been more nuanced: a number of students wanted to be taught the currently popular programming languages so they could use them on a resume. They complained about using Scheme (or whatever "teaching" language a professor might require) and did not yet appreciate that the concepts/theory they were learning applied to any programming language they might need to use.
They wanted a trade school/practical education in something immediately marketable, not a theoretical education.
The reason I remember this is that in my "exit interview" as a senior I mentioned that I appreciated the exposure to these languages and theory and my advisor remarked "we don't hear that very often, the usual feedback is that we don't teach the languages employers want"
JS is easier to read IMO. And of the widely-used interpreted languages I can think of, it's actually got the least confusing implementation of first-class anonymous functions. Python lambdas are limited to one expression, Ruby has that confusing block vs. proc vs. lambda problem, etc.
I do feel like the value of using Scheme is teaching students early on that syntax doesn't really matter. Those that are actually interested in CS theory will find this enlightening, those that are simply in it because investment banking is so 2007 will churn out.
"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
And if you have JS disabled by default, it redirects to a page on a different domain name, so you cannot easily allow it in NoScript just for that website, even if you want to. I gave up on that though; judging by the title, the article is going to be about modelling all the things as functions, as commonly and similarly done with other objects (e.g., sets, categories), which I wanted to confirm, and maybe to nitpick on this perspective and/or the title then (i.e., it is not quite correct to declare everything a function just because you can model or represent things that way).
The arrow and page up/down keys don't work in any predictable pattern for me, it's really weird. Like I thought it only scrolled up and down with the arrow keys if I press it 4 times, but then page up/down keys don't work no matter how many times I press it, then I focus on the page and it works, but then the arrow keys take 6 times to press before moving, and then I tried the same pattern again, and the arrow keys now take 11 presses before they start moving. Usually a lot of modern apps predictably break the back/forward history buttons and tab focus, but I've never seen anything quite like this. I guess it must be still delivering value though even if the product isn't polished.
WYSIWYG document authoring experience; AFAIK there are still no alternative publishing platforms with both the flexibility and the point-and-click content authoring UX of Notion. Change my view, I'm in the market!
Which, based on what I see in the rendered archive.is version, is being used to do nothing outside of the normal use of a standard Markdown-based SSG like Nikola or Jekyll.
It's a performant publishing tool, perhaps even a high-performance publishing tool - in terms of user effort. What it's not is performant at displaying the thing it published.
That’s fair. Viewers who don’t know what is serving the page will be disappointed. If you know it’s Notion, then it works about as expected which satisfies the definition of performant.
Yeah, the guy at my last place who was proud of serving < 2 req/s/core liked to use the word "powerful" too. It's like it was his favorite word. And he's on the short list of people I refuse to work with again. What a putz.
These are some of the biggest weasel words of IT. Every one of them has an implicit nature of a comparison word and yet the comparison or any sort of hard metrics are always completely absent in their use.
And this is for what, a ~100KB header image (most of which is bounding-boxed away) and 24KB of actual text (Markdown source would be only slightly larger)?
Probably a hard question to answer. IME, cultural norms around documentation vary pretty wildly.
Some orgs I've worked for were very "wiki" driven - there's a big expectation of using Confluence or Notion to navigate documentation. This applies both big (5000+) and small (50+) organizations for me.
Other organizations I've worked in were very document centric - so you organize things in folders, link between documents (GDoc @SomeDocument or MSFT's equivalent). Those organizations tend to pass around links to documents or "index" documents. Similarly, this applies for both big and small organizations in my experience.
Of the two, I tend to prefer the latter. Without dedicated editors, the wiki version seems to decay rapidly, especially once the org grows above some size.
A computer fundamentally isn’t functions though. That’s not how a processor works. If functions are a useful abstraction, why haven’t functional languages taken off?
> A computer fundamentally isn’t functions though. That’s not how a processor works. If functions are a useful abstraction, why haven’t functional languages taken off?
If computers and their processors are a useful abstraction, why don't we write everything directly in machine language - or microcode for that matter?
This is more about computing than about computers. As Dijkstra put it, "Computer science is no more about computers than astronomy is about telescopes."
Computing involves languages, including many languages that are not machine languages. Every language that's higher level than machine code requires translation to actually execute on the particular machines that we've developed as a result of our history and legacy decisions.
The lambda calculus is a prototypical language that provides very simple yet general meanings for the very concept of variables - or name-based abstraction in general - and the closely related concept of functions. It's a powerful set of concepts that is the basis for many very powerful languages.
It also provides a mathematically tractable way to represent languages that don't follow those principles closely. Compilers perform optimizations like static single assignment (SSA), which are fundamentally equivalent to a subset of the functional concept of continuation passing style (CPS). In other words, mainstream languages need to be transformed through functional style in order to make them tractable enough to compile.
The mapping from a lambda calculus style program to a CPU-style register machine is quite straightforward. The connection is covered in depth in Chapter 5 of SICP, "Computing with Register Machines." Later work on this found even better ways to handle this, like Appel's "Compiling with Continuations" - which led to the SSA/CPS equivalence mentioned above.
There's a lot to learn here. It's hard to recognize that if you know nothing about it, though.
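To make the CPS connection a little more concrete, here is a toy continuation-passing factorial in Python; it only illustrates the style itself, not what a compiler's SSA/CPS pass actually produces:

  def fact_cps(n, k):
      # every call passes along "what to do with the result" explicitly
      if n == 0:
          return k(1)
      return fact_cps(n - 1, lambda r: k(n * r))

  print(fact_cps(5, lambda r: r))  # 120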
Just to pick some nits with those claims… CPUs do have hardware support for functions in the form of a stack and CALL/RET instructions. Functions are a useful abstraction since more or less all software uses them. Functions and functional languages are two related but different things, and the usefulness of functions as an abstraction doesn’t depend on whether functional languages have taken off. And last, I’d say functional languages have gained ground over time, as well as semi-functional languages like, say, Python and JavaScript. Even C++ is gaining more functional language features over time.
I just haven’t seen anything concrete as to why SICP’s materials are useful in either the real world or academia. Sometimes these discussions talk about how it is useful for computer science and for theory but even that seems like a claim without evidence. Is this just people reminiscing about their first introduction to programming or a favorite professor?
It's not a surprise that most students fail and hate abstract algebra, right? I mean, to learn the concept, you need to know more about the concept itself in a real-world context.
SICP shows a real-world code base. It's real-world programs that build up to implementing real-world programming languages.
Why would you validate if you can parse? If you have a decent chunk of experience in implementing business logic then you know that your quality of life will be destroyed by switches and other inscrutable wormhole techniques up until the point where you learn to use and build around rule engines. SICP shows you how you can tailor your own rule engine, so you won't have to get the gorilla and the jungle when you reach for one in an enterprisey library.
I've watched the actual SICP lectures before (the 1986 recordings on MIT OCW). They're often praised for the information density, but it actually still wastes a lot of time listening to students' Q&A, the lecturers drawing the class' attention to various attempts at "multimedia" presentation in the classroom, simply not having the entire lesson plan worked out in advance (i.e., not being able to preempt the Q&A) etc. For that matter, the sheer amount of time spent on writing things on a chalkboard really adds up.
And of course the order of the material could be debated and rearranged countless ways. One of my future planned projects is to do my own video series presenting the material according to my own sensibilities.
It's nice to hear that the course apparently still stays true to its roots while using more current languages like Python. Python is designed as a pragmatic, multi-paradigm language and I think people often don't give it enough credit for its expressive power using FP idioms (if not with complete purity).
FP in Python is rather weak. Even JS does a better job there. Some of the code exercises will need completely different solutions than in Scheme, due to the lack of TCO. What do instructors do when their one-to-one translated code fails? Tell the students that, because of the choice of language, it doesn't work that way and that they simply have to take it on faith? Or do they treat it all as an "externalize the stack" problem and solve it that way?
It seems rather silly to force SICP into Python.
CPython doesn't do TCO, but you can implement it yourself in Python.
This version is one of my all-time favorite StackOverflow answers: https://stackoverflow.com/questions/13591970/does-python-opt...
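For readers wondering what "implement it yourself" can look like, here is a minimal trampoline sketch (not a reproduction of that StackOverflow answer, just an illustration): the recursive step returns a thunk, and a driver loop unwinds it, so no deep Python call stack is ever built.

  def trampoline(f, *args):
      result = f(*args)
      while callable(result):   # assumes the final result itself is not callable
          result = result()
      return result

  def countdown(n):
      return "done" if n == 0 else (lambda: countdown(n - 1))

  print(trampoline(countdown, 1_000_000))  # "done", with no RecursionError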
Pyret would be a good alternative. It's designed for education by racketeers.
https://pyret.org/
The course is using Python to implement a Scheme, then uses Scheme to implement a Scheme. Python could and should be removed from the course.
Python has very poor support for functional programming. Lists are not cons based, lambdas are crippled, pattern matching is horrible and not even expression based, namespaces are weird.
Python is not even a current language, it is stuck in the 1990s and happens to have a decent C-API that unfortunately fueled its growth at the expense of better languages.
Not very Schemey, but at least modern Python has basically full-on algebraic data types thanks to type hints, immutable dataclasses and structural pattern matching.
It's still not great for functional programming, but far, far better than it used to be.
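A small sketch of what that looks like in practice (Python 3.10+; the names are invented for the example):

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Nothing:
      pass

  @dataclass(frozen=True)
  class Just:
      value: int

  def describe(m: Nothing | Just) -> str:
      match m:
          case Nothing():
              return "Nothing"
          case Just(value=v):
              return f"Just {v}"

  print(describe(Just(42)), describe(Nothing()))  # Just 42 Nothing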
IMHO the main problem is the fact that lambda expressions have been deliberately crippled. Ruby is often described as a good-enough Lisp despite not being homoiconic. That's because, like all modern Lisps, it makes pervasive use of blocks, procs and lambdas. Python could have been a very similar language, but Guido held a vocal anti-FP stance. Perhaps this can be addressed now, as other interesting features like the ones you outlined have been added to the language, but it'd have a very deep impact on nearly every API.
While I am a huge Lisp fan, oh... the irony of saying that Python is stuck in the 1990s when CONS, CAR and CDR are artifacts from the IBM 704 and Fortran :)
While I do find it annoying that python used 'list' to mean 'dynamic array', it is a lot better than a ton of church encoding in the other common teaching language, Java.
Linked lists may not be native in python but it is trivial to implement them.
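For example, a throwaway sketch using plain tuples as pairs (one of many possible representations):

  cons = lambda a, d: (a, d)
  car = lambda p: p[0]
  cdr = lambda p: p[1]

  lst = cons(1, cons(2, cons(3, None)))
  print(car(lst), car(cdr(lst)))  # 1 2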
Interesting, I think this is the first time I have seen anyone bash Python this hard.
Why would a decent C-API fuel its growth? Also can you give me some examples of better languages?
Am no senior developer but I find python very elegant and easy to get started with.
I’m not the parent poster, but I’ve seen two major spurts of Python’s popularity: (1) the mid-2000s when Python became a popular scripting language, displacing Perl, and (2) beginning in the first half of the 2010s when an entire ecosystem of Python APIs backed by code written in C, C++, and even Fortran made up the infrastructure for machine learning code (e.g., NumPy, SciPy, scikit-learn, Pandas, etc.). If Python didn’t have a good way of interfacing with code written in languages like C, then it might not have been as popular among machine learning researchers and practitioners, who needed the performance of C/C++/Fortran for numerical computing but wanted to work with higher levels of abstraction than what is provided in those languages.
What drew me to Python back in 2006 as a CS student who knew C and Java was its feeling like executable pseudocode compared to languages that required more “boilerplate.” Python’s more expressive syntax, combined with its extensive “batteries included” standard library, meant I could get more done in less time. Thus, for a time in my career Python was my go-to language for short- and medium-sized programs. To this day I often write pseudocode in a Python-like syntax.
Since then I have discovered functional programming languages. I’m more likely to grab something like Common Lisp or Haskell these days; I find Lisps to be more expressive and more flexible than Python, and I also find static typing to be very helpful in larger programs. But I think Python is still a good choice for small- and medium-sized programs.
I'm convinced python's main asset for its growth was how ubiquitous it was. It was basically pre installed everywhere. With the batteries included idea, you were mostly good with basics, too.
This changed with heavy use, of course. Such that now packaging is a main reason to hate python. Comically so.
Ah thanks, that makes sense!
I try to avoid python in production code bases as much as possible: dependency management issues alone are a good reason to do so for anything that will last longer than a year or two.
It was relatively easy to lash Python as a higher-level orchestration layer to popular number crunching libraries, yielding NumPy and similar, which made Python popular for machine learning applications.
If you're used to Scheme, Common Lisp, or Haskell, Python's arbitrary decisions about e.g. lambda or how scopes work may be grating. But Python is the BASIC of the modern day, and people laughed at BASIC in the 80s too... except businesses ran on BASIC code and fortunes had been made from it.
There is also the course from ArsDigita University. The site is offline now but the courses are available on archive.org.
https://en.m.wikipedia.org/wiki/ArsDigita#ArsDigita_Foundati...
https://archive.org/details/arsdigita_01_sicp/
They were selling USB keys with the entire curriculum, if someone could upload an iso, that would be amazing. https://web.archive.org/web/20190222145553/aduni.org/drives/
They give a nice introduction to encoding state as pure functions. In fact, there are many more purely functional encodings for all kinds of data like trees, integers, sum/product types, images, monads, ...
The encodings can be a bit confusing, but really elegant and tiny at the same time. Take for example a functional implementation of the Maybe monad in JavaScript:
Nothing = nothing => just => nothing
Just = v => nothing => just => just(v)
pure = Just
bind = mx => f => mx(mx)(f)
evalMaybe = maybe => maybe("Nothing")(v => "Just " + v)
console.log(evalMaybe(bind(Nothing)(n => pure(n + 1)))) // Nothing
console.log(evalMaybe(bind(Just(42))(n => pure(n + 1)))) // Just 43
You can see this as replacing an inductive type with its recursor's function type. It's pretty cool in type theory, but not so good for actually programming stuff.
You can derive these implementations from the recursion principle for your type:
data Maybe a = Nothing | Just a
foldMaybe :: (Unit -> r) -> (a -> r) -> Maybe a -> r
The two higher-order functions passed into `foldMaybe` are your `Nothing` and `Just` (modulo the Unit param I added to the Nothing case to be a little more precise). If you have any light to shed, I'm wondering about inductive data types and their expressive necessity here: https://news.ycombinator.com/item?id=42166709
It may be elegant mathematically, but then conveyed through a language that is strictly in the ASCII character set without any alignment or internal justification, they're really just painful to look at.
I think it's all right if you're used to the notation. The first two lines are tagged unions and will be recognisable as such if you're familiar with encodings like Scott/Church pairs/lists/numbers. Once you understand the structure, the definition of `bind` becomes obvious, as its two arguments represent the cases "is nothing" and "is just", where in the first case Nothing is returned, and in the second case the function is applied to the value inside the Just.
I think that writing such code, if only for educational purposes, can be really helpful in actually understanding how the state "flows" during the monadic bind/return. Typical monad instantiations of Maybe do not give such deep insight (at least to me).
> Just because you can do a thing doesn’t mean you should.
Of course you should, where would be the fun in that?
> The first two lines are tagged unions
Are they? But in the Nothing you have 2 identical members (`nothing` without arguments), won't that throw an exception?
To borrow Rust syntax (pun intended):
That's just weird.
When encoding tagged unions as lambdas, the tags are arguments. In this case `Nothing` has two available tags (`nothing` and `just`) and uses the tag `nothing`. `Just` does the same with the tag `just`, only that the tag gets an additional argument (as does its constructor `Just`), such that the value can be extracted afterwards - just like in an enum.
It's alright once you get used to it usually means it isn't alright in my experience. There are exceptions of course.
This is a big reason why legacy production code bases are such a nightmare to work with: developers refuse to learn anything beyond the minimum necessary to pile on yet another band-aid fix and the code base turns into a disorganized ball of mud
>I think it's all right if you're used to the notation.
Higher mathematics in a nutshell.
>Of course you should, where would be the fun in that?
Also higher mathematics in a nutshell.
Narrator asks: Who should we put in charge of <<thing that will affect people in a tangible way>>?
Not the mathematicians! echoes the crowd in unanimity.
Narrator asks: Who will we delegate the task of <<abuse of notation>> to?
The crowd grumbles, arguing amongst themselves whether such a question even warrants an answer. A mathematician stands up, proclaiming "We'll take it!", following up with, "Once you understand the notation involved in my previous statement, you will understand why this outcome is inevitable."
The crowd, seeing the wisdom of not even embarking on that tribulation, assents to the delegation, given the task of undoing the abuse of notation for the legibility of the layperson is also delegated to the aspiring mathematician.
Scene opens on current day...
It’s definitely easier to read in an ML language, that’s for sure!
If there is a wrong way to do something someone will do it.
There’s an old saying attributed to the Inuit: everyone enjoys the smell of their own farts.
Just fyi (perhaps @dang) this jumps to the Postscript of the blog post due to the anchor/hash in the URL. I was a bit confused initially.
The title is also wrong. I was wondering if the submitter maybe did mean to link to the postscript, but it doesn’t fit the title any better.
The cons/car/cdr implementation as lambda was magical the first time I saw it. But it just shows that the language runtime must implement key/value dictionaries and you are able to borrow that implementation to make other data structures.
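For reference, the closure-only encoding being described looks roughly like this in Python (a sketch of the idea, not the book's Scheme): the pair's fields live entirely in the closed-over environment.

  def cons(a, d):
      # the pair is stored only in this closure's environment
      return lambda pick: a if pick == 0 else d

  def car(p): return p(0)
  def cdr(p): return p(1)

  p = cons(1, cons(2, None))
  print(car(p), car(cdr(p)))  # 1 2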
I find the destructuring logic in elixir much more interesting, and the watered down version in ES6 much more practical.
In elixir you can pop off as many as you like.
Can you share any resources about it?
I’m a bit of an elixir noob, but Enum functions like slice let you cut up lists in various ways, and you can pattern match values in maps in function definitions:
http://www.skuunk.com/2020/01/elixir-destructuring-function....
Which can let you unroll function preambles, or apply different rules if for instance an admin user runs a function versus a regular user.
I think this is a little different. Pattern matching gives you car and cdr, but not cons.
That exists for maps and lists.
David Beazley is a bit of a legend in the Python world, and honestly this course seems a surprising idea, but it took about two seconds of thought before it seemed a perfect match, and I have signed up for the next one.
The relevant part is that this is basically what “software engineers' continual education” is going to look like.
"legend in the python world"
That's a fun statement.
There's a typo in the code in "the substitution model" section:
The two calls to `fib` are surely meant to be `fibonacci` since the latter is defined, but not the former. Indeed, the code is correct in the GitHub repo: https://github.com/savarin/pyscheme/blob/0f47292c8e5112425b5...
OP here. Thank you!
I recently came across the notion that you need inductive data types (and can't just use Church encodings) if you want to do theorem proving, like proving that `0 != 1`.
I threw up some content up here: https://intellec7.notion.site/Drinking-SICP-hatorade-and-why... , along with an unrelated criticism of SICP.
I'd like to better understand what the limitations are of "everything is just a function".
I think you could prove 0 ≠ 1 if you had some other concrete fact about inequality to make use of. You could reason from the theorem "f = g -> f x = g x" to create your inequality fact on the right side and then take the contrapositive.
It seems correct to me that you can't directly prove inequality between Church numerals without starting with some other fact about inequality. Whereas with inductive data types, a proof system can directly "observe" the equality or inequality of two concrete instances of the same inductive type, by recursively removing the outermost constructor application from each instance.
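To make that concrete, a small Python sketch of Church numerals: zero and one can only be told apart by applying them to arguments whose results you already know how to distinguish, which is exactly the extra fact about (in)equality being smuggled in.

  ZERO = lambda f: lambda x: x          # applies f zero times
  ONE  = lambda f: lambda x: f(x)       # applies f once
  probe = lambda _: "f was applied"

  print(ZERO(probe)("f was not applied"))  # f was not applied
  print(ONE(probe)("f was not applied"))   # f was applied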
Neat article. But it was very difficult to navigate for me because 99% I use the keyboard up/down arrows to scroll the page as I'm reading. This page swallows those keystrokes, apparently. Page up/down work, but sometimes. I never use page up/down while reading because I'll be in the middle of a sentence of a paragraph at the bottom, hit page down, and now I need to scan my eyes back to the top of the page. First, it introduces a hiccup in the middle of a sentence, and secondly, because of the hiccup I often want to go back a line or two to reestablish context, but it is now offscreen. Grr.
For me it was "Your browser is not compatible with Notion." on Android with Hack's (hacker news client) built in browser which is I guess just a stripped down Web view
For me it was "JavaScript must be enabled in order to use Notion" (I'm a NoScript user). But it had already redirected me to another domain to show this page. How am I supposed to enable JS for the actual domain of the page? I have ways of course, but it seems like notion is deliberately flipping the bird to people like me...
For me it was perpetual loading spinner.
The book itself is currently being discussed at:
https://news.ycombinator.com/item?id=42157558
Is there a reason why the link goes to the discussion at the bottom of that page rather than the beginning?
Could this be folded into the other discussion? (I don't see that the link has been posted there yet)
Warning: Diving into SICP/Lisp/Scheme can transform how you think about programming... food for thought is always welcome! But applying those ideas wholesale to OOP codebases often backfires or gets pushback from teammates. Languages handle paradigms differently, and going against the grain usually leads to worse code.
Example: After Lisp, you might replace every for loop with forEach or chain everything through map/reduce. But unless you’re working in a language that fully embraces functional programming, this approach can hurt both readability and performance.
At the end of the day, it’s grounding to remember there’s mutable memory and a CPU processing the code. These days, I find data-oriented design and “mechanical sympathy” (aligning code with hardware realities) more practical day-to-day than more abstract concepts like Church numerals.
https://www.youtube.com/watch?v=PAZTIAfaNr8
He wrote my favourite maths book.
https://www.cambridge.org/core/books/all-the-math-you-missed...
https://m.youtube.com/watch?v=ur0UGCL6RWc
Cool, available on archive.org too
Did you read the book in isolation or was it a part of a class / MOOC ?
https://archive.org/details/all-the-mathematics-you-missed
Fellow is doing an aggressive function. I wouldn't dare put contrary functions against his output.
this guy has Chris Fleming energy.
Not a fan of everything-is-a-function because it's oversimplistic and often unhelpful. Some of the issues:
- functions that don't fit in cache, RAM, disk, etc.
- functions that have explosive big-O, including N way JOINs, search/matching, etc.
- functions with side effects, including non-idempotent. Nobody thinks about side channel attacks on functions.
- non-deterministic functions, including ones that depend on date, time, duration, etc.
- functions don't fail midway, let alone gracefully.
- functions don't consume resources that affect other (cough) functions that happen to be sharing a pool of resources
- function arguments can be arbitrarily large or complex - IRL, there are limits and then you need pointers and then you need remote references to the web, disk, etc.
(tell me when to stop - I can keep going!)
Oversimplifying can be great at times. In this case, the lambda-calculus model (which is the basis for this type of "everything is just a function" approach) is a great model of computation because it is so simple, while being easy to handle/reason about (compared to e.g. Turing machines), which is why it is at the base of most computer logic/proof systems.
Half of what you call functions in that comment, are not actually functions and in the FP world many would not call them functions. Rather they are procedures. Functions are procedures, but not all procedures are functions.
Which is also a problem with thinking this is a helpful abstraction: apparently, not everything you need to do can be captured by functions (in that sense)!
I think it's the only hope for theoretical purposes.
alternative take: everything is just sets
both can be a foundation for mathematics, and hence, a foundation for everything
what's interesting is how each choice affects what logic even means?
I learned functions in terms of sets. Domain and codomain are sets. Function is a set of ordered pairs between them.
How could we go the other way? A set can be "defined" by the predicate that tests membership, but then how do we model the predicates? Some formalism like the lambda calculus?
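A toy Python sketch of both directions, with invented names, just to anchor the idea:

  # a function as a set of ordered (input, output) pairs
  square = {(0, 0), (1, 1), (2, 4), (3, 9)}
  def apply(f, x):
      return next(b for (a, b) in f if a == x)

  # a set "defined" by its membership predicate
  evens = lambda n: n % 2 == 0

  print(apply(square, 3), evens(4))  # 9 True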
Reading this brings back fond memories of taking CS61A with Prof. Brian Harvey at UC Berkeley some 25 years ago. Same book, same level of mind-blown, and very similar instruction style. We spent a semester instead of a week, and if memory serves tuition was about the same, but they threw in some English and history courses as well :-)
OP here. Thank you for the kind words! For those who enjoyed this, I would also point out Eli Bendersky's excellent SICP series https://eli.thegreenplace.net/tag/sicp
Same. For me it was 15 years ago, but was with Prof. Brian Harvey in Pimentel hall with the rotating stage.
Nice memories.
I fell in love with scheme eventually as it was such a simple syntax. Getting used to parentheses did take some time though.
Same memories, and even the same timeline :) I still recall being blown away by the concept of "code is data", the magic of which I haven't encountered in professional development, alas.
Surprised "mind-blowing" is not in the HN clickbait filter.
It is now.
Everything is just assembly!
Actually it is more like Algorithms + Data Structures = Programs.
for some reason I resonate more with the fp/math/linguistic side of this coin. you don't even think about programs in the end
that said, since I've been reading about kanren and prolog I'm about to say "everything is a relation" :)
From my perspective all software is essentially applying transformation functions to some data.
There are externalities like networking and storage, but still data transformation in a way.
David Beazley is using Scheme! That is a nice shift towards a civilized language. I hope he scraps the scheme-in-python section, but perhaps that is intended as an exit drug for Python addicts.
"Everything is just" approaches usually result in hammering things that don't fit into fitting. That often ends badly. Computing has been through, at least:
- Everything is just a function (SICP)
- Everything is just an object (Smalltalk, and to some extent Java)
- Everything is just a closure (the original Common LISP object system)
- Everything is just a file of bytes (UNIX)
- Everything is just a database (IBM System/38, Tandem)
None of the things you mention ended badly though. I think all of those approaches you list are incredibly useful and important concepts and I am very happy that I not only know them, but that because of how universal they are I can leverage my knowledge of one approach to learn or apply another approach.
Another angle on this is that there are many formal, axiomatic ways to define computing.
Everything is just a Turing machine. Everything is just a function. Everything is just Conway's Game of Life.
The fact that all of these forms are equally expressive is quite a surprise when you first discover this. Importantly, it doesn’t mean that any one set of axioms is “more correct” than the other. They’re equally expressive.
> Everything is just a Turing machine.
That one ends in a tarpit where everything is possible but nothing of interest is easy.
> where everything is possible but nothing of interest is easy.
Real development, in my experience, is not much different. People just have low standards for "interesting" nowadays, and also have vastly increased access to previous solutions for increasingly difficult problems. But while modern programming languages might be more pleasant to use in many ways, they have relatively little to do with the combined overall progress developers have made. Increased access to "compute" (as they say nowadays), effort put into planning and design, and the simple passage of time are all far more important factors in explaining where we are now, IMO.
That's the generic problem with "Everything is a ...". Trying to force things into a paradigm that doesn't fit well complicates things.
It is a simplification that makes it easier to grasp a paradigm. Sure, it could be taken to extremes, pretending nothing exists outside this "everything is a …" bubble. Luckily we can learn from others' mistakes and not fall into such traps too often.
The generic problem is every generation thinks they invented sex.
https://www.cs.yale.edu/homes/perlis-alan/quotes.html
I would go further and say that each one of these were so useful that they presented entirely new sets of problems to attempt to solve, because of how many other problems they directly addressed.
It's like being mad that the hammer was so successful that we invented the screw to improve on its greatest hits.
- Everything is just a filesystem (Plan9/Inferno)
- Everything is just a buffer (K&R C and many of its descendants)
- Everything is just a logical assertion (Prolog)
I look at the list and I see a bunch of successes, though some of them are niche.
Everything (expressed with language) is just a model of something else.
By making the model follow some simple rules which we think the real thing follows as well, we can reason about what happens when some inputs to the real thing being modeled change, by running our model (-simulation).
Thus you could add to your list: "Everything is just a simulation".
Except the real thing of course :-)
OP here. I would add "All you need is NAND".
"All you need is Transistor" - can't believe how badly that ended!
It's because all you need is Ohm's law.
Ohm's law doesn't really work for transistors though.
James Clerk Maxwell has entered the conversation
How about: everything is just state and transformations?
Everything is a rule (Pega)
Everything is just a Turing machine after all* **
* modulo infinity
** except a small number of languages that are not
So an integer is represented by how deep in the stack you are?
How do you represent an irregular float?
Probably by using IEEE 754.
Which will make any function that uses floating-point numbers mind-blowingly complex. But there's probably an easier way: create some transformation from (Integer -> a) to (F64 -> a) so that only the transformation gets complex.
Anyway, there are many reasons people don't write actual programs this way.
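For what it's worth, the standard answer to the integer question is Church numerals: a number n is literally "apply a function n times", which is the stack-depth intuition. A quick, purely illustrative Python sketch:

    # Church numerals: n is a function applying f to x exactly n times.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):
        # Convert back to a Python int by counting applications.
        return n(lambda k: k + 1)(0)

    two = succ(succ(zero))
    print(to_int(add(two)(two)))  # 4

Floats have no encoding nearly this tidy, which is part of why nobody writes production numeric code this way.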
This does not seem very groundbreaking to me.
It is like claiming the abacus was invented by counting number of cars in a parking lot.
Shameless plug: https://aplaceofmind.notion.site/It-s-Lambdas-All-the-Way-Do...
I got to the same conclusion a while ago, except that I found that it's lambdas all the way down.
"Lambdas" and functions are not different things, in a functional-programming perspective (i.e. where you're operating with referential transparency and immutable objects anyway). The lambda syntax is just function-definition syntax that doesn't include an implicit name binding.
> Everything Is Just Functions...
I'd iterate on that and say: everything is just languages and dialogues, with functions being one component of them. Over time, we’ve evolved from machine languages to higher-level ones, but most popular languages today still focus on the "how" rather than the "what".
Programming paradigms, even those like functional and logic programming, require the "how". My rant is this: the next major iteration(s) in programming languages should shift focus to the "what". By abstracting away the "how", we can reach a higher-order approach that emphasizes intent and outcomes over implementation details.
I don't want to constrain this idea to Z3, LLMs, or low/no-code platforms, but rather to emphasize the spirit of the "what". It’s about enabling a mindset and tools that prioritize defining the goal, not the mechanics.
I know this contradicts our work as software engineers where we thrive on the "how", but maybe that’s the point. By letting go of some of the control and complexity, we might unlock entirely new ways to build systems and solve problems.
If I should be plain realistic, I'd say that in the middle, we need to evolve by mixing both worlds while keeping our eyes on a new horizon.
> programming languages should shift focus to the "what"
SQL is an example of a language that is at least somewhat like that.
Doesn't really say "how" to do that; it only defines what you want.
Incorrect: you need to know the "how" to create more complex and optimal queries. Your example is like saying, in Python, you just need to write print("Hello World!") to print something.
I wouldn't say that, since SQL was an improvement over previous ways to query data, which were more concrete, like writing C code to get what you need. As such, we are one level of abstraction higher. Thus SQL specifies the "what", not the "how", with respect to those previous methods. However, in complex queries, since we are constrained by the relational model (PK/FK), we may have a feeling of having to specify too many details.
That's why I said "somewhat."
You aren't telling the database how to get those results from the files on the disk. You are telling it what values you want, matching what conditions, and (in the case of joins) what related data you want. If you want an aggregation grouped by some criteria you say what values you want summed (or averaged, etc.) and what the grouping criteria are, but not how to do it.
Not a perfect example and it breaks entirely if you get into stuff like looping over a cursor but it is why SQL is usually called a declarative language.
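Keeping to one language for examples, here is that "what vs. how" contrast sketched with Python's built-in sqlite3 module (the table and column names are invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("alice", 10.0), ("bob", 5.0), ("alice", 7.5)])

    # Declarative: say *what* you want (per-customer totals over 6),
    # not *how* to scan, hash, group, or sort.
    rows = conn.execute("""
        SELECT customer, SUM(amount) AS total
        FROM orders
        GROUP BY customer
        HAVING SUM(amount) > 6
        ORDER BY total DESC
    """).fetchall()
    print(rows)  # [('alice', 17.5)]

    # The imperative equivalent spells out the "how" step by step.
    totals = {}
    for customer, amount in [("alice", 10.0), ("bob", 5.0), ("alice", 7.5)]:
        totals[customer] = totals.get(customer, 0.0) + amount
    print(sorted(((c, t) for c, t in totals.items() if t > 6),
                 key=lambda ct: ct[1], reverse=True))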
That’s every programming language abstraction. All of them break when you get a fair amount of complexity or performance requirements.
Imagine this concrete example: you are the best developer in the world in some specific area(s), except for UX/UI. If you wanted to create a relatively simple yet secure site with user authentication, even if described declaratively as “create a secure site with user authentication,” it would still take a significant amount of time to learn technologies like React and put everything in place. There are zillions of development teams doing the same work around the world.
Isn't that what declarative programming frameworks do already?
They don't do it "already", but they are one of the approaches taken. If you build state-of-the-art web UI/UX, you know that it is not just dragging and dropping objects on the screen, even though it is perfectly possible to build a tool like that.
Yeah, but a new generation is coming of age, whose teachers only learned these ideas through books, not experience. They are rediscovering computer science one blog post or tweet at a time, because books and classes are obsolete.
And functions are just numbers combined with if/else's and a pretty name at the end of the day.
If this number, jump to numberA otherwise jump to this numberB. Also if numberC store numberD at numberE. ;)
Has anyone read the new SICP with JavaScript as the language of choice?
Isn't scheme with close to zero syntax so easy to learn?
Why did someone think it was a good idea to switch to JavaScript?
I think the person who'll get value out of SICP will not have any problem picking up scheme syntax on the fly.
It's also useful to be able to understand how the idioms map into the syntax of programming languages that one is actually going to use going forward. The point of SICP isn't what language you use, but how you use it, and how you think about the process of using it. Lisp itself exists because someone had the idea of taking the theoretical abstraction and actually realizing it, in notation similar to what the theorists were already using. But that similarity isn't actually relevant to core concepts like "functions as first-class objects", or referential transparency, or the substitution model of computation, or the complexity introduced by mutable state, etc. (Or, dare I say it: to the mind-expanding effects of contemplating the Y combinator.) These ideas can make you a better programmer in any programming language.
Nor is there any good reason to filter people out preemptively. If seeing `foo(x)` instead of `(foo x)` makes the student more receptive to a proper understanding of recursion, that's just fine.
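Speaking of the Y combinator: its applicative-order variant (the Z combinator) can be written directly in Python, and puzzling out why it works is exactly the mind-expanding exercise meant above. A toy sketch, not course material:

    # Z combinator: the eager-evaluation cousin of Y.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # Factorial defined without any self-reference by name.
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    print(fact(5))  # 120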
I agree. Being self-contained helps make it timeless. In contrast are books with a CD in the back containing an outdated Java compiler you will never be able to set up. And then you have to migrate the snippets yourself.
If you study any other related field like math or physics you become accustomed to learning a formal system for the context of a particular problem.
CS students tend to have this weird careerist view where every page must directly help them get a job.
Most undergrad CS students want a practical/engineering curriculum. They are not really there for theory, but for a long time that's how CS departments operated, unless maybe you were at an engineering school.
Schools are so desperate to keep up enrollment numbers today that many have capitulated and are giving students what they want instead of what the faculty thinks they need.
> Most undergrad CS students want a practical/engineering curriculum.
If all someone wants is the practical benefits of programming and has no interest in the underlying theory, they shouldn't waste their time and money on a CS degree. All the practical information is available for free or at very low cost.
But, a lot of employers demand a degree.
Maybe so, but we shouldn't be doubling down on expensive and time consuming degrees in the name of ill-conceived credentialism. That hurts everyone except the universities profiting off of it.
How does that mean anything to the people who need to be employed to continue living? We're not the ones with the ability to change this.
The same applies to CS, so you're missing something else -- skilled tutors and the campus experience.
At least in the U.S., many students are paying upwards of $100k for a four-year degree. That better be one hell of a "campus experience" and some next-level "skilled tutors".
Call me a hopeless optimist, but I think there's a better way out there.
How about an AI tutor? Actual professors don't have time to adapt their teaching to every individual student's knowledge background. But AI might.
Universities should start their own AI-tutor development programs, in cooperation with others, because the only way AI tutors can become better is by practice, practice, practice.
So I'm not sure if this is a new viewpoint or not, but it is not only students who need training; it is also teachers who need to be trained more in teaching. AI is all about "training", and understanding is about training. Training is the new paradigm for me.
There is a big difference between being practically minded and having an allergy to learning anything which doesn't translate to resume keywords. SICP will teach you more about JavaScript, Python, etc. than most anything.
> They are not really there for theory
Is that why they are so bad at adapting to foreign languages and frameworks? Maybe they should go back to the basics.
> Most undergrad CS students want a practical/engineering curriculum.
Somewhat understandable considering that student loans put you into indentured servitude unless you have rich parents. Although I still think they're shortsighted. A good CS graduate should understand that programming languages are just syntactic sugar over the underlying concepts and have little trouble translating/picking up the basics of new languages.
You are comparing mathematicians to programmers.
A more fair comparison is engineering or applied math major, not pure math at MIT.
I don't think so. SICP isn't abstract algebra; it's just unlikely to be the exact syntax you will use at your job.
Engineers rarely do laplace transforms by hand either.
The book is written for 1st year stem undergrads at MIT. So maybe 2nd or 3rd year at state school.
Then it does not matter what language SICP chooses to illustrate timeless concepts? Even if some JS stuff changes down the line, people should be able to adapt what's in the book on the fly?
Because knowing scheme isn't going to get you a job at most places. Employers overwhelmingly want JavaScript or Python these days. Trailing that would probably be Java, C++ and C#, and regular old C.
When I did my undergrad CS degree, the fact that scheme was so heavily used was a common complaint they received from students. It just wasn't a marketable skill.
Four year CS degrees usually require something around 20 (maybe even more) CS courses. Are you saying that all of those courses at your school were taught in Scheme? You never had a chance (in the classes, ignoring hobby or internships) to use other languages? That'd be a pretty unique school.
But even if that were true and you did take 20+ classes in Scheme, you're still a college educated computer scientist. You can't pick up JavaScript or Python in time for a job interview for an entry level job? They're easy languages to learn. If you survived four years of exclusively being taught with Scheme, they'd be a breeze to pick up.
No, not all Scheme. That's an example. The intro course and the programming languages course were in Scheme. There were a number of other languages used. I guess I should have been more nuanced: a number of students wanted to be taught the currently popular programming languages so they could use them on a resume. They complained about using Scheme (or whatever "teaching" language a professor might require) and did not yet appreciate that the concepts/theory they were learning applied to any programming language they might need to use.
They wanted a trade school/practical education in something immediately marketable, not a theoretical education.
The reason I remember this is that in my "exit interview" as a senior I mentioned that I appreciated the exposure to these languages and theory and my advisor remarked "we don't hear that very often, the usual feedback is that we don't teach the languages employers want"
JS is easier to read IMO. And of the widely-used interpreted languages I can think of, it's actually got the least confusing implementation of first-class anonymous functions. Python lambdas are limited to one expression, Ruby has that confusing block vs. proc vs. lambda problem, etc.
I do feel like the value of using Scheme is teaching students early on that syntax doesn't really matter. Those that are actually interested in CS theory will find this enlightening, those that are simply in it because investment banking is so 2007 will churn out.
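For anyone who hasn't bumped into it, the Python limitation in question is that a lambda body must be a single expression, so anything involving statements forces a fallback to def. A small sketch (names invented for illustration):

    # Fine: a single expression.
    clamp = lambda x, lo, hi: max(lo, min(hi, x))

    # Not allowed: statements (assignment, raise, loops) inside a lambda body,
    # e.g. `lambda x: (y = f(x); y + 1)` is a SyntaxError.

    # The usual workaround is an ordinary nested def.
    def make_validator(check):
        def validate(x):
            result = check(x)      # statements are fine here
            if not result:
                raise ValueError(x)
            return x
        return validate

    is_positive = make_validator(lambda x: x > 0)
    print(is_positive(3))  # 3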
I haven't, but you can compare editions with this SICP Comparison Edition:
https://sicp.sourceacademy.org/
Yes. But, I prefer the regularity of the Lisp syntax.
Imagine if that statement applied to every non-digital thing as well.
The link takes around 10s to render. That's excessive for a text article.
"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
https://news.ycombinator.com/newsguidelines.html
And if you have JS disabled by default, it redirects to a page on a different domain name, so you cannot easily allow it in NoScript just for that website, even if you want to. I gave up on that though; judging by the title, the article is going to be about modelling all the things as functions, as commonly and similarly done with other objects (e.g., sets, categories), which I wanted to confirm, and maybe to nitpick on this perspective and/or the title (i.e., it is not quite correct to declare everything a function just because you can model or represent things that way).
The arrow and page up/down keys don't work in any predictable pattern for me, it's really weird. Like I thought it only scrolled up and down with the arrow keys if I press it 4 times, but then page up/down keys don't work no matter how many times I press it, then I focus on the page and it works, but then the arrow keys take 6 times to press before moving, and then I tried the same pattern again, and the arrow keys now take 11 presses before they start moving. Usually a lot of modern apps predictably break the back/forward history buttons and tab focus, but I've never seen anything quite like this. I guess it must be still delivering value though even if the product isn't polished.
I can’t use the scroll to the top gesture in iOS either.
I guess that just goes to show that the author’s mind was, in fact, blown.
Notion. Why do people use that stuff? Especially for tech text articles.
WYSIWYG document authoring experience. AFAIK there are still no alternative publishing platforms with both the flexibility and the point-and-click content authoring UX of Notion. Change my view, I'm in the market!
I'm also in the market, and this conversation took Notion out of the running.
https://archive.ph/kcZcY
Archive seems to "bake" JS sites to plain HTML.
Maybe it's made entirely of functions.
Well, it’s a published Notion site, and Notion is a powerful doc creation platform. It’s not really intended to be a performant publishing tool.
Or a performant anything else, AFAICT
>a powerful doc creation platform
Which, based on what I see in the rendered archive.is version, is being used to do nothing outside of the normal use of a standard Markdown-based SSG like Nikola or Jekyll.
Not that doing more would be a good idea anyway.
It’s a performant publishing tool (depending, of course, on your expectations) but it’s not a high performance publishing tool.
It's a performant publishing tool, and perhaps even a high-performance publishing tool, in terms of user effort. What it's not is performant at displaying the thing it published.
That’s fair. Viewers who don’t know what is serving the page will be disappointed. If you know it’s Notion, then it works about as expected which satisfies the definition of performant.
“Just because you are bad guy doesn’t mean you are bad guy.”
Yeah, the guy at my last place who was proud of serving < 2 req/s/core liked to use the word "powerful" too. It's like it was his favorite word. And he's on the short list of people I refuse to work with again. What a putz.
Well Notion usually exceeds at least 3 req/s/core, so nothing to worry about there
<snerk>
Well then that’s a relief.
Powerful, lightweight, configurable, performant.
These are some of the biggest weasel words in IT. Every one of them is implicitly comparative, and yet the comparison, or any sort of hard metric, is always completely absent from their use.
Yarp.
Infinite configurability means infinite validation time.
46 domains blocked by uBlock Origin, 3 by my own NoScript filter. Seems about right for a "modern" website.
Edit: also, the pop-up menu on the right side that completely breaks your scrollbar. Putting that UI/UX degree to use.
Page weight is 7.2MB (25.6MB uncompressed). 110MB heap size. Such extravagant wastefulness.
I wonder if that's large enough to contain an old linux running an old version of firefox and feed that the page content.
And this is for what, a ~100KB header image (most of which is bounding-boxed away) and 24KB of actual text (Markdown source would be only slightly larger)?
They (tech companies) lay off people to save money, but they should be hiring seniors to save them some cloud bills.
I mean it’s Notion. That’s par for the course.
What if your text editing and presentation experience was slow and laggy? That’s Notion.
What's the best corporate wiki platform?
Probably a hard question to answer. IME, cultural norms around documentation vary pretty wildly.
Some orgs I've worked for were very "wiki" driven - there's a big expectation of using Confluence or Notion to navigate documentation. This applies both big (5000+) and small (50+) organizations for me.
Other organizations I've worked in were very document centric - so you organize things in folders, link between documents (GDoc @SomeDocument or MSFT's equivalent). Those organizations tend to pass around links to documents or "index" documents. Similarly, this applies for both big and small organizations in my experience.
Of the two, I tend to prefer the latter. Without dedicated editors, the wiki version seems to decay rapidly, especially once the org grows above some size.
Knowledge management is hard...
Notion. Delivering value right at your fingertips.
Is that a clever way of saying it’s about as fast as braille?
Wow it’s really bad.
That’s why no one reads articles, just headlines.
A computer fundamentally isn’t functions though. That’s not how a processor works. If functions are a useful abstraction, why haven’t functional languages taken off?
> A computer fundamentally isn’t functions though. That’s not how a processor works. If functions are a useful abstraction, why haven’t functional languages taken off?
If computers and their processors are a useful abstraction, why don't we write everything directly in machine language - or microcode for that matter?
This is more about computing than about computers. As Dijkstra put it, "Computer science is no more about computers than astronomy is about telescopes."
Computing involves languages, including many languages that are not machine languages. Every language that's higher level than machine code requires translation to actually execute on the particular machines that we've developed as a result of our history and legacy decisions.
The lambda calculus is a prototypical language that provides very simple yet general meanings for the very concept of variables - or name-based abstraction in general - and the closely related concept of functions. It's a powerful set of concepts that is the basis for many very powerful languages.
It also provides a mathematically tractable way to represent languages that don't follow those principles closely. Compilers transform programs into forms like static single assignment (SSA), which is fundamentally equivalent to a subset of the functional concept of continuation-passing style (CPS). In other words, mainstream languages need to be transformed through functional style in order to make them tractable enough to compile.
The mapping from a lambda calculus style program to a CPU-style register machine is quite straightforward. The connection is covered in depth in Chapter 5 of SICP, "Computing with Register Machines." Later work on this found even better ways to handle this, like Appel's "Compiling with Continuations" - which led to the SSA/CPS equivalence mentioned above.
There's a lot to learn here. It's hard to recognize that if you know nothing about it, though.
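For readers who haven't met CPS, here is a hand-written Python illustration of the idea (real compilers do this mechanically on their intermediate representation, not on source like this):

    # Direct style: the call stack implicitly remembers "what to do next".
    def direct_sum(xs):
        if not xs:
            return 0
        return xs[0] + direct_sum(xs[1:])

    # Continuation-passing style: "what to do next" is an explicit argument k.
    # Making control flow a value is what links CPS to compiler forms like SSA.
    def cps_sum(xs, k):
        if not xs:
            return k(0)
        return cps_sum(xs[1:], lambda rest: k(xs[0] + rest))

    print(direct_sum([1, 2, 3, 4]))            # 10
    print(cps_sum([1, 2, 3, 4], lambda r: r))  # 10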
Just to pick some nits with those claims… CPUs do have hardware support for functions in the form of a stack and CALL/RET instructions. Functions are a useful abstraction since more or less all software uses them. Functions and functional languages are two related but different things, and the usefulness of functions as an abstraction doesn’t depend on whether functional languages have taken off. And last, I’d say functional languages have gained ground over time, as well as semi-functional languages like, say, Python and JavaScript. Even C++ is gaining more functional language features over time.
It seems functional language experts are too busy rewriting SICP instead of writing actually useful programs.
I just haven’t seen anything concrete as to why SICP’s materials are useful in either the real world or academia. Sometimes these discussions talk about how it is useful for computer science and for theory but even that seems like a claim without evidence. Is this just people reminiscing about their first introduction to programming or a favorite professor?
I think it's a cool book for students.
But for real-world programming, the tedious parts are related to validation, parsing, and other business logic.
So I'd prefer a book that helps teach CS by using a real-world codebase to solve the real-world, everyday problems of a software engineer instead.
You can have your cake and eat it.
That's like teaching physics via car repair. You'll learn a few ideas, but not much of the science.
It's practical and productive and profitable, which is great, but not really the original goal.
It's not a surprise that most students fail and hate abstract algebra, right? I mean, to learn a concept, you need to know more about the concept itself in a real-world context.
SICP shows a real-world codebase. It's real-world programs that build up to implementing real-world programming languages.
Why would you validate if you can parse? If you have a decent chunk of experience in implementing business logic then you know that your quality of life will be destroyed by switches and other inscrutable wormhole techniques up until the point where you learn to use and build around rule engines. SICP shows you how you can tailor your own rule engine, so you won't have to get the gorilla and the jungle when you reach for one in an enterprisey library.
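On the "parse rather than validate" point, a minimal Python sketch of the idea (the Email type and helper names are invented for illustration): parse raw input once into a type that can only hold valid values, and downstream code never re-checks.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Email:
        # Constructed only via parse_email, so holding an Email is
        # itself the evidence that validation already happened.
        value: str

    def parse_email(raw: str) -> Email:
        cleaned = raw.strip().lower()
        if "@" not in cleaned or cleaned.startswith("@") or cleaned.endswith("@"):
            raise ValueError(f"not an email address: {raw!r}")
        return Email(cleaned)

    def send_welcome(address: Email) -> None:
        # No re-validation needed: the type guarantees well-formedness.
        print(f"sending welcome mail to {address.value}")

    send_welcome(parse_email("  Alice@example.com "))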