I used a few different Lisps for pet projects, and honestly the biggest problem with Lisps for me today is typing. ADTs (and similar systems) are just super helpful for long-term development, for multiple people working on the same code, for big projects or projects with multiple pieces (like frontend + backend), and they help AI tools as well.
And this is not something Lisps have explored much (is there anything at all apart from Racket's typed dialect?), probably due to their dynamic nature. This is why I dropped Lisps in favour of Rust and TypeScript.
SBCL has fine type checking. Some is done at compile time -- you get warnings if something clearly can't be type correct -- but otherwise when compiled with safety 3 (which people tend to make the default) types are checked dynamically at run time. You don't get the program crashing from mistyping as one would in C.
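A minimal sketch of the two modes described above, assuming SBCL (ADD-ONE is a made-up example function):

```lisp
;; Sketch, assuming SBCL. High safety turns type declarations into checks.
(declaim (optimize (safety 3)))

(defun add-one (x)
  (declare (type integer x))   ; compile-time hint *and* runtime check
  (+ x 1))

;; Compile time: SBCL warns when a call site clearly cannot be
;; type-correct, e.g. when compiling a function containing (add-one "oops").

;; Run time: the declared type is checked dynamically, so a bad argument
;; signals a TYPE-ERROR condition instead of corrupting memory.
(handler-case
    (add-one (read-from-string "\"oops\""))  ; hide the literal from the compiler
  (type-error (c)
    (format t "caught: ~a~%" c)))
```

The READ-FROM-STRING indirection is only there so the compiler cannot see the bad constant and reject it with a warning first, letting the runtime check fire instead.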
> You don't get the program crashing from mistyping as one would in C.
Sorry, but I don't compare to C anymore. I want the same safety as in Rust or TypeScript: exhaustive checks, control-flow type narrowing, mapped types and so on. Some detection at compile time is not enough; since there is a way to eliminate all type errors, I want to eliminate them all, not some.
Why stop there? Why not demand proof of correctness? After all, that's now within reach using LLMs producing the formal specification from a simple prompt, right?
SBCL does a fine job in detecting type mismatches within the frame of ANSI Common Lisp, not Haskell. While I would agree that a strict type system eases long term maintenance of large systems, for "explorative computing", proof-of-concepts, RAD or similar that tends to get in the way. And if such proof-of-concept looks promising, then there is no shame in rewriting it in a language more suitable for scale and maintenance.
Proof of correctness would be fantastic, but I have yet to see it in action. LLMs could maybe do it for a simple program, but I'm pretty sure they will fail on large codebases (due to context limits), and types help a lot in that case.
> for "explorative computing", proof-of-concepts, RAD or similar that tends to get in the way
I would even argue that it's better to have a typed system even for PoCs, because things change fast, and that very often leads to type errors that need to be discovered. At least when I worked that way, I would often run manual tests after changes just to check that things still worked; with typing in place, that time can also be minimised.
> You don't get the program crashing from mistyping as one would in C.
Uh, isn't that exactly what happens with runtime type checking? Otherwise what can you do if you detect a type error at runtime other than crash?
In C the compiler tries to detect all type errors at compile time, and if you do manage to fool the compiler into compiling badly typed code, it won't necessarily crash; it'll be undefined behavior (which includes crashing but can also do worse).
> Uh, isn't that exactly what happens with runtime type checking?
No, it raises an exception, which you can handle. In some cases one can even resume via restarts. This is versus C, where a miscast pointer can cause memory corruption.
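To make that concrete, here is a small sketch (SAFE-SUM is a made-up name) of catching a runtime type error as an ordinary Common Lisp condition rather than crashing:

```lisp
;; Sketch: a runtime type error in CL is a condition you can handle.
;; HANDLER-CASE intercepts the TYPE-ERROR; with HANDLER-BIND plus
;; restarts you could even resume execution instead of unwinding.
(defun safe-sum (xs)
  (handler-case
      (reduce #'+ xs)
    (type-error (c)
      ;; TYPE-ERROR-DATUM is the standard accessor for the bad value.
      (format t "bad element: ~a~%" (type-error-datum c))
      nil)))

(safe-sum '(1 2 "three"))  ; prints the offending datum and returns NIL
```

The point being illustrated: control reaches the handler with full information about what went wrong, rather than the process dying or memory being corrupted.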
Again, a proper C compiler in combination with sensible coding standards should prevent "miscast pointers" at compile time / static analysis. Anyway, being better than C at typing / memory safety, is a very low bar to pass.
I'm curious in what situation catching a typing exception would be useful though. The practice of catching exceptions due to bugs seems silly to me. What's the point of restarting the app if it's buggy?
Likewise, trying to catch exceptions due to for example dividing by zero is a strange practice. Instead check your inputs and throw an "invalid input" exception, because exceptions are really only sensible for invalid user input, or external state being wrong (unreadable input file, network failures, etc.).
If "just don't do the bad things" is a valid argument, why do we need type checking at all?
Exceptions from type checking are useful because they tell you exactly where something has screwed up, making fixing the bug easier. It also means problems are reduced from RCEs to just denial of service. And I find (in my testing) that it enables such things as rapid automated reduction of inputs that stimulate such bugs. For example, the SBCL compiler is such that it should never throw an exception even on invalid code, so when it does, one can automatically prune down a lambda expression passed to the COMPILE function to find a minimal compiler-bug-causing input. This also greatly simplifies debugging.
A general reason I look down on static type checking is that it's inadequate. It finds only a subset, and arguably a not very interesting subset, of bugs in programs. The larger set of possible bugs still has to be tested for, and for a sufficient testing procedure for that larger set you'll stimulate the typing bugs as well.
So, yeah, if you're in an environment where you can't test adequately, static typing can act as a bit of a crutch. But your program will still suck, even if it compiles.
The best argument for static typing IMO is that it acts as a kind of documentation.
You can run Coalton on Common Lisp. It has a type system similar to Haskell’s. And interops very well with pure Common Lisp. It also modernizes function and type names in the process so it makes Lisp more familiar to modern developers. I tried it in a small project and was impressed.
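A sketch of what that looks like, based on Coalton's documented surface syntax (INCREMENT is a made-up example; assumes the library is loaded, e.g. via Quicklisp):

```lisp
;; Sketch, assuming (ql:quickload "coalton") has succeeded.
;; Coalton code lives inside COALTON-TOPLEVEL forms; types are declared
;; up front and checked statically, Haskell-style.
(in-package #:coalton-user)

(coalton-toplevel
  (declare increment (Integer -> Integer))
  (define (increment x)
    (+ 1 x)))

;; Interop from plain Common Lisp goes through the COALTON macro:
;; (coalton (increment 41))
```

Treat the details as an approximation of the API rather than a verified example; the Coalton documentation is the authority on the exact forms.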
Thanks everyone for opening my mind. It is actually being taught in a core course at my college, and it is a very different course from the others I have taken. The course teaches the features of Lisp and how they are unique and useful. We are also solving simple problems: Fibonacci, number patterns, list patterns, recursion vs. iteration, etc.
Previously:
SBCL (16 days ago) https://news.ycombinator.com/item?id=47140657 (107 comments)
Porting SBCL to the Nintendo Switch https://news.ycombinator.com/item?id=41530783 (81 comments)
An exploration of SBCL internals https://news.ycombinator.com/item?id=40115083 (106 comments)
Arena Allocation in SBCL https://news.ycombinator.com/item?id=38052564 (32 comments)
SBCL (2023) https://news.ycombinator.com/item?id=36544573 (167 comments)
Parallel garbage collection for SBCL [pdf] https://news.ycombinator.com/item?id=37296153 (45 comments)
SBCL 2.3.5 released https://news.ycombinator.com/item?id=36107154 (31 comments)
Using SBCL Common Lisp as a Dynamic Library (2022) https://news.ycombinator.com/item?id=31054796 (67 comments)
etc
Hacker News now runs on top of Common Lisp https://news.ycombinator.com/item?id=44099006 (435 comments)
(this was mentioned below but repeated here)
> Current Common Lisp implementations can usually support both image-oriented and source-oriented development. Image-oriented environments (for example, Squeak Smalltalk) have as their interchange format an image file or memory dump containing all the objects present in the system, which can be later restarted on the same or distinct hardware. By contrast, a source-oriented environment uses individual, human-readable files for recording information for reconstructing the project under development; these files are processed by the environment to convert their contents into material which can be executed.
Am I reading this right that people can (and do??) use images as a complete replacement for source code files?
All the magic of Smalltalk is in the development tools, which work by means of introspection into the running image; writing source code in text files causes you to lose all that. Add to that the fact that Smalltalk, when written as source files, is quite verbose.
Smalltalk does have a standard text source file format, but that format is best described as human-readable, not human-writable. It is essentially a sequence of text blocks representing operations applied to the image to bring it to a particular state, interspersed with "data" (mostly method source code, though the format can store arbitrary stuff in the data blocks).
One exception to this is GNU Smalltalk, which is meant to be used with source files and to that end uses its own, saner source file syntax.
Fascinating. Thanks for the explanation.
The image is not stand-alone: there should also be a sources file and a changes file (and of course a virtual machine).
"When you use a browser to access a method, the system has to retrieve the source code for that method. Initially all the source code is found in the file we refer to as the sources file. … As you are evaluating expressions or making changes to class descriptions, your actions are logged onto an external file that we refer to as the changes file. If you change a method, the new source code is stored on the changes file, not back into the sources file. Thus the sources file is treated as shared and immutable; a private changes file must exist for each user."
1984 "Smalltalk-80 The Interactive Programming Environment" page 458
The image is a cache. For a reproducible process, version and archive source code.
1984 "Smalltalk-80 The Interactive Programming Environment" page 500
"At the outset of a project involving two or more programmers: Do assign a member of the team to be the version manager. … The responsibilities of the version manager consist of collecting and cataloging code files submitted by all members of the team, periodically building a new system image incorporating all submitted code files, and releasing the image for use by the team. The version manager stores the current release and all code files for that release in a central place, allowing team members read access, and disallowing write access for anyone except the version manager."
> Am I reading this right that people can (and do??) use images as a complete replacement for source code files?
Images are not replacements of source code files. Images are used in addition to source code files. Source code is checked in. Images are created and shipped. The image lets you debug things live if you've got to. You can introspect, live debug, live patch and do all the shenanigans. But if you're making fixes, you'd make the changes in source code, check it in, build a new image and ship that.
In Smalltalk you make the changes in the image while it is running. The modern process is that you then export the changes into a version control system. Originally you only had the image itself. Apparently Squeak has objects inside that go back to 1977: https://lists.squeakfoundation.org/archives/list/squeak-dev@...
Does "originally" mean before release from the offices and corridors of Xerox Palo Alto Research Center?
Perhaps further back: before change sets, before fileOut, before sources and change log ? There's a lot of history.
I wonder if the Digitalk Smalltalk implementation "has objects inside that go back to 1977".
By "originally" I meant before the use of version control systems became common and expected. I don't know the actual history here, but I just found this thread that looks promising, with some interesting details: https://news.ycombinator.com/item?id=15206339 (it also discusses Lisp, which brings this subthread back in line with the original topic :-)
I've never heard of anybody doing it, but in theory it could work.
SBCL (and maybe others) use a "core image" to bootstrap at startup. It's not unheard of for people to build a custom core image with the packages they use a lot from the REPL. It's become less common as computers have gotten faster, and most people use systems like Quicklisp or Roswell to automatically get updates and load from source. Of course the SBCL core image is generated from the compiler source code when building it, and the dependencies are loaded and compiled from source initially, too, so there's still going to be source code files around.
You could, in theory, start with the compiled SBCL image, exclusively type code into the REPL, save the image and exit, and then restart with the new image and continue adding code via the REPL. I really doubt anybody uses that workflow exclusively, though. At the very least, most people will eventually save the code they entered in the REPL into a source file once they've debugged it and got it working.
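A sketch of that image-only workflow in SBCL (GREET is a made-up example; SB-EXT:SAVE-LISP-AND-DIE is the real entry point):

```lisp
;; Sketch of the image-only workflow: define things at the REPL,
;; then snapshot the entire image and restart from the snapshot.
(defun greet (name)
  (format t "hello, ~a~%" name))

;; Snapshot everything defined so far into a core file and exit:
;; (sb-ext:save-lisp-and-die "my-session.core")

;; Later, resume exactly where you left off:
;;   $ sbcl --core my-session.core
;;   * (greet "world")   ; still defined, no source file involved
```

Everything you defined, including data sitting in global variables, survives the round trip, which is exactly why the image can (in principle, if unwisely) stand in for source files.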
I imagine some systems may start out by tinkering with definitions in the REPL in the live system, and then as it grows, the best definition of the system is found in the current state of the REPL, rather than any more formal specification of the system – including by source code.
At some point maybe the system state will be captured into source code for longer term maintenance, but I can totally see the source code being secondary to the current state of the system during exploration.
After all, that's how I tend to treat SQL databases early on. The schema evolves in the live server, and only later do I dump it into a schema creation script and start using migrations to change it.
> After all, that's how I tend to treat SQL databases early on.
Ah, that’s a very helpful analogy/parallel that didn’t occur to me. Thank you!
Maybe you understood image as in photo image instead of image as in memory image (like a disk image): a glorified memory dump, more or less.
I understood it as the latter.
SBCL seems pretty actively developed. A proposal for coroutines implementation appeared recently and AFAIK it is being actively discussed and improved upon.
And arena support, and a parallel GC... there's always something exciting and promising coming up.
The proprietary implementations are also quite good.
Arena support would make it amazing for game dev. Yes please!
To be clear, we can talk in present tense: https://github.com/sbcl/sbcl/blob/master/doc/internals-notes...
discussion (2023): https://news.ycombinator.com/item?id=38052564
To note that you will find arena like stuff on old Lisps, like those from Xerox, TI and Genera.
Do you have a link to the proposal and the discussion? I am quite interested to see the implementation details. Thanks!
It's on the devel mailing list: https://sourceforge.net/p/sbcl/mailman/sbcl-devel/thread/CAF...
I'm the author. https://atgreen.github.io/repl-yell/posts/sbcl-fibers/
This is fantastic! Godspeed.
Incredible! Is this ready for at scale production use?
No. I have yet to propose the patches formally. The SBCL maintainers are reviewing the high-level proposal (on my blog) first. You can try the implementation, however. There's a pointer to the repo/branch on my blog. I need to build a proper benchmarking framework and publish some real numbers that people can reproduce before I am confident enough to submit the patches for review.
Let me know if you try it out. I would love some feedback (via github)
Would it work with the parallel GC feature?
I haven't really looked into it, but I'm hopeful it can be made to work.
Here's an SBCL coroutines talk at the European Lisp Symposium from 2024: https://www.youtube.com/watch?v=S2nVKfYJykw
Yeah, so I believe that proposal kind of petered out at the proof-of-concept phase, but the author of the one being discussed references it.
It came up on the SBCL mailing list too, and its author has been commenting there as well. Seems like it has some legs! It would be a very nice feature to have.
SBCL is lovely and very well optimised. I've been using it for many personal projects, as it just works and is so easy to work with and debug. I cry a lot when I have to use 'modern' things. But those make money; they are just far worse.
As someone who fell in love with Lisp via Scheme and SICP, I think I agree with the push for minimalism, and I am a bit dubious about the changes in Lisp-2...
Is that only recency bias? Because I learned Scheme first? When I try CL, I find my mind resisting some things for purely aesthetic reasons.
If someone put a gun to my head and demanded a deliverable quickly, I would go with CL to save my life, of course; but for my own personal pleasure I will always prefer Scheme.
As someone who also first got introduced to Lisp through SICP and Scheme, I don't really care about Lisp-1 versus Lisp-2, but I don't much fancy minimalism. I switched to CL for the type declarations and just got used to funcall and #' (sharp-quote); and minimalism means the things a larger language would provide out of the box (say, hash tables) you either need to write yourself or find a library for. Hence the various Schemes (Guile, Gauche, Chicken) carrying a ton of libraries beyond the standard.
In fact, I'd say CL is too minimalist, hence the CDR (Common Lisp Document Repository, a bit like the SRFIs) and various libraries which form a sort of unofficial extended core (Alexandria, Bordeaux-Threads, CL-PPCRE, ...)
But there's a value in having a defined, stable language definition. Being able to rely on the basic language not changing is a feature, not a bug. Though it does mean you have to sometimes search for a good lib if you don't have a feature built into the language.
Did you respond to the wrong comment?
My comment said nothing about language permanence, though I would say that some measure of evolution can sometimes be for the better. I doubt many people would prefer programming in Java 1.4 over Java 21.
I am learning Scheme (DrRacket), which is, I think, derived from Lisp. What is this actually used for, and do people build anything with Lisp???
Yes, people do build things with Lisp; that is why there are at least two commercial Common Lisp systems around, LispWorks and Allegro Common Lisp.
Google Flights came from the acquisition of a company using Lisp, ITA Software; they even have a Lisp style guide.
https://google.github.io/styleguide/lispguide.xml
In Portugal, Siscog used to be a Lisp shop, no idea nowadays.
Then you have the Clojure-based companies, of which Datomic and Nubank are two well-known ones; even if Clojure is not a proper Lisp, it still belongs to the same lineage.
Yes, SISCOG is still kicking. From last year's European Lisp Symposium: https://www.youtube.com/watch?v=hMVZLo1Ub7M
Obrigado. Thanks.
I was aware of the company when I was still living in Lisbon, a few decades ago.
This very website that you are using right now, Hacker News, runs on SBCL.
It runs on Arc, which itself is implemented with SBCL.
It switched from Racket in late 2024. Context and discussion: https://news.ycombinator.com/item?id=44099006 (9 months ago, 435 comments)
Well, besides pgloader, which achieved a 20~30x speedup via a rewrite from Python to Common Lisp (<https://tapoueh.org/blog/2014/05/why-is-pgloader-so-much-fas...>), there is also this little organisation called NASA which has a collection of theorem proving libraries called variously PVSLib or NASALib (<https://github.com/nasa/pvslib>).
There is a lot more as well, of course, but these two are clear examples of Common Lisp being used in 'the real world'.
Lisp languages are niche, but frequently used as seen in the great projects mentioned in this thread. Since 1982, I have been employed about 20% of my time using mostly Common Lisp and for a few years Clojure. Racket is a great language and system for learning and having fun, so, have fun!
*I am learning Scheme (DrRacket), which is, I think, derived from Lisp*
It _is_ Lisp. Namely a Lisp-1, versus what one would consider a Lisp like Common Lisp, which is a Lisp-2. The difference is mostly that in a Lisp-1 everything is in a single namespace, whereas a Lisp-2 has more than one. So in Scheme you cannot have a function and a variable with the same name; in Common Lisp you can. Other differences are (syntactically) in how you pass functions around and call them. There are other things, of course, but nothing that's a big deal. Scheme is simpler and suitable for teaching / getting into Lispen. I'd argue it also makes a rather well-equipped DSL.
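The namespace point can be shown in a few lines of Common Lisp (DOUBLE is a made-up example name):

```lisp
;; Sketch of the Lisp-2 point: in Common Lisp, one symbol can name a
;; variable and a function at the same time, so you disambiguate with
;; FUNCALL and #' when treating the function as a value.
(defvar double 10)            ; DOUBLE the variable
(defun double (x) (* 2 x))    ; DOUBLE the function -- both coexist

(double double)               ; => 20: function position vs value position
(funcall #'double 21)         ; => 42: #' selects the function binding

;; In a Lisp-1 like Scheme, (define double 10) and a procedure named
;; DOUBLE would clash: one namespace, one binding per name.
```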
"Emacsen" I can understand by analogy with plural forms like "oxen". "Lispen" is new to me.
At Uni we had a stable of Vaxen.
It gets confusing if you speak a Scandi language where -en is the masculine definite article so Emacsen would mean the Emacs, Lispen = the Lisp etc.
Scheme is mostly used for teaching, but there are many production applications out there written in Lisp (Emacs, for example). I'd also like to mention Clojure, which is "lispy" and used by big corporations.
Current Racket runs on top of Chez Scheme, which is maintained by Cisco and reportedly used extensively in commercial products (router firmware/OS, etc.).
https://cisco.github.io/ChezScheme
It was brought into Cisco to do that but the project was eventually shelved, which was a shame because the prototypes delivered some really interesting reliability features. Most Cisco hardware products run firmware written in C. Management systems are often Java and (increasingly) Go. Clojure is used for one of the security product lines, but that was developed as a startup that was later purchased by Cisco. One of the management systems, NSO, is written in Erlang (brought in through the tail-f acquisition). There are certainly a lot of people in Cisco that understand the power of Lisp (I was one), but they are spread out and surrounded by people that just want to push whatever the latest thing is (now Go). Cf. the blub paradox and “worse is better.” They have a lot of legacy code that was written over the last 30 years that powers their devices, and that’s all in C.
Not sure what they are using now, but fairly recently the folks over at grammarly were still using Common Lisp (CCL and SBCL) among other technologies:
https://www.grammarly.com/blog/engineering/running-lisp-in-p...
Examples with screenshots: http://lisp-screenshots.org/
Some companies: https://github.com/azzamsa/awesome-lisp-companies/ (Routific, Google's ITA Software, SISCOG running resource planning in transportation, trading, big data analysis, cloud-to-cloud services, open-source tools (pgloader, re-written from Python), games (Kandria, on Steam and GOG, runs on the Switch), music composition software and apps…)
More success stories: https://www.lispworks.com/success-stories/
I myself run web-apps and scripts for clients. Didn't ditch Django yet but working on that.
Often as a DSL (domain specific language) for extending applications at runtime and/or configuration. I wouldn't start a "serious" project in Lisp today; meaning, a project with investment behind it, but Lisp can be a real joy to work with, and I've used Clojure for countless hobby projects. Clojure, in particular, has lots of deployments around the tech industry, and it's the foundation of the Jepsen DB test suite, Datomic (an immutable DB), and Metabase, as a few examples. Walmart has a non-trivial amount of Clojure running in prod as well.
I used a few different lisps for pet projects and honestly today for me the biggest problem of lisps is the typing. ADTs (and similar systems) are just super helpful when it comes to long term development, multiple people working on code, big projects or projects with multiple pieces (like frontend+backend) and it helps AI tools as well.
And this is not something Lisps have explored much (is there anything at all apart from Typed Racket?), probably due to their dynamic nature. And this is why I dropped Lisps in favour of Rust and TypeScript.
+1 to explore Coalton. It's also talked about on this website and often by its authors.
Links to Coalton and related libraries and apps (included Lem editor's mode and a web playground): https://github.com/CodyReichert/awesome-cl/#typing
SBCL has fine type checking. Some is done at compile time -- you get warnings if something clearly can't be type correct -- but otherwise when compiled with safety 3 (which people tend to make the default) types are checked dynamically at run time. You don't get the program crashing from mistyping as one would in C.
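A small sketch of both modes, assuming SBCL with default settings (ADD-INTS is a made-up example function):

```lisp
;; With (safety 3), type declarations become checked assertions at run
;; time; SBCL also warns at compile time when a call visibly
;; contradicts a declaration.
(declaim (optimize (safety 3)))

(defun add-ints (a b)
  (declare (type integer a b))
  (+ a b))

(add-ints 1 2)          ; => 3
;; (add-ints 1 "two")   ; signals a catchable TYPE-ERROR instead of
;;                      ; silently misbehaving as mistyped C might
```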
> You don't get the program crashing from mistyping as one would in C.
Sorry, but I don't compare to C anymore; I want the same safety as in Rust or TypeScript: exhaustive checks, control-flow type narrowing, mapped types and so on. Some detection at compile time is not enough: since there is a way to eliminate all type errors, I want to eliminate them all, not just some.
Why stop there? Why not demand proof of correctness? After all, that's now within reach using LLMs producing the formal specification from a simple prompt, right?
SBCL does a fine job in detecting type mismatches within the frame of ANSI Common Lisp, not Haskell. While I would agree that a strict type system eases long term maintenance of large systems, for "explorative computing", proof-of-concepts, RAD or similar that tends to get in the way. And if such proof-of-concept looks promising, then there is no shame in rewriting it in a language more suitable for scale and maintenance.
Proof of correctness would be fantastic, but I have yet to see it in action. LLMs could maybe do it for a simple program, but I'm pretty sure they would fail in large codebases (due to context limits), and types help a lot in that case.
> for "explorative computing", proof-of-concepts, RAD or similar that tends to get in the way
I would even argue that it's better to have a typed system even for POCs, because things change fast, which very often leads to type errors that need to be discovered. At least when I did that, I would often run manual tests after changes just to check that things still worked; with typing in place, that time can also be minimised.
> You don't get the program crashing from mistyping as one would in C.
Uh, isn't that exactly what happens with runtime type checking? Otherwise what can you do if you detect a type error at runtime other than crash?
In C the compiler tries to detect all type errors at compile time, and if you do manage to fool the compiler into compiling badly typed code, it won't necessarily crash, it'll be undefined behavior (which includes crashing but can also do worse).
> Uh, isn't that exactly what happens with runtime type checking?
No, it raises an exception, which you can handle. In some cases one can even resume via restarts. This is versus C, where a miscast pointer can cause memory corruption.
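For example, in SBCL (SAFE-DIVIDE is a hypothetical helper, sketched for illustration):

```lisp
;; A runtime type error signals a condition you can handle; execution
;; continues instead of the process crashing.
(defun safe-divide (a b)
  (handler-case (/ a b)
    (type-error (e)
      (format t "handled: ~a~%" e)
      nil)
    (division-by-zero () nil)))

(safe-divide 10 2)    ; => 5
(safe-divide 10 "2")  ; => NIL, after printing the handled condition
(safe-divide 10 0)    ; => NIL
```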
Again, a proper C compiler in combination with sensible coding standards should prevent "miscast pointers" at compile time / static analysis. Anyway, being better than C at typing / memory safety, is a very low bar to pass.
I'm curious in what situation catching a typing exception would be useful though. The practice of catching exceptions due to bugs seems silly to me. What's the point of restarting the app if it's buggy?
Likewise, trying to catch exceptions due to for example dividing by zero is a strange practice. Instead check your inputs and throw an "invalid input" exception, because exceptions are really only sensible for invalid user input, or external state being wrong (unreadable input file, network failures, etc.).
If "just don't do the bad things" is a valid argument, why do we need type checking at all?
Exceptions from type checking are useful because they tell you exactly where something has screwed up, making fixing the bug easier. It also means problems are reduced from RCEs to just denial of service. And I find (in my testing) that it enables such things as rapid automated reduction of inputs that stimulate such bugs. For example, the SBCL compiler is such that it should never throw an exception even on invalid code, so when it does so one can automatically prune down a lambda expression passed to the COMPILE function to find a minimal compiler-bug-causing input. This also greatly simplifies debugging.
A general reason I look down on static type checking is that it's inadequate. It finds only a subset, and arguably a not very interesting subset, of bugs in programs. The larger set of possible bugs still has to be tested for, and for a sufficient testing procedure for that larger set you'll stimulate the typing bugs as well.
So, yeah, if you're in an environment where you can't test adequately, static typing can act as a bit of a crutch. But your program will still suck, even if it compiles.
The best argument for static typing IMO is that it acts as a kind of documentation.
You can run Coalton on Common Lisp. It has a type system similar to Haskell’s. And interops very well with pure Common Lisp. It also modernizes function and type names in the process so it makes Lisp more familiar to modern developers. I tried it in a small project and was impressed.
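For a taste, here is a small sketch in Coalton's documented style (package names and prelude functions taken from the Coalton README; exact details may vary across versions):

```lisp
;; Assumes Coalton is installed, e.g. via (ql:quickload "coalton")
(defpackage #:typed-demo
  (:use #:coalton #:coalton-prelude))
(in-package #:typed-demo)

(coalton-toplevel
  ;; The declared type is checked statically, Haskell-style; a call
  ;; such as (double-all "oops") is rejected at compile time.
  (declare double-all ((List Integer) -> (List Integer)))
  (define (double-all xs)
    (map (fn (x) (* 2 x)) xs)))
```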
Thanks everyone for opening my mind. It is actually being taught in a core course at my college, and it is a very different course from the others I have taken. The course teaches the features of Lisp and how they are unique and useful. We are also solving simple problems like Fibonacci, number patterns, list patterns, recursion vs. iterative solutions, etc.
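The recursion-vs-iteration exercise looks roughly like this in Common Lisp (FIB-REC and FIB-ITER are illustrative names):

```lisp
;; Recursive: mirrors the mathematical definition, but makes O(2^n) calls.
(defun fib-rec (n)
  (if (< n 2)
      n
      (+ (fib-rec (- n 1)) (fib-rec (- n 2)))))

;; Iterative: a simple loop, O(n) steps and constant space.
(defun fib-iter (n)
  (let ((a 0) (b 1))
    (dotimes (i n a)          ; returns A after N steps
      (psetf a b              ; PSETF updates both bindings in parallel
             b (+ a b)))))

(fib-rec 10)   ; => 55
(fib-iter 10)  ; => 55
```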
Go through one of the more famous books to learn Scheme, in case you haven't yet been introduced into it.
"Structure and Interpretation of Computer Programs"
https://web.mit.edu/6.001/6.037/sicp.pdf
See https://planet.racket-lang.org/package-source/neil/sicp.plt/... as well.
That is the same book; the prof is going through it in the course.
> what is this actually used for
If you're interested in LeetCode, Racket is one of their accepted languages.
Jonathan Blow: "It’s about a compiler written in Python FFS."
Missing the joke here. The PDF is about a Common Lisp compiler, written in Common Lisp, C, and assembly for good measure.
I don’t get it either. The CMUCL compiler is named Python, no relation to the prominent language. Not sure that’s what this was about though.
That was his confusion.
Seems some rando posted something factually false on twitter, got corrected and apologized.
https://x.com/Jonathan_Blow/status/2028906867368550563
Here you go: https://nitter.net/Jonathan_Blow/status/2028906867368550563
https://twitter.com/Jonathan_Blow/status/2028903268265672728
Is there any way to see the whole conversation, and not just one specific reply in the middle of it?
a little bit more: https://xcancel.com/Jonathan_Blow/status/2028906867368550563