The Wayback Machine - https://web.archive.org/web/20200728192152/http://planet.lisp.org/
Planet Lisp

Alexander Artemenko: cl-difflib

· 44 hours ago

This library is able to compute differences between sequences. For example, if we want to generate a unified diff for two lists of strings, we can do:

POFTHEDAY> (difflib:unified-diff
            t
            '("one" "two" "three" "four" "five" "six")
            '("one" "three" "four" "seven" "eight")
            :test-function 'equal)

---  
+++  
@@ -1,6 +1,5 @@
 one
-two
 three
 four
-five
-six
+seven
+eight

It is also possible to provide filenames:

POFTHEDAY> (difflib:unified-diff
            t
            '("one" "two" "three" "four" "five" "six")
            '("one" "three" "four" "seven" "eight")
            :test-function 'equal
            :from-file "a.txt"
            :to-file "b.txt")

--- a.txt 
+++ b.txt 
@@ -1,6 +1,5 @@
 one
-two
 three
 four
-five
-six
+seven
+eight

There is also a lower-level API which can diff arbitrary objects. Here is an example of how to get a diff of two lists of symbols:

POFTHEDAY> (defparameter *diff*
             (make-instance 'difflib:sequence-matcher
                            :a '(:one :two :three :four :five :six)
                            :b '(:one :three :four :seven :eight)))

POFTHEDAY> (difflib:get-opcodes *diff*)
(#<DIFFLIB:OPCODE :EQUAL   0 1 0 1>
 #<DIFFLIB:OPCODE :DELETE  1 2 1 1>
 #<DIFFLIB:OPCODE :EQUAL   2 4 1 3>
 #<DIFFLIB:OPCODE :REPLACE 4 6 3 5>)

These "opcodes" tell us what to do with subsequences of the two lists. For example, the REPLACE opcode from the results tells us that:

;; This should be replaced:
POFTHEDAY> (subseq '(:one :two :three :four :five :six)
                   4 6)
(:FIVE :SIX)

;; with:
POFTHEDAY> (subseq '(:one :three :four :seven :eight)
                   3 5)
(:SEVEN :EIGHT)

;; The same as we saw in the text output at the beginning:
--- a.txt 
+++ b.txt 
@@ -1,6 +1,5 @@
 one
-two
 three
 four
-five
-six
+seven
+eight
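These opcode semantics can be checked with a short sketch in plain CL. Note that this does not use cl-difflib itself: the opcode representation as plain lists is mine (cl-difflib uses OPCODE objects), but the index semantics match what is shown above.

```lisp
;; A sketch in plain CL (not using cl-difflib itself): applying
;; difflib-style opcodes, written here as plain lists of the form
;; (:op a-start a-end b-start b-end), rebuilds sequence B from A.
(defun apply-opcodes (a b opcodes)
  (loop for (op a-start a-end b-start b-end) in opcodes
        append (ecase op
                 ;; :EQUAL copies from A, :DELETE drops the A part,
                 ;; :REPLACE and :INSERT take the part from B.
                 (:equal (subseq a a-start a-end))
                 (:delete nil)
                 ((:replace :insert) (subseq b b-start b-end)))))

(apply-opcodes '(:one :two :three :four :five :six)
               '(:one :three :four :seven :eight)
               '((:equal   0 1 0 1)
                 (:delete  1 2 1 1)
                 (:equal   2 4 1 3)
                 (:replace 4 6 3 5)))
;; => (:ONE :THREE :FOUR :SEVEN :EIGHT)
```

Applying the opcodes to the first list reproduces the second list exactly.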

Tomorrow we'll see a library which uses cl-difflib for something more interesting.

Alexander Artemenko: cl-flow

· 2 days ago

CL-Flow is @borodust's library for lock-free parallel code execution. You can combine blocks of code and define how they should be executed - serially or in parallel.

This system is in Quicklisp, but it is not installable because it requires bodge-queue, which is not in Quicklisp yet. You need to install @borodust's distribution first:

POFTHEDAY> (ql-dist:install-dist
             "http://bodge.borodust.org/dist/org.borodust.bodge.txt"
             :replace t :prompt nil)

POFTHEDAY> (ql:quickload '(:simple-flow-dispatcher
                           :cl-flow
                           :log4cl
                           :dexador))

POFTHEDAY> (defun handle-error (e)
             (log:error "Unhandled error" e))

;; This code will help us to run flow blocks
;; in the thread pool:
POFTHEDAY> (defvar *dispatcher*
             (simple-flow-dispatcher:make-simple-dispatcher
              :threads 4
              :error-handler #'handle-error))

POFTHEDAY> (defun run (flow)
             (cl-flow:run *dispatcher* flow))

Here is an example from cl-flow's documentation.

This code will run three blocks of code in parallel and then pass their results into another block:

POFTHEDAY> (run (flow:serially
                  (flow:concurrently
                    (flow:atomically :first ()
                      "Hello")
                    (flow:atomically :second ()
                      "Lisp")
                    (flow:atomically :third ()
                      "World"))
                  ;; Last block will receive results
                  ;; of all previous blocks:
                  (flow:atomically :finally (results)
                    (destructuring-bind (first second third)
                        results
                      (format t "~A ~A ~A~%"
                              first
                              second
                              third)))))
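The semantics of these combinators can be modeled with ordinary closures. The sketch below runs everything sequentially in one thread, while the real cl-flow dispatches blocks on a thread pool; the starred names are mine, chosen to avoid any clash with the real API:

```lisp
;; A toy model of the combinators: a "flow" is a function taking the
;; previous result and returning a new one. Real cl-flow runs
;; CONCURRENTLY* branches in parallel; here we just run them in order.
(defun serially* (&rest flows)
  (lambda (input)
    (reduce (lambda (acc flow) (funcall flow acc))
            flows :initial-value input)))

(defun concurrently* (&rest flows)
  ;; Each branch gets the same input; results are collected in a list,
  ;; which mirrors how the :finally block above receives RESULTS.
  (lambda (input)
    (mapcar (lambda (flow) (funcall flow input)) flows)))

(funcall (serially*
          (concurrently* (constantly "Hello")
                         (constantly "Lisp")
                         (constantly "World"))
          (lambda (results)
            (format nil "~{~A~^ ~}" results)))
         nil)
;; => "Hello Lisp World"
```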

Sadly, the documentation does not cover a more interesting topic - blocks which generate other blocks. Let's try to figure out how to use flow:dynamically to define a web crawler which will process pages recursively:

POFTHEDAY> (defparameter *base-url*
             "https://borodust.org/projects/cl-flow/")

POFTHEDAY> (defun is-external (url)
             (or (str:starts-with-p "mailto:" url)
                 (and (str:starts-with-p "http" url)
                      (not (str:starts-with-p *base-url* url)))))

POFTHEDAY> (defun make-full (url)
             (let ((new-url
                     (cond
                       ((or (str:starts-with-p "http" url)
                            (str:starts-with-p "mailto:" url))
                        url)
                       ((str:starts-with-p "/" url)
                        (concatenate 'string "https://borodust.org" url))
                       (t
                        (concatenate 'string *base-url* url)))))
               (cl-ppcre:regex-replace "#.*" new-url "")))

POFTHEDAY> (defun make-url-processor (already-processed url)
             (flow:serially
               (flow:atomically url ()
                 (log:info "Downloading ~A" url)
                 (dex:get url))

               ;; This block creates new blocks where each
               ;; will process a single url and produce more
               ;; blocks to process links from fetched pages:
               (flow:dynamically (content)
                 (flow:concurrently
                   (loop with page = (ignore-errors
                                      (plump:parse content))
                         for link in (when page
                                       (plump:get-elements-by-tag-name page "a"))
                         for link-url = (plump:attribute link "href")
                         for full-url = (make-full link-url)
                         unless (or (is-external full-url)
                                    (gethash full-url already-processed))
                           collect (progn
                                     (setf (gethash full-url already-processed)
                                           t)
                                     (make-url-processor already-processed
                                                         full-url)))))))

Now we can start it:

POFTHEDAY> (let ((already-processed (make-hash-table :test 'equal)))
             (run
              (make-url-processor already-processed *base-url*))
             already-processed)

 <INFO> [23:10:00] poftheday (make-url-processor body-fu3) -
  Downloading https://borodust.org/projects/
#<HASH-TABLE :TEST EQUAL :COUNT 0 {10073D59A3}>
 <INFO> [23:10:00] poftheday (make-url-processor body-fu3) -
  Downloading https://borodust.org/projects/vinoyaku/
...
 <INFO> [23:10:01] poftheday (make-url-processor body-fu3) -
  Downloading https://borodust.org/projects/cl-bodge/overview/

;; These URLs were processed by our crawler:
POFTHEDAY> (rutils:hash-table-to-alist *)
(("https://borodust.org/projects/vinoyaku/" . T)
 ("https://borodust.org/projects/trivial-gamekit/" . T)
 ("https://borodust.org/projects/cl-flow/" . T)
 ("https://borodust.org/projects/cl-bodge/" . T)
 ("https://borodust.org/projects/" . T)
 ("https://borodust.org/projects/cl-flow/getting-started/" . T)
 ("https://borodust.org/projects/trivial-gamekit/getting-started/" . T)
 ("https://borodust.org/projects/trivial-gamekit/advanced/" . T)
 ("https://borodust.org/projects/trivial-gamekit/manual/" . T)
 ("https://borodust.org/projects/cl-bodge/overview/" . T))

It would be nice if @borodust could do a little code review and check whether I used cl-flow correctly.

Alexander Artemenko: cl-mechanize

· 3 days ago

The README says this library tries to be a clone of Perl's WWW::Mechanize. There is a Python library mechanize as well. It seems stateful web scrapers are popular among some developers.

When I tried cl-mechanize to log into Reddit, it didn't work. The fetch function should discover all forms with their inputs, but the login form was empty. Without the CSRF token I wasn't able to log in.

But I found a fork https://github.com/ilook/cl-mechanize where this problem was fixed.

Let's create a program which will fetch your karma and latest comments from Reddit!

First, we need to log in. Mechanize operates on a browser object which keeps information about the current page and cookies:

POFTHEDAY> (defparameter *browser*
             (make-instance 'cl-mechanize:browser))

POFTHEDAY> (cl-mechanize:fetch "https://www.reddit.com/login/"
                               *browser*)
#<CL-MECHANIZE:PAGE {100A2D7FA3}>

POFTHEDAY> (mechanize:page-forms *)
(#<CL-MECHANIZE:FORM {100A2D4923}>)

POFTHEDAY> (defparameter *login-form* (first *))

POFTHEDAY> (mechanize:form-inputs *login-form*)
(("otp-type" . "app") ("otp" . "") ("password" . "") ("username" . "")
 ("is_mobile_ui" . "False") ("ui_mode" . "") ("frontpage_signup_variant" . "")
 ("is_oauth" . "False")
 ("csrf_token" . "ba038152b86951ab28725c37ed0b3e96d640d083")
 ("dest" . "https://www.reddit.com") ("cookie_domain" . ".reddit.com"))

POFTHEDAY> (setf (alexandria:assoc-value
                  (mechanize:form-inputs *login-form*)
                  "username" :test #'string=)
                 "svetlyak40wt")

POFTHEDAY> (setf (alexandria:assoc-value
                  (mechanize:form-inputs *login-form*)
                  "password" :test #'string=)
                 "********")
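Alexandria's setf-able assoc-value is doing ordinary alist surgery here. With standard CL, updating an existing key looks like the sketch below; the *inputs* variable is a stand-in I made up for the form-inputs alist, and note that assoc-value can also add a missing key, which this sketch cannot:

```lisp
;; *inputs* stands in for the form-inputs alist shown above,
;; with just two of the form's fields.
(defparameter *inputs*
  (list (cons "username" "") (cons "password" "")))

;; Equivalent to the ALEXANDRIA:ASSOC-VALUE setf for a key that is
;; already present: find the pair and replace its CDR in place.
(setf (cdr (assoc "username" *inputs* :test #'string=))
      "svetlyak40wt")

(cdr (assoc "username" *inputs* :test #'string=))
;; => "svetlyak40wt"
```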

However, ilook's version of cl-mechanize does not work either. It fails on form submission with the following error:

"Don't know how to handle method :|post|."

To overcome this issue, we'll set the method to the proper keyword:

POFTHEDAY> (setf (mechanize:form-method *login-form*)
                 :post)

POFTHEDAY> (mechanize:submit *login-form* *browser*)

POFTHEDAY> (cl-mechanize:fetch "https://www.reddit.com/"
                               *browser*)

POFTHEDAY> (cl-ppcre:scan-to-strings
            "(\\d+) karma"
            (mechanize:page-content *))
"708 karma"
#("708")

Now we'll fetch the last 3 comments:

;; Mechanize could be enhanced to handle relative URLs:
POFTHEDAY> (cl-mechanize:fetch "/message/inbox"
                               *browser*)
; Debugger entered on #<DRAKMA:PARAMETER-ERROR
; "Don't know how to handle scheme ~S." {100252AA63}>

;; I found that the page /message/inbox does not contain messages
;; and you have to fetch this instead:
POFTHEDAY> (cl-mechanize:fetch "https://www.reddit.com/message/inbox?embedded=true"
                               *browser*)
; Debugger entered on #<TYPE-ERROR expected-type: STRING datum: NIL>

As you can see, cl-mechanize failed to fetch this simple page. The library is 10 years old and still has so many bugs :(

Also, I found it very unpleasant to work with cxml-stp's API. CL-Mechanize parses the page's body into cxml data structures, and it was hard to figure out how to search for the nodes I needed.

If you know about some other Common Lisp library that is able to keep cookies and suitable for web scraping, please, let me know.

Alexander Artemenko: papyrus

· 4 days ago

In post number 50 I reviewed the literate-lisp system, which allows you to write your Lisp code in org-mode files and load them as usual Lisp files.

Papyrus does a similar trick, but for Markdown files. It adds a named readtable to load Markdown files as usual Lisp code.

The library itself is less than 20 lines of code!

Here is how the hello world looks using a literate programming style and Papyrus:

    (defpackage #:hello-world
      (:use :cl :named-readtables))
    (in-package #:hello-world)

    (in-readtable :papyrus)

# Hello world with Papyrus

As you probably know, every programmer starts his learning of the
new programming language from the "hello world" program.

Simplest hello world program outputs a text "Hello World!" in console and exit.

Here is how we can output this program in Common Lisp:

```lisp
(defun main ()
    (princ "Hello World!")
    (terpri))
```

Now we can load it and run our main function:

POFTHEDAY> (ql:quickload :papyrus)

POFTHEDAY> (load "docs/media/0139/hello.md")
T

POFTHEDAY> (hello-world::main)
Hello World!

Also, you can add markdown files as ASDF system's dependencies!

However, there are a few drawbacks because of Markdown's limitations and Papyrus's simplicity:

  • All files have to start with an indented block of code to set the proper readtable.
  • Emacs does not understand the current package when you are doing C-c C-c.
  • It is impossible to define blocks of lisp code which shouldn't be evaluated.

But the literate-lisp system addresses all these issues.

Alexander Artemenko: freebsd-sysctl

· 5 days ago

This library works on OSX because of its BSD roots, but fails on Linux with the error: "The alien function 'sysctlnametomib' is undefined."

It provides information about the system.

Here is a quick example:

POFTHEDAY> (freebsd-sysctl:sysctl-by-name "kern.hostname")
"poftheday"

POFTHEDAY> (freebsd-sysctl:sysctl-by-name "kern.ostype")
"Darwin"

POFTHEDAY> (freebsd-sysctl:sysctl-by-name "machdep.cpu.core_count")
6

Using this library and cl-spark, reviewed two weeks ago, we can build a simple tool to monitor the CPU's temperature:

POFTHEDAY> (loop with num-probes = 30
                 with probes = ()
                 for current = (freebsd-sysctl:sysctl-by-name
                                "machdep.xcpm.cpu_thermal_level")
                 do (push current probes)
                    (setf probes
                          (subseq probes 0
                                  (min num-probes
                                       (length probes))))
                    (format t "~A ~A~%~%"
                            (cl-spark:spark
                             (reverse probes)
                             :min 30)
                            current)
                    (sleep 15))

█▇▇▇▇▇▇▆▆▅▄▄▃▃▃▃▃▃▃▃▃▃▃▃▄▄▄▄▄▄ 53
...
▃▂▃▃▃▂▂▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▄▅▆▇█ 93
...
▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▃▅▅▆▇██████▇▆▅▄ 66
...
▃▃▃▃▅▅▆▇██████▇▆▅▄▃▂▂▁▁▁▁▁▁▁▁▁ 21
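The bookkeeping in the loop above, keeping only the most recent num-probes values with the newest first, can be isolated into a small helper to see what it does (push-probe is a name I made up for this sketch):

```lisp
;; Keep at most N most recent values, newest first: cons the new
;; value onto the front, then trim the list back to length N.
(defun push-probe (value probes n)
  (let ((probes (cons value probes)))
    (subseq probes 0 (min n (length probes)))))

;; Feeding five probes through a window of size 3 keeps the last 3:
(reduce (lambda (probes value) (push-probe value probes 3))
        '(1 2 3 4 5)
        :initial-value '())
;; => (5 4 3)
```

The loop then reverses the list before passing it to cl-spark, so the chart reads left-to-right in time order.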

To find out the keys supported by your system, run sysctl -a in the console.

Alexander Artemenko: thread.comm.rendezvous

· 6 days ago

This system provides a simple thread synchronization primitive called Rendezvous. It allows exchanging pieces of data between threads.

Here is how it works. You create a rendezvous object. Then you might create one or many threads, and each of them can either "call" the rendezvous and pass it some value, or "accept" a value and return a result.

Accept blocks the calling thread until some other thread calls, and vice versa. This is similar to a thread-safe blocking queue of size 1:

POFTHEDAY> (defparameter *r*
             (thread.comm.rendezvous:make-rendezvous))

POFTHEDAY> (bt:make-thread
            (lambda ()
              (log:info "Waiting for value")
              (let ((value (thread.comm.rendezvous:accept-rendezvous *r*)))
                (log:info "Value received: ~S" value))))

<INFO> [2020-07-21T23:06:56.836061+03:00] Waiting for value

POFTHEDAY> (thread.comm.rendezvous:call-rendezvous
            *r*
            :the-value-to-be-sent-to-the-thread)

<INFO> [2020-07-21T23:07:46.642640+03:00] Value received: :THE-VALUE-TO-BE-SENT-TO-THE-THREAD

I wasn't able to come up with a more complex but short illustration of a case where this synchronization primitive can be useful. If you know one, please share your ideas in the comments.

Alexander Artemenko: log4cl-extras

· 7 days ago

Yesterday I posted about log4cl and promised to tell you about my addons. The library is called log4cl-extras.

The main purpose of log4cl-extras is to make logging suitable for production. It provides a JSON formatter, a macro to capture context variables, and a macro to log unhandled tracebacks.

Capturing context variables makes each log entry self-contained. Also, this way you can do a "request_id" trick to bind many related log messages into a single trail.
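Under the hood, capturing a context field can be modeled as a special variable rebound for a dynamic extent. Here is a minimal sketch of that idea; it is an illustration only, not log4cl-extras' actual implementation:

```lisp
;; A toy model of context capture: a special variable holds the
;; current fields, and a macro rebinds it around a body of code.
(defvar *fields* nil
  "Plist of context fields visible to the logging layer.")

(defmacro with-fields ((&rest pairs) &body body)
  "Attach PAIRS (alternating keyword/value forms) to *FIELDS*
within the dynamic extent of BODY."
  `(let ((*fields* (append (list ,@pairs) *fields*)))
     ,@body))

;; Nested scopes accumulate fields, and they disappear on exit:
(with-fields (:request-id "abc-123")
  (with-fields (:user "bob")
    *fields*))
;; => (:USER "bob" :REQUEST-ID "abc-123")
```

Because the binding is dynamic, anything called inside the body, such as a logging statement several frames down, sees the captured fields.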

To show you how the "request_id" trick works, let me create a simple Clack application which will handle a request, simulate a query to the database, and use logging.

Pay attention to how it uses log4cl-extras/context:with-fields to capture the request-id variable:

POFTHEDAY> (defun get-current-user ()
             "This is a fake function simulating SQL queries to database."
             (log:debug "SELECT * FROM users WHERE ...")
             (values "Bob"))

POFTHEDAY> (defun handle-request (env)
             (let* ((headers (getf env :headers))
                    (request-id (or (gethash "x-request-id" headers)
                                    (format nil "~A" (uuid:make-v4-uuid)))))
               (log4cl-extras/context:with-fields (:request-id request-id)
                 (log:debug "Processing request")
                 (let ((user (get-current-user)))
                   (list 200 '(:content-type "text/plain")
                         (list (format nil "Hello ~A!" user)))))))

POFTHEDAY> (defparameter *server*
             (clack:clackup 'handle-request
                            :port 8081))
Hunchentoot server is started.
Listening on 127.0.0.1:8081.

Now we can initialize logging and make a few HTTP requests:

POFTHEDAY> (log4cl-extras/config:setup
            '(:level :debug
              :appenders ((this-console :layout :plain))))

POFTHEDAY> (dex:get "http://localhost:8081/")
<DEBUG> [2020-07-20T23:23:28.293441+03:00] Processing request
  Fields:
    request-id: 0E0D035A-B24F-4E69-806C-ACACE6C6B08E
<DEBUG> [2020-07-20T23:23:28.295783+03:00] SELECT * FROM users WHERE ...
  Fields:
    request-id: 0E0D035A-B24F-4E69-806C-ACACE6C6B08E
"Hello Bob!"

Our app is able to use a request id passed in the X-Request-ID HTTP header. This is useful when you have many microservices and want a single trail of all their logs:

POFTHEDAY> (dex:get "http://localhost:8081/"
                    :headers '(("X-Request-ID" . "Custom ID :)))))")))
<DEBUG> [2020-07-20T23:29:04.123354+03:00] Processing request
  Fields:
    request-id: Custom ID :)))))
<DEBUG> [2020-07-20T23:29:04.123412+03:00] SELECT * FROM users WHERE ...
  Fields:
    request-id: Custom ID :)))))
"Hello Bob!"

This plain text log format is convenient when you are debugging the application. But in production you either want to grep log messages or to feed them to the Elastic Search for further indexing.

In both cases it is more convenient to write each message as a single-line JSON object:

POFTHEDAY> (log4cl-extras/config:setup
            '(:level :debug
              :appenders ((this-console :layout :json))))

POFTHEDAY> (dex:get "http://localhost:8081/")
{"fields":{"request-id":"20A7..."},"level":"DEBUG","message":"Processing request","timestamp":"2020-07-20T23:32:34.566029+03:00"}
{"fields":{"request-id":"20A7..."},"level":"DEBUG","message":"SELECT * FROM users WHERE ...","timestamp":"2020-07-20T23:32:34.566167+03:00"}
"Hello Bob!"

log4cl-extras also contains a macro to capture unhandled errors along with their tracebacks. It is also very useful for production. I'm using this facility to capture errors in Ultralisp.org.

Read log4cl-extra's documentation to learn more:

https://github.com/40ants/log4cl-extras

Alexander Artemenko: log4cl

· 8 days ago

It is a mystery why I didn't review any logging library so far! Probably because there is a great article which compares 8 logging libraries.

Today I only want to mention that my library of choice is log4cl. Mostly because of its great integration with SLIME/SLY, which helps when you have a lot of "debug" logging in the app but at some moment want to turn it on only for a function or a package.

Log4cl has great documentation which demonstrates all its features. Here I'll provide only a small example of its default logging output and ability to process additional arguments:

POFTHEDAY> (log:config :sane2 :debug)

POFTHEDAY> (defun foo (first-arg second-arg)
             (log:info "Entering into the foo with" first-arg "and" second-arg))

POFTHEDAY> (foo 100500 "Blah")

 <INFO> [21:04:10] poftheday (foo) -
  Entering into the foo with FIRST-ARG: 100500 and SECOND-ARG: "Blah" 
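The way the output shows FIRST-ARG: 100500 means the macro sees the argument forms themselves, not just their values. That trick can be modeled in a few lines; this is only a sketch of the idea, not log4cl's implementation, and show-args is a name I made up:

```lisp
;; SHOW-ARGS interleaves literal strings with NAME: VALUE pairs,
;; mimicking log4cl's default argument formatting.
(defmacro show-args (&rest forms)
  `(format nil "~{~A~^ ~}"
           (list ,@(loop for form in forms
                         collect (if (stringp form)
                                     form
                                     ;; Non-strings: print the form itself
                                     ;; followed by its runtime value.
                                     `(format nil "~A: ~S" ',form ,form))))))

(let ((first-arg 100500) (second-arg "Blah"))
  (show-args "Entering into the foo with" first-arg "and" second-arg))
;; => "Entering into the foo with FIRST-ARG: 100500 and SECOND-ARG: \"Blah\""
```

This works because the macro receives the unevaluated symbols, so it can quote them for printing while also evaluating them for their values.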

;; Now I want to process arguments in a format-like style:

POFTHEDAY> (defun foo (first-arg second-arg)
             (log:info "Entering into the (foo ~A ~A)" first-arg second-arg))

POFTHEDAY> (foo 100500 "Blah")
 <INFO> [21:04:53] poftheday (foo) - Entering into the (foo 100500 Blah)

Tomorrow I'll show you the addons I've made to make log4cl even more suitable for production applications.

Alexander Artemenko: taglib

· 9 days ago

The first post in the #poftheday series was about the cl-mpg123 library. It failed attempting to process the metadata of an MP3 file. Today we'll try taglib, a pure CL library for processing MP3, MP4, and FLAC tags.

Let's try it on the file from the zero post!

POFTHEDAY> (audio-streams:open-audio-file
            "docs/media/0000/file.mp3")
#<ID3:MP3-FILE {10036524E3}>

POFTHEDAY> (abstract-tag:show-tags *)
/Users/art/projects/poftheday/docs/media/0000/file.mp3
1 frame read, MPEG 1, Layer III, CBR, sample rate: 44,100 Hz, bit rate: 320 Kbps, duration: 7:15
    album: Rogue's Gallery: Pirate Ballads, Sea Songs, and Chanteys
    artist: Baby Gramps
    comment: ((0 eng  NIL))
    compilation: no
    cover: (Size: 9,870)
    genre: Folk
    lyrics:  
    title: Cape Cod Girls
    track: (1 23)
    year: 2006
NIL

It is also possible to access specific fields:

POFTHEDAY> (audio-streams:open-audio-file
            "docs/media/0000/file.mp3")
#<ID3:MP3-FILE {10027E6093}>

POFTHEDAY> (id3:id3-header *)
#<ID3:ID3-HEADER {10027E60D3}>

POFTHEDAY> (id3:v21-tag-header *)
#<ID3:V21-TAG-HEADER {10027E6363}>

POFTHEDAY> (id3:album *)
"Rogue's Gallery: Pirate Ballad"

POFTHEDAY> (id3:title **)
"Cape Cod Girls"

Seems it works very well!

ABCL Dev: ABCL 1.7.1

· 10 days ago
With gentle prodding, the Bear has released ABCL 1.7.1, a decidedly minor release correcting a few bugs resulting from the overhaul of arrays specialized on unsigned byte types.

The brief list of CHANGES is available for your perusal.

Vsevolod Dyomkin: Programming Algorithms 2nd Edition

· 10 days ago

Apress — the most dedicated publisher of Common Lisp books, famous for giving the world "Practical Common Lisp" and "Common Lisp Recipes" — has approached me to publish "Programming Algorithms", and, after some consideration, I have agreed. So, the book will be released under the title "Programming Algorithms in Lisp" and with some slight modifications to the content.

It was not an easy decision to make. Ultimately, my goal for the book is to make it as widely read as possible. In the three months since it was published on Leanpub, it was downloaded more than 1500 times, and almost 250 people have also donated some money in its support. The paperback book was shipped to around 40 locations around the globe: even to Australia and Colombia. Besides, I have received lots of positive feedback and some improvement suggestions. I'm very grateful and happy that it has seen such positive reception.

In my opinion, the book has the potential to reach at least an order of magnitude more readers. However, to achieve that, a targeted promotion effort is necessary. I have already mostly exhausted the capacity of the free PR channels I had access to (such as Hacker News, Reddit, and Twitter). I had a long-term promotion strategy, but it required spending time and (possibly) financial resources that could be used elsewhere.

The Apress edition of the book will not be free, but it will have the full power of this respected publisher behind it. So, my hope is that thus it will reach an even wider audience. Very soon I will have to take down the free version of the book, so this is the last chance to download it (if you or some of your friends planned to do it). The book webpage will remain active and will collect relevant information and news, so stay tuned...

vseloved.github.io/progalgs

Alexander Artemenko: cl-irc

· 11 days ago

Today we'll write a simple bot to keep a history of the IRC channel. IRC is a chat protocol that existed before Slack, Telegram, etc.

For the test I've installed a local lisp server on my OSX:

[poftheday:~]% brew install ngircd
Updating Homebrew...
...
==> Caveats
==> ngircd
To have launchd start ngircd now and restart at login:
  brew services start ngircd

[poftheday:~]% /usr/local/sbin/ngircd --nodaemon --passive
[66510:5    0] ngIRCd 26-IDENT+IPv6+IRCPLUS+SSL+SYSLOG+ZLIB-x86_64/apple/darwin19.5.0 starting ...
[66510:6    0] Using configuration file "/usr/local/etc/ngircd.conf" ...
[66510:3    0] Can't read MOTD file "/usr/local/etc/ngircd.motd": No such file or directory
[66510:4    0] No administrative information configured but required by RFC!
[66510:6    0] ServerUID must not be root(0), using "nobody" instead.
[66510:3    0] Can't change group ID to nobody(4294967294): Operation not permitted!
[66510:3    0] Can't drop supplementary group IDs: Operation not permitted!
[66510:3    0] Can't change user ID to nobody(4294967294): Operation not permitted!
[66510:6    0] Running as user art(1345292665), group LD\Domain Users(593637566), with PID 66510.
[66510:6    0] Not running with changed root directory.
[66510:6    0] IO subsystem: kqueue (initial maxfd 100, masterfd 3).
[66510:6    0] Now listening on [0::]:6667 (socket 6).
[66510:6    0] Now listening on [0.0.0.0]:6667 (socket 8).
[66510:5    0] Server "irc.example.net" (on "poftheday") ready.

After that, I installed a command-line IRC client, ircii, and made two connections to simulate users in the #lisp channel.

Now it is time to connect our bot to the server and create a thread with the message processing loop:

POFTHEDAY> (defparameter *conn*
             (cl-irc:connect :nickname "bot"
                             :server "localhost"))

POFTHEDAY> (defparameter *thread*
             (bt:make-thread (lambda ()
                               (cl-irc:read-message-loop *conn*))
                             :name "IRC"))

POFTHEDAY> (cl-irc:join *conn* "#lisp")

While messages are processed in the thread we are free to experiment in the REPL. Let's add a hook to process messages from the channel:

POFTHEDAY> (defun on-message (msg)
             (log:info "New message" msg))

POFTHEDAY> (cl-irc:add-hook *conn*
                            'cl-irc:irc-privmsg-message
                            'on-message)

;; Now if one of the users writes to the channel,
;; the message will be logged to the screen:

POFTHEDAY> 
; No values
 <INFO> [22:35:43] poftheday (on-message) -
  New message POFTHEDAY::MSG: #<CL-IRC:IRC-PRIVMSG-MESSAGE joanna PRIVMSG {1007692DA3}>
  
UNHANDLED-EVENT:3803916943: PRIVMSG: joanna #lisp "Hello lispers!"

We can modify the on-message function to save the last message into a global variable to inspect its structure:

POFTHEDAY> *last-msg*
#<CL-IRC:IRC-PRIVMSG-MESSAGE joanna PRIVMSG {1007703213}>

POFTHEDAY> (describe *)
#<CL-IRC:IRC-PRIVMSG-MESSAGE joanna PRIVMSG {1007703213}>
  [standard-object]

Slots with :INSTANCE allocation:
  SOURCE                         = "joanna"
  USER                           = "~art"
  HOST                           = "localhost"
  COMMAND                        = "PRIVMSG"
  ARGUMENTS                      = ("#lisp" "Hello")
  CONNECTION                     = #<CL-IRC:CONNECTION localhost {1003918FA3}>
  RECEIVED-TIME                  = 3803917081
  RAW-MESSAGE-STRING             = ":joanna!~art@localhost PRIVMSG #lisp :Hello
"

;; If user sent a direct message,
;; it will have the bot's username as the first argument:

POFTHEDAY> (describe *last-msg*)
#<CL-IRC:IRC-PRIVMSG-MESSAGE joanna PRIVMSG {1001600943}>
  [standard-object]

Slots with :INSTANCE allocation:
  SOURCE                         = "joanna"
  USER                           = "~art"
  HOST                           = "localhost"
  COMMAND                        = "PRIVMSG"
  ARGUMENTS                      = ("bot" "Hello. It is Joanna.")
  CONNECTION                     = #<CL-IRC:CONNECTION localhost {1003918FA3}>
  RECEIVED-TIME                  = 3803917270
  RAW-MESSAGE-STRING             = ":joanna!~art@localhost PRIVMSG bot :Hello. It is Joanna.
"

If you intend to make a bot which replies to messages, you have to choose either the message's source slot or the first argument as the destination for the response.
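That choice can be sketched as a tiny helper, based on the slots shown in the DESCRIBE output above (reply-destination is a hypothetical name, not part of cl-irc):

```lisp
;; Channel messages arrive with a #channel as the first argument;
;; direct messages arrive with the bot's nick instead. Reply to the
;; channel in the first case and to the sender in the second.
(defun reply-destination (source first-argument)
  (if (char= (char first-argument 0) #\#)
      first-argument
      source))

(reply-destination "joanna" "#lisp") ; => "#lisp"
(reply-destination "joanna" "bot")   ; => "joanna"
```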

Most probably the bug @SatoshiShinohai complained about on Twitter is caused by the wrong algorithm for choosing the response's destination.

Now let's redefine our on-message function to format log messages more neatly:

POFTHEDAY> (defun on-message (msg)
             (log:info "<~A> ~A"
                       (cl-irc:source msg)
                       (second (cl-irc:arguments msg)))
             ;; To let cl-irc know that we've processed the event
             ;; we need to return `t'.
             ;; Otherwise it will output "UNHANDLED-EVENT" messages.
             t)
WARNING: redefining POFTHEDAY::ON-MESSAGE in DEFUN
ON-MESSAGE
 <INFO> [22:55:06] poftheday (on-message) - <joanna> Hello everybody!
 <INFO> [22:55:17] poftheday (on-message) - <art> Hello, Joanna!
 <INFO> [22:55:27] poftheday (on-message) -
  <joanna> What is the best book on Common Lisp for newbee?
 <INFO> [22:55:56] poftheday (on-message) - <art> Try the Practical Common Lisp.
 <INFO> [22:56:04] poftheday (on-message) - <joanna> Thanks!

If you want to make a bot which responds to messages, use cl-irc:privmsg like this:

;; This will send a message to the channel:
POFTHEDAY> (cl-irc:privmsg *conn* "#lisp" "Hello! Bot is in the channel!")
"PRIVMSG #lisp :Hello! Bot is in the channel!
"

;; and this will send a private message:
POFTHEDAY> (cl-irc:privmsg *conn* "joanna" "Hi Joanna!")
"PRIVMSG joanna :Hi Joanna!
"

If you download cl-irc's sources from https://common-lisp.net/project/cl-irc/ you'll find a more sophisticated bot in the example folder.

One final note: to debug communication between Lisp and the IRC server, set the cl-irc::*debug-p* variable to true, and it will log every message sent or received by the bot.

Quicklisp news: July 2020 Quicklisp dist now available

· 12 days ago
New projects:
  • cl-aristid — Draw Lindenmayer Systems with Common LISP! — MIT
  • cl-covid19 — Common Lisp library and utilities for inspecting COVID-19 data — BSD 2-Clause
  • cl-grip — Grip is a simple logging interface and framework. The core package contains basic infrastructure and interfaces. — Apache v2
  • cl-liballegro-nuklear — CFFI wrapper for the Nuklear IM GUI library with liballegro backend, to be used with cl-liballegro. — MIT
  • colored — System for colour representation, conversion, and operation. — zlib
  • linux-packaging — ASDF extension to generate linux packages. — MIT
  • litterae — Beautiful documentation generation. — MIT
  • osmpbf — Library to read OpenStreetMap PBF-encoded files. — MIT
  • teddy — A data framework for Common Lisp, wanna be like Pandas for Python. — UNLICENSE
Updated projects: 3b-hdr, 3bmd, 3bz, alexandria, algae, async-process, atomics, babel, binpack, cesdi, cffi, cl-all, cl-collider, cl-conllu, cl-fix, cl-forms, cl-gearman, cl-hamcrest, cl-i18n, cl-interpol, cl-kraken, cl-markless, cl-migratum, cl-naive-store, cl-online-learning, cl-patterns, cl-prevalence, cl-project, cl-random-forest, cl-rdkafka, cl-redis, cl-sat, cl-str, cl-string-generator, cl-utils, cl-webkit, clath, clcs-code, clim-widgets, clj, closer-mop, clx, common-lisp-jupyter, croatoan, deeds, deploy, djula, easy-routes, eclector, fiveam, flexi-streams, functional-trees, gendl, hyperluminal-mem, introspect-environment, ironclad, lisp-critic, lisp-preprocessor, literate-lisp, lquery, markup, mcclim, mutility, nibbles, nodgui, numcl, origin, osicat, overlord, parachute, parser.common-rules, perlre, petalisp, phoe-toolbox, pngload, postmodern, qlot, quilc, read-as-string, rutils, sc-extensions, scalpl, sel, serapeum, shadow, sly, spinneret, staple, stumpwm, tagger, trace-db, trivial-features, trivial-mimes, umbra, uuid, vernacular, xhtmlambda.

To get this update, use (ql:update-dist "quicklisp").

Enjoy!

Alexander Artemenkopiping

· 12 days ago

This library is in some sense similar to cl-events, reviewed yesterday. It allows defining pipelines to process messages.

Each message can be processed sequentially or in parallel. Each node is an instance of the segment class. There are two kinds of nodes: intermediate and final.

Intermediate nodes can filter messages or route them into other pipelines.

Final nodes are called faucets. They process the message and stop processing.

For example, here is how we can build log message processing using piping. We want to print all ERROR messages to *error-output* and to write all messages to a log file.

To create this pipeline, we need the following segments. Here, a "Pipeline" is a chain of segments the message passes through:

Pipeline:
  Print["full.log"]
  Pipeline: Filter[if ERROR] -> Print[*error-output*]

Here is how we can configure this pipeline in Lisp code:

POFTHEDAY> (defparameter *pipe*
             (make-instance 'piping:pipeline))

POFTHEDAY> (defparameter *log-printer*
             (piping:add-segment
               *pipe*
               (make-instance 'piping:printer
                              :stream (open "full.log"
                                            :direction :output
                                            :if-exists :append
                                            :if-does-not-exist :create))))

;; This adds a sub-pipe where we'll filter the message
;; and print it if it starts with "ERROR":
POFTHEDAY> (piping:add-segment *pipe* :pipe)

POFTHEDAY> (piping:add-segment *pipe*
            (make-instance 'piping:predicate-filter
                           :predicate (lambda (message)
                                        (str:starts-with-p "ERROR: " message)))
            '(1))

POFTHEDAY> (piping:add-segment *pipe*
            (make-instance 'piping:printer
                           :stream *error-output*)
            '(1))

;; Now we'll pass two messages through this pipeline:
POFTHEDAY> (piping:pass *pipe*
             "INFO: Hello world!")

;; This one will be printed to *error-output*:
POFTHEDAY> (piping:pass *pipe*
             "ERROR: Something bad happened!")

"ERROR: Something bad happened!" 

;; But both messages are present in the file:
POFTHEDAY> (force-output (piping:print-stream *log-printer*))

POFTHEDAY> (princ (alexandria:read-file-into-string "full.log"))

"INFO: Hello world!" 
"ERROR: Something bad happened!"

Working on this example, I found two things:

  • there is no component to fan out messages into nested segments or sub-pipes;
  • using indices to point to the place where a segment should be added is very inconvenient.
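As a workaround for the first point, a fanout segment could probably be built on top of piping's own protocol. This is only a sketch under the assumption that piping exposes a segment class and a pass generic function dispatched on (segment message); the fanout class itself is mine:

```lisp
;; Hypothetical fanout segment: forwards each message to every child.
(defclass fanout (piping:segment)
  ((children :initarg :children
             :initform '()
             :accessor fanout-children)))

(defmethod piping:pass ((segment fanout) message)
  ;; Pass the message to every child segment, then return it
  ;; so the enclosing pipeline can keep processing.
  (dolist (child (fanout-children segment) message)
    (piping:pass child message)))
```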

@Shinmera uses piping in his logging library verbose. I skimmed through its sources and didn't find a solution to this fanout problem.

Definitely, this library could be made more convenient if somebody were interested in using it for other purposes.

Alexander Artemenkocl-events

· 13 days ago

This is a library by @dead_trickster. It implements a pub-sub API and allows you to:

  • create an event object;
  • subscribe to it;
  • fire the event.

CL-Events provides a way to add a hook point for your application.

Here is the simplest example: we create a single-threaded event whose callbacks are called sequentially:

POFTHEDAY> (defparameter *on-click*
             (make-instance 'cl-events:event))

POFTHEDAY> (defun the-callback (message)
             ;; pretend, we need some time to process the callback
             (sleep 1)
             (format t "MSG [~A]: ~A~%"
                     (bt:current-thread)
                     message))

POFTHEDAY> (cl-events:event+ *on-click*
                             'the-callback)

POFTHEDAY> (cl-events:event! *on-click*
                             "Button clicked!")
MSG [#<THREAD "sly-channel-1-mrepl-remote-1" RUNNING {1003955B33}>]: Button clicked!
NIL

To make the callbacks execute in parallel, you only need to change the type of the event object. Pay attention to the thread names in the callbacks' output. They are different:

POFTHEDAY> (defparameter *on-click*
             (make-instance 'cl-events:broadcast-event))

POFTHEDAY> (defun the-callback (handler-name message)
             ;; pretend, we need some time to process the callback
             (sleep 1)
             (format t "MSG [~A/~A]: ~A~%"
                     handler-name
                     (bt:current-thread)
                     message))

POFTHEDAY> (cl-events:event+ *on-click*
                             (alexandria:curry 'the-callback
                                               "First handler"))

POFTHEDAY> (cl-events:event+ *on-click*
                             (alexandria:curry 'the-callback
                                               "Second handler"))

POFTHEDAY> (cl-events:event! *on-click*
                             "Button clicked!")
NIL
MSG [Second handler/#<THREAD "lparallel" RUNNING {1005A97983}>]: Button clicked!
MSG [First handler/#<THREAD "lparallel" RUNNING {1005A96F93}>]: Button clicked!

Also, in this case, the event! function returns before all handlers are called.

Here, parallel execution is implemented using lparallel's thread pool. More executors are available, and you can implement your own.

Alexander Artemenkotrivial-with-current-source-form

· 14 days ago

This library is a compatibility layer. It helps you provide hints to the Lisp compiler. Hints allow the compiler to show more precise error messages when an error happens during macro expansion.

Here is an example I've stolen from the library's documentation. To show you how this works in action, I've recorded a GIF image.

Pay attention: without the hint, the compiler highlights the whole "even-number-case" top-level form:

That is it. You just wrap some part of the macro-processing code with with-current-source-form and say: "Hey, compiler! Here is the s-expr I'm currently processing. If some shit happens, let the user know."
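Since the GIF doesn't reproduce here, here is a sketch of the documented even-number-case example, slightly adapted by me, so treat the details as approximate:

```lisp
(defmacro even-number-case (expr &body clauses)
  "Like CASE, but every key must be an even number."
  (dolist (clause clauses)
    ;; Tell the compiler which subform we are currently processing,
    ;; so errors point at the offending clause, not the whole form.
    (trivial-with-current-source-form:with-current-source-form (clause clauses)
      (destructuring-bind (keys &rest body) clause
        (declare (ignore body))
        (let ((keys (if (listp keys) keys (list keys))))
          (unless (every #'evenp keys)
            ;; Signaled at macroexpansion time.
            (error "Only even numbers are allowed as keys."))))))
  `(case ,expr ,@clauses))
```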

As I said before, this library is a compatibility layer. Only SBCL and Clasp are supported for now. On other implementations, the macro will do nothing.

Wimpie NortjeDatabase drivers for PostgreSQL and SQLite.

· 15 days ago

Interacting with a database system from your code requires a database driver library. For most database systems there are multiple driver libraries, most of which are stable and work well.

There are also multi-system and single-system drivers. Multi-system drivers can interface with several database systems, usually MySQL, PostgreSQL, and SQLite, while single-system drivers work only with one specific database system.

The two main reasons I see for using a multi-system driver are:

  1. You want to reduce the risk in the event you need to switch your database system mid-project. It is rare but possible.
  2. You work on multiple projects which use different database systems. With a multi-system driver you only need to learn one library which works the same for all databases.

My response to those two reasons is:

  1. I consider it extremely unlikely that I will need to switch database systems for the project.
  2. I prefer to use libraries that do one thing well rather than multi-purpose ones. I would rather learn the focused driver for each database system I use than fight the unavoidable complexities which come with generalised tools that cater for databases I will never use.

In another post I mentioned that I use PostgreSQL for my application database. I also use SQLite for local configuration files. The drivers I considered are:

  • CL-sxl
  • Sxql
  • CL-dbi
  • CL-sql
  • Postmodern
  • CL-sqlite

Postmodern and CL-sqlite are the only single-system drivers in that list. Due to the two reasons I mentioned above, I use Postmodern and CL-sqlite.
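For reference, minimal usage of the two single-system drivers looks roughly like this. The connection details are placeholders, and the snippet is a sketch based on each library's documentation:

```lisp
;; Postmodern (PostgreSQL): the spec is (database user password host).
(postmodern:with-connection '("mydb" "user" "secret" "localhost")
  (postmodern:query "SELECT version()"))

;; CL-SQLite: open a local file database and run a single-value query.
(sqlite:with-open-database (db "/tmp/config.db")
  (sqlite:execute-single db "SELECT sqlite_version()"))
```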

Alexander Artemenkotrivial-benchmark

· 15 days ago

Some time ago I reviewed the the-cost-of-nothing library, which lets you check the performance of form execution. Trivial-benchmark does a similar job but has a few pros and cons.

The main con is that you have to specify the number of iterations manually, but the pro is that the library provides way more statistics:

POFTHEDAY> (trivial-benchmark:with-timing (1000000)
             (format nil "Symbol is: ~S" :foo))

-                SAMPLES  TOTAL      MINIMUM   MAXIMUM   MEDIAN    AVERAGE    DEVIATION  
REAL-TIME        1000000  3.78       0         0.169     0         0.000004   0.000207   
RUN-TIME         1000000  3.734      0         0.132     0         0.000004   0.000179   
USER-RUN-TIME    1000000  2.332375   0.000001  0.061505  0.000002  0.000002   0.00011    
SYSTEM-RUN-TIME  1000000  1.398129   0.000001  0.070875  0.000001  0.000001   0.000072   
PAGE-FAULTS      1000000  0          0         0         0         0          0.0        
GC-RUN-TIME      1000000  0.436      0         0.132     0         0.0        0.000168   
BYTES-CONSED     1000000  592388352  0         130976    0         592.38837  4354.098   
EVAL-CALLS       1000000  0          0         0         0         0          0.0

Another cool feature is the ability to define more custom metrics.

Here is a practical example. We'll measure a number of SQL queries made during form execution:

;; This is our SQL driver simulation:
POFTHEDAY> (defparameter *num-queries* 0)

POFTHEDAY> (defun execute (query)
             "A fake SQL driver"
             (declare (ignorable query))
             (incf *num-queries*))

;; The application code:
POFTHEDAY> (defun the-view ()
             (execute "SELECT some FROM data")
             (loop repeat 5
                   do (execute "SELECT some FROM other_data")))

;; Defining a metric is very simple. You just provide
;; code which returns an absolute value:
POFTHEDAY> (trivial-benchmark:define-delta-metric sql-queries
             *num-queries*)

;; Pay attention to the last line of the report:
POFTHEDAY> (trivial-benchmark:with-timing (100)
             (the-view))
-                SAMPLES  TOTAL     MINIMUM   MAXIMUM   MEDIAN    AVERAGE   DEVIATION  
REAL-TIME        100      0         0         0         0         0         0.0        
RUN-TIME         100      0         0         0         0         0         0.0        
USER-RUN-TIME    100      0.000308  0.000001  0.00012   0.000002  0.000003  0.000012   
SYSTEM-RUN-TIME  100      0.000117  0.000001  0.000002  0.000001  0.000001  0.0        
PAGE-FAULTS      100      0         0         0         0         0         0.0        
GC-RUN-TIME      100      0         0         0         0         0         0.0        
BYTES-CONSED     100      98240     0         65536     0         982.4     7258.1045  
EVAL-CALLS       100      0         0         0         0         0         0.0        
SQL-QUERIES      100      600       6         6         6         6         0.0

Trivial-benchmark is not as accurate as the-cost-of-nothing because it does not subtract its own overhead, and that overhead can be significant since trivial-benchmark uses generic functions.

Also, when sampling, trivial-benchmark executes the form only once per sample. That is why measurements for very fast code will be even more inaccurate.

Another interesting feature is the ability to define benchmark suites to measure performance regression of some parts of your code. I won't show you an example of such a suite. Just go and read the nice documentation written by @Shinmera:

https://github.com/Shinmera/trivial-benchmark#benchmark-suites

Tycho Garen Common Lisp Grip, Project Updates, and Progress

· 16 days ago

Last week, I did a release, I guess, of cl-grip, which is a logging library that I wrote after reflecting on Common Lisp logging earlier. I wanted to write up some notes about it that aren't covered in the README, and also talk a little bit about what else I'm working on.

cl-grip

This is a really fun and useful project and it was really the right size for a project for me to really get into, and practice a bunch of different areas (packages! threads! testing!), and I think it's useful to boot. The README is pretty comprehensive, but I thought I'd collect some additional color here:

Really, at its core cl-grip isn't a logging library; it's just a collection of interfaces that make it easy to write logging and messaging tools, which is a really cool basis for an idea. (I've been working on and with a similar system in Go for years.)

As a result, there are interfaces and plumbing for doing most logging-related things, but no actual implementations. I was very excited to leave out the "log rotation handling feature," which feels like an anachronism at this point, though it'd be easy enough to add that kind of handler in if needed. Although I'm going to let it stew for a little while, I'm excited to expand upon it in the future:

  • additional message types, including capturing stack frames for debugging, or system information for monitoring.
  • being able to connect and send messages directly to likely targets, including systemd's journal and splunk collectors.
  • a collection of more absurd output targets to cover "alerting" type workloads, like desktop notifications, SUMP, and Slack targets.

I'm also excited to see if other people are interested in using it. I've submitted it to Quicklisp and Ultralisp, so give it a whirl!

See the cl-grip repo on github.

Eggqulibrium

At the behest of a friend I've been working on an "egg equilibrium" solver, the idea being to provide a tool that, given a bunch of recipes that use partial eggs (yolks and whites), can provide optimal solutions that use a fixed set of eggs.

So far I've implemented some prototypes that, given a number of egg parts, collect recipes until there are no partial eggs in use, so that there are no leftovers. I've also implemented the "if I have these partial eggs, what can I make to use them all" direction. I've also implemented a rudimentary CLI interface (that was a trip!) and a really simple interface to recipe data (both parsing from a CSV format and an in-memory format that makes solving the equilibrium problem easier).

I'm using it as an opportunity to learn different things, and find a way to learn more about things I've not yet touched in lisp (or anywhere really,) so I'm thinking about:

  • building a web-based interface using some combination of caveman, parenscript, and related tools. This could include features like "user submitted databases," as well as links to the sources of the recipes, in addition to the basic "web forms, APIs, and table rendering."
  • storing the data in a database (probably SQLite, mostly) both to support persistence and other more advanced features, but also because I've not done database things from Lisp at all.

See the eggquilibrium repo on GitHub. It's still pretty rough, but perhaps it'll be interesting!

Other Projects

  • Writing more! I'm trying to be less obsessive about blogging, as I think it's useful (and perhaps interesting for you all too.) I've been writing a bunch and not posting very much of it. My goal is to mix sort of grandiose musing on technology and engineering, with discussions of Lisp, Emacs, and programming projects.
  • Working on producing texinfo output from cl-docutils! I've been toying around with the idea of writing a publication system targeted at producing books--long-form non-fiction, collections of essays, and fiction--rather than the blogs or technical resources that most such tools are focused on. This is sort of part 0 of this process.
  • Hacking on some Common Lisp projects; I'm particularly interested in Nyxt and StumpWM.

Alexander Artemenkochameleon

· 16 days ago

Chameleon is a configuration management library. It allows us to define a bunch of options and their values for different profiles. After that, you can switch between profiles.

It works like that:

POFTHEDAY> (chameleon:defconfig
             (port 8000 "Port to listen on")
             (log-level :info "The log level for log4cl"))

POFTHEDAY> (chameleon:defprofile :dev)

POFTHEDAY> (chameleon:defprofile :production
             (port 80)
             (log-level :warn))

POFTHEDAY> (setf (active-profile) :production)
:PRODUCTION
POFTHEDAY> (port)
80

POFTHEDAY> (log-level)
:WARN

POFTHEDAY> (active-profile)
:PRODUCTION

;; Now switching to development mode:
POFTHEDAY> (setf (active-profile) :dev)
:DEV

POFTHEDAY> (port)
8000

POFTHEDAY> (log-level)
:INFO

I've looked into chameleon's code and think it could be made better and simpler by using CLOS instances for profiles instead of hash maps.
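To illustrate the idea (this is my sketch, not chameleon's actual internals; all names here are hypothetical), profiles could be plain CLOS instances:

```lisp
;; A hypothetical CLOS-based profile store.
(defclass profile ()
  ((port :initarg :port :reader profile-port)
   (log-level :initarg :log-level :reader profile-log-level)))

(defparameter *profiles*
  (list :dev (make-instance 'profile :port 8000 :log-level :info)
        :production (make-instance 'profile :port 80 :log-level :warn)))

(defvar *active-profile* :dev)

(defun current-profile ()
  (getf *profiles* *active-profile*))

;; (profile-port (current-profile))      => 8000
;; (setf *active-profile* :production)
;; (profile-port (current-profile))      => 80
```

Slot readers then replace the generated option functions, and switching profiles is just rebinding one special variable.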

If you know other Lisp systems for configuration management, please, let me know.

Alexander Artemenkowith-output-to-stream

· 17 days ago

This is a "trivial" library by @HexstreamSoft. It simplifies writing functions that accept a stream designator argument the way the format function does.

CL's format function accepts nil, t, or a stream object as its first parameter. In the first case it returns a string; in the second, it writes to *standard-output*.

When you write a custom function with similar semantics, you have to handle all these cases by hand. This is where with-output-to-stream helps you:

POFTHEDAY> (defun log-info (obj &key (stream t))
             (with-output-to-stream:with-output-to-stream (s stream)
               (write-string "INFO " s)
               (write obj :stream s)
               (terpri s)))

;; Here we return the result as a string:
POFTHEDAY> (log-info 100500 :stream nil)
"INFO 100500
"

;; This will output to *standard-output*:
POFTHEDAY> (log-info 100500 :stream t)
INFO 100500
NIL

;; But you can pass any stream as the argument:
POFTHEDAY> (log-info 100500 :stream *error-output*)
INFO 100500
NIL

POFTHEDAY> (with-output-to-string (s)
             (log-info 100500 :stream s)
             (log-info 42 :stream s))
"INFO 100500
INFO 42
"

That is it for today.

Alexander Artemenkolisp-critic

· 18 days ago

A few weeks ago, I reviewed sblint, a tool to check code quality in terms of warnings from the SBCL compiler. Lisp-critic is another kind of beast. It checks code quality in terms of common patterns and idioms.

For example, it outputs a warning when there is only one subform inside a progn or when you set global variables in a function definition:

POFTHEDAY> (lisp-critic:critique
            (progn
              (format t "Hello World!")))
----------------------------------
Why do you think you need a PROGN?
----------------------------------

POFTHEDAY> (lisp-critic:critique
            (defun start-server ()
              (setf *server*
                    (listen-on :port 8080))
              (values)))
----------------------------------------------------
GLOBALS!! Don't use global variables, i.e., *SERVER*
----------------------------------------------------

Lisp-critic operates on patterns. There are 109 built-in patterns and you can define more:

POFTHEDAY> (length (lisp-critic:get-pattern-names))
109

POFTHEDAY> (rutils:take 10
             (lisp-critic:get-pattern-names))
(LISP-CRITIC::?-FOR-PREDICATE
 LISP-CRITIC::ADD-ZERO
 LISP-CRITIC::APPEND-LIST-LIST
 LISP-CRITIC::APPEND-LIST-LOOP
 LISP-CRITIC::APPEND-LIST-RECURSION
 LISP-CRITIC::APPEND-LIST2-LIST
 LISP-CRITIC::APPLY-FOR-FUNCALL
 LISP-CRITIC::CAR-CDR
 LISP-CRITIC::CONCATENATE-LIST
 LISP-CRITIC::COND->OR)

Also, you can use lisp-critic:critique-file to analyze all top-level forms in a file.
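For example (the file name here is just an illustration):

```lisp
;; Critique every top-level form in a file:
(lisp-critic:critique-file "src/app.lisp")
```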

It would be nice to:

  • add a command-line tool (like sblint) to check all files in the project;
  • to add the ability to ignore some checks for some forms. Probably declaim could be used for this purpose?

Probably adding an integration with SLIME or SLY would also be a good idea.

This way you'll be able to hit some shortcuts to receive recommendations from Lisp Critic, or it could happen when you are evaluating a top-level form.

Alexander Artemenkocl-spark

· 19 days ago

This small utility has nothing in common with Apache Spark and big data processing. However, it does relate to data plotting.

Cl-spark allows you to visualize data in the console like that:

POFTHEDAY> (cl-spark:spark '(1 0 1 0))
"█▁█▁"

POFTHEDAY> (cl-spark:spark '(1 1 2 3 5 8))
"▁▁▂▃▅█"


POFTHEDAY> (cl-spark:spark '(0 30 55 80 33 150))
"▁▂▃▄▂█"

POFTHEDAY> (cl-spark:spark '(0 30 55 80 33 150)
                           :min -100)
"▃▄▅▆▄█"
POFTHEDAY> (cl-spark:spark '(0 30 55 80 33 150)
                           :max 50)
"▁▅██▅█"
POFTHEDAY> (cl-spark:spark '(0 30 55 80 33 150)
                           :min 30
                           :max 80)
"▁▁▄█▁█"

Or like that:

POFTHEDAY> (cl-spark:spark
            '(0 1 2 3 4 5 6 7 8 9 10 11 12 13 14)
            :key (lambda (x)
                   (sin (* x pi 1/4))))
"▄▆█▆▄▂▁▂▄▆█▆▄▂▁"


POFTHEDAY> (cl-spark:vspark
            '(0 1 2 3 4 5 6 7 8 9 10 11 12 13 14)
            :key (lambda (x)
                   (sin (* x pi 1/4)))
            :size 20)
"
-1.0     0.0     1.0
˫--------+---------˧
██████████▏
█████████████████▏
████████████████████
█████████████████▏
██████████▏
██▉
▏
██▉
█████████▉
█████████████████▏
████████████████████
█████████████████▏
██████████▏
██▉
▏
"

Its repository has a lot more examples. Check it out:

https://github.com/tkych/cl-spark

Alexander Artemenkocl-coveralls

· 20 days ago

I hope you are writing unit tests for your programs. And if you do, it is really helpful to know which code is covered by tests and which is not.

Did you know that some CL implementations have tools for measuring code coverage?

For example, SBCL has the sb-cover package. To create a coverage report you need to turn instrumentation on, recompile the program, run the tests, and generate the report.

This is the code from SBCL's manual:

(declaim (optimize sb-cover:store-coverage-data))

;;; Load some code, ensuring that it's recompiled
;;; with correct optimization policy.
(asdf:oos 'asdf:load-op :cl-ppcre-test :force t)

;;; Run the test suite.
(cl-ppcre-test:test)

;;; Produce a coverage report
(sb-cover:report "/tmp/report/")

;;; Turn off instrumentation
(declaim (optimize (sb-cover:store-coverage-data 0)))

Here are a few screenshots of the HTML pages I got running sb-cover against Ultralisp's code:

But today we are talking about cl-coveralls. It helps build coverage measurement into your CI pipeline. I decided it was a great moment to make it check Ultralisp's code.

What do you need to collect coverage data for a Common Lisp project? Well, you need to:

  • set up a CI pipeline on Travis or CircleCI;
  • register at https://coveralls.io/ and enable it for your GitHub repository;
  • set two environment variables in the CI config;
  • wrap your test code in a call to coveralls:with-coveralls.

Here is the diff, required to enable code coverage measurement for Ultralisp's tests. And now Coveralls will track if code coverage was improved with each pull-request.
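The wrapping itself is a one-liner. A sketch, assuming your tests are run via ASDF and with a placeholder system name, could look like:

```lisp
;; Run the test system under coverage measurement and report
;; the results to Coveralls (when the CI env variables are set).
(coveralls:with-coveralls ()
  (asdf:test-system :my-project/tests))
```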

Alexander Artemenkopath-parse

· 21 days ago

This is a small utility library by Fernando Borretti. The only thing it does is parse the PATH environment variable. But it does it really well.

Path-parse works on Windows and Unix (OSX):

POFTHEDAY> (path-parse:path)

(#P"/Users/art/.roswell/bin/"
 #P"/Users/art/.bin/"
 #P"/Users/art/.dotfiles/bin/"
 #P"/usr/local/bin/"
 #P"/usr/bin/"
 #P"/bin/"
 #P"/usr/sbin/"
 #P"/sbin/")

That is it for today. Tomorrow I'll try to find something more interesting!

Nicolas HafnerEngine Rewrites - July Kandria Update

· 22 days ago

Last month I outlined a very rough timeline for the future development of Kandria. In the past month I managed to implement the first two of the tasks listed there, namely some very deep fixes to the game engine, and the improvement of the pathfinding AI. I'll try to boil these changes down to make them easier to understand.

If you're subscribed to the mailing list, you should already be familiar with the AI pathfinding problem. If not, I'll give you a freebie this time: you can read the article here. I publish similar articles on developments, backstory, and other things every week on the mailing list, so if you're interested, it's a great way to keep up to date!

What I didn't really touch on in the article is the problem of executing a plan once you've computed it from the navigation mesh. This turned out to be a bit more tricky than I had given it credit for, and took up most of my time. It's working really well now, though, so I think I can move on to actual enemy AI.

As for the game engine changes, those are more numerous and much more involved still. The engine itself is open source, and available to anyone. I'll try my best to outline the changes without having to explain everything about Trial, and without having to go too deep into it.

The first change relates to how assets and resources are managed in Trial. A resource here means an abstract representation of something that needs to be manually managed in memory, like textures and vertex data. Previously it used to be the case that assets were special variants of resources that could be loaded from a file. In that system, an image asset would be a texture resource that could load its contents from a file. This system works fine for many cases, but it breaks down as soon as a file should expand to more than one resource, such as for a 3D model that contains both mesh data and textures.

The new system provides a clear split between assets and resources. Assets are now resource generators that, when loaded, read whatever input data you feed it, and turns it into appropriate resource instances. This solves the previous problem, but introduces a new one: when you need to refer to a resource in order to, for example, set the texture of an object, you now cannot do so anymore before the associated asset is loaded, and this loading should only occur at specific, controlled points in time.

This is where a rare feature of Lisp makes itself very useful: change-class. Assets can offer access to the resources it would generate before actually generating them by providing a placeholder-resource instance instead. Once the asset is actually loaded, this instance is then changed into one of the appropriate resource type. This allows us to reference resources before loading them, without having to perform any patching after loading, or expensive repeated runtime lookup.
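Outside of Trial, the mechanism itself is plain CLOS. A toy sketch (the class names are mine, not the engine's):

```lisp
;; A stand-in "placeholder" class and the real resource class.
(defclass placeholder-resource () ())

(defclass texture ()
  ((id :initform 0 :accessor texture-id)))

(let ((res (make-instance 'placeholder-resource)))
  ;; RES can be handed out early; references to it stay valid.
  ;; Once the asset loads, upgrade the same instance in place:
  (change-class res 'texture)
  ;; RES is now a TEXTURE; its new ID slot got its initform.
  (texture-id res))
```

Because change-class mutates the identical object, every structure already holding the placeholder now sees a fully loaded resource, with no patching or repeated lookups.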

The second change relates to the actual loading operation. Previously there was a system that would try to automatically traverse objects to find all referenced assets and resources. This system was convenient, but also slow and... well, to be honest, it just made me uncomfortable. The new system only automatically traverses the scene-graph; for everything else you need to write explicit methods that enumerate the resources you need for loading.

The system also takes care of a problem that was introduced by the new asset system. Since resources can now be placeholders, they won't know their dependencies before their generating asset is loaded. This is a problem when determining the order in which to load assets and resources, since parts of the dependency information is now deferred. The solution adopted so far is that the load order is recomputed when a resource is encountered that used to be a placeholder. This works fine, but might induce a lot of load order recomputations if the initial order is unfavourable. At the moment though I'm not losing any sleep over this potentially slow corner case.

Finally, the new loader also handles failures better. If an error occurs during the load operation, the state can be rolled back smoothly so that the game can continue running. This isn't too useful on a user's machine, but it is very useful during development, so that the game doesn't just crash and you lose whatever you were doing before.

The third and final big change relates to the way objects are rendered in the engine. Trial allows creating rather involved pipelines with different passes of shaders. In order to allow a lot of flexibility, these passes need to have control over how objects are rendered, but also which objects are rendered. Previously this was accomplished by a paint function that would traverse the scene graph and perform render operations. Transformations such as translations and rotations were accomplished by defining methods on that function that would dynamically bind the transform matrices and change them. However, this system made it very complicated and error-prone when a pass needed to be selective about which objects it should render. It also forced repeated lookup of the shader program appropriate for a given combination of pass and object, which could be quite slow.

The new system separates the scene and the shader passes entirely. In order to run a shader pass that should render objects in a scene, the scene must first be 'compiled' to the shader pass. This compilation would traverse the scene graph and flatten it into a sequence of render actions. These actions would include management of the transform matrix stack, application of necessary transforms, and ultimately the rendering of objects. Selecting which objects to render could be done at this stage as well, simply omitting the actions of irrelevant objects.

This system makes controlling the render behaviour much easier for the user, but is a lot more complex on the engine side, especially when paired with dynamic updates where objects can enter and leave the scene at will. The way it's currently implemented is very much sub-optimal in that sense, mostly because I have not yet figured out a good protocol on how to communicate where exactly the actions of a new entity should be placed in the action sequence. Containers may not always append a new entity at the end, so there has to be a way for the pass to know where to insert. The option of just recomputing all actions of the container may be prohibitively expensive.

There were other, more minor changes all over as well of course, but I think this entry is already long enough as it is. After getting all of these changes to the engine in, I had to go back and fix a ton of things in Kandria to work again. While at it, I also ripped out a bunch of systems that sucked in Kandria itself and replaced them with cleaner, more simplified variants.

All in all this took up pretty much the entire month. I'm done now, though, and pretty happy with the changes, so I should be able to focus on working on Kandria itself again. I've also already begun work on the next big rewrite that's listed: fixing up the sound engine. I'll put that on the back-burner until the next demo release, though.

Anyway, that's it for this month. Hopefully next month will have a lot more Kandria-specific updates!

Michał HerdaCHECK-TYPE* - CHECK-TYPE, except the type is evaluated

· 23 days ago

Someone seemed to need a CHECK-TYPE variant whose type argument is evaluated at runtime instead of being fixed at compile time.

I quickly gutted some code out of PCS (the Portable Condition System) and produced the following code.

;;;; Based on Portable Condition System (License: CC0)

(defun store-value-read-evaluated-form ()
  (format *query-io* "~&;; Type a form to be evaluated:~%")
  (list (eval (read *query-io*))))

(defmacro with-store-value-restart ((temp-var place tag) &body forms)
  (let ((report-var (gensym "STORE-VALUE-REPORT"))
        (new-value-var (gensym "NEW-VALUE"))
        (form-or-forms (if (= 1 (length forms)) (first forms) `(progn ,@forms))))
    `(flet ((,report-var (stream)
              (format stream "Supply a new value of ~S." ',place)))
       (restart-case ,form-or-forms
         (store-value (,new-value-var)
           :report ,report-var
           :interactive store-value-read-evaluated-form
           (setf ,temp-var ,new-value-var
                 ,place ,new-value-var)
           (go ,tag))))))

(defun check-type-error (place value type type-string)
  (error
   'simple-type-error
   :datum value
   :expected-type type
   :format-control (if type-string
                       "The value of ~S is ~S, which is not ~A."
                       "The value of ~S is ~S, which is not of type ~S.")
   :format-arguments (list place value (or type-string type))))

(defmacro check-type* (place type &optional type-string)
  "Like CHECK-TYPE, except TYPE is evaluated on each assertion."
  (let ((variable (gensym "CHECK-TYPE-VARIABLE"))
        (tag (gensym "CHECK-TYPE-TAG"))
        (type-gensym (gensym "CHECK-TYPE-TYPE")))
    `(let ((,variable ,place))
       (tagbody ,tag
          (let ((,type-gensym ,type))
            (unless (typep ,variable ,type-gensym)
              (with-store-value-restart (,variable ,place ,tag)
                (check-type-error ',place ,variable ,type-gensym
                                  ,type-string))))))))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

CL-USER> (let ((x 2)) (check-type* x 'integer))
NIL

CL-USER> (handler-case (let ((x 2)) (check-type* x 'string))
           (error (e) (princ-to-string e)))
"The value of X is 2, which is not of type STRING."


Alexander Artemenkocl-skip-list

· 23 days ago

I found this library a few weeks ago. It implements the Skip List data structure, which is lock-free and offers O(log n) lookup, insert, and delete operations.
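
The API is small. Here is a minimal sketch using the library's constructor, insert, and lookup functions:

```lisp
POFTHEDAY> (defparameter *skip-list*
             (cl-skip-list:make-skip-list :key-equal #'eql))

POFTHEDAY> (cl-skip-list:skip-list-add *skip-list* :answer 42)

POFTHEDAY> (cl-skip-list:skip-list-lookup *skip-list* :answer)
42
```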

I wondered: will this library perform better when you have to access a dictionary from multiple threads?

Here is a simple benchmark. We'll create 10 threads, each doing 10 million lookups of a value in a dictionary filled with about 6600 symbols from the KEYWORD package.

I'm testing on SBCL 2.0.2 with (declaim (optimize (debug 1) (speed 3))) running on a MacBook with 12 cores.

Let's run this benchmark using a standard Common Lisp hash table and a lock:

POFTHEDAY> (let ((hash (make-hash-table))
                 (lock (bt:make-lock))
                 (num-operations 10000000)
                 (num-threads 10))
             (do-external-symbols (s :keyword)
               (setf (gethash s hash)
                     (symbol-name s)))
             (setf (gethash :foo hash)
                   "FOO")
             ;; Now it is time to define a worker function
             (flet ((worker ()
                      (loop with result = nil
                            repeat num-operations
                            do (bt:with-lock-held (lock)
                                 (setf result
                                       (gethash :foo hash)))
                            finally (return result))))
               ;; We'll create N workers and measure a total time required to finish them all
               (let* ((started-at (get-internal-real-time))
                      (workers (loop repeat num-threads
                                     collect (bt:make-thread #'worker))))
                 (loop for worker in workers
                       do (bt:join-thread worker))
                 ;; Calculate the total time
                 (/ (- (get-internal-real-time) started-at)
                    internal-time-units-per-second))))
2399/100 (23.99)

And now a lock free version using cl-skip-list:

POFTHEDAY> (let ((hash (cl-skip-list:make-skip-list :key-equal #'eql))
                 (num-operations 10000000)
                 (num-threads 10))
             (do-external-symbols (s :keyword)
               (cl-skip-list:skip-list-add hash
                                           s
                                           (symbol-name s)))
             (unless (cl-skip-list:skip-list-lookup hash :foo)
               (cl-skip-list:skip-list-add hash
                                           :foo
                                           "FOO"))
             ;; Now it is time to define a worker function
             (flet ((worker ()
                      (loop with result = nil
                            repeat num-operations
                            do (setf result
                                     (cl-skip-list:skip-list-lookup hash :foo))
                            finally (return result))))
               ;; We'll create N workers and measure a total time required to finish them all
               (let* ((started-at (get-internal-real-time))
                      (workers (loop repeat num-threads
                                     collect (bt:make-thread #'worker))))
                 (loop for worker in workers
                       do (bt:join-thread worker))
                 ;; Calculate the total time
                 (/ (- (get-internal-real-time) started-at)
                    internal-time-units-per-second))))
45799/1000 (45.799)

As you see, the version with a lock is almost twice as fast: 24 seconds against 46.

Are there any reasons to use a lock-free data structure if it does not get you any speed gains?

Alexander Artemenkomake-hash

· 23 days ago

This is the most comprehensive library for making hash tables I've ever seen! And it has wonderful documentation with lots of examples!

make-hash allows you to create hash tables in multiple ways: from different kinds of data structures, and even using functions for data transformation. For example, you can create a hash by reading rows from a database.

I'll show you only a few examples I especially liked.

The first is creating a hash from a sequence while counting each item. Using this, we can easily count how many times each character occurs in a text:

POFTHEDAY> (make-hash:make-hash
            :init-format :keybag
            :initial-contents "Alice loves Bob")
#<HASH-TABLE :TEST EQL :COUNT 11 {1008943083}>

POFTHEDAY> (rutils:print-hash-table *)
#{
  #\A 1
  #\l 2
  #\i 1
  #\c 1
  #\e 2
  #\  2
  #\o 2
  #\v 1
  #\s 1
  #\B 1
  #\b 1
 }

In the next example, we'll make a smaller hash table from another one while selecting data by keys:

POFTHEDAY> (let ((full-data
                   (make-hash:make-hash
                    :initial-contents
                    '(:foo 1
                      :bar 2
                      :bazz 3
                      :blah 4
                      :minor 5))))
             (make-hash:make-hash
              :init-format :keys
              :init-data full-data
              :initial-contents '(:bar :minor)))
#<HASH-TABLE :TEST EQL :COUNT 2 {10060F6123}>

POFTHEDAY> (rutils:print-hash-table *)
#{
   :BAR 2
   :MINOR 5
 }

And here is how we can build a hash from data returned by a function. We only need a closure which returns each row of data as values, and returns NIL when the data is exhausted:

POFTHEDAY> (defun make-rows-iterator ()
             ;; This list will allow us to simulate
             ;; the data storage:
             (let ((rows '((bob 42)
                           (alice 25)
                           (mike 30)
                           (julia 27))))
               (lambda ()
                 (let ((row (car rows)))
                   (setf rows
                         (cdr rows))
                   (values (first row) ;; This is a key
                           (second row))))))

POFTHEDAY> (make-hash:make-hash
            :init-format :function
            :initial-contents (make-rows-iterator))
#<HASH-TABLE :TEST EQL :COUNT 4 {10086FF8E3}>

POFTHEDAY> (rutils:print-hash-table *)
#{
  BOB 42
  ALICE 25
  MIKE 30
  JULIA 27
 }

make-hash also provides a configurable reader macro:

(install-hash-reader ())  ; default settings and options
#{:a 1 :b 2 :c 3 :d 4}

(install-hash-reader '(:init-format :pairs)
  :use-dispatch t
  :open-char #\[ :close-char #\])
#['(:a . 1) '(:b . 2) '(:c . 3) '(:d . 4)]

(install-hash-reader '(:init-format :lists)
  :use-dispatch nil
  :open-char #\{ :close-char #\})
{'(:a 1) '(:b 2) '(:c 3) '(:d 4)}

You will find more examples and instructions on how to define your own initialization formats in the library's documentation:

https://github.com/genovese/make-hash

Let's thank the #poftheday challenge for the chance to discover such a cool Common Lisp library!

Tycho Garen Common Lisp and Logging

· 24 days ago

I've made the decision to write all of my personal project code in Common Lisp. See this post for some of the background for this decision.

It didn't take me long to say "I think I need a logging package," and I quickly found this wonderful comparison of CL logging libraries. It took only a little longer to become somewhat disappointed.

In general, my requirements for a logger are:

  • straightforward API for logging.
  • levels for filtering messages by importance
  • library in common use and commonly available.
  • easy to configure output targets (e.g. system's journal, external services, etc).
  • support for structured logging.

I think my rationale is pretty clear: loggers should be easy to use because the more information that can flow through the logger, the better. Assigning a level to all log messages is great for filtering practically, and it's ubiquitous enough that it's really part of having a good API. While I'm not opposed to writing my own logging system, [1] I think I'd rather not in this case: there's too much that's gained by using the conventional choice.

Configurable outputs and structured logging are stretch goals, but frankly they are the most powerful features you can add to a logger. Lots of logging work is spent crafting well-formed logging strings, when really you just want some kind of arbitrary map. Structured logging also makes it easier to make use of logging at scale, which is to say, when you're running higher workloads and multiple copies of an application.

Ecosystem

I've dug in a bit to a couple of loggers, sort of using the framework above to evaluate the state of the existing tools. Here are some notes:

log4cl

My analysis of the CL logging packages is basically that log4cl is the most mature and actively maintained tool, but beyond the basic fundamentals, it's "selling" features are... interesting. [2] The big selling features:

  • integration with the developer's IDE (SLIME), which makes it possible to use the logging system almost like a debugger. This is actually a phenomenal feature, particularly for development and debugging. The downside is that it wouldn't be totally unreasonable to use it in production, and that's sort of terrifying.
  • it attempts to capture a lot of information about logging call sites so you can better map back from log messages to the state of the system when the call was made. Again, this makes it a debugging tool, and that's awesome, but it's overhead, and frankly I've never found it difficult to grep through code.
  • lots of attention to log rotation and log file management. There's not a lot of utility in writing log data to files directly. In most cases you want to write to standard out: the program is being used interactively, and users may want to see the log of what happens. In cases where you're running in daemon mode (and these days you often aren't), systemd or similar just captures your output. Even then, you're probably in a situation where you want to send the log output to some other process (e.g. an external service, or some kind of local socket for fluentd/etc.)
  • hierarchical organization of log messages is just less useful than annotation with metadata, in practice, and using hierarchical methods to filter logs into different streams or reduce logging volumes just obscures things and makes it harder to understand what's happening in a system.

Having said that, the API surface area is big, and it's a bit confusing to just start using the logger.
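
To give a flavor of that API, here is a minimal sketch of basic log4cl usage (the level keyword and message are illustrative; log:config and the log:info/log:debug macros are the package's standard entry points):

```lisp
;; Assumes log4cl is loaded, e.g. via (ql:quickload "log4cl").
(log:config :info)                          ; set the root logger's level
(log:info "Server started on port ~D" 8080) ; logged at the :info level
(log:debug "Detailed state: ~S" '(:a 1))    ; filtered out at :info
```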

a-cl-logger

The acl package is pretty straightforward, and has a lot of features that I think are consistent with my interests and desires:

  • support for JSON output,
  • internal support for additional output formats (e.g. logstash,)
  • a simpler API

It comes with a couple of pretty strong drawbacks:

  • testing is limited.
  • it's SBCL only, because it relies on SBCL fundamentals in collecting extra context about log messages. There's a pending pull request to add ECL compile support, but it looks like it wouldn't be quite that simple.
  • collecting contextual information comes with overhead, and I think logging should err on the side of higher performance and make expensive things optional, just because it's hard to opt into/out of logging later.

Conclusion

So where does that leave me?

I'm not really sure.

I created a scratch project to write a simple logging project, but I'm definitely not prioritizing working on that over other projects. In the mean time I'll probably end up just not logging very much, or maybe giving log4cl a spin.

Notes

[1]When I started writing Go I did this: I wrote a logging tool, for a bunch of reasons. While I think it was the right decision at the time, I'm not sure that it holds up. Using novel infrastructure in projects makes integration a bit more complicated and creates overhead for would-be contributors.
[2]To be honest, I think that log4cl is a fine package, but it's really a product of an earlier era: it makes the standard assumptions about the way logs were used, which made sense given a very different view of how services should be operated.


For older items, see the Planet Lisp Archives.


Last updated: 2020-07-26 19:38