Converting signed to unsigned: help, Lisp is in my way!

Posted: Sat Oct 23, 2010 10:41 am
by Warren Wilkinson
Hi,

I've got an interesting problem. The user enters a minimum value, which I represent as a 64-bit signed number. But I want to store it as a 64-bit unsigned number relative to the lowest possible value that a 64-bit signed number can represent. Here's what I mean, demonstrated with 3-bit numbers:

Code:

;; SIGNED                   UNSIGNED RELATIVE TO -4
;; 000 = 0    --- goal --> 4  100
;; 001 = 1                 5  101
;; 010 = 2                 6  110
;; 011 = 3                 7  111
;; 100 = -4                0  000
;; 101 = -3                1  001
;; 110 = -2                2  010
;; 111 = -1                3  011

;; Thus
;; 000 -> 100
;; 001 -> 101
;; 010 -> 110
;; 011 -> 111
;; 100 -> 000
;; 101 -> 001
;; 110 -> 010
;; 111 -> 011
From this a pattern readily emerges: to convert a signed 64-bit number to an unsigned 64-bit number, where 0 represents the smallest representable 64-bit signed number, I just need to add the smallest 64-bit signed number and treat the result as an unsigned number.
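
For example, a quick sanity check of that arithmetic for the 3-bit case reproduces the table above (a throwaway snippet; MOD keeps the sum in the 3-bit range):

Code:

(loop for s from -4 to 3
      collect (cons s (mod (+ s -4) 8)))
;; => ((-4 . 0) (-3 . 1) (-2 . 2) (-1 . 3) (0 . 4) (1 . 5) (2 . 6) (3 . 7))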

So far, I've written this:

Code:

(defconstant +smallest-64s+ (the (signed-byte 64) #x8000000000000000))

(defun 64s->_64us (64s)
    nil)
But I already have problems. Lisp won't let me define +smallest-64s+, because #x8000000000000000 is read as the positive number 2^63, which doesn't fit in (signed-byte 64) (although I could evaluate (- 0 (expt 2 63)) instead). And I suspect I'll have a bigger problem: how will I coerce the answer to an unsigned number? (coerce n '(unsigned-byte 64)) signals an error if N is negative.
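
For reference, the negative spelling does pass the type check, so the constant itself isn't the real blocker; something like this (just a sketch) works:

Code:

(defconstant +smallest-64s+ (- (expt 2 63)))
(typep +smallest-64s+ '(signed-byte 64))   ;; => T
(typep +smallest-64s+ '(unsigned-byte 64)) ;; => NIL, which is the coercion problem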

Basically, what I'm asking is how can I get Lisp out of the way so I can do precisely what I want with raw bits? And when I'm done, how can I tell Lisp that the answer was unsigned?

I'm using SBCL. Thanks for your help!

Re: Converting signed to unsigned: help, Lisp is in my way!

Posted: Sat Oct 23, 2010 11:58 am
by ramarren
Warren Wilkinson wrote:But I want to store it as a 64-bit unsigned number relative to the lowest possible value that a 64-bit signed number can represent.
First, why do you want to do that? Why can't you just store a signed number and not worry about the binary representation? Also, do you actually care about the exact representation you specified?

The first thing I feel obligated to mention: types in Common Lisp denote sets of objects, so "coercing" a signed integer to an unsigned one makes no real sense, since there is no objective mapping between those sets that would send negative numbers to members of the nonnegative set. The binary representation is incidental as far as the type system is concerned.
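
To illustrate (a trivial check, nothing deep): the same integer object can belong to both sets at once, while a negative integer simply is not a member of the unsigned one:

Code:

(typep 5 '(signed-byte 64))    ;; => T
(typep 5 '(unsigned-byte 64))  ;; => T, the same object is in both sets
(typep -1 '(unsigned-byte 64)) ;; => NIL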

On the other hand, CL does have a full set of bitwise operations (LOGAND, LOGIOR, LOGXOR, LDB, and so on) among its number-manipulation operators.

My binary is a bit rusty, so I do not guarantee this is not wrong. If you do not actually care about the scheme you invented, you can just cut off the complement bits. You did not ask how to reverse the transformation, but it just requires reinterpreting the most significant bit as a sign bit:

Code:

(defparameter *bitwidth* 3)

(defparameter *signed-mask* (ash -1 *bitwidth*))       ; all ones from bit *bitwidth* upward

(defparameter *unsigned-mask* (1- (ash 1 *bitwidth*))) ; mask for the low *bitwidth* bits

(defun s->us (number)
  (logand number *unsigned-mask*)) ; will also truncate to *bitwidth* bits

(defun us->s (unumber)
  (if (logbitp (1- *bitwidth*) unumber)
      (logior unumber *signed-mask*)
      unumber))

(defun test ()
  (loop for i from 0 below (ash 1 (1- *bitwidth*)) for us = (s->us i)
        do (format t "~a -> ~a -> ~a~&" i us (us->s us)))
  (loop for i from (ash -1 (1- *bitwidth*)) to -1 for us = (s->us i)
        do (format t "~a -> ~a -> ~a~&" i us (us->s us))))
If you do care about your exact representation then you can just flip the sign bit (assuming definitions above):

Code:

(defparameter *flipmask* (ash 1 (1- *bitwidth*))) ; the sign bit

(defun s->us-2 (number)
  (logxor *flipmask* (s->us number)))

(defun us->s-2 (unumber)
  (let ((unumber (logxor *flipmask* unumber)))
    (us->s unumber)))

(defun test-2 ()
  (loop for i from 0 below (ash 1 (1- *bitwidth*)) for us = (s->us-2 i)
        do (format t "~a -> ~a -> ~a~&" i us (us->s-2 us)))
  (loop for i from (ash -1 (1- *bitwidth*)) to -1 for us = (s->us-2 i)
        do (format t "~a -> ~a -> ~a~&" i us (us->s-2 us))))
Tests:

Code:

CL-USER> (test)
0 -> 0 -> 0
1 -> 1 -> 1
2 -> 2 -> 2
3 -> 3 -> 3
-4 -> 4 -> -4
-3 -> 5 -> -3
-2 -> 6 -> -2
-1 -> 7 -> -1
NIL
CL-USER> (test-2)
0 -> 4 -> 0
1 -> 5 -> 1
2 -> 6 -> 2
3 -> 7 -> 3
-4 -> 0 -> -4
-3 -> 1 -> -3
-2 -> 2 -> -2
-1 -> 3 -> -1
NIL

Re: Converting signed to unsigned: help, Lisp is in my way!

Posted: Sat Oct 23, 2010 1:04 pm
by Warren Wilkinson
ramarren wrote:First, why do you want to do that?
Eventually, these numbers need to be stored in a binary file. They are part of a family of structures that are created on demand in response to other user actions, and sometimes loaded from disk. In order to compress and future-proof the design, I decided that any value NOT present in the database becomes a zero. So zeros are special.

One use of these numbers is to let the user specify minimum and maximum values. With my zero-based design the default minimum and maximum would both be zero, which is spectacularly useless. So I have a choice: a) accept impossibly dumb default values, b) get rid of my zero-based elegance, or c) find an alternate interpretation that makes zero a good default.

I've chosen option c), by representing my numbers in this fashion -- a stored minimum of zero actually means the minimum is the smallest possible 64-bit value, and a parallel interpretation makes a zero maximum mean the largest possible 64-bit value. On the display side I re-normalize the numbers so they make sense to humans. (Also, when the stored value is zero, I actually leave the min and max blank to show that there are no effective limits in place.)
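
Roughly, the display side for the minimum does something like this (a hypothetical sketch: RENDER-MINIMUM is a made-up name, and US_->S is the conversion defined below):

Code:

;; Hypothetical display-side decoding; RENDER-MINIMUM is a made-up name
;; and US_->S is the conversion function shown below.
(defun render-minimum (stored-min)
  (if (zerop stored-min)
      ""   ;; stored zero = no effective lower limit, leave the field blank
      (princ-to-string (us_->s stored-min))))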

ramarren wrote:If you do care about your exact representation then you can just flip the sign bit (assuming definitions above):
Thanks for your help, I was able to use these definitions nicely.

Code:

(defconstant +signbit+ (ash 1 63))
(defun s->us_ (s) (logxor s +signbit+))  ;; Signed to unsigned relative counting up from minimum value
(defun us_->s (us_) (s->us_ us_))
(defun s->us^ (s) (lognot (s->us_ s)))   ;; Signed to unsigned counting down from maximum value
(defun us^->s (us^) (s->us^ us^))
I think I can get the binary encoding out pretty easily too...

Code:

RENDER> (ldb (byte 64 0) (s->us_ 0))
9223372036854775808
RENDER> (ldb (byte 64 0) (s->us_ 1))
9223372036854775809
RENDER> (ldb (byte 64 0) (s->us_ -1))
9223372036854775807

Re: Converting signed to unsigned: help, Lisp is in my way!

Posted: Sat Oct 23, 2010 1:46 pm
by ramarren
I don't think your definitions are correct, at least technically... using your definition I get:

Code:

CL-USER> (s->us_ -2)
-9223372036854775810
which I don't think is exactly what you want, since it is negative. Using LDB will obviously truncate it anyway, so it might not matter, but truncating the number with LOGAND might be a good idea. Also, you write that a stored zero should represent "the smallest possible 64-bit value", but (us_->s 0) -> 9223372036854775808, which is positive. You can't just flip the sign bit; you have to replicate all the "virtual" bits, since CL is specified to behave as if integers had a two's-complement representation. That is what the conditional was for in my code.
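
To make the virtual-bit issue concrete (a quick check using (ash 1 63), i.e. your +signbit+): the XOR result keeps its infinitely many high one-bits, and masking with LOGAND is what cuts them off:

Code:

(logxor -2 (ash 1 63))
;; => -9223372036854775810   ; still negative: the virtual sign bits remain set
(logand (logxor -2 (ash 1 63)) (1- (ash 1 64)))
;; => 9223372036854775806    ; truncated to 64 bits, i.e. -2 counted up from the minimum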

Although honestly, if that is your actual problem, then it would be easier to just add and subtract numbers. I went into bit manipulation because you represented the numbers in binary in your example, but you really don't have to care how exactly they are stored, as long as they fit, since that is the system's problem.

Code:

(defun s->us_ (s)
  (+ s (ash 1 63)))       ; unsigned, counting up from the smallest signed value

(defun us_->s (us_)
  (- us_ (ash 1 63)))

(defun s->us^ (s)
  (- (1- (ash 1 63)) s))  ; unsigned, counting down from the largest signed value

(defun us^->s (us^)
  (- (1- (ash 1 63)) us^))
Also, you do not tell the Lisp system whether a number is signed or unsigned, since signedness is a property of a representation with a given bit width, not of the abstract number itself. You do state the signedness when storing data, but that happens either at the file-opening level, by giving an element-type, or when using the FFI to talk to the operating system, by stating function argument types or the types of memory accesses.
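
For example, writing the converted values out might look something like this (a sketch only: the filename is made up, and it assumes the implementation supports an (unsigned-byte 64) element-type, which SBCL does):

Code:

;; The signedness lives in the stream's ELEMENT-TYPE, not in the number itself.
;; "limits.bin" is a made-up filename for illustration.
(with-open-file (out "limits.bin" :direction :output
                     :element-type '(unsigned-byte 64)
                     :if-exists :supersede)
  (write-byte (s->us_ -1222) out)
  (write-byte (s->us_ 1352) out))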

Re: Converting signed to unsigned: help, Lisp is in my way!

Posted: Sat Oct 23, 2010 3:04 pm
by Warren Wilkinson
I had been doing a subtraction-style thing, but was finding it a little error-prone. This hybrid approach seems to work well, and it keeps the sign, so it should be both Lisp- and storage-friendly.

Code:

(defconstant +max-s64+  (the (unsigned-byte 64) #x7FFFFFFFFFFFFFFF))
(defun s->us^ (s) (the (unsigned-byte 64) (- +max-s64+ (the (signed-byte 64) s))))
(defun us^->s (us^) (the (signed-byte 64) (- +max-s64+ (the (unsigned-byte 64) us^))))
(defun s->us_ (s)  (s->us^ (lognot (the (signed-byte 64) s))))
(defun us_->s (us_) (lognot (us^->s (the (unsigned-byte 64) us_))))

Code:

RENDER> (us^->s 0)
9223372036854775807    ;; An unsigned 0 relative to the signed ceiling gives the signed ceiling
RENDER> (us_->s 0)
-9223372036854775808  ;; an unsigned 0 relative to the signed floor gives the signed floor.

Code:

RENDER> (us_->s (s->us_ 0))  ;; Pack and unpack seems to work, and keep the sign.
0
RENDER> (us_->s (s->us_ -1222))
-1222
RENDER> (us_->s (s->us_ 1352))
1352

Re: Converting signed to unsigned: help, Lisp is in my way!

Posted: Mon Jan 07, 2019 5:15 pm
by Warren Wilkinson
Many years later, here is a better way:

Code:

;; Signed 16 to unsigned 16.
(deposit-field -3000 (byte 16 0) 0)
;; => 62536

;; Signed 14 to unsigned 14.
(deposit-field -3000 (byte 14 0) 0)
;; => 13384

;; Signed 8 to unsigned 8.
(deposit-field -128 (byte 8 0) 0)
;; => 128
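
Going back the other way is just sign extension. A minimal sketch, parameterized on the width (US->S is just a name picked for illustration):

Code:

;; Unsigned back to signed: subtract 2^width when the top bit is set.
(defun us->s (unumber width)
  (if (logbitp (1- width) unumber)
      (- unumber (ash 1 width))
      unumber))

(us->s 62536 16) ;; => -3000
(us->s 13384 14) ;; => -3000
(us->s 128 8)    ;; => -128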