Sunday, March 29, 2020

Running Wacom Bamboo Slate client application on Ubuntu Bionic


I have several Wacom Bamboo Slate tablets in different sizes. The Bamboo Slate is a kind of graphics tablet whose most distinctive feature is that you can draw with the dedicated pen on real paper placed on top of the tablet and get a digital image at the same time. It also supports a "Live mode", in which the tablet behaves like a normal graphics tablet.

Unfortunately, the official client application is only supported on Android, iOS, Windows, and OS X. This post shows how to make it work on one Linux distribution, Ubuntu Bionic.

Tuhi project


There is an open source project, Tuhi (https://github.com/tuhiproject/tuhi), that aims to provide a client application for the device.


Build Tuhi


The biggest issue you may hit on Ubuntu Bionic is one of the build prerequisites: pygobject-3.0 must be 3.30 or higher. The corresponding Debian package on Ubuntu Bionic is python-gi-dev, and the latest version Bionic provides is 3.26.x (as of March 2020); 3.30 is only available on Eoan or later.

There are two solutions. First, you can build the code in a Python virtual environment, which currently provides PyGObject 3.34.0. Alternatively, you can tweak the source code to relax the dependency check so that an older version of pygobject is accepted. I tried the latter with 3.26, and both the normal and live modes work well for me.
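
A minimal sketch of the virtual environment approach (the apt package names and the meson/ninja steps are assumptions based on the PyGObject and Tuhi documentation at the time of writing; adjust as needed):

$ sudo apt install libgirepository1.0-dev libcairo2-dev pkg-config python3-dev gcc meson ninja-build
$ python3 -m venv ~/tuhi-venv
$ . ~/tuhi-venv/bin/activate
(tuhi-venv)$ pip install pycairo pygobject    # builds a recent PyGObject (>= 3.30) inside the venv
(tuhi-venv)$ git clone https://github.com/tuhiproject/tuhi && cd tuhi
(tuhi-venv)$ meson builddir && ninja -C builddir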


Run Normal Mode in a Python Virtual Environment

After building the code, you will get several runners with the suffix .devel for running the application in development mode. You can execute tuhi.devel directly.


Run Live Mode in a Python Virtual Environment

For live mode, the application needs more system permissions because it creates a HID device and has to communicate with the Linux kernel to create device nodes. You can use the following command to execute the live mode runner:

sudo <your python interpreter of your virtual environment> tools/tuhi-live.py

so that you have enough permissions to run the live runner.
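
For instance, assuming the virtual environment from the build sketch above lives at ~/tuhi-venv:

$ sudo ~/tuhi-venv/bin/python tools/tuhi-live.py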


The Difference Between Normal and Live Mode From System Perspective

Normal mode is mainly built on the D-Bus session bus and GTK3. Live mode mainly works by registering the tablet as a HID device with the Linux kernel.

Both normal and live mode are built on the same communication rules defined by the device firmware. Check the Protocol and Interactions classes of the protocol module.


Get the Correct Dimension in Live Mode

In normal mode the device works like a charm. In live mode, however, you may notice that what you draw looks distorted. This is caused by the mismatch between the aspect ratio of your monitor and that of your Bamboo Slate, and setting the slate dimensions accordingly helps.

There are several ways to match the ratio. Here is mine:
  • Constrain the tablet to one monitor only. This step may be optional for you; I need it because I use multiple monitors.
  • Trim the out-of-range part of the tablet area so it matches the monitor's aspect ratio.
Thanks to FLOSS, we already have the tools to achieve both tasks. I will illustrate the ones I used in the following sections.


Constrain the Tablet in One Monitor Only

First, let's check whether the device is recognized as one of the input devices of your X server.

$ xinput
⎡ Virtual core pointer                    id=2 [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer              id=4 [slave  pointer  (2)]
⎜   ↳ Logitech M310                            id=10 [slave  pointer  (2)]
⎜   ↳ Logitech K520                            id=11 [slave  pointer  (2)]
⎜   ↳ SynPS/2 Synaptics TouchPad              id=15 [slave  pointer  (2)]
⎜   ↳ Wacom A4 - Office - Slate Pen stylus    id=17 [slave  pointer  (2)]
⎣ Virtual core keyboard                    id=3 [master keyboard (2)]
    ↳ Virtual core XTEST keyboard              id=5 [slave  keyboard (3)]
    ↳ Power Button                            id=6 [slave  keyboard (3)]
    ↳ Video Bus                                id=7 [slave  keyboard (3)]
    ↳ Power Button                            id=8 [slave  keyboard (3)]
    ↳ Sleep Button                            id=9 [slave  keyboard (3)]
    ↳ Chicony USB2.0 Camera: Chicony          id=12 [slave  keyboard (3)]
    ↳ Intel HID events                        id=13 [slave  keyboard (3)]
    ↳ AT Translated Set 2 keyboard            id=14 [slave  keyboard (3)]
    ↳ Logitech K520                            id=16 [slave  keyboard (3)]

Neat, we now have "Wacom A4 - Office - Slate Pen stylus" as one of the input devices. Then let's check with a more Wacom-specific tool:

$ xsetwacom list
Wacom A4 - Office - Slate Pen stylus id: 17 type: STYLUS

The listed name is exactly the input device name.


Man xsetwacom will Tell You a LOT

You can use "xsetwacom list parameters" to see which parameters are available, check your Wacom device status with "xsetwacom get <device> <parameter>", and then set them via "xsetwacom set <device> <parameter> <value>".

For example, query the mode with:

$ xsetwacom get "Wacom A4 - Office - Slate Pen stylus" Mode
Absolute

Tip: run "man xsetwacom" to see how many parameters are available.

If it is not in absolute mode, change it with:

$ xsetwacom set "Wacom A4 - Office - Slate Pen stylus" Mode Absolute

Query the monitor output names:

$ xrandr -q

According to its output, we can constrain the tablet to the target monitor via:
$ xsetwacom set "Wacom A4 - Office - Slate Pen stylus" MapToOutput eDP-1
You may notice that the value of "Coordinate Transformation Matrix" reported by "xinput list-props <your wacom device name>" changes before and after the MapToOutput setting.

Finally, make sure the value of "Wacom Tablet Area" has the same aspect ratio as your monitor.

$ xinput list-props "Wacom A4 - Office - Slate Pen stylus"
Device 'Wacom A4 - Office - Slate Pen stylus':
Device Enabled (169): 1
Coordinate Transformation Matrix (171): 0.545455, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
Device Accel Profile (300): 0
Device Accel Constant Deceleration (301): 1.000000
Device Accel Adaptive Deceleration (302): 1.000000
Device Accel Velocity Scaling (303): 10.000000
Device Node (292): "/dev/input/event18"
Wacom Tablet Area (732): 2500, 6362, 28700, 21100
Wacom Rotation (733): 0
Wacom Pressurecurve (734): 0, 0, 100, 100
Wacom Serial IDs (465): 1, 1, 2, 0, 0
Wacom Serial ID binding (735): 0
Wacom Pressure Threshold (736): 26
Wacom Sample and Suppress (737): 2, 4
Wacom Enable Touch (738): 0
Wacom Hover Click (739): 1
Wacom Enable Touch Gesture (740): 0
Wacom Touch Gesture Parameters (741): 0, 0, 250
Wacom Tool Type (742): "STYLUS" (731)
Wacom Button Actions (743): "Wacom button action 0" (744), "Wacom button action 1" (745), "Wacom button action 2" (746), "None" (0), "None" (0), "None" (0), "None" (0), "Wacom button action 3" (747)
Wacom button action 0 (744): 1572865
Wacom button action 1 (745): 1572866
Wacom button action 2 (746): 1572867
Wacom button action 3 (747): 1572872
Wacom Pressure Recalibration (748): 1
Wacom Panscroll Threshold (749): 1300
Device Product ID (293): 1386, 1
Wacom Debug Levels (750): 0, 0
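
If the ratios do not match, you can trim the tablet area with the Area parameter of xsetwacom. A minimal sketch, assuming a 16:9 monitor and purely illustrative area numbers (check the full area of your own device first with "xsetwacom get <device> Area"):

$ xsetwacom get "Wacom A4 - Office - Slate Pen stylus" Area
0 0 29600 21600
$ xsetwacom set "Wacom A4 - Office - Slate Pen stylus" Area 0 0 29600 16650    # 29600 * 9 / 16 = 16650
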
That's it. Enjoy your drawing!


Tuesday, March 17, 2020

Enable EDAC debug mode on the Ubuntu Bionic kernel

When developing the Ubuntu Bionic kernel, you may notice that EDAC (Error Detection And Correction) debug support is not enabled by default. You may want to enable it for development. This is how to do it.

First, fetch the Ubuntu Bionic kernel source code via Launchpad.
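A minimal sketch of fetching the source (assuming the standard Ubuntu kernel git tree on Launchpad; check Launchpad for the exact URL of the tree you need):

$ git clone git://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/bionic

Then make a change like the following: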



diff --git a/debian.master/config/annotations b/debian.master/config/annotations
index d4ba76f3a350..bf220bf6e729 100644
--- a/debian.master/config/annotations
+++ b/debian.master/config/annotations
@@ -1259,7 +1259,7 @@ CONFIG_OF_UNITTEST                              flag<DEBUG>
 # Menu: Device Drivers >> EDAC (Error Detection And Correction) reporting
 CONFIG_EDAC                                     policy<{'amd64': 'y', 'arm64': 'y', 'armhf': 'y', 'i386': 'y', 'ppc64el': 'y'}>
 CONFIG_EDAC_LEGACY_SYSFS                        policy<{'amd64': 'n', 'arm64': 'n', 'armhf': 'n', 'i386': 'n', 'ppc64el': 'n'}>
-CONFIG_EDAC_DEBUG                               policy<{'amd64': 'n', 'arm64': 'n', 'armhf': 'n', 'i386': 'n', 'ppc64el': 'n'}>
+CONFIG_EDAC_DEBUG                               policy<{'amd64': 'n', 'arm64': 'y', 'armhf': 'n', 'i386': 'n', 'ppc64el': 'n'}>
 CONFIG_EDAC_DECODE_MCE                          policy<{'amd64': 'm', 'i386': 'm'}>
 CONFIG_EDAC_GHES                                policy<{'amd64': 'y', 'arm64': 'y', 'i386': 'y'}>
 CONFIG_EDAC_AMD64                               policy<{'amd64': 'm', 'i386': 'm'}>
diff --git a/debian.master/config/arm64/config.flavour.generic b/debian.master/config/arm64/config.flavour.generic
index bb7773a235d2..b6d9b685a5a7 100644
--- a/debian.master/config/arm64/config.flavour.generic
+++ b/debian.master/config/arm64/config.flavour.generic
@@ -1,3 +1,4 @@
 #
 # Config options for config.flavour.generic automatically generated by splitconfig.pl
 #
+CONFIG_EDAC_DEBUG=y
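
After the change, rebuild the kernel packages in the usual Ubuntu way, for example (a sketch; the flavours, targets, and any cross-build setup may vary for your environment):

$ fakeroot debian/rules clean
$ fakeroot debian/rules binary-headers binary-generic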


Now, after building and booting the new kernel, if you check the debugfs of EDAC you should see that the /sys/kernel/debug/edac folder is generated by default.
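
For example (debugfs is normally only readable by root):

$ sudo ls /sys/kernel/debug/edac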

Sunday, February 3, 2019

Using command line tools proactively


I still remember the day I first saw books like "the Linux command manual". Most of them are just (or close to) collections of content from the "man" command output. I was surprised: "How could a person memorize or know so many commands and their usage?" There are hundreds of commands, and thousands of corresponding options and usages.

Many people, myself included, figure out which commands to learn and how to use them by googling a specific question. For example, google "How do I list files in a directory with Ubuntu Linux" and (more or less randomly) pick the first few search results to follow. As time goes by, I learn more and more commands, until I can handle most of the issues I run into.

Then the learning curve reaches a plateau. I go for longer and longer periods without learning a new command or a new option of a command I already know.

"How the people on the Stack Overflow know so many variant ways or commands to handle a similar problem?" I seldom (and can't most of the time) walkthrough the manual of a command, and I suppose many people don't as well. For example, the "dd" command has a lot of fancy and useful options and features to use, but the default value should work like a charm in more than 80% of life problems. If I don't know there is such a option or an extra feature to use, how am I aware that I could use them?

Besides randomly googling and waiting for someone's answer on Stack Overflow, there is another way to think outside the box: think proactively.

To think proactively here means keeping the following points of view in mind:


  • What problem am I going to resolve?
  • What is the essential property of this problem? Do other problems share this property?
  • What features should a tool have to resolve this problem essentially?
  • What kind of tool might resolve this problem, and how would that kind of tool resolve it?
  • What is the result after I apply the tool to the problem?

Let's take the "dd" command as an example again. Assume our problem is "to clone one disk". Then the essential properties of this problem could be:

  • How to clone this disk faster? - Is there any option to make it faster?
  • How to clone the disk reliably? - Is there any option for me to check the status frequently?
  • It is an I/O problem - there are very likely input- and output-related features.
  • It is an operation on a block device - I have to think from the block device point of view.

And then it could be:

  • Speed: what are the potential features that make read/write I/O on block devices faster? - read/write chunk size; error handling.
  • Reliability: is there any progress status reporter? Is there any error handler?
So I might figure out that the "bs" option plays a role in the speed, and that there are "sync" and "noerror" conversions. I could also suppose there should be an option for progress; it turns out to be status=progress.
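
Putting those guesses together, a disk-clone command might look like the following (device names are purely illustrative; double-check if= and of= before running anything like this):

$ sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=sync,noerror status=progress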



This mindset is similar to the one used when trying to find a solution to a problem, an answer to a question, or a bug in code. Essentially, they all come down to figuring out the goal, collecting the associated information, applying it, and reviewing the result consciously.

For example, I have been working on SGE (Sun Grid Engine) infrastructure recently. I was prototyping a solution to build the infrastructure automatically with LXD/LXC. When I completed the prototype, I moved the Juju/Charms solution to MaaS and got blocked. However, I could soon find the root cause by thinking proactively, like:

  • The error looks like a network issue and a permission issue.
  • Building an SGE scheduler is a question of communication between nodes.
  • When I set up the prototype successfully, which parts related to communication/networking/permissions?
  • Are the same steps applied in the new infrastructure flow?
Then I could figure out what kinds of commands (qconf, for example) and which of their options I should look into. :)


In conclusion, when you already have some background knowledge, try to think "the tool I already know would be great if it had this feature. Does it have this feature?" rather than "google the problem directly." Googling a problem on the internet usually gets you just an entry-level answer, nothing, or noise.


Wednesday, November 21, 2018

Modify casper/initrd of Ubuntu 18.10 Cosmic Cuttlefish


This article was first posted at https://askubuntu.com/questions/1094854/how-to-modify-initrd-initial-ramdisk-of-ubuntu-18-10-cosmic-cuttlefish/1094855 because this change is pretty new and it seems nobody had asked about it on the internet yet. Posting there should help many people in the months following the 18.10 release.


Besides, this quote from the Debian wiki is useful background knowledge:

  • If an uncompressed cpio archive exists at the start of the initramfs, extract and load the microcode from it to CPU.
  • If an uncompressed cpio archive exists at the start of the initramfs, skip that and set the rest of file as the basic initramfs. Otherwise, treat the whole initramfs as the basic initramfs.
  • unpack the basic initramfs by treating it as compressed (currently gzipped) cpio archive into a RAM-based disk.
  • mount and use the RAM-based disk as the initial root filesystem.
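
Based on that layout, a minimal sketch of extracting, modifying, and repacking such an initrd (the directory names come from unmkinitramfs; the compression of the main archive, lz4 here, may differ on your image):

$ mkdir extracted
$ unmkinitramfs /path/to/casper/initrd extracted      # splits into early/ (microcode) and main/
$ # ... modify files under extracted/main/ ...
$ cd extracted/early && find . | cpio -o -H newc > ../../initrd.new           # uncompressed microcode first
$ cd ../main && find . | cpio -o -H newc | lz4 -l -9 >> ../../initrd.new      # then the compressed basic initramfs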

Thursday, October 11, 2018

Troubleshooting - curtin version is incorrect on a MaaS region server

A few weeks ago a weird MaaS issue happened to me: when I tried to commission or deploy a node with the ga-18.04 kernel, the deployment cycle always stopped at the grub entry showing "Commissioning".

After fighting for a few days by pausing in the ephemeral environment while the customized image was being dd'd to the hard disk, I noticed that the well-functioning MaaS region server updates the kernel in the ephemeral environment, while the malfunctioning one does not. Using the new kernel is very important for deploying my customized images because I need the nls_iso8859-1.ko module to deal with my recovery partition. This code snippet shows how a recent curtin (18.1) updates the kernel:


ubuntu@breckenridge-dvt2-201802-26115:/curtin$ grep linux-image -r *
Binary file curtin/deps/__pycache__/__init__.cpython-36.pyc matches
curtin/deps/__init__.py:    # linux-image package for this environment
curtin/deps/__init__.py:    kernel_pkg = 'linux-image-%s' % os.uname()[2]

def check_kernel_modules(modules=None):
    if modules is None:
        modules = REQUIRED_KERNEL_MODULES

    # if we're missing any modules, install the full
    # linux-image package for this environment
    for kmod in modules:
        try:
            subp(['modinfo', '--filename', kmod], capture=True)
        except ProcessExecutionError:
            kernel_pkg = 'linux-image-%s' % os.uname()[2]
            return [MissingDeps('missing kernel module %s' % kmod, kernel_pkg)]

    return []


Thus I dug into curtin, which takes care of the installation/dd of images, and noticed that the curtin version differed between the two MaaS region servers, even though they had the same version of MaaS installed. The malfunctioning one used curtin 0.1.0, and the good server used 18.1. Updating curtin fixed the issue.

In conclusion, the curtin version on the MaaS region server matters. It seems that curtin is mapped into the ephemeral environment and leveraged there. Interesting!


Summary of the Debugging Tips




  • Summary of the debugging flow of this case
    • stop at the grub entry
    • check the previous stage and find errors in the curtin stage
    • compare the good and bad environments that use curtin (the ephemeral environment)
    • identify the root cause as the lack of nls_iso8859-1.ko
    • notice that the good environment updates its kernel
    • figure out that the curtin source differs
    • find that the curtin version differs
  • The curtin log is valuable. Read it carefully and check where the very first error is triggered.
  • Effective Debugging: 66 Specific Ways to Debug Software and Systems by Diomidis Spinellis suggests that comparing the buggy system with a well-functioning one may help. So true!






Monday, September 24, 2018

How does "source activate conda-virtual-environment" work?


When using conda from Anaconda or Miniconda to create and manage a Python virtual environment, this kind of command is commonly used to activate and deactivate the target virtual environment:
source activate <conda-virtual-environment-name>
How does this work? First, we need to know:

  • source is a feature of the bash shell. It is equivalent to . (a dot) in the dash shell.
    • The bash manual page says:
      • ... filenames in PATH are used to find the directory containing filename. ...
If you can use the conda command, the conda bin folder must be included in the PATH environment variable so that conda can be found and executed. If you look into that same conda bin folder, you will see that the files activate and deactivate live there as well.


Thus, the command, source activate conda-virtual-environment, is actually

source <path-to-conda-bin-folder>/activate <conda-virtual-environment-name>

<conda-virtual-environment-name> is just an argument to the executable file activate. Reading the activate file will help you understand how the virtual environment is launched/activated.
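
A quick way to check this yourself (the paths below are illustrative; they depend on where conda is installed):

$ which conda                                   # e.g. /home/you/miniconda3/bin/conda
$ ls "$(dirname "$(which conda)")" | grep -E '^(de)?activate$'
$ head "$(dirname "$(which conda)")/activate"   # peek at how activation works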

By the way, recent versions of conda are moving to "conda activate" to replace the conventional "source activate".




Monday, May 14, 2018

Installer nightmare

Developing or debugging an installer can be very challenging because of limited resources. Besides, long turnaround time is another big challenge. It can be a nightmare.

The limited resources here mean:

  • You have no idea where the log will be.
  • You may not be able to fetch the log you want.
  • You may not manage to access the log even when you know where it is.


The long turnaround time here means:


  • You can't reproduce the breakpoint within 5 minutes because you have to restart the machine and wait for image-level copying.


I will take an example below to elaborate on the essence of the installer challenge. LAVA is a tool for Debian, and I will talk about Ubuntu.


Ubuntu Desktop installer


When developing the Ubuntu Desktop installer on a real machine in OEM mode, I often turn on debug mode by injecting debug parameters on the kernel command line and then check /var/log/installer/debug. Hardcoding the frequently used parameters in the bootloader (say, grub.cfg for grub) can be a good idea, because typing the parameters every time takes a lot of attention. The following parameters are the ones I use the most:


  • debug -- automatic-oem-config debug
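
A sketch of hardcoding them into a grub menu entry (the kernel/initrd paths and the remaining boot parameters are illustrative and depend on your installation media):

menuentry "Install (debug)" {
        set gfxpayload=keep
        linux /casper/vmlinuz boot=casper debug -- automatic-oem-config debug
        initrd /casper/initrd
}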

Sometimes I also tweak casper/filesystem.squashfs to dump extra messages at stage 1 of recovery. Besides, installing useful tools via chroot/dpkg can be a good idea as well. A better text editor and the ability to connect remotely over ssh help me interact with the installer and monitor the log at run time.

Tweaking the squashfs is useful, but it comes at a cost. A typical Ubuntu desktop squashfs can be 1-2 GB. If you are not using solid-state disks, it takes a lot of time to extract the squashfs, modify it, and then re-pack it into a squashfs. The usual flow looks like:
  • sudo unsquashfs -d ./fs filesystem.squashfs (extract files)
  • sudo mksquashfs ./fs/ filesystem.squashfs.mod (pack modified files)
  • sudo cp filesystem.squashfs.mod ./<somewhere of your installing media>/casper/filesystem.squashfs (deploy)

An auxiliary helper can be an FTP server from which to download debug tools.