Monday, October 7, 2019

Set up a development environment for the nRF52840 Bluetooth SoC

1. Install the toolchain on Debian testing (currently bullseye)
# apt install libnewlib-dev libstdc++-arm-none-eabi-newlib libnewlib-arm-none-eabi gcc-arm-none-eabi binutils-arm-none-eabi

1.1. Verify the version of the installed gcc
# arm-none-eabi-gcc --version
arm-none-eabi-gcc (15:7-2018-q2-6+b1) 7.3.1 20180622 (release) [ARM/embedded-7-branch revision 261907]
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

2. Download and install the nRF5 SDK
Go to https://www.nordicsemi.com/Software-and-Tools/Software/nRF5-SDK/, and
download the latest version of the SDK at https://www.nordicsemi.com/-/media/Software-and-other-downloads/SDKs/nRF5/Binaries/nRF5SDK153059ac345.zip
Unzipping nRF5SDK153059ac345.zip yields the following layout:
nRF5_SDK_15.3.0_59ac345
+- components
+- config
+- documentation
+- examples
+- external
+- external_tools
+- integration
+- modules
+- license.txt
+- nRF5x_MDK_8_24_1_IAR_NordicLicense.msi
+- nRF5x_MDK_8_24_1_Keil4_NordicLicense.msi

Under "$(SDK_ROOT)/components/toolchain/gcc" in the SDK there are several Makefiles, one per platform (Windows/Linux/macOS).
On Linux or macOS, edit "Makefile.posix" as shown below.

GNU_INSTALL_ROOT ?= /usr/local/gcc-arm-none-eabi-7-2018-q2-update/bin/
GNU_VERSION ?= 7.3.1
GNU_PREFIX ?= arm-none-eabi

into

GNU_INSTALL_ROOT ?= /usr/bin/
GNU_VERSION ?= 7.3.1
GNU_PREFIX ?= arm-none-eabi
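The same edit can be scripted with sed. A sketch, demonstrated here on a scratch copy of the file; on a real SDK tree, point MK at "$(SDK_ROOT)/components/toolchain/gcc/Makefile.posix" instead.

```shell
# Create a scratch copy with the SDK's original toolchain settings.
MK=$(mktemp)
cat > "$MK" <<'EOF'
GNU_INSTALL_ROOT ?= /usr/local/gcc-arm-none-eabi-7-2018-q2-update/bin/
GNU_VERSION ?= 7.3.1
GNU_PREFIX ?= arm-none-eabi
EOF
# Point GNU_INSTALL_ROOT at the Debian-packaged toolchain in /usr/bin/.
sed -i 's|^GNU_INSTALL_ROOT ?= .*|GNU_INSTALL_ROOT ?= /usr/bin/|' "$MK"
grep '^GNU_INSTALL_ROOT' "$MK"   # GNU_INSTALL_ROOT ?= /usr/bin/
```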

2.1. Try to compile an example.
Under "$(SDK_ROOT)/examples/peripheral/blinky/pca10056/blank/armgcc":
$ make
mkdir _build
cd _build && mkdir nrf52840_xxaa
Assembling file: gcc_startup_nrf52840.S
Compiling file: nrf_log_frontend.c
Compiling file: nrf_log_str_formatter.c
Compiling file: boards.c
Compiling file: app_error.c
Compiling file: app_error_handler_gcc.c
Compiling file: app_error_weak.c
Compiling file: app_util_platform.c
Compiling file: nrf_assert.c
Compiling file: nrf_atomic.c
Compiling file: nrf_balloc.c
Compiling file: nrf_fprintf.c
Compiling file: nrf_fprintf_format.c
Compiling file: nrf_memobj.c
Compiling file: nrf_ringbuf.c
Compiling file: nrf_strerror.c
Compiling file: nrfx_atomic.c
Compiling file: main.c
Compiling file: system_nrf52840.c
Linking target: _build/nrf52840_xxaa.out
   text       data        bss        dec        hex    filename
   2168        112        172       2452        994    _build/nrf52840_xxaa.out
Preparing: _build/nrf52840_xxaa.hex
Preparing: _build/nrf52840_xxaa.bin
DONE nrf52840_xxaa
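A quick sanity check on the size summary above: dec is simply text + data + bss, and hex is the same value in base 16.

```shell
# dec = text + data + bss for the blinky image above; hex is base 16.
echo $((2168 + 112 + 172))            # 2452
printf '%x\n' $((2168 + 112 + 172))   # 994
```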

Friday, April 26, 2019

Running the Android KakaoTalk app on Linux

First, install anbox (Android in a Box), which provides an Android environment on a Linux system.

# apt install anbox

It installed, but something goes wrong when running it. Let's see what the problem is.

$ anbox session-manager
 [ 2019-04-25 14:41:11] [session_manager.cpp:130@operator()] Failed to start as either binder or ashmem kernel drivers are not loaded

It appears it fails to start because the kernel drivers are not loaded.

So I looked at the description of the installed package.

....
This package needs Android kernel modules and rootfs image, see
/usr/share/doc/anbox/README.Debian for information.
...

I decided to follow along. Let's look at /usr/share/doc/anbox/README.Debian.

Its contents can be summarized as follows.

/lib/modules/`uname -r`/kernel/drivers/android/binder_linux.ko
/lib/modules/`uname -r`/kernel/drivers/staging/android/ashmem_linux.ko
Check that the two kernel modules above exist.
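That check can be scripted; a small sketch that reports each module file either way:

```shell
# Report whether the binder/ashmem module files exist for the running kernel.
report=$(for m in kernel/drivers/android/binder_linux.ko \
                  kernel/drivers/staging/android/ashmem_linux.ko; do
  f="/lib/modules/$(uname -r)/$m"
  if [ -e "$f" ]; then echo "found:   $f"; else echo "missing: $f"; fi
done)
echo "$report"
```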

Then download a prebuilt Android image from https://build.anbox.io/android-images, preferably the most recent one. At the time of writing, the latest is:

Index of /android-images/2018/07/19

android_amd64.img            2018-07-20 01:46  311M
android_amd64.img.sha256sum  2018-07-20 02:11  84

From here on, the documentation gets rather unfriendly. It is the usual trouble with the much-debated systemd.
It abruptly tells you to run anbox-container-manager.service.
After some searching, the following looked right, so I tried it first.

# systemctl start anbox-container-manager.service

That mostly seems to have worked. Let's give it a try. .... It doesn't work. Something seems to be missing.
Let's run it one more time.

$ anbox session-manager
 [ 2019-04-25 15:01:08] [session_manager.cpp:130@operator()] Failed to start as either binder or ashmem kernel drivers are not loaded

No change. Let's force the modules into the list of kernel modules loaded at boot, like this...

/etc/modules:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
ashmem_linux
binder_linux
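After a reboot (or, to avoid rebooting, `modprobe ashmem_linux binder_linux` as root), the modules can be confirmed without root by reading /proc/modules. A sketch:

```shell
# /proc/modules lists the currently loaded kernel modules, one per line.
status=$(grep -E '^(ashmem_linux|binder_linux) ' /proc/modules \
         || echo "modules not loaded yet")
echo "$status"
```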


Then reboot...

Run it again.

$ anbox session-manager
[ 2019-04-25 15:07:35] [daemon.cpp:61@Run] Failed to connect to socket /run/anbox-container.socket: No such file or directory


Argh, this is driving me crazy.

# systemctl start anbox-container-manager.service

Run it again.

$ anbox session-manager
[ 2019-04-25 15:09:28] [daemon.cpp:61@Run] Failed to connect to socket /run/anbox-container.socket: No such file or directory


There must be a problem with the service. Phew...




# systemctl status anbox-container-manager.service
● anbox-container-manager.service - Anbox Container Manager
   Loaded: loaded (/lib/systemd/system/anbox-container-manager.service; enabled;
   Active: inactive (dead)
Condition: start condition failed at Fri 2019-04-26 00:06:55 KST; 4min 6s ago
           └─ ConditionPathExists=/var/lib/anbox/android.img was not met
     Docs: man:anbox(1)

 4월 26 00:05:22 T480s systemd[1]: Condition check resulted in Anbox Container M
 4월 26 00:06:55 T480s systemd[1]: Condition check resulted in Anbox Container M




It cannot find the Android image. Copy the downloaded image into /var/lib/anbox.

$ su
# anbox container-manager
[ 2019-04-25 15:17:48] [container_manager.cpp:71@operator()] You are running the container manager manually which is most likely not
[ 2019-04-25 15:17:48] [container_manager.cpp:72@operator()] what you want. The container manager is normally started by systemd or
[ 2019-04-25 15:17:48] [container_manager.cpp:73@operator()] another init system. If you still want to run the container-manager
[ 2019-04-25 15:17:48] [container_manager.cpp:74@operator()] you can get rid of this warning by starting with the --daemon option.
[ 2019-04-25 15:17:48] [container_manager.cpp:75@operator()]
[ 2019-04-25 15:17:48] [container_manager.cpp:119@operator()] boost::filesystem::create_directories: Invalid argument
# exit

$ anbox session-manager
[ 2019-04-25 15:18:23] [daemon.cpp:61@Run] Failed to connect to socket /run/anbox-container.socket: No such file or directory


After a long review, I think I found the problem.

The trouble was that I had not renamed the image in /var/lib/anbox from android_amd64.img to android.img.

Stop anbox-container-manager.service and start it again.
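In other words, the unit's ConditionPathExists=/var/lib/anbox/android.img requires that exact file name. A sketch of the fix, shown on a scratch directory; on the real system, do this in /var/lib/anbox as root with the actual image.

```shell
ANBOX_DIR=$(mktemp -d)                 # stand-in for /var/lib/anbox
touch "$ANBOX_DIR/android_amd64.img"   # stand-in for the downloaded image
# Give the image the exact name the systemd unit checks for.
mv "$ANBOX_DIR/android_amd64.img" "$ANBOX_DIR/android.img"
ls "$ANBOX_DIR"                        # android.img
```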

# systemctl stop anbox-container-manager.service
# systemctl start anbox-container-manager.service
# systemctl status anbox-container-manager.service
● anbox-container-manager.service - Anbox Container Manager
   Loaded: loaded (/lib/systemd/system/anbox-container-manager.service; enabled;
   Active: active (running) since Fri 2019-04-26 00:23:27 KST; 5s ago
     Docs: man:anbox(1)
  Process: 27421 ExecStartPre=/sbin/modprobe ashmem_linux (code=exited, status=0
  Process: 27422 ExecStartPre=/sbin/modprobe binder_linux (code=exited, status=0
  Process: 27423 ExecStartPre=/usr/share/anbox/anbox-bridge.sh start (code=exite
 Main PID: 27499 (anbox)
    Tasks: 9 (limit: 4915)
   Memory: 4.8M
   CGroup: /system.slice/anbox-container-manager.service
           └─27499 /usr/bin/anbox container-manager --daemon --privileged --data

 4월 26 00:23:27 T480s systemd[1]: Starting Anbox Container Manager...
 4월 26 00:23:27 T480s systemd[1]: Started Anbox Container Manager.
 

This feels promising. Let's run it one more time.




It works.

Wednesday, March 6, 2019

[Note] Setup DepthEye-H1

1. Download DepthEyeSdk-master from https://github.com/pointcloud-ai/DepthEyeSdk


2. Unzip it.

3. Copy the udev permission rules for the USB connection, with root privilege.

# cp DepthEyeSdk-master/third_party/udev/rules.d/72-DepthEyeH1CDK.rules /etc/udev/rules.d/

3.1. Reload udev rules.

# udevadm control --reload-rules && udevadm trigger

4. Plug the DepthEye-H1 into a USB hub powered by an external adapter.

5. Verify the connection.

# lsusb
Bus 003 Device 002: ID 8087:8000 Intel Corp.
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 8087:8008 Intel Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 050: ID 0451:9107 Texas Instruments, Inc. 
Bus 002 Device 002: ID 17ef:60a9 Lenovo
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

6. Export SDK path.
$ export VOXEL_SDK_PATH="third_party/voxelsdk_ubuntu_4.13"
$ export PATH=$VOXEL_SDK_PATH/lib:$VOXEL_SDK_PATH/bin:$PATH

7. Build SDK.

$ mkdir build

$ cd build/

$ cmake ..
-- The C compiler identification is GNU 8.2.0
-- The CXX compiler identification is GNU 8.2.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMAKE_SYSTEM_NAME = Linux VERSION=4.19.0-2-amd64(4-19-0) ARCH = x86_64
use Voxel SDK ./third_party/voxelsdk_ubuntu_4.13/lib
-- Configuring done
-- Generating done
-- Build files have been written to: /home/user/Workspace/DepthEyeSdk-master/build

$ make
Scanning dependencies of target deptheye
[ 25%] Building CXX object src/CMakeFiles/deptheye.dir/DepthEyeInterface.cpp.o
[ 50%] Linking CXX static library ../lib/libdeptheye.a
[ 50%] Built target deptheye
Scanning dependencies of target H1AsciiSample
[ 75%] Building CXX object test/CMakeFiles/H1AsciiSample.dir/H1AsciiSample.cpp.o
[100%] Linking CXX executable ../bin/H1AsciiSample
[100%] Built target H1AsciiSample

8. Test

$ ./bin/H1AsciiSample
 INFO: load path path size:3 INFO: load path file size =0
 INFO: load path /home/user/.Voxel/lib file size =0
 INFO: load path /usr/lib/voxel file size =0
 WARNING: CameraSystem: No depth camera library found or loaded.
 ERROR: Find depth camera FAILED
 Press any key to quit

Whoops...
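One guess at the cause, not verified here: step 6 added the SDK's lib/ directory to PATH, but the dynamic loader consults LD_LIBRARY_PATH, and the relative VOXEL_SDK_PATH stops resolving once you cd into build/. A sketch of an alternative setup using an absolute path:

```shell
# Sketch (assumption): use an absolute SDK path and LD_LIBRARY_PATH so the
# dynamic loader can find the Voxel camera libraries from any directory.
export VOXEL_SDK_PATH="$PWD/third_party/voxelsdk_ubuntu_4.13"
export LD_LIBRARY_PATH="$VOXEL_SDK_PATH/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PATH="$VOXEL_SDK_PATH/bin:$PATH"
echo "$LD_LIBRARY_PATH"
```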


Tuesday, February 12, 2019

When CUDA_ERROR_UNKNOWN comes up calling cuInit

The detailed error message is:

(venv) user@LifeNTech:~/Workspace/deep_neural_network$ python test_keras.py
Using TensorFlow backend.
Epoch 1/5
2019-02-11 17:45:26.303774: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-02-11 17:45:26.315447: E tensorflow/stream_executor/cuda/cuda_driver.cc:397] failed call to cuInit: CUDA_ERROR_UNKNOWN
2019-02-11 17:45:26.315530: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] retrieving CUDA diagnostic information for host: LifeNTech
2019-02-11 17:45:26.315552: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:170] hostname: LifeNTech
2019-02-11 17:45:26.315635: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:194] libcuda reported version is: 390.87.0
2019-02-11 17:45:26.315687: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:198] kernel reported version is: 390.87.0
2019-02-11 17:45:26.315695: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:305] kernel version seems to match DSO: 390.87.0

This might be helpful:
# nvidia-modprobe -u
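CUDA_ERROR_UNKNOWN from cuInit is frequently just missing /dev/nvidia* device nodes; nvidia-modprobe -u loads the nvidia-uvm module and creates its node. A quick check that is safe to run as a normal user:

```shell
# List the NVIDIA device nodes; an empty result suggests running
# 'nvidia-modprobe -u' (as root) or rebooting to create them.
nodes=$(ls /dev/nvidia* 2>/dev/null || true)
if [ -n "$nodes" ]; then
  echo "$nodes"
else
  echo "no /dev/nvidia* device nodes found"
fi
```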


Thursday, January 24, 2019

Install nvidia-cuda-toolkit 9.0.176-2 on Debian (buster).

First, download all the dependencies from http://ftp.riken.jp/Linux/debian/debian/pool/non-free/n/nvidia-cuda-toolkit/ and install them:

# dpkg -i libaccinj64-9.0_9.0.176-2_amd64.deb libcublas9.0_9.0.176-2_amd64.deb \
    libcudart9.0_9.0.176-2_amd64.deb libcuinj64-9.0_9.0.176-2_amd64.deb \
    libcurand9.0_9.0.176-2_amd64.deb libcusolver9.0_9.0.176-2_amd64.deb \
    libcusparse9.0_9.0.176-2_amd64.deb libnppc9.0_9.0.176-2_amd64.deb \
    libnppial9.0_9.0.176-2_amd64.deb libnppicc9.0_9.0.176-2_amd64.deb \
    libnppicom9.0_9.0.176-2_amd64.deb libnppidei9.0_9.0.176-2_amd64.deb \
    libnppif9.0_9.0.176-2_amd64.deb libnppig9.0_9.0.176-2_amd64.deb \
    libnppim9.0_9.0.176-2_amd64.deb libnppist9.0_9.0.176-2_amd64.deb \
    libnppisu9.0_9.0.176-2_amd64.deb libnppitc9.0_9.0.176-2_amd64.deb \
    libnpps9.0_9.0.176-2_amd64.deb libnvgraph9.0_9.0.176-2_amd64.deb \
    libnvrtc9.0_9.0.176-2_amd64.deb libnvtoolsext1_9.0.176-2_amd64.deb \
    libnvvm3_9.0.176-2_amd64.deb

In order to prevent automatic upgrade, mark some packages as hold.

# apt-mark hold libnvtoolsext1 libnvvm3 nvidia-cuda-dev nvidia-cuda-toolkit nvidia-profiler

In addition, a few minor dependencies need to be downloaded and installed.
# apt install gcc-6 g++-6 clang-4.9

Then finalize the installation of nvidia-cuda-toolkit.

# dpkg -i nvidia-profiler_9.0.176-2_amd64.deb
# dpkg -i nvidia-cuda-dev_9.0.176-2_amd64.deb
# dpkg -i nvidia-cuda-toolkit_9.0.176-2_amd64.deb 

Next, download and install remaining dependencies from https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/

# dpkg -i libcudnn7_7.0.5.15-1+cuda9.0_amd64.deb
# dpkg -i libcudnn7-dev_7.0.5.15-1+cuda9.0_amd64.deb
# dpkg -i libcudnn7-doc_7.0.5.15-1+cuda9.0_amd64.deb
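To confirm the dynamic linker can now see the CUDA/cuDNN shared objects, the ld.so cache can be queried (a quick check, not part of the original log):

```shell
# Query the ld.so cache for the cuBLAS/cuDNN libraries installed above.
msg=$( { ldconfig -p 2>/dev/null || true; } | grep -E 'libcublas|libcudnn' \
       || echo "not found in ld cache (try running ldconfig as root)")
echo "$msg"
```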

(venv) $ pip install tensorflow-gpu

For setting up virtualenv, see the other post about it.

Install wxPython 4.0.4 in raspberry pi

1. Preparation
1.1. An SDHC card (16 GB or larger) with Raspbian installed
1.2. Install dependencies
# sudo apt install libjpeg-dev libtiff5-dev libnotify-dev libgtk2.0-dev libgtk-3-dev libsdl1.2-dev libgstreamer-plugins-base0.10-dev libwebkitgtk-dev freeglut3 freeglut3-dev

1.3. Change boot from GUI to CLI
Preferences -> Raspberry Pi Configuration -> At Boot, select "To CLI"


2. Install wxpython 4.0.4
# sudo pip3 install wxpython

3. Wait (the build takes a very long time) and do not watch it.....

4. Return to the GUI
# sudo raspi-config
 -> 3. BootOptions -> B1 Desktop / CLI -> B4 Desktop Autologin -> double 'TAB' -> Finish

# reboot

5. Check the installation
$ python3

>>> import wx
>>> wx.__version__
'4.0.4'



Install log of tensorflow-gpu on Debian (buster)

1. Install nvidia driver
# apt install nvidia-driver

1.1. Driver check
# nvidia-smi
Thu Jan 24 15:06:13 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.87                 Driver Version: 390.87                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  Off  | 00000000:01:00.0  On |                  N/A |
| 45%   30C    P8    N/A /  75W |    220MiB /  4038MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       872      G   /usr/lib/xorg/Xorg                           135MiB |
|    0     13044      G   /usr/lib/firefox-esr/firefox-esr               1MiB |
|    0     13275      G   /usr/lib/firefox-esr/firefox-esr              79MiB |
|    0     14283      G   /usr/lib/firefox-esr/firefox-esr               1MiB |
+-----------------------------------------------------------------------------+

1.2. Install cuda toolkit
# apt install nvidia-cuda-toolkit
# cudafe++ -v
cudafe: NVIDIA (R) Cuda Language Front End
Portions Copyright (c) 2005-2018 NVIDIA Corporation
Portions Copyright (c) 1988-2016 Edison Design Group Inc.
Based on Edison Design Group C/C++ Front End, version 4.14 (Jun 12 2018 23:07:12)
Cuda compilation tools, release 9.2, V9.2.148
# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.2, V9.2.148

2. Install Python3.6
2.1. Check the default python in Debian 10 (buster)
# apt policy python3
python3:
  Installed: 3.7.1-3
  Candidate: 3.7.1-3
  Version table:
 *** 3.7.1-3 500
        500 http://debian-archive.trafficmanager.net/debian testing/main amd64 Packages
        100 /var/lib/dpkg/status

2.2. Install the python3.6 and virtualenv packages
# apt install python3.6 virtualenv

3. Build a private virtual environment with python3.6
$ virtualenv --system-site-packages -p python3.6 ./venv
$ source ./venv/bin/activate
(venv) $ pip install --upgrade pip
(venv) $ pip install tensorflow-gpu

4. (TODO) Check tensorflow-gpu

(venv) $ python
Python 3.6.8 (default, Jan  3 2019, 03:42:36)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/home/user/Workspace/deep_neural_network/venv/lib/python3.6/imp.py", line 243, in load_module
    return load_dynamic(name, filename, file)
  File "/home/user/Workspace/deep_neural_network/venv/lib/python3.6/imp.py", line 343, in load_dynamic
    return _load(spec)
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/__init__.py", line 22, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  ... (same inner traceback as above) ...
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_problems
for some common reasons and solutions. Include the entire stack trace above this
error message when asking for help.

4.1. Fix the absence of libcublas.so.9.0

There is a lot of work to do to install the dependencies related to libcublas.so.9.0.
Please see my next post about installation of nvidia-cuda-toolkit.

P.S.
# prompt : root privilege
$ prompt : user privilege