RealSense learning/tutorial/sharing blog – Chapter One: The Start, Install and Hello World

It's been a month since I started working with this Intel RealSense D435, and I'm still unsure about a lot of the functions and parameters: when to add () and when not to, and which function takes which parameter. First I read

https://buildmedia.readthedocs.org/media/pdf/pyrealsense/dev/pyrealsense.pdf

and python.cpp in the Python wrapper to find the options. That actually didn't help much; mostly I started from the examples in the Python wrapper and then read the corresponding code in the C++ examples. Since this project is about measuring objects in the picture, and one of the examples already measures on a live stream, translating that code was the first approach. The documentation is here:

https://pyrealsense.readthedocs.io/en/master/index.html

Starting with the camera:

With the Intel RealSense Viewer we can see all the available control options.

(picture: the IR/depth/RGB streams)

For RGB the resolution goes up to 1920×1080.
Auto-exposure is sometimes turned off to reduce frame drops.

For my measuring use case I will be using the High Density preset to get the most pixels covered,
while for detecting movement or objects the High Accuracy preset can be used,
and the presets can also be set manually.

One important feature for me is hole filling: to get the most out of the measurements, this algorithm fills the gaps in the depth frame.
before

after 

The Viewer can already record, and lets you choose the save directory.

The view also comes in 3D, for point-cloud usage.

Last but not least, the depth viewer: I haven't been using it so far, since it's essentially the Viewer without the color-stream controls.

Setting up the environment

pip install pyrealsense2

This is currently the easiest way to install it; the other method is building through CMake, which I never tested.

Other packages needed will be: numpy, opencv-python, matplotlib

These can also be installed with pip.

————————————————————————————————————————

So, let's start scripting:

My first example is rewriting the Hello World into Python:

https://github.com/soarwing52/RealsensePython/blob/master/phase%201/Hello%20World.py

import pyrealsense2 as rs

This is the import line for RealSense; all further scripts will use this form.

The pipeline:

For video processing, all the instructions go through a pipeline, so start it and chain the next steps.

pipeline = rs.pipeline()

pipeline.start()

And then the configuration of the pipeline:

config = rs.config()

config.enable_stream()

This sets up which streams of frames to use, not only when using the camera but also when reading from bag files. I mainly use it for aligning frames; I haven't found other uses for it yet.

Parameters:

(stream type, width, height, image format, fps)

Example:

config.enable_stream(rs.stream.depth, 640, 360, rs.format.z16, 30)

config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

The possible stream types can be found in the Intel RealSense Viewer: color, depth, infrared.

One thing I found interesting is that the default format is BGR:

in the Intel Viewer it shows BGR as RGB, which means red appears blue and the colors look wrong to the human eye.

So when I record as rgb8/rgba8 and view it in opencv-python, it is shown wrong again: RGB displayed in BGR order.

And when I use matplotlib, it shows RGB as the default view.

So I need to add one cv2.cvtColor call if I want to show an RGB image correctly in an OpenCV window.

It is not a big problem, just an interesting phenomenon:

https://www.learnopencv.com/why-does-opencv-use-bgr-color-format/

The available formats include RGB/Y16/BGR.
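To make the channel-order issue concrete, here is a minimal numpy sketch (not from the original script): reversing the last axis of an image array swaps B and R, which is the same reordering cv2.cvtColor(img, cv2.COLOR_BGR2RGB) performs.

```python
import numpy as np

# A single "pixel" stored in BGR order, the OpenCV default:
# pure red is (0, 0, 255) because red sits in the last channel.
bgr = np.array([[[0, 0, 255]]], dtype=np.uint8)  # shape (1, 1, 3)

# Reversing the channel axis swaps B and R, giving RGB order.
# cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB) does the same reordering.
rgb = bgr[..., ::-1]

print(rgb[0, 0].tolist())  # [255, 0, 0] -- red is now the first channel
```

matplotlib expects RGB, so a frame recorded as bgr8 displays correctly there only after this swap.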

In order to get the frames one after another, performing as a stream/video, a try block and a while loop are required:

try:

    while True:

        ...

except RuntimeError:

    # no frames came, or the stream ended

finally:

    pipeline.stop()
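To see why the loop sits inside the try, here is a stdlib-only sketch of the same control flow; FakePipeline is a made-up stand-in for rs.pipeline(), raising the same RuntimeError the real wait_for_frames() raises when frames stop arriving (for instance at the end of a bag file).

```python
# Hypothetical stand-in for rs.pipeline(), for illustrating the loop shape only.
class FakePipeline:
    def __init__(self, n_frames):
        self._remaining = n_frames

    def wait_for_frames(self):
        if self._remaining == 0:
            # The real call raises RuntimeError when no frame arrives in time.
            raise RuntimeError("Frame didn't arrive within 5000")
        self._remaining -= 1
        return {"frame": self._remaining}

    def stop(self):
        print("pipeline stopped")

pipeline = FakePipeline(3)
processed = 0
try:
    while True:
        frame = pipeline.wait_for_frames()
        processed += 1
except RuntimeError:
    pass  # no frames came, or the recording ended
finally:
    pipeline.stop()  # always runs, whether we broke out or an error was raised

print(processed)  # 3
```

The while loop never exits on its own; the RuntimeError is the normal end-of-stream signal, and the finally block guarantees the pipeline is stopped either way.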

So, for getting the frames:

frame = pipeline.wait_for_frames()

depth_frame = frame.get_depth_frame()

color_frame = frame.get_color_frame()

This is quite straightforward

The frame-getting options for the pipeline are:

poll_for_frames()

Gets frames immediately, without blocking.

Somehow I can't make it work; it just always replies that no frames are coming, until the end.

wait_for_frames()

It blocks until the next frameset arrives; this caused a matching problem which I will describe later, when I get to the measuring tests.

try_wait_for_frames()

I haven't really gotten to know this one yet.

The same options exist for the syncer, but as I wrote before, I can't get the syncer to work yet.

For the frame queue there are more options:

rs.frame_queue()

This puts the frames into memory first and processes and saves them afterwards. If a lot of frames arrive while streaming, this is the method to prevent dropped frames, since the frame data is stored first.

In this project I didn't need this function; I just came across it while searching.
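The buffering idea behind rs.frame_queue() can be sketched with Python's standard queue module (this illustrates the concept only, not the pyrealsense2 API): a capture thread enqueues frames as they arrive, so a slow consumer doesn't force drops.

```python
import queue
import threading

# A buffer like the one rs.frame_queue() provides: the capture side
# enqueues frames immediately so slow processing doesn't drop them.
frame_queue = queue.Queue(maxsize=16)

def capture(n_frames):
    # Stand-in for the camera thread: push frame ids as fast as they "arrive".
    for i in range(n_frames):
        frame_queue.put(i)
    frame_queue.put(None)  # sentinel: the stream ended

producer = threading.Thread(target=capture, args=(5,))
producer.start()

received = []
while True:
    frame = frame_queue.get()  # blocks until a frame is buffered
    if frame is None:
        break
    received.append(frame)  # slow processing/saving would happen here

producer.join()
print(received)  # [0, 1, 2, 3, 4]
```

The queue decouples capture speed from processing speed; as long as the buffer doesn't fill up, no frame is lost.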

So back to the Hello World sample.

width = depth_frame.get_width()

height = depth_frame.get_height()

dist = depth_frame.get_distance(width // 2, height // 2)

This is getting attributes from the frames.

The options are

get_width()

get_height()

get_stride_in_bytes()

get_bits_per_pixel()

get_bytes_per_pixel()

And for the metadata of the frames:

var = rs.frame.get_frame_number(color_frame)

print('frame number: ' + str(var))

time_stamp = rs.frame.get_timestamp(color_frame)

time = datetime.now()  # needs: from datetime import datetime

print('timestamp: ' + str(time_stamp))

domain = rs.frame.get_frame_timestamp_domain(color_frame)

print(domain)

meta = rs.frame.get_data(color_frame)

print('metadata: ' + str(meta))

Frame number and timestamp are the ones I used; the others I haven't gotten my hands on yet.

get_distance() is only for depth frames, while the others apply to all frame types.

So this gets the basic distance. As for accuracy: even though it can show numbers up to 76,

the specified range of the D435 is 10 meters, and in my tests it was accurate to about ±10 cm within 5-6 m when measuring objects.
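As a sketch of what get_distance() computes for the centre pixel (the array and values below are made up for illustration): a z16 frame stores raw 16-bit units, and multiplying by the device's depth scale, which comes from depth_sensor.get_depth_scale() and is typically 0.001 on the D435 (one unit = 1 mm), yields metres.

```python
import numpy as np

def center_distance_m(depth_raw, depth_scale):
    """Distance at the image centre, in metres, from a raw z16 depth array.

    depth_scale is what depth_sensor.get_depth_scale() reports;
    0.001 is the typical D435 value, i.e. one raw unit = 1 mm.
    """
    h, w = depth_raw.shape
    return float(depth_raw[h // 2, w // 2]) * depth_scale

# Fake 360x640 z16 frame whose centre pixel reads 1500 raw units.
depth_raw = np.zeros((360, 640), dtype=np.uint16)
depth_raw[180, 320] = 1500

print(center_distance_m(depth_raw, 0.001))  # 1.5 -- i.e. 1.5 m
```

A raw value of 0 means the sensor has no reading for that pixel, which is exactly the kind of hole the hole-filling filter mentioned earlier patches.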

key = cv2.waitKey(1)

This is window control: 1 means the window shows for 1 ms (waiting for a key press), and 0 means it waits forever.

if key & 0xFF == ord('q') or key == 27:

    break

This means that when Esc or q is pressed, we break out of the while loop and fall through to the finally block, since the try wraps the loop.
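The key check can be tried without a window at all: cv2.waitKey() returns an int whose low byte is the key code (or -1 when no key was pressed within the timeout), so the masking logic is plain integer arithmetic.

```python
# What the check above evaluates, with the return value of cv2.waitKey()
# simulated as plain ints: the low byte holds the key code.
ESC = 27

def should_quit(key):
    # Masking with 0xFF isolates the low byte before comparing to 'q'.
    return key & 0xFF == ord('q') or key == ESC

print(should_quit(ord('q')))  # True
print(should_quit(27))        # True  (Esc)
print(should_quit(-1))        # False (no key pressed before the timeout)
```

The 0xFF mask matters because on some platforms waitKey() carries extra flag bits in the higher bytes, so comparing the raw value to ord('q') directly can fail.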

A new chapter, a new job

I have been in this town for two weeks now, and on the job for two weeks, so maybe it is time for some first impressions.

As for how the future will turn out, I don't know either.

I don't have any photos yet, so text will have to do for now.

First, an introduction to where I am: the town is called Melle,

a small town of only fifty thousand people.

The office is five minutes from the train station, so most of the employees don't live here.

My company is called Ge-Komm; it presents itself as a community that exists for transportation, meaning we are a group of people who care about everyone's commute.

As for the actual business, at the interview it was presented as two main areas: road networks and sewers.

What they demonstrated back then was an ArcGIS road network, and sewer inspection with something like an endoscope, checking where repairs are needed.

After actually starting, the first week was spent getting to know the various lines of business.

First thing: we only have an ArcGIS Basic license. So sad; I often want to tidy up some data and then the permissions are insufficient.

Also, installing anything on the company computer requires an administrator account, so when I asked whether I could install the English language pack for Office, I was shot down.
Luckily, changing the system language doesn't require admin rights; I took one look at ArcGIS in German and chose death.

The work of the past few days: filling in a table about roads.

The table includes:
whether there is agricultural/forestry use
whether there is everyday public demand
whether there is touristic value: cycle routes, footpaths, and, rather cool, horse-riding routes
and then ecological value

The criteria: using aerial imagery and official footpath and cycle-route data, then deciding in the editor between:
frequently / occasionally / almost never used
as the scale.

Roads next to fields get agricultural use; ones through woods get forestry.
The point is to connect it all into a network, so that heavy machinery that cannot easily turn around has a loop to drive.

After filling in the table comes the most important part: the classification.
Based on the basic data above, the current state is classified
into grades from A to I, and then a recommendation for the road's future role is made.
The main thing is to recommend the surface: asphalt / gravel / unpaved.
So it is mainly about finding out which surface is needed where, saving the government money.

The project areas are mostly in the surrounding region; the farthest I have seen so far is near Cologne.

Mostly towns within a two-hour drive, which is why the projects focus on planning agricultural and forestry roads
and maintaining the various road networks.

For these projects they have to drive out, photograph and survey on site; but I don't have a driver's license yet, so I have only gone along once.

The normal workflow is one person driving alone, recording the actual usage condition of the roads by themselves.

And the speed has to stay below 25 km/h, otherwise the photos blur.

The interesting part is the mud between the fields: without four-wheel drive you need serious driving skill.

So their standard greeting is: did you get stuck today?

The camera is mounted on the car and triggered with a remote control.

Recording bridges/culverts.

This is roughly the kind of terrain they drive.

The computer work is almost entirely filling in tables.

But there was also time to put arcpy to use, because this week everyone was out taking photos.

According to the boss, there are about 10 such projects this season, to be finished on a tight schedule by around October, and after that it is relaxed (a job made for skiing? easy winters).

It is an eye-straining job though, and with a geography background a second-year student could probably do it.

Although you could say it is applying what I studied?

We'll see: accumulate some experience and German skills, and ideally get the company to pay for a driver's license.
————————————————————————————————————————-

Next up: the living situation.

Today I saw the landlord's back garden.

There is a litter of four rabbits, and two dogs.

Question: how many legs in total?

The landlord's mother grows vegetables; the landlord grows trees, and flowers.

It is really the picture of a retired country life.

The house is a big three-story building, though drainage problems keep troubling it.

Overall it is genuinely comfortable: a 20-minute walk to work, or about 8-10 minutes as a morning run.

The other rooms house an elderly couple, a 19-year-old Afghan, and a German.

The old gentleman moved here for medical treatment, to be close to the hospital,

and every day there is a sore throat, fluid in the lungs, leg pain.

It is very sad, and there is nothing one can say; I just keep this 76-year-old grandpa company and chat.

The rest of the time: the German woman is a vegan who bakes her own bread every day, or eats leaves and seeds.

She gets along better with the landlord; on Sundays the landlord comes up to have breakfast together, and even when I understand, I can't get a word in.

————————————————————————————————————-
In short, after two weeks of peaceful living: compared with all the adventures at the old place, this is a house with real warmth.

On the weekend I visited the nearby university town Osnabrück and the city Bielefeld.

That is life at the moment; work will be on track quickly once the Blue Card visa paperwork is sorted out.

Housing is no problem either, 380 a month,

but it is very much a retiree's pace.

And I still don't feel like I have formally started working, maybe because of the probation period, and honestly I would like to go somewhere more challenging.

Besides classifying the base data, I also wrote small scripts to automate the work and installed a QGIS app for field data collection on tablets; next week the working students will report back on how usable it is.

I have a regular routine now, which I quite like so far; compared with the scattered gaps between classes back at university, it is much easier to fit exercise and studying in.

In short, these were the first two weeks of my first job in Germany; let's see whether there will be different impressions after the first full month.