Xiaomi Yi - Genlock Multiple Cameras

It doesn't look real. If I had an e-shop with this kind of equipment, I would definitely use real photos of the rig, not just a 3D model.

Anyway, since I don't know (as I have already written) how Bluetooth works, and whether there is some kind of "back-checking" communication between devices, I had another (most probably stupid) idea about genlock. What about changing the BT MAC address of the cameras so they are identical? Pairing would be done with just one of them, but since they share the same MAC, it could theoretically be possible to control both at the same time later. Of course, a problem would arise if there is some other communication, something like "BT remote ON pressed > (sends signal to the camera to start recording) > camera starts to record > camera sends back some confirmation to the BT remote". If there is this kind of reply from the slave device, then it's clearly not possible to solve it like this, but if the communication proceeds without any feedback from the slave device, there is a theoretical chance of success. So, is there any way to change the BT MAC address of a camera? Does anybody have a clue how to do it?
 
In a switched network, duplicate MACs would make the switch talk to whichever device most recently sent a reply, i.e. one but not both. I'd guess BT is similar, but I have no certain knowledge on the subject.
 
Tobias Chen, did you ever get anywhere with the electronic genlocking? I tried Andy's script with the wireless router sitting right next to the cameras, and I'm getting sync delays between 20 and 200 ms, which is far worse than staying within 1 frame. I need the shutters to fire as close to each other in time as possible.
 
OK, I managed to do the NTP syncing. Now I need to send a specific time to all the cameras and have them all start recording at that time. Does anyone have any idea how to do it?
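One way to sketch the "start at an agreed time" idea: once the clocks are NTP-synced, every camera can compute the same future timestamp on its own and wait for it. This is a minimal host-side sketch; `trigger_record` is a hypothetical placeholder for whatever actually starts capture on your setup (it is not a real camera API).

```python
import time

def wait_until(target_epoch: float) -> None:
    """Coarse-sleep until ~5 ms before the target, then busy-wait
    the rest for better-than-sleep() precision."""
    remaining = target_epoch - time.time()
    if remaining > 0.005:
        time.sleep(remaining - 0.005)
    while time.time() < target_epoch:
        pass  # spin for the last few milliseconds

def trigger_record() -> None:
    # hypothetical stand-in for whatever actually starts capture
    # on the camera (e.g. issuing "t app key record" over telnet)
    print("record!")

# every camera can compute the same target independently, e.g. the
# start of the next full minute on its (NTP-synced) local clock:
target = (int(time.time()) // 60 + 1) * 60
# wait_until(target); trigger_record()
```

Even with this, the trigger path inside the camera adds its own latency, so it only bounds the start offset, it doesn't genlock the sensors.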
 
Not directly related to your current problem, but might there be a third-party app/desktop app that handles this?
Maybe look at the GoPro/SJCAM/... apps and desktop apps, since those cameras mostly share 90% of the same specs.
 
And related to your problem:
I think the Yis are connected via a cable.

JUMP_16x_Cam_Rig_ISO_View_1_large.JPG

If you look closely, the Yis are facing down, where the cable ports are, and maybe that's where the cables are connected?
 
Now that I have the camera clocks synced, correct me if I'm wrong, but it seems like I have to reverse-engineer how camera recording is triggered and figure out how to schedule it. I found a medley of scripts on the camera's filesystem in /usr/local/share/script/, along with a few hefty binaries, copied them to my computer and am inspecting them. I think this is all the custom code on the camera, aside from the drivers. The first major problem I see is that the filesystem is read-only the way I access it: I just create an empty enable_info_display.script file on the camera's card and telnet into the camera on port 23... Again, suggestions welcome. How can I write to the filesystem? Maybe, instead, I can schedule the capture directly from Andy's script (the NTP sync is done from there), but how?
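For the read-only filesystem, the usual busybox trick is a rw remount; whether this particular firmware allows it is an open question, so treat the command sequence below as an assumption to verify, not a known-working recipe (and `test_write` is just an illustrative filename). The telnet part is a bare-socket sketch:

```python
import socket

def make_remount_cmds() -> list:
    # Hypothetical sequence: remount root rw, do the write, go back to ro.
    # Whether the camera's busybox accepts a rw remount of its root
    # filesystem is an assumption that needs verifying on the device.
    return [
        "mount -o remount,rw /",
        "touch /usr/local/share/script/test_write",
        "mount -o remount,ro /",
    ]

def send_cmds(host: str, cmds, port: int = 23, timeout: float = 5.0) -> None:
    """Push newline-terminated shell commands to the camera's telnet port.
    Assumes the camera's telnetd tolerates a bare socket with no option
    negotiation (true of many busybox builds, but worth verifying)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        for cmd in cmds:
            s.sendall((cmd + "\n").encode())
```

If the root remount fails, the SD card (mounted rw) is the fallback, which is exactly why the autoexec.ash approach lives there.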
 
Greg_K, you're by far the person that has gotten closest to genlocking these. Did you get any results? It's sad that almost at the doors of heaven you stalled with shooting the cameras, but shouldn't that be the easy part? I mean, as you said, these scripts are already able to control the triggering, no? Maybe a silly way to trigger for testing: if the clocks are matched, have the cameras start at a time specified in the script, like 15:00:00:000. I'm really pissed to see that the GoPro 4 will never get genlock, as the Hero 5 is already out. It's funny, because GoPro has recently announced a genlocked GoPro 4 6-cam rig at a ridiculous price. They even have a preorder for a super-ridiculous ~20-camera GoPro 4 rig that obviously will be genlocked. But these suckers do not release any firmware update... The price of the Yi 2 is incredible for what it delivers; genlock is the only thing I miss.
 
Pableras, sorry to disappoint you, but my tests revealed there is no way to reliably sync up the cameras with the scripts. One issue is that there is no way to ensure the scripts or code that launch the cameras actually fire at the same moment, even after the cameras have been time-synchronized to a central server: you get time offsets of over 80 ms, which is critical for capture at 60fps. The second issue, an even more important one, is that the camera capture rates drift apart after the cameras have been turned on for some time. This may be related to the cameras heating at different rates. In any case, you get huge time offsets quickly: for me it was an 8-30 frame offset at 60fps within the first 10 minutes, and almost none of the 8 cameras I initially synced up continued to capture in sync.

The only way to truly sync them is to genlock them, I believe. Perhaps try USB3 cameras with a computer, which you can sync up with a software cyclic barrier during capture? I've switched over to Tara Stereo USB3 cams (https://www.e-consystems.com/3D-USB-stereo-camera.asp).
 
Ouu. Tx for the quick answer. What a shame. For testing I bought a Lumix W3 very cheap. The quality is terrible; the video is like glitchy VHS. WTF! I think I'll go with the GoPro 3+, as used prices are really dropping and genlock seems to work OK with those. I want to build a stereo cam for point-cloud reconstruction. Can I ask what you use yours for? XD
I wanted to have 4K 30fps just in case; otherwise I will go with 4K 15fps or the 2fps 12-megapixel mode. I hope I'll have enough data with these modes.
 
If the resolution of the Tara cam I pointed out is not enough, I still highly recommend using an existing stereo cam instead of rolling your own. Check out this one: https://www.stereolabs.com/zed/specs/. It's only 1080p, not 4K like you mentioned, but it's better than the Tara, it'll save you a lot of trouble calibrating the camera, and it also comes with a nice SDK. Stereo calibration is not that big of an issue if you know how to do it, but if you do it on two cameras like GoPros, you risk running into issues such as vertical disparity (when building a horizontal stereo cam, for instance). Just a tiny vertical offset between the optical axes of the cameras, even on the order of fractions of a millimeter, will wreak havoc in your stereo reconstruction result, since that much physical distance translates to a large number of pixels on the camera sensor, especially with a wide lens.

Trust me, I know how you're thinking, that's what I thought too when I tried homegrown stereo with the Xiaomi.
As to my purpose, I'm doing research on generic models for tracking skeletal organisms in 3D (as in, tracking their joints).
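To put the "fractions of a millimeter" claim above in numbers, here is a back-of-envelope calculation; the 1.5 µm pixel pitch is an illustrative assumption for a small, dense action-cam sensor, not the Yi's actual spec:

```python
# Back-of-envelope: how many sensor pixels a tiny physical misalignment spans.
# The 1.5 um pixel pitch is an illustrative assumption, not the Yi's spec.
PIXEL_PITCH_MM = 0.0015   # 1.5 micrometers per pixel

def offset_in_pixels(offset_mm: float, pitch_mm: float = PIXEL_PITCH_MM) -> float:
    return offset_mm / pitch_mm

# a 0.1 mm vertical offset between the optical axes:
print(round(offset_in_pixels(0.1)))  # -> 67 pixels of vertical disparity
```

So a tenth of a millimeter of mechanical slop already costs tens of pixels of vertical disparity, which is why stereo matchers that assume epipolar-aligned rows fall apart.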
 

You made me laugh with the "Trust me, I know how you're thinking," part XD.

Yeah, that ZED cam I saw a couple of days ago, and I was like OMG!! A 2.2K stereo rig at an affordable price! Take my money. And later, when I saw the example material, I was like umm, aaah, not sure...

When I saw they claimed a 20 m range I was truly excited; when I saw that it is more like 5 m I was like mmmm.

Watching the videos, it feels like they're trying to show the results quickly, without getting into much detail. I wonder if the poor accuracy of the clouds is due to the realtime processing or to the image quality. But since it's a prebuilt rig, I can ask for the camera params and a sample video and give it a go in a SLAM package to see what it's able to do when used for mapping.

My idea is to attach it (or them) to a car and build a sparse cloud (it's more for camera poses). Then export each camera pose to a Structure from Motion package and there build a dense point cloud to use as a reference for modelling.

In their vid:
The reconstruction of the road is more what I would expect from an 800x600 cam or something like that.

No sample images/videos/point clouds makes me a little suspicious, but I'll give it another chance.


Also, they claim it can be used with a phone. I wonder how the hell they transfer the data stream to the phone via USB 2; I wonder if the bandwidth can keep up with the data stream. I only need to record, not process; I'll do everything on the desktop rig.

While I admit you scared me with the alignment issues, I also saw this project a while ago:
http://projectvideoscanner.blogspot.com.es/

Which is a guy reconstructing stuff with webcams that are not even genlocked, just triggered at the same time, LOL.
 

Well, as I said, with webcams and any kind of USB camera you don't need to genlock, because you basically control the capture time of each frame via software. Also, webcams tend to have a much narrower field of view, a bigger sensor size, and a much lower resolution, which makes them easier to calibrate and makes a physical vertical offset less important. The Xiaomi stock lens, on the other hand, is a compound 150-degree (diagonal) fisheye, which makes calibrating it particularly hard, and it has all sorts of distortions. Add to that the small, extremely dense sensor, and you're potentially in for some trouble. I'm not saying you shouldn't do it or that it's impossible; it just might be really hard (as in, months of work) and you're not guaranteed a result. If you do proceed with the Xiaomi or GoPro, don't repeat my mistake: place the cameras in the same vertical orientation, don't turn one upside-down. Even though it seems like the lenses are aligned that way, they really aren't. I know putting them both upright side-by-side will increase the interpupillary distance and potentially make stereo matching harder. But the vertical offset, on the other hand, will make it friggin' impossible.
I've been thinking of making a little 3D-printed rig for the Xiaomis which can adjust vertical placement very precisely with a nut and bolt, but the idea probably isn't worth my time right now.
A few tips on calibration, if you actually do it: if you can, use MATLAB's corner detection on your calibration board, not the OpenCV one. The MATLAB one is based on a 2014 "calibration from a single shot" algorithm and is significantly better. However, after you get the MATLAB corners from ~100 images, save them as a CSV, read them back, and do the actual calibration with OpenCV 3.1. This will give you the lowest reprojection error. If you run into issues (like a really high reprojection error), it probably means the detected corner order got flipped in some of the images, in which case you'll have to figure out how to reorder the "bad" ones. I wrote Python scripts to do all this basically in one fell swoop.
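The author's actual scripts aren't shown, but the CSV hand-off and the flipped-corner fix might look something like this sketch. The nearest-corner heuristic in `fix_flipped` assumes the board stays in roughly the same image region between shots, which is an assumption, not part of the workflow described above:

```python
import csv

def save_corners(path, corners):
    """Write one image's detected corners, in detection order, as x,y rows."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(corners)

def load_corners(path):
    """Read the corners back as (x, y) float tuples, preserving order."""
    with open(path, newline="") as f:
        return [(float(x), float(y)) for x, y in csv.reader(f)]

def fix_flipped(corners, reference):
    """If an image's corner order is reversed relative to a reference image
    (its first corner lands nearer the reference's *last* corner), reverse
    it so every image uses a consistent ordering."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    if d2(corners[0], reference[0]) > d2(corners[0], reference[-1]):
        return corners[::-1]
    return corners
```

After normalizing the ordering per image, the corner lists can be fed to OpenCV's calibration as object/image point pairs.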
 

Super duper. I'll try to gather material from the ZED and give it a run in RTAB-Map. You definitely scared me enough XD.

If the ZED doesn't fit my needs, instead of the custom rig I'll probably go with SfM + GPS instead of SLAM. If I ever manage to get something, I'll post.
 
Align-frames method: starts recording at the beginning of the next second (millisecond-accurate). It still requires syncing in post to align the recording start times, but it reduces the millisecond offset between the shutters.

I've used this to reduce my recording frame rate from 60fps to 30fps on multi-camera moving shots.

Example: https://www.dropbox.com/s/17k5umd7vy0692f/Multi.Cam.30fps.mp4?dl=0
Please excuse the stupid arm movements. lol

Code:
#autoexec.ash
...
# bring up wifi in station mode, then sync and align the clock
lu_util exec '/tmp/fuse_d/wifi/sta.sh'
sleep 1
lu_util exec '/tmp/fuse_d/wifi/get_time.sh'
# simulate a press of the record button
t app key record

Code:
#get_time.sh
# one-shot NTP sync against a pool server (quits when done)
ntpd -d -n -q -p 0.pool.ntp.org
# pull the microsecond field out of the adjtimex output and keep its
# first four digits (i.e. tenths of a millisecond into the current second)
USEC=$(adjtimex | tail -3 | head -2 | awk '{print $2}' | tail -1 | cut -c1-4)
# strip leading zeros so the shell arithmetic doesn't parse the value as octal
USEC=$(echo "$USEC" | sed 's/^0*//')
USEC=${USEC:-0}
# sleep for the remainder of the current second, so recording starts on the tick
sleep ".$((9999 - USEC))"
 
Nice post, going to try this out with the Brahma Xiaomi 360 app :)

And then try Kolor to see how the sync is :)
 
Hi Armdromeda,
Many thanks for sharing your findings. I wanted to test the "align frames method", but I was not able to import your code snippets into my autoexec.ash without losing my other settings (1600x1200, 60fps, 35mbps, station-mode wifi). Can you (or anybody else) help me, please? I've attached my files.
Thanks in advance!
 

Attachments

  • XiaomiYi_Scripts_20161013.zip (2.3 KB)