MPEG-2 TS Packet Format


PMT (figure)

PAT (figure)

| Name | Number of bits | 32-bit BE mask | Description |
| --- | --- | --- | --- |
| 4-byte Transport Stream Header | | | |
| Sync byte | 8 | 0xff000000 | Bit pattern of 0x47 (ASCII char ‘G’) |
| Transport Error Indicator (TEI) | 1 | 0x800000 | Set when a demodulator can’t correct errors from FEC data, indicating the packet is corrupt. |
| Payload Unit Start Indicator | 1 | 0x400000 | Set when a PES, PSI, or DVB-MIP packet begins immediately following the header. |
| Transport Priority | 1 | 0x200000 | Set when the current packet has a higher priority than other packets with the same PID. |
| PID | 13 | 0x1fff00 | Packet Identifier, describing the payload data. |
| Scrambling control | 2 | 0xc0 | ’00’ = Not scrambled. For DVB-CSA only: ’01’ (0x40) = Reserved for future use, ’10’ (0x80) = Scrambled with even key, ’11’ (0xC0) = Scrambled with odd key |
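As a quick illustration of how the masks in the table map onto code, here is a minimal C sketch (the function name and buffer handling are my own, not from the spec) that extracts these fields from the first four bytes of a packet:

#include <stdint.h>
#include <stdio.h>

/* Minimal sketch: extract the 4-byte TS header fields from a 188-byte packet. */
void parse_ts_header(const uint8_t *pkt) {
    /* Read the first 4 bytes as a 32-bit big-endian word, then apply the masks above. */
    uint32_t h = ((uint32_t)pkt[0] << 24) | ((uint32_t)pkt[1] << 16) |
                 ((uint32_t)pkt[2] << 8)  |  (uint32_t)pkt[3];

    uint8_t  sync_byte  = (h & 0xff000000) >> 24;  /* should always be 0x47 */
    uint8_t  tei        = (h & 0x800000)   >> 23;
    uint8_t  pusi       = (h & 0x400000)   >> 22;
    uint8_t  priority   = (h & 0x200000)   >> 21;
    uint16_t pid        = (h & 0x1fff00)   >> 8;
    uint8_t  scrambling = (h & 0xc0)       >> 6;

    printf("sync=0x%02x tei=%u pusi=%u prio=%u pid=0x%04x scrambling=%u\n",
           sync_byte, tei, pusi, priority, pid, scrambling);
}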

ref: Link

Capturing Video with the Camera on iOS

- (void)takeVideo {
    UIImagePickerController *camera = [[UIImagePickerController alloc] init];
    NSArray *availableTypes = [UIImagePickerController availableMediaTypesForSourceType:UIImagePickerControllerSourceTypeCamera];
    camera.mediaTypes = availableTypes;
    camera.sourceType = UIImagePickerControllerSourceTypeCamera;
    camera.delegate = self;
    // Don't forget to actually present the picker.
    [self presentViewController:camera animated:YES completion:nil];
}

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    //UIImage *image = info[UIImagePickerControllerOriginalImage];

    // Unlike picking a photo, what the delegate gives us here is the URL of the recorded video:
    // UIImagePickerControllerMediaURL points at a temporary file on disk.
    NSURL *videoURL = info[UIImagePickerControllerMediaURL];

    // Copy the recorded video into the user's photo album, then delete the file from the temporary directory.
    if (videoURL) {
        if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum([videoURL path])) {
            UISaveVideoAtPathToSavedPhotosAlbum([videoURL path], nil, nil, nil);
            [[NSFileManager defaultManager] removeItemAtPath:[videoURL path] error:nil];
        }
    }

    [picker dismissViewControllerAnimated:YES completion:nil];
}

github: https://github.com/lnmcc/Homepwne.git

Setting Up an SSH Proxy on Ubuntu

Ubuntu 14.04

First, a strong complaint about Beijing Mobile's broadband service: it is flaky in all sorts of ways, and the most annoying part is that it blocks VPNs. So I had to find another way to reach Google and the Android developer site; this is a record of the method.

ssh -qTfnN -D 7070  [email protected]

The port number 7070 here can be any free port (> 1024) on your machine.
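To quickly check that the tunnel is up, you can point a SOCKS-aware client at it; for example (the target URL is just an example):

curl --socks5-hostname localhost:7070 https://www.google.com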

Then open the Settings -> Network panel and set the Socks Host to port 7070 on the local machine, i.e. the port on which the SSH tunnel was established in the previous step.

The proxy configured here is system-wide. You can also set more detailed rules in your browser, which saves some proxy traffic. However, I found that Chrome can only use the global proxy; fortunately there is the SwitchySharp extension. Firefox can configure its own proxy.

PS: argument descriptions

| Argument | Comment |
| --- | --- |
| -q | Quiet mode. Causes all warning and diagnostic messages to be suppressed. |
| -T | Disable pseudo-tty allocation. |
| -f | Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm. |
| -n | Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.) |
| -N | Do not execute a remote command. This is useful for just forwarding ports (protocol version 2 only). |
| -D port | Specifies a local “dynamic” application-level port forwarding. This works by allocating a socket to listen to port on the local side, and whenever a connection is made to this port, the connection is forwarded over the secure channel, and the application protocol is then used to determine where to connect to from the remote machine. Currently the SOCKS4 protocol is supported, and ssh will act as a SOCKS4 server. Only root can forward privileged ports. Dynamic port forwardings can also be specified in the configuration file. |
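Since, as noted for -D, dynamic forwardings can also go in the configuration file, here is a minimal ~/.ssh/config sketch (the host alias, hostname, and user name are placeholders):

Host socks-proxy
    HostName your.server.example
    User youruser
    DynamicForward 7070

With that in place, ssh -qTfnN socks-proxy brings up the same tunnel.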

iOS Bitcode Settings

What is bitcode

Bitcode is an intermediate representation of a compiled program. Apps you upload to iTunes Connect that contain bitcode will be compiled and linked on the App Store. Including bitcode will allow Apple to re-optimize your app binary in the future without the need to submit a new version of your app to the store.

Bitcode-related errors

ld: ‘~/test/libtestSDK.a(testSDK.o)’ does not contain bitcode. You must rebuild it with bitcode enabled (Xcode setting ENABLE_BITCODE), obtain an updated library from the vendor, or disable bitcode for this target. for architecture arm64

Bitcode settings

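In Xcode this corresponds to the Enable Bitcode (ENABLE_BITCODE) entry in the target's Build Settings. If you build from the command line, the same setting can also be overridden per invocation; a minimal sketch (project and target names are placeholders):

xcodebuild -project MyApp.xcodeproj -target MyApp ENABLE_BITCODE=NO build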

FFmpeg: Reduce RTSP Latency

You may be able to decrease initial “startup” latency by specifying that I-frames come “more frequently” (or basically always, in the case of x264’s zerolatency setting), though this can increase frame size and decrease quality; see here for some more background. Basically, for typical x264 streams an I-frame is inserted every 250 frames, which means that new clients connecting to the stream may have to wait up to 250 frames before they can start receiving it (or start with old data); increasing I-frame frequency makes the stream larger but might decrease latency. For real-time captures you can also decrease audio latency in Windows dshow by using the dshow audio_buffer_size setting. You can also decrease latency by tuning any broadcast server you are using to minimize latency, and finally by tuning the client that receives the stream not to “cache” any incoming data, since caching increases latency.
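As a rough sketch of both knobs together (the input file and output URL are placeholders), tuning x264 for zero latency and shortening the GOP to 30 frames:

ffmpeg -re -i input.mp4 -c:v libx264 -tune zerolatency -g 30 -f mpegts udp://127.0.0.1:1234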

Sometimes audio codecs also introduce some latency of their own. You may be able to get less latency by using speex, for example, or opus, in place of libmp3lame.
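For example, a hedged one-liner that keeps the video and re-encodes only the audio to Opus (file names are placeholders; this assumes an ffmpeg build with libopus):

ffmpeg -i input.mp4 -c:v copy -c:a libopus -b:a 96k output.mkv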

You will also want to try and decrease latency at the server side, for instance wowza hints.

Also, setting -probesize and -analyzeduration to low values may help your stream start up more quickly (ffmpeg uses these to scan for “streams” in certain muxers, like ts, where some can appear “later”, and also to estimate the duration, which you don’t need for live streams anyway). This should be unneeded for dshow input.
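On the receiving side that might look like this (the stream URL is a placeholder):

ffplay -probesize 32 -analyzeduration 0 rtsp://example.com/stream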

Reducing caching at the client side can help, too; for instance mplayer has a “-nocache” option, and other players may similarly have some type of pre-playback buffering going on. (In reality, mplayer’s -benchmark option has much more effect.)

Using an encoder that encodes more quickly (or possibly even raw format?) might reduce latency.

You might get less latency by using one of the “point to point” protocols described in this document, as well. You’d lose the benefit of having a server, of course.

NB that a client, when it initially starts up, may have to wait until the next i-frame to be able to start receiving the stream (e.g. if receiving over UDP), so the GOP setting (-g), i.e. the i-frame interval, will have an effect on how quickly clients can start streaming (they must receive an i-frame before they start). Setting it to a lower number means the stream will use more bandwidth, but clients will be able to connect more quickly (the default for x264 is 250, so at 30 fps that means an i-frame only once every 10 seconds or so). So it’s a tradeoff if you adjust it. This does not affect actual latency (just connection time), since the client can still display frames very quickly once it has received its first i-frame. Also, if you’re using a lossy transport like UDP, then an i-frame represents the next chance the client will have to repair the stream if there are problems from packet loss.

You can also (if capturing from a live source) increase the frame rate to decrease latency (which affects throughput and also i-frame frequency, of course). This obviously sends packets more frequently (at 5 fps you introduce at least 0.2 s of latency, at 10 fps 0.1 s), but it also helps clients fill their internal buffers, etc. more quickly.

Note also that using dshow’s “rtbufsize” has the unfortunate side effect of sometimes allowing frames to “buffer” while it is waiting on encoding of previous frames, or waiting for them to be sent over the wire. This means that if you use a higher value at all, it can introduce added latency whenever the buffer actually gets used (though, when used, it can be helpful in other ways, like transmitting more frames overall consistently, so YMMV). Almost certainly, if you set a very large value for this and then see the “buffer XX% full! dropping!” message, you are introducing latency.
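For reference, this is where that knob sits in a Windows dshow capture command (the device name is a placeholder); a larger buffer favors not dropping frames over latency:

ffmpeg -f dshow -rtbufsize 100M -i video="Your Capture Device" -c:v libx264 -tune zerolatency -f mpegts udp://127.0.0.1:1234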

There is also apparently an option, -fflags nobuffer, which might help reduce latency, usually when receiving streams.
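For example, on the playback side (the stream URL is a placeholder):

ffplay -fflags nobuffer rtsp://example.com/stream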

Testing latency

By default, ffplay (as a receiver for testing latency) introduces significant latency of its own, so if you use it for testing (see the troubleshooting section) it may not reflect latency accurately. FFplay also introduces some video artifacts; see the notes about it in the “troubleshooting streaming” section. Some of the settings mentioned above, like “probesize”, might help it start more quickly.

mplayer with its -benchmark option is useful for testing latency (-noaudio and/or -nocache might also be useful, though I haven’t found -nocache to provide any latency benefit; it might work for you).
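For example (the stream URL is a placeholder):

mplayer -benchmark -noaudio -nocache rtsp://example.com/stream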

Using the SDL output option while using FFmpeg to receive the stream might also help to view frames with less client-side latency: ffmpeg … -f sdl “window title” (this works especially well with -fflags nobuffer, though in my tests it still has barely more latency than mplayer -benchmark). This doesn’t have a “drop frames if you run out of cpu” option, so it can get quite far behind at times (introducing more latency, variably).
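A slightly fuller sketch of that command (the input URL is a placeholder; assumes an FFmpeg build with the SDL output device):

ffmpeg -fflags nobuffer -i rtsp://example.com/stream -pix_fmt yuv420p -f sdl "low latency preview"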

Another possibly useful receiving client is “omxplayer -live”

See also “Point to point streaming” section esp. if you use UDP etc.

See also

Here is a list of some other ideas to try (using VBR may help, etc.)

ref: link