- 25 Feb, 2015 6 commits
-
Better management of blacklisted playlists
ojw28 committed -
Note: I'm fairly confident that NetworkLoadable.Parser implementations can live without the inputEncoding being specified. But not completely 100%... Issue: #311 Issue: #56
Oliver Woodman committed -
The return value here assumed that the time being searched for was beyond the start time of the last segment. This fix also handles the case where the time is prior to the start of the first segment.
Oliver Woodman committed -
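The fix described above can be sketched as a binary search over segment start times that clamps at both ends. This is an illustrative sketch under assumed names, not ExoPlayer's actual implementation:

```java
import java.util.Arrays;

// Illustrative sketch: a segment lookup that clamps at both ends, handling
// times prior to the start of the first segment as well as times beyond
// the start time of the last segment.
public final class SegmentIndexSketch {

  /** Returns the index of the segment whose window contains timeUs. */
  public static int getSegmentIndex(long[] segmentStartTimesUs, long timeUs) {
    int index = Arrays.binarySearch(segmentStartTimesUs, timeUs);
    if (index >= 0) {
      return index; // Exact match on a segment start time.
    }
    int insertionPoint = -(index + 1);
    if (insertionPoint == 0) {
      return 0; // Time is prior to the start of the first segment.
    }
    // Otherwise the containing segment is the one starting just before
    // timeUs, clamped to the last segment.
    return Math.min(insertionPoint - 1, segmentStartTimesUs.length - 1);
  }
}
```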
The only downside of this change is that MediaPresentationDescriptionParser is no longer stateless.
Oliver Woodman committed -
This issue didn't have any material impact on playbacks, but fixing it anyway to be technically correct.
Oliver Woodman committed -
1. Clear prefixFlags when a NAL unit is found. 2. continueBuffering should return true if loading is finished.
Oliver Woodman committed
-
- 23 Feb, 2015 4 commits
-
Calling clearStaleBlacklistedPlaylist within getNextVariantIndex method.
J. Oliva committed -
Clear the stale blacklist in getChunkOperation before getting the next variant. This ensures: 1. Resilience to failures: the player always looks for a working playlist, so that playback doesn't stop. 2. High-quality blacklisted playlists can be reused if they come back up after a failure, so the player always tries to provide the best user experience.
J. Oliva committed -
- Method evaluatePlayListBlackListedTimestamps renamed to clearStaleBlacklistedPlaylists - Code formatted to be consistent with style elsewhere.
J. Oliva committed -
Added an expiration time field to blacklisted playlists, to allow ExoPlayer to continue playback when playlists that failed have recovered from a bad state. In live environments, it sometimes happens that the primary encoder stops working for a while. In those cases, the HLS failover mechanism in the player should detect the situation and switch to playlists served by the backup encoder (if one exists). This was handled well before these changes. However, to ensure a playback experience that can recover from temporary issues, we cannot blacklist a playlist forever. When streaming live events over HLS, it is quite typical for the player to need to switch from primary to backup playlists, and back again, from time to time to keep playback working while temporary network/encoder issues occur. Most of these issues are recoverable, so what I have implemented is a mechanism that makes a blacklisted playlist available again after a while (60 seconds). Evaluation of this algorithm should happen only when something fails: if the player is working with a backup playlist, it shouldn't switch back to the primary one unless something fails.
J. Oliva committed
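The expiring-blacklist idea above can be sketched as follows, assuming the 60-second duration mentioned in the commit message. Class and method names here are hypothetical, not ExoPlayer's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of time-based playlist blacklisting with expiry.
public final class PlaylistBlacklistSketch {

  private static final long BLACKLIST_DURATION_MS = 60_000;

  // Maps variant index to the time at which it was blacklisted.
  private final Map<Integer, Long> blacklistTimesMs = new HashMap<>();

  public void blacklist(int variantIndex, long nowMs) {
    blacklistTimesMs.put(variantIndex, nowMs);
  }

  /**
   * Removes entries whose blacklist period has elapsed. Intended to be
   * invoked only on failure, before selecting the next variant, so that a
   * working backup playlist isn't abandoned while playback is healthy.
   */
  public void clearStaleBlacklistedPlaylists(long nowMs) {
    blacklistTimesMs.values().removeIf(
        blacklistedAtMs -> nowMs - blacklistedAtMs >= BLACKLIST_DURATION_MS);
  }

  public boolean isBlacklisted(int variantIndex) {
    return blacklistTimesMs.containsKey(variantIndex);
  }
}
```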
-
- 20 Feb, 2015 2 commits
-
Support is provided for the following schemes: urn:mpeg:dash:utc:direct:2012 urn:mpeg:dash:utc:http-iso:2014 urn:mpeg:dash:utc:http-xsdate:2012 urn:mpeg:dash:utc:http-xsdate:2014
Oliver Woodman committed -
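The http-xsdate schemes above involve fetching a document containing an xs:dateTime value and using it to correct the device clock. A minimal sketch of that correction step, with hypothetical names; real xs:dateTime values may also carry fractional seconds or zone offsets, which this basic form ignores:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;
import java.util.TimeZone;

// Hypothetical sketch: parse a basic UTC xs:dateTime string and compute
// the offset between the server clock and the device clock.
public final class UtcTimingSketch {

  /** Returns serverTimeMs - deviceTimeMs for a basic UTC xs:dateTime. */
  public static long resolveOffsetMs(String xsDateTime, long deviceTimeMs) {
    SimpleDateFormat format =
        new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss", Locale.US);
    format.setTimeZone(TimeZone.getTimeZone("UTC"));
    try {
      return format.parse(xsDateTime).getTime() - deviceTimeMs;
    } catch (ParseException e) {
      throw new IllegalArgumentException("Bad xs:dateTime: " + xsDateTime, e);
    }
  }
}
```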
Oliver Woodman committed
-
- 19 Feb, 2015 3 commits
-
- Data needs to be unescaped before it's passed to SeiReader. - SeiReader should loop over potentially multiple child messages. - I also changed the sample passed to the EIA-608 renderer so that it's the entire sei message payload. The first 8 bytes are unnecessary, but it seems nicer conceptually to do it this way. Issue: #295
Oliver Woodman committed -
Issue with regular expression for CODECS attribute
ojw28 committed -
The previous regular expression for extracting codec information was wrong: given a line that defines a variant, it captured everything from the "CODECS=" text to the end of the line (including information about RESOLUTION or alternate rendition groups as part of the CODECS field). This doesn't cause a functional problem (at least none known to me), but it does leave the codecs field storing information unrelated to the codec.
J. Oliva committed
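An illustrative fix for the problem described above (not necessarily the exact regex used in the actual change): match only the quoted CODECS value, rather than greedily capturing everything to the end of the line past RESOLUTION and other attributes.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of quoted-value extraction for the CODECS attribute
// of an #EXT-X-STREAM-INF variant line.
public final class CodecsAttributeSketch {

  // The reluctant (.+?) stops at the first closing quote, so commas inside
  // the quoted codec list are captured while trailing attributes are not.
  private static final Pattern CODECS_PATTERN =
      Pattern.compile("CODECS=\"(.+?)\"");

  /** Returns the CODECS attribute value from a variant line, or null. */
  public static String parseCodecs(String line) {
    Matcher matcher = CODECS_PATTERN.matcher(line);
    return matcher.find() ? matcher.group(1) : null;
  }
}
```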
-
- 18 Feb, 2015 5 commits
-
ojw28 committed
-
ojw28 committed
-
Oliver Woodman committed
-
Some extractor implementations underneath MediaExtractor require a seekTo call after tracks are selected to ensure samples are read from the correct position. De-duplicating logic was preventing this from happening in some cases, causing issues like: https://github.com/google/ExoPlayer/issues/301 Note that seeking all tracks as a side effect of track selection sucks if you already have one or more tracks selected, because it introduces discontinuities to the already selected tracks. However, in general, it *is* necessary to specify the position for the track being selected, because the underlying extractor doesn't have enough information to know where to start reading from. It can't determine this based on the read positions of the already selected tracks, because the samples in those tracks might be very sparse with respect to time. I think a more optimal fix would be to change the SampleExtractor interface to receive the current position as an argument to selectTrack. For our own extractors, we'd seek the newly selected track to that position, whilst the already enabled tracks would be left in their current positions (if possible). For FrameworkSampleExtractor we'd still have no choice but to call seekTo on the extractor to seek all of the tracks. This solution ends up being more complex though, because: - The SampleExtractor then needs a way of telling DefaultSampleSource which tracks were actually seeked, so that the pendingDiscontinuities flags can be set correctly. - It's a weird API that requires the current playback position in order to seek only the track being enabled. So it may not be worth it! I think this fix is definitely good for now, in any case. Issue: #301
Oliver Woodman committed -
Oliver Woodman committed
-
- 17 Feb, 2015 2 commits
-
Issue: #289
Oliver Woodman committed -
- This change: 1. Extracts HlsExtractor interface from TsExtractor. 2. Adds AdtsExtractor for AAC/ADTS streams, which turned out to be really easy. Selection of the ADTS extractor relies on seeing the .aac extension. This is at least guaranteed not to break anything that works already (since no-one is going to be using .aac as the extension for something that's not elementary AAC/ADTS). Issue: #209
Oliver Woodman committed
-
- 16 Feb, 2015 2 commits
-
Oliver Woodman committed
-
Oliver Woodman committed
-
- 13 Feb, 2015 9 commits
-
Oliver Woodman committed
-
Oliver Woodman committed
-
Oliver Woodman committed
-
This prevents excessive memory consumption when switching to very high bitrate streams. Issue: #278
Oliver Woodman committed -
I think this is the limit of how far we should be pushing complexity vs. efficiency. It's a little complicated to understand, but probably worth it, since the H264 bitstream is the majority of the data. Issue: #278
Oliver Woodman committed -
Use of Sample objects was inefficient for several reasons: - Lots of objects (1 per sample, obviously). - When switching up bitrates, there was a tendency for all Sample instances to need to expand, which effectively led to our whole media buffer being GC'd as each Sample discarded its byte[] to obtain a larger one. - When a keyframe was encountered, the Sample would typically need to expand to accommodate it. Over time, this would lead to a gradual increase in the population of Samples that were sized to accommodate keyframes. These Sample instances were then typically underutilized whenever recycled to hold a non-keyframe, leading to inefficient memory usage. This CL introduces RollingBuffer, which tightly packs pending sample data into byte[]s obtained from an underlying BufferPool, which fixes all of the above. There is still an issue where the total memory allocation may grow when switching up bitrate, but we can easily fix that from this point, if we choose to restrict the buffer based on allocation size rather than time. Issue: #278
Oliver Woodman committed -
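The tight-packing idea behind RollingBuffer and BufferPool can be sketched as follows. This is an illustrative toy: the 64 KiB allocation size is an assumption, and the read/recycle path of the real classes is omitted.

```java
import java.util.ArrayDeque;

// Toy sketch: sample data is appended across fixed-size pooled
// allocations instead of one growable byte[] per sample, so allocations
// stay fully utilized regardless of sample size.
public final class RollingBufferSketch {

  private static final int ALLOCATION_SIZE = 64 * 1024;

  private final ArrayDeque<byte[]> pool = new ArrayDeque<>();
  private final ArrayDeque<byte[]> dataQueue = new ArrayDeque<>();
  private int lastAllocationOffset = ALLOCATION_SIZE;

  /** Appends sample data, packing it tightly across allocations. */
  public void append(byte[] sampleData, int offset, int length) {
    while (length > 0) {
      if (lastAllocationOffset == ALLOCATION_SIZE) {
        // Current allocation is full; obtain another from the pool.
        byte[] allocation =
            pool.isEmpty() ? new byte[ALLOCATION_SIZE] : pool.pop();
        dataQueue.add(allocation);
        lastAllocationOffset = 0;
      }
      int copyLength = Math.min(length, ALLOCATION_SIZE - lastAllocationOffset);
      System.arraycopy(sampleData, offset, dataQueue.peekLast(),
          lastAllocationOffset, copyLength);
      lastAllocationOffset += copyLength;
      offset += copyLength;
      length -= copyLength;
    }
  }

  /** Number of allocations currently holding pending data. */
  public int allocationCount() {
    return dataQueue.size();
  }
}
```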
Oliver Woodman committed
-
Oliver Woodman committed
-
end of some streams.
Oliver Woodman committed
-
- 12 Feb, 2015 5 commits
-
- Remove TsExtractor's knowledge of Sample. - Push handling of Sample objects into SampleQueue as much as possible. This is a precursor to replacing Sample objects with a different type of backing memory. Ideally, the individual readers shouldn't know how the sample data is stored. This is true after this CL, with the exception of the TODO in H264Reader. - Avoid double-scanning every H264 sample for NAL units, by moving the scan for SEI units from SeiReader into H264Reader. Issue: #278
Oliver Woodman committed -
The complexity around not enabling the video renderer before it has a valid surface is because MediaCodecTrackRenderer supports a "discard" mode where it pulls through and discards samples without a decoder. This mode means that if the demo app were to enable the renderer before supplying the surface, the renderer could discard the first few frames prior to getting the surface, meaning video rendering wouldn't happen until the following sync frame. To get a handle on complexity, I think we're better off just removing support for this mode, which nicely decouples how the demo app handles surfaces vs. how it handles enabling/disabling renderers.
Oliver Woodman committed -
Reordering in the extractor isn't going to work well with the optimizations I'm making there. This change moves sorting back to the renderer, although keeps all of the renderer simplifications. It's basically just moving where the sort happens from one place to another.
Oliver Woodman committed -
I'm not really a fan of micro-optimizations, but given this method scans through every H264 frame in the HLS case, it seems worthwhile. The trick here is to examine the first 7 bits of the third byte first. If they're not all 0s, then we know that we haven't found a NAL unit, and also that we won't find one at the next two positions. This allows the loop to increment 3 bytes at a time. Speedup is around 60% on ART according to some ad-hoc benchmarking.
Oliver Woodman committed -
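The stride-3 trick described above can be sketched as follows (illustrative method, not the actual ExoPlayer code):

```java
// A NAL unit start code is the byte sequence 0x00 0x00 0x01. If the byte
// at i + 2 is neither 0x00 nor 0x01 (i.e. any of its top 7 bits is set),
// then no start code can begin at i, i + 1 or i + 2, so the scan can
// safely advance 3 bytes at a time.
public final class NalUnitSearchSketch {

  /** Returns the index of the first start code in data[start, limit), or -1. */
  public static int findNalStartCode(byte[] data, int start, int limit) {
    int i = start;
    while (i <= limit - 3) {
      if ((data[i + 2] & 0xFE) != 0) {
        i += 3; // Third byte rules out a start code at i, i + 1 and i + 2.
      } else if (data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1) {
        return i;
      } else {
        i++;
      }
    }
    return -1;
  }
}
```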
There's no code change here at all, except for how TsExtractor's getLargestSampleTimestamp method works.
Oliver Woodman committed
-
- 11 Feb, 2015 1 commit
-
1. AdtsReader would previously copy all data through an intermediate adtsBuffer. This change eliminates the additional copy step, and instead copies directly into Sample objects. 2. PesReader would previously accumulate a whole packet by copying multiple TS packets into an intermediate buffer. This change eliminates this copy step. After the change, TS packet buffers are propagated directly to PesPayloadReaders, which are required to handle partial payload data correctly. The copy steps in the extractor are simplified from: DataSource->Ts_BitArray->Pes_BitArray->Sample->SampleHolder To: DataSource->Ts_BitArray->Sample->SampleHolder Issue: #278
Oliver Woodman committed
-
- 10 Feb, 2015 1 commit
-
- TsExtractor is now based on ParsableByteArray rather than BitArray. This makes it much clearer that, for the most part, data is byte aligned. It will allow us to optimize TsExtractor without worrying about arbitrary bit offsets. - BitArray is renamed ParsableBitArray for consistency, and is now exclusively for bit-stream level reading. - There are some temporary methods in ParsableByteArray that should be cleared up once the optimizations are in place. Issue: #278
Oliver Woodman committed
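The byte-aligned vs bit-level split described above can be sketched roughly as follows. These are illustrative miniatures, far simpler than the real ParsableByteArray/ParsableBitArray:

```java
// Two tiny readers over a byte[]: one whose position advances in whole
// bytes (for mostly byte-aligned container data such as TS), and one
// whose position advances in bits (for bitstream-level parsing).
public final class ParsableArraysSketch {

  /** Byte-aligned reading: position advances in whole bytes. */
  public static final class ByteReader {
    private final byte[] data;
    private int position;

    public ByteReader(byte[] data) { this.data = data; }

    public int readUnsignedByte() { return data[position++] & 0xFF; }

    public int readUnsignedShort() {
      return (readUnsignedByte() << 8) | readUnsignedByte();
    }
  }

  /** Bit-level reading: position advances in bits, MSB first. */
  public static final class BitReader {
    private final byte[] data;
    private int bitPosition;

    public BitReader(byte[] data) { this.data = data; }

    public int readBits(int numBits) {
      int result = 0;
      for (int i = 0; i < numBits; i++) {
        int bit = (data[bitPosition / 8] >> (7 - (bitPosition % 8))) & 1;
        result = (result << 1) | bit;
        bitPosition++;
      }
      return result;
    }
  }
}
```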
-