Some problems with DirectShow Filter.
Posted: 16 January 2010 01:15 PM
Jr. Member
Total Posts:  49
Joined  2010-01-15

Alex, or whoever is implementing the DirectShow filter:

I don’t know whether you are using the IMemAllocator interface to allocate IMediaSamples or something else, but here is a problematic scenario:

- suppose I have a graph with the PS3Eye Camera as the source filter;
- connected immediately to it is my own renderer, which accepts your MEDIASUBTYPE_RGB32.

As my renderer might be relatively slow, it is interested in buffering the source samples coming from your output pin.
The standard approach, which works with all other source filters without any problems, is:
- in my implementation of DoRenderSample(IMediaSample *pSample) I simply place the pSample pointer into some “buffer”, perform AddRef() on it, and return immediately. Another thread then processes the collected IMediaSample pointers in that “buffer”, as sketched below.
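A minimal sketch of that pattern (assuming a worker thread drains the queue; CCritSec and CAutoLock come from the DirectShow base classes, and the class name is mine, not from either filter):

#include <deque>
#include <streams.h>    // DirectShow base classes: IMediaSample, CCritSec, CAutoLock

class SampleQueue
{
    CCritSec m_lock;                    // guards the queue
    std::deque<IMediaSample*> m_queue;  // each entry holds one AddRef'ed sample

public:
    // Called from DoRenderSample: take a reference and return immediately.
    void Push(IMediaSample *pSample)
    {
        pSample->AddRef();
        CAutoLock lock(&m_lock);
        m_queue.push_back(pSample);
    }

    // Called from the worker thread; the caller must Release() the
    // sample when it is done with it.
    IMediaSample *Pop()
    {
        CAutoLock lock(&m_lock);
        if (m_queue.empty())
            return NULL;
        IMediaSample *pSample = m_queue.front();
        m_queue.pop_front();
        return pSample;
    }
};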

So, when playing with your PS3Eye Camera source filter, I discovered that, contrary to other implementations, if I don’t AddRef() pSample the program doesn’t crash, as it does with any other capture filter. On one hand that might be great (LOL). On the other hand, it tells me there is some problem with your IMediaSample allocator implementation.

It looks like you might be reusing one and the same IMediaSample pointer (or maybe several, but a small fixed number). That doesn’t look right: in that scenario your source filter is actually updating the very buffer that another thread in a downstream filter is, for example, still visualizing. Moreover, such an implementation defeats the purpose of buffering and of obtaining each sample individually.

I also noticed that even if I request 60 fps from your filter, I get only 45 fps, while I have plenty of CPU resources to spare (CPU < 10%). If you are using the standard IMemAllocator implementation, this could be explained (in my opinion) by the allocator not creating enough buffers. Are you using DecideBufferSize in your output pin implementation? If so, how many cBuffers are you setting in the ALLOCATOR_PROPERTIES structure? Maybe you should consider increasing this number.
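For reference, a typical CBaseOutputPin::DecideBufferSize override looks roughly like the sketch below; the class name, buffer count, and frame size are illustrative assumptions, not necessarily what your filter does:

HRESULT CMyOutputPin::DecideBufferSize(IMemAllocator *pAlloc,
                                       ALLOCATOR_PROPERTIES *pRequest)
{
    CheckPointer(pAlloc, E_POINTER);
    CheckPointer(pRequest, E_POINTER);

    // Several buffers let a downstream filter hold on to a sample
    // without starving the capture loop; cBuffers = 1 forces lock-step.
    if (pRequest->cBuffers < 8)
        pRequest->cBuffers = 8;             // illustrative value
    pRequest->cbBuffer = 640 * 480 * 4;     // one RGB32 frame, assuming a 640x480 mode

    ALLOCATOR_PROPERTIES actual;
    HRESULT hr = pAlloc->SetProperties(pRequest, &actual);
    if (FAILED(hr))
        return hr;
    if (actual.cbBuffer < pRequest->cbBuffer)
        return E_FAIL;                      // allocator could not satisfy the request
    return S_OK;
}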

Just my 2 cents

Posted: 16 January 2010 03:11 PM   [ # 1 ]
Administrator
Total Posts:  585
Joined  2009-09-17
igor1960 - 16 January 2010 01:15 PM
What version of the PS3Eye Camera source filter are you referring to? Is it the latest version of the driver?

Yes, I do use DecideBufferSize, and in the next release of the driver I will increase that number; hopefully this will eliminate the problems you are experiencing.

In any case, if your renderer is slower than real time, what is the point of buffering? It will never catch up and will just keep wasting system resources. The right thing would be to drop the old, non-rendered samples and render the most recent one.

AlexP
Posted: 16 January 2010 04:16 PM   [ # 2 ]
Jr. Member
Total Posts:  49
Joined  2010-01-15

Alex,

You are absolutely right that if my renderer is too slow, the older frames should be dropped.

However, what I’m talking about here are two possible scenarios:

- The renderer is not a real-time rendering surface but, for example, a writer filter configured to save every arriving frame. So if the capture filter is set to 60 fps and the writer filter is configured to produce 60 fps, we obviously want all 60 unique frames per second not just to arrive at the writer, but to be buffered and processed by it.

- Even for a real-time rendering surface: while you are absolutely right that we should drop all frames but the last one, the problem is that at the point where, for example, DoRenderSample(IMediaSample *pSample) is called, the filter has no clue whether this pSample is the last one before the drawing operation is invoked, right? Therefore, DoRenderSample cannot just return without doing anything: instead it calls pSample->AddRef() and remembers that pSample in some “buffer”, or maybe just in a class member (m_pSample, for example). So the code looks like this:
HRESULT DoRenderSample(IMediaSample *pSample)
{
    ...
    // Keep only the most recent sample: take a reference on the new
    // one, then drop the reference on the previously held one.
    pSample->AddRef();
    if (m_pSample != NULL)
        m_pSample->Release();
    m_pSample = pSample;
    ...
}

So, as a result, m_pSample points to the last, AddRef’ed sample, while all previous samples have been released.

That code works fine when the frequency of calls to DoRenderSample is relatively low.

However, since you are allocating IMediaSamples through the default IMemAllocator and use just one buffer, in reality, once we call pSample->AddRef(), that IMediaSample cannot be returned and reused by your filter until my filter releases it. You can look at the source code of HRESULT CSourceStream::DoBufferProcessingLoop(void) in source.cpp of the BaseClasses, where you will see an endless loop like this:

HRESULT hr = GetDeliveryBuffer(&pSample, NULL, NULL, 0);
if (FAILED(hr)) {
    Sleep(1);
    continue;   // go round again. Perhaps the error will go away
                // or the allocator is decommited & we will be asked to
                // exit soon.
}
So GetDeliveryBuffer will keep waiting for your last media sample to be released; until it is, it will keep returning error codes.

So, my point is: if you want your DirectShow filter not just to work, but to work perfectly, something should be done here, and it is at high FPS that the difference shows up.
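In the meantime, one workaround on the consumer side, assuming a known RGB32 format, is to copy the payload out instead of holding on to the IMediaSample, so the allocator’s buffer is returned immediately. A sketch (the class name and the m_lastFrame member are mine, purely for illustration):

#include <vector>
#include <streams.h>    // DirectShow base classes

// Assumed member: std::vector<BYTE> m_lastFrame; add locking if
// another thread reads it concurrently.
HRESULT CMyRenderer::DoRenderSample(IMediaSample *pSample)
{
    BYTE *pData = NULL;
    HRESULT hr = pSample->GetPointer(&pData);   // pointer into the allocator's buffer
    if (FAILED(hr))
        return hr;
    long cb = pSample->GetActualDataLength();
    m_lastFrame.assign(pData, pData + cb);      // deep copy; no AddRef() needed
    return NOERROR;                             // sample goes straight back to the allocator
}

Of course, that copy costs memory bandwidth at 60 fps, so a larger cBuffers count in the allocator would still be the cleaner fix.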
