python - Gabor filters: large variance compared to the mean


I am trying to extract Gabor features from an input image. I have filters with different parameters (frequency, angle and standard deviation), and I am convolving each of these filters with the input image and looking at the mean and variance of the output magnitude image. So I have something like this in Python:

    import numpy as np
    from scipy import ndimage as nd

    # self.kernels holds the complex Gabor filter kernels
    def filter_image(self, image):
        filtered = np.zeros((len(self.kernels) * 2,) + image.shape)
        for k, kernel in enumerate(self.kernels):
            filtered[k * 2, :] = nd.convolve(image, np.real(kernel))
            filtered[k * 2 + 1, :] = nd.convolve(image, np.imag(kernel))
        return filtered

And here is how I compute the mean and variance of the power image:

    def compute_features(self, image):
        features = np.zeros((len(self.kernels), 2))
        filtered = self.filter_image(image)
        for k in range(len(self.kernels)):
            power_image = np.sqrt(filtered[k * 2] ** 2 + filtered[k * 2 + 1] ** 2)
            features[k, 0] = power_image.mean()
            features[k, 1] = power_image.var()
        return features

When I look at the mean and variance of each of the filter responses, I notice that the variance is really high compared to the mean; for example, I get values like (mean = 0.83, variance = 900). I wonder: is this something that is commonly seen, or does it tell me that I do not really have any texture in the image? I am not sure how to interpret it.
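To get a feel for what these numbers look like on a strongly textured input, here is a small self-contained sketch (not the asker's actual setup: the hand-built kernel, the synthetic striped image, and all parameter values below are made up for illustration) that convolves one complex Gabor kernel with a sinusoidal texture and prints the mean and variance of the power image:

```python
import numpy as np
from scipy import ndimage as nd

def gabor_kernel_simple(frequency, theta=0.0, sigma=3.0, size=15):
    """Hand-built complex Gabor kernel: Gaussian envelope * complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotate coordinates by theta
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * frequency * xr)

# synthetic vertical-stripe texture with values in [0, 1]
img = 0.5 + 0.5 * np.sin(2 * np.pi * 0.1 * np.arange(64))[None, :] * np.ones((64, 1))

k = gabor_kernel_simple(frequency=0.1)
real = nd.convolve(img, np.real(k), mode='wrap')
imag = nd.convolve(img, np.imag(k), mode='wrap')
power = np.sqrt(real ** 2 + imag ** 2)
print(power.mean(), power.var())
```

With the kernel frequency matched to the stripe frequency, the response is strong and relatively uniform, so the variance stays modest compared to the mean.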

Apologies if this question is not exactly on topic for this forum. I also cross-posted it.

You should already have an idea about the texture of your image just by looking at it.

I suggest you have a look at the scikit-image Gabor filter example. You will be able to see typical results on textured images and check your code against it.

I think there is a mistake in your code snippet: in your filter_image function, you should loop over all the kernels, whereas you only convolve with a single kernel.
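As a sketch of what the loop could look like (written here as a plain function rather than a method, and assuming the kernels are complex Gabor kernels; the dummy kernels and random image below are made up for illustration):

```python
import numpy as np
from scipy import ndimage as nd

def filter_image(kernels, image):
    """Convolve the image with every Gabor kernel, storing the real and
    imaginary responses in consecutive slices; enumerate provides the
    kernel index k used to place each pair of responses."""
    filtered = np.zeros((len(kernels) * 2,) + image.shape)
    for k, kernel in enumerate(kernels):
        filtered[k * 2] = nd.convolve(image, np.real(kernel))
        filtered[k * 2 + 1] = nd.convolve(image, np.imag(kernel))
    return filtered

# tiny usage example with two dummy complex kernels
kernels = [np.ones((3, 3)) + 0j, np.eye(3) * (1 + 1j)]
image = np.random.default_rng(0).random((16, 16))
out = filter_image(kernels, image)
print(out.shape)  # (4, 16, 16): two slices (real, imaginary) per kernel
```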

In addition, you should check what type of input image you are using: in the scikit-image example, the image is converted to float in [0, 1] before computing these features. From your results, I suspect that you are working with integers, which can yield very different values for the mean and the variance, although it gives the same output after normalization. If I modify the code in the scikit-image example to use raw integer images, I get results comparable to yours instead of the float ones. Computing the min / mean / max of the ratio of mean over variance:

    print((ref_feats[:, :, 0] / ref_feats[:, :, 1]).min())
    print((ref_feats[:, :, 0] / ref_feats[:, :, 1]).mean())
    print((ref_feats[:, :, 0] / ref_feats[:, :, 1]).max())

(where ref_feats holds the (mean, var) features over the reference images and kernels), I found:

    0.00403515106897
    1.67550281887
    9.91940408151
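To illustrate the scaling point: for the same image content, going from integer values in [0, 255] to floats in [0, 1] divides the mean by 255 but the variance by 255², so the mean/variance ratio changes by a factor of 255. A toy demonstration with a random image (not the example's data):

```python
import numpy as np

rng = np.random.default_rng(0)
img_int = rng.integers(0, 256, size=(32, 32)).astype(float)  # integer-valued, 0..255
img_f = img_int / 255.0                                      # same content, rescaled to [0, 1]

ratio_int = img_int.mean() / img_int.var()
ratio_float = img_f.mean() / img_f.var()
print(ratio_int, ratio_float, ratio_float / ratio_int)  # last value is ≈ 255
```

So a (mean = 0.83, variance = 900) pair on an integer image can correspond to a much tamer ratio once the image is normalized to floats.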
