Tuesday, April 2, 2019
Glaucoma Image Processing Technique
Team 19 Members: 40102434 Andrew Collins, 40134357 Connor Cox, 40056301 William Craig, 40133157 Aaron Devine

We have been tasked with developing a system that, using image processing techniques, would be able to detect glaucoma. This required us to enhance our knowledge of how to apply pre-processing, segmentation, feature extraction and post-processing on a set of given images in order to produce a classification.

Glaucoma is an eye condition where the optic nerve, which is the connection from your brain to your eye, becomes damaged. This can lead to a complete loss of vision if it is not detected and treated early on. It is caused when fluid in the eye cannot be drained effectively, which builds up and then applies excessive pressure on the optic nerve.

Detecting glaucoma is normally a very time consuming and expensive process because it requires a trained professional to carry out the research. The advantage of automating this process is that it frees up that professional's time to carry out other duties.

The system is going to be tested methodically during the creation of the assignment, to help us decide which parameters would best increase the detection rate of glaucoma.

System

The way we tackled this assignment is that we made a system that takes image sets and converts them into data sets, which are trained and tested through our classification process. The system assigns each data set to either being healthy or having glaucoma detected. Training goes through the following stages in this order:

Pre-processing
Segmentation
Post-processing
Feature extraction
Classification

Methodology

For us to decide what would be the best choice of techniques for each stage of the system, we are going to use a set methodology to standardise our selection process. The aim is to train and test the system so that it yields the maximum accuracy it can achieve at each stage, so that when it reaches the classification stage it provides the most accurate result.

The way we are going to measure the accuracy of the system is by running a testing/training cycle for each parameter being changed, putting the results into a table and comparing them to select the best result.

Brightness Enhancement

In our system, I have implemented Automated Brightness Enhancement (ABE). ABE is used to normalise an image so that the image's mean grey value is equal to 127, or (255/2). The image below illustrates what the results look like.

As you can see in the table above, the accuracy of our system significantly decreases when ABE is enabled. Therefore, for the good of the system's accuracy, we will disable ABE in the system. As for why ABE damages the accuracy, it likely destroys some data within images that have a wider dynamic range than the one shown above, which would result in some grey levels being clipped to 0 or 255. Accuracy falls significantly here because ABE causes the classifier to return positive for glaucoma for more images than it should, which, due to the class ratio imbalance, hurts the overall accuracy.
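To make the technique concrete, below is a minimal sketch of how an automated brightness enhancement of this kind could be implemented, assuming a greyscale BufferedImage; the class and method names are illustrative and not taken from our actual ABE code.

import java.awt.image.BufferedImage;

// Illustrative sketch of Automated Brightness Enhancement (ABE): offset every
// grey level so that the image's mean grey value becomes 127 (255 / 2).
// Hypothetical names; not the project's actual implementation.
public class BrightnessEnhancerSketch {

    public static BufferedImage enhance(BufferedImage image) {
        int width = image.getWidth();
        int height = image.getHeight();

        // Compute the current mean grey value of the image.
        long sum = 0;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                sum += image.getRaster().getSample(x, y, 0);
            }
        }
        double mean = (double) sum / (width * height);

        // Shift every pixel so the new mean is 127, clamping to [0, 255].
        // This clamping is where data can be lost in images with a wide dynamic range.
        int offset = (int) Math.round(127 - mean);
        BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int value = image.getRaster().getSample(x, y, 0) + offset;
                result.getRaster().setSample(x, y, 0, Math.max(0, Math.min(255, value)));
            }
        }
        return result;
    }
}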
Contrast Enhancement

Our system implements three types of contrast enhancement: Histogram Equalisation, Automated Linear Stretch (ALS) and the Power Law. These three topics are covered extensively in the lecture slides, so in the interest of keeping the report concise, I won't discuss them in depth here. Ultimately, only one of these techniques will be picked.

Automated Linear Stretch

Histogram Equalisation

Power Law

This technique has an issue: the system doesn't contain an automated way to find the value of gamma (γ) for each image. So we'll test every value of gamma from 0.0 to 2.0 in increments of 0.1 to see if any of the results provide a higher accuracy than when the Power Law isn't enabled at all. A gamma of 0.6, highlighted in green, gives an accuracy of 88%; the image below shows the Power Law being applied at this value.

In the image above, the original image is on the left, the processed image is on the right, and their corresponding histograms are underneath each, respectively. It would appear that the Power Law has actually made the dynamic range of our image worse.

Examining the segmented binary image below could explain why the accuracy has risen to 88%. From this image, we can see that reducing contrast at the higher end, which seems to be what the transform is doing, allows the segmenter (set at its default of edge extraction with n = 1 and no post-processing) to detect the veins and the optic nerve ring within the eye with a higher level of success.

But why is this the case? It is because the image's background becomes more uniform due to the reduction in contrast at the white end, while the veins are barely altered at all as they are darker/greyer. This is likely also the reason values of γ below 1 perform well: they compress the bright end of the range while leaving the darker vein pixels relatively untouched.

Summary

From my tests, I have come to the conclusion that the best technique of the three is the Power Law. It was the only technique that improved our system's accuracy. My tests also suggest that high levels of accuracy are dependent on the successful extraction of data about the veins, which, as I discussed above, the Power Law is highly effective at.

This theory makes even more sense when you consider that the other two methods, which significantly increased the dynamic range, did very poorly in comparison. Our system will benefit from using the Power Law, so from this point on it will be enabled.

Noise Reduction

Our system incorporates two kinds of noise reduction: the Low Pass Filter and the Median Filter. From examining our images, one would conclude that salt-and-pepper and CCD noise are not present. To demonstrate this, however, we'll need to see whether the system gains accuracy when each technique is enabled.

Low Pass Filter (LPF)

As we can see in the table above, accuracy has significantly decreased. To illustrate this, here is what the original and processed histograms look like when the contrast enhancement is applied together with the low pass filter.

From the histograms, it would seem that the low pass filter is actually removing some of the contrast enhancement. The enhanced contrast appears to be treated as background noise: contrast enhancement creates more distinct light and dark patches, which increases the dynamic range, and the low pass filter smooths these patches back out.

Median Filter

Similar to the low pass filter, the median filter is also removing some of the improvements made by contrast enhancement, although it does appear that the median filter is doing this to a lesser degree, as the accuracy is slightly higher here.

Summary

From our tests, we can conclude that both the low pass filter and the median filter only damage the accuracy of our system, the low pass filter more so than the median filter. It appears that the two actually undo some of the work done in contrast enhancement. As well as that, there isn't actually enough noise in the images used here to warrant the use of a noise reduction filter at all.
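For reference, here is a minimal sketch of a 3x3 low pass (mean) filter of the kind tested above, assuming a greyscale BufferedImage; the names are illustrative and this is not our actual filter code.

import java.awt.image.BufferedImage;

// Illustrative 3x3 low pass (mean) filter: each pixel is replaced by the
// average of its neighbourhood, which smooths fine detail and noise alike.
// Hypothetical names; border pixels are simply copied for brevity.
public class LowPassFilterSketch {

    public static BufferedImage apply(BufferedImage image) {
        int width = image.getWidth();
        int height = image.getHeight();
        BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);

        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
                    // Copy border pixels unchanged.
                    result.getRaster().setSample(x, y, 0, image.getRaster().getSample(x, y, 0));
                    continue;
                }
                // Average the 3x3 neighbourhood around (x, y).
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        sum += image.getRaster().getSample(x + dx, y + dy, 0);
                    }
                }
                result.getRaster().setSample(x, y, 0, sum / 9);
            }
        }
        return result;
    }
}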
After performing these tests, I decided to test my hypothesis: I tried applying the noise reduction filters before contrast enhancement to compare the results. The results were actually identical to the results from the earlier test. So what could that mean? Well, it would seem that noise reduction is actually removing some information/data from the images, which then limits the effectiveness of the segmenter. From this point on, noise reduction filters will not be used.

Segmentation

This is used to separate the image into a foreground and a background, with key subjects in the foreground being turned white and the rest black. Our segmentation process involves using edge extraction and then automatic thresholding. The first thing we do is apply the Sobel mask to the pre-processed image. It is very important to use edge extraction because it helps show the boundaries of the eye and makes the veins much more defined. Right after that, we apply automatic thresholding on the gradient magnitude image to produce a binary segmented image.

The class that we use to test which value to use is called SegmenterTest, which tests the value of n within a range of -2.0 to 2.0 in increments of 0.1 to see whether an adjusted value increases the accuracy compared to the default value of n = 1. From this we got the following values.

The default system, where the value of n = 1, produces a good accuracy of 88%, so this is the value that we pass into our segmenter. This allows more generic segmentation than what is possible by setting a manual threshold: the thresholds that are used are derived from the mean brightness of the pixels in the image raster and then adjusted by n standard deviations, providing the best threshold for each image.

To check whether the Sobel mask is the best choice for edge extraction, we also compared the results against using the Prewitt mask for edge extraction. What we found is that using the Prewitt mask as part of our segmentation process is less effective than using the Sobel mask with the default value of n = 1. The best accuracy that we got using the Prewitt mask also occurs at n = 1, just like when we were using the Sobel mask. This allows us to deduce that the Sobel mask is the best option to use for edge extraction during the segmentation process.
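As an illustration of this two-step process, here is a minimal sketch of Sobel edge extraction followed by automatic thresholding at the mean plus n standard deviations, assuming a greyscale BufferedImage; it is a simplified stand-in for our Segmenter class, not the class itself.

import java.awt.image.BufferedImage;

// Illustrative sketch of the segmentation approach: Sobel gradient magnitude,
// then automatic thresholding at (mean + n * standard deviation).
// A simplified stand-in, not the project's actual Segmenter class.
public class SegmenterSketch {

    public static BufferedImage segment(BufferedImage image, double n) {
        int width = image.getWidth();
        int height = image.getHeight();
        double[][] magnitude = new double[height][width];

        // Sobel edge extraction: compute the gradient magnitude at each pixel.
        int[][] sobelX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
        int[][] sobelY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                int gx = 0, gy = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int grey = image.getRaster().getSample(x + dx, y + dy, 0);
                        gx += sobelX[dy + 1][dx + 1] * grey;
                        gy += sobelY[dy + 1][dx + 1] * grey;
                    }
                }
                magnitude[y][x] = Math.sqrt(gx * gx + gy * gy);
            }
        }

        // Automatic threshold: mean gradient magnitude plus n standard deviations.
        double sum = 0, sumSq = 0;
        int count = width * height;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                sum += magnitude[y][x];
                sumSq += magnitude[y][x] * magnitude[y][x];
            }
        }
        double mean = sum / count;
        double std = Math.sqrt(sumSq / count - mean * mean);
        double threshold = mean + n * std;

        // Produce the binary segmented image: edges white, background black.
        BufferedImage binary = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_BINARY);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                binary.getRaster().setSample(x, y, 0, magnitude[y][x] > threshold ? 1 : 0);
            }
        }
        return binary;
    }
}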
Post-processing

Through this image processing technique, the image is enhanced and filtered by a mask. The process uses erosion and dilation to remove isolated noise pixels, fill holes and smooth boundaries. With brightness-based segmentation, post-processing is used to clean up the thresholded binary image. However, it can make objects appear smaller or larger than their original size.

We added the post-processing techniques of closing and opening for our methods of erosion and dilation. To test which values we were going to use, we tried a variety of combinations and got the following results. What we found is that the accuracy drops heavily when any of the post-processing techniques are used.

The image above has closing only enabled, which produced the best accuracy of the post-processing techniques; however, as you can tell from the image below, which has post-processing disabled, it contains much more detail. It is for this reason that we will leave post-processing disabled, because we are then able to achieve better accuracy from the images. Post-processing did not have a positive effect on the classification accuracy, although it does make it visually easier to see how the application was processing the images.

Feature Extraction

The purpose of feature extraction is to gather useful features and details from the segmented images by extracting feature vectors using a technique called moments. Implementing the use of moments correctly is the foundation for the essential computations performed during the analysis of an object.

In the feature extraction class within our program, we have decided that the following features of an object will be taken into consideration: Compactness, Perimeter, Position of Centroid and, finally, the Area of the object. Before we perform the calculations for these features, we first had to implement the moments formula in Java. Once we had created the moment method in our class, we were able to use it to calculate the feature vectors needed.

Compactness

The reason we want the area and the perimeter is so that we can use these values to calculate Compactness, as it is a more useful shape descriptor for our vision system to use. Compactness can be calculated by squaring the perimeter and then dividing it by the area.

private double compactness(BufferedImage image) {
    return Math.pow(getPerimeter(image), 2) / getArea(image);
}

Above I have included the method that is called to calculate the compactness of the object; as you can see, the calculation mentioned above is performed within this method.

Perimeter

The perimeter of the object in question is calculated by first eroding the object, then calculating the eroded object's area, and finally taking the difference between the original object's area and the eroded object's area, like so:

Perimeter = Original Area - Eroded Area

After this calculation is performed we are left with the perimeter of our object.

private double getPerimeter(BufferedImage image) {
    return getArea(image) - getArea(PostProcessor.erode(image));
}

I have placed the method used to get the perimeter of the object above; as you can see, the method performs the calculation required for the perimeter, Original Area - Eroded Area, resulting in our perimeter.

Centroid Position

We can get the X and Y coordinates of the centroid of the object by calculating M01/M00 and M10/M00.

private double[] position(BufferedImage image) {
    // calculate centroid coordinate from M01
    double i = Math.round(moment(image, 0, 1) / moment(image, 0, 0));
    // calculate centroid coordinate from M10
    double j = Math.round(moment(image, 1, 0) / moment(image, 0, 0));
    double[] Cij = {i, j};
    return Cij;
}

Above is the method we have developed to find the position of the centroid of our object. As you can see in the code above, this method uses the moment method to perform the calculations needed to find the centroid position of the object.
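The moment method itself is not reproduced in this report, so below is a minimal sketch of the standard image moment formula, Mpq = sum over x and y of x^p * y^q * f(x, y), assuming a binary segmented BufferedImage where white pixels count as 1 and black pixels as 0; our actual implementation may differ in detail.

// Illustrative sketch of the image moment Mpq for a binary image:
// Mpq = sum over all pixels of x^p * y^q * f(x, y), where f(x, y) is 1 for
// white (object) pixels and 0 for black (background) pixels.
// This is the standard formula, not necessarily our exact implementation.
private double moment(BufferedImage image, int p, int q) {
    double sum = 0;
    for (int y = 0; y < image.getHeight(); y++) {
        for (int x = 0; x < image.getWidth(); x++) {
            // Treat any non-zero sample as part of the object.
            if (image.getRaster().getSample(x, y, 0) > 0) {
                sum += Math.pow(x, p) * Math.pow(y, q);
            }
        }
    }
    return sum;
}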
Area

We must also find the area feature. To do this we must calculate M00, which can be done using the moment method that was developed earlier.

private double getArea(BufferedImage image) {
    return Math.round(moment(image, 0, 0));
}

Above is the getArea method; it calls upon the moment method and Math.round to find the area of our object.

Classification

Within the system we have developed, we included a Nearest Neighbour function that is used to identify and classify images against the training images we have supplied to our system. When we use this feature, we get a variation of results depending on the value we set K to; the results output by this function are included below for analysis.

Nearest Neighbour function:
K = 1: accuracy 62.50%
K = 3: accuracy 87.50%
K = 5: accuracy 56.25%

As you can see from the above results, the Nearest Neighbour function provides us with the highest accuracy rate when using the value 3 for the K variable, because at that setting it best recognises the features of the training images. A disadvantage of this approach is that changing the value of K alters the accuracy of the output: when changing the value of K from 1 to 3 the accuracy increases greatly, but once we change the value from 3 to 5 the accuracy suffers and drops by around 30 percentage points.

Summary

For this current group of images, the Nearest Neighbour function with K set to 3 is the best method for classifying the object, because it returns the highest possible accuracy rate compared with the other values of K, such as 1 or 5; the accuracy rates for these values can be seen above.
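To illustrate how such a classifier works, here is a minimal sketch of K-Nearest Neighbour classification over feature vectors such as the ones extracted above, using Euclidean distance and a majority vote; the class, method and field names are illustrative and not taken from our actual classifier.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative K-Nearest Neighbour classifier over feature vectors.
// Hypothetical names and structure, not the project's actual classifier.
public class NearestNeighbourSketch {

    // A training example: a feature vector plus its known label
    // (true = glaucoma detected, false = healthy).
    public static class Example {
        final double[] features;
        final boolean glaucoma;

        Example(double[] features, boolean glaucoma) {
            this.features = features;
            this.glaucoma = glaucoma;
        }
    }

    // Classify an unknown feature vector by majority vote among its k nearest
    // training examples, measured by Euclidean distance in feature space.
    public static boolean classify(List<Example> training, double[] unknown, int k) {
        List<Example> sorted = new ArrayList<>(training);
        sorted.sort(Comparator.comparingDouble((Example e) -> distance(e.features, unknown)));

        int glaucomaVotes = 0;
        for (int i = 0; i < k && i < sorted.size(); i++) {
            if (sorted.get(i).glaucoma) {
                glaucomaVotes++;
            }
        }
        return glaucomaVotes > k / 2;
    }

    private static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }
}

With a training set loaded, calling classify(training, features, 3) would correspond to the K = 3 configuration that gave the best accuracy above.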