Enhancing the LZW Compression Ratio Through Image Preprocessing Techniques for Grayscale Images

Compression ratios of encoding algorithms degrade due to signal distortion, additive noise, and hacker manipulation. Large files cost too much disk space, are difficult to analyze, and require high bandwidth to transmit over the internet. In such cases, compression is mandatory. LZW is a general dictionary-based lossless compression algorithm. It is fast, simple, and efficient when the input contains lots of repetitive data, as in monochrome images. For images with little data repetition and heavily blurred signals, the compression ratio of the LZW algorithm degrades. Besides this, the execution time of the LZW compression algorithm increases dramatically. To preprocess and analyze the image information, the researcher uses the LZW encoding algorithm, the bit plane slicing technique, the Adaptive Median Filter, and the MATLAB image processing toolbox. The MATLAB public grayscale images, together with salt & pepper noised, Gaussian blurred, and Bayer pattern image datasets, are used. These image datasets are used to test the compression ratio of the normal LZW encoding algorithm and the proposed encoding algorithm step by step. The noised, filtered, and bit plane datasets are processed, and quality and compression ratio parameters are recorded. The average compression ratio of the enhanced encoding algorithm is better than that of the normal LZW encoding algorithm by 160%. Not only the compression ratio, but denoising also improved the algorithm's execution time. The image quality metrics of mean square error, peak signal to noise ratio, and structural similarity index measurement are 0, 99, and 1, respectively. This implies the enhanced encoding algorithm can decompress fully without sacrificing image quality. The LZW encoding algorithm development environment is specific to the TIFF and GIF image formats. In addition, LZW encoding functions are not available in the MATLAB image processing toolbox.
The researcher was therefore challenged to write a MATLAB script for each required function. Still, there is room to extend the compression ratio of the LZW encoding algorithm using the image masking technique.

decompression algorithm that, given the compressed file, reproduces the original file. Many types of compression algorithms have been developed. These algorithms fall into two broad types: lossless algorithms and lossy algorithms (Al-Khafaji and Bassim, 2019). A lossy algorithm (Hussain et al., 2008), as its name implies, loses some data. More data can be stored in the memory space if the images are compressed, and transmission will be faster because of the reduced size of the image (Mishra and Singh, 2017).
Data compression algorithms are used in these standards to reduce the number of bits required to represent an image, a video sequence, or music (Roy et al., 2018). Such techniques allow one to store an image with much less memory than it would normally require, hence allowing it to be transmitted more quickly (Prashanth and Singh, 2015). Digital image processing simply means the processing of images using a digital computer. Image compression (Kaur and Kaur, 2013) is an application of data compression that encodes the original image with fewer bits. In lossless compression, there is no loss of any information in the image when the image is decompressed (Gonzalez and Wood, 2019). In the second stage, the difference between the predicted value and the actual intensity of the following pixel is coded using diverse encoding techniques (Khan et al., 2018). A lossless algorithm such as Huffman coding, which belongs to the entropy encoding sub-family, is among the most used compression methods and underlies many compression algorithms, in particular JPEG (De Luca et al., 2019), where it is possible to compress an image by opening it in binary mode, reading single bytes as ASCII symbols, and then applying Huffman encoding to generate a compressed version of the raw image. The proposed linear filters are nearly optimal in a Wiener sense, and in fact outperform many more complex nonlinear filters (Malvar et al., 2004). For this reason, many of the techniques developed for monochrome images can be extended to color images by processing the three component images individually (Gonzalez et al., 2014). A high-quality binarized image can give more accuracy in character recognition than the original image, because noise is present in the original image (Puneet and Garg, 2013). The default size of the table is 256, to hold pixel values from 0 to 255 for 8 bits (Prabhakar and Ramasubramanian, 2013). LZW compression became the first widely used universal image compression method on computers.

Statement of the problem
Even though it depends on the image format, the LZW image compression algorithm has been enhanced using RLE to achieve a 2.4:1 average compression ratio (Husseen et al., 2017). By exploiting the information contained in satellite multispectral images, a hybrid lossless method combining LZW and arithmetic coding gives better performance than other existing lossless methods in both quality and compression factors (Boopathiraja et al., 2018). Using the new data hiding scheme (namely ODH-LZW), the LZW secret information hiding accuracy is enhanced. The scheme has significantly increased the data hiding capacity, specifically by 109.6% and 28.1-381% for text and grayscale image data, respectively, over state-of-the-art methods (Kumar et al., 2019). The LZW technique, with its tremendous performance (Badshah et al., 2015) in image compression, is better than all other lossless compression algorithms if the image data is monochrome. Two basic improvements of the standard LZW algorithm are registered: reduced memory requirement and a reduced total number of searches needed to decode any code during decompression using the MSED technique (Bhattacharyya et al., 2017). Recently, LZW encoding and decoding times have become more efficient with the help of GPU parallel processing (Soobhee et al., 2017). The LZW algorithm compression ratio degrades because of described and undescribed reasons. With the help of image watermarking techniques, the compression ratio for grayscale images is summarized in Table 1 below.

Significance of the study
This paper will be significant to the entire digital image processing academic research community. In digital image processing, the researcher always faces storage space, transmission bandwidth, memory, and retrieval time problems. Photographers, managers, medical professionals, and all those whose concern is quality benefit directly. Since LZW image compression is a lossless algorithm, images will be decoded without any loss of information.
For example, CT scans in medicine use such outputs.

Scope of the study
This paper aims only at enhancing the compression ratio of the LZW algorithm for grayscale images using the proposed preprocessing techniques. Mainly, the researcher goes through the LZW algorithm and compresses 8-bit depth TIFF images to show some compression ratio enhancement. The image datasets are collected only from the MATLAB public image library.

Limitations
The researcher only selects Bayer pattern images for testing the newly built model. This is because it is possible to reconstruct three-dimensional Bayer pattern images from two-dimensional slices.
Digital Image Compression - Many data compression algorithms have been developed in the literature and, to date, much research is being carried out to come up with newer, better techniques. These algorithms can be lossy or lossless and have been developed for different applications. Some of the algorithms are developed for general use: they can be used to compress data of different types (Lina, 2009), while some are developed to compress a particular type of file efficiently (Ghadirli et al., 2019). In this study, the researchers are concerned only with lossless image compression. Need for Compression - The main intention of image compression is to decrease the redundancy of the image and to save or send data through a network in an efficient manner. Using the fractal concept on the embedded LZW algorithm, the compression ratio is better than the standard LZW method by 102% for grayscale images. In the future, we can improve the image quality and achieve a better compression ratio by changing the values of the image compression parameters. However, in this application it is imperative to determine whether one compression standard will benefit all areas (Halder et al., 2019). Lossless compression gives a lower compression ratio (2:1) as the quality of the image cannot be compromised. Lossless compression methods may be categorized according to the type of data they are designed to compress. The common lossless compression methods are Run-Length Encoding (RLE) and LZW. Image compression is important for web designers who want to create faster-loading web pages, which makes websites more accessible to others. The motivation is that uncompressed images normally require a large amount of storage capacity and transmission bandwidth. The primary goal of image compression is to minimize the number of bits required to represent the original images by reducing the redundancy in images, while still meeting the user-defined quality requirements.

Gray Scale Images
A grayscale image m pixels tall and n pixels wide is represented as a matrix of double data type of size m x n. Element values denote the pixel grayscale intensities, with 0 = black and 1 = white. For processing purposes, we can derive grayscale images from RGB or true color images. Image Preprocessing Techniques for LZW - Image preprocessing techniques are those methods which enhance, restore, and maintain the quality of the image. In this section, the paper discusses only the preprocessing techniques related to reducing size without sacrificing quality.
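To make this representation concrete, here is a minimal sketch (in Python/NumPy rather than the paper's MATLAB, purely for illustration; the 3x3 patch is a hypothetical example) of converting an 8-bit grayscale image to the double [0, 1] form described above:

```python
import numpy as np

# Hypothetical 3x3 8-bit grayscale patch (values 0-255).
img_uint8 = np.array([[0, 128, 255],
                      [64, 192, 32],
                      [255, 0, 128]], dtype=np.uint8)

# Convert to the double representation described in the text:
# intensities in [0, 1], with 0 = black and 1 = white.
img_double = img_uint8.astype(np.float64) / 255.0

print(img_double.min(), img_double.max())  # 0.0 1.0
```

MATLAB's `im2double` performs the equivalent scaling on `uint8` input.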
Attribute Filtering - Binary attribute filtering, the max-tree approach, the volume attribute, and the vision attribute are the widely used attribute filtering techniques to improve different compression algorithms. Experiments (You et al., 2018) have shown that these filters yield improvements of as much as 11, 10, and 20% for the JPEG, JPEG2000, and LZW algorithms, respectively.
Histogram - Among the characteristics found relatively frequently in computer-generated images, but usually not found in natural images, is intensity histogram sparseness.
Entropy Coding - This technique replaces the original data of grayscale images with particularly ordered data so that the performance of lossless compression can be improved. When compressing an ordered image using an entropy encoder, the researcher can expect a higher compression rate because of the enhanced statistical features of the input image. Scholars (Cadena et al., 2017) show that the lossless compression rate increased by up to 37.85% when comparing results from compressing preprocessed and non-preprocessed image data using entropy encoders such as Huffman, Arithmetic, and LZW.

Bi-histogram equalization with a plateau value (BHEPL) -
BHEPL is used to shorten processing time for image enhancement. Bi-Histogram Equalization with a Plateau Value (BHEPL) is similar to Brightness Preserving Bi-Histogram Equalization (BBHE). The BHEPL enhancement method is the combination of two methods: BBHE and clipped histogram equalization. Absolute Mean Brightness Error (AMBE) is used to measure the enhancement performance on the input images.
Digital Filter - Various filters (Cadena et al., 2017) are used for medical image preprocessing, such as the mean filter, Gaussian filter, median filter, and 2D Cleaner. The primary purpose of these filters is noise reduction, but filters can also be used to emphasize certain features of an image or remove other features. Most image processing filters can be divided into linear and nonlinear filters. Nonlinear filters include order statistic filters and adaptive filters. The choice of filter is often determined by the nature of the task and the type and behavior of the data.
Watermarking - LZW has been used successfully for lossless watermark compression, watermarking medical images in teleradiology to ensure less payload encapsulation into images and to preserve their perceptual and diagnostic qualities unchanged. Medical image security is one such application based on watermarking of medical images. With the help of the LZW algorithm, the researcher can preserve image information while hiding it from hacker manipulation and other security threats. Watermarking image information into another ordinary, small, and portable format is straightforward.
Contrast enhancement - Image contrast enhancement aims to improve the contrast level of images, since image quality can suffer due to several factors, such as contrast, illumination, and noise during the image acquisition procedure. Image contrast is defined as the separation factor between the brightest spot and the darkest spot in images (Chen et al., 2018). A larger separation factor indicates higher contrast; a smaller separation factor indicates lower contrast. Image contrast enhancement is useful in many real-world application areas.
Bit Plane Slicing - A bit plane is a set of bits corresponding to a given bit position in each of the binary numbers in an image. It is used to determine the adequacy of the number of bits used to quantize each pixel in the image (Pokle and Bawane, 2017). Bit plane slicing is the conversion of an image into a multilevel binary image. These binary images are then compressed using different algorithms. With this technique, the valid bits of grayscale images can be separated, which is useful for processing these data with very low time complexity (Image and Project, 2014). The first step is to slice the grayscale image into eight binary monochrome images by bit plane slicing. The generated binary images contain redundant bits, because the number of colors decreases to two: black (0) and white (1).
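The slicing just described can be sketched as follows (a Python/NumPy illustration rather than the paper's MATLAB; the 2x2 image is a hypothetical example):

```python
import numpy as np

def bit_planes(img):
    """Slice an 8-bit grayscale image into eight binary bit planes.

    Plane 0 holds the least significant bit and plane 7 the most
    significant bit, matching the convention in the text.
    """
    return [(img >> k) & 1 for k in range(8)]

# Small hypothetical 8-bit image.
img = np.array([[200, 15], [129, 64]], dtype=np.uint8)

planes = bit_planes(img)

# The eight planes reconstruct the original image exactly (lossless).
recon = sum(p.astype(np.uint8) << k for k, p in enumerate(planes))
print(np.array_equal(recon, img))  # True
```

Keeping only the MSB plane (`planes[7]`) yields the binary image the proposed method feeds to the LZW encoder.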
Histogram - An image histogram is a type of histogram that acts as a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. The histogram equalization block enhances the contrast of images by transforming the values in an intensity image so that the histogram of the output image approximately matches a specified histogram. This enhancement algorithm is based on plateau histogram equalization for infrared images. By analyzing the histogram of the image, the threshold value is obtained self-adaptively.
Noise Removal - The researcher may define noise as any degradation in the image signal caused by external disturbance. Noise models are also designed by probability density functions using the mean, variance, and mainly the gray levels in digital images (Chiranjeevi and Jena, 2016). Image noise is generally regarded as an undesirable by-product of image capture. Although these unwanted fluctuations became known as "noise" by analogy with unwanted sound, they are inaudible. The standard model of amplifier noise is additive, Gaussian, independent at each pixel, and independent of the signal intensity.
In color cameras where more amplification is used in the blue color channel than in the green or red channel, there can be more noise in the blue channel. Amplifier noise is a major part of the "read noise" of an image sensor, that is, of the constant noise level in dark areas of the image.
Salt-and-pepper noise - An image containing salt-and-pepper noise will have dark pixels in bright regions and bright pixels in dark regions. This type of noise can be caused by dead pixels, analog-to-digital converter errors, bit errors in transmission, etc. It can be eliminated in large part by using dark frame subtraction and by interpolating around dark/bright pixels. Many denoising algorithms have been developed to recover a noise-corrupted image; however, most of them cannot recover well an image corrupted with noise density above 70% (Lu and Chou, 2012).
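A minimal sketch of this degradation and its classic remedy, the median filter, is shown below (Python/NumPy stand-ins for MATLAB's imnoise and medfilt2; the constant test image, the 0.1 density, and the 3x3 window are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_salt_pepper(img, density):
    """Corrupt a grayscale image with salt & pepper noise.

    `density` is the fraction of pixels flipped to 0 (pepper)
    or 255 (salt), mirroring the noise intensities in the text.
    """
    noisy = img.copy()
    mask = rng.random(img.shape) < density
    salt = rng.random(img.shape) < 0.5
    noisy[mask & salt] = 255
    noisy[mask & ~salt] = 0
    return noisy

def median3x3(img):
    """Plain 3x3 median filter (edge pixels left unchanged),
    a simplified stand-in for MATLAB's medfilt2."""
    out = img.copy()
    m, n = img.shape
    for i in range(1, m - 1):
        for j in range(1, n - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

clean = np.full((16, 16), 128, dtype=np.uint8)
noisy = add_salt_pepper(clean, 0.1)
denoised = median3x3(noisy)
```

On this flat image the filter removes isolated impulses almost entirely, which is why the paper filters before encoding: the denoised image is far more repetitive and so compresses better under LZW.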
Poisson noise -Poisson noise or shot noise is a type of electronic noise that occurs when the finite number of particles that carry energy, such as electrons in an electronic circuit or photons in an optical device, is small enough to give rise to detectable statistical fluctuations in a measurement.
Speckle noise - Speckle noise is a granular noise that inherently exists in, and degrades the quality of, active radar and synthetic aperture radar (SAR) images. It is caused by coherent processing of backscattered signals from multiple distributed targets. In SAR oceanography, for example, speckle noise is caused by signals from elementary scatterers, the gravity-capillary ripples, and manifests as a pedestal beneath the image. PSNR is derived from the mean square error and indicates the ratio of the maximum pixel intensity to the power of the distortion. Like MSE, the PSNR metric is simple to calculate but might not align well with perceived quality. SSIM - Structural Similarity (SSIM) Index.

Removing Noise by Adaptive Filtering
The SSIM metric combines local image structure, luminance, and contrast into a single local quality score. Because structural similarity is computed locally, SSIM can generate a map of quality over the image (Lu and Chou, 2012). The MSE (Mean Squared Error) is:

MSE = (1 / (m n)) Σ_{i=1}^{m} Σ_{j=1}^{n} [f(i, j) - g(i, j)]^2
Legend: f represents the matrix data of the original image; g represents the matrix data of the degraded image in question; m is the number of rows of pixels of the images and i is the index of that row; n is the number of columns of pixels and j is the index of that column; max_f is the maximum signal value that exists in the original "known to be good" image. ImageJ2: ImageJ is a public domain Java image processing program inspired by NIH Image for the Macintosh. It can display, edit, analyze, process, save, and print 8-bit, 16-bit, and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS, and "raw".
Algorithms: In this section the researcher briefly discusses the algorithms used. The first algorithm is Lempel-Ziv-Welch (LZW). The second algorithm, employed to enhance LZW, is bit plane slicing.

Normal LZW -
The LZW algorithm is a greedy algorithm in that it tries to recognize increasingly longer phrases that are repetitive, and encodes them. Each phrase is defined to have a prefix that is equal to a previously encoded phrase plus one additional character in the alphabet.
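The phrase-growing behavior described above can be sketched as a textbook LZW coder (an illustrative Python version; the paper's own implementation is a MATLAB script):

```python
def lzw_compress(data: bytes) -> list:
    """Textbook LZW: grow a dictionary of byte strings, emitting the
    code of the longest known prefix plus one extending symbol."""
    dictionary = {bytes([i]): i for i in range(256)}
    phrase = b""
    codes = []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate
        else:
            codes.append(dictionary[phrase])
            dictionary[candidate] = len(dictionary)
            phrase = bytes([byte])
    if phrase:
        codes.append(dictionary[phrase])
    return codes

def lzw_decompress(codes: list) -> bytes:
    """Inverse mapping: rebuild the same dictionary on the fly."""
    dictionary = {i: bytes([i]) for i in range(256)}
    prev = dictionary[codes[0]]
    out = bytearray(prev)
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:  # the classic KwKwK corner case
            entry = prev + prev[:1]
        out += entry
        dictionary[len(dictionary)] = prev + entry[:1]
        prev = entry
    return bytes(out)

# Repetitive data compresses well, as the text notes for monochrome images.
raw = b"ABABABABABABABAB" * 8
codes = lzw_compress(raw)
ratio = len(raw) / len(codes)  # input symbols vs. output codes, a rough CR proxy
assert lzw_decompress(codes) == raw
```

The round-trip assertion illustrates the lossless property the paper relies on: decompression reproduces the input exactly, and the more repetitive the input, the fewer codes are emitted.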

Our Image Compression Flowchart -
In this section the researcher performs all the preprocessing tasks: separating the RGB color components (i.e., red, green, and blue), masking each color component, finding their respective bit planes, and filtering noise. Extracting RGB Components - The foremost preprocessing technique to improve the normal LZW lossless image compression technique is dividing the true color or RGB image into its components. The dictionary-based algorithm is fed each color channel (red, green, and blue) alone. The LZW algorithm is better suited to compressing grayscale or 8-bit depth images than other lossless image compression algorithms. The easiest and most effective technique to extract the color components from an RGB color image is the MATLAB function imsplit(). Bit Plane Slicing - Bit plane slicing is a method of representing an image with one or more bits of the byte used for each pixel. One can use only the MSB to represent a pixel, which reduces the original gray level to a binary image (Dastanova et al., 2018). The three main goals of bit plane slicing are: converting a gray level image to a binary image; representing an image with fewer bits, corresponding to a smaller size; and enhancing the image by highlighting the contribution made to the total image appearance by specific bits. The assumption here is that each pixel is represented by 8 bits and the image is composed of eight 1-bit planes. Plane 0 contains the least significant bit and plane 7 contains the most significant bit. The higher order bits (top four) contain the majority of the visually significant data; the remaining bit planes contribute the more subtle details. It is useful for analyzing the relative importance played by each bit of the image. The bit slicing MATLAB code used here is adapted from Ahmed Ayman (https://www.mathworks.com/matlabcentral/fileexchange/). The code displays images at eight different bit planes.
As the researcher can observe by comparing each output to the original gray image, the image at bit 8 (the MSB) is equal to the original gray image. So, the researcher takes it and applies the LZW algorithm.

LZW Pre-processing
Image binarization - These are the pre-processing steps often performed in the improved LZW.
Binarization - Usually presented with a grayscale image, binarization is simply a matter of choosing a threshold value. Color reduction is done here by converting the RGB true color image to a gray level image. Since the bit depth decreases from 24 bits to 8 bits, there is some quality degradation. This is done using the rgb2ind function. This section classifies some important local and global binarization methods that are currently used. Gray Scale Image Noise Removal - In this section, the given grayscale image is distorted by adding salt & pepper noise with different intensities. The maximum distortion that can be restored with medfilt2 is 70%. To compare the local quality scores and the compression ratio of an image, the noised image must be filtered. To filter noise, wdenoise2 is applied. To reconstruct the original true color image from the 8-bit depth image, the researcher needs at least a 5-row by 5-column image array of pixels, and the test images must be Bayer pattern sensor alignment digital images. If so, the demosaic function can reconstruct the true color image using one of the sensor alignments: GBRG, GRBG, BGGR, or RGGB. This works if the image was captured with the camera sensor's Bayer pattern. Otherwise, to concatenate and convert grayscale images to RGB, a colormap is needed; in that case a grayscale image can be converted to its true color version with a real colormap. The technique is called demosaicking. The figure below is an example of gray to RGB conversion.
The following MATLAB script gives us the RGB version of the gray mandi image (Fig 15).
>> gray = imread('mandi.tif');
>> imshow(gray)
>> RGB = demosaic(gray, 'bggr');
>> imshow(RGB)
Histogram - An image histogram is a type of histogram that acts as a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. By looking at the histogram for a specific image, a viewer will be able to judge the entire tonal distribution at a glance. Image histograms are present on many modern digital cameras. Photographers can use them as an aid to show the distribution of tones captured, and whether image detail has been lost to blown-out highlights or blacked-out shadows. The horizontal axis of the graph represents the tonal variations, while the vertical axis represents the number of pixels with that particular tone. Noise filtration - Different noises distort electronic images through transfer over the internet, image acquisition, hacker manipulation, digital camera capture, etc. To reduce such unwanted signals, the researcher uses noise filtering functions. The proposed salt & pepper noise remover is the adaptive median filter.
Enhanced LZW - In this section, the normal LZW image compression algorithm is embedded with the proposed enhancement techniques. With all the code blocks completed, merging them yields the final proposed model. Here the model produces a compressed grayscale image both with noise and after noise filtration. The compression ratio gained from the normal LZW and the enhanced LZW is registered for every single image for comparison purposes, and the quality of the image before and after compression is measured using SSIM, MSE, and PSNR.

Experimental setup
The experiment was performed using Windows 10 Professional on an Intel Core i5-4200M CPU running at 2.50 GHz. The software used for the experiment implementation is MATLAB R2019a with the image processing toolbox, ImageJ, and GIMP. The datasets are all gray image formats from the MATLAB public library, along with Bayer pattern images.
Noised images are produced using salt & pepper and Gaussian blurring models with intensity up to 70%. Filtered or denoised images are produced using medfilt2 and wdenoise2. The 8-bit plane slicing technique is used to separate the bit planes. The bit plane slicing output bits are exported/written to disk for the LZW encoding scripts. The compression ratio measurement and execution time scripts are included in the enhanced LZW encoding algorithm MATLAB file.

Split RGB Image into Its Component Channels -
The horse RGB image from the MathWorks laboratory is split into its color channels using the imsplit() MATLAB function as follows. Having the three color components of the RGB image separated, the researcher converts each to gray, so that each color can be binarized and placed in a bit plane as an 8-bit depth image. For this task the researcher again employs the imsplit() function, but rgb2gray() can also be used as an alternative.
subplot(1,3,3); imshow(Imgbw); title('Binarized Apple');
The researcher converts an RGB image to a gray level image using the RGB2GRAY conversion model. The grayscale output g is a constrained linear combination of the R, G, and B channels of the input color image I, which is:
g = w_r I_r + w_g I_g + w_b I_b, subject to w_r + w_g + w_b = 1, w_r >= 0, w_g >= 0, w_b >= 0

Gray Scale Image to Bit Plane Conversion
Where I_r, I_g, and I_b are the red, green, and blue input channels, respectively. The channel weights w_r, w_g, and w_b are non-negative numbers that sum to 1.
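The constrained combination above can be sketched as follows (a Python/NumPy illustration; the weights shown are approximately the Rec. 601 luma coefficients that MATLAB's rgb2gray uses, taken here as an assumption, and the 1x2 image is hypothetical):

```python
import numpy as np

def rgb_to_gray(img, w=(0.299, 0.587, 0.114)):
    """Weighted channel combination g = w_r*I_r + w_g*I_g + w_b*I_b.

    The weights must be non-negative and sum to 1, per the
    constraint stated in the text.
    """
    wr, wg, wb = w
    assert abs(wr + wg + wb - 1.0) < 1e-9 and min(w) >= 0
    return wr * img[..., 0] + wg * img[..., 1] + wb * img[..., 2]

# A 1x2 double RGB image: a pure red pixel and a pure white pixel.
rgb = np.array([[[1.0, 0.0, 0.0], [1.0, 1.0, 1.0]]])
gray = rgb_to_gray(rgb)
```

A white pixel maps to 1.0 (full intensity) and a pure red pixel to w_r, showing how the constraint keeps the output inside the [0, 1] grayscale range.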

RGB Image to Bit Plane Conversion
Since the researcher cannot enter a true color image into an 8-bit plane, the researcher needs to find the red, green, and blue color components of a given true color image as follows.
Noise Removal - Image noise is random (not present in the object imaged) variation of brightness or color information in images, and is usually an aspect of electronic noise. It can be produced by the sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable by-product of image capture that adds spurious and extraneous information. "Noise" means unwanted signal; removing those unwanted signals enhances the quality of the image and decreases the file size. The major noises are photon (shot), thermal, salt, and pepper noise. There are different noise removal techniques for each noise type, among them median, blurring, sigma filter, KNN filter, Savitzky-Golay, BM3D, non-local means, K-SVD, K-LLD, and Knox-Thompson. Multiply each plane by a given mask to create masked red, green, and blue planes; having those two-dimensional images, the researcher concatenates them to reconstruct a masked RGB image.
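The masking step in the last sentence can be sketched as follows (a Python/NumPy illustration; the 4x4 image and the 2x2 mask region are hypothetical):

```python
import numpy as np

# Hypothetical 4x4 RGB image and a binary mask selecting a 2x2 region.
rng = np.random.default_rng(1)
rgb = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1

# Multiply each color plane by the mask, then concatenate the
# masked planes back into a masked RGB image.
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
masked_rgb = np.dstack([r * mask, g * mask, b * mask])
```

Pixels outside the mask are zeroed in every channel, while pixels inside it keep their original values, which is the per-plane mask-and-concatenate operation the text describes.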

Performance Measurement Parameters
After reconstruction/decompression of the image, the quality of the image and the performance of the compression algorithm have to be tested. The tests establish the amount of compression and how similar the reconstructed image is to the original. In this thesis the tests were conducted by calculating important distortion measures, namely: the mean square error (MSE), the peak signal-to-noise ratio (PSNR) measured in decibels (dB), the compression ratio (CR), and the structural similarity index (SSIM), which are all briefly defined below.

Mean Square Error (MSE)
The MSE is the cumulative squared error between the compressed and the original image. It measures the average of the square of the error. A lower value of MSE means lesser error.

Compression Ratio (CR)
It is the measure of the reduction of the detailed coefficients of the data. In the process of image compression, it is important to know how many detailed (important) coefficients one can discard from the input data in order to preserve the critical information of the original data. Compression ratio can be expressed as:
CR = (size of the original image) / (size of the compressed image)

Peak Signal to Noise Ratio (PSNR)
PSNR is a measure of the peak error. Many signals have a very wide dynamic range; for this reason PSNR is usually expressed on the logarithmic decibel scale (dB). A higher value of PSNR means a higher signal-to-noise ratio. Values for PSNR range between infinity for identical images and 0 for images that have nothing in common. The PSNR is given as:
PSNR = 10 log10(max_f^2 / MSE)
where MSE is the mean squared error and max_f is the maximum possible pixel value of the image.
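The MSE and PSNR definitions above can be sketched together (a Python/NumPy illustration in place of the paper's MATLAB immse/psnr; the 2x2 images are hypothetical):

```python
import numpy as np

def mse(f, g):
    """Mean squared error between original f and degraded g."""
    f = f.astype(np.float64)
    g = g.astype(np.float64)
    return np.mean((f - g) ** 2)

def psnr(f, g, max_f=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(f, g)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_f ** 2 / err)

orig = np.array([[10, 20], [30, 40]], dtype=np.uint8)
degraded = np.array([[12, 20], [30, 40]], dtype=np.uint8)

print(mse(orig, degraded))   # 1.0
print(psnr(orig, orig))      # inf
```

The infinite PSNR for identical images is exactly the condition the paper reports for its lossless pipeline (MSE of 0).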
Structural Similarity Index (SSIM) - SSIM is the structural similarity index for measuring the similarity between the original image and the compressed image. Given two image signals or pixels x and y from two images which are aligned with each other, the SSIM between the two image signals is given as a function of three characteristics: luminance l(x, y), contrast c(x, y), and structure s(x, y). SSIM lies between 0 and 1. N is the number of signal samples (pixels), and C_3 is a non-negative constant.

The speckle index is defined as the ratio of standard deviation to mean, and its discrete version for an image reads:
S = (1 / (m n)) Σ_{i=1}^{m} Σ_{j=1}^{n} σ(i, j) / μ(i, j)
where m x n is the size of the image, μ is the local mean, and σ is the local standard deviation. For optimal performance, the measured value of the speckle index should be low.
The speckle index can be regarded as an average reciprocal signal-to-noise ratio (SNR), with the signal being the mean value and the noise being the standard deviation. Experiment One - The MATLAB image processing toolbox, GIMP, and image noising techniques are used. All the datasets taken from the MATLAB library are gray level and TIFF format. The noised datasets are prepared by applying salt & pepper noise with 0.1 intensity. The compression ratio, peak signal-to-noise ratio, mean square error, and structural similarity index measurements are taken. Experiment Two - The MATLAB image processing toolbox, GIMP, and image denoising techniques are used. All the datasets are taken from the above table (Experiment One). The denoised datasets are prepared by applying the medfilt2 algorithm. The compression ratio, peak signal-to-noise ratio, mean square error, and structural similarity index measurements are taken (Table 5).
Experiment Three - The MATLAB image processing toolbox, GIMP, and the bit plane slicing algorithm are used. All the datasets are taken from the above table (Experiment Two). The 8 bit planes of the denoised grayscale images are extracted and the MSB of each image is taken as the experimental input. The compression ratio, peak signal-to-noise ratio, mean square error, and structural similarity index measurements are taken (Table 6). The SSIM metric combines local image structure, luminance, and contrast into a single local quality score. It agrees more closely with the subjective quality score, that is, real human eye perception. Because structural similarity is computed locally, ssim can generate a map of quality over the image. The compression ratio is recorded for each experiment on different samples to know how many detailed (important) coefficients one can discard from the input data in order to preserve the critical information of the original data. The second experiment results show that image denoising and compression ratio have a positive correlation: the more the image is filtered, the higher the compression ratio achieved by the encoding algorithm. Comparing Experiments One and Two (Table 4 and Table 5), the compression ratio of the proposed enhanced algorithm exceeds that of the normal compression algorithm by 116% for filtered grayscale images. Even the execution time of the encoding algorithm improves when the unwanted signal is reduced. The third experiment proves that slicing an image into 8 bit planes enhances the compression ratio of the encoding algorithm. Comparing the normal LZW encoding and the proposed encoding algorithm, a clear difference is visible. Slicing and separating the image-defining bits in the plane can create a meaningful bit pattern, so that the algorithm can map as many bytes as possible to a single symbol in the codebook inside the dictionary.
The total compression ratio improvement of the enhanced algorithm with both proposed techniques is recorded as 160%. Besides the compression ratio enhancement, the three image quality metrics imply there is no loss of information during file size reduction. The compressed images are fully decoded to their original quality. The average values of SSIM, MSE, and PSNR are 1, 0, and 99, respectively. So, the enhanced encoding algorithm is fully lossless. Since the LZW encoding algorithm is not available in the MATLAB image processing toolbox, coding each required function was one of the tasks that consumed much of the thesis time. Getting a Bayer pattern image dataset is also another problem an image analyst faces in the field, because camera operators are not aware of the Bayer pattern sensor in their digital cameras.

CONCLUSION:
Even though the proposed enhanced LZW algorithm shows a dramatic improvement in compression ratio without sacrificing image information or image quality, the enhancement can still be extended. In real images, different natural noises occur. Such natural speckle and additive noises cannot be reduced using the proposed filtering techniques. The adaptive median filter and despeckling techniques can be used to reduce natural additive noise such as interference, camera misuse, and hacker manipulation. One can also improve the encoding algorithm's execution time by selecting a noise reduction technique suited to the noise type. The proposed techniques can also be used for Bayer pattern RGB or true color images. Enhancing the LZW encoding algorithm for high-dimensional or true color images using bit plane slicing is our next work. This can be realized with the help of image masking, concatenating the three color components with their bit planes.

ACKNOWLEDGEMENT:
I would like to express my gratitude to our students for their tireless support in completing this paper, and also to thank our colleagues for their support and advice toward the success of the present study.

CONFLICTS OF INTEREST:
The authors declare that there is no conflict of interest regarding the publication of this article.