Non-uniform Intelligent Down-sampling of Digital Curves for Efficient Compression

Abstract

A new method is proposed for the intelligent down-sampling of digital curves that provides efficient compression. The proposed down-sampling is non-uniform and is based on the spatial distribution of points on the digital curve (line diagram). The down-sampled points form a polyline or polygon which is an approximate representation of the input digital curve. The down-sampled points are determined using an optimization solver, such that the differential area between the original digital curve and the optimally generated down-sampled polyline or polygon is minimal. The mean percentage of execution time saved by our proposed method, compared to its nearest competitor, is found to be about 18%.

Keywords:
Digital Curves; Down-sampling; Pattern Search Algorithms; Error Area.

HIGHLIGHTS

A new method for intelligent down-sampling of curves is proposed

Proposed method is based on spatial distribution of points

Proposed method is computationally reliable and inexpensive

INTRODUCTION

In general, an analog time-varying signal is sampled above the Nyquist rate so that the original data can later be faithfully reconstructed while avoiding aliasing error. Down-Sampling (DS) is the process of representing the signal by fewer samples than the original number. DS introduces an error between the original signal and the reconstructed signal derived from the down-samples. This is demonstrated in Figure 1, where the original 1D signal with 20 uniform samples (Figure 1(a)) is down-sampled with eight non-uniformly spaced down-samples, as shown in Figure 1(b). The resulting ‘Area Error’ is shown in green. In this paper, we consider the DS of spatial curves in 2D, where the points on the curve are represented by their Cartesian (x-y) coordinates.

Figure 1
Uniform Samples and non-uniform down-samples of a signal segment in 1D
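To make the ‘Area Error’ concrete, the following is a minimal MATLAB sketch with toy values (the signal and the down-sample instants are illustrative and are not the data of Figure 1): it reconstructs the signal from the down-samples by linear interpolation and integrates the absolute difference.

    t  = 0:19;                          % 20 uniform sample instants
    s  = sin(2*pi*t/19) + 0.3*t/19;     % hypothetical original samples
    ti = [0 3 5 8 11 14 17 19];         % 8 non-uniformly spaced down-sample instants
    si = s(ti+1);                       % original values retained at those instants
    sr = interp1(ti, si, t, 'linear');  % piecewise-linear reconstruction from the down-samples
    areaError = trapz(t, abs(s - sr));  % the shaded 'Area Error'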

Digital Curves

In computer graphics, a curve is represented by a series of closely spaced points forming an identifiable shape. The sequence of these closely spaced points forms the Digital Curve (DC). The closer the spacing, the better the resolution, and normal human vision perceives the curve as continuous. Thus, a finite number of discrete points represents a continuous mathematical curve to form the corresponding DC. In the case of 2D digital curves in the Cartesian plane, each point on the DC is represented by its x and y coordinates.

Spatial Down-Sampling

Spatial Down Sampling (SDS), of a digital curve is the process of representing it with a smaller number of samples to suit the desired objective. The down-sampled polyline or polygon is an approximate representation of the original curve with reduced resolution. Spatial down-sampling is adopted in the case of contour lines and line diagrams to remove redundancy and to achieve consequent compression that reduces the storage space. SDS also provides higher speed in shape processing operations like the calculation of perimeters and enclosed areas, shape recognition/classification, and so on [1-9].

Uniform vs Non-Uniform Down-sampling

In Uniform Down-Sampling (UDS), the spatial separation between successive sample points is kept constant throughout the digital curve. The spatial separation can be expressed in terms of the number of original sampling intervals or of Euclidean distances. On the other hand, in Non-uniform Down-Sampling (NDS), the increment between successive sample points along the digital curve varies depending on its curvature. The density of samples is kept higher in those regions where significant features are present, as indicated by sharper curvatures. Even though UDS is easier to implement, the accuracy of representation is better with NDS. In the proposed work, NDS is intentionally adopted to match the variations in curvature so that the overall approximation error is kept at a minimum.
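For example, with M = 81 points on the curve and N = 9 retained samples (the values used later in Example 1), UDS keeps every tenth point (indices 1, 11, 21, …, 81), whereas an NDS scheme may cluster several of its nine points around a sharp bend and space them widely along nearly straight stretches.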

Main Contribution of the Study

A new method designated as the Intelligent Non-uniform Down-Sampling (INDS) scheme is presented that minimizes the total deviational error between the original digital curve and its polyline or polygonal approximation obtained by down-sampling the original curve. The proposed scheme is implemented based on Solver-based Constrained Minimization.

The remaining part of the paper is organized as follows. Section 2 briefly reviews related works, and Section 3 presents the preliminary symbols and notations along with the basic principle. In Section 4, the actual algorithms are presented. Section 5 gives the experimental results and the evaluation of this work compared to other similar methods. Section 6 contains the conclusion and the future scope.

LITERATURE REVIEW

In [1], the authors have adopted a Multi-Objective Genetic Algorithm to find the polygonal approximation of a given closed curve. In their method, the optimization algorithm places the down-sample points on the given digital curve by minimizing the approximation error, and the approach is found to be superior to many classical methods. In [2], the authors have proposed an optimal technique using sequential search to determine the down-sampled points of a closed curve. The method selects the best possible initial point and continues the optimal search along the curve. It finds a solution very close to the optimal one, but the time taken is relatively high compared to global optimization methods. In [3], a new technique for locating sample points on contour lines with irregular stroke order has been presented. Here, the sample points are properly placed on such contours using a suitable optimization scheme; the basic idea is to consider the vertex degree and edge length, in the graph-theoretic sense, for placing the sample points. The authors have claimed that their method provides the best placement of sample points for the retrieval of the original contour lines. The disadvantage of this method is the need for the graphical construction of the contour lines from the raster curve and the use of an iterative method for locating the optimal vertices of the down-sampled polyline, which is computationally expensive.

In [4], the authors have demonstrated the application of the Genetic Algorithm (GA) for realizing the polygonal approximation of digital planar curves. In this method, a priori information about the order of vertices of the original curve need not be given, which enables the efficient approximation of both closed and open curves. The authors have used six components for the fitness function of the GA. In [5], planar digital curves are approximated by their polygonal equivalents using the ‘Triangle Suppression’ technique. In this method, special cut points enclosing higher-curvature segments are selected for down-sampling. The proposed method is advantageous where the curvatures of the contour lines change rapidly. However, in this method, the determination of curvatures along the contour lines is computationally expensive.

In [6], the authors have used Monte Carlo (MC) optimization to determine the polygonal approximation of digital planar curves. They have applied the split-and-merge technique of the MC method for an optimized local search. The authors have claimed that their scheme is superior to the Genetic Algorithm and PSO (Particle Swarm Optimization) methods in determining the polygonal approximation, with lower computational overhead. However, this method cannot provide full global optimization. In [7], the polygonal approximation of a digital curve is derived based on its curvature. This algorithm, designated Direction Change-based Polygonal Approximation (DCPA), has linear time complexity. In DCPA, redundant dominant points are eliminated by bidirectional scanning of the target curve; however, global optimality of the polygonal approximation is not assured, as the selection of dominant points is carried out segment-wise.

In [8], the author has applied the PSO algorithm to generate the approximate polygon of a given digital planar curve. Here, the PSO method is enhanced using an embedded local optimizer. The author has claimed that the proposed enhanced PSO method is superior to other similar methods in terms of compression ratio and approximation accuracy. However, this method is computationally expensive when the approximate polygon is large and concave. In [9], a polygonal approximation of a digital curve is used to fit the shape of plant leaves to identify the corresponding botanical classes. The scheme utilizes the minimum Euclidean distance criterion between a sample point and the edge of the partially built down-sampled curve. In [10], polygonal approximation of a digital curve is carried out by introducing a new metric, namely the "significant measure." This approach preserves essential shape features such as sharp turns, which are often lost in conventional techniques that rely on simple distance-based metrics. The method works iteratively, starting with initially selected dominant points derived using a Freeman chain code. The contribution of each point is measured by its projection onto the line segment formed by neighboring dominant points, and, based on the position of the projection, additional significant points are calculated. This helps retain critical features of the curve, like sharp corners, while smoothing out less important details. However, due to its iterative nature, the method is computationally expensive. In [11], the authors present a new approach to approximating polygonal shapes using a combination of line simplification and smoothing techniques. This improves the accuracy of polygonal representations of digital curves while minimizing the number of vertices, a common challenge in fields like computer graphics, geographic information systems (GIS), and pattern recognition. The proposed method reduces the number of line segments in a polygonal representation, maintaining key shape features while smoothing sharp angles to avoid overfitting. The authors have shown that their approach outperforms traditional methods by offering a balance between simplification and the preservation of essential geometric features of the digital curves. In [12], instead of searching for dominant points directly on the boundary curve of a digital picture, the approach first removes the pseudo-redundant points that do not aid shape retention and then uses the remaining high-curvature points to construct the polygonal approximation. The method obtains initial segmentation points by chain-code assignment and computes the curvature at each initial pseudo point from the sum of squares of deviation, using integer arithmetic. For each initial pseudo point, the deviation incurred by all boundary points lying between its preceding and succeeding pseudo points is taken into account. The method then eliminates, from the set of initial segmentation points, the redundant point with the lowest curvature deviation, and the deviation values of the neighboring pseudo points are recalculated. The effectiveness of the approach is demonstrated both quantitatively and qualitatively.

SYMBOLS, NOTATIONS, AND THE BASIC PRINCIPLE

Input Digital Curve

The 2D input Digital Curve is designated by D(P), where P represents a series of M adjacent points as,

(1) P = [p(1), p(2), …, p(i), …, p(M)]

The x-y coordinates of point p(i) are given by [a(i); b(i)] for i = 1 to M. Thus,

(2) p(i) = [a(i); b(i)]

Now, the point sequence P can be represented by a matrix of size 2×M as,

(3) P = [a; b] = [a(1), a(2), …, a(i), …, a(M); b(1), b(2), …, b(i), …, b(M)]

In (1), p(1) is the starting point and p(M) is the ending point of the DC. The curve D(P) can be an open curve or a closed curve where the first point and the last point are the same {p(1) = p(M)}. In the proposed INDS scheme, D(P) is taken as a single-trace curve. That is, the curve is drawn without lifting the drawing pen. Here, the curve can have loops but cannot have disconnected segments. To show the difference, a few examples of single-trace and multi-trace curves are shown in Figures 2(a) and 2(b). As a digital curve, D(P) is a polyline or polygon formed by joining the adjacent points p(i) and p(i+1) for all i’s.

Figure 2
Single-Trace and Multi-Trace curves

Down Sample Points

The downsampled curve is designated by C(Q) where Q represents the sequence of vertices that make up the curve (which is a polyline or polygon) as,

(4) Q = [q(1), q(2), …, q(j), …, q(N)]

Here, N is the total number of points in curve C(Q), and q(j)’s, the vertices for j = 1 to N, are the downsampled points. Because of down-sampling, N is less than M. The x-y coordinates of q(j) are represented by u(j) and v(j) respectively as,

(5) q(j) = [u(j); v(j)]

Now, the point sequence Q can be represented by a matrix of size 2×N as,

(6) Q = [u(1), u(2), …, u(j), …, u(N); v(1), v(2), …, v(j), …, v(N)]

In general, with down-sampling, the separation between successive q(j)’s is relatively large, and consequently, the down-sampled curve appears as a polyline or polygon C(Q) that can be smoothed using Bezier curves or cubic splines.
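As an illustration of this smoothing step, the following is a minimal MATLAB sketch with toy vertices (the down-sampled vertices Q and the chord-length parameterization are illustrative assumptions; Bezier fitting would be an equally valid choice):

    Q  = [0 2 5 7 9; 0 3 4 2 0];                % toy down-sampled vertices (2-by-N)
    u  = Q(1,:);  v = Q(2,:);
    t  = [0 cumsum(hypot(diff(u), diff(v)))];   % chord-length parameter along the polyline
    tf = linspace(t(1), t(end), 200);           % dense parameter values
    uf = spline(t, u, tf);                      % smoothed x-coordinates (cubic spline)
    vf = spline(t, v, tf);                      % smoothed y-coordinates (cubic spline)
    plot(u, v, 'o-', uf, vf, '-')               % polyline versus its smoothed version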

The process of optimal down-sampling is to determine these q(j)’s to get C(Q), which is as close to D(P) as possible. That is, the deviation between C(Q) and D(P) should be minimal. Thus, C(Q) approximates D(P) with a lower resolution. Based on this requirement, the following constraints are imposed while determining q(j)’s.

  • The starting and ending points of C(Q) are selected to be the same as those of D(P). That is,

    (7) q(1) = p(1), q(N) = p(M)

  • The q(j)’s are selected from among p(i)’s. That is, Q is a proper subset of P as,

    (8) Q ⊂ P

    The direction of traversal of C(Q) is taken the same as that of D(P).

  • The down sample points, q(j)’s are determined such that the differential area (error area) between D(P) and C(Q) is minimized.

Basic Principle

To explain the basic principle, consider an open DC of 81 original sample points, as shown in Figure 3(a). Let us approximate this DC by nine down sample points q(1) to q(9), as shown in Figure 3(a). Here, the q(j)’s are chosen with equal separations spanned by ten original samples.

Figure 3
Open DC and its 9-point down-sampled approximation

Thus, in Figure 3(a), the down-sampled curve C(Q) represents uniform DS. Here, the differential area, shown in magenta, is not optimal: the error area can be reduced by an alternative, optimal choice of q(j)’s, as shown in Figure 3(b). In Figure 3(b), the original sample points covered by the different C(Q) segments are not the same as those in Figure 3(a). Thus, the down-sampling is non-uniform but optimal. In the proposed INDS scheme, the q(j)’s are determined using constrained optimization.

Solver-based Constrained Minimization

In standard programming languages (C++, Java, Python, etc.), several well-known built-in optimization solvers are available to minimize (or maximize) a given objective function under specified constraints. In our proposed scheme, the down-sampling process is carried out using Solver-Based Minimization (SBM), which minimizes the area between D(P) and C(Q). In INDS, the MATLAB function patternsearch [13] is used to determine the q(j)’s such that the Error Area (EA) between D(P) and C(Q) is minimized.

Problem Formulation

In INDS, the input digital curve D(P) and the number of down-sample points N are given. From (7), q(1) = p(1) and q(N) = p(M). Then, the q(j)’s for j = 2 to N‒1 have to be selected from the p(i)’s, where the range of i is from 2 to M‒1. The selection of q(j)’s for j = 2 to N‒1 in terms of p(i)’s is represented as,

(9) q(j) = p(x(j))

Here, q(j) is selected as the x(j)th element of P where x(j)’s are the decision variables to be determined for minimum Error Area. Equation (9) is the index format representation of constraint (8). The sequence of decision variables is represented by x as,

(10) x = [x(1), x(2), …, x(j), x(j+1), …, x(N)]

The mapping from Q [the series of q(j)’s] to P [the appropriate p(i)’s] can be represented by a bipartite graph with several nodes from the p(i)’s left unconnected, as shown in Figure 4, where the selected points from P are shown in red.

Figure 4
Mapping from Q to P

From the mapping, it is evident that

(11) x(1) < x(2) < … < x(j) < … < x(N‒1) < x(N)

Since x(j)’s are the indices for p(…)’s, x(j)’s have to be integers in the range 1 to M. The strict inequality (11) can be restated in the form of non-strict inequality as,

(12) x(j+1) ≥ x(j) + 1

for j = 1 to N‒1. Additionally, from (9) and (7),

(13) x(1) = 1, x(N) = M
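For illustration (this encoding is our reading of (12) and (13), not a listing from the original algorithm), the constraints can be written in the linear form A·x ≤ b and Aeq·x = beq accepted by MATLAB solvers such as patternsearch; the values M = 81 and N = 9 are those of the curve in Figure 3:

    N    = 9;  M = 81;                                          % down-samples and curve points (Figure 3)
    A    = [eye(N-1) zeros(N-1,1)] - [zeros(N-1,1) eye(N-1)];   % row j gives x(j) - x(j+1)
    bineq = -ones(N-1,1);                                       % A*x <= bineq enforces x(j+1) >= x(j) + 1, Eq. (12)
    Aeq  = zeros(2,N);  Aeq(1,1) = 1;  Aeq(2,N) = 1;            % fixes the first and last indices
    beq  = [1; M];                                              % x(1) = 1 and x(N) = M, Eq. (13)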
Representation of Error Area

The total Error Area between D(P) and C(Q), in terms of the x(j)’s, is computed as follows. Consider the polygon formed by the consecutive vertices p(x(j)), p(x(j)+1), p(x(j)+2), …, p(x(j+1)), p(x(j)), as shown in Figure 5.

Figure 5
Polygon(j) formed by the segments from p(x(j)) to p(x(j+1)) and back to p(x(j))

Figure 6
Uniform and optimal sampling of the digital curve

In Figure 5, it should be noted that the last vertex p(x(j+1)), which is a part of D(P), is connected back to the first vertex p(x(j)) through the line segment [q(j+1), q(j)] of C(Q).

From (9), and as shown in Figure 5, it can be observed that q(j) = p(x(j)) and q(j+1) = p(x(j+1)). This specific polygon is designated as Polygon(j), as it starts at point p(x(j)). For an open DC with N down-sample points, the number of polygons formed is (N‒1), as can be seen from Figure 3, whereas for a closed curve it would be N. Therefore, for an open DC, the range of Polygon(j) is from j = 1 to (N‒1).

Area of a Polygon

Given the coordinates of the vertices, the area of the polygon can be determined using the standard formula [14]. In this paper, the polygon area is calculated using the built-in Matlab function polyarea(VX,VY) [15], where VX and VY are the arrays of the x-y coordinates of the vertices of the polygon. Let the total number of vertices of Polygon(j) be L. Then, the x-y coordinates of the vertices of Polygon(j) are represented as,

(14) VX{j} = [vx(1), vx(2), …, vx(k), …, vx(L)]; VY{j} = [vy(1), vy(2), …, vy(k), …, vy(L)]

In (14), (vx(1), vy(1)) are the x-y coordinates of p(x(j)).

Therefore, in terms of the given x-y coordinates of D(P) as given by (2) and (3),

(15) (vx(1), vy(1)) = (a(x(j)), b(x(j)))

Similarly, (vx(L), vy(L)) are the x-y coordinates of p(x(j+1)), expressed as,

(16) (vx(L), vy(L)) = (a(x(j+1)), b(x(j+1)))

In general, (vx(k), vy(k)), the x-y coordinates of p(x(j)+k‒1), can be expressed as,

(17) (vx(k), vy(k)) = (a(x(j)+k‒1), b(x(j)+k‒1))

In light of (15), (16), and (17), the x-coordinates in Equation (14) can be rewritten as,

(18) VX{j} = [a(x(j)), a(x(j)+1), a(x(j)+2), …, a(x(j+1))]

Thus, VX{j} is formed by the consecutive x-coordinates of D(P) from a(x(j)) to a(x(j+1)). In terms of the colon operator, VX{j} can be expressed as,

(19) VX{j} = a(x(j) : x(j+1))

Similarly,

(20) VY{j} = b(x(j) : x(j+1))

Thus, VX{j} and VY{j} can be expressed in terms of the decision variables [x(1), x(2), …, x(j), …, x(N)]. Now, the Error Area EA(j) of Polygon(j) can be expressed as,

(21) EA(j) = polyarea(VX{j}, VY{j})

Then, the total Error Area (TEA) is given by,

(22) TEA = Σ_{j=1}^{N‒1} EA(j)

For a closed DC, the upper limit of the summation would be N. In our proposed INDS method, the objective function to be minimized is TEA, as specified by (22). From (22), it is clear that TEA is a function of the decision vector x.
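As a quick, self-contained check of the built-in function used in (21), with purely illustrative vertex values:

    VX = [0 4 0];  VY = [0 0 3];   % a right triangle with legs 4 and 3 (toy vertices)
    EA = polyarea(VX, VY)          % returns 6, i.e. (1/2)*4*3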

INTELLIGENT NON-UNIFORM DOWN-SAMPLING

The Intelligent Non-uniform Down-Sampling (INDS) algorithm determines the optimal down-sampled polygon C(Q) from the given digital curve D(P). The down-sampled C(Q) is formed by its vertex sequence Q of q(j)’s for j = 1 to N, as specified by (4). Individual q(j)’s are obtained as q(j) = p(x(j)) {see Eq. (9)}, where the decision vector x = [x(1), x(2), …, x(j), x(j+1), …, x(N)] is determined using the optimization solver patternsearch(…) that minimizes the objective function TEA given by (22), subject to the constraints given by (12) and (13). The solver patternsearch(…) returns the decision vector x, whose data type is double. But in INDS, the elements of x are used as indices {as given by (9)}. Therefore, the double-valued x is converted to its equivalent integer vector using the round(x) function.

Formulation of the Objective Function TEA

The objective function is designated get_TEA(), whose output (return value) is TEA. The objective function is formed in terms of the coordinates of the vertices of D(P), as given by (3), and the decision vector x, whose length is N. The input arguments to get_TEA() are the vectors x, a, b and the scalar N. The function is formulated as follows.

function TEA = get_TEA(x, a, b, N)
    x = round(x);                        % to use the elements of x as indices
    TEA = 0;
    for j = 1:N-1                        % for a closed curve, use N instead of N-1
        VX{j} = a(x(j):x(j+1));          % x-coordinates of the vertices of Polygon(j), Eq. (19)
        VY{j} = b(x(j):x(j+1));          % y-coordinates of the vertices of Polygon(j), Eq. (20)
        EA(j) = polyarea(VX{j}, VY{j});  % Error Area of Polygon(j), Eq. (21)
        TEA = TEA + EA(j);               % realization of Eq. (22)
    end                                  % end of the loop; TEA is ready
end

Using this objective function, the patternsearch(…) solver gives the optimal decision vector x. From x, the q(j)’s are obtained according to (9), and thus the optimal C(Q) that minimizes TEA is obtained.
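For completeness, the following is a hedged, self-contained sketch of how the solver call might look; the toy test curve, the uniform-index initial guess, and the variable names are illustrative assumptions, get_TEA is the listing above (saved on the MATLAB path), and patternsearch requires the Global Optimization Toolbox [13]:

    t   = linspace(0, pi, 81);                % parameter for a toy open digital curve
    a   = 40*cos(t) + 40;  b = 30*sin(t);     % x- and y-coordinates of D(P), M = 81
    M   = numel(a);  N = 9;                   % curve size and number of down-samples
    x0  = round(linspace(1, M, N));           % uniform indices as a feasible starting point
    fun = @(x) get_TEA(x, a, b, N);           % objective function, Eq. (22)
    A   = [eye(N-1) zeros(N-1,1)] - [zeros(N-1,1) eye(N-1)];   % encodes (12): x(j+1) >= x(j) + 1
    bineq = -ones(N-1,1);
    Aeq = zeros(2,N);  Aeq(1,1) = 1;  Aeq(2,N) = 1;            % encodes (13): x(1) = 1, x(N) = M
    beq = [1; M];
    lb  = ones(1,N);  ub = M*ones(1,N);       % indices confined to the range 1..M
    x   = round(patternsearch(fun, x0, A, bineq, Aeq, beq, lb, ub));
    Q   = [a(x); b(x)];                       % vertices of the optimal C(Q), Eq. (9)

The returned x is rounded because patternsearch treats the decision variables as continuous, exactly as done in the INDS algorithm above.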

SIMULATION RESULTS

In computer graphics, a digital curve is actually a polyline or polygon with closely located vertices joined by line segments. Thus, the original digital curve D(P) to be down-sampled is made up of M vertices whose x-y coordinates are specified as in (2) and (3).

Example 1

The input digital curve D(P) has 81 closely spaced vertices, as shown in Figure 6(a); that is, in this example M = 81. In Figure 6(a), uniform sampling is used to get the down-sampled polyline C(Q), whose vertices q(j) are separated by 10 vertices of D(P) throughout, as shown in Table 1. The number of down-sample points is N = 9.

Table 1
Uniform Sampling with M = 81

In uniform sampling, the TEA value in terms of pixels is found to be TEA(uniform) = 11,874. The optimal sample points obtained using the solver patternsearch(…) and the corresponding q(j)’s are shown in Figure 6(b). The respective numerical values are shown in Table 2; here, the non-uniform distribution of the values of the x(j)’s can be observed compared to that of Table 1. With the optimal x, the minimized TEA is found to be TEA(opt) = 7,814.

Table 2
Intelligent down-sampling for minimum Total Error Area

Compression Ratio

In INDS, the original 2D curve D(P) has M vertices specified by their x-y coordinates. Let the data size used for each coordinate be L bytes. Then, the data size of D(P) is 2*M*L. After down-sampling, the resulting C(Q) has N vertices, which are a subset of those of D(P). Therefore, the data size of C(Q) is 2*N*L. Hence, the Compression Ratio (CR) is,

(23) CR = Data size of D(P) / Data size of C(Q) = (2*M*L) / (2*N*L) = M/N

Thus, the smaller the value of N, the higher the Compression Ratio.
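For instance, for the curve of Example 1 down-sampled with N = 9, CR = 81/9 = 9, so the down-sampled representation needs only one-ninth of the original storage.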

Error vs Compression Ratio

The CR value can be increased by reducing N, the number of vertices of C(Q) that approximates the given D(P); the error between D(P) and C(Q) then increases. Thus, the error metric TEA(opt) increases as the CR value is increased by reducing N. Example 2 demonstrates this aspect. Hereafter, when there is no ambiguity, we use just TEA instead of TEA(opt) to represent the optimal Total Error Area.

Example 2

Here, the original curve D(P) is the same as that used in Figure 6, with M = 81. The value of N is decreased from 11 to 6, and the optimal C(Q)’s obtained using our INDS method for down-sampling are shown in Figures 7(a) to 7(f). The corresponding TEA’s and CR’s are shown in Table 3.

Table 3
CR and TEA versus N

Figure 7(a)
Optimal C(Q) with N = 11

Figure 7(b)
Optimal C(Q) with N = 10

Figure 7(c)
Optimal C(Q) with N = 9

Figure 7(d)
Optimal C(Q) with N = 8

Figure 7(e)
Optimal C(Q) with N = 7

Figure 7(f)
Optimal C(Q) with N = 6

The graph of TEA and CR versus N is shown in Figure 8. From the graph of Figure 8, it can be seen that TEA and CR values increase as N decreases. The graph of TEA vs N depends on the nature of the original digital curve D(P).

Figure 8
TEA and CR versus N

Comparison with OSE Method

Ose and coauthors (2017) [3] have used an iterative method for obtaining the optimal vertices of C(Q); this method is designated here as the OSE method. Its major drawback is its high computational overhead. The execution times of the INDS and OSE methods are compared in Example 3.

Example 3

In this case, the input closed digital curve D(P) has M = 240 sample vertices. D(P) is down-sampled using the INDS and OSE methods for N down-sample vertices, where N is varied from 10 to 24 in steps of 2. The graphs of execution time versus N for the INDS and OSE methods are shown in Figure 9.

Figure 9
Comparison of execution times versus N for the INDS and OSE methods

In Figure 9, it can be seen that the execution time of our INDS method is substantially less than that of the OSE method. Since the execution time values are machine-dependent, the mean percentage of time saved using the INDS method, compared to the OSE method, is found to be about 18%.

CONCLUSION

An intelligent method for the spatial down-sampling of digital curves has been presented. The method uses the patternsearch algorithm, a derivative-free optimization method, to determine the locations of the down-sample points so as to obtain the optimal polyline or polygonal approximation of the input digital curve. The proposed method reduces the number of sample points required to represent the given dense digital curve without affecting its visual representation. Thus, data compression is achieved and the required storage memory is substantially reduced. Experimental results show that, on average, the execution time saved by our proposed method is about 18% compared to that of its nearest competitor. The proposed method can be extended to multi-trace curves and 3D surfaces. The storage and display of multilevel contour maps of large uneven geographical regions can be accomplished efficiently. Additionally, our method is useful in computer graphics, geographic information systems (GIS), and pattern recognition.

  • Funding:
    This research received no external funding.

Acknowledgments:

Not Applicable

REFERENCES

  • 1 Locteau H, Raveaux R, Adam S, Lecourtier Y, Heroux P, Trupin E. Approximation of Digital Curves using a Multi-Objective Genetic Algorithm. In: 18th International Conference on Pattern Recognition (ICPR'06); 2006 Aug 20-24; Hong Kong, China: IEEE. 2006. p. 716-719
  • 2 Kolesnikov A, Fränti P. Polygonal approximation of closed discrete curves. Pattern Recognition. 2007 Apr;40(4):1282-93.
  • 3 Ose K, Iwata K, Suematsu N. A sampling method for processing contours drawn with an uncertain stroke order and number. In: 2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA); 2017 May 08-12; Nagoya, Japan: IEEE. 2017. p. 468-471
  • 4 Paola B, Victor A. Polygonal approximation of digital curves using genetic algorithms. In: 2012 IEEE International Conference on Industrial Technology; 2012 Mar 19-21; Athens, Greece; IEEE. 2012. p. 254-259
  • 5 Parvez MT, Mahmoud SA. Polygon approximation of planar curves using triangle suppression. In: 10th International Conference on Information Science, Signal Processing and their Applications (ISSPA 2010); 2010 May 10-13; Kuala Lumpur, Malaysia: IEEE. 2010. p. 622-625
  • 6 Zhou X, Shang Y, Lu J. Polygon approximation of digital planar curves via hybrid Monte Carlo optimization. IEEE Signal Process. Lett. 2013 Feb;20(2):125-8.
  • 7 Liu H, Zhang X, Rockwood A. A direction change-based algorithm for polygon approximation. In: 21st International Conference on Pattern Recognition (ICPR2012); 2012 Nov 11-15; Tsukuba, Japan. IEEE; 2012. p. 3586-9.
  • 8 Yin PY. A discrete particle swarm algorithm for optimal polygonal approximation of digital curves. J. Vis. Commun. Image Represent. 2004 Jun;15(2):241-60.
  • 9 Kalengkongan WW, Silalahi BP, Herdiyeni Y, Douady S. Landmark analysis of leaf shape using dynamic threshold polygonal approximation. In: 2015 International Conference on Advanced Computer Science and Information Systems (ICACSIS); 2015 Oct 10-11; Depok, Indonesia. IEEE; 2015.
  • 10 Ramaiah M, Prasad DK. Polygonal approximation of digital planar curve using novel significant measure. In: Volosencu C, Küçük S, Guerrero J, Valero O, editors. Automation and Control. IntechOpen; 2020. p. 287-92.
  • 11 Sánchez PB, Cruz IM, Macedo MRG. A robust method for polygonal approximation by line simplification and smoothing. In: 2023 Mexican International Conference on Computer Science (ENC); 2023 Sep 11-13; Guanajuato, Mexico. IEEE; 2023. p. 1-8.
  • 12 Ramaiah M, Ravi V, Chandrasekaran V, Mohanraj V, Mani D, Maruthamuthu A. An efficient iterative pseudo point elimination technique to represent the shape of the digital image boundary. Multimed. Tools Appl. 2024. doi: 10.1007/s11042-024-20183-1.
    » https://doi.org/10.1007/s11042-024-20183-1.
  • 13 MathWorks. Find the minimum of a function using pattern search [Internet]. Natick, MA: The MathWorks, Inc.; c2023 [cited 2023 Nov 2]. Available from: https://in.mathworks.com/help/gads/patternsearch.html
    » https://in.mathworks.com/help/gads/patternsearch.html
  • 14 Braden B. The surveyor’s area formula. Coll. Math. J. 1986 Apr;17(4):326-37.
  • 15 MathWorks. Area of polygon [Internet]. Natick, MA: The MathWorks, Inc.; c2023 [cited 2023 Nov 2]. Available from: https://in.mathworks.com/help/matlab/ref/polyarea.html
    » https://in.mathworks.com/help/matlab/ref/polyarea.html

Publication Dates

  • Publication in this collection
    24 Mar 2025
  • Date of issue
    2025

History

  • Received
    14 Aug 2024
  • Accepted
    27 Nov 2024