A Locally-Continuous Meshless Local Petrov-Galerkin Method Applied to a Two-point Boundary Value Problem

joaquimb@dc.ufc.br *Corresponding author

https://doi.org/10.1590/1679-78256021

Abstract

In recent years, the Meshless Local Petrov-Galerkin (MLPG) Method has attracted the attention of many researchers for solving several types of boundary value problems. This method is based on a local weak form, evaluated in local subdomains, and does not require any mesh, either in the construction of the test and shape functions or in the integration process. However, the shape functions used in MLPG have complicated forms, which makes their computation and their derivatives' computation costly. In this work, using the Moving Least Squares (MLS) Method, we dissociate the point where the approximating polynomial's coefficients are optimized from the points where its derivatives are computed. We argue that this approach not only is consistent with the underlying approximation hypothesis, but also makes the computation of derivatives simpler. We apply our approach to a two-point boundary value problem and perform several tests to support our claim. The results show that the proposed model is efficient, achieves good precision, and is an attractive choice for this class of problems.


INTRODUCTION
Engineers and scientists often need to solve and simulate physical problems for which analytical solutions do not exist. Therefore, numerical methods are used to approximate those solutions. Among the existing numerical methods, the Finite Element Method (FEM) is one of the most widely used.
However, despite its great applicability, the FEM might have some drawbacks, especially due to its dependence on a good-quality mesh for delivering accurate approximations. Constructing such meshes may require either an intense human intervention or complex automated meshing techniques, which are highly expensive and complicated to perform in 3D domains. Also, sometimes complex remeshing techniques need to be used in the analyses of large deformation and fracture propagation problems (Liu, 2009;Sladek et al., 2013). In order to free the analyses from the problems associated with mesh generation, meshless methods have been developed, in which the domain of the problem is represented through a set of scattered points (nodes), without any explicit connection between them. Thus, in meshless methods, it is simple to include or to remove points, whenever necessary, during iterative computations.
Several meshless methods have already been proposed: smooth particle hydrodynamics (SPH) (Gingold and Monaghan, 1977); element free Galerkin (EFG) (Belytschko et al., 1994); reproducing kernel particle method (RKPM) (Liu et al., 1995); partition of unity finite element method (PUFEM) (Babuska and Melenk, 1997); natural element method (NEM) (Sukumar et al., 1998). However, as Atluri and Zhu (1998) pointed out, those are not truly meshless methods, since they make use of a background mesh for integrating the global weak form. Atluri and Zhu (1998) proposed the Meshless Local Petrov-Galerkin (MLPG) Method. That method is based on a local weak form, which is evaluated in local subdomains of simple forms, such as line segments, circles, squares and spheres. The method does not use a mesh at all, neither in the construction of test and shape functions nor in the integration process. That is why the authors called it a truly meshless method.
The MLPG has already been used to solve various types of boundary value problems (Amini et al., 2018; Han and Atluri, 2004; Hu and Sun, 2011; Kamranian et al., 2017; Liu et al., 2011; Sheikhi et al., 2019; Zhang et al., 2006). However, in developing those formulations, the authors broke the underlying consistency with the Moving Least Squares assumptions, which, in our view, led to shape functions with unduly complex forms, making their computation and their derivatives' computation quite costly (Liu, 2009; Mirzaei and Schaback, 2013). We believe that such complexity is unnecessary if the formulation maintains consistency with the basic assumptions. Thus, in this work, we present a Least-squares-consistent, displacement-based Meshless Local Petrov-Galerkin formulation and apply it to the analysis of a two-point boundary value problem.
The remainder of this paper is organized as follows. In Section 2, we introduce the Least-square-consistent displacement-based MLPG, describing the problem to be addressed, the computation of the shape functions, the computation of the shape functions' derivatives, the choice of the test functions, the numerical integration and the enforcement of the essential boundary conditions. In Section 3, we present some tests. Finally, in Section 4, we present our conclusions.

LOCALLY-CONTINUOUS MLPG (LC-MLPG) FORMULATION
In this section, we explain the LC-MLPG formulation (see the overview shown in the Graphical Abstract), focusing on how we propose to compute a shape function's derivative. To achieve this goal, we consider a two-point boundary value problem.

Boundary Value Problem
Consider the ordinary differential equation that governs the problem of a cable, under constant tension T (small deflection theory), subjected to a transverse force per unit length f(x). The cable is embedded in a medium that provides a stiffness k(x) to its transverse displacement u(x):

T u''(x) − k(x) u(x) + f(x) = 0.   (1)

The domain Ω of the problem is one-dimensional, with 0 < x < L, and its boundary Γ consists of the two end points x = 0 and x = L (Fig. 1). The following essential boundary conditions are assigned at the endpoints:

u(0) = 0,   u(L) = 0.   (2)

Local Weak Form
In the MLPG method, the domain and the boundary of the problem are covered with an arbitrary number of scattered nodes. Each node i has a local subdomain, Ω_i, with its local boundary, Γ_i (Fig. 2). In two- and three-dimensional cases, subdomains can have arbitrary shapes. However, in one-dimensional cases, the subdomains are line segments whose union must cover the entire domain of the problem (Atluri and Zhu, 1998).

Figure 2:
The domain of the cable is covered by nodes scattered arbitrarily. Every node has a local subdomain Ω with its local boundary Γ. Ω_i and Ω_j represent, in that order, the local subdomains of nodes i and j, and their boundaries are Γ_i and Γ_j.
The generalized local weak form of the differential equation (Eq. (1)) over the local subdomain Ω_i of a node i can be written as

∫_{Ω_i} [T u''(x) − k(x) u(x) + f(x)] v_i(x) dx = 0,   (3)

where v_i(x) is the test function associated with node i. Integrating the first term by parts, Equation (3) can be expanded as

[T u'(x) v_i(x)] evaluated from x_L to x_R − ∫_{x_L}^{x_R} T u'(x) v_i'(x) dx − ∫_{x_L}^{x_R} k(x) u(x) v_i(x) dx + ∫_{x_L}^{x_R} f(x) v_i(x) dx = 0,   (4)

where x_L and x_R are, respectively, the coordinates at the beginning (left) and at the end (right) of subdomain Ω_i.

Trial Functions
The trial functions are local approximations of the true solution in a given subregion of arbitrary shape of the problem's domain. In this work, we use the moving least squares (MLS) to compute the trial functions' approximations, for its accuracy and for the simplicity with which it extends to problems in higher dimensions. Because of those properties, MLS is commonly used in the literature (Atluri and Shen, 2002; Atluri and Zhu, 1998; Han et al., 2005; Atluri et al., 1999a; Atluri and Zhu, 2000).
Let u^h(x, x̄) be the approximation of the true solution u(x) at a point x in the subregion defined in the vicinity of point x̄. MLS defines u^h(x, x̄) as the following polynomial approximation

u^h(x, x̄) = p^T(x) a(x̄),   (6)

where a(x̄) is the coefficient vector, which should be determined in such a way that the polynomial approximation is optimal, in the Least Squares sense, in the vicinity of x̄, and p(x) is the polynomial basis vector of size m. In a one-dimensional problem, m is equal to n + 1, where n is the degree of the approximating polynomial (Atluri and Shen, 2002). For example, if n = 2, then m = 3 and p^T(x) = [1  x  x²].
Suppose that, for the construction of the polynomial approximation in the vicinity of x̄, we consider a set of N nodes in that vicinity (Fig. 3). Thus, for a given node x_J in that set, the difference between its exact displacement u(x_J) and its approximate displacement u^h(x_J, x̄) is

e_J = u(x_J) − u^h(x_J, x̄).   (7)

However, since the exact displacement u(x_J) is not known, we replace it with an unknown pseudo-displacement û_J to be determined, and Equation (7) is rewritten as

e_J = û_J − u^h(x_J, x̄).   (8)

Substituting Equation (6) into Equation (8) yields

e_J = û_J − p^T(x_J) a(x̄).   (9)

Considering the vector of errors for the N nodes in the vicinity of x̄,

e = û − P a(x̄),   (10)

we want to find the coefficients of the approximating polynomial such that the L2 norm of the error vector is minimized. However, instead of using the vector of errors of Equation (10), we construct a vector of weighted errors, to attribute more importance to the nodes that are closer to x̄ (Fig. 3). Thus, for node x_J, the weight is defined as

w_J = w(d_J),   (11)

where w(d) is a bell-shaped weight function on the radial distance, d_J = |x_J − x̄|, from point x̄. Therefore, the vector of weighted errors is written as

e_w = W (û − P a(x̄)),   (12)

where W is the diagonal matrix of the weights w_J.

Figure 3: Influenced nodes in the vicinity of x̄: nodes 1, 2 and 3. Node 2 has the strongest influence on x̄ and node 1 has the least influence on x̄.
In order to find the coefficients a(x̄) of the approximating polynomial, we minimize the squared L2 norm of the vector of weighted errors (notice that, in (Atluri and Zhu, 1998), the weighted squared errors are minimized, while we minimize the squared weighted errors). Thus, defining the functional

J(a(x̄)) = ‖W (û − P a(x̄))‖²,   (13)

we pose the following unconstrained minimization problem:

min over a(x̄) of J(a(x̄)),   (14)

to find the optimum coefficients for u^h(x, x̄), using the first order necessary condition

∂J/∂a(x̄) = 0.   (15)

Substituting Equations (6) to (12) into (15) yields the following

Pᵀ Wᵀ W (û − P a(x̄)) = 0.   (16)

Thus, at x̄,

Pᵀ Wᵀ W P a(x̄) = Pᵀ Wᵀ W û,   (17)

and, knowing that W is a diagonal matrix,

A(x̄) a(x̄) = B(x̄) û,   (18)

where

A(x̄) = Pᵀ W² P   (19)

and

B(x̄) = Pᵀ W².   (20)

After solving the system of algebraic linear equations, the optimized coefficients are

a(x̄) = A⁻¹(x̄) B(x̄) û.   (21)
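The coefficient optimization above can be sketched in a few lines of code. The sketch below is in the spirit of the paper's C++ implementation, but hand-rolled instead of using Eigen so as to stay self-contained; it builds A(x̄) = PᵀW²P and the right-hand side PᵀW²û for a quadratic basis and solves for a(x̄). The node locations, the point x̄ and the support radius are illustrative values, not taken from the paper's tests.

```cpp
// Sketch of the MLS coefficient optimization (Eqs. (13)-(21)) for a quadratic
// basis (m = 3) in 1D. W is diagonal, and the squared weighted errors are
// minimized, so A = P^T W^2 P and the right-hand side is P^T W^2 u_hat.
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>

// Bell-shaped weight with compact support (a quartic spline, one common choice).
double weight(double d, double r) {
    double s = d / r;
    if (s >= 1.0) return 0.0;
    return 1.0 - 6.0*s*s + 8.0*s*s*s - 3.0*s*s*s*s;
}

// Solve the 3x3 system A a = b by Gaussian elimination with partial pivoting.
std::array<double,3> solve3(std::array<std::array<double,3>,3> A,
                            std::array<double,3> b) {
    for (int k = 0; k < 3; ++k) {
        int p = k;                               // pivot search
        for (int i = k + 1; i < 3; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[p][k])) p = i;
        std::swap(A[k], A[p]); std::swap(b[k], b[p]);
        for (int i = k + 1; i < 3; ++i) {        // elimination
            double f = A[i][k] / A[k][k];
            for (int j = k; j < 3; ++j) A[i][j] -= f * A[k][j];
            b[i] -= f * b[k];
        }
    }
    std::array<double,3> a{};
    for (int k = 2; k >= 0; --k) {               // back substitution
        double s = b[k];
        for (int j = k + 1; j < 3; ++j) s -= A[k][j] * a[j];
        a[k] = s / A[k][k];
    }
    return a;
}

// Optimized MLS coefficients a(x_bar), from A(x_bar) a = B(x_bar) u_hat.
template <std::size_t N>
std::array<double,3> mls_coeffs(const std::array<double,N>& x,
                                const std::array<double,N>& uhat,
                                double xbar, double r) {
    std::array<std::array<double,3>,3> A{};   // P^T W^2 P
    std::array<double,3> b{};                 // P^T W^2 u_hat
    for (std::size_t J = 0; J < N; ++J) {
        double w2 = std::pow(weight(std::fabs(x[J] - xbar), r), 2);
        double p[3] = {1.0, x[J], x[J]*x[J]};  // quadratic basis at node J
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) A[i][j] += w2 * p[i] * p[j];
            b[i] += w2 * p[i] * uhat[J];
        }
    }
    return solve3(A, b);
}
```

A convenient sanity check: when the pseudo-displacements are sampled from a quadratic function and N ≥ m, the optimized coefficients reproduce that quadratic exactly (up to roundoff).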

Shape Functions
Substituting Equation (21) into Equation (6) yields

u^h(x, x̄) = p^T(x) A⁻¹(x̄) B(x̄) û = Φ^T(x, x̄) û,   (22)

where Φ(x, x̄) represents the shape function vector,

Φ^T(x, x̄) = p^T(x) A⁻¹(x̄) B(x̄),   (23)

so that φ_J(x, x̄), with J = 1, ⋯, N, denotes the shape function for node x_J. Considering Equation (23), Equation (22) can be rewritten as

u^h(x, x̄) = Σ_{J=1}^{N} φ_J(x, x̄) û_J,   (24)

in which φ_J(x, x̄) is defined as

φ_J(x, x̄) = Σ_{k=1}^{m} p_k(x) [A⁻¹(x̄) B(x̄)]_{kJ},   (25)

where p_k(x) is the k-th term of the polynomial basis p(x) and [A⁻¹(x̄) B(x̄)]_{kJ} is the corresponding entry of the matrix A⁻¹(x̄) B(x̄). Note that, in order to compute the shape functions, the matrix A needs to be invertible. To ensure that condition, the selected number of nodes, N, in the vicinity of x̄ has to be greater than or equal to the number of monomials in the polynomial basis, i.e., N ≥ m (see (Atluri and Shen, 2002; Liu, 2009)).

Derivatives of the Shape Functions
The derivatives of the shape functions with respect to x are calculated as

φ'_J(x, x̄) = Σ_{k=1}^{m} p'_k(x) [A⁻¹(x̄) B(x̄)]_{kJ},   (26)

where p'_k(x) is the derivative of the k-th term of the polynomial basis. Notice that, to compute the derivative of a shape function at x̄, we simply compute φ'_J(x̄, x̄), using Equation (26). This is totally consistent with the formulation developed for the approximating polynomial and its coefficients (Eqs. (6) to (24)). However, methods such as MLPG1 (Atluri et al., 1999b) use the polynomial in (6) just as an auxiliary polynomial to construct the trial function at x̄, i.e.,

u_trial(x̄) = p^T(x̄) A⁻¹(x̄) B(x̄) û.   (27)

Those methods assume that the trial function u_trial(x̄) computed in such a way (Eq. (27)) is continuous and differentiable everywhere with respect to x̄ (for all x̄, since x̄ is now the variable of the domain). Since the optimization of the coefficients of the auxiliary polynomial is based on a discrete set of nodes in the vicinity of x̄, it is not guaranteed that the same set of discrete nodes is used to optimize the coefficients of the auxiliary polynomial at x̄ + dx̄. So, in that sense, there is an inconsistency between the discrete nature of the optimization process and the continuity assumption required to ensure the differentiability of the matrices A⁻¹ and B in (27).
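The practical consequence of Eq. (26) is that, once a(x̄) is fixed, differentiating the local approximation reduces to differentiating the monomial basis, since A⁻¹(x̄)B(x̄) is held constant. A minimal sketch for a quadratic basis (the coefficient values in the check are illustrative):

```cpp
// Sketch of Eq. (26) for a quadratic basis: with a(x_bar) fixed,
//   u^h (x, x_bar) = p(x)  . a(x_bar) = a0 + a1 x + a2 x^2
//   u^h'(x, x_bar) = p'(x) . a(x_bar) =      a1   + 2 a2 x
// Only the basis p is differentiated; A^{-1} B never is.
#include <array>
#include <cassert>
#include <cmath>

double uh(const std::array<double,3>& a, double x) {
    return a[0] + a[1]*x + a[2]*x*x;          // local approximation
}

double uh_prime(const std::array<double,3>& a, double x) {
    return a[1] + 2.0*a[2]*x;                 // derivative of the basis only
}
```

A finite-difference check of `uh` against `uh_prime` at any point confirms the two are consistent, which is exactly the consistency argument made above.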

Weight Function
In this work, the weight function is a bell-shaped function, whose support region is centered at x̄. Thus, any node that falls outside that support region will not contribute to the trial function associated with the subregion around x̄. We use the fourth order spline with compact support as weight function (Atluri and Shen, 2002; Atluri et al., 1999b; Sladek et al., 2010), i.e.,

w(d) = 1 − 6(d/r_w)² + 8(d/r_w)³ − 3(d/r_w)⁴ for 0 ≤ d ≤ r_w;   w(d) = 0 for d > r_w,   (28)

where d = ‖x − x̄‖ is the distance from a point x to point x̄ and r_w is the size (radius) of the support region of the weight function centered at x̄. For this particular case, r_w is referred to as r_w(x̄). The size r_w(x̄) is determined, as explained in Section 2.3.1, to contain the required points to ensure that matrix A is invertible (N ≥ m). However, r_w(x̄) should also be small enough to maintain the local nature of the MLS approach (Atluri and Shen, 2002).
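As a concrete sketch, the quartic spline can be coded directly. The closed form w = 1 − 6s² + 8s³ − 3s⁴, with s = d/r, is the fourth-order spline commonly used in the MLPG literature and is the form assumed here:

```cpp
// Fourth-order (quartic) spline weight with compact support of radius r,
// assumed to be the standard MLPG form w = 1 - 6 s^2 + 8 s^3 - 3 s^4, s = d/r.
#include <cassert>
#include <cmath>

double spline_weight(double d, double r) {
    double s = std::fabs(d) / r;
    if (s >= 1.0) return 0.0;                  // outside the support region
    return 1.0 - 6.0*s*s + 8.0*s*s*s - 3.0*s*s*s*s;
}
```

Since the polynomial factors as w = (1 − s)³(1 + 3s), the weight equals 1 at the center and decays monotonically to zero with zero slope at d = r_w, so the weight is C¹ across the support boundary.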

Test Functions
In MLPG, the trial and test functions may be chosen from different function spaces. The test functions usually have compact support, which causes the integral to be calculated in a limited region, where the function is non-zero. Different test functions result in different MLPG methods (Atluri and Shen, 2002). In this work, the test functions are also bell-shaped functions centered at the nodes (the same as the weight function of Eq. (28), with d = ‖x − x_i‖ and radius r_e(x_i)), resulting in the so-called MLPG1. However, r_w(x̄) need not be equal to r_e(x_i) (Fig. 4). The support of a test function is usually confined to a region around its associated node, such that all the neighboring nodes fall outside that region.

Discretization
Substituting Equation (24) into Equation (5), the local weak form for a node i can be rewritten in terms of the pseudo-displacements û. That equation can be simplified into a linear algebraic equation in û, relating a local stiffness row and a local force term (Eqs. (30) and (31)). The test function v_i(x) is chosen to vanish on the support boundary, so Equation (31) can be simplified. Therefore, for internal nodes (Fig. 4, nodes 2 to 4), we obtain Eqs. (32) and (33); for the left node, x = 0 (Fig. 4, node 1), Eq. (34); and, for the right node, x = L (Fig. 4, node 5), Eq. (35). Applying Equation (30) to all nodes in the problem's domain results in a set of equations that, when grouped, constitute the final system of global equations

K û = F.

After solving this system, the displacement at any point in the domain can be obtained through Equation (24), making use of the displacements computed for the nodes in the support domain of that point.

Numerical Integration
The integrals are calculated using Gauss–Legendre quadrature, mapping each integration interval onto the reference interval [−1, 1]. Thus, for the internal nodes (Eqs. (33) and (32)), making the necessary parameterization, we obtain Eqs. (36) and (37), where ξ_q, q = 1, …, Q, are the Legendre quadrature points and w_q are their associated weights; for the left node (Eqs. (34) and (32)), we obtain Eq. (38); and, for the right node (Eqs. (35) and (32)), Eq. (40), where the force term is computed using Equation (39). Notice that, in the computation of a stiffness coefficient, the value of x̄ to be used is the one adopted for the trial function in whose definition node x_J takes part. It is possible that node x_J takes part in multiple trial functions. In those cases, their contributions to the stiffness coefficient should be added.
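The parameterization can be sketched as follows: an integral over a subinterval [a, b] is mapped onto the reference interval [−1, 1] through x(ξ) = (a + b)/2 + ((b − a)/2) ξ, so that dx = ((b − a)/2) dξ. The 5-point rule below uses the standard tabulated Gauss–Legendre nodes and weights; the helper `gauss5` is illustrative, not part of the paper's code:

```cpp
// 5-point Gauss-Legendre quadrature on [a, b], via the affine map
// x(xi) = (a+b)/2 + ((b-a)/2) xi. Nodes and weights are the standard
// tabulated values for the reference interval [-1, 1].
#include <cassert>
#include <cmath>
#include <functional>

double gauss5(const std::function<double(double)>& f, double a, double b) {
    static const double xi[5] = {-0.9061798459386640, -0.5384693101056831, 0.0,
                                  0.5384693101056831,  0.9061798459386640};
    static const double wq[5] = { 0.2369268850561891,  0.4786286704993665,
                                  0.5688888888888889,
                                  0.4786286704993665,  0.2369268850561891};
    double mid  = 0.5 * (a + b);
    double half = 0.5 * (b - a);     // Jacobian of the affine map
    double s = 0.0;
    for (int q = 0; q < 5; ++q) s += wq[q] * f(mid + half * xi[q]);
    return half * s;
}
```

A 5-point rule integrates polynomials up to degree 9 exactly; for example, the integral of x⁴ from 0 to 3 is 243/5 = 48.6.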

Enforcement of the Essential Boundary Conditions
The trial functions based on MLS, as described in Section 2.3, are not interpolating functions. Therefore, the shape functions associated with each node do not possess the Kronecker Delta property (Fig. 5). Thus, some special method is required to impose the essential boundary conditions. Although the most popular methods for that are the penalty method and the method of Lagrange multipliers, we use the MLS collocation method proposed in (Liu, 2009; Mirzaei, 2015), in which the equations of the boundary nodes are replaced by the collocation equations

Σ_J φ_J(0, x̄) û_J = u(0)   (41)

and

Σ_J φ_J(L, x̄) û_J = u(L).   (42)

Equations (41) and (42) replace Eqs. (38), (39) and (40) for the left and right nodes.
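A minimal sketch of how a collocation row can replace a weak-form row in the assembled system follows. The names `enforce_dirichlet`, `K`, `f` and `phi_at_boundary` are illustrative placeholders, not the paper's code; the assumption is that the boundary equation has the form Σ_J φ_J(x_b) û_J = prescribed value, as in Eqs. (41) and (42):

```cpp
// Sketch of MLS-collocation enforcement of an essential boundary condition:
// for a boundary node, the row of the global system is replaced by the
// collocation equation sum_J phi_J(x_b) u_hat_J = prescribed.
#include <cassert>
#include <cstddef>
#include <vector>

void enforce_dirichlet(std::vector<std::vector<double>>& K,
                       std::vector<double>& f,
                       int row,
                       const std::vector<double>& phi_at_boundary,
                       double prescribed) {
    for (std::size_t j = 0; j < K[row].size(); ++j)
        K[row][j] = phi_at_boundary[j];   // shape functions evaluated at x_b
    f[row] = prescribed;                  // prescribed boundary displacement
}
```

Note that, because the shape functions lack the Kronecker Delta property, the replaced row generally couples several pseudo-displacements, not just the boundary one.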
Loop over all nodes i without prescribed displacement:
    Loop over all integration points q:
        Calculate the local force vector (Eq. (37)).
        Find the nearest x̄ from integration point q.
        Loop over all x_J in the vicinity of x̄:
            Calculate the local stiffness matrix (Eq. (36)).
    Assemble the local contributions into the global linear system K and F.
Loop over all nodes i with prescribed displacement:
    Find the nearest x̄ from node i.
    Loop over all x_J in the vicinity of x̄:
        Enforce the prescribed displacement (Eqs. (41) and (42)).
Solve the global linear system K û = F.

TESTS AND RESULTS
In this section, several tests demonstrate the advantages of our Least-Square-Consistent Meshless Local Petrov-Galerkin formulation over the traditional MLPG1 formulation. Both methods were implemented in C++, and the Eigen Library (Guennebaud et al., 2019) was used to handle matrix operations and to solve the linear system of algebraic equations. The simulations were performed on an Intel® Core™ i7-4500U CPU @ 1.80GHz × 4, with 8.0 GB RAM, running Ubuntu 18.04.2 LTS.
In the following subsections, we compare our method with the original MLPG1 (Atluri and Shen, 2002; Atluri and Zhu, 1998), using the simple problem of a cable with fixed ends illustrated in Figure 1. In Section 3.1, we compare the relative errors for various parameter settings, obtained by changing the polynomial order (n), the number of nodes, the number of integration points, the support radius of the test function (r_e) and the support radius of the trial functions (r_w). The error is measured against the analytical solution given in Equation (43). In Section 3.2, we fix all the parameters and vary the polynomial order of the trial function. Again, the results are compared against the analytical solution. In Section 3.3, we repeat the tests of Section 3.2 using a non-constant f(x) function. In that case, the ground truth for computing the error is the solution obtained by the finite difference method with a very fine discretization. In Section 3.4, we compare the computational performances of the approaches used in the previous subsections.

Parametric Error Analysis
For the analyzed problem shown in Figure 1, when k(x) and f(x) are constant, i.e., k(x) = k and f(x) = f, the analytical solution u(x) can be written as (Buchanan, 1994)

u(x) = (f/k) [1 − cosh(λ(x − L/2)) / cosh(λL/2)],   (43)

where λ² = k/T. The radii r_e and r_w of the support regions for the test and trial functions, respectively, are written as r_e = α d_min and r_w = β d_min, where d_min is the minimum distance from a given node to its closest adjacent node. In this section, we present the results of a parametric error analysis in which we report the effects of varying the number of nodes, the polynomial order of the trial function and the number of integration points for several values of α and β. The relative errors are computed as

error = 100 × ‖u − u_h‖₂ / ‖u‖₂,   (44)

where u and u_h are vectors whose components are the displacements at discrete points of the domain computed, respectively, with the analytical and the numerical solutions. In this section, the tests use the following parameters: L = 3 m, T = 20 N, k = 2 N/m² and f = 10 N/m. The tests are run with the following values of α and β: α = 0.1 to 1.0, with steps of 0.1, and β = 0.1 to 4.0, with steps of 0.1. However, to avoid unnecessary clutter in the plots, we show only four values of α (0.1, 0.4, 0.7 and 1.0). The upper limit for β was established as 4.0 to ensure enough range for the cubic polynomial order case, while still keeping the locality of the trial functions. We ran the same tests with 7, 13, 25 and 49 nodes. Those numbers of nodes were chosen so that the same set of tests could be performed for both the quadratic and the cubic order trial functions.
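For reference, the analytical solution of Eq. (43) and the error measure of Eq. (44) can be sketched as follows, with the parameter values used in this section (L = 3 m, T = 20 N, k = 2 N/m², f = 10 N/m):

```cpp
// Analytical solution of the constant-coefficient cable problem (Eq. (43))
// and the percentage relative error of Eq. (44).
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

constexpr double L = 3.0;   // cable length [m]
constexpr double T = 20.0;  // tension [N]
constexpr double K = 2.0;   // medium stiffness [N/m^2]
constexpr double F = 10.0;  // transverse load [N/m]

// u(x) = (f/k) [1 - cosh(lambda (x - L/2)) / cosh(lambda L/2)], lambda^2 = k/T.
double cable_exact(double x) {
    double lam = std::sqrt(K / T);
    return (F / K) * (1.0 - std::cosh(lam * (x - L/2)) / std::cosh(lam * L/2));
}

// Percentage relative error between analytical (u) and numerical (uh)
// displacement vectors sampled at the same discrete points.
double relative_error(const std::vector<double>& u,
                      const std::vector<double>& uh) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < u.size(); ++i) {
        num += (u[i] - uh[i]) * (u[i] - uh[i]);
        den += u[i] * u[i];
    }
    return 100.0 * std::sqrt(num / den);
}
```

In particular, `cable_exact(1.5)` ≈ 0.5142 m, the midpoint displacement quoted later in this section, and the solution vanishes at both fixed ends.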

Trial functions of second degree (n = 2)
In our method, x̄ is located as illustrated in Figures 6a and 6b for the 7-node case, ensuring that matrix A has an inverse (N ≥ 3). First, we consider the minimum number of trial functions, i.e., with x̄ located at every other node (Fig. 6a). Then, we tested trial functions with x̄ located at each internal node (Fig. 6b). In the original MLPG1, x̄ is fused with x, requiring a new computation of the shape functions for every integration point (see (Atluri and Zhu, 1998; Atluri et al., 1999b)).
The results plotted in Figures 7 and 8 allow us to make some general observations. First, notice that, for our method, the error diminishes as the number of nodes and the number of integration points increase. However, MLPG1 shows a similar behavior only for some specific values of β. Second, MLPG1's error is highly sensitive to variations of α and β, especially for a small number of nodes (for example, 7 and 13 nodes in the tests), making it difficult to suggest adequate values for those parameters. On the other hand, our method shows very smooth error curves, which are almost insensitive to changes in β for a given value of α. Third, MLPG1 requires a larger starting value of β to be able to invert matrix A. For smaller numbers of nodes, this implies an undue restriction on the locality of the solution.
We also want to point out some more specific observations from Figures 7 and 8: 1) for all the tests (with 7, 13, 25 and 49 nodes), regardless of the value of α, MLPG1 requires β to be at least 2.1 for A to have an inverse, while our method requires a β of 1.1; 2) the dependence of the error on the α × β combination, on the number of nodes and on the number of integration points is very well-behaved in our method, and shows that the error diminishes as the number of nodes and integration points increase for any values of α and β. On the other hand, that dependence in MLPG1 is not well-behaved when the number of nodes is small (e.g., 7 and 13 nodes). Notice that, in those cases, over the whole range of β, a given value of α is not always the best choice, while our method clearly shows the best α to choose.

Trial functions of third degree (n = 3)
Since the results for different distributions of trial functions (locations of x̄) in the quadratic case were equivalent, for the trial functions of third degree, x̄ is located as illustrated in Figure 6c for the 7-node case, ensuring that matrix A has an inverse (N ≥ 4). The distribution of the trial functions for the 13, 25 and 49-node cases follows the same pattern shown in Figure 6c.
Once again, notice that, in MLPG1, x̄ is fused with x, requiring a new computation of the shape functions for every integration point.
The results plotted in Figure 9 allow us to make some general observations. First, notice that, for our method, the error diminishes as the number of nodes and integration points increase. However, MLPG1 shows a similar behavior only for some values of β. Second, MLPG1's error is highly sensitive to variations of α and β, making it difficult to suggest adequate values for those parameters. On the other hand, our method shows very smooth error curves, which are almost insensitive to changes in β for a given value of α. Third, MLPG1 requires a larger starting value of β to be able to invert matrix A. For smaller numbers of nodes, this implies an undue restriction on the locality of the solution.
Again, we want to point out some more specific observations from Figure 9: 1) for the tests with 7, 13 and 25 nodes, regardless of the value of α, MLPG1 requires β to be at least 3.1 for A to have an inverse, while our method requires β = 1.6. With 49 nodes, MLPG1 requires β to be at least 3.2 for A to have an inverse, while our method requires β = 1.9; 2) the dependence of the error on the combination of α, β, number of nodes and number of integration points is very well-behaved in our method, and shows that the error diminishes as the number of nodes and the number of integration points increase for any values of α and β. On the other hand, that dependence in MLPG1 is very erratic and, for some values of β, it deteriorates as the number of nodes and the number of integration points increase, which is a rather inconsistent behavior.

Figure 7:
Exhaustive test with quadratic polynomial base using 7, 13, 25 and 49 nodes, where x̄ is located in accordance with the pattern shown in Figure 6a. The graphs on the left columns are from the original MLPG1 method, and those on the right columns are from our approach. The tests are performed with 5, 10, 20 and 40 integration points, as shown in each row. The y-axis represents the percentage relative error (Eq. (44)), the x-axis shows the values of β, and each curve corresponds to a specific value of α.

Figure 8:
Exhaustive test with quadratic polynomial base using 7, 13, 25 and 49 nodes, where x̄ is located in accordance with the pattern shown in Figure 6b. The graphs on the left columns are from the original MLPG1 method, and those on the right columns are from our approach. The tests are performed with 5, 10, 20 and 40 integration points, as shown in each row. The y-axis represents the percentage relative error (Eq. (44)), the x-axis shows the values of β, and each curve corresponds to a specific value of α.

Figure 9:
Exhaustive test with cubic polynomial base using 7, 13, 25 and 49 nodes, where x̄ is located in accordance with the pattern shown in Figure 6c. The graphs on the left columns are from the original MLPG1 method, and those on the right columns are from our approach. The tests are performed with 5, 10, 20 and 40 integration points, as shown in each row. The y-axis represents the percentage relative error (Eq. (44)), the x-axis shows the values of β, and each curve corresponds to a specific value of α.

Varying n with Constant f(x)
In this test, we use specific values of α and β based on the results shown in Section 3.1 and compare both our solution and MLPG1's solution against the exact solution given in Equation (43).
Once again, we separate the results according to the polynomial degree of the trial functions. We present results for different numbers of integration points (5, 10, 20 and 40) and of nodes (7, 13, 25 and 49).
The value of α was fixed at 0.7 for the sake of MLPG1 (most stable results), although for our method α = 1.0 would be the best choice. Since the value of β determines the existence, or not, of the inverse of matrix A (Eq. (21)), and the required value depends on the method and on the polynomial order, we chose the smallest value of β required by each method.

Trial function of second degree (n = 2)
In these tests, we use β = 2.1 for the original MLPG1, and β = 1.1 for our method (those were the minimum values required to invert matrix A). Notice (see Fig. 7) that, if we also used β = 2.1 in our method, the results would not be any different. For our method, we use the trial function distribution pattern shown in Figure 6a.
The results are shown in Figure 10. Notice that, with 10 integration points, all the results (any number of nodes) are visually indistinguishable from the exact solution. However, as we point out in Section 3.4, our method is much more efficient than MLPG1.

Trial functions of third degree (n = 3)
In these tests, we use β = 3.2 for the original MLPG1, and β = 1.9 for our method (those were the minimum values required to invert matrix A). For our method, we use the trial function distribution pattern shown in Figure 6c.
The results are shown in Figure 11. Notice that, with 10 integration points, all the results from 7 up to 25 nodes are visually indistinguishable from the exact solution. However, with 49 nodes, MLPG1 shows inconsistent behavior regardless of the number of integration points (see also Fig. 9, 49-node case). Our method, on the other hand, maintains consistency in all cases. Moreover, as we point out in Section 3.4, our method is much more efficient than MLPG1.

Summarized results at the midpoint
The midpoint in the current problem, x = 1.5 m, has a displacement of 0.5142 m (see the exact solution in Eq. (43)). Table 1 shows comparative results for the displacement of that point, including the value of the relative error, i.e.,

error = 100 × |u(1.5) − u_h(1.5)| / |u(1.5)|.   (45)

As expected, those quantitative results confirm the qualitative results shown in Figures 10 and 11. In the quadratic case, all the relative errors of both methods are less than 0.1%. However, in the cubic case, our method presented a consistent diminishing of the relative error as the number of nodes increased. MLPG1 showed a similar behavior up to 25 nodes. However, with a 49-node discretization, the diminishing trend was lost, and the results are clearly wrong.

Figure 10: Comparison of the MLPG1 method with our approach for the quadratic polynomial order, where x̄ is located in accordance with the pattern shown in Figure 6a. The displacements are shown at each node, using uniform discretizations with 7, 13, 25 and 49 nodes, and 5, 10, 20 and 40 integration points. The support radii of the test functions were fixed with α = 0.7, and the radii for the trial functions used β = 2.1 for the original MLPG1 method and β = 1.1 for our method.

Figure 11: Comparison of the MLPG1 method with our approach for the cubic polynomial order. The displacements are shown at each node, using uniform discretizations with 7, 13, 25 and 49 nodes, and 5, 10, 20 and 40 integration points. The support radii of the test functions were fixed with α = 0.7, and the radii for the trial functions used β = 3.2 for the original MLPG1 method and β = 1.9 for our method.

Varying n with Non-Constant f(x)
Since the analytical solution shown in Equation (43) corresponds to the case in which f(x) is constant, we cannot use it here, because f(x) is now a non-constant, sinusoidal load. Thus, we compare the results with a Finite Difference (FD) solution obtained with a very refined grid of 100 partitions. All other variables of the model are the same as in the previous example.
The tests for this problem were exactly the same as those performed in Section 3.2. However, we present only the results with 40 integration points. Thus, Figures 12a and 12b show, respectively, the results for the quadratic and cubic cases (n = 2 and 3), using 40 integration points and discretizations of 7, 13, 25 and 49 nodes. Table 2 shows the midpoint's displacements and the corresponding relative errors. In this test, the original MLPG1, with cubic order trial functions and 49 nodes, shows a deviation from the FD solution, as in the previous test.
Both methods reach similar relative errors for the quadratic case. However, for the cubic case, our method always presents better results, with a consistent diminishing error trend as the discretization is refined.

Computational Performances
This test compares the average time spent in computing the solution with the original MLPG1 method and with our method (using both x̄ distributions illustrated in Figures 6a and 6b). The simulations were repeated ten thousand times and the average simulation time was computed. We implemented a program with both methods using C++, and the simulations were run on the same machine. The tests used 7, 13, 25 and 49 nodes with n = 2 (quadratic polynomial order), using 5 and 40 integration points.
The results plotted in Figure 13 indicate that our approach outperforms MLPG1. Considering, for example, the 40-integration-point cases, which deliver the most accurate results, our method is approximately 30 times faster. Even if we were satisfied with the results obtained with 5 integration points and 49 nodes, our method was approximately 5 times faster. Also, notice that the performance of our method with 40 integration points is comparable to the performance of MLPG1 with 5 integration points. So, for equivalent time performance, our approach delivers more accurate results. As shown in Figure 7, for 5 integration points, regardless of the number of nodes, MLPG1 delivers a relative error of approximately 8%. However, with the same computational times for each discretization, our method delivers results with relative errors of less than 0.01% (see Table 1).

CONCLUSIONS
We investigated the behavior of a one-dimensional problem of a cable with fixed ends using the meshless local Petrov-Galerkin method with the same weight function as the test function. That method is also known as MLPG1 (Atluri and Shen, 2002) and uses Moving Least Squares (MLS) to construct local trial functions. The MLS collocation method was used to enforce the displacement boundary conditions.
The problem was sufficiently simple for us to assess the pitfalls of the original MLPG formulation and to propose a consistent MLPG formulation that does not suffer from the same drawbacks. Our development shows that the location around which the approximating polynomial's coefficients are optimized should be dissociated from the place where the polynomial is evaluated and its derivatives are computed. This is truly consistent with the error minimization associated with the MLS method, and it makes the method simpler and more powerful. We demonstrated, through a series of tests, that the consistent formulation delivers accurate results in an efficient manner. Moreover, with our method, it is easy to recommend sound values of α and β that deliver accurate results. In fact, although this is a very important issue for the practical application of the method, such recommendations are often overlooked in the literature.
As future work, the proposed approach will be extended to two-dimensional problems and, then, to three-dimensional ones.