COMBINATORIAL DUAL BOUNDS ON THE LEAST COST INFLUENCE PROBLEM

The Least Cost Influence Problem is a combinatorial optimization problem that arises in the context of social networks. The objective is to give incentives to individuals of a network so that some information spreads to a desired fraction of the network at minimum cost. We introduce a problem-dependent algorithm, embedded in a branch-and-bound scheme, to compute a dual bound for this problem. The idea is to exploit the connectivity properties of the sub-graphs of the input graph associated with each node of the branch-and-bound tree and use them to increase each sub-problem's lower bound. Our algorithm finds a lower bound tighter than the LP-relaxation in time linear in the size of the graph. Computational experiments with synthetic graphs and real-world social networks show the benefit of the proposed bounds: gains in running time or gap reduction for exact solutions to the problem.


INTRODUCTION
In the context of diffusion of information in social networks, early studies in sociology (Rogers, 2010) observe that information spreads through a social system like a contagious process, starting with a small group and spreading to other individuals in the system through the relationships between them. One relevant problem that emerges from this background is identifying a good set of individuals to target, the early adopters, hoping that the chosen individuals can persuade their friends to adopt a new behavior, which in turn influences friends of friends, generating a large cascade of adoptions. This leads to the problem of finding a target set of minimum size that ensures the activation of a given fraction of the network. The literature refers to this problem as TARGET SET SELECTION (TSS). Besides being NP-hard to solve, this problem is also hard to approximate. Chen (Chen, 2009) proved that TSS cannot be approximated within a poly-logarithmic ratio. Even with explicitly deterministic thresholds, the problem is NP-hard to approximate within a ratio of n^{1−ε} for every ε > 0 (Chen, 2009; Kempe et al., 2003). The problem becomes tractable in some restricted classes of graphs. For example, it can be solved in linear time if the underlying graph is a tree (Chen, 2009), a block-cactus graph (Chiang et al., 2013), or a graph of bounded treewidth (Ben-Zwi et al., 2011).
In this work, we investigate an extension of the TSS problem called the LEAST COST INFLUENCE PROBLEM (LCIP). Rather than searching for a group of individuals to start a propagation, this problem offers incentives for individuals to adopt new behaviors and thereby trigger a cascade that spreads to a given fraction of the network. Demaine et al. (Demaine et al., 2014) extend the INFLUENCE MAXIMIZATION PROBLEM by proposing a fractional version that incorporates the idea of offering discounts instead of making a binary choice of individuals. Günneç et al. (Günneç et al., 2020a,b) focused on mathematical programming models for the LCIP and described algorithms for the problem on trees. They also observe that the LCIP is a generalization of the TSS problem, so the LCIP is at least as hard as the TSS problem from a theoretical point of view. Fischetti et al. (Fischetti et al., 2018) present a generalization of the LCIP and introduce the concept of activation functions, which extends the commonly used threshold functions. They also propose a mathematical heuristic and an exact algorithm based on column generation.

Contributions
While previous works provided relevant exact methods (Ackerman et al., 2010; Fischetti et al., 2018; Günneç et al., 2020b) and heuristic algorithms that can be used as upper bounds for this problem, combinatorial dual bounds have not been explored. Observing that the influence propagation network is a directed acyclic graph (DAG), we derive a problem-dependent relaxation algorithm. The proposed algorithm exploits the connectivity properties of graphs to obtain a lower bound for the problem. Furthermore, we prove that the algorithm is correct and show experimentally that our lower bounds are tighter than the linear programming relaxation, providing smaller optimality gaps. To the best of our knowledge, there are no other works on combinatorial lower bounds for this problem. The main purpose of our relaxation is fathoming in a branch-and-bound algorithm, helping to reduce the computational effort needed to obtain exact solutions. We provide a theoretical analysis of the complexity of our algorithm for a particular case of the problem, where the diffusion needs to reach only a fraction of the network instead of the whole network. In this case, our analysis leads to a related problem whose optimal solution is a dual bound for the original problem. This related problem is NP-hard, but it paves the way for improving the dual bound. To make our dual bound stronger, we also propose a branching rule that prioritizes the exploration of some branches of the decision tree. This prioritization leads to disconnected sub-graphs associated with the sub-problems of the branch-and-bound tree, which in turn improves the quality of our lower bounds.
The rest of this paper is organized as follows. Section 2 contains a brief overview of the social network diffusion process and the problem definition. Section 3 describes the mathematical programming formulation of the problem and the exact method to solve it. Section 4 proposes an algorithm to find lower bounds for the problem and presents a theoretical analysis of our findings. We introduce a branching rule to strengthen our dual bound algorithm in Section 5. Computational experiments are presented in Section 6. We conclude the paper in Section 7.

PROBLEM DEFINITION
Let G be a directed graph that models a social network. The vertex set V(G) represents individuals, and the arc set E(G) corresponds to the relationships between these individuals. We denote the vertex set and arc set simply by V and E, respectively, when the context is clear. Each arc (i, j) ∈ E has an associated weight d_ij > 0 that models the influence of i over j.
To model the diffusion of influence between the individuals in a social network, we consider the well-known threshold model, presented by (Granovetter, 1978). In this model, we say that a vertex i is active when persuaded to adopt the new behavior, and inactive otherwise. Every i ∈ V has a threshold t_i > 0, which indicates the amount of influence, coming from i's neighbors, needed to activate i. The activation process is progressive, i.e., each vertex can be active or inactive, and a vertex can change from inactive to active, but not the other way around. Initially, a subset A_0 ⊆ V is chosen to be active. Then, the vertices in A_0 send influence to their inactive neighbors. These neighbors might become active in the next iteration, giving rise to a new set A_1 ⊆ V of active vertices. This process is repeated until no vertex can be activated. Let {A_τ}_{τ=0}^{T} be the sequence of sets of vertices activated during the diffusion process, where A_T is the first set in the sequence that cannot activate any other vertex in the graph. We say that any vertex i ∈ A_T \ A_0 has been influenced by A_0. We also consider the offer of external influences. These influences, which we call incentives, aim to break an individual's resistance to being influenced in the activation process. The incentives are represented by a vector y ∈ Z^{|V|}, where each coordinate y_i ∈ N_0 denotes the amount of incentive given to a vertex i ∈ V. Applying the incentive y_i to a vertex i decreases its threshold t_i and makes it more susceptible to activation. This incentive is added to the influence coming from the other vertices. Thus, the initial set of active vertices is given by A_0 = {i ∈ V : y_i ≥ t_i}. The vertices in A_0 begin the process as active and all the others as inactive. Time progresses in discrete steps τ = 0, 1, ..., T, and an inactive vertex i becomes active at time τ if the total influence of its active in-neighbors plus its incentive reaches the threshold t_i, i.e., if

∑_{j ∈ N_i ∩ A_{τ−1}} d_ji + y_i ≥ t_i,

where N_i denotes the set of in-neighbors of i.
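The activation dynamics just described can be sketched directly. The following Python snippet is a minimal simulation of the threshold model with incentives; the graph encoding and function name are ours, not from the paper:

```python
from collections import defaultdict

def simulate_diffusion(arcs, thresholds, incentives):
    """Run the threshold model with incentives.

    arcs: dict mapping (i, j) -> weight d_ij (influence of i over j).
    thresholds: dict mapping vertex -> threshold t_i.
    incentives: dict mapping vertex -> incentive y_i.
    Returns the final set A_T of active vertices.
    """
    in_neighbors = defaultdict(list)
    for (i, j), d in arcs.items():
        in_neighbors[j].append((i, d))

    # A_0: vertices whose incentive alone meets the threshold.
    active = {i for i in thresholds if incentives.get(i, 0) >= thresholds[i]}
    changed = True
    while changed:  # repeat until no further vertex can be activated
        changed = False
        for j in thresholds:
            if j in active:
                continue
            influence = sum(d for i, d in in_neighbors[j] if i in active)
            if influence + incentives.get(j, 0) >= thresholds[j]:
                active.add(j)
                changed = True
    return active
```

For instance, on the path a → b → c with d_ab = 1, d_bc = 2 and thresholds t_a = 1, t_b = 1, t_c = 2, paying y_a = 1 activates the whole graph, while paying nothing activates nobody.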
Figure 1 illustrates the activation process in the threshold model with incentives. Again, we have a directed graph with thresholds on the vertices and weights of influence on the arcs. Suppose that we want to activate 100% of the vertices. The process starts by setting vertex a as active and all the others as inactive. We pay enough incentive to reach a's threshold without the influence of the neighbors, so vertex a receives 1 unit of incentive. We then try to activate the remaining vertices, using incentives to decrease their thresholds. Vertex b has threshold 1 and will be activated by a alone in the next step, so no incentive is necessary. Next, in Figure 1c, vertex c has two active neighbors whose incoming arc weights sum to 3 units. To reach c's threshold, we must pay one unit of incentive, so c is activated. The process continues as a cascade until all vertices are activated. Hence, in this example, we paid a total of 3 units of incentive to activate the whole network, that is, y_a = y_c = y_e = 1 and y_b = y_d = 0.
The problem consists of offering incentives to the vertices to trigger a cascade of influence that spreads to a given fraction α of the whole network. The goal is to minimize the sum of the incentives given to the individuals of the social network. The definition is given below.
Problem 1 (Least Cost Influence Problem (LCIP)). We are given a real number α ∈ [0, 1], a directed graph G with a weight d_ij on each arc (i, j) ∈ E, and a threshold t_i for each vertex i ∈ V. The goal is to find a vector y ∈ Z^{|V|} of incentives that minimizes the sum ∑_{i∈V} y_i, ensuring that at least ⌈α|V|⌉ vertices are activated by the end of the activation process.
In the remainder of this text, we will refer explicitly to the graph associated with a solution y, so we introduce the following notation. For time steps τ = 0, 1, ..., T − 1, let E_τ = {(i, j) ∈ E : i ∈ A_τ, j ∈ A_{τ+1} \ A_τ} be the set of arcs through which influence was exerted at time τ, and consider the set E* = ⋃_{τ=0}^{T−1} E_τ. We say that the propagation graph G* = (A_T, E*) is the graph induced by the solution y. The graph in Figure 1e is an example of a propagation graph. It follows from the definition of the activation process that the propagation graph is acyclic: every arc of E* goes from a vertex activated at some time τ to a vertex activated at time τ + 1.
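Since each arc of E* points from an earlier-activated vertex to a later-activated one, the propagation graph always admits a topological order. A quick way to verify acyclicity of a candidate propagation graph is Kahn's algorithm; this is a generic utility, not one of the paper's algorithms:

```python
from collections import deque

def is_acyclic(vertices, arcs):
    """Kahn's algorithm: returns True iff the directed graph is a DAG."""
    indeg = {v: 0 for v in vertices}
    out = {v: [] for v in vertices}
    for i, j in arcs:
        out[i].append(j)
        indeg[j] += 1
    # repeatedly remove vertices with no remaining incoming arcs
    queue = deque(v for v in vertices if indeg[v] == 0)
    seen = 0
    while queue:
        v = queue.popleft()
        seen += 1
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    # every vertex was removed iff there is no cycle
    return seen == len(vertices)
```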

INTEGER LINEAR PROGRAMMING FORMULATION
Different integer linear programming (ILP) formulations express the propagation using variables on the arcs (Ackerman et al., 2010; Günneç et al., 2020a,b; Raghavan & Zhang, 2019). The following formulation is a particular case of the model proposed by Fischetti et al. (Fischetti et al., 2018). For each vertex i ∈ V, let x_i be a binary variable indicating whether i is active at the end of the diffusion process. Similarly, for each arc (i, j) ∈ E, let z_ij be a binary variable indicating whether i exerts influence over j. As mentioned previously, the integer variable y_i is the incentive to be paid to a vertex i ∈ V.

min ∑_{i∈V} y_i

subject to
∑_{j∈N_i} d_ji z_ji + y_i ≥ t_i x_i,   for all i ∈ V,   (1)
∑_{(i,j)∈C} z_ij ≤ ∑_{i∈V(C)\{k}} x_i,   for every cycle C of G and every k ∈ V(C),   (2)
z_ij ≤ x_i,   for all (i, j) ∈ E,   (3)
∑_{i∈V} x_i ≥ ⌈α|V|⌉,   (4)
x_i ∈ {0, 1},   for all i ∈ V,   (5)
z_ij ∈ {0, 1},   for all (i, j) ∈ E,   (6)
y_i ∈ Z, y_i ≥ 0,   for all i ∈ V.   (7)

The objective function minimizes the total incentive offered to influence a given portion of the network. Constraints (1) model the condition that a vertex i ∈ V becomes active only when the total influence received from its active neighbors plus its incentive is greater than or equal to its threshold. The cycle elimination constraints in (2) generalize the classic cycle elimination constraints from (Grötschel et al., 1985) and impose that the propagation graph must be acyclic: the number of chosen arcs in a cycle C cannot be greater than the number of active vertices in V(C) \ {k}, where V(C) is the set of vertices in the cycle. Constraints (3) ensure that an arc (i, j) can be chosen only if vertex i is activated. Note that if there is an arc (j, i) in G, constraints (2) imply that z_ij + z_ji ≤ x_i, and therefore z_ij ≤ x_i is redundant in that case. Finally, constraint (4) imposes that, by the end of the diffusion process, the number of active vertices is at least ⌈α|V|⌉.

Solving the LCIP using branch-and-cut
Due to the number of possible cycles in the graph G, the number of constraints in (2) grows exponentially. The standard exact procedure for solving integer linear programs with exponentially many constraints is the branch-and-cut method, which combines LP-based branch-and-bound and constraint generation techniques. For completeness, we take a brief look at the branch-and-bound approach. Readers already familiar with these concepts can go straight to Section 4.
The branch-and-bound method solves the problem by dividing it into smaller sub-problems.The principle is to split the feasible space into successively smaller subsets so that distinct subsets can be evaluated directly by implicit enumeration until the best solution is found.The method employs a tree structure consisting of nodes and branches to manage the subdivisions of the feasible region.In this tree, each node represents a sub-problem.To generate the solutions and efficiently explore the feasible region, two problem-specific routines are required, the branch and the bound.
• Branching is the procedure that splits a parent node into smaller sub-problems, generating child nodes. In our case, we choose a binary variable x_i (or z_ij) and solve two sub-cases, namely the case x_i = 0 and the case x_i = 1.
• Bounding is the computation of lower and upper bounds used to avoid the complete exploration of all sub-trees. Since we are looking for optimality conditions that provide stopping criteria, an important task is to find a lower (dual) bound l ≤ z* and an upper (primal) bound u ≥ z*, where z* is the optimal value of the objective function of the original problem. If l ≥ u for the current sub-problem, we do not need to explore it further. Every feasible solution provides an upper bound (heuristics can be used to find one). The dual bound is an optimistic estimate of the objective function over the region represented by the node at hand. The most common approach to obtain dual bounds is to relax the integrality constraints of the original problem.
The set of all cycle elimination constraints in the model (1)-(7) is too large to write down explicitly. So, at each node of the branch-and-bound tree, violated inequalities are dynamically generated by solving the separation problem associated with the inequalities in (2). We implement the separation procedure proposed by (Grötschel et al., 1985). In short, the procedure adapts a shortest path algorithm to find a cycle that violates the cycle elimination constraints.
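As an illustration of how such a shortest-path separation might look, the sketch below searches, for each vertex k, for a minimum-cost cycle through k under arc costs x̂_i − ẑ_ij (nonnegative whenever constraints (3) hold in the relaxation); a cycle C violates inequality (2) for k exactly when its cost is below x̂_k. This is our own simplified rendering of the idea, not the authors' implementation:

```python
import heapq

def violated_cycle(vertices, arcs, x_hat, z_hat, eps=1e-9):
    """Look for a cycle C and vertex k violating
        sum_{(i,j) in C} z_ij <= sum_{i in V(C) minus {k}} x_i,
    which rearranges to: sum over C of (x_hat[i] - z_hat[i, j]) < x_hat[k].
    Arc costs are nonnegative when z_ij <= x_i holds, so Dijkstra applies.
    Returns (k, arcs of C) or None if no violated cycle was found.
    """
    cost = {(i, j): x_hat[i] - z_hat[(i, j)] for (i, j) in arcs}
    out = {v: [] for v in vertices}
    for i, j in arcs:
        out[i].append(j)

    for k in vertices:
        dist, pred = {k: 0.0}, {}
        best_cost, best_last = float('inf'), None
        heap = [(0.0, k)]
        while heap:
            d, v = heapq.heappop(heap)
            if d > dist.get(v, float('inf')):
                continue
            for w in out[v]:
                if w == k:  # an arc back to k closes a cycle
                    if d + cost[(v, k)] < best_cost:
                        best_cost, best_last = d + cost[(v, k)], v
                elif d + cost[(v, w)] < dist.get(w, float('inf')) - eps:
                    dist[w] = d + cost[(v, w)]
                    pred[w] = v
                    heapq.heappush(heap, (dist[w], w))
        if best_cost < x_hat[k] - eps:
            # walk predecessors back from best_last to recover the cycle
            nodes, v = [], best_last
            while v != k:
                nodes.append(v)
                v = pred[v]
            nodes.append(k)
            nodes.reverse()  # k, ..., best_last
            return k, list(zip(nodes, nodes[1:])) + [(best_last, k)]
    return None
```

For example, a fractional point with x̂_a = x̂_b = 1 and ẑ_ab = ẑ_ba = 1 on a two-cycle violates (2), since z_ab + z_ba = 2 exceeds x_b = 1.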
The efficiency of branch-and-bound depends on how close the bounds of the sub-problems are to the optimal solution. Thus, we propose a combinatorial relaxation algorithm that can be used at each node of the branch-and-bound tree to obtain lower bounds.

LOWER BOUND ALGORITHM
Consider the following aspects of the LCIP. The propagation graph associated with every solution y is a DAG and therefore has at least one vertex with no incoming arcs (a source). Thus, for any solution, at least one vertex needs to be paid its full threshold value. In the best case, the vertices chosen to receive the full incentive are those with the minimum threshold value.
We introduce a combinatorial algorithm to obtain a dual (lower) bound for this problem. The idea is to use connectivity properties of a sub-graph of the input graph at each node of the branch-and-bound tree. In branch-and-bound, the recursive decomposition of a problem into sub-problems generates a decision tree whose root corresponds to the original problem, and each node corresponds to a smaller sub-problem. A natural branching rule in a branch-and-bound algorithm is variable fixing. In the case of formulation (1)-(7), we can fix the values of the binary variables x and z. Suppose a binary variable, say z_ij, is selected as the branching variable. Then two sub-problems are generated, by fixing z_ij = 0 in one branch and z_ij = 1 in the other. We observe that fixing some arc variables z_ij (or vertex variables x_i) at zero means that these arcs (or vertices) were not chosen, which can disconnect the sub-graph related to that node of the decision tree. We are interested in using this information to increase the lower bound of each sub-problem during the branch-and-bound algorithm. To illustrate the idea behind this strategy, consider the following example. Let the directed graph in Figure 2a be the input graph of the LCIP. We start the branch-and-bound tree by fixing some arc variables. Figure 3 shows the first levels of such a tree, where the black node represents the sub-problem obtained by fixing the arc variables z_ae = 1 and z_dc = 0. As the propagation graph of this sub-problem cannot contain the arc (d, c), we can represent the sub-graph associated with this node by removing the arc (d, c) from the original graph. In this way, we arrive at the sub-graph in Figure 2b, which is not strongly connected and is made of three different strongly connected components.

A different sub-graph G′ of the input graph G is considered at each branch-and-bound tree node. We obtain G′ from G by removing the arcs and vertices whose corresponding variables were fixed at zero by the decision tree. That is, V(G′) = V \ {i : x_i is fixed at zero} and E(G′) = E \ {(i, j) : z_ij is fixed at zero} at the current node of the branch-and-bound tree.
Let H be the condensed component graph of G′, i.e., the directed acyclic graph obtained by contracting each strongly connected component (s.c.c.) of G′ into a single vertex. From now on, we consider that H has arc weights and vertex thresholds defined as follows. Let C_u and C_v be the s.c.c.'s associated with u, v ∈ V(H). We set the threshold of each vertex v ∈ V(H) to t_v = min{t_i : i ∈ C_v}, and the weight of each arc (u, v) ∈ E(H) to be the sum of the weights of all arcs of G′ that go from C_u to C_v. The algorithm to find a lower bound for the LCIP follows (Algorithm 1). Let l_LP be the lower bound obtained at the current node of the branch-and-bound tree by the standard LP-relaxation, and let l be the lower bound obtained by the procedure described in Algorithm 1.
When updating the lower bound l at the current node, we do l = max{l LP , l}.
Now, we show that Algorithm 1 indeed returns a valid lower bound. If the sub-graph G′ is strongly connected, the lower bound l is trivial by the observations at the beginning of this section. We also return this trivial lower bound if α < 1, even when G′ is not strongly connected: finding a tighter lower bound in this case may be computationally costly, and the algorithm would no longer be scalable. In Section 4.1, we explain this situation in detail.
When G′ is not strongly connected and α = 1, we can improve the trivial lower bound by examining the structure of the condensed component graph H. In this case, computing the total cost of activating all the vertices of the graph H (step 6 of Algorithm 1) is a sub-problem. As H is the condensed component graph of G′, H is directed and acyclic. Therefore, we apply the algorithm proposed by (Günneç et al., 2020a) to get an optimal solution of the LCIP on DAGs in linear time.
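As we read it from the description above, Algorithm 1 can be sketched as follows. The SCC routine is standard Tarjan, and the α = 1 step exploits the fact that on a DAG with full activation every vertex eventually receives all of its incoming influence, so paying each condensed vertex max{0, t_v − (total incoming weight)} is optimal; this closed form is our stand-in for the linear-time DAG algorithm of Günneç et al. (2020a), and all names are ours:

```python
def strongly_connected_components(vertices, arcs):
    """Tarjan's algorithm; returns the list of s.c.c.'s of the digraph."""
    out = {v: [] for v in vertices}
    for i, j in arcs:
        out[i].append(j)
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in out[v]:
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an s.c.c.
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in vertices:
        if v not in index:
            visit(v)
    return sccs

def lcip_lower_bound(vertices, arcs, weights, thresholds, alpha):
    """Sketch of Algorithm 1:
    - if alpha < 1 or G' is strongly connected, return the trivial bound
      min threshold;
    - otherwise (alpha == 1), solve the LCIP exactly on the condensed DAG H.
    """
    sccs = strongly_connected_components(vertices, arcs)
    if alpha < 1 or len(sccs) == 1:
        return min(thresholds.values())
    comp_of = {v: idx for idx, comp in enumerate(sccs) for v in comp}
    # thresholds of H: minimum threshold inside each component
    t = [min(thresholds[v] for v in comp) for comp in sccs]
    # total arc weight entering each condensed vertex
    in_weight = [0] * len(sccs)
    for (i, j) in arcs:
        u, v = comp_of[i], comp_of[j]
        if u != v:
            in_weight[v] += weights[(i, j)]
    return sum(max(0, t[v] - in_weight[v]) for v in range(len(sccs)))
```

On the small example used in the tests (two 2-cycles joined by one arc), the condensation has two vertices and the α = 1 bound exceeds the trivial one.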
The proof of correctness of our combinatorial relaxation follows.
Lemma 1. Let G′ be a sub-graph of G that is not strongly connected, and let H be the condensed component graph of G′. If α = 1, the optimum of the LCIP on H is a lower bound on the optimum of the LCIP on G′.
Proof. Let ȳ and y* be the optimal solutions to the LCIP on H and G′, respectively. We will prove that for each v ∈ V(H) and its associated s.c.c. C_v of G′, we have that

∑_{i∈C_v} y*_i ≥ t_v − ∑_{u : (u,v)∈E(H)} d_uv.

If v is a source in H, we are obligated to pay the full threshold value of at least one vertex in C_v. Hence, it follows that ∑_{i∈C_v} y*_i ≥ t_v. Now suppose that v is not a source in H. Let G* be the propagation graph associated with y* and let G*_v be the sub-graph of G* induced by V(C_v). Since the propagation graph is acyclic, there must exist a vertex j ∈ V(G*_v) with no in-neighbor inside C_v. Then

∑_{i∈C_v} y*_i ≥ y*_j ≥ t_j − ∑_{u : (u,v)∈E(H)} d_uv ≥ t_v − ∑_{u : (u,v)∈E(H)} d_uv,

where the second inequality follows from the fact that j receives influence only from vertices outside of C_v, and the third inequality follows from the definition of t_v.
Altogether, we conclude that

∑_{v∈V(H)} ȳ_v ≤ ∑_{v∈V(H)} max{0, t_v − ∑_{u : (u,v)∈E(H)} d_uv} ≤ ∑_{v∈V(H)} ∑_{i∈C_v} y*_i = ∑_{i∈V(G′)} y*_i,

where the first inequality holds because assigning each vertex v of H the incentive max{0, t_v − ∑_{u : (u,v)∈E(H)} d_uv} is feasible for the LCIP on H with α = 1. □
Theorem 1. The value returned by Algorithm 1 is a lower bound for the LEAST COST INFLUENCE PROBLEM on the sub-graph G′.
Proof. For the first case, the result follows directly from the observations at the beginning of this section.
The case in which G′ is not strongly connected and α = 1 (step 6 of the algorithm) holds by Lemma 1. □

Our algorithm has polynomial time complexity: the strongly connected components of G′ and its condensation can be computed in linear time, and the LCIP on the condensed DAG H is solved in linear time by the algorithm of (Günneç et al., 2020a). Algorithm 1 is general and finds a dual bound for every case of the LCIP. However, there is room for improvement in the case where α < 1 and the sub-graph G′ is not strongly connected. The idea is to do something similar to the case α = 1 and solve the problem on the condensed component graph, which leads to a new combinatorial problem, as we explain next.
To improve the lower bound for the case in which we are not interested in achieving 100% adoption, we propose assigning a weight to each vertex of the condensed component graph H and using the total weight of the activated vertices to satisfy the required portion of reached vertices in the original problem. In this way, we arrive at a new variant of the LCIP, defined in Problem 2.
To assign weights to the vertices, we proceed as follows. Let H be the condensed component graph of G′ and, for each v ∈ V(H), set w(v) = |V(C_v)|, the number of vertices of G′ inside the component C_v. With the thresholds and arc weights of H defined as in Section 4, the resulting problem can be modeled as

(WLCIP)   min ∑_{v∈V(H)} y_v

subject to
∑_{v∈V(H)} w(v) x_v ≥ κ,   (8)
∑_{u∈N_v} d_uv x_u + y_v ≥ t_v x_v,   for all v ∈ V(H),   (9)
x_v ∈ {0, 1},   for all v ∈ V(H),   (10)
y_v ∈ Z, y_v ≥ 0,   for all v ∈ V(H),   (11)

where the binary decision variables x_v indicate that v is active, and the integer variable y_v ≥ 0 represents the incentive assigned to v.
In this model, the objective function minimizes the incentives paid to the vertices of the graph H. The cover constraint in (8) ensures that the total weight of the active vertices is at least the parameter κ. Note that the value of κ relates to the original graph G because we still want to reach the portion α of the original network, so we set κ = ⌈α|V(G)|⌉. If we used κ = ⌈α|V(G′)|⌉ instead, we would still have a valid lower bound, but with ⌈α|V(G)|⌉ the lower bound can be higher (which is better). If ∑_{v∈V(H)} w(v) < κ at some point during the branch-and-bound search, the problem (WLCIP) is infeasible, and we must cut off the current node. Constraints in (9) respect the thresholds: the total influence coming from the active neighbors of v, together with an incentive, needs to be at least the threshold of v if it is active. Constraints (10) and (11) ensure the integrality of the variables.
When α = 1, we have a restricted case of the WLCIP in which we want to activate all the vertices. Therefore, we can ignore the weights of the vertices of graph H, and the problem becomes the LCIP again.
Theorem 3 states that an optimal solution of the WLCIP on H provides a valid lower bound for the LCIP on G ′ .
Theorem 3. Let G′ be a sub-graph of G and H the condensed component graph of G′. The optimum of the WLCIP on H is a lower bound on the optimum of the LCIP on G′.
Proof. Let ȳ_v and y*_i be the optimal incentives given to each v ∈ V(H) and i ∈ V(G′) by the WLCIP and the LCIP, respectively. We will show that it is always possible to construct a feasible solution ŷ for the WLCIP on H from an optimal solution of the LCIP on G′ such that ∑_{v∈V(H)} ŷ_v ≤ ∑_{i∈V(G′)} y*_i. Since the optimal value ∑_{v∈V(H)} ȳ_v is at most the value of any feasible solution, this is sufficient to prove what we want.
Let A_{G′} ⊆ V(G′) be the set of active vertices related to y*. Define the set of active vertices in H as A_H = {v ∈ V(H) : V(C_v) ∩ A_{G′} ≠ ∅}, where C_v is the strongly connected component associated with a condensed vertex v. It means that we activate a vertex v of H if C_v has at least one active vertex; since w(v) = |V(C_v)|, the total weight of A_H is at least |A_{G′}| ≥ κ, so constraint (8) is satisfied. This way, we can compute the incentives to be paid as

ŷ_v = max{0, t_v − ∑_{u ∈ N_v ∩ A_H} d_uv} for v ∈ A_H, and ŷ_v = 0 otherwise.

It remains to compare the new solution ŷ with y* (see Figure 5 for a concrete example of the notation used in the rest of the proof). So, let G* be the propagation graph of G′ associated with y* and denote by G*_v the sub-graph of G* induced by the vertices of C_v. For each v ∈ A_H, since G*_v is acyclic, there is a vertex j′ ∈ V(G*_v) with no in-neighbor inside C_v. We have that

ŷ_v ≤ t_{j′} − ∑_{u ∈ N_v ∩ A_H} d_uv   (12)
   ≤ y*_{j′}.   (13)

Inequality 12 holds because t_v ≤ t_{j′}, by the definition of the thresholds of H. Inequality 13 holds because j′ is activated in y* using only its incentive and influence coming from outside C_v. To prove Inequality 14, which bounds that outside influence, let E_uv = {(i, j) ∈ E(G′) : i ∈ V(C_u) and j ∈ V(C_v)} be the set of arcs going from C_u to C_v. By the definition of the weight of the arcs of H, we have that

∑_{(i,j) ∈ E_uv} d_ij = d_uv,   (14)

so the influence that j′ receives from a component C_u with u ∈ N_v ∩ A_H is at most d_uv, and hence y*_{j′} ≥ t_{j′} − ∑_{u ∈ N_v ∩ A_H} d_uv. Altogether,

∑_{v∈V(H)} ȳ_v ≤ ∑_{v∈A_H} ŷ_v ≤ ∑_{v∈A_H} y*_{j′(v)} ≤ ∑_{i∈V(G′)} y*_i,

where the first inequality comes from the optimality of ȳ, and the second inequality holds by Inequalities 12 and 13. □

We are interested in solving the WLCIP on condensed component graphs in this work. Despite this, the problem is general and is defined for any directed graph. It is a generalization of the LCIP where, for each vertex i of a given input graph, a weight w(i) is attached to it. We can see the LCIP as a particular case where all vertices have the same weight. So, as the LCIP is NP-hard, the WLCIP is NP-hard as well. Besides being difficult to solve in general, unfortunately, this problem is also difficult to solve on DAGs, as stated in the following theorem.
Theorem 4. The WLCIP is NP-hard even when the input graph is a directed acyclic graph.

Proof. Suppose that there is a polynomial-time algorithm for the WLCIP on DAGs. We show that, in such a case, we would be able to solve the minimization version of the knapsack problem (min-knapsack) in polynomial time. However, min-knapsack is a notorious NP-hard problem (Carnes & Shmoys, 2008; Csirik, 1991).
Let (I, c, b, B) be an instance of min-knapsack, where for each item i ∈ I, c(i) > 0 is its cost and b(i) > 0 its size. We aim to find a subset J ⊆ I such that ∑_{i∈J} b(i) ≥ B and ∑_{i∈J} c(i) is minimum. Create the following instance of the WLCIP: set V = {v_i : i ∈ I}, E = ∅, w(v_i) = b(i), t_{v_i} = c(i), and κ = B. Since E = ∅, no influence is exerted, and activating a vertex v_i costs exactly its threshold c(i). Let A ⊆ V be the set of activated vertices in an optimal solution. Observe that J = {i : v_i ∈ A} is an optimal solution for min-knapsack. □

Even though the WLCIP is NP-hard on DAGs, demanding a somewhat elaborate mathematical approach, it deserves further consideration. If we solved the WLCIP to optimality to obtain a lower bound for α < 1, Algorithm 1 would lose scalability. Furthermore, we observed in preliminary experiments that the LP-relaxation of the formulation (8)-(11) does not provide a better lower bound than min {t_i} (line 3 of Algorithm 1). Because of this, we do not solve the WLCIP in the experiments of Section 6. Attempts should be made to find lower bounds for the WLCIP tighter than the LP-relaxation of the formulation (8)-(11), for example via a combinatorial relaxation or the LP-relaxation of a stronger formulation.
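The reduction can be checked mechanically on a toy instance: with E = ∅, the WLCIP collapses to exactly the min-knapsack choice of which vertices to pay. Both brute-force solvers below exist only to illustrate the equivalence (exponential time, tiny instances, names are ours):

```python
from itertools import combinations

def min_knapsack_bruteforce(items, cost, size, B):
    """Cheapest subset J with total size >= B (exhaustive search)."""
    best = float('inf')
    for r in range(len(items) + 1):
        for J in combinations(items, r):
            if sum(size[i] for i in J) >= B:
                best = min(best, sum(cost[i] for i in J))
    return best

def wlcip_on_arcless_graph(weights, thresholds, kappa):
    """WLCIP with E = empty set: no influence flows, so activating v costs
    exactly t_v, and we need total vertex weight at least kappa --
    which is precisely min-knapsack, the point of the reduction."""
    vertices = list(weights)
    best = float('inf')
    for r in range(len(vertices) + 1):
        for A in combinations(vertices, r):
            if sum(weights[v] for v in A) >= kappa:
                best = min(best, sum(thresholds[v] for v in A))
    return best
```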

BRANCHING RULE
Observe that the greater the number of s.c.c.'s in G′, the greater the lower bound can be. To take advantage of this, we formulate a branching rule that increases the number of strongly connected components in the sub-graphs associated with the child-node sub-problems in the branch-and-bound tree.
The idea is to give higher priority to branching on the fractional variables associated with strong bridges (Definition 2) or strong articulation points (Definition 3) of G′. In this way, when we fix at zero a variable associated with a strong articulation point (or strong bridge) and remove the corresponding vertex (or arc) from G′, the number of components of G′ increases. Consequently, the number of vertices of the condensed graph increases too and, in turn, the lower bound can also increase.
Definition 2 (Strong Bridge).A strong bridge of a directed graph G is an arc whose removal increases the number of strongly connected components of G.
Definition 3 (Strong Articulation Point).A strong articulation point of a directed graph G is a vertex whose removal increases the number of strongly connected components of G.
All the strong bridges and strong articulation points can be computed in linear time (Italiano et al., 2012).
Algorithm 2 describes a branching rule based on these concepts. To describe it, we define the following notation. Denote by S the set of all connectivity cuts of G′, that is, S contains all its strong bridges and strong articulation points. Also, let sc(G′, s) represent the number of strongly connected components generated by removing s from G′, where s ∈ S can be an arc or a vertex of G′.
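To make the selection step concrete, here is a naive rendering of it: for every fractional candidate, it recomputes sc(G′, s) by brute force and keeps the candidate that creates the most s.c.c.'s. In a real implementation, the linear-time algorithm of Italiano et al. (2012) would replace the brute-force recomputation; the encoding and function names are ours:

```python
def scc_count(vertices, arcs):
    """Number of s.c.c.'s via pairwise reachability (fine for a sketch;
    Tarjan's algorithm gives linear time)."""
    out = {v: set() for v in vertices}
    for i, j in arcs:
        out[i].add(j)

    def reach(src):
        seen, stack = {src}, [src]
        while stack:
            for w in out[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    reachable = {v: reach(v) for v in vertices}
    comps = set()
    for v in vertices:
        # u and v share an s.c.c. iff each reaches the other
        comps.add(frozenset(u for u in vertices
                            if u in reachable[v] and v in reachable[u]))
    return len(comps)

def best_connectivity_cut(vertices, arcs, fractional):
    """Among elements with fractional LP value, pick the strong bridge or
    strong articulation point whose removal creates the most s.c.c.'s.
    Returns None if no fractional element is a connectivity cut."""
    best, best_sc = None, scc_count(vertices, arcs)
    for s in fractional:
        if isinstance(s, tuple):   # arc -> candidate strong bridge
            sc = scc_count(vertices, [a for a in arcs if a != s])
        else:                      # vertex -> candidate articulation point
            vs = [v for v in vertices if v != s]
            sc = scc_count(vs, [(i, j) for i, j in arcs
                                if i != s and j != s])
        if sc > best_sc:
            best, best_sc = s, sc
    return best
```

On two directed 2-cycles sharing vertex b, both b and the arc (a, b) are connectivity cuts; the rule picks whichever yields more components first.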
For a solution (x̂, ŷ, ẑ) of the continuous relaxation of a sub-problem (or feasible region) P, the set F = {s ∈ S : x̂_s ∉ Z or ẑ_s ∉ Z} contains the elements of G′ associated with the fractional variables of the LP-relaxation of the ILP model in Section 3. The list L represents the list of active nodes of the branch-and-bound tree. Each node represents a sub-problem, and P denotes the current one.

In Algorithm 2, the first step is to compute all the strong bridges and strong articulation points. The subroutine CONNECTIVITYCUTS(G′) represents this procedure, which can be done using the algorithm presented in (Italiano et al., 2012). If there are fractional variables associated with the connectivity cuts, we choose the one that generates the most strongly connected components (line 5) and branch on it, fixing it at zero in one child node and at one in the other. Otherwise, another (default) branching rule is used.

COMPUTATIONAL EXPERIMENTS
We conducted computational experiments with the branch-cut-and-price framework SCIP 6.0, running on an Intel Core i5-3210M 2.50GHz with 4GB of RAM. We used Gurobi Optimizer 8.1 as the underlying LP solver and implemented the algorithms in C++. The test set is composed of synthetic random directed graphs and real networks.
Table 1 summarizes the difference in performance when we apply the lower bound in the branch-and-cut. Each value in the table is the average of 5 executions. Every execution generates a new graph of a given size and average degree. The first column contains the name of the instances in the format n-deg, where n is the number of vertices and deg is the average degree. The second column is the value of β, the rewiring probability of the generative random graph model. The third column is the value of α, the portion of the network to be activated in the LCIP. In the other columns, BC means the branch-and-cut algorithm using only the LP-relaxation, and BC+ means using our combinatorial relaxation to get the lower bound, including the branching rule (Algorithm 2). Next, we present the time in seconds for the runs that finished before the time limit, which is set to 1800 seconds. Dashed cells in the column "time" indicate that the running time reached the time limit. We mark the best results in boldface. E.g., in the instance "50-4" with α = 1, our method (BC+) required less running time to find the optimum. In the column "dual bound", the higher, the better, while the contrary holds for the "primal bound". Observe that our algorithm finds a better value for the dual bound in most instances.
The last column contains the corresponding average optimality gap between the dual and primal bounds. The symbol ∞ in the column "gap" denotes that the gap is infinite or very large. The optimality gap is computed as defined in both solvers SCIP and Gurobi: let l be the dual bound and u be the primal bound; we set the gap to ∞ if l = 0, and otherwise the gap is (u − l)/l. Recall that the values in Table 1 are averages, so the gaps in the table are average gaps. The number in parentheses next to the infinity symbol is the number of instances where the lower bound is greater than zero.

The results exhibited in Table 1 illustrate our algorithm's behavior for graphs of different sizes and densities. Our algorithm is not effective when the graphs are small (50-4, for example). In some cases, the running time is worse than the branch-and-cut using only the LP-relaxation (BC). However, as the density and the number of vertices increase, our algorithm (BC+) achieves better gaps. While BC has lower bounds equal to zero in many instances, BC+ always provides a lower bound greater than zero, contributing to smaller gaps. In the vast majority of the instances, our algorithm provides smaller gaps.
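The gap convention described above can be written down directly (matching the SCIP/Gurobi definition as stated in the text):

```python
def optimality_gap(dual_bound, primal_bound):
    """Relative optimality gap: infinity when the dual bound is zero,
    otherwise (u - l) / l for dual bound l and primal bound u."""
    if dual_bound == 0:
        return float('inf')
    return (primal_bound - dual_bound) / dual_bound
```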

Real world networks
We also performed experiments with real-world social networks to demonstrate the effects of applying the lower bound algorithm to real data. We used datasets from the Koblenz Network Collection (Kunegis, 2013), human social network category.
Table 2 shows a short description of each social network used here. For each network, n is the number of vertices and m is the number of arcs. The weights on the arcs are the original weights of the networks; on graphs with no arc weights, we set the weights to 1. Lastly, the threshold t_i, for each vertex i, is defined in the same way as for the synthetic graphs (see Section 6.1).

Table 3 summarizes the results for the real-world social networks. Dashed cells indicate that the running time reached the time limit (1800 seconds). Also, we enabled all the presolving methods of the SCIP framework for both algorithms BC and BC+. These presolving methods provide gains in running time or gap reduction for our algorithm, except on the Residence hall and Advogato networks. In the column entitled "dual bound", the higher the number, the better. This column shows that our algorithm provides higher lower bounds for the majority of the networks for different values of α, which implies gains in running time or gap reduction. Regarding the primal bounds, the reader can see that the gains of BC+ are not as expressive compared to BC. Despite that, the gains with the lower bounds outweigh the losses with the primal bound. Moreover, lower bounds are commonly used to approximate the optimality gap of heuristic methods.

Note that for three networks (Wiki-vote, DBLP, Cora Citation), there is little benefit in applying the dual bound algorithm, i.e., the performance of the branch-and-cut is almost the same whether using the lower bound or not. These networks have in common that the problem was entirely solved at the root node of the branch-and-bound tree for both algorithms BC and BC+. We believe this happens because such networks are sparser than the others. Thus, no branching is performed on the variables, and the lower bound algorithm has no chance to exploit the connectivity of the sub-graphs. When there are few changes in the structures of the sub-graphs obtained from the branches, it is expected that our algorithm cannot increase the dual bounds significantly.

CONCLUSION
We proposed and analyzed an algorithm to compute a lower bound for the Least Cost Influence Problem based on particular properties of the problem. In addition to the algorithm itself, some auxiliary strategies proved helpful in increasing the lower bounds. We designed custom branching strategies that strengthen the lower bounds by using strong bridges and strong articulation points. We also provided computational experiments on large social networks to assess practical applicability, showing that our algorithm can outperform the linear programming relaxation.
Our results show that the subject should be approached carefully, and we envision some room for improvement. For example, it may be possible to improve the experimental results by finding a relaxed solution whose value corresponds to the lower bound found by our algorithm. Also, in dense graphs, we observed that the bounds behave better when α = 1 than when α < 1. However, when we are not interested in influencing all the individuals of the network and the sub-graphs are not strongly connected, the task becomes significantly more complex: obtaining higher lower bounds then requires solving a new NP-hard problem, the Weighted Least Cost Influence Problem (WLCIP). Because of this, it may be preferable to keep the algorithm simple and efficient. Despite the theoretical conclusions about the WLCIP, we do not rule out the possibility of finding other methods for obtaining better dual bounds efficiently in the case of α < 1. Seeking new ways to approach this case therefore remains an open question in this study.
To conclude, we believe that our theoretical findings on the WLCIP can open the path for research on deriving a strong formulation for the WLCIP and on finding combinatorial algorithms to solve it.

Figure 1 - Activation process in the threshold model with incentives, starting from the target set S = {a}. The number attached to each vertex represents its threshold. The set A_τ contains the active vertices at time τ, for τ = 0, ..., 4. The label on each arc (i, j) denotes the weight of influence d_ij. Highlighted arcs indicate who influences whom in the process. The vector y, indexed by the vertices, contains the incentive offered to each vertex.
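The activation process the caption describes can be simulated directly: a vertex becomes active once the influence received from its active in-neighbors plus its incentive reaches its threshold. The sketch below is a minimal illustration on a hypothetical 3-vertex instance (not the instance drawn in Figure 1); the function `activate` and all weights are our own illustrative choices.

```python
from typing import Dict, List, Set, Tuple

def activate(
    arcs: Dict[Tuple[str, str], int],   # influence weights d_ij on the arcs
    thresholds: Dict[str, int],         # threshold t_i of each vertex
    incentives: Dict[str, int],         # incentive y_i offered to each vertex
    seed: Set[str],                     # target set S, active at time 0
) -> List[Set[str]]:
    """Return the sets A_0, A_1, ... of active vertices at each time step."""
    active = set(seed)
    rounds = [set(active)]
    while True:
        # A vertex j activates once the incoming influence from active
        # neighbors plus its incentive y_j reaches its threshold t_j.
        new = {
            j for j in thresholds
            if j not in active
            and sum(d for (i, k), d in arcs.items() if k == j and i in active)
                + incentives.get(j, 0) >= thresholds[j]
        }
        if not new:
            return rounds
        active |= new
        rounds.append(set(active))

# Hypothetical instance: a alone cannot activate b (d_ab = 1 < t_b = 2),
# but an incentive y_b = 1 closes the gap, and b then activates c.
result = activate(
    arcs={("a", "b"): 1, ("b", "c"): 2},
    thresholds={"a": 1, "b": 2, "c": 2},
    incentives={"b": 1},
    seed={"a"},
)
# result equals [A_0, A_1, A_2] = [{a}, {a, b}, {a, b, c}]
```

Removing the incentive y_b makes the process stop at A_0 = {a}, which mirrors the role incentives play in the LCIP.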

Figure 2 - The directed graph in Figure (b) is obtained by removing the arc (d, c) from the graph in Figure (a). The vertex colors indicate the strongly connected components.

Figure 3 - Example of a decision tree with branching decisions on the binary variables. The black node represents the sub-problem obtained by fixing the variables z_ae = 1 and z_dc = 0.

Figure 4 - Example of a condensed component graph. The original graph is on the left, and the condensed component graph is on the right.

Figure 4 presents an example of the condensed component graph of a small graph. The labels on each vertex denote its name and threshold, respectively. For instance, the vertex in the upper left corner has name a and threshold t_a = 1. In the leftmost graph, there are three strongly connected components C_u, C_w and C_v. For simplicity, we only show the arc weights between different components. In the second graph, we have the condensed component graph with the new thresholds and arc weights. For instance, the arc (u, v) has weight d_uv = d_ad + d_cf = 4, and the vertex v has threshold t_v = min{2, 2, 3} = 2.
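The aggregation just described (a component's threshold is the minimum threshold of its members; a condensed arc's weight is the sum of the inter-component arc weights) can be sketched as follows. The instance below is hypothetical, chosen only so the aggregates match the numbers in the text (d_uv = 3 + 1 = 4 and t_v = min{2, 2, 3} = 2); `condense`, the component labels, and the individual weights are illustrative, not taken from the paper's figure.

```python
from collections import defaultdict
from typing import Dict, Tuple

def condense(
    arcs: Dict[Tuple[str, str], int],  # arc weights d_ij of the original graph
    thresholds: Dict[str, int],        # thresholds t_i of the original graph
    comp: Dict[str, str],              # vertex -> its strongly connected component
):
    """Aggregate thresholds (min per component) and arc weights
    (sum per ordered component pair) into the condensed graph."""
    t: Dict[str, int] = {}  # new threshold: minimum over the component's vertices
    for i, th in thresholds.items():
        c = comp[i]
        t[c] = min(t.get(c, th), th)
    d = defaultdict(int)    # new arc weight: sum over inter-component arcs
    for (i, j), w in arcs.items():
        if comp[i] != comp[j]:          # intra-component arcs disappear
            d[(comp[i], comp[j])] += w
    return t, dict(d)

# Hypothetical components: C_u = {a, c}, C_v = {d, e, f}.
comp = {"a": "u", "c": "u", "d": "v", "e": "v", "f": "v"}
thresholds = {"a": 1, "c": 1, "d": 2, "e": 2, "f": 3}
arcs = {("a", "d"): 3, ("c", "f"): 1,            # inter-component: d_ad + d_cf = 4
        ("a", "c"): 1, ("c", "a"): 1,            # inside C_u
        ("d", "e"): 1, ("e", "f"): 1, ("f", "d"): 1}  # inside C_v
t, d = condense(arcs, thresholds, comp)
# t["v"] == min{2, 2, 3} == 2 and d[("u", "v")] == 4
```

Both aggregation passes touch each vertex and each arc once, so the step adds only O(n + m) work on top of computing the components.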

Theorem 2. Given a directed graph G with n vertices and m arcs, Algorithm 1 takes O(n + m) time.
Proof. To construct the condensed component graph, we can use the Kosaraju-Sharir algorithm (Sharir, 1981), which runs in O(n + m) time. Line 3 can be executed in O(n) time, and line 6 in O(n + m). □

Figure 5 sub-captions: (a) Propagation graph of the LCIP in a directed graph G′ for α = 0.75. The thresholds are t_i = 2 for every i ∈ V(G′), and d_ij = 1 for every arc (i, j) ∈ E(G′), except for (g, d), which has d_gd = 2. (b) Propagation graph of the WLCIP in the condensed component graph H of G′ for κ = 6. The thresholds are t_v = 2 for every v ∈ V(H); the vertex and arc weights are shown in the figure.
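Theorem 2's linear running time rests on computing the strongly connected components in O(n + m). A minimal sketch of the Kosaraju-Sharir approach follows; the name `kosaraju_scc` and the integer-vertex interface are our own, since the paper does not prescribe an implementation.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def kosaraju_scc(n: int, arcs: List[Tuple[int, int]]) -> Dict[int, int]:
    """Kosaraju-Sharir: one DFS pass ordering vertices by finish time,
    then a pass on the reversed graph collecting components. O(n + m)."""
    fwd, rev = defaultdict(list), defaultdict(list)
    for i, j in arcs:
        fwd[i].append(j)
        rev[j].append(i)

    order, seen = [], [False] * n
    def dfs1(s: int) -> None:  # iterative DFS recording finish order
        stack = [(s, iter(fwd[s]))]
        seen[s] = True
        while stack:
            v, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(v)    # v is finished
                stack.pop()
            elif not seen[nxt]:
                seen[nxt] = True
                stack.append((nxt, iter(fwd[nxt])))
    for v in range(n):
        if not seen[v]:
            dfs1(v)

    comp, cur = {}, 0
    for v in reversed(order):      # decreasing finish time
        if v in comp:
            continue
        stack, comp[v] = [v], cur  # flood-fill one component on the reverse graph
        while stack:
            u = stack.pop()
            for w in rev[u]:
                if w not in comp:
                    comp[w] = cur
                    stack.append(w)
        cur += 1
    return comp

# The cycle 0 -> 1 -> 2 -> 0 is one component; vertex 3 is its own.
comp = kosaraju_scc(4, [(0, 1), (1, 2), (2, 0), (2, 3)])
```

Both passes visit every vertex and arc a constant number of times, which is exactly the O(n + m) bound the proof of Theorem 2 invokes.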

Figure 5 - The solution of the LCIP in Figure 5a has value 5; the incentives are y_a = 2, y_b = y_g = y_h = 1, and y_c = y_d = y_e = y_f = 0. The solution of the WLCIP in Figure 5b has value 3; the incentives are y_u = 2, y_w = 1, and y_v = 0. In Figure 5a, A_G′ = V(G′) \ {c} and G*_v = {{d, e}, {e, f}}, where d is the source of G*_v. In Figure 5b, from the solution of the LCIP on G′, we have A_H = V(H). Considering j′ = d, in this example, {N_j′ \ C_v} ∩ A_G′ = {g}.

Algorithm 1 - Combinatorial Lower Bound. Input: G′, influence d on the arcs, thresholds t on the vertices, and α. Output: a lower bound l for the LCIP.

WLCIP - Given a condensed component graph H with a threshold t_v for each vertex v ∈ V(H), a weight of influence d_uv > 0 on each arc (u, v) ∈ E(H), and an integer κ = ⌈α|V(G)|⌉, find a vector y ∈ Z^{|V|} of incentives which minimizes the sum ∑_{v∈V(H)} …

Table 1 - Experiments with synthetic small-world graphs.

Table 2 - Real-world social networks.