Iterative Byzantine Vector Consensus in Incomplete Graphs

This research is supported in part by National Science Foundation award CNS-1059540 and Army Research Office grant W911NF-07-1-0287. Any opinions, findings, and conclusions or recommendations expressed here are those of the authors and do not necessarily reflect the views of the funding agencies or the U.S. government.
Abstract
This work addresses Byzantine vector consensus (BVC), wherein the input at each process is a d-dimensional vector of reals, and each process is expected to decide on a decision vector that is in the convex hull of the input vectors at the fault-free processes [3, 8]. The input vector at each process may also be viewed as a point in the d-dimensional Euclidean space, where d >= 1 is a finite integer. Recent work [3, 8] has addressed Byzantine vector consensus in systems that can be modeled by a complete graph. This paper considers Byzantine vector consensus in incomplete graphs. In particular, we address a particular class of iterative algorithms in incomplete graphs, and prove a necessary condition, and a sufficient condition, for the graphs to be able to solve the vector consensus problem iteratively. We present an iterative Byzantine vector consensus algorithm, and prove it correct under the sufficient condition. The necessary condition presented in this paper for vector consensus does not match with the sufficient condition for d > 1; thus, a weaker condition may potentially suffice for Byzantine vector consensus.
1 Introduction
This work addresses Byzantine vector consensus (BVC), wherein the input at each process is a d-dimensional vector of reals, and each process is expected to decide on a decision vector that is in the convex hull of the input vectors at the fault-free processes [3, 8]. The input vector at each process may also be viewed as a point in the d-dimensional Euclidean space, where d >= 1 is a finite integer. Due to this correspondence, we use the terms point and vector interchangeably. Recent work [3, 8] has addressed Byzantine vector consensus in systems that can be modeled by a complete graph. The correctness conditions for Byzantine vector consensus (elaborated below) cannot be satisfied by independently performing consensus on each element of the input vectors; therefore, new algorithms are necessary. Here we consider Byzantine vector consensus in incomplete graphs. In particular, we address a particular class of iterative algorithms in incomplete graphs, and prove a necessary condition, and a sufficient condition, for the graphs to be able to solve the vector consensus problem iteratively. The paper extends our past work on scalar consensus in incomplete graphs in the presence of Byzantine faults [9], which yielded an exact characterization of graphs in which the problem is solvable. We present an iterative Byzantine vector consensus algorithm, and prove it correct under the sufficient condition; the proof follows a structure previously used in our work to prove correctness of other consensus algorithms [7, 5].
The necessary condition presented in this paper for vector consensus does not match with the sufficient condition for d > 1; thus, it is possible that a weaker condition may also suffice for Byzantine vector consensus. We hope that this paper will motivate further work on identifying the tight sufficient condition.
In other related work [6], we present another generalization of the consensus problem considered in [3, 8]. In particular, [6] considers the problem of deciding on a convex hull (instead of just one point) that is contained in the convex hull of the inputs at the fault-free nodes.
The paper is organized as follows. Section 2 presents our system model. The iterative algorithm structure considered in our work is presented in Section 3. Section 4 presents a necessary condition, and Section 5 presents a sufficient condition. Section 5 also presents an iterative algorithm and proves its correctness under the sufficient condition. The paper concludes with a summary in Section 6.
2 System Model
The system is assumed to be synchronous. (Analogous results can be similarly derived for asynchronous systems, using the asynchronous algorithm structure presented in [9] for the case of d = 1.) The communication network is modeled as a simple directed graph G = (V, E), where V = {1, ..., n} is the set of n processes, and E is the set of directed edges between the processes in V. Thus, |V| = n. We assume that n >= 2, since the consensus problem for n = 1 is trivial. Process i can reliably transmit messages to process j, j != i, if and only if the directed edge (i, j) is in E. Each process can send messages to itself as well; however, for convenience of presentation, we exclude self-loops from set E. That is, (i, i) is not in E for i in V. We will use the terms edge and link interchangeably.
For each process i, let N_i^- be the set of processes from which i has incoming edges; that is, N_i^- = { j | (j, i) in E }. Similarly, define N_i^+ as the set of processes to which process i has outgoing edges; that is, N_i^+ = { j | (i, j) in E }. Since we exclude self-loops from E, i is not in N_i^- and i is not in N_i^+. However, we note again that each process can indeed send messages to itself.
We consider the Byzantine failure model, with up to f processes becoming faulty. A faulty process may misbehave arbitrarily. The faulty processes may potentially collaborate with each other. Moreover, the faulty processes are assumed to have a complete knowledge of the execution of the algorithm, including the states of all the processes, contents of messages the other processes send to each other, the algorithm specification, and the network topology.
Notation:
We use the notation |X| to denote the size of a set or a multiset X, and the notation |x| to denote the absolute value of a real number x.
3 Byzantine Vector Consensus and Iterative Algorithms
Byzantine vector consensus:
We are interested in iterative algorithms that satisfy the following conditions in the presence of up to f Byzantine faulty processes:

Termination: Each fault-free process must terminate after a finite number of iterations.

Validity: The state of each fault-free process at the end of each iteration must be in the convex hull of the d-dimensional input vectors at the fault-free processes.

Agreement: When the algorithm terminates, the l-th elements of the decision vectors at any two fault-free processes, where 1 <= l <= d, must be within ε of each other, where ε > 0 is a predefined constant.
Any information carried over by a process from iteration t to iteration t+1 is considered the state of that process at the end of iteration t. The above conditions force the algorithms to maintain "minimal" state, for instance, precluding the possibility of remembering messages received in several of the past iterations, or remembering the history of detected misbehavior of the neighbors. Therefore, we focus on algorithms with a simple iterative structure, described below.
Iterative structure:
Each process i maintains a state variable v_i, which is a d-dimensional vector. The initial state of process i is denoted as v_i[0], and it equals the input provided to process i. For t >= 1, v_i[t] denotes the state of process i at the end of the t-th iteration of the algorithm. At the start of the t-th iteration (t >= 1), the state of process i is v_i[t-1]. The iterative algorithms of interest will require each process i to perform the following three steps in the t-th iteration. Each "value" referred to in the algorithm below is a d-dimensional vector (or, equivalently, a point in the d-dimensional Euclidean space).

Transmit step: Transmit current state, namely v_i[t-1], on all outgoing edges to processes in N_i^+.

Receive step: Receive values on all incoming edges from processes in N_i^-. Denote by r_i[t] the multiset of values received by process i from its neighbors (the same value may occur multiple times in a multiset). The size of multiset r_i[t] is |N_i^-|.

Update step: Process i updates its state using a transition function Z_i, where Z_i is a part of the specification of the algorithm, and takes as input the multiset r_i[t] and state v_i[t-1]:

v_i[t] = Z_i( r_i[t], v_i[t-1] ).     (1)
The decision (or output) of each process equals its state when the algorithm terminates.
We assume that each element of the input vector at each fault-free process is lower bounded by a constant μ and upper bounded by a constant U. The iterative algorithm may terminate after a number of rounds that is a function of μ and U. μ and U are assumed to be known a priori. This assumption holds in many practical systems, because the input vector elements represent quantities that are constrained. For instance, if the input vectors are probability vectors, then μ = 0 and U = 1. If the input vectors represent locations in 3-dimensional space occupied by mobile robots, then μ and U are determined by the boundary of the region in which the robots are allowed to operate.
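The three-step structure above can be sketched as a synchronous round function. This is an illustrative skeleton only: the helper names are ours, and the averaging transition function shown is a hypothetical stand-in for the algorithm-specific Z_i instantiated in Section 5.

```python
def run_iteration(states, in_neighbors, Z):
    """One synchronous iteration: transmit, receive, update.

    states: dict process -> d-dimensional tuple (its state v_i[t-1])
    in_neighbors: dict process -> set of incoming neighbors N_i^-
    Z: transition function Z(received_multiset, own_state) -> new state
    """
    # Transmit/Receive: process i receives v_j[t-1] from each j in N_i^-.
    received = {i: [states[j] for j in in_neighbors[i]] for i in states}
    # Update: v_i[t] = Z_i(r_i[t], v_i[t-1]).
    return {i: Z(received[i], states[i]) for i in states}

def average_Z(recv, own):
    # Hypothetical fault-free transition function: element-wise average of
    # the own state and all received values.
    return tuple(sum(x) / (len(recv) + 1) for x in zip(own, *recv))

# Example with d = 1 on a complete 3-process graph (no faults):
nbrs = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
v0 = {0: (0.0,), 1: (3.0,), 2: (6.0,)}
v1 = run_iteration(v0, nbrs, average_Z)
print(v1[0])  # (3.0,)
```

The decision of each process would be its state once the loop terminates, per the iterative structure above.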
In Section 4, we develop a necessary condition that the graph G must satisfy in order for Byzantine vector consensus to be solvable using the above iterative structure. In Section 5, we develop a sufficient condition, such that Byzantine vector consensus is solvable using the above iterative structure in any graph that satisfies this condition. We present an iterative algorithm, and prove its correctness under the sufficient condition.
4 A Necessary Condition
Hereafter, when we refer to an iterative algorithm, we mean an algorithm with the iterative structure specified in the previous section. In this section, we state a necessary condition on graph G to be able to achieve Byzantine vector consensus using an iterative algorithm. First we introduce some notation.
Definition 1

Define 0 to be the d-dimensional vector with all its elements equal to 0. Thus, 0 corresponds to the origin in the d-dimensional Euclidean space.

Define e_l, 1 <= l <= d, to be the d-dimensional vector with the l-th element equal to ε, and the remaining elements equal to 0. Recall that ε is the parameter of the agreement condition.
Definition 2
For non-empty disjoint sets of processes A and B, and a non-negative integer k,

 

A ⇒_k B if and only if there exists a process in B that has at least k incoming edges from processes in A; i.e., there exists v in B with |N_v^- ∩ A| >= k.

 

A ⇏_k B iff A ⇒_k B is not true.
Definition 3
Given a multiset of points Y, Ch(Y) denotes the convex hull of the points in Y.
Now we state the necessary condition.
Condition NC:
For any partition V_1, V_2, ..., V_{d+1}, F of set V, where |F| <= f, and at least two of the sets V_1, ..., V_{d+1} are non-empty, there exist l, l' (1 <= l, l' <= d+1, l != l'), such that V_{l'} ∪ F ⇒_{f+1} V_l. That is, there are at least f+1 incoming links from
processes in V_{l'} ∪ F to some process in V_l.
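For small graphs, Condition NC can be checked by brute force over all labelings of the processes. The sketch below assumes the reading of the condition given above (d+1 sets V_1, ..., V_{d+1} plus a fault set F, with the ⇒ relation requiring f+1 incoming links at a single process); the function names are ours, and the search is exponential in n, so it is for illustration only.

```python
from itertools import product

def implies(graph, A, B, k):
    """A =>_k B: some process in B has at least k incoming edges from A.
    graph: dict mapping each process to its set of incoming neighbors."""
    return any(len(graph[b] & A) >= k for b in B)

def satisfies_NC(graph, f, d):
    """Brute-force check of Condition NC (exponential; small graphs only).
    Labels 0..d mark the sets V_1..V_{d+1}; label d+1 marks the fault set F."""
    nodes = sorted(graph)
    for labels in product(range(d + 2), repeat=len(nodes)):
        parts = [set() for _ in range(d + 2)]
        for v, lab in zip(nodes, labels):
            parts[lab].add(v)
        F, Vs = parts[d + 1], parts[:d + 1]
        if len(F) > f or sum(1 for V in Vs if V) < 2:
            continue  # Condition NC does not constrain this partition
        if not any(implies(graph, Vs[lp] | F, Vs[l], f + 1)
                   for l in range(d + 1) for lp in range(d + 1) if lp != l):
            return False  # found a partition violating Condition NC
    return True

# The complete graph on 3 processes satisfies NC for f = 1, d = 1 (NC is
# only necessary, not sufficient); 2 processes cannot supply 2 incoming links.
print(satisfies_NC({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}, 1, 1))  # True
print(satisfies_NC({0: {1}, 1: {0}}, 1, 1))  # False
```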
Lemma 1
If the Byzantine vector consensus problem can be solved using an iterative algorithm in graph G, then G satisfies Condition NC.
Proof: The proof is by contradiction. Suppose that Condition NC is not true. Then there exists a partition V_1, ..., V_{d+1}, F of V such that at least two of the sets V_1, ..., V_{d+1} are non-empty, |F| <= f, and, for all l != l', V_{l'} ∪ F ⇏_{f+1} V_l.

Let the initial state of each process in V_l be e_l (1 <= l <= d+1), where, for convenience, e_{d+1} denotes the origin 0. Suppose that all the processes in set F are faulty. For each link (k, i) such that k is in F and i is in V_l (1 <= l <= d+1), the faulty process k sends value e_l to process i in each iteration.

We now prove by induction that if the iterative algorithm satisfies the validity condition then the state of each fault-free process in V_l at the start of iteration t equals e_l, for all t >= 1. The claim is true for t = 1 by the assumption on the inputs at the fault-free processes. Now suppose that the claim is true through iteration t, and prove it for iteration t+1. Thus, the state of each fault-free process in V_l at the start of iteration t equals e_l, 1 <= l <= d+1.

Consider any fault-free process i in V_l, where 1 <= l <= d+1. In iteration t, process i will receive e_m from each fault-free incoming neighbor in V_m, and receive e_l from each faulty incoming neighbor. These received values form the multiset r_i[t]. Since the condition in the lemma is assumed to be false, for any l', l' != l, we have V_{l'} ∪ F ⇏_{f+1} V_l. Thus, at most f incoming neighbors of i belong to V_{l'} ∪ F, and therefore, at most f values in r_i[t] equal e_{l'}.

Since process i does not know which of its incoming neighbors, if any, are faulty, it must allow for the possibility that any f of its incoming neighbors are faulty. Let F_{l'}, l' != l, be the set containing all the incoming neighbors of process i in V_{l'} ∪ F. Since V_{l'} ∪ F ⇏_{f+1} V_l, |F_{l'}| <= f; therefore, all the processes in F_{l'} are potentially faulty. Also, by assumption, the values received from all fault-free processes equal their input, and the values received from faulty processes in F equal e_l. Thus, due to the validity condition, process i must choose as its new state a value that is in the convex hull of the set

{ e_m : 1 <= m <= d+1, m != l' },

where l' != l. Since this observation is true for each l', it follows that the new state must be a point in the convex hull of

∩_{l' != l} Ch( { e_m : 1 <= m <= d+1, m != l' } ).

It is easy to verify that the above intersection only contains the point e_l (the d+1 points 0, e_1, ..., e_d are affinely independent). Therefore, v_i[t] = e_l. Thus, the state of process i at the start of iteration t+1 equals e_l. This concludes the induction.

The above result implies that the state of each fault-free process remains unchanged through the iterations. Thus, the states of fault-free processes in two different non-empty sets among V_1, ..., V_{d+1} differ in at least one vector element by ε, precluding agreement.
The above lemma demonstrates the necessity of Condition NC. Necessary condition NC implies a lower bound on the number of processes in V, as stated in the next lemma.
Lemma 2
Suppose that the Byzantine vector consensus problem can be solved using an iterative algorithm in G. Then, n >= (d+1)f + 1.
Proof: Since the Byzantine vector consensus problem can be solved using an iterative algorithm in G, by Lemma 1, graph G must satisfy Condition NC. Suppose that n <= (d+1)f. Then there exists p, 2 <= p <= d+1, such that we can partition V into non-empty sets V_1, ..., V_p such that for each l, 1 <= l <= p, |V_l| <= f. Define F = ∅, and V_l = ∅ for p < l <= d+1. Since |V_{l'} ∪ F| <= f for each l', no process can have f+1 incoming links from V_{l'} ∪ F, so it is clear that this partition of V cannot satisfy Condition NC. This is a contradiction.
When d = 1, the input at each process is a scalar. For the d = 1 case, our prior work [9] yielded a tight necessary and sufficient condition for Byzantine consensus to be achievable in G using iterative algorithms. For d = 1, the necessary condition stated in Lemma 1 is equivalent to the necessary condition in [9]. We previously showed that, for d = 1, the same condition is also sufficient [9]. However, in general, for d > 1, Condition NC is not proved sufficient. Instead, we prove the sufficiency of another condition stated in the next section.
5 A Sufficient Condition
We now present Condition SC, which is later proved to be sufficient for achieving Byzantine vector consensus in graph G using an iterative algorithm.
Condition SC:
For any partition L, C, R, F of set V, such that L and R are both
non-empty, and |F| <= f,
at least one of these conditions is true: R ∪ C ⇒_{df+1} L, or L ∪ C ⇒_{df+1} R.
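Like Condition NC, Condition SC can be checked by brute force on small graphs. The sketch below assumes the df+1-link reading of the ⇒ relation used here; the function names are ours and the search is exponential in n.

```python
from itertools import product

def satisfies_SC(graph, f, d):
    """Brute-force check of Condition SC (exponential; small graphs only).
    graph: dict mapping each process to its set of incoming neighbors.
    Labels: 0 = L, 1 = C, 2 = R, 3 = F."""
    def implies(A, B, k):  # A =>_k B
        return any(len(graph[b] & A) >= k for b in B)
    nodes = sorted(graph)
    for labels in product(range(4), repeat=len(nodes)):
        L, C, R, F = ({v for v, lab in zip(nodes, labels) if lab == i}
                      for i in range(4))
        if not L or not R or len(F) > f:
            continue  # Condition SC does not constrain this partition
        if not (implies(R | C, L, d * f + 1) or implies(L | C, R, d * f + 1)):
            return False
    return True

def complete(n):
    return {i: set(range(n)) - {i} for i in range(n)}

# With d = 1 and f = 1 the threshold is df + 1 = 2: the complete graph on
# 4 processes satisfies SC, while 3 processes do not.
print(satisfies_SC(complete(4), 1, 1))  # True
print(satisfies_SC(complete(3), 1, 1))  # False
```

Note that, under this reading, for d = 1 a complete graph passes the check exactly when n >= 3f + 1, matching the known bound for iterative scalar consensus in complete graphs.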
Later in the paper we will present a Byzantine vector consensus algorithm named Byz-Iter that is proved correct in all graphs that satisfy Condition SC. The proof will make use of Lemmas 3 and 4 presented below.
Lemma 3
For f > 0, if graph G satisfies Condition SC, then the in-degree of each process in V must be at least (d+1)f + 1. That is, for each i in V, |N_i^-| >= (d+1)f + 1.
Definition 4
Reduced Graph: For a given graph G = (V, E) and a set F ⊆ V such that |F| <= f, a graph G_F = (V_F, E_F) is said to be a reduced graph, if: (i) V_F = V - F, and (ii) E_F is obtained by first removing from E all the links incident on the processes in F, and then removing up to df additional incoming links at each process in V_F.
Note that for a given graph G and a given set F, multiple reduced graphs G_F may exist (depending on the choice of the links removed at each process).
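The reduced graphs of Definition 4 can be enumerated mechanically. The sketch below assumes the "up to df additional incoming links" reading of the definition; function names are ours, and the enumeration grows quickly, so it is for small examples only.

```python
from itertools import chain, combinations

def reduced_graphs(graph, F, f, d):
    """Enumerate reduced graphs per Definition 4: remove the fault set F,
    then drop up to d*f more incoming links at each remaining process.
    graph: dict mapping each process to its set of incoming neighbors."""
    kept = {i: graph[i] - F for i in graph if i not in F}
    nodes = sorted(kept)

    def drops(nbrs):  # all ways to drop up to d*f incoming links
        return chain.from_iterable(combinations(nbrs, r)
                                   for r in range(min(d * f, len(nbrs)) + 1))

    def rec(idx, current):
        if idx == len(nodes):
            yield {i: set(s) for i, s in current.items()}
            return
        i = nodes[idx]
        for dropped in drops(sorted(kept[i])):
            current[i] = kept[i] - set(dropped)
            yield from rec(idx + 1, current)

    yield from rec(0, {})

# With f = 0 there is a single reduced graph: the graph itself.
print(len(list(reduced_graphs({0: {1}, 1: {0}}, set(), 0, 2))))  # 1
# Complete 3-process graph, f = 1, d = 1: 3 drop choices per process.
print(len(list(reduced_graphs({0: {1, 2}, 1: {0, 2}, 2: {0, 1}},
                              set(), 1, 1))))  # 27
```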
Lemma 4
Suppose that graph G satisfies Condition SC, and F ⊆ V with |F| <= f. Then, in any reduced graph G_F, there exists a process that has a directed path to all the remaining processes in V_F.
5.1 Algorithm Byz-Iter
We will prove that, if graph G satisfies Condition SC, then Algorithm Byz-Iter presented below achieves Byzantine vector consensus. Algorithm Byz-Iter has the three-step structure described in Section 3.
The proposed algorithm is based on the following result by Tverberg [4].
Theorem 1
(Tverberg’s Theorem [4]) For any integer f >= 0, and for every multiset Y containing at least (d+1)f + 1 points in the d-dimensional Euclidean space, there exists a partition Y_1, ..., Y_{f+1} of Y into f+1 non-empty multisets such that ∩_{l=1}^{f+1} Ch(Y_l) != ∅.
The points in Y above need not be distinct [4]; thus, the same point may occur multiple times in Y, and also in each of its subsets (Y_l's) above. The partition in Theorem 1 is called a Tverberg partition, and the points in ∩_{l=1}^{f+1} Ch(Y_l) in Theorem 1 are called Tverberg points.
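For d = 1 the theorem is easy to visualize: any (d+1)f + 1 = 2f + 1 points on the real line admit a partition into f + 1 non-empty multisets whose convex hulls (intervals) share a common point. The brute-force sketch below (hypothetical helper names, exponential search) illustrates this.

```python
from itertools import product

def tverberg_partition_1d(points, parts):
    """Brute-force search for a Tverberg partition of points on the real
    line into `parts` non-empty multisets whose intervals share a common
    point. Exponential; illustrates Theorem 1 for d = 1 only."""
    n = len(points)
    for assign in product(range(parts), repeat=n):
        groups = [[points[i] for i in range(n) if assign[i] == g]
                  for g in range(parts)]
        if any(not g for g in groups):
            continue
        lo = max(min(g) for g in groups)  # left end of the intersection
        hi = min(max(g) for g in groups)  # right end of the intersection
        if lo <= hi:
            return groups, lo  # lo is a Tverberg point
    return None

# f = 2, d = 1: (d+1)f + 1 = 5 points, partitioned into f + 1 = 3 parts.
groups, point = tverberg_partition_1d([0.0, 1.0, 2.0, 10.0, 11.0], 3)
print(groups, point)
```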
Algorithm Byz-Iter

Each iteration consists of three steps: Transmit, Receive, and Update:

Transmit step: Transmit current state v_i[t-1] on all outgoing edges.

Receive step: Receive values on all incoming edges. These values form multiset r_i[t] of size |N_i^-|. (If a message is not received from some incoming neighbor, then that neighbor must be faulty. In this case, the missing message value is assumed to be a default value, say the origin 0. Recall that we assume a synchronous system.)

Update step: Form a multiset T_i[t] of Tverberg points using the steps below:

 

Initialize T_i[t] as empty.

 

Add to T_i[t] any one Tverberg point corresponding to each multiset C ⊆ r_i[t] such that |C| = (d+1)f + 1. Since |r_i[t]| >= (d+1)f + 1, by Theorem 1, such a Tverberg point exists.
T_i[t] is a multiset; thus a single point may appear in T_i[t] more than once. Note that

|T_i[t]| = C( |N_i^-|, (d+1)f + 1 ).     (2)

Compute the new state as the average of the current state and the Tverberg points in T_i[t]:

v_i[t] = ( v_i[t-1] + Σ_{x in T_i[t]} x ) / ( 1 + |T_i[t]| ).
Termination: Each fault-free process terminates after completing t_end iterations, where t_end is a constant defined later in (9). The value of t_end depends on graph G, constants μ and U defined earlier, and parameter ε of the agreement condition.
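For d = 1, the median of a multiset of 2f + 1 values is always a Tverberg point: pair the j-th smallest value with the j-th largest, and every pair's interval contains the median. The sketch below implements one Update step under that observation, assuming (as described above) that the new state is the average of v_i[t-1] and the points in T_i[t]; helper names are ours.

```python
from itertools import combinations

def tverberg_point_1d(values):
    """For d = 1 and |values| = 2f + 1, the median is a Tverberg point:
    pairing the j-th smallest with the j-th largest gives f + 1 parts whose
    intervals all contain the median."""
    s = sorted(values)
    return s[len(s) // 2]

def byz_iter_update_1d(own, received, f):
    """One Update step of Algorithm Byz-Iter for d = 1 (sketch): collect one
    Tverberg point per subset of size (d+1)f + 1 = 2f + 1 of the received
    values (requires len(received) >= 2f + 1), then average them together
    with the current state."""
    T = [tverberg_point_1d(C)
         for C in combinations(sorted(received), 2 * f + 1)]
    return (own + sum(T)) / (1 + len(T))

# f = 1: one faulty incoming neighbor reports an outlier (1000.0); the new
# state stays inside the hull [0.0, 3.0] of the fault-free values.
print(byz_iter_update_1d(2.0, [0.0, 1.0, 3.0, 1000.0], 1))  # 2.0
```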
The proof of correctness of Algorithm Byz-Iter makes use of a matrix representation of the algorithm’s behavior. Before presenting the matrix representation, we introduce some notation and definitions related to matrices.
5.2 Matrix Preliminaries
We use boldface letters to denote matrices, rows of matrices, and their elements. For instance, A denotes a matrix, A_i denotes the i-th row of matrix A, and A_{ij} denotes the element at the intersection of the i-th row and the j-th column of matrix A.
Definition 5
A vector is said to be stochastic if all its elements are nonnegative, and the elements add up to 1. A matrix is said to be row stochastic if each row of the matrix is a stochastic vector.
For matrix products, we adopt the "backward" product convention below, where a <= b:

Π_{τ=a}^{b} A[τ] = A[b] A[b-1] ... A[a].     (3)
For a row stochastic matrix A, coefficients of ergodicity δ(A) and λ(A) are defined as follows [10]:

δ(A) = max_j max_{i1, i2} | A_{i1 j} - A_{i2 j} |,
λ(A) = 1 - min_{i1, i2} Σ_j min( A_{i1 j}, A_{i2 j} ).
Claim 1
For any p square row stochastic matrices A[1], A[2], ..., A[p],

δ( A[p] A[p-1] ... A[1] ) <= Π_{τ=1}^{p} λ( A[τ] ).
Claim 2
If all the elements in any one column of matrix A are lower bounded by a constant γ, then λ(A) <= 1 - γ. That is, if there exists j such that A_{ij} >= γ for all i, then λ(A) <= 1 - γ.
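The two claims can be checked numerically. The sketch below defines δ and λ as above for matrices represented as lists of rows; the function names are ours.

```python
def delta(A):
    """delta(A) = max_j max_{i1,i2} |A[i1][j] - A[i2][j]|."""
    rows, cols = len(A), len(A[0])
    return max(abs(A[i1][j] - A[i2][j])
               for j in range(cols)
               for i1 in range(rows) for i2 in range(rows))

def lam(A):
    """lambda(A) = 1 - min_{i1,i2} sum_j min(A[i1][j], A[i2][j])."""
    rows, cols = len(A), len(A[0])
    return 1 - min(sum(min(A[i1][j], A[i2][j]) for j in range(cols))
                   for i1 in range(rows) for i2 in range(rows))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0.5, 0.5], [0.2, 0.8]]
B = [[0.9, 0.1], [0.4, 0.6]]
# Claim 1, numerically: delta of a product is at most the product of lambdas
# (small tolerance added for floating-point rounding).
assert delta(matmul(B, A)) <= lam(A) * lam(B) + 1e-12
# Claim 2: the first column of A is lower bounded by 0.2, so lam(A) <= 0.8.
assert lam(A) <= 1 - 0.2
print(lam(A), lam(B))
```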
5.3 Correctness of Algorithm Byz-Iter
This section presents a key lemma, Lemma 5, that helps us in proving the correctness of Algorithm Byz-Iter. In particular, Lemma 5 allows us to use results for non-homogeneous Markov chains to prove the correctness of Algorithm Byz-Iter.
Let F denote the actual set of faulty processes in a given execution of Algorithm Byz-Iter. Let |F| = φ. Thus, 0 <= φ <= f. Without loss of generality, suppose that processes 1 through n-φ are fault-free, and, if φ > 0, processes n-φ+1 through n are faulty.
In the analysis below, it is convenient to view the state of each process as a point in the d-dimensional Euclidean space. Denote by v[0] the column vector consisting of the initial states of the fault-free processes. The i-th element of v[0] is v_i[0], the initial state of process i. Thus, v[0] is a vector consisting of n-φ points in the d-dimensional Euclidean space. Denote by v[t], for t >= 1, the column vector consisting of the states of the fault-free processes at the end of the t-th iteration. The i-th element of vector v[t] is state v_i[t].
Lemma 5
Suppose that graph G satisfies Condition SC. Then the state updates performed by the fault-free processes in the t-th iteration (t >= 1) of Algorithm Byz-Iter can be expressed as

v[t] = M[t] v[t-1],     (4)

where M[t] is an (n-φ) x (n-φ) row stochastic matrix with the following property: there exists a reduced graph G_F = (V_F, E_F), and a constant β (0 < β <= 1) that depends only on graph G, such that
M_{ij}[t] >= β if j = i or edge (j, i) is in E_F.
Proof: The proof is presented in Appendix C.
Matrix M[t] above is said to be a transition matrix. As the lemma states, M[t] is a row stochastic matrix. The proof of Lemma 5 shows how to identify a suitable row stochastic matrix M[t] for each iteration t. The matrix depends on t, as well as the behavior of the faulty processes. M_i[t] is the i-th row of transition matrix M[t]. Thus, (4) implies that

v_i[t] = M_i[t] v[t-1].

That is, the state of any fault-free process i at the end of iteration t can be expressed as a convex combination of the states of just the fault-free processes at the end of iteration t-1. Recall that vector v[t-1] only includes the states of fault-free processes.
Theorem 2
Algorithm Byz-Iter satisfies the termination, validity and agreement conditions.
5.4 Algorithm Byz-Iter Satisfies the Validity Condition
Observe that v[1] = M[1] v[0]. Therefore, by repeated application of (4), we obtain, for t >= 1,

v[t] = ( Π_{τ=1}^{t} M[τ] ) v[0].     (5)

Since each M[τ] is row stochastic, the matrix product Π_{τ=1}^{t} M[τ] is also a row stochastic matrix. Recall that vector v[0] only includes the states of fault-free processes. Thus, (5) implies that the state of each fault-free process at the end of iteration t can be expressed as a convex combination of the initial states of the fault-free processes. Therefore, the validity condition is satisfied.
5.5 Algorithm Byz-Iter Satisfies the Termination Condition
Algorithm Byz-Iter stops after a finite number (t_end) of iterations, where t_end is a constant that depends only on G, μ, U and ε. Therefore, trivially, the algorithm satisfies the termination condition. Later, using (9), we define a suitable value for t_end.
5.6 Algorithm Byz-Iter Satisfies the Agreement Condition
The proof structure below is derived from our previous work wherein we proved the correctness of an iterative algorithm for scalar Byzantine consensus (i.e., the case of d = 1) [7] and its generalization to a broader class of fault sets [5].
For a given fault set, let R_F denote the set of all the reduced graphs of G corresponding to that fault set. In particular, below, R_F is the set of all the reduced graphs of G corresponding to the actual fault set F. Let

r = |R_F|.

r depends only on G and F, and it is finite. Note that r >= 1.
For each reduced graph G_F = (V_F, E_F) in R_F, define a connectivity matrix H of size (n-φ) x (n-φ) as follows, where 1 <= i, j <= n-φ:

 

H_{ij} = 1 if either j = i, or edge (j, i) exists in reduced graph G_F.

 

H_{ij} = 0, otherwise.

Thus, the non-zero elements of row H_i correspond to the incoming links at process i in the reduced graph G_F, and the self-loop at process i. Observe that H has a non-zero diagonal.
Lemma 6
For any connectivity matrix H corresponding to a reduced graph in R_F, and any k >= n-φ, the matrix product H^k has at least one non-zero column (i.e., a column with all elements non-zero).
Proof: Each reduced graph contains n-φ processes, because the fault set F contains φ processes. By Lemma 4, at least one process in the reduced graph, say process p, has directed paths to all the processes in the reduced graph G_F. Element (H^k)_{ij} of the matrix product is non-zero if and only if i = j or process j has a directed path to process i containing at most k edges; each of these directed paths contains fewer than n-φ edges, because the number of processes in the reduced graph is n-φ. Since p has directed paths to all the processes, it follows that, when k >= n-φ-1, all the elements in the p-th column of H^k must be non-zero.
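The argument can be reproduced with Boolean matrix powers. The sketch below builds the connectivity matrix H for a 3-process reduced graph in which one process reaches all others (helper names are ours).

```python
def matpow_bool(H, k):
    """Boolean k-th power of a 0/1 connectivity matrix H, where H[i][j] = 1
    iff j = i or the reduced graph has edge (j, i)."""
    n = len(H)
    def mul(A, B):
        return [[int(any(A[i][m] and B[m][j] for m in range(n)))
                 for j in range(n)] for i in range(n)]
    P = H
    for _ in range(k - 1):
        P = mul(P, H)
    return P

# Reduced graph on 3 processes: 0 -> 1 -> 2 (plus self-loops on the
# diagonal). Process 0 reaches everyone, so some column of H^(n-1) is all
# ones, as the lemma asserts.
H = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1]]
P = matpow_bool(H, 2)
print(P)  # column 0 is all ones
```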
For matrices A and B of identical dimensions, we say that A <= B if and only if A_{ij} <= B_{ij} for all i, j. Lemma 7 relates the transition matrices with the connectivity matrices. Constant β used in the lemma below was introduced in Lemma 5.
Lemma 7
For any t >= 1, there exists a reduced graph G_F[t] in R_F such that β H[t] <= M[t], where H[t] is the connectivity matrix for G_F[t].
Proof: Appendix D presents the proof.
Lemma 8
At least one column in the matrix product Π_{τ=1}^{r(n-φ)} H[τ] is non-zero.
Proof: Since the above product is a product of r(n-φ) connectivity matrices corresponding to the reduced graphs in R_F, and |R_F| = r, the connectivity matrix corresponding to at least one reduced graph in R_F, say matrix H_*, will appear in the above product at least n-φ times.
By Lemma 6, H_*^{n-φ} contains a non-zero column; say the p-th column of H_*^{n-φ} is non-zero. Also, by definition, all the connectivity matrices H[τ] (1 <= τ <= r(n-φ)) have a non-zero diagonal. These two observations together imply that the p-th column in the product Π_{τ=1}^{r(n-φ)} H[τ] is non-zero. (The product can be viewed as the product of n-φ instances of H_* "interspersed" with matrices with non-zero diagonals.)
Let us now define a sequence of matrices Q(k), k >= 1, such that each of these matrices is a product of r(n-φ) of the M[t] matrices. Specifically,

Q(k) = Π_{τ=(k-1) r(n-φ)+1}^{k r(n-φ)} M[τ].     (6)

From (5) and (6), for k >= 1,

v[ k r(n-φ) ] = ( Π_{j=1}^{k} Q(j) ) v[0].     (7)
Lemma 9
For k >= 1, Q(k) is a row stochastic matrix, and λ( Q(k) ) <= 1 - β^{r(n-φ)}.
Proof: Q(k) is a product of row stochastic matrices M[τ]; therefore, Q(k) is row stochastic. From Lemma 7, for each τ,

β H[τ] <= M[τ].

Therefore,

β^{r(n-φ)} Π_{τ=(k-1) r(n-φ)+1}^{k r(n-φ)} H[τ] <= Q(k).

By using Lemma 8, we conclude that the matrix product on the left side of the above inequality contains a non-zero column. Therefore, Q(k) on the right side of the inequality also contains a non-zero column.

Observe that r(n-φ) is finite, and hence β^{r(n-φ)} is non-zero. Since the non-zero terms in the H[τ] matrices are all 1, the non-zero elements in Π_{τ=(k-1) r(n-φ)+1}^{k r(n-φ)} H[τ] must each be at least 1. Therefore, there exists a column in Q(k) with all the elements in the column being at least β^{r(n-φ)}. Therefore, by Claim 2, λ(Q(k)) <= 1 - β^{r(n-φ)}.
Let us now continue with the proof of agreement. Consider the coefficient of ergodicity δ( Π_{τ=1}^{t} M[τ] ). By Claim 1 and Lemma 9,

δ( Π_{τ=1}^{t} M[τ] ) <= Π_{k=1}^{⌊ t / (r(n-φ)) ⌋} λ( Q(k) ) <= ( 1 - β^{r(n-φ)} )^{⌊ t / (r(n-φ)) ⌋}.     (8)

Observe that the upper bound on the right side of (8) depends only on graph G and t, and is independent of the input vectors, the fault set F, and the behavior of the faulty processes. Also, the upper bound on the right side of (8) is a non-increasing function of t. Define t_end as the smallest positive integer t for which the right-hand side of (8) is smaller than ε / ( n max(|μ|, |U|) ), where |x| denotes the absolute value of real number x. Thus,

( 1 - β^{r(n-φ)} )^{⌊ t_end / (r(n-φ)) ⌋} < ε / ( n max(|μ|, |U|) ).     (9)

Recall that β and r depend only on G. Thus, t_end depends only on graph G, and constants μ, U and ε.
Recall that Π_{τ=1}^{t_end} M[τ] is a row stochastic matrix. Let P = Π_{τ=1}^{t_end} M[τ]. From (5) we know that the state v_i[t_end] of any fault-free process i is obtained as the product of the i-th row P_i of P and v[0]. That is, v_i[t_end] = P_i v[0].

Recall that v_i[t_end] is a d-dimensional vector. Let us denote the l-th element of v_i[t_end] as v_i[t_end][l], 1 <= l <= d. Also, by v[0][l], let us denote the column vector consisting of the l-th elements of the initial states in v[0]. Then, by the definitions of δ, P and t_end, for any two fault-free processes i and j, we have

| v_i[t_end][l] - v_j[t_end][l] |
 = | Σ_{k=1}^{n-φ} ( P_{ik} - P_{jk} ) v_k[0][l] |     (10)
 <= Σ_{k=1}^{n-φ} | P_{ik} - P_{jk} | | v_k[0][l] |     (11)
 <= Σ_{k=1}^{n-φ} δ(P) max( |μ|, |U| )     (12)
 = (n-φ) δ(P) max( |μ|, |U| )     (13)
 < n ( ε / ( n max(|μ|, |U|) ) ) max( |μ|, |U| )     (14)
 = ε,     (15)

where (14) follows from (8), (9), and n-φ <= n.

The output of a fault-free process equals its state at termination (after t_end iterations).
Thus, (15) implies that Algorithm Byz-Iter satisfies the agreement condition.
6 Summary
This paper addresses Byzantine vector consensus (BVC), wherein the input at each process is a d-dimensional vector of reals, and each process is expected to decide on a decision vector that is in the convex hull of the input vectors at the fault-free processes [3, 8]. We address a particular class of iterative algorithms in incomplete graphs, and prove a necessary condition (NC), and a sufficient condition (SC), for the graphs to be able to solve the vector consensus problem iteratively. This paper extends our past work on scalar consensus (i.e., d = 1) in incomplete graphs in the presence of Byzantine faults [9, 7], which yielded an exact characterization of graphs in which the problem is solvable for d = 1. However, the necessary condition NC presented in the paper for vector consensus does not match with the sufficient condition SC. We hope that this paper will motivate further work on identifying the tight sufficient condition.
References
 [1] S. Dasgupta, C. Papadimitriou, and U. Vazirani. Algorithms. McGraw-Hill Higher Education, 2006.
 [2] J. Hajnal. Weak ergodicity in non-homogeneous Markov chains. In Proceedings of the Cambridge Philosophical Society, volume 54, pages 233–246, 1958.
 [3] H. Mendes and M. Herlihy. Multidimensional approximate agreement in Byzantine asynchronous systems. In 45th ACM Symposium on the Theory of Computing (STOC), June 2013.
 [4] M. A. Perles and M. Sigron. A generalization of Tverberg’s theorem, 2007. CoRR, http://arxiv.org/abs/0710.4668.
 [5] L. Tseng and N. H. Vaidya. Iterative approximate Byzantine consensus under a generalized fault model. In International Conference on Distributed Computing and Networking (ICDCN), January 2013.
 [6] L. Tseng and N. H. Vaidya. Byzantine convex consensus: An optimal algorithm, 2013. CoRR, http://arxiv.org/abs/1307.1332.
 [7] N. H. Vaidya. Matrix representation of iterative approximate Byzantine consensus in directed graphs. CoRR, http://arxiv.org/abs/1203.1888, March 2012.
 [8] N. H. Vaidya and V. K. Garg. Byzantine vector consensus in complete graphs. In ACM Symposium on Principles of Distributed Computing (PODC), July 2013.
 [9] N. H. Vaidya, L. Tseng, and G. Liang. Iterative approximate Byzantine consensus in arbitrary directed graphs. In ACM Symposium on Principles of Distributed Computing (PODC), July 2012.
 [10] J. Wolfowitz. Products of indecomposable, aperiodic, stochastic matrices. In Proceedings of the American Mathematical Society, pages 733–737, 1963.
Appendix A Proof of Lemma 3
Lemma 3
For f > 0, if graph G satisfies Condition SC, then the
in-degree of each process in V must be at least (d+1)f + 1.
That is, for each i in V, |N_i^-| >= (d+1)f + 1.
Proof:
The proof is by contradiction.
As per the assumptions in the lemma, f > 0, and graph G satisfies Condition SC.
Suppose that some process i has in-degree at most (d+1)f. Define R = {i}, and C = ∅. Partition the processes in V - {i} into sets L and F such that |F| <= f and at most df of process i's incoming neighbors are in L. Such sets L and F exist because the in-degree of process i is at most (d+1)f: placing up to f of its incoming neighbors in F leaves at most df of them in L. L, C, R, F thus defined form a partition of V.
Now, R ∪ C = {i}, and |R ∪ C| = 1. Thus, there can be at most 1 link from R ∪ C to any process in L, and 1 < df + 1 because f > 0. Therefore, R ∪ C ⇏_{df+1} L. Also, because at most df of process i's incoming neighbors are in L, and C = ∅, there can be at most df links from L ∪ C to process i, which is the only process in R. Therefore, L ∪ C ⇏_{df+1} R. Thus, the above partition of V does not satisfy Condition SC. This is a contradiction.
Appendix B Proof of Lemma 4
Before presenting the proof of Lemma 4, we introduce some terminology.
Definition 6
Graph decomposition: Let G' be a directed graph. Partition graph G' into strongly connected components K_1, K_2, ..., K_h, where h is a non-zero integer dependent on graph G', such that

every pair of processes within the same strongly connected component has directed paths in G' to each other, and

for each pair of processes, say p and q, that belong to two different strongly connected components, either p does not have a directed path to q in G', or q does not have a directed path to p in G'.
Construct a decomposition graph wherein each strongly connected component K_i above is represented by a vertex k_i, and there is an edge from vertex k_i to vertex k_j only if the processes in K_i have directed paths in G' to the processes in K_j.
It is known that the decomposition graph is a directed acyclic graph [1].
Definition 7
Source component: Let G' be a directed graph, and let its decomposition be as per Definition 6. A strongly connected component K_i of G' is said to be a source component if the corresponding vertex k_i in the decomposition graph is not reachable from any other vertex in the decomposition graph.
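The decomposition and its source components can be computed with any strongly-connected-components algorithm. The sketch below uses Kosaraju's two-pass DFS; the helper names are ours, and every node is assumed to appear as a key of the adjacency dict.

```python
def source_components(out_edges):
    """Compute strongly connected components (Kosaraju) and return those
    that are source components of the condensation, i.e. SCCs with no
    incoming edge from another SCC.
    out_edges: dict node -> set of successors (every node must be a key)."""
    order, seen = [], set()

    def dfs(u, adj, acc):  # iterative DFS collecting postorder in acc
        stack = [(u, iter(adj[u]))]
        seen.add(u)
        while stack:
            v, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(adj[w])))
                    break
            else:
                stack.pop()
                acc.append(v)

    for u in out_edges:              # pass 1: finish order on the graph
        if u not in seen:
            dfs(u, out_edges, order)
    rev = {u: set() for u in out_edges}
    for u, vs in out_edges.items():  # reverse all edges
        for v in vs:
            rev[v].add(u)
    seen, comp, sccs = set(), {}, []
    for u in reversed(order):        # pass 2: DFS on reversed graph
        if u not in seen:
            acc = []
            dfs(u, rev, acc)
            sccs.append(set(acc))
            for v in acc:
                comp[v] = len(sccs) - 1
    sources = set(range(len(sccs)))
    for u, vs in out_edges.items():  # drop SCCs with external in-edges
        for v in vs:
            if comp[u] != comp[v]:
                sources.discard(comp[v])
    return [sccs[i] for i in sorted(sources)]

# 0 <-> 1 form one SCC that reaches 2; it is the unique source component.
print(source_components({0: {1, 2}, 1: {0}, 2: set()}))  # [{0, 1}]
```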
Lemma 4
Suppose that graph G satisfies Condition SC, and F ⊆ V with |F| <= f.
Then, in any reduced graph G_F, there exists
a process that has a directed path to all the remaining processes in V_F.
Proof:
Suppose that graph G satisfies Condition SC.
We first prove that the reduced graph G_F contains exactly one
source component.
Since V_F = V - F is non-empty, reduced graph G_F contains at least one process; therefore, at least one source component must exist in the reduced graph G_F. (If G_F consists of a single strongly connected component, then that component is trivially a source component.)
So it remains to prove that cannot contain more than one source component. The proof is by contradiction.
Suppose that the decomposition of G_F contains at least two source components. Let the sets of processes in two such source components of the reduced graph G_F be denoted as L and R, respectively. Let C = V_F - L - R. Observe that L, C, R, F form a partition of the processes in V. Since L is a source component in the reduced graph G_F, there are no directed links in E_F from any process in C ∪ R to the processes in L. Similarly, since R is a source component in the reduced graph G_F, there are no directed links in E_F from any process in L ∪ C to the processes in R. These observations, together with the manner in which E_F is defined, imply that (i) there are at most df links in E from the processes in R ∪ C to each process in L, and (ii) there are at most df links in E from the processes in L ∪ C to each process in R. Therefore, in graph G, R ∪ C ⇏_{df+1} L and L ∪ C ⇏_{df+1} R. This violates Condition SC, resulting in a contradiction. Thus, we have proved that G_F must contain exactly one source component.
Consider any process in the unique source component, say process p. By the definition of a strongly connected component, process p has directed paths to all the processes in the source component using the edges in E_F. Also, by the uniqueness of the source component, all other strongly connected components in G_F (if any exist) are not source components, and hence are reachable from the source component via the edges in E_F. Therefore, process p also has paths to all the processes in V_F that are outside the source component as well. Therefore, process p has paths to all the processes in V_F. This proves the lemma.
The above proof shows that, if Condition SC is true, then each reduced graph contains exactly one source component. It is also possible to show that, if each reduced graph contains exactly one source component, then Condition SC is satisfied.
Appendix C Proof of Lemma 5
Recall that F is the actual set of faulty processes in a given execution of the proposed algorithm, and |F| = φ. As noted before, without loss of generality, we assume that processes 1 through n-φ are fault-free, and the rest are faulty. To simplify the terminology, the definitions below assume a certain iteration index t >= 1.
Definition 8
γ-dependence: For a constant γ, 0 < γ <= 1, a point x in the convex hull of the states of the fault-free processes is said to be γ-dependent on a fault-free process k if there exist constants α_j, 0 <= α_j <= 1, such that Σ_{j=1}^{n-φ} α_j = 1, α_k >= γ, and
x = Σ_{j=1}^{n-φ} α_j v_j[t-1].
α_k is said to be the weight of process k in the above convex combination.
Lemma 10
Let X be a non-empty subset of the fault-free processes. Any point in the convex hull of the states { v_j[t-1] : j in X } is 1/(n-φ)-dependent on at least one fault-free process in X.
Proof: Recall that we assume processes 1 through n-φ to be fault-free, and the remaining processes to be faulty. Any point in the convex hull of the states of the fault-free processes in X can be written as their convex combination. Since there are at most n-φ fault-free processes in X, and their weights in the convex combination add to 1, at least one of the weights must be at least 1/(n-φ), proving the lemma.
Definition 9
Points in a multiset T are said to be collectively γ-dependent on the processes in a set X, if for each k in X, there exists a point x in T such that x is γ-dependent on process k.
Lemma 5
Suppose that graph G satisfies Condition SC. Then the state updates performed by the fault-free processes in the t-th iteration (t >= 1) of Algorithm Byz-Iter can be expressed as

v[t] = M[t] v[t-1],     (18)

where M[t] is a row stochastic matrix with the following property: there exists a reduced graph G_F = (V_F, E_F), and a constant β (0 < β <= 1) that depends only on graph G, such that
M_{ij}[t] >= β if j = i or edge (j, i) is in E_F.
Proof: We consider the case of f = 0 separately from f > 0.

f = 0: When f = 0, all the processes are fault-free (i.e., F = ∅), and φ = 0. In this case, there is only one reduced graph, which is identical to G. Because (d+1)f + 1 = 1, each multiset C used in the Update step of Algorithm Byz-Iter to compute multiset T_i[t] contains the value received from exactly one incoming neighbor. (When f = 0, and Condition SC holds true, it is possible that exactly one process in the graph has no incoming neighbors. If some process i has no incoming neighbors, then r_i[t] is empty and v_i[t] = v_i[t-1].)
For |C| = 1, that is, C containing a single point x, the Tverberg point for C is x as well. Thus, |T_i[t]| = |N_i^-|, and v_i[t] is simply the average of v_i[t-1] and the values received from all the incoming neighbors of i, which are necessarily fault-free (because f = 0). Thus, v_i[t] is a convex combination of the elements of { v_j[t-1] : j in {i} ∪ N_i^- }, wherein the weight assigned to each v_j[t-1] such that j = i or j in N_i^- is 1/(|N_i^-| + 1). Since |N_i^-| + 1 <= n, by defining β = 1/n, the statement of the lemma follows.

f > 0: Consider a fault-free process i. Suppose that the number of faulty incoming neighbors of process i is f_i, where f_i <= f. When Condition SC holds, and f > 0, as shown in Lemma 3, each process has an in-degree of at least (d+1)f + 1. Therefore, for some non-negative integer s_i, let

|N_i^-| = (d+1)f + 1 + s_i.

Recall that the Update step of Algorithm Byz-Iter enumerates suitable subsets C of multiset r_i[t], and picks one Tverberg point corresponding to each such C. By an inductive argument we will identify such subsets C, such that the Tverberg points added to T_i[t] corresponding to those subsets are collectively dependent on a sufficiently large set of the fault-free incoming neighbors of process i (all but at most df of them, matching the links removed in the reduced-graph construction). Let the Tverberg point added corresponding to C be denoted as p(C).

 

Consider a subset C of r_i[t] such that |C| = (d+1)f + 1. A Tverberg point p(C) for C is added to T_i[t] in the Update step. By the definition of a Tverberg point, there exists a partition