ECECS 407 Multi-Party Computation Assignment 2
1 Introduction
Multi-party computation is the generalization of two-party computation. Instead of garbled circuits, we use a
technique based on secret sharing. The idea is that we store a confidential dataset of secret shared values, and
can run queries and computations on it without revealing the data.
In this programming assignment, you’ll implement routines for encoding and decoding secret shared data, a
multi-party computation protocol, and finally several MPC programs.
This document gives a short but mostly self-contained overview of the protocol and outlines the tasks to
complete in the assignment, to serve as a checklist. The comments in the provided skeleton code also follow this
closely.
2 Multi-Party Computation Protocol
Secret sharing MPC is about computing on secret shared data. A secret sharing of x ∈ Fp is a way of encoding x in such a way that N nodes each store a different share (node i stores JxK^(i)). The idea is that confidentiality, integrity, and availability guarantees should hold even if up to f of the nodes are compromised.
We use the Shamir secret sharing scheme. To encode a piece of secret data x ∈ Fp, where Fp is a finite field of prime order p, we first generate a random degree-f polynomial φ such that x = φ(0). Each node i stores its share of the data as JxK^(i) = φ(i). The data can be recovered from a subset of f + 1 shares from honest parties through Lagrange interpolation (though more thought is needed to handle Byzantine errors).
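As a concrete illustration (not the skeleton code's API; the assignment's actual field and routines live in secretsharing.py), the encoding step might look like this sketch in Python, using a toy prime:

    import random

    p = 2**31 - 1  # toy prime for illustration only

    def shamir_share(x, f, N):
        # Random degree-f polynomial phi with constant term phi(0) = x.
        coeffs = [x] + [random.randrange(p) for _ in range(f)]
        # Node i's share is phi(i), for i = 1, ..., N.
        return [sum(c * pow(i, e, p) for e, c in enumerate(coeffs)) % p
                for i in range(1, N + 1)]

Recovering x from any f + 1 of these shares is exactly Lagrange interpolation evaluated at 0, which is the subject of Task 1 below.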
Multiparty computations We can write a variety of programs to operate on secret shared data. For example, given secret shared inputs, JxK, …, we may want to evaluate a program prog to get JoK = Jprog(x, …)K =
execMPC(prog, JxK, …) and then run o ← open(JoK) to publish just the output of the program.
MPC programs like prog are written in a uniform way, even though they are executed in a replicated fashion among the N different nodes. Linear operations on secret shared data can simply be performed locally at each node. The opening operation, x ← open(JxK), requires communication between all the nodes. In this assignment, the skeleton code includes a simple MPC simulator, which runs N instances of a program prog in separate (green) threads, and routes messages between them when they encounter an open instruction. This simulator makes use of the secret sharing encoder and decoder you will complete, but otherwise should work already.
To build useful programs on top of MPC, we often need to make use of preprocessing data. The challenge is that although we can perform linear operations on secret shared data trivially, we need to use a variety of clever algorithms to do non-linear operations. For example, given a secret value JxK and a random preprocessing value JrK, publishing open(JrxK) reveals no information about x, but can be used to compute Jx^(-1)K, which would otherwise be difficult. The MPC simulator provides easy access to a stash of random preprocessing values.
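To see why this helps (a sketch of the standard masking argument; the exact protocol is left to Task 2(b)): suppose the parties compute and publish w = open(Jr·xK). Since r is uniformly random and independent of x, the product r·x is itself a uniformly random field element whenever x ≠ 0, so revealing w leaks nothing about x. But now each node i can locally compute w^(-1) · JrK^(i), which is a share of w^(-1)·r = (r·x)^(-1)·r = x^(-1), i.e., Jx^(-1)K, using only one opening and local linear operations.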
Below we describe the required tasks in detail, in the order we recommend approaching them. You can also
search for the string TODO in the python files for hints on where to begin.
Skeleton Code Explanation The skeleton code consists of the four files polynomials.py, secretsharing.py, mpc_sim.py, and fouriertransform.py. Each of these has clearly marked Problems and TODO: hints to suggest where to include your code.
How to run the code You can run each file directly to see some test cases run, e.g. python secretsharing.py.
3 Tasks
1. Interpolating Polynomials [15pts] We will build on the same library for polynomials over finite fields that we used in MP1. This provides operations on polynomial coefficients, including adding and multiplying polynomials together. However, we need to implement some additional operations; the file polynomials.py contains tasks to complete these functions. As a warmup, the first task you'll need to complete is polynomial evaluation: given the coefficients of φ(·) and a value x, evaluate φ(x). Use Horner's rule to do it efficiently.
The next task is about Lagrange polynomials. Given a set of values x0, …, xk, compute the Lagrange polynomials ℓi(x) for 0 ≤ i ≤ k.
The Lagrange polynomials can be used to perform polynomial interpolation. Given k + 1 points (x0, y0), …, (xk, yk), find a polynomial φ of degree at most k that coincides with each of these points, yi = φ(xi).
The secretsharing.py file defines a particular field Fp we'll use for this assignment — it's chosen for a few special properties we'll get to later. Given polynomial interpolation, we can immediately reconstruct a shared value from all n shares. This is provided for you; you can go straight to mpc_sim.py now, and we'll come back to secretsharing.py in Problem 3. (A sketch of Horner evaluation and Lagrange interpolation appears below.)
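For concreteness, here is a minimal sketch of the two routines this task describes; the names and signatures are illustrative, not those used in polynomials.py:

    def eval_poly(coeffs, x, p):
        # Horner's rule: (...((a_k * x + a_{k-1}) * x + a_{k-2}) * x + ...) + a_0,
        # where coeffs[i] is the coefficient of x^i.
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % p
        return acc

    def interpolate_at(points, z, p):
        # Evaluate at z the unique polynomial of degree at most k passing through the
        # k+1 points (x_i, y_i), using the Lagrange basis:
        #   phi(z) = sum_i y_i * l_i(z),  l_i(z) = prod_{j != i} (z - x_j) / (x_i - x_j).
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if j != i:
                    num = num * (z - xj) % p
                    den = den * (xi - xj) % p
            total = (total + yi * num * pow(den, p - 2, p)) % p
        return total

Setting z = 0 and using the share points (i, JxK^(i)) recovers the secret, which is what the provided reconstruction does.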
2. MPC Applications [30pts] The mpc_sim.py file contains a simulator for MPC protocols, which is mostly complete (once you finish the interpolation routines, it will mostly work).
Your task will be to implement MPC programs that compute on secret shared values. Secret shares are ordinary field elements, so linear operations on the field elements correspond to linear operations on the shared values. Secret shared values can be opened, making use of the reconstruction protocols above. The MPC applications are presented as puzzles, although we've gone over the main ideas in lectures.
(a) Beaver Multiplication. [5pts] Just multiplying the shares together, JxK·JyK, would give a degree-2f polynomial, which can't be recovered directly. The Beaver multiplication approach makes use of a preprocessed “beaver triple” of shared values, JaK, JbK, JabK, which is used once in this protocol and then discarded. (A sketch of the algebra appears after this list.)
(b) Computing the inverse of a shared value. [5pts] We start with a secret shared value JxK as input, and want to obtain a share of J1/xK. This is a nonlinear operation: not only does 1/JxK not work, but we can't in general invert the polynomial φ at all (polynomials form a ring, not a field). We can make use of a preprocessed random share JrK, where r is a random field element.
(c) Generating a random bit. [5pts] Although we are assuming we have a supply of random field elements JrK, where r is uniform in the field Fp, we often want to generate a random bit specifically: JbK where b is drawn uniformly from {0, 1}. Try to think of a way to convert JrK into JbK without revealing b.
(d) Computing powers in constant rounds. [10pts] Given JxK, we'd like to compute Jx^2K, …, Jx^ℓK, the first ℓ powers of JxK. Clearly this can be done sequentially using Beaver multiplication, but that requires many round trips, and round-trip latency can be the bottleneck in a distributed system. Can we trade throughput for latency and compute these in a constant number of rounds?
(e) Extracting random values. [5pts] Suppose we have shared values Jx1K, …, JxNK, each contributed by a different party. Honest parties generate these randomly and forget about them, but up to f of them may be adversarially chosen, and we don't know which are which. Can we combine these “sketchy” shared values, so that we have N − f “good” shared values, which are guaranteed to be random and unknown?
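For part (a), the algebra behind Beaver multiplication looks roughly like the following sketch. Here open_fn stands for the protocol's open operation, the triple argument is a placeholder for however mpc_sim.py hands out preprocessing, and shares are assumed to support +, -, and multiplication by public constants (as field elements do):

    def beaver_mul(x_sh, y_sh, triple, open_fn):
        # triple = (a_sh, b_sh, c_sh) where c = a*b; it is consumed by this call.
        a_sh, b_sh, c_sh = triple
        # d = x - a and e = y - b are safe to open because a and b are uniform masks.
        d = open_fn(x_sh - a_sh)
        e = open_fn(y_sh - b_sh)
        # x*y = c + d*b + e*a + d*e, and the right-hand side is linear in the shared
        # values, so each node evaluates it locally on its own shares.
        return c_sh + d * b_sh + e * a_sh + d * e

Parts (b) through (e) follow a similar pattern: open only quantities that are perfectly masked by fresh preprocessing randomness, and do everything else with local linear operations.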
3. Robust Reconstruction of Secret Shares [10pts] The simple reconstruction above assumes that all shares are correct. However, we'll need a more robust approach in the case that some nodes crash or are compromised. The main insight is that if 2f + 1 shares are recovered, at least f + 1 of them must be honest by assumption, and a degree-f polynomial is fully determined by f + 1 points. Hence if we can find a degree-f polynomial that coincides with 2f + 1 points, we'll know it's the correct one.
Your task is to complete a function that searches for a suitable polynomial by brute force, trying subsets of f + 1 values at a time and checking whether they work (see the sketch at the end of this task).
Note: More efficient approaches are possible here as well, for example the Berlekamp-Welch method for decoding. This is left as an optional suggestion for bonus points; brute force will work for our test cases.
You can complete the MPC applications just with the simple reconstruction, so the recommended order is to leave robust reconstruction for later.
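A brute-force search in the spirit of this task might look like the sketch below, reusing an interpolation helper like the one sketched under Task 1 (again, the names are illustrative rather than the skeleton's):

    from itertools import combinations

    def robust_reconstruct(points, f, p):
        # points: at least 2f+1 pairs (i, share_i), of which at most f may be corrupted.
        for subset in combinations(points, f + 1):
            # Candidate: the degree-<=f polynomial through this subset of f+1 points.
            agree = sum(1 for (x, y) in points
                        if interpolate_at(list(subset), x, p) == y)
            if agree >= 2 * f + 1:
                # A degree-f polynomial matching 2f+1 shares must be the true one.
                return interpolate_at(list(subset), 0, p)
        raise ValueError("no degree-f polynomial matches 2f+1 of the shares")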
4. Discrete Fourier Transform on Finite Fields [20pts] The Fourier transform isn't just for real numbers. We can use the fast Fourier transform over finite fields to accelerate computations involving polynomials.
Suppose we have an element ω in the field that is a primitive 2^k-th root of unity. This means that |ω| = 2^k, i.e., k is the smallest number such that ω^(2^k) = 1. Then, given the coefficients of a polynomial φ(x) = a0 + a1·x + … + a_(2^k−1)·x^(2^k−1), the Discrete Fourier Transform (DFT) is defined as the evaluation of the polynomial at the points φ(ω^0), φ(ω^1), …, φ(ω^(2^k−1)).
The field Fp is chosen for a particular property that makes it amenable to doing secret sharing using the DFT. The order of the multiplicative group Fp^×, namely p − 1, is divisible by a large power of 2: 2^16 | p − 1. This means that we are able to find 2^n-th roots of unity for n up to 16. Your first task is to give an algorithm to find one.
Given a field and an n-th root of unity, we can use the Fast Fourier Transform (FFT) algorithm to compute the DFT, and hence evaluate a degree-n polynomial at n distinct points, in only O(n log n) time. This can be a big computational saving compared to the O(n^2) that the earlier solutions take. (A sketch of both pieces appears at the end of this task.)
Reference: “The (finite field) Fast Fourier Transform”, https://pdfs.semanticscholar.org/7a2a/4e7f8c21342a989d253704cedfb936bee7d7.pdf
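For reference, here is one way both pieces could be sketched: finding a primitive 2^k-th root of unity (using 2^16 | p − 1) and a recursive radix-2 FFT for evaluating a polynomial at all powers of ω. The function names are illustrative; fouriertransform.py may expect a different structure.

    def find_root_of_unity(p, k):
        # Return omega with multiplicative order exactly 2^k; requires 2^k | p - 1 and k >= 1.
        assert (p - 1) % (1 << k) == 0
        g = 2
        while True:
            omega = pow(g, (p - 1) // (1 << k), p)   # order of omega divides 2^k
            if pow(omega, 1 << (k - 1), p) != 1:     # order is exactly 2^k
                return omega
            g += 1

    def fft(coeffs, omega, p):
        # Evaluate the polynomial with the given coefficients at omega^0, ..., omega^(n-1),
        # where n = len(coeffs) is a power of two and omega is a primitive n-th root of unity.
        n = len(coeffs)
        if n == 1:
            return list(coeffs)
        evens = fft(coeffs[0::2], omega * omega % p, p)  # even-index coefficients
        odds = fft(coeffs[1::2], omega * omega % p, p)   # odd-index coefficients
        out = [0] * n
        w = 1
        for i in range(n // 2):
            t = w * odds[i] % p
            out[i] = (evens[i] + t) % p
            out[i + n // 2] = (evens[i] - t) % p
            w = w * omega % p
        return out

Padding the coefficient list with zeros up to the next power of two lets you evaluate any polynomial this way.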
5. Additional directions. See mpc_sim.py for additional challenges to earn bonus points.