Electrolocation without an electric image

In this section, we consider the localization problem using a minimal receptor array, i.e. we consider the information provided by an array of N = 12 receptors. Ideally, we would follow the approach of Pourziaei et al [18] and determine a configuration and minimum number of receptors that would ensure an unambiguous determination of the object location. Given the nonlinearity of the equations, it is expected that, to avoid the possibility of multiple solutions, this minimal number will be greater than the number of unknowns. Unfortunately, the equations for this problem are not amenable to such an approach. Thus, instead, we consider the case of N = 12, and numerically investigate the feasibility of using Newton’s method [19] to solve the resulting nonlinear system of 12 equations in 12 unknowns, with the understanding that multiple solutions may be possible. We construct a series of test problems to serve as a feasibility study, or proof of concept, which will provide insight into the solvability of this approach.
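As a concrete, deliberately simplified illustration of this procedure, the sketch below applies Newton’s method with a forward-difference Jacobian to a toy two-equation system. The function `F` is a hypothetical stand-in, not the actual 12-equation localization system (5), which is not reproduced here.

```python
import numpy as np

def newton(F, x0, tol=1e-10, max_iter=50):
    """Generic Newton iteration for F(x) = 0, using a forward-difference
    approximation of the Jacobian (sufficient for a small sketch)."""
    x = np.asarray(x0, dtype=float).copy()
    n = x.size
    h = 1e-7
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((n, n))
        for k in range(n):
            e = np.zeros(n)
            e[k] = h
            J[:, k] = (F(x + e) - f) / h   # column k of the Jacobian
        x = x - np.linalg.solve(J, f)      # Newton update
    return x

# toy two-equation stand-in for the 12-equation system (5)
F = lambda x: np.array([x[0]**2 + x[1] - 3.0,
                        x[0] - x[1]**2 + 1.0])
solution = newton(F, np.array([1.5, 1.5]))  # converges for this nearby guess
```

For the actual problem, `F` would be built from equations (5) and the measured potential differences, and the quality of the initial guess determines which of the possible solutions the iterations reach, as discussed below.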

The test problems are constructed by choosing an array of 12 receptors, assigning values to the unknowns $\mathbf{x}_o^{(1)}$, $\tilde{\mathbf{E}}_o^{(1)}$, $\mathbf{x}_o^{(2)}$ and $\tilde{\mathbf{E}}_o^{(2)}$, and using (5) to find the appropriate $\Delta\phi_j$, $j = 1,\dots, 12$ for this receptor configuration. The computed values of $\Delta\phi_j$ and the receptor array become the ‘known’ values in equations (5), which we solve to recover the assigned values of the unknowns. If this were a linear system, this would be a trivial task. However, the nonlinearity of the problem may cause it to be prohibitively difficult to solve. Although we can be assured that the ‘true’ (i.e. the assigned) solution is a solution of the equations, there is no reason to expect that other solutions do not exist. In fact, symmetries of the system guarantee that there are others. In particular, a reflection symmetry across z = 0 dictates that for any solution with $z_o^{(k)}\neq 0$, there is another solution with $z_o^{(k)}\rightarrow -z_o^{(k)}$, $E_z^{(k)}\rightarrow -E_z^{(k)}$, for k = 1 and/or k = 2. In this case, we can avoid ambiguity by only considering solutions with z > 0. Similarly, there is also a ‘time-reversal’ symmetry of the system that leads to spurious solutions. Namely, if $\mathbf{x}_o^{(1)} \leftrightarrow \mathbf{x}_o^{(2)}$, then $\Delta \phi_j \leftrightarrow -\Delta \phi_j$, regardless of the locations of the receptors, i.e. the problem is essentially the same if the object is moved from a to b versus being moved from b to a. Thus, for any given solution, $\mathbf{x}_o^{(1)} \leftrightarrow \mathbf{x}_o^{(2)}$, $\tilde{\mathbf{E}}_o^{(1)} \leftrightarrow -\tilde{\mathbf{E}}_o^{(2)}$ will also be a solution, regardless of the locations of the receptors. In this case, we may be required to use some a priori knowledge of the unperturbed electric field, e.g. that $E_z > 0$ for certain $x_o$. Additional solutions are also expected due to symmetries in the receptor array (see below). Furthermore, the nonlinearity of the system may lead to any number of other solutions independent of the symmetries.

The solvability of the test problems is tied to the (local) convergence of Newton’s method, which is assured by a theorem that holds as long as three conditions are satisfied [20]. In particular, the theorem assumes that (1) a solution $\mathbf{X}^*$ exists, (2) the Jacobian is Lipschitz continuous at $\mathbf{X}^*$, and (3) the Jacobian is nonsingular at $\mathbf{X}^*$, where the Jacobian is the matrix of partial derivatives of the equations with respect to the unknowns. If these conditions hold, then the theorem states that, given a sufficiently good guess of the solution, the Newton iterations will converge to $\mathbf{X}^*$.

Given that the Jacobian at $\mathbf{X}^*$ is nonsingular, the implicit function theorem can be used to show that the solution $\mathbf{X}^*$ is isolated, i.e. there is a neighborhood around $\mathbf{X}^*$, possibly small, in which $\mathbf{X}^*$ is the only solution. This does not imply that the solution is globally unique. However, generally, if all solutions are isolated, it may be that prior information or experience could be used to determine which is the correct physical solution. In the context of using Newton’s method to find the solution, this prior information comes in the form of the initial guess.

If, on the other hand, the Jacobian at the true solution $\mathbf{X}^*$ is singular, then the solution at $\mathbf{X}^*$ may not be isolated and thus may be difficult, or impossible, to locate. In particular, the iterations of Newton’s method may not converge, or may converge very slowly. Furthermore, if the Jacobian is close to singular, similar practical issues may arise. In this case, solutions may still be isolated, but there may be multiple solutions close enough together that prior information cannot distinguish between them. Furthermore, large errors in the numerical approximations may arise.

For our problem, the first two conditions of the convergence theorem follow from our construction of the problem. In particular, we know that a solution exists (because we have constructed it as such), and (local) Lipschitz continuity of the Jacobian follows from the observation that all partial derivatives of all the components of the Jacobian will be bounded as long as the receptor locations do not coincide with the object location (specifically, as long as $|\mathbf{x}_j-\mathbf{x}_o^{(k)}|$ is bounded away from zero). This is easily avoided by requiring that the object never enters the receptor plane (specifically, we assume $z_o^{(k)}\neq 0$). It is left to determine whether the Jacobian is singular.

There are cases for which the Jacobian is, indeed, singular. Specifically, the Jacobian will be singular if the object has not moved (i.e. if $\mathbf{x}_o^{(1)} = \mathbf{x}_o^{(2)}$). In this case, regardless of the location of the receptors, the first six columns of the Jacobian will be a scalar multiple of the last six when evaluated at the solution, leading to the singularity.

Symmetries in the receptor array can also lead to singularities in the Jacobian. For example, consider the case in which the electric field is uniform and vertical, i.e. $\tilde{\mathbf{E}}_o^{(1)} = \tilde{\mathbf{E}}_o^{(2)} = (0,0,E_z)$, and the receptor array has a reflection symmetry across the x-axis, for which $x_k \rightarrow x_j$, $y_k \rightarrow - y_j$, for all receptors. Then the Jacobian will be singular for an object moving along the x-axis at constant height $z_o$, i.e. if $\mathbf{x}_o^{(1)} = (x_o^{(1)}, 0, z_o)$, $\mathbf{x}_o^{(2)} = (x_o^{(2)}, 0, z_o)$, for any $x_o^{(1)}$, $x_o^{(2)}$. A singularity of the Jacobian can also occur if the object follows a symmetric path across a symmetry of the receptor array. For example, given a receptor array that has a reflection symmetry across the y-axis ($x_k \rightarrow -x_j$, $y_k \rightarrow y_j$), with, in addition, four of the receptors lying on the y-axis, then if there is a possibly nonuniform electric field for which $\tilde{\mathbf{E}}_o^{(1)} = (0,E_y,E_z)$ and $\tilde{\mathbf{E}}_o^{(2)} = (0,-E_y,E_z)$, and the object moves from $\mathbf{x}_o^{(1)} = (x_o, y_o, z_o)$ to $\mathbf{x}_o^{(2)} = (-x_o, y_o, z_o)$, the Jacobian will be singular for all $x_o$, $y_o$, $z_o$. Similarly, under the same symmetry assumptions, a singularity occurs when the object follows the same path and there is a uniform electric field satisfying $\tilde{\mathbf{E}}_o^{(1)} = \tilde{\mathbf{E}}_o^{(2)} = (E_x,0,E_z)$. Note that these arguments hold with $x \leftrightarrow y$; see below.

All of the cases we have described for which the Jacobian is singular are generally isolated, in the sense that a small change in the receptor or object locations will lead to a nonsingular Jacobian. However, it is likely there are other cases which lead to a singular Jacobian, and it is very difficult to prove, in general, when these cases will occur, or whether they will be isolated. Thus, we instead investigate the condition number of the Jacobian as it depends on the parameter values and receptor array configurations. The condition number of a matrix is infinite when the matrix is singular, and furthermore, a large condition number indicates that a matrix is close to singular. In addition to the issues discussed above regarding nearly singular Jacobians, operations with matrices with large condition number (e.g. solutions of linear systems) may lead to large errors and will be particularly sensitive to noise. By plotting the condition number as a function of the parameters and receptor locations, we can investigate the situations in which it may not be possible to solve the equations accurately.
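The diagnostic just described is straightforward to compute. As a minimal sketch, the following evaluates the two-norm condition number via `np.linalg.cond` for two small hypothetical matrices, one well conditioned and one nearly singular; these matrices are illustrative only and are not Jacobians of (5).

```python
import numpy as np

# A well-conditioned matrix: singular values 2 and 1, condition number 2.
J_good = np.array([[2.0, 0.0],
                   [0.0, 1.0]])

# A nearly singular matrix: the rows are almost parallel, so the smallest
# singular value is ~5e-9 and the condition number is ~4e8.
J_near = np.array([[1.0, 1.0],
                   [1.0, 1.0 + 1e-8]])

log_cond_good = np.log10(np.linalg.cond(J_good))  # small: well conditioned
log_cond_near = np.log10(np.linalg.cond(J_near))  # large: nearly singular
```

`np.linalg.cond` computes the ratio of largest to smallest singular value; plotting its base-10 logarithm over a grid of parameter values is what produces maps like those in figure 2.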

In figure 2, we present some examples in which the condition number of the Jacobian of (5) is computed for the following array of 12 receptors:

Equation (6)

(see figure 1). In figures 2(a) and (b), we plot the condition number as a function of the x and y components of the final object location, $x_o^{(2)}$ and $y_o^{(2)}$, and we take the initial object location $\mathbf{x}_o^{(1)} = ( -1.0, -0.5, 5.0)$ cm, and the z component of the final object location $z_o^{(2)} = 5.0$ cm. In figure 2(a), we consider a uniform vertical electric field, specifically $\tilde{\mathbf{E}}_o^{(1)} = \tilde{\mathbf{E}}_o^{(2)} = \gamma (0.0,0.0,0.9)$ cm$^2\,$mV, while in figure 2(b) the electric field is nonuniform: $\tilde{\mathbf{E}}_o^{(1)} = \gamma (-0.7, 0.5, 0.9)$ cm$^2\,$mV and $\tilde{\mathbf{E}}_o^{(2)} = \gamma (0.8,-1.0,0.9)$ cm$^2\,$mV, where $\gamma = \Gamma a^3 = -0.0599$ cm$^3$, which corresponds to an object radius a = 0.5 cm, a relative permittivity of the object $\epsilon_i = 2.25$ (polyethylene), and a relative permittivity of the medium $\epsilon_e = 80.1$ (water at 20 °C).

Figure 2. Condition number of the Jacobian of (5) as a function of: (a) final object location, $x_o^{(2)}$ versus $y_o^{(2)}$, with purely vertical uniform electric field; (b) final object location, $x_o^{(2)}$ versus $y_o^{(2)}$, with nonuniform electric field; (c) x-component of the initial and final electric field, $\tilde{E}_x^{(1)}$ versus $\tilde{E}_x^{(2)}$, where the object follows a symmetric path across a symmetry of the receptor array; (d) x-component of the initial and final electric field, $\tilde{E}_x^{(1)}$ versus $\tilde{E}_x^{(2)}$, where the object follows a symmetric path, but the receptors are randomly displaced from the symmetric spacing of (c). The condition number is represented on a logarithmic (base 10) scale. See text for specific parameter values used.


In both figures 2(a) and (b), the singularity that occurs when $\mathbf{x}_o^{(1)} = \mathbf{x}_o^{(2)}$ is clearly visible, and for parameter values close to this point, the condition number remains high. Generally, as the distance between the initial and final object locations increases, the condition number decreases. However, there are filaments throughout the range of parameters along which the condition number remains high. In addition, along these filaments, there are isolated locations, which may not be close to the $\mathbf{x}_o^{(1)} = \mathbf{x}_o^{(2)}$ singularity, at which the condition number is very high. However, given the assumption that the object is in motion, taking the final object location measurement $\mathbf{x}_o^{(2)}$ at a slightly later time can result in the object moving from a point of high condition number to one of lower condition number, as long as the motion is not along a filament, e.g. in the case of figure 2(a), as long as the object has some motion in the x direction, i.e. for $x_o^{(1)}\neq x_o^{(2)}$. Note that, regardless, the condition number tends to decrease, even along the filaments, as all quantities move away from the singular location. In particular, the condition number tends to be low if all quantities change from initial to final locations, with a larger reduction when the differences between the quantities at the initial and final locations are larger. A change in the components of the electric field also changes the orientation of the filaments, and leads to an overall reduction in condition number; see figure 2(b).

The receptor array (6) has been chosen to avoid the symmetries that lead to a singular Jacobian in the cases where $x_o^{(1)} \neq x_o^{(2)}$. However, a singularity may still occur if this restriction is lifted, i.e. if $x_o^{(1)} = x_o^{(2)}$. In figure 2(c), a singularity due to the symmetry of the receptor array and object motion can be seen. In particular, the condition number is plotted as a function of $E_x^{(1)}$ and $E_x^{(2)}$, where the scaled electric fields at the initial and final object locations are $\tilde{\mathbf{E}}_o^{(1)} = \gamma(E_x^{(1)},0.0,0.9)$ cm$^2\,$mV and $\tilde{\mathbf{E}}_o^{(2)} = \gamma(E_x^{(2)},0.0,0.9)$ cm$^2\,$mV, respectively, while the object is moved symmetrically across the x-axis from $\mathbf{x}_o^{(1)} = (0.5, -1.0, 5.0)$ cm to $\mathbf{x}_o^{(2)} = (0.5, 1.0, 5.0)$ cm. The symmetry singularity is clearly visible for $E_x^{(1)} = E_x^{(2)}$. However, this singularity can be eliminated by destroying the symmetry, either by shifting the whole array, or by randomly shifting the location of each of the receptors by a small amount. For example, figure 2(d) shows the same scenario as figure 2(c), except that the receptor positions are randomly perturbed from their original locations with Gaussian noise of variance 0.05. Even this small perturbation of the array eliminates the singularity. As such, we generally do not expect the symmetries of the array (6) to cause issues in the localization. Therefore, for simplicity of the computations presented below, we take the array (6) as is, without perturbation, and assume that the object path does not have this very specific symmetry.
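The symmetry-breaking perturbation described above can be sketched as follows. The 12-receptor grid below is a hypothetical stand-in (the actual array is given by equation (6)), while the variance 0.05 matches the value quoted for figure 2(d).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric 12-receptor grid in the z = 0 plane, standing in
# for the array of equation (6); it has reflection symmetries across both
# the x- and y-axes, the kind that can make the Jacobian singular.
xs, ys = np.meshgrid(np.linspace(-3.0, 3.0, 4), np.linspace(-1.5, 1.5, 3))
receptors = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(12)])

# Break the symmetries: displace each receptor independently in the plane
# with zero-mean Gaussian noise of variance 0.05 (standard deviation ~0.22).
perturbed = receptors.copy()
perturbed[:, :2] += rng.normal(0.0, np.sqrt(0.05), size=(12, 2))
```

After such a perturbation, no two receptors remain exact mirror images of each other, which is why even a small displacement removes the symmetry singularity.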

Some additional observations (not shown) are that the problem is much better conditioned, in general, when the object changes its vertical distance from the receptor array, and increasing the distance between receptors reduces the condition number.

The theoretical results of the previous section have shown that the problem is well-posed as long as the object has moved sufficiently far. Specifically, if we choose a sufficiently good initial guess of the solution, then the Newton iterations will converge. Thus, it is left to determine the practical issue of whether an appropriate guess can be found, and whether the iterations converge to the correct solution.

Again, in general, it is not possible to determine which initial guesses will be sufficient for a given solution. In fact, the size and shape of the neighborhood around the solution which will lead to convergence of the Newton iterations may not only be very complicated, but may also depend on the solution itself. Instead, we return to studying test problems. These cannot be used to prove the behavior for all solutions, but by choosing generic examples, it can be expected that the results will be similar to many practical cases.

To test the importance of the quality of the initial guess, we pick the initial guess $\mathbf{X}_0$ as a random perturbation of the known correct solution $\mathbf{X}_\mathrm{true} \in \mathbb{R}^{12}$. Each computation has three possible outcomes: convergence to the correct solution $\mathbf{X}_\mathrm{true}$, convergence to another solution, or no convergence. The result is recorded in one of 30 equally spaced bins corresponding to the size of the random perturbation (measured in the two-norm) from the correct solution. The random perturbations are chosen such that there will be approximately the same number of initial conditions in each bin. After we have approximately $10000$ such perturbations for each bin, the number of initial guesses in each bin that lead to convergence to the correct solution is divided by the total number of perturbations in the respective bin to obtain a probability of convergence for each bin. The results are plotted in figure 3. For these computations, we take $\mathbf{X}_\mathrm{true}$ to be given by $\mathbf{x}_o^{(1)} = ( -1.0, -0.5, 5.0)$ cm, $\mathbf{x}_o^{(2)} = ( 2.0, 1.5, 5.0)$ cm and $\tilde{\mathbf{E}}_o^{(1)} = \tilde{\mathbf{E}}_o^{(2)} = \gamma (0.0,0.0,0.9)$ cm$^2\,$mV, where $\gamma = -0.0599$ cm$^3$ as above. The locations of the 12 receptors in the receptor array are chosen as in equation (6).
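The binning experiment can be sketched as follows, with a scalar toy problem (sin x = 0, true root at 0) standing in for the 12-dimensional system (5), which is not reproduced here; perturbation sizes are binned and the fraction of Newton runs that return to the true root is recorded per bin.

```python
import numpy as np

def newton_scalar(x0, max_iter=50):
    """Newton iteration for sin(x) = 0; roots are integer multiples of pi."""
    x = x0
    for _ in range(max_iter):
        d = np.cos(x)
        if d == 0.0:
            break
        x = x - np.sin(x) / d
    return x

rng = np.random.default_rng(1)
n_bins, r_max, n_trials = 6, 3.0, 3000      # fewer bins/trials than the paper
counts = np.zeros(n_bins)
hits = np.zeros(n_bins)
for _ in range(n_trials):
    r = rng.uniform(0.0, r_max)              # perturbation size (two-norm)
    x0 = np.sign(rng.standard_normal()) * r  # perturbed initial guess
    b = min(int(r / (r_max / n_bins)), n_bins - 1)
    counts[b] += 1
    hits[b] += abs(newton_scalar(x0)) < 1e-8  # converged to the true root?
prob = hits / np.maximum(counts, 1)           # convergence fraction per bin
```

Even in this toy setting the qualitative behavior of figure 3 appears: guesses in the bin nearest the true root always converge to it, while more distant guesses increasingly land on other roots (other solutions) or wander.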

Figure 3. Convergence fraction for an array with N = 12 receptors versus the two-norm of $\mathbf{X}_0 - \mathbf{X}_\mathrm{true}$, the difference between the initial guess $\mathbf{X}_0$ for the Newton iterations and the correct solution $\mathbf{X}_\mathrm{true}$. See text for parameter values used.


Figure 3 indicates that, for initial guesses $\mathbf{X}_0$ that are less than 1 in the two-norm from the correct solution $\mathbf{X}_\mathrm{true}$, approximately half lead to convergence to the correct solution. However, only in the bin corresponding to the guesses closest to the correct solution did all the initial guesses converge. In a small fraction of cases, the guesses lead to convergence to a different solution (less than 0.2% of cases for guesses furthest from the correct solution). Similar results are achieved with other test cases (not shown).

The results indicate the solvability, in principle, of the problem. However, it is left to determine whether, in practice, an initial guess with the required accuracy can be found. Indeed, this is a shortcoming that we will address below. Furthermore, in practice, there will be limited accuracy with which the receptors can measure the potentials, and it is left to determine how this will affect the solvability. We address this issue in the next subsection.

In practice, the receptors will not be ideal, and thus, there will be an error associated with the measurements of the electric potential at the receptor locations which will fluctuate for each different measurement. We refer to this as receptor noise. We also include in this definition any other noise induced on the spatial scale of the receptors (i.e. that is different for each receptor and each measurement).

There will also be variations in the electrical properties of the media due to, e.g. temperature variations or other inhomogeneities of the media, which we refer to as environmental noise. We include in this category any bias in the measurement of the potentials that is the same for all receptors. The effects of environmental noise will induce variations in the field values ($\tilde{\mathbf{E}}_o^{(k)}$, $k = 1,2$) at the locations of the object (see section 2). However, if we assume that these variations are on a slower time scale than the time scale of the motion of the object, then the variations will introduce no error in the values of the object locations. That is, it can be considered that the environmental noise leads to the modified values $\hat{E}_x^{(k)} = \tilde{E}_x^{(k)}+\xi_x$, $\hat{E}_y^{(k)} = \tilde{E}_y^{(k)}+ \xi_y$ and $\hat{E}_z^{(k)} = \tilde{E}_z^{(k)}+\xi_z$, where, given the assumption, the shifts $\xi_x$, $\xi_y$ and $\xi_z$ are the same for the measurements of the potential at both object locations, and for all receptors. Thus, solution of the localization equations (5) would lead to no error in the object locations, while the computation of the other unknown quantities would yield the hatted values rather than the corresponding true values. Essentially, variations due to environmental noise only influence the ambient electric field, about which the method does not require information.

Thus, we only consider receptor noise in the investigation of the effects of noise on the localization method. We model the receptor noise as Gaussian noise at the receptor level. That is, we assume that the measurements made by a receptor ($\phi_j^{(1)}$ and $\phi_j^{(2)}$, and thus $\Delta \phi_j = \phi_j^{(2)}-\phi_j^{(1)}$) are the true values plus an independent Gaussian random variable with mean zero and standard deviation ε, which we call the level of the noise. Again, we consider a test problem with $\mathbf{X}_\mathrm{true}$ given by $\mathbf{x}_o^{(1)} = ( -1.0, -0.5, 5.0)$ cm, $\mathbf{x}_o^{(2)} = ( 2.0, 1.5, 5.0)$ cm and $\tilde{\mathbf{E}}_o^{(1)} = \tilde{\mathbf{E}}_o^{(2)} = \gamma (0.0,0.0,0.9)$ cm$^2\,$mV, where $\gamma = -0.0599$ cm$^3$. The locations of the 12 receptors in the receptor array are chosen as in equation (6). We consider $10000$ measurement realizations for each noise level ε, where each realization involves sampling a Gaussian random variable with mean zero and standard deviation ε for each individual receptor. As such, we expect the solutions for all realizations at each noise level to form a distribution, which, if the method is robust to noise, will have mean close to $\mathbf{X}_\mathrm{true}$. The standard deviation of this distribution quantifies the sample-to-sample variability, i.e. the error one may expect when taking a single realization. In this way, the mean and standard deviation provide an indication of the robustness of the method to noise. For each realization, we compute the solution of the system (5) using Newton’s method; for the initial guess we use $\mathbf{X}_\mathrm{true}$.
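The Monte Carlo procedure can be sketched as follows. The solve step is represented by a hypothetical placeholder (the identity map), since the actual Newton solve of (5) depends on the full model; only the noise model and the summary statistics are illustrated.

```python
import numpy as np

def recover(dphi):
    """Placeholder for the Newton solve of system (5), which would map the
    12 noisy potential differences to the 12 unknowns; the identity map
    stands in here so the statistics machinery can be demonstrated."""
    return dphi

rng = np.random.default_rng(2)
dphi_true = rng.standard_normal(12)   # hypothetical noise-free measurements
eps = 0.01                            # receptor noise level (std deviation)
n_real = 10000                        # realizations per noise level

# each realization adds independent zero-mean Gaussian noise per receptor
sols = np.array([recover(dphi_true + rng.normal(0.0, eps, size=12))
                 for _ in range(n_real)])

mean_sol = sols.mean(axis=0)          # should sit close to the true values
std_sol = sols.std(axis=0)            # sample-to-sample variability
```

Repeating this over a range of ε and recording `std_sol` for the location components is what produces the robustness curves of figure 4.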

In figure 4, the standard deviations of the components of the final object location $\mathbf{x}_o^{(2)}$ across the
