Security and Safety
Volume 2, 2023
Issue: Security and Safety in Unmanned Systems



Article Number: 2023029
Number of page(s): 16
Section: Other Fields
DOI: https://doi.org/10.1051/sands/2023029
Published online: 12 December 2023
Research Article
Adaptive cooperative secure control of networked multiple unmanned systems under FDI attacks
The School of Automation and the Key Laboratory of Autonomous Intelligent Unmanned Systems, Beijing Institute of Technology, Beijing, 100081, China
^{*} Corresponding author (email: xuyong@bit.edu.cn)
Received: 22 March 2023
Revised: 1 August 2023
Accepted: 13 September 2023
With the expanding applications of multiple unmanned systems in various fields, increasing research attention has been paid to their security, with the aim of enhancing their anti-interference capability, ensuring their reliability and stability, and better serving human society. This article studies adaptive cooperative secure tracking consensus of networked multiple unmanned systems subjected to false data injection attacks. From a practical perspective, each unmanned system is modeled by a high-order unknown nonlinear discrete-time system. To reduce the communication bandwidth between agents, a quantizer-based encoding-decoding mechanism is constructed, built on a uniform-logarithmic quantizer that combines the advantages of both quantizer types. Because false data injected into the transmitted information can degrade the accuracy of the decoder, a new adaptive law is added to the decoder to overcome this difficulty. A distributed controller is then devised in the backstepping framework. Rigorous mathematical analysis shows that the proposed control algorithms ensure that all signals of the resulting closed-loop systems remain bounded. Finally, simulation examples demonstrate the practical utility of the theoretical analysis.
Key words: Secure cooperative control / networked multiple unmanned systems / false data injection attacks / encoding-decoding strategy
Citation: Zhang Y, Mei D, Xu Y et al. Adaptive cooperative secure control of networked multiple unmanned systems under FDI attacks. Security and Safety 2023; 2: 2023029. https://doi.org/10.1051/sands/2023029
© The Author(s) 2023. Published by EDP Sciences and China Science Publishing & Media Ltd.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Owing to the rapid development of artificial intelligence technology, countries around the world have developed unmanned systems. These systems do not require the physical presence of human operators on board and have increasingly autonomous functions, overcoming the limitations of human operators. Their applications have gradually permeated various sectors, including industrial production, social life, and defense technology, with examples such as self-driving vehicles and unmanned aerial vehicle reconnaissance [1, 2]. Since multiple systems offer more flexibility and robustness than single systems, extensive research has been conducted on multiple unmanned systems (MUSs). In particular, the distributed cooperative control of MUSs has garnered significant attention from scholars [3–8]. Given the ubiquitous nature of nonlinear phenomena in the real world, future MUSs could be equipped with tiny built-in microprocessors that gather data from adjacent agents and compute control actions based on pre-established rules. Consequently, digital platforms are employed for the controllers, and the control policy is updated only at specific time instants. Thus, the study of nonlinear discrete-time MUSs holds great importance, and promising results have been reported so far [9–12]. With the advancement of networks, conserving network transmission bandwidth in MUSs has emerged as a prevalent trend.
Quantization is a straightforward yet effective method to alleviate the transmission load in communication channels within MUSs, yielding commendable outcomes [13–15]. Concerning control strategies employing quantized data, Zhang et al. [16] addressed quantized control for linear systems with state observers. Similarly, Liu et al. [17, 18] explored input-quantized state feedback control and state-quantized output feedback control for nonlinear systems, respectively. To enhance the security and efficiency of data transmission in resource-constrained communication networks, encoding-decoding technology based on quantization has captured scholars' attention. In [19], two encoding-decoding schemes were proposed, enabling average consensus in linear systems. Notably, recent extensions [20–22] have appeared; for example, [20] introduced a control protocol utilizing symmetric compensation techniques alongside a distributed quantization-based codec scheme to achieve consensus in discrete-time systems over jointly connected networks. Dong [21] pioneered consensus among multiple nonlinear systems with limited bandwidth, introducing a novel approach of designing distributed controllers for both the inner and outer loops. In [22], a quantized data-driven method based on a codec mechanism was proposed for nonlinear non-affine systems. Moreover, Ren et al. [23] and Zhu et al. [24] utilized data-driven control methods and an encoding-decoding uniform quantization mechanism, effectively compressing the communication data volume between agents. Thus, developing a practical control scheme for MUSs that alleviates the transmission load in communication channels is of paramount importance.
As networks rapidly expand, network security issues have taken on greater prominence. In networked environments, MUSs can achieve efficient cooperation while simultaneously facing increased security risks. This has spurred researchers to delve into the security concerns of MUSs. The communication networks of MUSs are susceptible to denial-of-service and false data injection (FDI) attacks. FDI attacks occur when an adversary intercepts communication between agents and deliberately injects inaccurate packets [25]. In [26], an optimal FDI attack scheme based on historical and current residuals was proposed to maximally undermine performance from the attacker's perspective. Li et al. [27] examined secure consensus in a multiple-input multiple-output system under FDI attacks, employing an event-triggering scheme. Similarly, Meng et al. [28] addressed the adaptive resilient control problem for linear multiple systems subjected to sensor and actuator attacks. Zhang et al. [29] introduced an event-triggered resilient observer-based control strategy to address secure consensus in multi-agent scenarios under FDI attacks following Bernoulli processes. In addressing unknown FDI attacks, Wang et al. [30] proposed an observer-based fully asynchronous event-triggered controller to achieve bipartite consensus among multiple agents. In the context of output consensus under FDI attacks, Huo et al. [31] developed state- and output-feedback cooperative control strategies. Hu et al. [32] tackled stochastic analysis and controller design problems for networked systems, considering FDI attacks in the sensor-controller and controller-actuator channels. Quasi-consensus in stochastic nonlinear time-varying multi-agent systems with multi-modal FDI models was addressed in [33]. However, due to their exposure, the communication networks among agents are especially vulnerable to attacks.
To address this, Tahoun and Arafa [34] designed a distributed adaptive secure control scheme for multiagent networked systems under unknown FDI and replay attacks. Consequently, ensuring the reliability and security of communication networks necessitates the exploration of secure control techniques for networked nonlinear MUSs vulnerable to FDI attacks.
As a consequence, secure cooperative control of MUSs under FDI attacks has garnered significant interest. Drawing from prior research, quantization-based coding and decoding techniques have demonstrated the capability to achieve secure and efficient data transmission within resource-constrained digital communication networks. In light of this, the current study delves into the secure cooperative control challenge posed by discrete-time nonlinear MUSs in the presence of FDI attacks and introduces a quantization-based codec scheme designed to facilitate efficient and secure data transmission in resource-constrained digital communication networks. Existing quantization-based coding and decoding techniques have predominantly been applied to linear multi-agent systems [19, 20, 23, 24], or specifically to the controller-to-actuator channel of an agent [35, 36]. However, there is a noticeable dearth of results for high-order unknown nonlinear multi-agent systems. This gap arises from the complexities introduced by high-order nonlinearity and unknown dynamics, which render the application of existing quantization-based codec techniques challenging. Moreover, in the presence of cyber attacks, these conventional methods fail to guarantee that the system achieves consensus. Consequently, adapting quantization-based codec techniques to high-order unknown nonlinear MUSs under FDI attacks represents a formidable challenge, which serves as the impetus behind this paper. To this end, this paper proposes a codec-based neural network control approach for the distributed quantized cooperative control problem in high-order nonlinear multi-agent systems, taking into account transmission channels between agents that are potentially subject to FDI attacks.
The remainder of this study is organized as follows: Section 2 presents the preliminaries and problem formulation. The design of secure consensus control and the stability analysis are detailed in Sections 3 and 4, respectively. Section 5 provides simulation examples, while Section 6 concludes the study.
2. Preliminaries
2.1. System model
This study discusses the secure cooperative control of MUSs comprising N agents, wherein each agent is described by
$$\begin{cases} x_{i,m}(k+1) = g_{i,m}\,x_{i,m+1}(k) + \digamma_{i,m}(\overline{x}_{i,m}(k)),\\ x_{i,n}(k+1) = g_{i,n}\,u_{i}(k) + \digamma_{i,n}(\overline{x}_{i,n}(k)),\\ y_{i}(k) = x_{i,1}(k), \end{cases}$$(1)
where $m = 1, \dots, n-1$, $i = 1, \dots, N$, $k = 0, 1, 2, \dots$, $\overline{x}_{i,n}(k)=[x_{i,1}(k),\dots,x_{i,n}(k)]^{T}$ is the system state, $g_{i,m}$ is an unknown positive constant gain, $\digamma_{i,m}(\overline{x}_{i,m}(k))$ is an unknown nonlinear Lipschitz function, $u_{i}(k)$ is the plant control input, and $y_{i}(k)$ is the plant output.
Remark 1. System (1) is a unified structure covering many unmanned systems, such as the motion of aircraft wings [37], robotic manipulators [38], and ship steering systems [39]. Each variable in (1) depends on the specific characteristics of the actual system.
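For concreteness, one step of model (1) with n = 2 can be sketched as follows (the gains $g_{i,m}$ and the Lipschitz nonlinearities are hypothetical stand-ins, not the systems cited in Remark 1):

```python
import math

def agent_step(x, u, g=(1.0, 1.0)):
    """One step of model (1) for n = 2: x = [x_{i,1}(k), x_{i,2}(k)]."""
    f1 = 0.1 * math.sin(x[0])            # stand-in for f_{i,1}(x_bar_{i,1}), Lipschitz
    f2 = 0.1 * math.sin(x[0] + x[1])     # stand-in for f_{i,2}(x_bar_{i,2})
    x1_next = g[0] * x[1] + f1           # x_{i,1}(k+1) = g_{i,1} x_{i,2}(k) + f_{i,1}
    x2_next = g[1] * u + f2              # x_{i,2}(k+1) = g_{i,2} u_i(k) + f_{i,2}
    return [x1_next, x2_next], x[0]      # output y_i(k) = x_{i,1}(k)

x_next, y = agent_step([0.0, 1.0], u=0.5)
```

Any bounded-slope nonlinearity could replace the sine terms; the point is only that each subsystem is driven by the next state (or the input) plus a Lipschitz perturbation.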
Definition 1. $\digamma(x):\Omega \to \mathbb{R}$ is a Lipschitz function if there exists a constant $\Psi > 0$ such that
$$|\digamma(x)-\digamma(y)| \le \Psi |x-y|$$
where $x, y \in \Omega$.
Definition 2. The solution of system (1) is uniformly ultimately bounded if, for all initial states $x(0)\in D$, there exists a number $N(\epsilon, x(0))$ such that $\|x(k)\| \le \epsilon$ holds for all $k \ge N(\epsilon, x(0))$, where $D \subset \mathbb{R}^{n}$ is a compact set and $\epsilon > 0$.
2.2. Communication graph
A directed graph is used to describe the information communication among agents. Let 𝒢 = (𝒩, ℰ, 𝒜) with a set of vertices 𝒩 = {1, …, N}, a set of directed channels ℰ, and an adjacency matrix 𝒜 = (a_{i, j}). If information is transmitted from vertex j to vertex i, then (j, i) is a graph edge (a directed channel) with weight a_{i, j}; we have a_{i, j} > 0 if and only if (j, i) ∈ ℰ, and a_{i, j} = 0 otherwise. The set of neighbors of vertex i is 𝒩_{i} = {j | (j, i) ∈ ℰ}. The in-degree matrix is D = diag(d_{1}, …, d_{N}) with $d_{i}=\sum_{j=1}^{N} a_{i,j}$. The Laplacian matrix of 𝒢 is ℒ = D − 𝒜, and ℒ1_{N × 1} = 0 with 1_{N × 1} = [1, 1, …, 1]^{T}.
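The construction of D and ℒ can be illustrated with a short sketch (plain Python; the 3-agent ring graph and unit edge weights are hypothetical illustration values):

```python
# Sketch: in-degree matrix D and Laplacian L = D - A for a hypothetical
# 3-agent directed ring (1 -> 2 -> 3 -> 1) with unit weights a_{i,j}.

def laplacian(A):
    """Return L = D - A, where D = diag of row sums of A (in-degrees d_i)."""
    n = len(A)
    d = [sum(row) for row in A]
    return [[(d[i] if i == j else 0.0) - A[i][j]
             for j in range(n)] for i in range(n)]

# a_{i,j} > 0 iff agent i receives information from agent j
A = [[0.0, 0.0, 1.0],   # agent 1 hears agent 3
     [1.0, 0.0, 0.0],   # agent 2 hears agent 1
     [0.0, 1.0, 0.0]]   # agent 3 hears agent 2

L = laplacian(A)
row_sums = [sum(row) for row in L]   # each row sums to 0, i.e. L 1 = 0
```

The zero row sums verify the stated property ℒ1_{N × 1} = 0 for this example.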
Assumption 1. A directed graph is connected if there exists a path, formed by a series of edges, between any two vertices. For the purposes of this study, the directed graph 𝒢 is assumed to be connected.
2.3. Encodingdecoding scheme
This subsection introduces the designed codec, intended for later use in the controller design. Its purpose is to employ the quantizer to reduce the information transfer between agents and to address the issue of insufficient bandwidth. Specifically, in this scheme, each transmission channel encodes the sender's state value as a data point prior to transmission. Subsequently, the receiver employs a decoder to estimate the sender's state after receiving the data. The encoder $E_{i}$ of agent i is expressed as follows:
$$\begin{cases} \xi_{i,r}(0) = 0,\\ \xi_{i,r}(k) = \xi_{i,r}(k-1) + s_{i,r}^{a}(k),\\ s_{i,r}^{a}(k) = q\left(x_{i,r}(k) - \xi_{i,r}(k-1)\right), \end{cases}$$(2)
where $\xi_{i,r}(k)$ (r = 1, …, n) are the internal states of $E_{i}$, $x_{i,r}(k)$ is the input, and $s_{i,r}^{a}(k)$ is the output, which is transmitted to the neighbors. Here, q is a quantizer, expressed as follows
$$q(\mu)=\begin{cases} q_{l}(\zeta_{th}) + \left\lfloor \dfrac{\mu-\zeta_{th}}{h} + o \right\rfloor h, & \mu \ge \zeta_{th},\\[4pt] q_{l}(\mu), & \mu < \zeta_{th}, \end{cases}$$(3)
where $\zeta_{th} > 0$ is a specified constant serving as the threshold for switching between the logarithmic and uniform quantizers. The uniform quantization step is characterized by the parameter $h = q_{l}(\zeta_{th}) - \zeta_{th}$, where $q_{l}(\mu)$ denotes the output of the logarithmic quantizer, expressed below
$$q_{l}(\mu)=\begin{cases} \zeta_{\iota}\,\mathrm{sgn}(\mu), & \dfrac{\zeta_{\iota}}{1+\epsilon} < |\mu| \le \dfrac{\zeta_{\iota}}{1-\epsilon},\\[4pt] 0, & |\mu| \le \dfrac{\zeta_{\min}}{1+\epsilon}, \end{cases}$$(4)
with $\frac{\zeta_{\min}}{1+\epsilon}>0$ determining the dead-zone size of $q_{l}(\mu)$, $\zeta_{\iota} = \kappa^{1-\iota}\,\zeta_{\min}$ ($\iota = 1, 2, \cdots$), and $\kappa =\frac{1-\epsilon}{1+\epsilon}$ with $0 < \epsilon < 1$. Additionally, the variable o is defined as o = 1 if $q_{l}(\zeta_{th}) < \zeta_{th}$ and o = 0 otherwise. The quantization error $|q(\mu)-\mu|$ is bounded by $\max\{\frac{\epsilon-\epsilon^{2}}{1+\epsilon}\zeta_{th},\,h\}$.
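A minimal numerical sketch of the uniform-logarithmic quantizer (3)-(4) follows. It assumes that below the threshold the logarithmic quantizer $q_l(\mu)$ is applied, and the parameter values ($\zeta_{\min}$, $\epsilon$, $\zeta_{th}$) are hypothetical choices for which $q_l(\zeta_{th}) > \zeta_{th}$, so the step h is positive:

```python
import math

def q_log(mu, zeta_min=1.0, eps=1/3):
    """Logarithmic quantizer (4): levels zeta_iota = kappa**(1-iota) * zeta_min."""
    if abs(mu) <= zeta_min / (1 + eps):        # dead zone -> 0
        return 0.0
    kappa = (1 - eps) / (1 + eps)
    zeta = zeta_min
    while abs(mu) > zeta / (1 - eps):          # climb levels until mu is in a band
        zeta /= kappa
    return math.copysign(zeta, mu)             # zeta_iota * sgn(mu)

def q(mu, zeta_th=1.75, zeta_min=1.0, eps=1/3):
    """Uniform-logarithmic quantizer (3): uniform above zeta_th, logarithmic below."""
    if mu >= zeta_th:
        ql_th = q_log(zeta_th, zeta_min, eps)
        h = ql_th - zeta_th                    # uniform step (positive for these values)
        o = 1 if ql_th < zeta_th else 0
        return ql_th + math.floor((mu - zeta_th) / h + o) * h
    return q_log(mu, zeta_min, eps)
```

In practice only the level index would be transmitted; the sketch returns the reconstructed value directly so the error bound can be checked.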
Remark 2. The logarithmic quantizer yields small quantization errors when the signal amplitude is small. However, as the signal amplitude increases, its quantization levels become coarser. To address this issue, a logarithmic-uniform quantizer is employed, which aims to minimize the average communication rate over the whole operating range.
Assuming that the communication channel (j, i) between agents may be subject to an FDI attack, $s_{j,r}^{i}(k)=s_{j,r}^{a}(k)+\delta_{j}^{i}(k)$ is the signal received by agent i from its neighbor j after the FDI attack. The decoder $D_{j}^{i}$ in agent i is designed as follows
$$\begin{cases} \xi_{j,r}^{i}(0) = 0,\\ \xi_{j,r}^{i}(k) = \xi_{j,r}^{i}(k-1) + s_{j,r}^{i}(k) - \hat{\delta}_{j}^{i}(k), \end{cases}$$(5)
where $\hat{\delta}_{j}^{i}(k)=\hat{\delta}_{j}^{i}(k-1)-L_{i}\left(K_{i}z_{i,1}(k-1)+\hat{\delta}_{j}^{i}(k-1)\right)$ is the estimate of the false signal $\delta_{j}^{i}(k)$, with $L_{i}$ and $K_{i}$ being design parameters and $z_{i,1}$ defined in (6), and $\xi_{j,r}^{i}(k)$ (r = 1, …, n) are the outputs of $D_{j}^{i}$. In particular, when the communication is attack-free, one has $s_{j,r}^{i}(k)=s_{j,r}^{a}(k)$ and $\hat{\delta}_{j}^{i}(k)=0$.
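The encoder (2), decoder (5), and the adaptive compensation law can be sketched for a single channel as follows (a plain rounding quantizer stands in for q, and the values of L_i, K_i, and the signals are hypothetical illustration choices):

```python
class Encoder:
    """Encoder (2): transmits s^a(k) = q(x(k) - xi(k-1)), then xi(k) = xi(k-1) + s^a(k)."""
    def __init__(self, q):
        self.q = q
        self.xi = 0.0                      # internal state xi_{i,r}

    def step(self, x):
        s = self.q(x - self.xi)
        self.xi += s
        return s

class Decoder:
    """Decoder (5) with the adaptive FDI-compensation term delta_hat."""
    def __init__(self, L, K):
        self.L, self.K = L, K
        self.xi = 0.0                      # decoder state xi^i_{j,r}
        self.delta_hat = 0.0               # estimate of the injected false signal

    def step(self, s_received, z1_prev):
        # delta_hat(k) = delta_hat(k-1) - L_i*(K_i*z_{i,1}(k-1) + delta_hat(k-1))
        self.delta_hat -= self.L * (self.K * z1_prev + self.delta_hat)
        self.xi += s_received - self.delta_hat
        return self.xi

enc = Encoder(q=lambda v: round(v))        # unit uniform quantizer stand-in
dec = Decoder(L=0.5, K=0.0)                # K = 0 isolates the decoder dynamics

x_true = 3.2                               # attack-free transmission: delta = 0
s = enc.step(x_true)
est = dec.step(s + 0.0, z1_prev=0.0)       # decoder estimate of the sender's state
```

In the attack-free case the compensation term stays at zero and the decoder tracks the sender up to the quantization error; under an attack, the consensus error $z_{i,1}$ feeds back through the adaptive law to drive delta_hat toward the injected signal.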
Lemma 1. Considering the unknown nonlinear MUS (1), the encoding scheme (2) can estimate the states $x_{i,r}$.
Proof. Denote $e_{i,r}(k)=x_{i,r}(k)-\xi_{i,r}(k)$, which can be rewritten as
$$e_{i,r}(k) = \left(x_{i,r}(k)-\xi_{i,r}(k-1)\right) - s_{i,r}^{a}(k) = -\Delta_{i,r}(k)$$
where $\Delta_{i,r} = q(\mu)-\mu$ is the quantization error. Thus, the errors $e_{i,r}(k)$ are bounded by $\overline{\Delta}_{i,r}=\max\{\frac{\epsilon-\epsilon^{2}}{1+\epsilon}\zeta_{th},\,h\}$.▫
Remark 3. Unlike observers that are utilized for state estimation or observation within systems, encoders and decoders primarily handle the encoding and decoding of signals. An encoder/decoder constitutes a system or algorithm used for the encoding and decoding of signals or data. The encoder transforms the input signal into a designated encoding format, while the decoder reverts the encoded signal back to its original form. Their advantage lies in conserving communication bandwidth.
Assumption 2. The false signal $\delta_{j}^{i}(k)$ is bounded, that is, $|\delta_{j}^{i}(k)|\le \overline{\delta}_{j}^{i}$.
3. Distributed statefeedback controller design
This section outlines the structure of a distributed controller based on a codec scheme within a backstepping framework. The following error variables are established
$$z_{i,1}(k) = \sum_{j=1}^{N} a_{i,j}\left(\xi_{i,1}(k) - \xi_{j,1}^{i}(k)\right)$$(6) $$z_{i,r}(k) = \xi_{i,r}(k) - \alpha_{i,r-1}(k)$$(7)
where α_{i, r − 1}(r = 2, …, n) represent the intermediate variables to be designed.
Step 1. According to (6), $z_{i,1}(k+1)$ can be computed as
$$\begin{aligned} z_{i,1}(k+1) = \sum_{j=1}^{N} a_{i,j}\big[&\Delta_{i,1}^{j}(k+1)+g_{i,1}\left(e_{i,2}(k)+z_{i,2}(k)+\alpha_{i,1}(k)\right)\\ &+\digamma_{i,1}(x_{i,1})-\digamma_{i,1}(\xi_{i,1})+\digamma_{i,1}(\xi_{i,1})-\overline{\digamma}_{j,1}(\chi_{j,1})\big] \end{aligned}$$(8)
where $\xi_{j,1}(k+1)=\overline{\digamma}_{j,1}(\chi_{j,1})$, $\overline{\digamma}_{j,1}(\chi_{j,1}(k))$ is a nonlinear mapping, and $\chi_{j,1}=[\xi_{j,1}(k),s_{j}^{i}(k),\hat{\delta}_{j}^{i}(k)]^{T}$. The desired feedback control is established and approximated using a neural network (NN) [40] as follows
$$\alpha_{i,1}^{\ast}(k) = \frac{1}{g_{i,1}}\left[-\digamma_{i,1}(\xi_{i,1})+\overline{\digamma}_{j,1}(\chi_{j,1})\right] = w_{i,1}^{T}\varphi_{i,1}\left(\vartheta_{i,1}^{T}X_{i,1}\right)+\epsilon_{i,1}(k)$$(9)
where $w_{i,1}$ and $\vartheta_{i,1}$ are the output-layer and hidden-layer weights, $\varphi_{i,1}$ is the hidden-layer activation function, $X_{i,1} = [\xi_{i,1}(k),\chi_{j,1}(k)]^{T}$ is the NN input, and the approximation error satisfies $|\epsilon_{i,1}(k)|\le \overline{\epsilon}_{i,1}$. Noting that
$$|\digamma_{i,1}(x_{i,1})-\digamma_{i,1}(\xi_{i,1})| \le \Psi_{i,1}|x_{i,1}-\xi_{i,1}| \le \Psi_{i,1}|e_{i,1}(k)|.$$(10)
Choose suitable positive parameters $\Psi_{i,1}$ and $\overline{m}_{i,1}$ such that $|\Delta_{i,1}^{j}(k+1)|+g_{i,1}|e_{i,2}(k)|+\Psi_{i,1}|e_{i,1}(k)|\le \overline{m}_{i,1}\overline{\Delta}_{i}$ holds. Design $\alpha_{i,1}(k)$ and $\hat{w}_{i,1}(k+1)$ as follows
$$\alpha_{i,1}(k) = \hat{w}_{i,1}^{T}(k)\varphi_{i,1}(k)$$(11) $$\hat{w}_{i,1}(k+1) = \hat{w}_{i,1}(k) - l_{i,1}\varphi_{i,1}(k)\left(k_{i,1}z_{i,1}(k)+\hat{w}_{i,1}^{T}(k)\varphi_{i,1}(k)\right)$$(12)
where l_{i, 1} and k_{i, 1} are positive parameters. Then, one can obtain
$$z_{i,1}(k+1) = \sum_{j=1}^{N} a_{i,j}g_{i,1}\left[z_{i,2}(k)+\tilde{w}_{i,1}^{T}(k)\varphi_{i,1}(k)+\epsilon_{i,1}(k)\right]+d_{i}\overline{m}_{i,1}\overline{\Delta}_{i}$$(13)
where $\tilde{w}_{i,1}(k)=\hat{w}_{i,1}(k)-w_{i,1}$ is the weight approximation error.
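The step-1 pair (11)-(12), i.e. the virtual control and the discrete adaptive law for the NN output weights, can be sketched numerically; the Gaussian radial basis functions, their centers, and the gains $l_{i,1}$, $k_{i,1}$ below are hypothetical illustration choices:

```python
import math

def phi(X, centers, width=1.0):
    """Gaussian RBF vector phi_{i,1}(X) with hypothetical centers."""
    return [math.exp(-sum((x - c) ** 2 for x, c in zip(X, cen)) / width ** 2)
            for cen in centers]

def step1_update(w_hat, X, z1, l1=0.1, k1=0.05,
                 centers=((0.0, 0.0), (1.0, 1.0))):
    p = phi(X, centers)
    # (11): virtual control alpha_{i,1}(k) = w_hat^T phi
    alpha = sum(w * pj for w, pj in zip(w_hat, p))
    # (12): w_hat(k+1) = w_hat(k) - l1 * phi * (k1*z_{i,1}(k) + w_hat^T phi)
    corr = k1 * z1 + alpha
    w_next = [w - l1 * pj * corr for w, pj in zip(w_hat, p)]
    return alpha, w_next

alpha, w_next = step1_update([0.2, -0.1], X=[0.5, 0.5], z1=1.0)
```

Note the correction term couples the consensus error $z_{i,1}$ with the controller output itself, which is what keeps the weight estimates bounded in the later analysis.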
Step ϒ (ϒ = 2, …, n − 1). Based on $z_{i,\Upsilon}(k)=\xi_{i,\Upsilon}(k)-\alpha_{i,\Upsilon-1}(k)$, $z_{i,\Upsilon}(k+1)$ is computed as
$$\begin{aligned} z_{i,\Upsilon}(k+1) = {}&\Delta_{i,\Upsilon}(k+1)+g_{i,\Upsilon}\left[e_{i,\Upsilon+1}(k)+z_{i,\Upsilon+1}(k)+\alpha_{i,\Upsilon}(k)\right]\\ &+\digamma_{i,\Upsilon}(\overline{x}_{i,\Upsilon})-\digamma_{i,\Upsilon}(\overline{\xi}_{i,\Upsilon})+\digamma_{i,\Upsilon}(\overline{\xi}_{i,\Upsilon})-\overline{\digamma}_{i,\Upsilon}(\alpha_{i,\Upsilon-1}(k)) \end{aligned}$$(14)
where $\alpha_{i,\Upsilon-1}(k+1)=\overline{\digamma}_{i,\Upsilon}(\alpha_{i,\Upsilon-1}(k))$ and $\overline{\digamma}_{i,\Upsilon}(\alpha_{i,\Upsilon-1}(k))$ is a nonlinear mapping.
The desired feedback control is established and approximated by an NN as follows
$$\alpha_{i,\Upsilon}^{\ast}(k) = \frac{1}{g_{i,\Upsilon}}\left[-\digamma_{i,\Upsilon}(\overline{\xi}_{i,\Upsilon})+\overline{\digamma}_{i,\Upsilon}(\alpha_{i,\Upsilon-1}(k))\right] = w_{i,\Upsilon}^{T}\varphi_{i,\Upsilon}\left(\vartheta_{i,\Upsilon}^{T}X_{i,\Upsilon}\right)+\epsilon_{i,\Upsilon}(k)$$(15)
where $X_{i,\Upsilon}=[\overline{\xi}_{i,\Upsilon}^{T}(k),\hat{x}_{j,1}(k),\overline{\hat{w}}_{i,\Upsilon-1}^{T}(k)]^{T}$ and $|\epsilon_{i,\Upsilon}(k)|\le \overline{\epsilon}_{i,\Upsilon}$.
Likewise,
$$|\digamma_{i,\Upsilon}(\overline{x}_{i,\Upsilon})-\digamma_{i,\Upsilon}(\overline{\xi}_{i,\Upsilon})| \le \Psi_{i,\Upsilon}\sum_{o=1}^{\Upsilon}|x_{i,o}-\xi_{i,o}| \le \Psi_{i,\Upsilon}\sum_{o=1}^{\Upsilon}|e_{i,o}(k)|$$(16)
and $|\Delta_{i,\Upsilon}(k+1)|+g_{i,\Upsilon}|e_{i,\Upsilon+1}(k)|+\Psi_{i,\Upsilon}\sum_{o=1}^{\Upsilon}|e_{i,o}(k)|\le \overline{m}_{i,\Upsilon}\overline{\Delta}_{i}$, where $\Psi_{i,\Upsilon}$ and $\overline{m}_{i,\Upsilon}$ are positive constants.
Construct the virtual controller and adaptive law as follows
$$\alpha_{i,\Upsilon}(k) = \hat{w}_{i,\Upsilon}^{T}(k)\varphi_{i,\Upsilon}(k)$$(17) $$\hat{w}_{i,\Upsilon}(k+1) = \hat{w}_{i,\Upsilon}(k) - l_{i,\Upsilon}\varphi_{i,\Upsilon}(k)\left(k_{i,\Upsilon}z_{i,\Upsilon}(k)+\hat{w}_{i,\Upsilon}^{T}(k)\varphi_{i,\Upsilon}(k)\right)$$(18)
where l_{i, Υ} and k_{i, Υ} are positive constants.
Thus, one has
$$z_{i,\Upsilon}(k+1) = g_{i,\Upsilon}\left[z_{i,\Upsilon+1}(k)+\tilde{w}_{i,\Upsilon}^{T}(k)\varphi_{i,\Upsilon}(k)+\epsilon_{i,\Upsilon}(k)\right]+\overline{m}_{i,\Upsilon}\overline{\Delta}_{i}$$(19)
where $\tilde{w}_{i,\Upsilon}(k)=\hat{w}_{i,\Upsilon}(k)-w_{i,\Upsilon}$.
Step n. Based on $z_{i,n}(k)=\xi_{i,n}(k)-\alpha_{i,n-1}(k)$, we calculate $z_{i,n}(k+1)$ as follows
$$\begin{aligned} z_{i,n}(k+1) = {}&\Delta_{i,n}(k+1)+g_{i,n}u_{i}(k)-\overline{\digamma}_{i,n}(\alpha_{i,n-1}(k))\\ &+\digamma_{i,n}(\overline{x}_{i,n})-\digamma_{i,n}(\overline{\xi}_{i,n})+\digamma_{i,n}(\overline{\xi}_{i,n}) \end{aligned}$$(20)
where $\alpha_{i,n-1}(k+1)=\overline{\digamma}_{i,n}(\alpha_{i,n-1}(k))$ and $\overline{\digamma}_{i,n}(\alpha_{i,n-1}(k))$ is a nonlinear mapping.
An ideal feedback control is defined and approximated by an NN as
$$u_{i}^{\ast}(k) = \frac{1}{g_{i,n}}\left[-\digamma_{i,n}(\overline{\xi}_{i,n})+\overline{\digamma}_{i,n}(\alpha_{i,n-1}(k))\right] = w_{i,n}^{T}\varphi_{i,n}\left(\vartheta_{i,n}^{T}X_{i,n}\right)+\epsilon_{i,n}(k)$$(21)
where $X_{i,n}=[\overline{\xi}_{i,n}^{T}(k),\hat{x}_{j,1}(k),\overline{\hat{w}}_{i,n-1}^{T}(k)]^{T}$ and $|\epsilon_{i,n}(k)|\le \overline{\epsilon}_{i,n}$.
Observing
$$|\digamma_{i,n}(\overline{x}_{i,n})-\digamma_{i,n}(\overline{\xi}_{i,n})| \le \Psi_{i,n}\sum_{\beta=1}^{n}|x_{i,\beta}-\xi_{i,\beta}| \le \Psi_{i,n}\sum_{\beta=1}^{n}|e_{i,\beta}(k)|$$(22)
and choosing positive constants $\Psi_{i,n}$ and $\overline{m}_{i,n}$, it follows that $|\Delta_{i,n}(k+1)|+\Psi_{i,n}\sum_{\beta=1}^{n}|e_{i,\beta}(k)|\le \overline{m}_{i,n}\overline{\Delta}_{i}$ holds.
The actual controller and adaptive law are established as follows
$$u_{i}(k) = \hat{w}_{i,n}^{T}(k)\varphi_{i,n}(k)$$(23) $$\hat{w}_{i,n}(k+1) = \hat{w}_{i,n}(k) - l_{i,n}\varphi_{i,n}(k)\left(k_{i,n}z_{i,n}(k)+\hat{w}_{i,n}^{T}(k)\varphi_{i,n}(k)\right)$$(24)
where parameters l_{i, n} > 0 and k_{i, n} > 0.
Thus,
$$z_{i,n}(k+1) = g_{i,n}\left[\tilde{w}_{i,n}^{T}(k)\varphi_{i,n}(k)+\epsilon_{i,n}(k)\right]+\overline{m}_{i,n}\overline{\Delta}_{i}$$(25)
holds, where $\tilde{w}_{i,n}(k)=\hat{w}_{i,n}(k)-w_{i,n}$.
The block diagram of the design process is shown in Figure 1.
Figure 1. Block diagram of design process 
4. Stability analysis
Theorem 1. For a multi-agent system composed of N agents with the unknown nonlinear dynamics (1), if the conditions
$$\begin{aligned} &0<r_{i,m}<2,\quad 0<l_{i,m}<1,\quad m=1,\dots,n,\quad 0<L_{i}<1,\\ &k_{i,1} < \sqrt{\frac{r_{i,1}}{16\overline{g}_{i,1}^{2}}},\\ &k_{i,o} < \sqrt{\frac{r_{i,o}}{16\overline{g}_{i,o}^{2}}-\frac{r_{i,o-1}}{4}},\quad o=2,\dots,n-1,\\ &k_{i,n} < \sqrt{\frac{r_{i,n}}{12\overline{g}_{i,n}^{2}}-\frac{r_{i,n-1}}{4}} \end{aligned}$$
are satisfied, the designed distributed control scheme (23) ensures that the consensus error and all the signals are semiglobally bounded.
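Before running the controller, these gain conditions can be checked numerically. The sketch below encodes the inequalities as stated (with the $K_i$ term omitted, as in the theorem); the values of $r_{i,m}$, $\overline{g}_{i,m}$, $l_{i,m}$, $L_i$, $k_{i,m}$ for n = 3 are hypothetical:

```python
import math

def gains_ok(r, g_bar, l, L_i, k):
    """Check the theorem's gain conditions for one agent (n = len(r))."""
    n = len(r)
    if not all(0 < ri < 2 for ri in r):
        return False
    if not all(0 < li < 1 for li in l):
        return False
    if not 0 < L_i < 1:
        return False
    if k[0] >= math.sqrt(r[0] / (16 * g_bar[0] ** 2)):
        return False
    for o in range(1, n - 1):            # paper's o = 2, ..., n-1
        bound = r[o] / (16 * g_bar[o] ** 2) - r[o - 1] / 4
        if bound <= 0 or k[o] >= math.sqrt(bound):
            return False
    bound_n = r[n - 1] / (12 * g_bar[n - 1] ** 2) - r[n - 2] / 4
    return bound_n > 0 and k[n - 1] < math.sqrt(bound_n)

ok = gains_ok(r=[0.1, 0.5, 1.8], g_bar=[1.0, 1.0, 1.0],
              l=[0.5, 0.5, 0.5], L_i=0.5, k=[0.05, 0.05, 0.05])
```

Note that the nested bounds force the $r_{i,o}$ sequence to grow across steps and keep the gains $k_{i,o}$ small.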
Proof. Adopt the Lyapunov function
$$V_{i}(k) = \frac{r_{i,1}}{8\overline{g}_{i,1}^{2}}z_{i,1}^{2}(k)+\sum_{o=2}^{n-1}\frac{r_{i,o}d_{i}^{2}}{8\overline{g}_{i,o}^{2}}z_{i,o}^{2}(k)+\frac{d_{i}^{2}}{6\overline{g}_{i,n}^{2}}z_{i,n}^{2}(k)+\sum_{o=1}^{n}\frac{d_{i}^{2}}{l_{i,o}}\tilde{w}_{i,o}^{T}(k)\tilde{w}_{i,o}(k)+\sum_{j\in\mathcal{N}_{i}}\frac{1}{L_{i}}\left(\tilde{\delta}_{j}^{i}\right)^{2}(k).$$(26)
The difference of (26) can be calculated as
$$\begin{aligned} \Delta V_{i} \le {}& -\frac{r_{i,1}}{8\overline{g}_{i,1}^{2}}z_{i,1}^{2}(k)-\sum_{o=2}^{n-1}\left(\frac{r_{i,o}}{8\overline{g}_{i,o}^{2}}-\frac{r_{i,o-1}}{2}\right)d_{i}^{2}z_{i,o}^{2}(k)-\left(\frac{r_{i,n}}{6\overline{g}_{i,n}^{2}}-\frac{r_{i,n-1}}{2}\right)d_{i}^{2}z_{i,n}^{2}(k)\\ &+\sum_{o=1}^{n}\left[\frac{r_{i,o}}{2\overline{g}_{i,o}^{2}}\left(d_{i}\overline{m}_{i,o}\overline{\Delta}_{i}\right)^{2}+\frac{r_{i,o}}{2}\left(d_{i}\overline{\epsilon}_{i,o}\right)^{2}+\frac{r_{i,o}}{2}\left(d_{i}\tilde{w}_{i,o}^{T}(k)\varphi_{i,o}(X_{i,o})\right)^{2}\right]\\ &+\sum_{o=1}^{n}\left[-2d_{i}^{2}\tilde{w}_{i,o}^{T}(k)\varphi_{i,o}\left(k_{i,o}z_{i,o}(k)+\hat{w}_{i,o}^{T}(k)\varphi_{i,o}\right)+d_{i}^{2}l_{i,o}\|\varphi_{i,o}\|^{2}\left(k_{i,o}z_{i,o}(k)+\hat{w}_{i,o}^{T}(k)\varphi_{i,o}\right)^{2}\right]\\ &-\sum_{j\in\mathcal{N}_{i}}2\tilde{\delta}_{j}^{i}\left(K_{i}z_{i,1}(k)+\hat{\delta}_{j}^{i}(k)\right)+\sum_{j\in\mathcal{N}_{i}}L_{i}\left(K_{i}z_{i,1}(k)+\hat{\delta}_{j}^{i}(k)\right)^{2}+\sum_{j\in\mathcal{N}_{i}}\frac{4}{L_{i}}\left(\overline{\delta}_{j}^{i}\right)^{2}. \end{aligned}$$(27)
By choosing $0 < l_{i,o} < 1$ and $0 < L_{i} < 1$, the following inequality holds
$$\begin{aligned} &\sum_{o=1}^{n}\left[-2d_{i}^{2}\tilde{w}_{i,o}^{T}(k)\varphi_{i,o}\left(k_{i,o}z_{i,o}(k)+\hat{w}_{i,o}^{T}(k)\varphi_{i,o}\right)+d_{i}^{2}l_{i,o}\|\varphi_{i,o}\|^{2}\left(k_{i,o}z_{i,o}(k)+\hat{w}_{i,o}^{T}(k)\varphi_{i,o}\right)^{2}\right]\\ &\quad=\sum_{o=1}^{n}\Big[-2d_{i}^{2}\left(\tilde{w}_{i,o}^{T}(k)\varphi_{i,o}\right)^{2}-2d_{i}^{2}\tilde{w}_{i,o}^{T}(k)\varphi_{i,o}\left(k_{i,o}z_{i,o}(k)+w_{i,o}^{T}\varphi_{i,o}\right)\\ &\qquad+d_{i}^{2}\|\varphi_{i,o}\|^{2}\left(k_{i,o}z_{i,o}(k)+\tilde{w}_{i,o}^{T}(k)\varphi_{i,o}+w_{i,o}^{T}\varphi_{i,o}\right)^{2}\\ &\qquad-(1-l_{i,o})d_{i}^{2}\|\varphi_{i,o}\|^{2}\left(k_{i,o}z_{i,o}(k)+\tilde{w}_{i,o}^{T}(k)\varphi_{i,o}+w_{i,o}^{T}\varphi_{i,o}\right)^{2}\Big]\\ &\quad\le\sum_{o=1}^{n}\left[-d_{i}^{2}\left(\tilde{w}_{i,o}^{T}(k)\varphi_{i,o}\right)^{2}+2d_{i}^{2}\overline{w}_{i,o}^{2}+2d_{i}^{2}k_{i,o}^{2}z_{i,o}^{2}(k)\right]. \end{aligned}$$(28)
Similarly,
$$-2\tilde{\delta}_{j}^{i}\left(K_{i}z_{i,1}(k)+\hat{\delta}_{j}^{i}(k)\right)+L_{i}\left(K_{i}z_{i,1}(k)+\hat{\delta}_{j}^{i}(k)\right)^{2} \le -\left(\tilde{\delta}_{j}^{i}\right)^{2}+2K_{i}^{2}z_{i,1}^{2}+2\left(\overline{\delta}_{j}^{i}\right)^{2}.$$(29)
Furthermore, ΔV_{i} can be rewritten as
$$\begin{aligned} \Delta V_{i} \le {}& -\left(\frac{r_{i,1}}{8\overline{g}_{i,1}^{2}}-2k_{i,1}^{2}-2K_{i}^{2}\right)z_{i,1}^{2}(k)-\sum_{o=2}^{n-1}\left(\frac{r_{i,o}d_{i}^{2}}{8\overline{g}_{i,o}^{2}}-\frac{r_{i,o-1}d_{i}^{2}}{2}-2k_{i,o}^{2}d_{i}^{2}\right)z_{i,o}^{2}(k)\\ &-\left(\frac{r_{i,n}d_{i}^{2}}{6}-\frac{r_{i,n-1}d_{i}^{2}}{2}-2k_{i,n}^{2}d_{i}^{2}\right)z_{i,n}^{2}(k)+\sum_{o=1}^{n}\left[\frac{r_{i,o}}{2}\left(d_{i}\overline{m}_{i,o}\overline{\Delta}_{i}\right)^{2}+\frac{r_{i,o}}{2}\left(d_{i}\overline{\epsilon}_{i,o}\right)^{2}+2d_{i}^{2}\overline{w}_{i,o}^{2}\right]\\ &-\sum_{o=1}^{n}\left(1-\frac{r_{i,o}}{2}\right)d_{i}^{2}\left(\tilde{w}_{i,o}^{T}(k)\varphi_{i,o}\right)^{2}-\sum_{j\in\mathcal{N}_{i}}\left(\tilde{\delta}_{j}^{i}(k)\right)^{2}+\sum_{j\in\mathcal{N}_{i}}\left(2+\frac{4}{L_{i}}\right)\left(\overline{\delta}_{j}^{i}\right)^{2}. \end{aligned}$$(30)
Based on (30), ΔV_{i}(k)≤0 holds whenever
$$\big|z_{i,1}(k)\big|>\sqrt{\frac{\mathcal{M}_i}{\frac{r_{i,1}}{8\bar{g}_{i,1}^{2}}-2k_{i,1}^{2}-2K_i^{2}}}$$(31)
or
$$\big|z_{i,o}(k)\big|>\sqrt{\frac{\mathcal{M}_i}{\frac{r_{i,o}d_i^{2}}{8\bar{g}_{i,o}^{2}}-\frac{r_{i,o-1}d_i^{2}}{2}-2k_{i,o}^{2}d_i^{2}}},\quad o=2,\dots,n-1,$$(32)
or
$$\big|z_{i,n}(k)\big|>\sqrt{\frac{\mathcal{M}_i}{\frac{r_{i,n}d_i^{2}}{6}-\frac{r_{i,n-1}d_i^{2}}{2}-2k_{i,n}^{2}d_i^{2}}}$$(33)
where
$$\mathcal{M}_i=\sum_{o=1}^{n}\Big[\frac{r_{i,o}}{2}\big(d_i\bar{m}_{i,o}\bar{\Delta}_i\big)^{2}+\frac{r_{i,o}}{2}\big(d_i\bar{\epsilon}_{i,o}\big)^{2}+2d_i^{2}\bar{w}_{i,o}^{2}\Big]+\sum_{j\in\mathcal{N}_i}\Big(2+\frac{4}{L_i}\Big)\big(\bar{\delta}_j^{i}\big)^{2}.$$(34)
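To see how the residual set in (31) scales with the design parameters, the lumped constant 𝓜_{i} in (34) and the resulting bound can be evaluated numerically. The snippet below is an illustrative sketch only: all parameter values are hypothetical placeholders (they are not taken from the paper's simulations), for a second-order agent (n = 2) with a single neighbor.

```python
import math

# Hypothetical parameters for one second-order agent (n = 2) with one
# neighbor; every number here is illustrative, not from the paper.
r = [0.5, 0.5]                     # r_{i,1}, r_{i,2}
g_bar = [2.0, 2.0]                 # bounds g_bar_{i,o} on the control gains
k_gain = [0.05, 0.05]              # controller gains k_{i,o}
K = 0.05                           # coupling gain K_i
d = 1.0                            # d_i
L = 0.9                            # parameter L_i
m_bar = Delta_bar = eps_bar = 0.1  # perturbation/quantization bounds
w_bar, delta_bar = 0.5, 5.0        # NN-weight and attack bounds

# M_i from (34): the lumped perturbation constant
M = sum(r[o] / 2 * (d * m_bar * Delta_bar) ** 2
        + r[o] / 2 * (d * eps_bar) ** 2
        + 2 * d ** 2 * w_bar ** 2
        for o in range(2)) + (2 + 4 / L) * delta_bar ** 2

# Denominator of (31); it must be positive for the bound to be meaningful
denom = r[0] / (8 * g_bar[0] ** 2) - 2 * k_gain[0] ** 2 - 2 * K ** 2
bound = math.sqrt(M / denom)
```

As the sketch suggests, shrinking k_{i,o} and K_{i} enlarges the denominator of (31) and hence tightens the residual set, while the attack bound enters 𝓜_{i} quadratically.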
By the Lyapunov extension theorem in [41], the preceding analysis shows that the errors z_{i, 1}(k),…,z_{i, n}(k) and the adaptive laws ${\widehat{w}}_{i,1}(k),\dots ,{\widehat{w}}_{i,n}(k)$ and ${\widehat{\delta}}_{i}(k)$ are bounded. It follows that u_{i} is bounded. Next, we derive the bound of ${\stackrel{~}{\delta}}_{j}^{i}$. According to (30), ΔV_{i}(k)≤0 holds whenever
$$\big|\tilde{\delta}_j^{i}(k)\big|>\sqrt{\mathcal{M}_i}.$$(35)
Based on the standard Lyapunov extension theorem, ${\stackrel{~}{\delta}}_{j}^{i}$ is bounded.
Defining e_{j, 𝜘}(k)=x_{j, 𝜘}(k)−ξ_{j, 𝜘}(k) (𝜘=1, …, n), the error can be rewritten as
$$\begin{aligned}e_{j,\varkappa}(k)&=\big(x_{j,\varkappa}(k)-\xi_{j,\varkappa}(k-1)\big)-s_{j,\varkappa}(k)-\hat{\delta}_j^{i}(k)\\&=\big(x_{j,\varkappa}(k)-\xi_{j,\varkappa}(k-1)\big)-s_{j,\varkappa}^{a}(k)-\tilde{\delta}_j^{i}(k)\\&=\Delta_{j,\varkappa}(k)-\tilde{\delta}_j^{i}(k)\end{aligned}$$
where ${s}_{j,\varkappa}^{a}(k)$ denotes the true (unattacked) signal. Thus, the errors e_{j, 𝜘}(k) are bounded.
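The sign conventions in this decomposition can be sanity-checked numerically. The toy snippet below assumes the conventions used in the proof: the received signal is the true signal plus the injected false datum, and the attack-estimation error satisfies ${\stackrel{~}{\delta}}_{j}^{i}={\widehat{\delta}}_{j}^{i}+\delta $. All numeric values are arbitrary illustrative choices, not simulation data.

```python
# Numerical sanity check of e = Delta - delta_tilde, under the assumed
# conventions: received s = true s_a + injected delta, and
# delta_tilde = delta_hat + delta. All numbers are arbitrary.
x, xi_prev = 1.7, 1.2          # state x_{j,k}(k) and decoder state xi_{j,k}(k-1)
s_a = 0.4                      # true (unattacked) transmitted signal s^a
delta = 0.25                   # injected false datum on the edge
delta_hat = -0.2               # adaptive compensation term in the decoder
s = s_a + delta                # signal actually received over the attacked edge

e = (x - xi_prev) - s - delta_hat       # first line of the decomposition
Delta = (x - xi_prev) - s_a             # coding/quantization residual
delta_tilde = delta_hat + delta         # attack-estimation error
assert abs(e - (Delta - delta_tilde)) < 1e-12
```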
Furthermore, one obtains
$$\begin{aligned}\lim_{k\to\infty}\big|y_i(k)-y_j(k)\big|&\le\big|z_{i,1}(k)\big|+\big|x_{i,1}(k)-\xi_{i,1}(k)\big|+\big|\xi_{j,1}(k)-x_{j,1}(k)\big|\\&\le\big|z_{i,1}(k)\big|+\big|e_{i,1}(k)\big|+\big|e_{j,1}(k)\big|\\&\le\big|z_{i,1}(k)\big|+\bar{\Delta}_i+\bar{\Delta}_j+\big|\tilde{\delta}_j^{i}(k)\big|.\end{aligned}$$
In accordance with the bounds of z_{i, 1}(k) and ${\stackrel{~}{\delta}}_{j}^{i}(k)$, the consensus errors are uniformly bounded. This completes the proof.
5. Simulation results
In this section, two simulation cases are presented to illustrate the effectiveness of the control method.

Example 1. A system comprising six single-link manipulators is utilized, with each single-link manipulator modeled according to [42]
$$\left\{\begin{aligned}\frac{\mathrm{d}\theta}{\mathrm{d}t}&=\omega\\ \frac{\mathrm{d}\omega}{\mathrm{d}t}&=G_i^{-1}\Big(u_i-MgL_i\sin(\theta)-f_d\frac{\mathrm{d}\theta}{\mathrm{d}t}\Big)\end{aligned}\right.$$
where M = 1 kg and L_{i} = (0.1i + 0.5) m. The system states are the current angle θ and the associated angular velocity ω. The moment of inertia is ${G}_{i}={(0.1i+0.5)}^{2}$ kg ⋅ m^{2}, the viscous friction coefficient is f_{d} = 2 kg ⋅ m^{2}, and the acceleration of gravity is g = 9.81 m s^{−2}. The Euler method with a sampling interval Δt = 0.1 s is used to discretize the system as follows
$$\left\{\begin{aligned} x_{i,1}(k+1)&=x_{i,1}(k)+\Delta t\,x_{i,2}(k)\\ x_{i,2}(k+1)&=G_i^{-1}\Delta t\,u_i(k)-G_i^{-1}MgL_i\Delta t\sin\big(x_{i,1}(k)\big)+\big(1-f_d G_i^{-1}\Delta t\big)x_{i,2}(k)\\ y_i(k)&=x_{i,1}(k),\quad i=1,\dots,6.\end{aligned}\right.$$
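The discretized manipulator model can be rolled out directly. The snippet below is a minimal open-loop sketch (zero control input, no codec, no attack) using only the parameter values given in the text; the function name and the open-loop setting are ours, not the paper's controller.

```python
import math

def manipulator_step(x1, x2, u, i, dt=0.1, M=1.0, g=9.81, fd=2.0):
    """One Euler step of agent i's discretized single-link manipulator."""
    Li = 0.1 * i + 0.5          # link length L_i (m)
    Gi = (0.1 * i + 0.5) ** 2   # moment of inertia G_i (kg*m^2)
    x1_next = x1 + dt * x2
    x2_next = (dt * u / Gi
               - M * g * Li * dt * math.sin(x1) / Gi
               + (1 - fd * dt / Gi) * x2)
    return x1_next, x2_next

# Open-loop rollout of agent 1 from its initial condition x_1(0) = [0.5, -8.5]
x1, x2 = 0.5, -8.5
for k in range(50):
    x1, x2 = manipulator_step(x1, x2, u=0.0, i=1)
```

With the text's values, the friction term gives a contraction factor 1 − f_d Δt/G_1 ≈ 0.44 per step, so the open-loop angular velocity stays bounded even before any control is applied.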
They are connected through a communication network whose structure is shown in Figure 2. We assume that the false signals 5sin(k) and 5cos(k) are injected into the edges (v_{1}, v_{6}) and (v_{4}, v_{3}), respectively.
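The attack model assumed here is the standard additive FDI form: on a compromised edge, the receiver sees the transmitted signal plus the false datum. A minimal sketch (the function name is ours):

```python
import math

def received_signal(s_true, k, edge):
    """Additive FDI model for Example 1: edges (v1, v6) and (v4, v3)
    carry the false signals 5*sin(k) and 5*cos(k), respectively; every
    other edge is attack-free."""
    if edge == (1, 6):
        return s_true + 5 * math.sin(k)
    if edge == (4, 3):
        return s_true + 5 * math.cos(k)
    return s_true
```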
Figure 2. Communication topology 
Choose the parameters as l_{1} = [0.1, 0.9, 0.1, 0.01, 0.1, 0.1]^{T}, l_{2} = [0.9, 0.5, 0.5, 0.5, 0.1, 0.1]^{T}, k_{1} = [0.1, 0.3, 0.5, 0.3, 0.3, 1.3]^{T}, k_{2} = [0.1, 0.5, 0.1, 0.5, 0.5, 0.5]^{T}. Set the initial values as x_{1}(0) = [0.5,−8.5]^{T}, x_{2}(0) = [−0.3,1.05]^{T}, x_{3}(0) = [1,−10.85]^{T}, x_{4}(0) = [0.2, −2.06]^{T}, x_{5}(0) = [−0.2,2.56]^{T}, x_{6}(0) = [−0.5,6.05]^{T}, ${\widehat{w}}_{11}\left(0\right)={[0.1,0.0,0.1,0.1,1]}^{T}$, ${\widehat{w}}_{21}\left(0\right)={[0.01,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{31}\left(0\right)={[0.1,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{41}\left(0\right)={\widehat{w}}_{51}\left(0\right)={\widehat{w}}_{61}\left(0\right)={[0.1,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{12}\left(0\right)={[0.1,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{22}\left(0\right)={[0.1,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{32}\left(0\right)={[0.1,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{42}\left(0\right)={\widehat{w}}_{52}\left(0\right)={\widehat{w}}_{62}\left(0\right)={[0.3,0.1,0.1,0.1,0.1]}^{T}$. Figures 3–9 show the simulation results.
Figure 3. Outputs y_{i} (i = 1, …, 6) of all agents 
Figure 4. Consensus errors y_{i} − y_{j} (i = 1, …, 6) 
Figure 5. Controller u_{i} of every agent 
Figure 6. The norms of adaptive laws w_{i, 1} (i = 1, …, 6) 
Figure 7. The norms of adaptive laws w_{i, 2} (i = 1, …, 6) 
Figure 8. The input x_{i, 1} − ξ_{i, 1} and output q(x_{i, 1} − ξ_{i, 1}) of quantizer 
Figure 9. The input x_{i, 2} − ξ_{i, 2} and output q(x_{i, 2} − ξ_{i, 2}) of quantizer 
In Figure 3, the outputs of all agents are plotted. Figure 4 shows the consensus errors. The control input u_{i} and the norms of the adaptive laws w_{i, 1} and w_{i, 2} are shown in Figures 5–7, respectively. As observed, they are all bounded. Figures 8 and 9 illustrate the inputs and outputs of the quantizers.
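The staircase behavior visible in Figures 8 and 9 is characteristic of logarithmic quantization. The sketch below is a generic nearest-level logarithmic quantizer with the usual sector bound; the paper's uniform logarithmic quantizer is defined in an earlier section and additionally combines uniform levels, so this only illustrates the logarithmic part (parameter names ρ and u₀ are ours).

```python
import math

def log_quantize(v, rho=0.5, u0=1.0):
    """Nearest-level logarithmic quantizer with levels +/- u0 * rho^j.
    Generic textbook construction satisfying the sector bound
    |q(v) - v| <= delta * |v| with delta = (1 - rho) / (1 + rho);
    the paper's uniform-logarithmic quantizer may differ in detail."""
    if v == 0.0:
        return 0.0
    s, a = math.copysign(1.0, v), abs(v)
    # a lies in (rho^(j+1), rho^j]; check neighbouring levels, keep closest
    j = math.floor(math.log(a / u0) / math.log(rho))
    q = min((u0 * rho ** c for c in (j - 1, j, j + 1)),
            key=lambda u: abs(u - a))
    return s * q

delta = (1 - 0.5) / (1 + 0.5)   # sector-bound constant for rho = 0.5
```

Choosing the nearest level in value makes the worst-case relative error occur at the midpoint between two levels, which yields exactly the sector constant δ = (1 − ρ)/(1 + ρ).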
Example 2. Next, a cooperative example consisting of six unmanned ships is given to further demonstrate the effectiveness of the designed controller. Each unmanned ship is modeled as follows [43]
$$\begin{array}{c}\hfill {T}_{i}\ddot{{\psi}_{i}}+\dot{{\psi}_{i}}+{K}_{i}{\dot{{\psi}_{i}}}^{3}={\varphi}_{i}{\nu}_{i}\end{array}$$(36)
with time constant T_{i} > 0, the Norrbin coefficient K_{i}, gain ϕ_{i}, the course angle ψ_{i}, course angle rate ${r}_{i}={\dot{\psi}}_{i}$, and commanded rudder angle ν_{i}. We define x_{i, 1} = ψ_{i} and x_{i, 2} = r_{i}, and use the Euler method with the sampling interval Δt = 0.1 s to discretize the system as
$$\left\{\begin{aligned} x_{i,1}(k+1)&=x_{i,1}(k)+\Delta t\,x_{i,2}(k)\\ x_{i,2}(k+1)&=T_i^{-1}\varphi_i\Delta t\,\nu_i(k)+\big(1-T_i^{-1}\Delta t\big)x_{i,2}(k)-T_i^{-1}K_i\Delta t\,x_{i,2}^{3}(k)\\ y_i(k)&=x_{i,1}(k),\quad i=1,\dots,6.\end{aligned}\right.$$
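As in Example 1, the discretized ship model can be rolled out open loop. The values of T_{i}, K_{i}, and ϕ_{i} below are hypothetical, since this section does not list them, and the function name is ours.

```python
import math

def ship_step(psi, r, nu, Ti=10.0, Ki=0.5, phi=1.0, dt=0.1):
    """One Euler step of the discretized Norrbin ship model (36).
    Ti, Ki and phi are hypothetical illustrative values."""
    psi_next = psi + dt * r
    r_next = (phi * dt * nu / Ti
              + (1 - dt / Ti) * r
              - Ki * dt * r ** 3 / Ti)
    return psi_next, r_next

# Open-loop rollout of ship 1 from x_1(0) = [0.5, -8.5] with zero rudder
psi, r = 0.5, -8.5
for k in range(100):
    psi, r = ship_step(psi, r, nu=0.0)
```

The cubic Norrbin term adds strong damping at large course rates, so the open-loop rate decays toward zero; the controller is only needed to shape the transient and reach consensus.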
The communication network structure is displayed in Figure 10. Assume that false signals 0.5sin(k) and 0.5cos(k) are injected into edges (v_{2}, v_{1}) and (v_{5}, v_{4}), respectively.
Figure 10. Communication topology 
Choose the parameters as l_{1} = [0.9, 0.9, 0.9, 0.01, 0.9, 0.9]^{T}, l_{2} = [0.9, 0.5, 0.5, 0.5, 0.5, 0.5]^{T}, k_{1} = [0.1, 0.1, 1.5, 0.3, 1.1, 0.1]^{T}, k_{2} = [0.1, 0.1, 0.1, 0.1, 1.5, 0.1]^{T}. Set the initial values as x_{1}(0) = [0.5,−8.5]^{T}, x_{2}(0) = [−0.3,−0.3]^{T}, x_{3}(0) = [1,−13]^{T}, x_{4}(0) = [0.2,−4.8]^{T}, x_{5}(0) = [−0.2,−0.64]^{T}, x_{6}(0) = [−0.5,2.5]^{T}, ${\widehat{w}}_{11}\left(0\right)={[0.5,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{21}\left(0\right)={[0.1,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{31}\left(0\right)={[0.1,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{51}\left(0\right)={[0.3,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{41}\left(0\right)={\widehat{w}}_{61}\left(0\right)={[0.3,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{12}\left(0\right)={[0.1,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{22}\left(0\right)={[0.1,0.1,0.1,0.1,0.1]}^{T}$, ${\widehat{w}}_{32}\left(0\right)={[0.1,0.1,0.1,0.1,-0.1]}^{T}$, ${\widehat{w}}_{42}\left(0\right)={\widehat{w}}_{52}\left(0\right)={\widehat{w}}_{62}\left(0\right)={[0.3,0.1,0.1,0.1,0.1]}^{T}$.
Figures 11–15 show the simulation results. In Figure 11, the outputs of all agents are plotted. Figure 12 shows the consensus errors. The control input u_{i} and the norms of the adaptive laws w_{i, 1} and w_{i, 2} are given in Figures 13–15, respectively. As observed, they are all bounded.
Figure 11. Outputs y_{i} (i = 1, …, 6) of all agents 
Figure 12. Consensus errors y_{i} − y_{j} (i = 1, …, 6) 
Figure 13. Controller u_{i} of every agent 
Figure 14. The norms of adaptive laws w_{i, 1} (i = 1, …, 6) 
Figure 15. The norms of adaptive laws w_{i, 2} (i = 1, …, 6) 
Existing quantization-based coding and decoding techniques are primarily suited to linear multi-agent systems [19, 20] or to the controller-to-actuator segment of a single agent [35, 36]; they rarely address high-order unknown nonlinear multi-agent systems. This gap stems from the inherent difficulty of applying existing quantization-based codec techniques to systems with high-order nonlinearities and unknown dynamics. Moreover, in the presence of cyber attacks, these conventional techniques cannot guarantee consensus. Hence, the challenge lies in adapting quantization-based codec techniques to high-order unknown nonlinear MUSs under FDI attacks.
6. Conclusion
This study addressed the distributed cooperative control problem for unknown nonlinear discrete-time MUSs with limited communication resources within a backstepping framework. Communication between agents is facilitated by a quantizer-based codec mechanism, which copes with limited communication bandwidth and ensures efficient transmission. Furthermore, FDI attacks on the communication channels among agents are considered, and an adaptive method is employed to compensate for them. Using the Lyapunov analysis technique, the developed control strategy guarantees the boundedness of all signals under appropriate parameter selection. Finally, examples are provided to illustrate the feasibility and effectiveness of the proposed scheme. In future work, the study will be extended to distributed cooperative control of nonlinear continuous-time MUSs using a finite-level quantizer-based codec mechanism.
Conflict of Interest
The authors declare no conflict of interest.
Data Availability
No data are associated with this article.
Authors’ Contributions
Yanhui Zhang designed the control scheme and wrote this paper. Di Mei discussed the recent development. Yong Xu and Lihua Dou supervised and corrected typos in the paper.
Acknowledgments
We thank the EditorinChief, the Associate Editor, and the anonymous reviewers for their insightful and constructive comments.
Funding
This work was supported in part by the National Natural Science Foundation of China under Grant U20B2073, Grant 62103047, Beijing Institute of Technology Research Fund Program for Young Scholars, and Young Elite Scientists Sponsorship Program by BAST (Grant No. BYESS2023365).
References
 Watts AC, Ambrosia VG and Hinkley EA. Unmanned aircraft systems in remote sensing and scientific research: classification and considerations of use. Remote Sens 2012; 4: 1671–92. [CrossRef] [Google Scholar]
 Verfuss UK, Aniceto AS, Harris DV et al. A review of unmanned vehicles for the detection and monitoring of marine fauna. Mar Pollut Bull 2019; 140: 17–29. [CrossRef] [PubMed] [Google Scholar]
 OlfatiSaber R and Murray RM. Consensus problems in networks of agents with switching topology and timedelays. IEEE Trans Autom Control 2004; 49: 1520–33. [CrossRef] [Google Scholar]
 Yang R, Liu L and Feng G. An overview of recent advances in distributed coordination of multiagent systems. Unmanned Syst 2022; 10: 307–25. [CrossRef] [Google Scholar]
 Shan H, Xue H, Hu S et al. Finitetime dynamic surface control for multiagent systems with prescribed performance and unknown control directions. Int J Syst Sci 2022; 53: 325–36. [CrossRef] [Google Scholar]
Zhang J, Liu S and Zhang X. Observerbased distributed consensus for nonlinear multiagent systems with limited data rate. Sci Chin Inf Sci 2022; 65: 192204. [Google Scholar]
Zhu Y, Wang Z, Liang H et al. Neuralnetworkbased predefinedtime adaptive consensus in nonlinear multiagent systems with switching topologies. IEEE Trans Neural Networks Learn Syst 2023, doi: 10.1109/TNNLS.2023.3238336. [Google Scholar]
 Chen J, Xie J, Li J et al. Humanintheloop fuzzy iterative learning control of consensus for unknown mixedorder nonlinear multiagent systems. IEEE Trans Fuzzy Syst 2023, doi: 10.1109/TFUZZ.2023.3296572. [Google Scholar]
 Zhang H, Jiang H, Luo Y et al. Datadriven optimal consensus control for discretetime multiagent systems with unknown dynamics using reinforcement learning method. IEEE Trans Ind Electron 2016; 64: 4091–100. [Google Scholar]
 Li S, Zhang J, Li X et al. Formation control of heterogeneous discretetime nonlinear multiagent systems with uncertainties. IEEE Trans Ind Electron 2017; 64: 4730–40. [CrossRef] [Google Scholar]
 Jiang Y, Fan J, Gao W et al. Cooperative adaptive optimal output regulation of nonlinear discretetime multiagent systems. Automatica 2020; 121: 109149. [CrossRef] [Google Scholar]
 Li H and Li X. Distributed fixedtime consensus of discretetime heterogeneous multiagent systems via predictive mechanism and Lyapunov approach. IEEE Trans Circuits Syst II: Express Briefs 2023, doi: 10.1109/TCSII.2023.3302136. [Google Scholar]
 Fu M and Xie L. The sector bound approach to quantized feedback control. IEEE Trans Autom Control 2005; 50: 1698–711. [CrossRef] [Google Scholar]
 Hayakawa T, Ishii H and Tsumura K. Adaptive quantized control for linear uncertain discretetime systems. Automatica 2009; 45: 692–700. [CrossRef] [Google Scholar]
 Nesic D and Liberzon D. A unified framework for design and analysis of networked and quantized control systems. IEEE Trans Autom Control 2009; 54: 732–47. [CrossRef] [Google Scholar]
 Zhang Y, Zhang J, Liu X et al. Quantizedoutput feedback model reference control of discretetime linear systems. Automatica 2022; 137: 110027. [CrossRef] [Google Scholar]
 Liu G, Pan Y, Lam HK et al. Eventtriggered fuzzy adaptive quantized control for nonlinear multiagent systems in nonaffine purefeedback form. Fuzzy Sets Syst 2021; 416: 27–46. [CrossRef] [Google Scholar]
 Liu W, Ma Q, Xu S et al. State quantized output feedback control for nonlinear systems via eventtriggered sampling. IEEE Trans Autom Control 2022; 67: 6810–17. [CrossRef] [Google Scholar]
 Carli R, Fagnani F, Frasca P et al. Efficient quantized techniques for consensus algorithms. NeCST workshop 2007; 1–8. [Google Scholar]
 Li T and Xie L. Distributed consensus over digital networks with limited bandwidth and timevarying topologies. Automatica 2011; 47: 2006–15. [CrossRef] [Google Scholar]
 Dong W. Consensus of highorder nonlinear continuoustime systems with uncertainty and limited communication data rate. IEEE Trans Autom Control 2009; 64: 2100–2107. [Google Scholar]
 Zhang H, Chi R, Hou Z et al. Datadriven iterative learning control using a uniform quantizer with an encoding–decoding mechanism. Int J Robust Nonlinear Control 2022; 32: 4336–4354. [CrossRef] [Google Scholar]
 Ren H, Liu R, Cheng Z et al. Datadriven eventtriggered control for nonlinear multiagent systems with uniform quantization. IEEE Trans Circuits Syst II: Express Briefs 2023, doi: 10.1109/TCSII.2023.3305946. [Google Scholar]
 Zhu P, Jin S, Bu X et al. Distributed datadriven control for a connected heterogeneous vehicle platoon under quantized and switching topologies communication. IEEE Trans Veh Technol 2023; 72: 9796–807. [CrossRef] [Google Scholar]
 Wang P, Ren X, Hu S et al. Eventbased adaptive compensation control of nonlinear cyberphysical systems under actuator failure and false data injection attack. In: 2021 40th Chinese Control Conference. IEEE, 2021, 509–14. [CrossRef] [Google Scholar]
 Guo H, Sun J and Pang Z. Residualbased false data injection attacks against multisensor estimation systems. IEEE/CAA J Autom Sinica 2023;in press, doi: 10.1109/JAS.2023.123441. [Google Scholar]
 Li X, Zhou Q, Li P et al. Eventtriggered consensus control for multiagent systems against false datainjection attacks. IEEE Trans Cybern 2020; 50: 1856–66. [CrossRef] [PubMed] [Google Scholar]
 Meng M, Xiao G, Li B. Adaptive consensus for heterogeneous multiagent systems under sensor and actuator attacks. Automatica 2020; 122: 109242. [CrossRef] [Google Scholar]
 Zhang S, Che W and Deng C. Observerbased eventtriggered secure synchronization control for multiagent systems under false data injection attacks. Int J. Robust Nonlinear Control 2022; 32: 4843–60. [CrossRef] [Google Scholar]
 Wang Z, Shi S, He W et al. Observerbased asynchronous eventtriggered bipartite consensus of multiagent systems under false data injection attacks. IEEE Trans Control Netw Syst 2023; in press, doi: 10.1109/TCNS.2023.3235425. [Google Scholar]
 Huo S, Huang D, Zhang Y. Secure output synchronization of heterogeneous multiagent systems against false data injection attacks. Sci Chin Inf Sci 2022; 65: 162204. [CrossRef] [Google Scholar]
 Hu Z, Chen K, Deng F et al. H_{∞} controller design for networked systems with twochannel packet dropouts and FDI attacks. IEEE Trans Cybern 2023, doi: 10.1109/TCYB.2022.3233065. [PubMed] [Google Scholar]
 Wei B, Tian E, Gu Z et al. Quasiconsensus control for stochastic multiagent systems: when energy harvesting constraints meet multimodal FDI attacks. IEEE Trans Cybern 2023, doi: 10.1109/TCYB.2023.3253141. [PubMed] [Google Scholar]
 Tahoun A and Arafa M. Cooperative control for cyberphysical multiagent networked control systems with unknown false datainjection and replay cyberattacks. ISA Trans 2021; 110: 1–14. [CrossRef] [PubMed] [Google Scholar]
 Cao L, Ren H, Li H et al. Eventtriggered outputfeedback control for largescale systems with unknown hysteresis. IEEE Trans Cybern 2020; 51: 5236–47. [Google Scholar]
 Chen G, Yao D, Li H et al. Saturated threshold eventtriggered control for multiagent systems under sensor attacks and its application to UAVs. IEEE Trans Circuits Syst I Regul Pap 2021; 69: 884–95. [Google Scholar]
 Hashemi M, Javad A, Jafar G et al. Robust adaptive actuator failure compensation for a class of uncertain nonlinear systems. Int J Autom Comput 2017; 14: 719–28. [CrossRef] [Google Scholar]
 Liu H and Zhang T. Adaptive neural network finitetime control for uncertain robotic manipulators. J Int Robotic Syst 2014; 75: 363–77. [CrossRef] [Google Scholar]
 Du J, Guo C, Yu S et al. Adaptive autopilot design of timevarying uncertain ships with completely unknown control coefficient. IEEE J Oceanic Eng 2007; 32: 346–52. [CrossRef] [Google Scholar]
 Park J and Sandberg IW. Universal approximation using radialbasisfunction networks. Neural Comput 1991; 3: 246–57. [CrossRef] [PubMed] [Google Scholar]
 Jagannathan S, Vandegrift MW and Lewis FL. Adaptive fuzzy logic control of discretetime dynamical systems. Automatica 2000; 36: 229–41. [CrossRef] [Google Scholar]
 Ma H and Yang GH. Simultaneous fault diagnosis for robot manipulators with actuator and sensor faults. Inf Sci 2016; 366: 12–30. [CrossRef] [Google Scholar]
 Tzeng C, Goodwin G and Crisafulli S. Feedback linearization design of a ship steering autopilot with saturating and slew rate limiting actuator. Int J Adap Control Signal Process 1999; 13: 23–30. [CrossRef] [Google Scholar]
Yanhui Zhang received a B.S. degree and an M.S. degree in Applied Mathematics from Bohai University, Jinzhou, China, in 2016 and 2019, respectively. She received her Ph.D. degree in the School of Automation, Beijing Institute of Technology in 2023. Her current research interests include fuzzy adaptive control, multiagent systems, and nonlinear systems.
Di Mei received a B.Eng. degree in automation from Beijing Institute of Technology, Beijing, China, in 2015, and an M.Eng. degree in control engineering from Beijing Information Science and Technology University, Beijing, China, in 2018. He is currently pursuing a Ph.D. degree from the College of Automation, Beijing Institute of Technology, Beijing, China. His current research interest includes multiagent systems, distributed cooperative control, and reinforcement learning.
Yong Xu received a Ph.D. degree in control science and engineering from Zhejiang University, Hangzhou, China, in 2020. He held a postdoctoral position supervised by Prof. Jian Sun at the State Key Lab of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology from 2020 to 2022. In July 2022, he joined the School of Automation at Beijing Institute of Technology, Beijing, where he is currently an Associate Professor. His current research interests include multiagent systems, reinforcement learning/data-driven control, security analysis, and control of networked systems.
Lihua Dou received B.S., M.S., and Ph.D. degrees in control theory and control engineering from Beijing Institute of Technology, Beijing, China, in 1979, 1987, and 2001, respectively. She is currently a professor of control science and engineering with the Key Laboratory of Complex System Intelligent Control and Decision, School of Automation, Beijing Institute of Technology. Her research interests include multiobjective optimization and decision, pattern recognition, and image processing.
All Figures
Figure 1. Block diagram of design process