
\section{Elementary inequalities for $\sigma_k$ function (Lecture given by Xinan Ma)}
The elementary symmetric functions appear naturally in the geometric quantities. In order to carry on analysis, we need to understand properties of the elementary symmetric functions.

For $1\leq k\leq n$ and $\lambda=(\lambda_1,\lambda_2,...,\lambda_n)\in \mathbb{R}^n$, the $k$-th elementary symmetric function is
defined as
\begin{equation}
\sigma_k(\lambda)=\sum\limits_{1\leq i_1<i_2<...<i_k\leq n}\lambda_{i_1}\lambda_{i_2}...\lambda_{i_k}.\nonumber
\end{equation}
where the sum is taken over all strictly increasing sequences $i_1,...,i_k$ of the indices from
the set $\{1,2,...,n\}$. The definition extends to symmetric matrices:
denoting by $\lambda(W)=(\lambda_1(W),...,\lambda_n(W))$ the eigenvalues of a symmetric matrix $W$, we set
$\sigma_k(W)=\sigma_k(\lambda(W))$.
It is convenient to set
\begin{equation}
\sigma_0(W) = 1,\ \ \ \sigma_k(W)=0\ \ \ \mbox{for}\ \ \ k > n.
\end{equation}
It follows directly from the definition that, for any $n\times n$ symmetric matrix $W$, and
$t\in \mathbb{R}$,
\begin{equation}
\sigma_n(I+tW)=\det(I_n+tW)=\sum\limits_{i=0}^n\sigma_i(W)t^i.
\end{equation}
Conversely, this identity can also be used to define $\sigma_k(W)$ for all $k=0,...,n$.\\
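This expansion is easy to check numerically. The sketch below (not part of the lecture; the helper `sigma` and the random symmetric matrix are our own) compares $\det(I+tW)$ with $\sum_{i=0}^n\sigma_i(W)t^i$, where the $i=0$ term is $\sigma_0=1$:

```python
# Numerical sanity check (illustrative sketch):
# det(I + tW) = sum_{i=0}^n sigma_i(W) t^i, with sigma_i(W) = sigma_i(lambda(W)).
import itertools
import numpy as np

def sigma(k, lam):
    """k-th elementary symmetric function of the entries of lam."""
    if k == 0:
        return 1.0
    if k > len(lam):
        return 0.0
    return float(sum(np.prod(c) for c in itertools.combinations(lam, k)))

rng = np.random.default_rng(0)
n, t = 4, 0.7
A = rng.standard_normal((n, n))
W = (A + A.T) / 2                    # a random symmetric matrix
lam = np.linalg.eigvalsh(W)          # its eigenvalues lambda(W)

lhs = np.linalg.det(np.eye(n) + t * W)
rhs = sum(sigma(i, lam) * t**i for i in range(n + 1))
assert abs(lhs - rhs) < 1e-10
```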

An important property of $\sigma_k$ is its divergence-free structure.

 

We say a function $u\in C^2(\Omega)\cap C^0(\overline{\Omega})$ is
$k$-admissible if
\begin{equation}
\lambda(D^2u)\in \overline{\Gamma}_k,
\end{equation}

where $\Gamma_k$ is an open symmetric convex cone in $\mathbb{R}^n$, with vertex at the origin,
given by
\begin{equation}
\Gamma_k=\{(\lambda_1,\lambda_2,...,\lambda_n)\in\mathbb{R}^n\mid \sigma_j(\lambda)>0, \ \ \forall \ j=1,2,...,k \}
\end{equation}

Clearly $\sigma_k(\lambda)=0$ for $\lambda\in\partial\Gamma_k$, and
\begin{equation}
\Gamma_n\subset\Gamma_{n-1}\subset...\subset\Gamma_1\nonumber
\end{equation}
$\Gamma_n$ is the positive cone,

$$\Gamma_n=\{(\lambda_1,\lambda_2,...,\lambda_n)\in\mathbb{R}^n\mid \lambda_1>0,\ \lambda_2>0,...,\ \lambda_n>0\}\nonumber
$$
and $\Gamma_1$ is the half space $\{\lambda\in\mathbb{R}^n\mid \sum\limits_{j=1}^n\lambda_{j}>0\}$. A function is $1$-admissible if
and only if it is sub-harmonic, and an $n$-admissible function must be convex.
For any $2\leq k\leq n$, a $k$-admissible function is sub-harmonic, and the set of
all $k$-admissible functions is a convex cone in $C^2(\Omega)$.\\

The cone $\Gamma_k$ may also be equivalently defined as the component $\{\lambda\in\mathbb{R}^n\mid \sigma_k(\lambda)>0\}$
containing the vector $(1,1,...,1)$, or characterized as
$\Gamma_k=\{\lambda\in\mathbb{R}^n\mid 0<\sigma_k(\lambda)\leq\sigma_k(\lambda+\eta),\ \forall\ \eta_i\geq0\ \ i=1,2,...,n\}.$\\

For example, $\sigma_1(\lambda)=\sum\limits_{i=1}^{n}\lambda_i$ and $\sigma_n(\lambda)=\prod\limits_{i=1}^{n}\lambda_i$. In particular, if $n=3$ and $k=2$, we have $\sigma_2(\lambda)=\lambda_1\lambda_2+\lambda_2\lambda_3+\lambda_3\lambda_1$.\\

We collect some inequalities related to the polynomial $\sigma_k(\lambda)$,
which are needed in our investigation of the $k$-Hessian equation. Recall that $\sigma_0=1$ and $\sigma_k=0$ for $k>n$.\\


(1)$\sigma_{k+1}(\lambda)=\sigma_{k+1}(\lambda| i)+\lambda_i\sigma_{k}(\lambda|i)$, where $(\lambda| i)=(\lambda_1,...,\hat{\lambda}_i,...,\lambda_n)$ denotes $\lambda$ with the entry $\lambda_i$ deleted;\\
(2)$\sum\limits_{i=1}^n\lambda_i\sigma_k(\lambda| i)=(k+1)\sigma_{k+1}(\lambda)$;\\
(3)$\sum\limits_{i=1}^n\sigma_k(\lambda|i)=(n-k)\sigma_k(\lambda)$;\\
(4)$\frac{\partial\sigma_{k+1}(\lambda)}{\partial\lambda_i}=\sigma_k(\lambda| i)$;\\
(5)$\sum\limits_{i=1}^n\lambda_i^2\sigma_k(\lambda| i)=\sum\limits_{i=1}^n\lambda_i(\sigma_{k+1}(\lambda)-\sigma_{k+1}(\lambda| i))=\sigma_1(\lambda)\cdot\sigma_{k+1}(\lambda)-(k+2)\sigma_{k+2}(\lambda)$.

The above five identities follow easily from the definition by elementary algebra.
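These identities can be spot-checked numerically. The sketch below (our own helpers `sigma` and `drop`, not from the lecture) verifies (1)--(5) for a random $\lambda\in\mathbb{R}^5$ with $k=2$, using a central difference for the derivative in (4):

```python
# Numerical spot-check (illustrative sketch) of identities (1)-(5).
import itertools
import numpy as np

def sigma(k, lam):
    if k == 0:
        return 1.0
    if k > len(lam):
        return 0.0
    return float(sum(np.prod(c) for c in itertools.combinations(lam, k)))

def drop(lam, i):
    """(lambda | i): lambda with the i-th entry deleted."""
    return np.delete(lam, i)

rng = np.random.default_rng(1)
lam = rng.standard_normal(5)
n, k, i = 5, 2, 3

# (1) sigma_{k+1}(lam) = sigma_{k+1}(lam|i) + lam_i * sigma_k(lam|i)
assert np.isclose(sigma(k + 1, lam),
                  sigma(k + 1, drop(lam, i)) + lam[i] * sigma(k, drop(lam, i)))
# (2) sum_i lam_i * sigma_k(lam|i) = (k+1) * sigma_{k+1}(lam)
assert np.isclose(sum(lam[j] * sigma(k, drop(lam, j)) for j in range(n)),
                  (k + 1) * sigma(k + 1, lam))
# (3) sum_i sigma_k(lam|i) = (n-k) * sigma_k(lam)
assert np.isclose(sum(sigma(k, drop(lam, j)) for j in range(n)),
                  (n - k) * sigma(k, lam))
# (4) d sigma_{k+1} / d lam_i = sigma_k(lam|i), via a central difference
h = 1e-6
lp, lm = lam.copy(), lam.copy()
lp[i] += h
lm[i] -= h
assert np.isclose((sigma(k + 1, lp) - sigma(k + 1, lm)) / (2 * h),
                  sigma(k, drop(lam, i)), atol=1e-5)
# (5) sum_i lam_i^2 sigma_k(lam|i) = sigma_1*sigma_{k+1} - (k+2)*sigma_{k+2}
assert np.isclose(sum(lam[j] ** 2 * sigma(k, drop(lam, j)) for j in range(n)),
                  sigma(1, lam) * sigma(k + 1, lam) - (k + 2) * sigma(k + 2, lam))
```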

(Newton's inequality) $\forall\ k\geq1$,
$$
(n-k+1)(k+1)\sigma_{k+1}(\lambda)\cdot\sigma_{k-1}(\lambda)\leq k(n-k)\sigma_k^2(\lambda),
$$
which is precisely
$$
\frac{\sigma_{k+1}(\lambda)}{C_n^{k+1}}\cdot\frac{\sigma_{k-1}(\lambda)}{C_n^{k-1}}\leq \Big(\frac{\sigma_{k}(\lambda)}{C_n^{k}}\Big)^2.\nonumber
$$

See the books: G. H. Hardy, J. E. Littlewood and G. P\'olya, \emph{Inequalities}; D. S. Mitrinovi\'c, \emph{Analytic Inequalities}.\\
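Newton's inequality holds for every real $\lambda$, with no cone condition, so it can be tested directly. The following sketch (the helper `sigma` is our own) checks the normalized form $p_{k-1}p_{k+1}\leq p_k^2$, $p_k=\sigma_k/C_n^k$, on random vectors:

```python
# Numerical spot-check (sketch): with p_k = sigma_k / C(n,k),
# Newton's inequality says p_{k-1} * p_{k+1} <= p_k^2 for all real lambda.
import itertools
from math import comb
import numpy as np

def sigma(k, lam):
    if k == 0:
        return 1.0
    if k > len(lam):
        return 0.0
    return float(sum(np.prod(c) for c in itertools.combinations(lam, k)))

rng = np.random.default_rng(2)
n = 6
for _ in range(100):
    lam = rng.standard_normal(n)
    p = [sigma(k, lam) / comb(n, k) for k in range(n + 1)]
    for k in range(1, n):
        assert p[k - 1] * p[k + 1] <= p[k] ** 2 + 1e-12
```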


G\aa rding's inequality (1958) [see CNS III]:\\
Assume $\lambda\in\Gamma_k$ and $\mu\in\Gamma_k$; then
$$
\frac{1}{k}\sum\limits_{i=1}^{n}\mu_i\sigma_{k-1}(\lambda| i)\geq\left(\sigma_k(\mu)\right)^{\frac{1}{k}}\cdot\left(\sigma_k(\lambda)\right)^{1-\frac{1}{k}}.
$$
A corollary: for $\lambda\in\Gamma_k$, $\sigma_k^{\frac{1}{k}}(\lambda)$ is concave w.r.t. $\lambda$.\\

We only need to show that for any $\lambda,\mu\in\Gamma_k$,
$$
\sigma^{\frac{1}{k}}_{k}(\mu)\leq\sigma^{\frac{1}{k}}_{k}(\lambda)+\frac{1}{k}\sigma^{\frac{1}{k}-1}_{k}(\lambda)\sum\limits_{i=1}^{n}\sigma_{k-1}(\lambda| i)(\mu_i-\lambda_i).
$$
By Lemma 1.1(2), we have $\frac{1}{k}\sigma^{\frac{1}{k}-1}_{k}(\lambda)\sum\limits_{i=1}^{n}\sigma_{k-1}(\lambda|i)\lambda_i=\sigma^{\frac{1}{k}}_{k}(\lambda)$. It follows that the above inequality is equivalent to
$$
\sigma^{\frac{1}{k}}_{k}(\mu)\leq\frac{1}{k}\sigma^{\frac{1}{k}-1}_{k}(\lambda)\sum\limits_{i=1}^{n}\sigma_{k-1}(\lambda| i)\mu_i.
$$
This is precisely G\aa rding's inequality, which completes the proof.
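Gårding's inequality can likewise be spot-checked numerically. The sketch below (our own helpers) samples $\lambda,\mu$ from the positive cone $\Gamma_n\subset\Gamma_k$:

```python
# Numerical spot-check (sketch) of Garding's inequality for lambda, mu in the
# positive cone (hence in Gamma_k):
#   (1/k) sum_i mu_i sigma_{k-1}(lam|i) >= sigma_k(mu)^{1/k} sigma_k(lam)^{1-1/k}.
import itertools
import numpy as np

def sigma(k, lam):
    if k == 0:
        return 1.0
    if k > len(lam):
        return 0.0
    return float(sum(np.prod(c) for c in itertools.combinations(lam, k)))

def drop(lam, i):
    return np.delete(lam, i)

rng = np.random.default_rng(3)
n, k = 5, 3
for _ in range(100):
    lam = rng.uniform(0.1, 2.0, n)   # lambda in Gamma_n, a subset of Gamma_k
    mu = rng.uniform(0.1, 2.0, n)
    lhs = sum(mu[i] * sigma(k - 1, drop(lam, i)) for i in range(n)) / k
    rhs = sigma(k, mu) ** (1 / k) * sigma(k, lam) ** (1 - 1 / k)
    assert lhs >= rhs - 1e-10
```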

The following three statements are equivalent:\\
(1)$\Gamma_k=\{(\lambda_1,\lambda_2,...,\lambda_n)\in\mathbb{R}^n\mid \sigma_j(\lambda)>0, \ \ \forall \ j=1,2,...,k \}$;\\
(2)$\Gamma_k=\{\lambda\in\mathbb{R}^n\mid 0<\sigma_k(\lambda)\leq\sigma_k(\lambda+\eta),\ \forall\ \eta=(\eta_1,...,\eta_n),\ \eta_i\geq0\ \ \mbox{for}\ \ i=1,2,...,n\}$;\\
(3)$\Gamma_k$ is defined as the component $\{\lambda\in\mathbb{R}^n\mid \sigma_k(\lambda)>0\}$
containing the vector $(1,1,...,1)$ or $\Gamma_n$.

(Ellipticity):\\
$\forall \lambda\in\Gamma_k$, $\forall\ h\in\{1,2,...,k-1\}$, we have $\sigma_{h}(\lambda|i)>0$,\ $\forall\ i=1,2,...,n$. \\
It follows that
$\frac{\partial\sigma_{k}(\lambda)}{\partial\lambda_i}=\sigma_{k-1}(\lambda| i)>0$, which indicates the ellipticity of the $\sigma_k$ equation.

Lemma 1.4 can be proved by induction on $h$.
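This positivity can be illustrated numerically by rejection-sampling points of $\Gamma_k$ (some of which have negative entries) and checking $\sigma_h(\lambda|i)>0$; the helpers below are our own sketch:

```python
# Numerical spot-check (sketch): if lambda is in Gamma_k, then
# sigma_h(lambda|i) > 0 for all h = 1, ..., k-1 and every i.
import itertools
import numpy as np

def sigma(k, lam):
    if k == 0:
        return 1.0
    if k > len(lam):
        return 0.0
    return float(sum(np.prod(c) for c in itertools.combinations(lam, k)))

def in_gamma(k, lam):
    """Membership test for Gamma_k: sigma_j(lam) > 0 for j = 1, ..., k."""
    return all(sigma(j, lam) > 0 for j in range(1, k + 1))

rng = np.random.default_rng(4)
n, k = 5, 3
checked = 0
while checked < 50:
    lam = rng.standard_normal(n) + 1.0   # some entries may be negative
    if not in_gamma(k, lam):
        continue                         # keep only samples in Gamma_k
    for h in range(1, k):
        for i in range(n):
            assert sigma(h, np.delete(lam, i)) > 0
    checked += 1
```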


(Newton-Maclaurin's inequality)\\
$\forall\ k\geq1$, $\forall$ $\lambda\in\Gamma_k$,
$$
\left(\frac{\sigma_{k}(\lambda)}{C_n^{k}}\right)^{\frac{1}{k}}\leq \left(\frac{\sigma_{l}(\lambda)}{C_n^{l}}\right)^{\frac{1}{l}},\ \ k>l\geq1.
$$
We only need to show
$$
\left(\frac{\sigma_{l}(\lambda)}{C_n^{l}}\right)^{\frac{1}{l}}\leq \left(\frac{\sigma_{l-1}(\lambda)}{C_n^{l-1}}\right)^{\frac{1}{l-1}},\ \ k\geq l\geq2,
$$
which can be proved by induction on $l$.
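On the positive cone the whole Maclaurin chain is available, and it can be spot-checked numerically (a sketch with our own helper `sigma`):

```python
# Numerical spot-check (sketch) of the Maclaurin chain
#   sigma_1/C(n,1) >= (sigma_2/C(n,2))^{1/2} >= ... >= (sigma_n/C(n,n))^{1/n}
# for lambda in the positive cone.
import itertools
from math import comb
import numpy as np

def sigma(k, lam):
    if k == 0:
        return 1.0
    return float(sum(np.prod(c) for c in itertools.combinations(lam, k)))

rng = np.random.default_rng(5)
n = 5
for _ in range(100):
    lam = rng.uniform(0.1, 3.0, n)
    m = [(sigma(k, lam) / comb(n, k)) ** (1.0 / k) for k in range(1, n + 1)]
    for k in range(n - 1):
        assert m[k] >= m[k + 1] - 1e-12
```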

(Generalized Newton--Maclaurin inequality)
$\forall$ $\lambda\in\Gamma_k$, $\forall\ k>l\geq0,\ r>s\geq0,\ k\geq r,\ l\geq s$,
$$
\left(\frac{\sigma_{k}(\lambda)/C_n^{k}}{\sigma_{l}(\lambda)/C_n^{l}}\right)^{\frac{1}{k-l}}\leq \left(\frac{\sigma_{r}(\lambda)/C_n^{r}}{\sigma_{s}(\lambda)/C_n^{s}}\right)^{\frac{1}{r-s}},
$$
with equality iff $\lambda_1=...=\lambda_n$.

$\forall\ k\geq2$, $\forall$ $\lambda\in\Gamma_k$: \\
(1)$\lambda_i\leq\sigma_1(\lambda)$, $\forall$ $i=1,2,...,n$;\\
(2)$\sum\limits_{i=1}^{n}\frac{\partial\sigma^{\frac{1}{k}}_k(\lambda)}{\partial\lambda_i}\geq\Big(C^k_n\Big)^{\frac{1}{k}}$;\\
(3)$\sum\limits_{i=1}^{n}\frac{\partial\big(\frac{\sigma_k}{\sigma_l}(\lambda)\big)^{\frac{1}{k-l}}}{\partial\lambda_i}\geq\Big(\frac{C^k_n}{C^l_n}\Big)^{\frac{1}{k-l}}$, where $k>l\geq0$.

(1)For $k\geq2$ and $\lambda\in\Gamma_k$, by Lemma 1.4 we have $\sigma_1(\lambda|i)>0$ for each $i=1,2,...,n$. Then
$$
\sigma_1(\lambda)=\lambda_i+\sigma_1(\lambda|i)>\lambda_i,\ \ \ \forall \ \ i=1,2,...,n.
$$
(2)By Lemma 1.1 and Lemma 1.5(1), we have
\begin{eqnarray*}
\sum\limits_{i=1}^{n}\frac{\partial\sigma^{\frac{1}{k}}_k(\lambda)}{\partial\lambda_i}&=&\sum\limits_{i=1}^{n}\frac{1}{k}\sigma^{\frac{1}{k}-1}_k(\lambda)\sigma_{k-1}(\lambda| i)\\
&=&\frac{n-k+1}{k}\sigma^{\frac{1}{k}-1}_k(\lambda)\cdot\sigma_{k-1}(\lambda)\\
&\geq& (C^k_n)^{\frac{1}{k}}.
\end{eqnarray*}

$\forall$ $\lambda\in\Gamma_k$ with $\lambda_1\geq\lambda_2\geq...\geq\lambda_n$: \\
(1)$\sigma_{k-1}(\lambda|n)\geq...\geq\sigma_{k-1}(\lambda|1)$;\\
(2)$\sigma_k(\lambda)\leq C^k_n\cdot\prod\limits_{i=1}^k\lambda_i$ and $\lambda_k>0$.

(1)By Lemma 1.1, for $i\neq j$ we have
\begin{eqnarray*}
\sigma_{k-1}(\lambda|i)&=&\sigma_{k-1}(\lambda| i,j)+\lambda_j\sigma_{k-2}(\lambda|i,j),\\
\sigma_{k-1}(\lambda|j)&=&\sigma_{k-1}(\lambda| j,i)+\lambda_i\sigma_{k-2}(\lambda|j,i).
\end{eqnarray*}
It follows that
$$
\sigma_{k-1}(\lambda|i)-\sigma_{k-1}(\lambda| j)=(\lambda_j-\lambda_i)\cdot\sigma_{k-2}(\lambda|i,j).
$$
Then (1) follows from Lemma 1.4.
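The monotonicity in (1) is easy to confirm numerically on sorted samples from the positive cone (a sketch with our own helper `sigma`):

```python
# Numerical spot-check (sketch): for lambda in Gamma_k sorted decreasingly,
#   sigma_{k-1}(lambda|n) >= ... >= sigma_{k-1}(lambda|1).
import itertools
import numpy as np

def sigma(k, lam):
    if k == 0:
        return 1.0
    return float(sum(np.prod(c) for c in itertools.combinations(lam, k)))

rng = np.random.default_rng(7)
n, k = 5, 3
for _ in range(100):
    lam = np.sort(rng.uniform(0.1, 2.0, n))[::-1]   # lam_1 >= ... >= lam_n
    vals = [sigma(k - 1, np.delete(lam, i)) for i in range(n)]
    for i in range(n - 1):
        # deleting a smaller entry leaves a larger value
        assert vals[i] <= vals[i + 1] + 1e-12
```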

$\forall$ $\lambda\in\Gamma_k$, $\lambda_1\geq\lambda_2\geq...\geq\lambda_n$, \\
(1)$\lambda_1\sigma_{k-1}(\lambda|1)\geq\frac{k}{n}\sigma_{k}(\lambda)$, which shows that an upper bound for $\lambda_1$ is crucial for the uniform ellipticity of the equation $\sigma_k=f$;\\
(2)$\sigma_{k-1}(\lambda|k)\geq \theta(k,n)\sigma_{k-1}(\lambda)$.


(1)By Lemma 1.1, we have $\sigma_k(\lambda)=\lambda_1\cdot\sigma_{k-1}(\lambda| 1)+\sigma_{k}(\lambda|1)$. If $\sigma_{k}(\lambda|1)\leq0$, the inequality follows immediately. Now assume $\sigma_{k}(\lambda|1)>0$; by Lemma 1.4, we have $(\lambda|1)\in \Gamma_k$. By Lemma 1.5(2), we have
$$
\frac{k}{n-k}\cdot\frac{\sigma_{k}(\lambda|1)}{\sigma_{k-1}(\lambda| 1)}\leq\frac{\sigma_1(\lambda|1)}{n-1}\leq \lambda_1.
$$
Then
$$
\sigma_{k}(\lambda|1)\leq\frac{n-k}{k}\lambda_1\cdot\sigma_{k-1}(\lambda|1).
$$
Combining this with $\sigma_k(\lambda)=\lambda_1\cdot\sigma_{k-1}(\lambda| 1)+\sigma_{k}(\lambda|1)$, we obtain
$$
\sigma_k(\lambda)\leq\lambda_1\cdot\sigma_{k-1}(\lambda| 1)+\frac{n-k}{k}\lambda_1\cdot\sigma_{k-1}(\lambda| 1)=\frac{n}{k}\lambda_1\cdot\sigma_{k-1}(\lambda|1).
$$
This completes the proof of (1).\\
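The bound in (1) can be spot-checked on rejection-sampled points of $\Gamma_k$ (a sketch; the helpers are our own):

```python
# Numerical spot-check (sketch): lam_1 * sigma_{k-1}(lam|1) >= (k/n) sigma_k(lam)
# for lam in Gamma_k, sorted so that lam_1 is the largest entry.
import itertools
import numpy as np

def sigma(k, lam):
    if k == 0:
        return 1.0
    return float(sum(np.prod(c) for c in itertools.combinations(lam, k)))

def in_gamma(k, lam):
    return all(sigma(j, lam) > 0 for j in range(1, k + 1))

rng = np.random.default_rng(8)
n, k = 5, 3
checked = 0
while checked < 50:
    lam = np.sort(rng.standard_normal(n) + 1.0)[::-1]
    if not in_gamma(k, lam):
        continue
    assert lam[0] * sigma(k - 1, np.delete(lam, 0)) >= (k / n) * sigma(k, lam) - 1e-10
    checked += 1
```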
(2)By Lemma 1.1, we have
\begin{eqnarray*}
\sigma_k(\lambda|1,k)+\lambda_1\sigma_{k-1}(\lambda|1,k)&=&\sigma_k(\lambda|k)=\sigma_k(\lambda)-\lambda_k\sigma_{k-1}(\lambda|k)\geq-\lambda_k\sigma_{k-1}(\lambda|k),\\
\sigma_{k-1}(\lambda|1,k)+\lambda_1\sigma_{k-2}(\lambda| 1,k)&=&\sigma_{k-1}(\lambda|k).
\end{eqnarray*}
Eliminating $\lambda_1$ from the two formulas above, we obtain
\begin{eqnarray*}
&&\sigma_{k-1}^2(\lambda|1,k)-\sigma_{k-2}(\lambda|1,k)\sigma_{k}(\lambda| 1,k)\\
&\leq&\sigma_{k-1}(\lambda|k)\Big(\sigma_{k-1}(\lambda| 1,k)+\lambda_k\sigma_{k-2}(\lambda|1,k)\Big)\\
&=&\sigma_{k-1}(\lambda| k)\sigma_{k-1}(\lambda|1)\leq \sigma_{k-1}^2(\lambda| k).
\end{eqnarray*}
By Newton's inequality, we have
$$
\sigma_{k-1}^2(\lambda|1,k)-\sigma_{k-2}(\lambda|1,k)\sigma_{k}(\lambda| 1,k)\geq\left(1-\frac{(k-1)(n-k-1)}{k(n-k)}\right)\sigma_{k-1}^2(\lambda|1,k)=\frac{n-1}{k(n-k)}\sigma_{k-1}^2(\lambda|1,k).
$$
Combining the two inequalities above, it follows that
$$
|\sigma_{k-1}(\lambda| 1,k)|\leq\sqrt{\frac{k(n-k)}{n-1}}\,\sigma_{k-1}(\lambda|k).
$$
Therefore,
\begin{eqnarray*}
\sigma_{k-1}(\lambda|k)&=&\sigma_{k-1}(\lambda|1,k)+\lambda_1\sigma_{k-2}(\lambda| 1,k)\\
&\geq&-C(n,k)\,\sigma_{k-1}(\lambda|k)+\lambda_1\sigma_{k-2}(\lambda| 1,k),
\end{eqnarray*}
where $C(n,k)=\sqrt{\frac{k(n-k)}{n-1}}$.\\

Hence,
$$
\sigma_{k-1}(\lambda|k)\geq\frac{\lambda_1}{1+C(n,k)}\sigma_{k-2}(\lambda|1,k).
$$
Then the result follows by recursion.\\
For more inequalities, see\\
Mi Lin and Neil S. Trudinger, On some inequalities for elementary symmetric functions,
Bull. Austral. Math. Soc., 50 (1994), 317--326.


$\frac{\sigma_k(\lambda)}{\sigma_{k-1}(\lambda)}$ is concave w.r.t. $\lambda\in\Gamma_{k-1}$; here we must make full use of the condition $\lambda\in\Gamma_{k-1}$. More generally, if $0\leq l<k$, then $\Big(\frac{\sigma_k(\lambda)}{\sigma_{l}(\lambda)}\Big)^{\frac{1}{k-l}}$ is concave w.r.t. $\lambda\in\Gamma_{k}$.

For $\lambda\in\Gamma_k$, see the proof in G. Lieberman's book [\emph{Second Order Parabolic Differential Equations}, 2nd ed.], p.~404, or D. S. Mitrinovi\'c [\emph{Analytic Inequalities}], p.~102.\\
We only need to show that for any $\lambda,\mu\in\Gamma_k$,
$$
\frac{\sigma_k(\lambda+\mu)}{\sigma_{k-1}(\lambda+\mu)}\geq\frac{\sigma_k(\lambda)}{\sigma_{k-1}(\lambda)}+\frac{\sigma_k(\mu)}{\sigma_{k-1}(\mu)}.
$$
Once this superadditivity has been established, it is easy to prove that $\Big(\frac{\sigma_k(\lambda)}{\sigma_{l}(\lambda)}\Big)^{\frac{1}{k-l}}$ is concave.
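The superadditivity inequality can be spot-checked numerically on the positive cone (a sketch; the helpers `sigma` and `q` are our own):

```python
# Numerical spot-check (sketch) of the superadditivity of sigma_k/sigma_{k-1}
# on the positive cone (a subset of Gamma_{k-1}):
#   sigma_k(l+m)/sigma_{k-1}(l+m)
#     >= sigma_k(l)/sigma_{k-1}(l) + sigma_k(m)/sigma_{k-1}(m).
import itertools
import numpy as np

def sigma(k, lam):
    if k == 0:
        return 1.0
    return float(sum(np.prod(c) for c in itertools.combinations(lam, k)))

def q(k, lam):
    """The quotient sigma_k / sigma_{k-1}."""
    return sigma(k, lam) / sigma(k - 1, lam)

rng = np.random.default_rng(9)
n, k = 5, 3
for _ in range(100):
    l = rng.uniform(0.1, 2.0, n)
    m = rng.uniform(0.1, 2.0, n)
    assert q(k, l + m) >= q(k, l) + q(k, m) - 1e-10
```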
\\

Matrix case:
Let $W=(w_{ij})_{n\times n}$ be symmetric, with $\lambda(W)=(\lambda_1,\lambda_2,...,\lambda_n)$. Then
$$
\sigma_k(W)=\frac{1}{k!}\sum\limits_{i_1,...,i_k;j_1,...,j_k}\delta(i_1,...,i_k;j_1,...,j_k)w_{i_1j_1}\cdots w_{i_kj_k},
$$
where $\delta(\cdot\,;\cdot)$ is the generalized Kronecker symbol. Consequently,
$$
\frac{\partial\sigma_k(W)}{\partial w_{ij}}=\frac{1}{(k-1)!}\sum\limits_{i_1,...,i_{k-1};j_1,...,j_{k-1}}\delta(i,i_1,...,i_{k-1};j,j_1,...,j_{k-1})w_{i_1j_1}\cdots w_{i_{k-1}j_{k-1}}.
$$
Similarly,
$$
\frac{\partial^2\sigma_k(W)}{\partial w_{ij}\partial w_{rs}}=\frac{1}{(k-2)!}\sum\limits_{i_1,...,i_{k-2};j_1,...,j_{k-2}}\delta(i,r,i_1,...,i_{k-2};j,s,j_1,...,j_{k-2})w_{i_1j_1}\cdots w_{i_{k-2}j_{k-2}}.
$$
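This representation can be checked numerically; below is a sketch (our own code) implementing $\delta(I;J)$ as the sign of the permutation taking the distinct indices $I$ to $J$, and comparing the formula with $\sigma_k$ of the eigenvalues:

```python
# Numerical check (sketch) of the generalized Kronecker formula:
#   sigma_k(W) = (1/k!) sum_{I,J} delta(I;J) w_{i1 j1} ... w_{ik jk}.
import itertools
from math import factorial
import numpy as np

def gen_delta(I, J):
    """Generalized Kronecker symbol delta(I; J)."""
    if len(set(I)) < len(I) or set(I) != set(J):
        return 0
    perm = [I.index(j) for j in J]       # the permutation taking I to J
    sign = 1
    for a in range(len(perm)):           # its sign, by cycle-sorting
        while perm[a] != a:
            b = perm[a]
            perm[a], perm[b] = perm[b], perm[a]
            sign = -sign
    return sign

def sigma_from_delta(k, W):
    n = W.shape[0]
    total = 0.0
    for I in itertools.product(range(n), repeat=k):
        for J in itertools.product(range(n), repeat=k):
            d = gen_delta(I, J)
            if d:
                total += d * np.prod([W[I[a], J[a]] for a in range(k)])
    return total / factorial(k)

def sigma(k, lam):
    if k == 0:
        return 1.0
    return float(sum(np.prod(c) for c in itertools.combinations(lam, k)))

rng = np.random.default_rng(10)
n, k = 4, 2
A = rng.standard_normal((n, n))
W = (A + A.T) / 2
lam = np.linalg.eigvalsh(W)
assert np.isclose(sigma_from_delta(k, W), sigma(k, lam))
```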


Let $W=(w_{ij})_{n\times n}$ be symmetric, with $\lambda(W)=(\lambda_1,\lambda_2,...,\lambda_n)$. \\
If $W$ is diagonal and the $\lambda_i=w_{ii}$ are pairwise distinct, then\\
(1)$\frac{\partial\lambda_i}{\partial w_{ii}}=1$; otherwise $\frac{\partial\lambda_i}{\partial w_{rs}}=0$.\\
(2)$\frac{\partial^2\lambda_i}{\partial w_{ij}\partial w_{ji}}=\frac{1}{\lambda_i-\lambda_j}$ for $i\neq j$; all other second derivatives $\frac{\partial^2\lambda_i}{\partial w_{pq}\partial w_{rs}}$ vanish.

Some corollaries: if $W$ is diagonal, then\\
(1)$\frac{\partial\sigma_k(W)}{\partial w_{ii}}=\sigma_{k-1}(\lambda|i)$; otherwise $\frac{\partial\sigma_k(W)}{\partial w_{ij}}=0$. It is easy to check (by Lemma 1.1(5)) that
$$
\sum\limits_{i,j,m}\frac{\partial\sigma_k(W)}{\partial w_{ij}}w_{im}w_{mj}=\sum\limits_{m}\lambda_m^2\sigma_{k-1}(\lambda|m)=\sigma_1(\lambda)\cdot\sigma_k(\lambda)-(k+1)\sigma_{k+1}(\lambda).
$$

(2)Second derivatives:\\
$\frac{\partial^2\sigma_k(W)}{\partial w_{ii}\partial w_{rr}}=\sigma_{k-2}(\lambda|i,r)$ for $i\neq r$; \\ $\frac{\partial^2\sigma_k(W)}{\partial w_{ij}\partial w_{ji}}=-\sigma_{k-2}(\lambda|i,j)$ for $i\neq j$;\\
otherwise $\frac{\partial^2\sigma_k(W)}{\partial w_{ij}\partial w_{rs}}=0$.\\

Divergence-free structure: for $W=D^2u$, $\sum\limits_{i=1}^n\partial_i\Big(\frac{\partial\sigma_k(D^2u)}{\partial u_{ij}}\Big)=0$ for any $j=1,2,...,n$.\\

The above formulas can be generalized to $F(W)=f(\lambda(W))$. If $W$ is diagonal, then\\
(1)$\frac{\partial F(W)}{\partial w_{ij}}=\frac{\partial f(\lambda)}{\partial\lambda_i}\delta_{ij}$.\\
(2)$\frac{\partial^2 F(W)}{\partial w_{ij}\partial w_{rs}}=\frac{\partial^2 f(\lambda)}{\partial\lambda_i\partial\lambda_r}\delta_{ij}\delta_{rs}+\frac{\frac{\partial f(\lambda)}{\partial\lambda_i}-\frac{\partial f(\lambda)}{\partial\lambda_j}}{\lambda_i-\lambda_j}\,\delta_{is}\delta_{jr}(1-\delta_{ij})$.\\


Let $g(\lambda)=\log\sigma_k(\lambda)$ for $\lambda\in\Gamma_k$; then for any $\xi\in\mathbb{R}^n$,
$$
\sum\limits_{i=1}^n\Big(g_{ii}+\frac{g_i}{\lambda_i}\Big)\xi_i^2+\sum_{i\neq j}g_{ij}\xi_i\xi_j\geq0.
$$

See Guan and Ma, \emph{Inventiones Mathematicae}.
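Since Lemma 1.1(4) gives $g_i=\sigma_{k-1}(\lambda|i)/\sigma_k$ explicitly (and hence $g_{ii}=-g_i^2$ and $g_{ij}=\sigma_{k-2}(\lambda|i,j)/\sigma_k-g_ig_j$ for $i\neq j$, formulas we derived ourselves), the inequality is easy to spot-check numerically on the positive cone; the code below is our own sketch:

```python
# Numerical spot-check (sketch) of the quadratic-form inequality for
# g = log sigma_k on the positive cone, using the explicit derivatives
#   g_i = sigma_{k-1}(lam|i)/sigma_k,  g_ii = -g_i^2,
#   g_ij = sigma_{k-2}(lam|i,j)/sigma_k - g_i g_j  (i != j).
import itertools
import numpy as np

def sigma(k, lam):
    if k == 0:
        return 1.0
    return float(sum(np.prod(c) for c in itertools.combinations(lam, k)))

def drop(lam, *idx):
    return np.delete(lam, list(idx))

rng = np.random.default_rng(11)
n, k = 5, 3
for _ in range(50):
    lam = rng.uniform(0.1, 2.0, n)
    xi = rng.standard_normal(n)
    sk = sigma(k, lam)
    g1 = np.array([sigma(k - 1, drop(lam, i)) / sk for i in range(n)])
    form = 0.0
    for i in range(n):
        form += (-g1[i] ** 2 + g1[i] / lam[i]) * xi[i] ** 2
        for j in range(n):
            if j != i:
                gij = sigma(k - 2, drop(lam, i, j)) / sk - g1[i] * g1[j]
                form += gij * xi[i] * xi[j]
    assert form >= -1e-10
```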


For a positive function $f(\lambda)$ that is homogeneous of degree one: $f$ is concave $\Longleftrightarrow$ $\log f$ is concave.
